Peter Klimczak Thomas Zoglauer Editors
Truth and Fake in the Post-Factual Digital Age Distinctions in the Humanities and IT Sciences
Editors Peter Klimczak Chair of Applied Media Studies Brandenburg University of Technology Cottbus, Germany
Thomas Zoglauer Institute of Philosophy and Social Sciences Brandenburg University of Technology Cottbus, Germany
ISBN 978-3-658-40405-5    ISBN 978-3-658-40406-2 (eBook)
https://doi.org/10.1007/978-3-658-40406-2
This book is a translation of the original German edition „Wahrheit und Fake im postfaktisch-digitalen Zeitalter“ by Klimczak, Peter, published by Springer Fachmedien Wiesbaden GmbH in 2021. The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content, so that the book will read stylistically differently from a conventional translation. Springer Nature works continuously to further the development of tools for the production of books and on the related technologies to support the authors.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH, part of Springer Nature. The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany
Introduction
“Postfaktisch” (post-factual) was chosen as word of the year 2016 by the Gesellschaft für deutsche Sprache in Germany, and “post-truth” received the same honor in the UK that year. The loose handling of truth in politics and the spread of “fake news” and “alternative facts” are symptoms of a crisis of our time. The Relotius case1 also reveals a credibility problem in journalism and shows how difficult it is even for editors and fact-checkers to distinguish truth from fiction. While fake news and disinformation campaigns are not a new phenomenon, the Internet and social media provide a technology that can spread false news and conspiracy theories faster and more effectively than ever before. The question of what truth is in the first place, whether truth is merely a social construction, and how to distinguish truth from falsehood and lies is therefore all the more pressing.

This anthology examines the phenomenon of fake and fake news in relation to truth (or concepts thereof) from a philosophical, media studies, historical, and information technology perspective. The aim is to clarify and delimit fundamental concepts such as truth, factuality, fictionality, and fake, and to critically analyze their representation in the media. Can existing theories of truth and media, firstly, provide an adequate definition of truth in general and mediatized truth in particular and, secondly, provide reliable criteria to distinguish between truth and falsity on the one hand and facticity, fictionality, and fake on the other? Or are modified or even new theories and models required? How can fake news be detected by information technology, and what contribution can data analysis and AI research make to this?

Traditionally, truth is understood as a correspondence between proposition and reality. In the social and media sciences, a constructivist approach is currently predominant, according to which our view of reality is socially and medially constructed and truth is based on discursive agreements.
In philosophy, too, a truth relativism is occasionally advocated, according to which the truth of a statement depends on its context and interpretation. Accordingly, the term “post-truth” refers to a state in which
1 https://www.spiegel.de/kultur/gesellschaft/der-fall-claas-relotius-hier-finden-sie-alle-artikel-im-ueberblick-a-1245066.html [last access date: 01/26/2021].
truth no longer matters and serves only a rhetorical function, in which facts are ignored and empirical evidence is no longer taken into account (Bufacchi 2020; Cosentino 2020; MacMullen 2020; McIntyre 2018). This truth crisis is fueled and exploited by post-factual politics. But it also has philosophical roots, which can be found in truth relativism and science skepticism. Thomas Zoglauer’s contribution therefore shows the consequences of relativizing and questioning truth. His starting point is the theses of Steve Fuller (2018), a leading representative of Science and Technology Studies, who advocates a post-truth philosophy. Thomas Zoglauer criticizes this relativistic approach and argues instead for a perspectival realism that acknowledges the contextual relativity and perspective-dependence of statements, but at the same time makes it clear that truth must have a reference to reality so that it does not fall victim to post-factualism. A hoax is a special form of fake. A hoax is – according to Axel Gelfert’s definition – a scientific construct designed to be eventually discovered. The best-known hoax is the so-called “Sokal hoax”: the physicist Alan Sokal (1996) submitted an essay to the journal Social Text that had a recognizably satirical character and parodied claims made by postmodern philosophers and social constructivists. However, the reviewers and editors of the journal were taken in by the hoax, took the author’s theses seriously, and approved the paper for publication, thereby becoming an object of ridicule in the professional community. In his contribution, Axel Gelfert discusses the ethical legitimacy of hoaxes: do they represent an instrument of enlightenment, or are they to be seen as a calculated breach of trust? The dichotomy of truth and fake is usually contrasted with the dichotomy of fictionality and factuality.
Peter Klimczak extends this dichotomy and develops a trichotomy in his contribution, in which the three terms fictionality, factuality, and fake are conceived as equally important and thereby unambiguously distinguished. This is made possible by a set-theoretical modeling of represented worlds. This approach, based on a thesis of Michael Titzmann (2003), allows the differentiation of representations as fictional, factual, or fake to be based purely on content, i.e., on the level of the representation rather than of what is represented. But what if the truth value of a statement is still unknown or indeterminate? And what effects do contradictions have on the qualification of represented worlds? Peter Klimczak tries to answer these questions with his model. The extension of the dichotomy to a trichotomy focuses on an aspect that is all too often neglected in the current discussion: what role does fictionality actually play with regard to truth and falsity, truth and fake? Christer Petersen consequently examines how alternative realities are created in narrative texts in literature and film and reflects on the difference between fiction and reality from a media-theoretical perspective. Indeed, the boundary between fact, factuality, or truth on the one hand and illusion, deception, or lie on the other can become blurred in represented worlds. Applied to the political reality of fake news, the question thus arises: what epistemological paradigm underlies the talk of alternative facts? According to Petersen, the media are in a credibility crisis that can only be overcome by defining quality criteria for news production and ensuring compliance with these criteria.
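The set-theoretical idea behind Klimczak’s trichotomy can be caricatured in a few lines of code. The following is a deliberately crude toy of our own devising, not Klimczak’s formalism (the sample propositions and the facticity flag are illustrative assumptions): worlds are modeled as sets of propositions, and a representation is classified by its relation to a “real world” set and by whether it claims facticity.

```python
# Toy illustration (our sketch, NOT Klimczak's actual model): represented
# worlds and the real world are treated as sets of propositions.
REAL_WORLD = {"berlin is in germany", "the berlin wall fell in 1989"}

def classify(represented, claims_facticity):
    """Classify a represented world relative to REAL_WORLD.

    A representation that claims facticity and stays within the real-world
    set counts as factual; one that claims facticity but departs from it
    counts as fake; one that does not claim facticity is fictional.
    """
    if not claims_facticity:
        return "fictional"
    return "factual" if represented <= REAL_WORLD else "fake"

print(classify({"berlin is in germany"}, claims_facticity=True))    # factual
print(classify({"the wall still stands"}, claims_facticity=True))   # fake
print(classify({"the wall still stands"}, claims_facticity=False))  # fictional
```

Notably, the cases Klimczak actually addresses – unknown truth values and contradictions within represented worlds – are exactly what such a bivalent toy cannot capture.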
Andreas Neumann uses examples of non-fictional news on GDR television and fictional feature films and series productions of the GDR to analyze how the Berlin Wall was portrayed and legitimized, how fiction and reality were deliberately blurred, and how an ideologically distorted worldview was constructed by the media. The definition of fascism that prevailed in Marxism-Leninism, according to Andreas Neumann, allowed for a narrative that exhibited conspiracy-theoretical features and thus laid the foundation for an alternative historiography. The German workers’ and peasants’ state developed a bipolar, almost Manichean worldview that saw peace-loving socialism on one side and hostile capitalism on the other, separated by the “anti-fascist wall”. Fake news is a special form of fake and currently the most discussed. Fake news can be understood as false or misleading news that is spread with the intention of deceiving or influencing opinion (Götz-Votteler and Hespers 2019; Hendricks and Vestergaard 2019; Rini 2017; Schmid et al. 2018). Axel Gelfert (2018) distinguishes fake news from other types of false news such as rumors, gossip, hoaxes, and urban legends and defines fake news as follows: “Fake News is the deliberate presentation of (typically) false or misleading claims as news, where the claims are misleading by design” (Gelfert 2018, p. 108). Fake news is thus characterized by falsity or misleading presentation, the intent to deceive, and its news design. Fake news conveys the feeling of being true; it makes use of the news format, thereby giving itself a serious appearance and gaining the trust of recipients. As a result, it is gratefully received and further disseminated. The rapid spread of fake news and the influence of social media on elections can be seen as a consequence of digitalization. False news and disinformation campaigns can now be generated and spread automatically.
Information technology is needed to stop these undesirable developments. With intelligent algorithms and refined data analysis, fake news should in future be detected more quickly and its spread prevented. However, in order to detect and filter fake news by means of artificial intelligence, it must be possible to distinguish fakes from facts, facts from fictions, and fictions from fakes. Katrin Hartwig and Christian Reuter therefore present existing white-box and black-box approaches to fake detection and explain their advantages and disadvantages. The focus is on the browser plugin TrustyTweet, developed at TU Darmstadt, which supports users in the evaluation of Twitter messages. Based on their empirical evaluation, the authors formulate design recommendations for the construction of systems that support users in dealing with fake news. By contrast, Ulrich Schade and Albert Pritzkau present a tool based on machine-learning algorithms that can be used to detect fake news or at least make it visible. Using two manually compiled corpora, one with “good” news texts, such as from dpa, and one with “bad” texts, i.e., fake news, their system has learned to differentiate “good” from “bad” texts on the basis of linguistic features and metadata. On this basis, the artificial neural network can evaluate new, i.e., still unknown, texts by checking which corpus the new text is “more similar” to, in order to issue a warning against fake news if necessary.
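The corpus-comparison idea described above can be illustrated with a minimal, self-contained toy. This is our own sketch, not the authors’ system (which employs an artificial neural network over linguistic features and metadata); the two tiny “corpora” are invented, and a unigram naive Bayes model with add-one smoothing stands in for the learned classifier.

```python
import math
from collections import Counter

# Hypothetical toy corpora standing in for the manually compiled "good"
# (agency-style) and "bad" (fake-news-style) training texts.
GOOD = ["the ministry confirmed the figures on monday",
        "officials said the report will be published next week"]
BAD = ["shocking truth they do not want you to know",
       "unbelievable secret cure doctors hate this trick"]

def train(texts):
    """Count word frequencies over a corpus."""
    counts = Counter(w for t in texts for w in t.split())
    return counts, sum(counts.values())

def score(text, counts, total, vocab_size):
    """Log-probability of the text under a unigram model, add-one smoothed."""
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in text.split())

def classify(text):
    """Label a new text by the corpus it is 'more similar' to."""
    vocab = {w for t in GOOD + BAD for w in t.split()}
    g_counts, g_total = train(GOOD)
    b_counts, b_total = train(BAD)
    g = score(text, g_counts, g_total, len(vocab))
    b = score(text, b_counts, b_total, len(vocab))
    return "looks like news" if g >= b else "possible fake news"

print(classify("officials confirmed the figures"))  # looks like news
print(classify("shocking secret they hate"))        # possible fake news
```

The design choice mirrors the chapter’s logic only in outline: real systems compare far richer feature sets than raw word frequencies, but the principle of scoring an unseen text against two reference corpora is the same.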
Another task of information technology is to verify the authenticity of news. In news blogs and online news portals, journalists often copy news from other media, select and aggregate it, and thus assemble seemingly new articles from plagiarized text modules. The aggregators, i.e., the collectors and editors of news, have only second-hand knowledge because they obtain their information from other sources (Coddington 2019, p. 45), which increases the risk of spreading false or unverified information. Media users are usually unable to judge the truthfulness and authenticity of a news item. It is therefore increasingly important to identify the reuse of information. Felix Hamborg and his coauthors developed the tool NewsDeps, which analyzes such reuse of information and indicates from which sources it originates. The primary, secondary, and subsequent sources can be visualized graphically. This allows sources to be compared, which makes fact-checking much easier. The contributions to this anthology show the importance of an interdisciplinary dialogue between media studies, history, and the IT sciences as well as philosophy in order to approach an adequate definition of truth in general and mediatized truth in particular, and subsequently to improve existing theories and models or to develop better ones. This collaborative and interdisciplinary idea is intended to provide an impulse for future research in this field.

Cottbus / Remseck am Neckar, June 2021
Peter Klimczak Thomas Zoglauer
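As a rough illustration of the information-reuse measurement mentioned above for NewsDeps, reuse between two articles can be approximated by the share of word trigrams they have in common (Jaccard similarity). This is a hypothetical sketch of our own; the tool’s actual similarity measures may differ, and the sample headlines are invented.

```python
# Hypothetical sketch: quantify text reuse between two articles via
# Jaccard similarity of word trigrams.
def trigrams(text):
    """Return the set of word trigrams of a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def reuse_score(a, b):
    """Jaccard similarity of the trigram sets of two texts (0.0 to 1.0)."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

source = "the chancellor announced new measures against disinformation on tuesday"
copy = "a blog reported that the chancellor announced new measures against disinformation"
unrelated = "local team wins the cup after dramatic penalty shootout"

print(round(reuse_score(source, copy), 2))       # 0.45
print(round(reuse_score(source, unrelated), 2))  # 0.0
```

A high score between a later and an earlier article is evidence that the later one reuses the earlier one, which is the kind of dependency NewsDeps visualizes as a source graph.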
References

Bufacchi V (2020) Truth, lies and tweets: a consensus theory of post-truth. Philos Soc Crit 47:347–361
Coddington M (2019) Aggregating the news. Secondhand storytelling and the changing work of digital journalism. Columbia University Press, New York
Cosentino G (2020) Social media and the post-truth world order. Springer/Palgrave Macmillan, Cham
Fuller S (2018) Post-truth. Knowledge as a power game. Anthem Press, London
Gelfert A (2018) Fake news: a definition. Informal Logic 38:84–117
Götz-Votteler K, Hespers S (2019) Alternative Wirklichkeiten? Transcript, Bielefeld
Hendricks V, Vestergaard M (2019) Reality lost. Springer, Cham
MacMullen I (2020) What is “post-factual” politics? J Polit Philos 28:97–116
McIntyre L (2018) Post-truth. MIT Press, Cambridge, MA/London
Rini R (2017) Fake news and partisan epistemology. Kennedy Inst Ethics J 27:E43–E64
Schmid CE, Stock L, Walter S (2018) Der strategische Einsatz von Fake News zur Propaganda im Wahlkampf. In: Sachs-Hombach K, Zywietz B (eds) Fake News, Hashtags & Social Bots. Springer VS, Wiesbaden, pp 69–95
Sokal A (1996) Transgressing the boundaries: towards a transformative hermeneutics of quantum gravity. Soc Text 46/47(Spring/Summer 1996):217–252
Titzmann M (2003) Semiotische Aspekte der Literaturwissenschaft. In: Posner R, Robering K, Sebeok T (eds) Semiotik. Ein Handbuch zu den zeichentheoretischen Grundlagen von Natur und Kultur. Teilbd. 3. de Gruyter, Berlin, pp 3028–3103
Contents
1 Truth Relativism, Science Skepticism and the Political Consequences
  Thomas Zoglauer
  1.1 Post-Truth Phenomenology
  1.2 Steve Fuller’s Apology of Post-Truth
  1.3 Critique of Expertise
  1.4 Science Skepticism
  1.5 Truth by Consensus?
  1.6 Truth Relativism
  1.7 Critique of Ontological Relativism
  1.8 Post-Factual Politics
  1.9 The Limits of Relativism
  1.10 Perspectival Realism
  References

2 Of Fakes and Frauds: Can Scientific “Hoaxes” Be a Legitimate Tool of Inquiry?
  Axel Gelfert
  2.1 Too Good to Be True
  2.2 Hoaxes in the Human Sciences
  2.3 What Hoaxes Are and How They Work
  2.4 Dissimulation in Digital Times
  2.5 Instrument of Enlightenment or Calculated Breach of Trust?
  2.6 Outlook
  References

3 Fiction, Fake and Fact: A Set-Theoretic Modeling Together with a Discussion of Represented Worlds
  Peter Klimczak
  3.1 Form and Content. Or: On the Distinction of Basic Categories of Media Studies
  3.2 Sets and Worlds. Or: On the Distinction of Fact, Fiction and Fake by Means of Set Theory
  3.3 Fiction and Fake. Or: From the One-to-One to the Unambiguous Model
  3.4 Truth and World. Or: Of the Real World as a Set of Sufficiently Proven Represented Worlds
  3.5 Indeterminacy and Trivalency. Or: Of the Contradiction in (Represented) Worlds
  3.6 World and Worlds. Or: On the Case Specificity and Subjectivity of Reality
  References

4 Stranger than Fiction: On Alternative Facts and Fictional Epistemologies
  Christer Petersen
  4.1 Introduction: The Conway Case
  4.2 The Case of Baudrillard: Postmodern Theory
  4.3 The Strange World of Harold Crick: Fictional Epistemologies
  4.4 The New Generation Fake: Political Reality
  4.5 Pynchon’s Conspiracies: Postmodern Epistemology
  4.6 Conclusion: Agony of Credibility
  References

5 The Marxist-Leninist Definition of Fascism and the Berlin Wall: Individual Consequences of Fake News and Conspiracy Theories Spread by the Media
  Andreas Neumann
  5.1 On Behalf of “Terrorists”, “Warmongers” and “Fascist Creatures” – The Public Presentation of the Dieter Hötger Case
  5.2 Fascism in the Marxist-Leninist World View
  5.3 Anti-Fascism as a Founding Myth of the GDR
  5.4 The Anti-Fascist Protection Wall and the State Information Media
  5.5 From Fiction to Reality. The Anti-Fascist Protection Wall in Cinema and on the Screen
  References

6 Caution: Possible “Fake News” – A Technical Approach for Early Detection
  Albert Pritzkau and Ulrich Schade
  6.1 Introduction
  6.2 Fake News – A Definition
  6.3 A Tool to Warn About Possible Fake News
  6.4 The Tool as Part of a System for Viewing Social Media
    6.4.1 The Linguistic Analysis of the Content
    6.4.2 The Metadata Analysis
  6.5 Conclusion
  References

7 Countering Fake News Technically – Detection and Countermeasure Approaches to Support Users
  Katrin Hartwig and Christian Reuter
  7.1 Introduction
  7.2 Detection of Fake News in Social Media
    7.2.1 Approaches
    7.2.2 Black Box Versus White Box
  7.3 Countermeasures to Support Users
  7.4 TrustyTweet: A Whitebox Approach to Assist Users in Dealing with Fake News
  7.5 Conclusion and Outlook
  References

8 NewsDeps: Visualizing the Origin of Information in News Articles
  Felix Hamborg, Philipp Meschenmoser, Moritz Schubotz, Philipp Scharpf, and Bela Gipp
  8.1 Introduction
  8.2 Background and Related Work
    8.2.1 Forms of Information Reuse
    8.2.2 Methods to Detect Information Reuse
    8.2.3 Approaches to Detect Information Reuse in News
  8.3 System Overview
    8.3.1 Import of News Articles
    8.3.2 Measurement of Information Reuse in Articles
    8.3.3 Visualization of News Dependencies
  8.4 Case Study
  8.5 Discussion and Future Work
  8.6 Conclusion
  References

Index
About the Reviewers
Christoph Bläsi, Prof. Dr., is Professor of Book Studies at the Johannes Gutenberg University Mainz (Germany) and works in the fields of book economics and digital publishing. Current focal points include the search for traces of the “foreign” in German-language cookbooks from the postwar period to the present day (using cultural studies and computer philology methods) and research into the impact of AI applications on, above all, publishing value creation.

Manuel Burghardt, Prof. Dr., is Professor of Computational Humanities at the Institute of Computer Science at the University of Leipzig (Germany). He is also a member of the board of the “Forum for Digital Humanities Leipzig” and spokesperson for the GI specialist group “Computer Science and Digital Humanities.” His work focuses on the areas of “Computational Literary Studies,” “Optical Character Recognition,” and “Digital Humanities Theory.”

York Hagmayer, Prof. Dr., is Professor of Psychology at Georg-August-Universität Göttingen (Germany) and works in the fields of cognitive science, psychology, and methodology.

Wulf Kellerwessel, Prof. Dr., is at the Institute of Philosophy at RWTH Aachen University (Germany) and works in the fields of ethics, applied ethics, and political philosophy.

Larissa Krainer, Associate Prof. Dr., researches and teaches at the Institute for Media and Communication Studies at the University of Klagenfurt (Austria) in the fields of communication and media ethics, process ethics as well as philosophy of science, and methodology of inter- and transdisciplinary research.

Florian Mundhenke, PD Dr. phil. habil., is ATS Associate Lecturer at the Department of Modern Languages and Cultural Studies at the University of Alberta (Canada) and works in the fields of interfaces of digital culture (AR/VR), media hybridization in relation to questions of transculturality and gender/diversity, as well as aesthetics and pragmatics of film.
Claudia Müller-Birn, Prof. Dr., is Professor of Human-Centered Computing at the Institute of Computer Science at Freie Universität Berlin (Germany) and conducts research in the areas of human-machine collaboration, social computing, and human-computer interaction.

Florentine Strzelczyk, Prof. Dr., is Provost and Vice-President (Academic) of Western University (Canada). Her research interests include contemporary German literature, culture, and film; tolerance in German culture; and gender and religion.

Izabela Surynt, Prof. Dr., is Professor at the Institute of Journalism and Social Communication at the University of Wroclaw (Poland) and works in the fields of interculturality, German-Polish cultural transfer, and German cultural history.

Tatjana von Solodkoff, PhD, is Assistant Professor at University College Dublin (Ireland). She works in the fields of metaphysics, ontology, and philosophy of fiction.

Jan-Noël Thon, Prof. Dr., is Professor of Media Studies at the Norwegian University of Science and Technology (NTNU), Visiting Professor of Media Studies at the University of Cologne (Germany), and Professorial Fellow at the University for the Creative Arts (UK). His research interests include comics studies, game studies, media convergence in digital cultures, transmedial figures, and transmedial narratology.

Ralf Stoecker, Prof. Dr., is Professor of Practical Philosophy at the University of Bielefeld (Germany) and works in the fields of applied ethics, action theory, and philosophy of the person.

Karsten Weber, Prof. Dr., is Co-director of the Institute for Social Research and Technology Assessment and Director of the Regensburg Center of Health Sciences and Technology at the Ostbayerische Technische Hochschule Regensburg (Germany), as well as Honorary Professor of Culture and Technology at the Brandenburg University of Technology Cottbus-Senftenberg (Germany). He works on the societal impact of information and communication technology, particularly in the areas of health and mobility.

Matthias Wieser, Dr. phil. habil., is Associate Professor at the Institute for Media and Communication Studies at the Alpen-Adria-Universität Klagenfurt (Austria) and works in the fields of media and cultural theory, sociology of media and communication, and science and technology studies.

Günther Wirsching, Prof. Dr., is Professor at the Department of Mathematics at the Catholic University of Eichstätt-Ingolstadt (Germany) and works in the fields of logic, geometry, and industrial applications.
1 Truth Relativism, Science Skepticism and the Political Consequences

Thomas Zoglauer
Abstract
In his book Post-Truth, Steve Fuller, a leading exponent of Science and Technology Studies, defends truth relativism, criticizes scientific expertise, and argues that we have long been living in a post-truth age. Fuller views science as a power game in which power decides what is true and what is false. Donald Trump is particularly good at this power game and knows how to pass off his personal opinion as definitive truth. The example of Fuller will be used to show how truth relativism and science skepticism contribute to the emergence of a post-truth society and may threaten democracy.
1.1 Post-Truth Phenomenology
In a world where false news, fake news and lies are permanently being spread, the gold standard of truth loses its value and we can no longer distinguish between truth and falsity. The English term post-truth, which was named word of the year by the Oxford English Dictionary in 2016, expresses this very succinctly: truth no longer matters and is seen as an obsolete relic of a bygone age when people could trust in truth. It seems as if today there are no facts and no reality anymore. Everything is socially constructed. People live in a filter bubble and no longer care about the world outside. They retreat into a world of fictions and illusions and only believe what fits into their own world view. The choice of post-truth as word of the year is to be understood as a warning: Truth is indispensable. Without the ability to distinguish between true and false, people become
victims of demagogy and propaganda. The dissemination of fake news and alternative facts aims at influencing opinions and manipulating thinking. If truth dies, then democracy dies too. Especially in a post-truth world, it is important to verify the truth of statements. This requires reliable truth criteria, which is the task of epistemology and philosophy. Plato regarded it as the job of philosophers to show people the way out of the world of appearances and beliefs in order to reach truth and knowledge through enlightenment. Fake news and disinformation campaigns have always existed. But only with the advent of the Internet and social media could fake news spread globally and in real time. Today people can express their opinions on the Internet and find like-minded people. The selective reception of news creates filter bubbles that fabricate alternative realities. One can also interpret the media’s separation from the world as a cultural phenomenon that characterizes modern society. Egon Friedell already saw signs of a creeping loss of reality in modern culture. He lamented “that what European citizens called reality for half a millennium is falling apart before their eyes like dry tinder” (Friedell 2007, p. 1493). The world is now only “phantom and matrix”, as Günther Anders aptly diagnosed. News is produced and “the world is delivered to our house” (Anders 1994, p. 129). “Everything real becomes phantom-like, everything fictitious real” (Anders 1994, p. 142). The media-constructed world is “ontologically ambiguous” because we can no longer decide whether we should regard what is shown “as real or fictional” (Anders 1994, p. 142). The identity of the indistinguishable leads to an identification of being with appearance and of illusion with reality. But philosophy has also made a powerful contribution to the relativization of truth. In an unpublished fragment from 1885, Friedrich Nietzsche writes:
Perhaps our new language sounds strangest in this respect: the question is, how far is it life-promoting, life-preserving, species-preserving. I am even fundamentally of the belief that the falsest assumptions are precisely the most indispensable to us, that without an acceptance of logical fiction, without measuring reality against the invented world of the absolute, self-identical, human beings cannot live, and that a renunciation of this fiction, a practical dispensing with it, would amount to the same thing as a renunciation of life. (Nietzsche 2020, p. 95; 1999, KSA 11, p. 527)1
Thus, fictions are indispensable for Nietzsche because they are useful. Even a lie is sometimes necessary in order to live or to increase one's own power. For Nietzsche, everything is good "that enhances the people's feeling of power, will to power, power itself" (Nietzsche 2005, p. 4; 1999, KSA 6, p. 170). Elsewhere he says:

The "will to truth" develops in the service of the "will to power": on close scrutiny its actual task is helping a certain kind of untruth to victory and to duration, taking a coherent whole of falsifications as the basis for the preservation of a certain kind of life. (Nietzsche 2020, p. 247; 1999, KSA 11, p. 699)

1 In the following, quotes from Nietzsche's writings refer, besides the English edition of his works, also to the Kritische Studienausgabe (Nietzsche 1999, KSA), with the volume number indicated (e.g. KSA 11).
Thus Nietzsche becomes the mastermind and apologist of post-factual politics. A lie is justified if it serves to maintain power. Donald Trump pursues nothing else with his political strategy. There are no facts, everything is interpretation – this slogan of Nietzsche's became the guiding tenet of postmodern philosophy.2 Other slogans that have made the rounds since then and have often been reiterated are: "everything is text", "everything is socially constructed" or "anything goes". It is therefore no wonder that many see postmodernism as an intellectual precursor of post-factual politics: "post-modernist texts paved the way for post-truth" (D'Ancona 2017, p. 96). Lee McIntyre (2018, p. 150) calls postmodernism the "godfather of post-truth". And H. Sidky even holds postmodern relativism directly responsible for the outcome of the 2016 American presidential election: "[P]ostmodernists were able to launch an all-encompassing disinformation campaign to delegitimize science and rationality. The distressing effects of this campaign were painfully brought to light for many after the 2016 U.S. presidential election" (Sidky 2018, p. 40). However, such sweeping accusations are very problematic. First, postmodernism is a rather diffuse term that designates a very heterogeneous movement under which quite different philosophers and theories are subsumed. Originally conceived merely as a designation for an aesthetic style, postmodernism is now regarded, especially in the English-speaking world, as a counter-movement to analytical philosophy. Philosophers such as Derrida, Lyotard, Foucault, Rorty, Deleuze and Baudrillard are associated with postmodernism (cf. Sim 2013). Second, postmodernism cannot be blamed for the spread of fake news. One can safely assume that Donald Trump has never read a book by Lyotard, Derrida or Foucault.
Arguably, however, there are certain tendencies within postmodern philosophy, particularly its relativism and critical attitude towards science, that create an intellectual climate in which half-truths and alternative facts can flourish. After all, if there are no objective facts and everything is interpretation, then anyone can interpret the world as he or she likes. In the following, I will focus on Science and Technology Studies (STS), which are close to postmodern philosophy and have developed a critical stance towards science with their critique of science's truth claims and their demand for a democratization of knowledge. In the journal Social Studies of Science, a critical discussion has developed among representatives of STS around the question of whether and to what extent STS can be held responsible for the sins of post-truth politics. Within STS, three camps have emerged that offer different answers to this question. Harry Collins, Robert Evans and Martin Weinel admit that science skepticism contributed to the emergence of a post-factual society: "we have to admit that for much of the time the views STS was espousing were consistent with post-truth" (Collins et al. 2017, p. 581). On the other hand, Sergio Sismondo (2017) and Michael Lynch (2017) indignantly reject this accusation: STS has nothing to do with post-factual politics and is not to be held responsible for the spread of fake news and alternative facts. However, there is a radical current within STS that advocates an extreme truth relativism and embraces the emergence of a post-truth world. Steve Fuller, a prominent proponent of STS and social epistemology, openly admits to supporting a post-truth politics and sees nothing objectionable in it. On the contrary, according to Fuller, a toleration of other truths gives a voice to suppressed opinions and alternative worldviews, contributes to the diversity of opinions, and is therefore not a danger but a blessing for democracy. I will critically analyze Fuller's post-truth philosophy in more detail, in particular his relativism and his critique of expertise, because his indifference towards truth is a characteristic feature of post-factual politics. The critique of Fuller should not be misunderstood as a general critique of STS, as his theses are quite controversial within the STS community and not everyone shares his anti-scientific attitude. Therefore, at the end of this paper, an epistemological model will be presented that, on the one hand, emphasizes the contextuality and perspectivity of scientific theories but, on the other hand, adheres to a realist concept of truth that provides an indispensable frame of orientation for our actions.

2 Nietzsche literally says: "Against the positivism, which halts at phenomena – 'There are only facts' – I would say: no, facts are just what there aren't, there are only interpretations" (Nietzsche 2003, p. 139; 1999, KSA 12, p. 315).
1.2 Steve Fuller's Apology of Post-Truth
In 2018, Fuller published his book Post-Truth: Knowledge as a Power Game, in which he explains his philosophy of post-truth in detail. One of his central theses is already implicit in the book's subtitle: he regards science as a power game, in which influence and power decide which opinions count as true and which as false. This view is a consequence of his sociological eliminativism, which reduces everything to social relations and categories (Fuller 2007, p. 124). Truth is thus not a relation between proposition and reality, but an expression of a power relation between social groups: whoever has the power determines the rules of discourse and, as a consequence, what is true. Absolute truths do not exist; truth is a social construct (Fuller 2018, p. 51). Different groups can have different opinions and interpret the world differently. The rules of the knowledge game are defined by the dominant group, which forces the others to play by its rules. Philosophy and science are the stage on which a permanent struggle over the interpretation of truth is played out. Fuller distinguishes between truthers and post-truthers. The truthers are the representatives of the dominant group: they see themselves as possessing the truth and enforce their view of truth against the oppositional group. The truthers hold a correspondence theory of truth and believe that truth is something that exists outside of discourse (Fuller 2018, p. 42). The post-truthers, on the other hand, want to change the rules of discourse and impose their own view of reality:
Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. (Fuller 2018, p. 53)
In this picture, the history of philosophy presents itself as a history of class struggles between truthers and post-truthers. In the relativism of the Sophists, Fuller sees the beginning of a revolt against the dominant truth, in the wake of which rational argumentation is replaced by rhetoric (Fuller 2018, p. 29). And the Protestant revolt against the Catholic Church's exclusive claim to truth, according to Fuller, brings about a second historical revolution that gives rise to a new science called protscience (Fuller 2018, pp. 107 ff.). This revolution continues in the twentieth century and leads to a "post-truth condition," a state in which the difference between truth and falsehood no longer matters. For Fuller, Ludwig Wittgenstein and Thomas Kuhn are the most important protagonists of this post-truth movement. While the early Wittgenstein of the Tractatus logico-philosophicus still adheres to a correspondence theory of truth, the late Wittgenstein of the Philosophical Investigations establishes a theory of language games. Truth is now no longer something that exists outside language, but becomes a procedural component of a language game and is therefore immanent to language: the rules of the language game determine the conditions of truth. Fuller believes that a free negotiability of truths creates greater epistemic justice and thereby contributes to a democratization of knowledge: "In a post-truth utopia, both truth and falsehood are themselves democratized" (Fuller 2018, p. 182). For Fuller, the accumulation of knowledge in the heads of a small elite of experts represents a form of power: "expertise is the most potent non-violent form of power available" (Fuller 2018, p. 161). If this power is broken, he argues, knowledge is liberated and can be claimed by anyone. Fuller suspects scientists of suppressing alternative opinions and considers expert judgments to be prejudiced and interest-driven.
Fuller contrasts the authoritarian system of science with his vision of a scientific democracy in which all people can contribute equally to the acquisition of knowledge and in which judgments are arrived at by majority decision. If the majority of citizens doubt climate change, then climate change does not exist. It is therefore not surprising that Fuller is sympathetic to climate change deniers and supporters of the intelligent design movement. In his opinion, intelligent design theory should be taught in schools on an equal footing with the theory of evolution; citizens should decide for themselves what their children learn (Fuller 2018, p. 122). Fuller believes that one need only change the rules of discourse for a lie to become a truth. On this point, there are striking similarities between Steve Fuller's philosophy and Donald Trump's politics, for Trump too wants to change the rules of political discourse. The relativization of truth and the disregard of experts and scientific authorities are essential features of post-factual politics. Whoever has the power determines what is true – at least Trump and Fuller think so. Indeed, Fuller seems to
have a certain amount of sympathy for Trump, for he characterizes him as a fox ("the foxy Trump") (Fuller 2018, p. 2). In doing so, Fuller uses a metaphor taken from Machiavelli. According to Machiavelli, to be successful a politician has to be as strong as a lion and as cunning as a fox. Machiavelli adopts the animal metaphor of fox and lion from Cicero (1991, p. 19), who attributes the character traits of strength and violence (vis) to the lion, and cunning and deceit (fraus) to the fox. In Il Principe (The Prince), Machiavelli describes the ideal statesman as follows: "So one needs be a fox to recognize snares and a lion to frighten the wolves. Those who stay simply with the lion do not understand this" (Machiavelli 1998, p. 69). A fox thinks strategically: he is sly and cunning, deceives his adversaries, and breaks the rules to his own advantage. Fuller regards this as the crucial difference between the fox and the lion: "The lion tries to win by keeping the rules as they are, and the fox tries by changing them" (Fuller 2018, p. 3). This also aptly describes Donald Trump's political agenda. Like the post-truthers, Trump opposes the scientific establishment. While his opponents act like lions and play by the rules, Trump breaks the rules and acts completely unpredictably: "That's the post-truth condition in a nutshell" (Fuller 2018, p. 2). Unlike Fuller, most STS representatives disapprove of Trump's post-truth politics. Undoubtedly, STS cannot be blamed for Trump's fake news. Nevertheless, Fuller's basic principles, his truth relativism and his skeptical attitude towards science, are characteristic of large parts of this philosophical current. Stephen Turner, another influential representative of STS, holds similarly radical theses. Turner views expertise as an ideology: scientific experts, he argues, lack democratic legitimacy, exercise authoritarian power, and operate outside of any political control (Turner 2014, p. 21).
Beyond that, he also supports the following theses: the stigmatization of creationism as a pseudoscientific belief system is an expression of a scientistic ideology (Turner 2014, p. 18); alternative explanations beyond the scientific mainstream are suppressed; the neutrality of science is therefore an illusion.
1.3 Critique of Expertise
Fuller's and Turner's theses are very similar to the anarchist epistemology of Paul Feyerabend, who, like Fuller and Turner, considers experts untrustworthy and biased: "Experts are a prejudiced party in the dispute, they want to get jobs which are respectable and well paid, and so quite naturally they will praise themselves and condemn others" (Feyerabend 1999, p. 126). Feyerabend wants to break the power of experts and demands that science be controlled by laypeople. He criticizes science for its arrogance, sectarianism and propaganda. Science, he says, is an instrument of domination by self-appointed experts, for scientists "not only subject their fellow citizens to criticism, they also want to dominate them" (Feyerabend 1981, p. 37). He further claims that science suppresses alternatives by forcing students to learn particular subjects at school: "Physics, astronomy,
history must be taught. They cannot be replaced by magic, astrology, or the study of sagas" (Feyerabend 1975, p. 400). As an alternative to the existing knowledge society, Feyerabend outlines the vision of a free pluralistic society. The state is dissolved into a multitude of "traditions", all of which have equal rights. Research is monitored and controlled by citizens: "Democratic judgment overrules 'truth' and expert opinion" (Feyerabend 1978, p. 86). Funds and research resources are distributed by a committee of laypeople. Feyerabend does not simply want to abolish science, but merely to increase the diversity of theories and methods by allowing for alternatives that are today considered unscientific. Every person, he argues, should have the right to learn what he or she thinks is right: "If the taxpayers in California want their state universities to teach voodoo, folk medicine, astrology, rain dance ceremonies, then these subjects will just have to be incorporated into the curriculum" (Feyerabend 1978, p. 87). And one might add: if citizens believe in conspiracy theories and alien abductions, then these things should also be taught at schools and universities. Science is thus placed under general suspicion. As Bauchspies et al. (2006, p. 63) put it, "Science and truth speak in the interests of and are spoken by the powerful." Stephen Turner considers climate science to be interest-driven and dominated by ideology, a field in which hegemonic discursive power is exercised and dissenting opinions are suppressed: "Attacking critics, even editors who allow critical papers into print, stigmatizing scientists for raising questions, and refusals to supply relevant information have been characteristic of climate science" (Turner 2014, p. 295). Experts, too, can err, can be biased, and can make interest-driven judgments. But laypeople, who have no specialist knowledge, must be able to trust these experts.
Laypeople cannot conduct elaborate experiments or computer simulations to test assumptions. We cannot cast doubt on the scientific consensus just because there are also dissenting opinions among experts or because a warning about climate change does not fit our own worldview. Oreskes and Conway see no alternative but to trust science: "we must trust our scientific experts on matters of science, because there isn't a workable alternative" (Oreskes and Conway 2010, p. 272). Fuller's and Turner's criticism of experts rests on a generalizing fallacy: they point to individual cases in which experts have erred or made interest-driven judgments (Oreskes and Conway 2010; Wynne 1989) and then generalize by questioning the credibility of all experts. They complain that experts have too much power and exert too much influence on policy decisions. But when we consider the Trump administration's ignorant attitude toward climate change and the Covid crisis, we can come to the opposite conclusion. Under Trump, many environmental regulations have been relaxed and the influence of advisory panels has been curtailed. The Union of Concerned Scientists sharply criticizes the Trump administration for this:

A clear pattern has emerged over the first six months of the Trump presidency: multiple actions by his administration are eroding the ability of science, facts, and evidence to inform policy decisions, leaving us more vulnerable to threats to public health and the environment. (Carter et al. 2017, p. 1)
The scandal is rather that experts have too little, not too much, influence on policy. The indispensability of experts has just been demonstrated in the management of the Covid crisis: governments that trusted scientific expertise have suffered a lower death toll than countries, such as Brazil or the US, that recklessly ignored such advice. Harry Collins, Robert Evans, Darrin Durant and Martin Weinel blame Science and Technology Studies for the decline of scientific authority and the rise of populist movements: "Contemporary science and technology studies (STS) erodes the cultural importance of scientific expertise and unwittingly supports the rise of anti-scientific populism" (Collins et al. 2020, p. 1). In the attempt to question the epistemic authority of experts and to give laypersons a say in scientific disputes, the difference between experts and laypersons is leveled, and this only strengthens the followers of pseudoscience and conspiracy theories. Critical studies of science can be misused by climate change skeptics and anti-vaxxers to spread uncertainty and sow mistrust of experts. Collins et al. therefore urgently appeal to their scientific colleagues: "STS must stop celebrating the erosion of scientific expertise" (Collins et al. 2020, p. 52). They strongly warn against ignoring the advice of experts and against letting citizens' initiative groups decide whether there is anthropogenic climate change or whether measles vaccinations can cause autism:

Should we choose not to choose scientific experts when we want advice about our pressing technical problems we will have the transition to a dystopia where technical conclusions are the preserve of the rich, the strong and the celebrated – it is they who will decide, for instance, if the climate is warming and whether certain vaccines cause autism. Book burning is not far behind. (Collins et al. 2020, p. 59)
1.4 Science Skepticism
One model for preventing an increasingly science-skeptical attitude is to involve citizens and laypeople more closely in political decision-making processes and not to leave the task of policy advice to scientists alone. Scientists should simply present the facts, and citizens should draw the consequences (Wolters and Steel 2018, p. 15). In this way, it is hoped, the acceptance of and respect for science could be increased. However, it is doubtful whether laypeople can judge more objectively and make better decisions than experts. There are, in fact, significant differences between expert knowledge and lay knowledge: experts, because of their academic training, have a deeper and more comprehensive understanding of scientific issues than laypeople. Expert knowledge is subjected to critical scrutiny in the peer-review process, whereas laypeople usually trust only their gut feelings and common sense and are therefore more prone to
error. Experts are closer to the sources of knowledge than laypeople because of their access to laboratories and historical documents.3 Bruno Latour, an early proponent and co-founder of Science and Technology Studies, is now highly critical of the science-skeptical attitude of his professional colleagues. He has realized that STS provides science deniers and populists with the ammunition to question scientific findings, discredit experts and legitimize pseudo-scientific theories:

And yet entire Ph.D. programs are still running to make sure that good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular point of view, and so on, while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives. (...) My argument is that a certain form of critical spirit has sent us down the wrong path, encouraging us to fight the wrong enemies and, worst of all, to be considered as friends by the wrong sort of allies because of a little mistake in the definition of its main target. The question was never to get away from facts but closer to them, not fighting empiricism but, on the contrary, renewing empiricism. (Latour 2004, pp. 227, 231)4
Feyerabend’s and Fuller’s utopia of democratic relativism is basically populist and makes people believe that they can decide for themselves what is true and what is not true, just as they can choose between different parties. Feyerabend describes science as a big supermarket of ideas where we can choose the products we like. Since, according to Fuller, everything is socially constructed, we can construct the world as we like.
1.5 Truth by Consensus?
Steve Fuller advocates a radical relativism. This is the consequence of his sociological eliminativism. If truth is a social construction, then the truth value of an empirical statement is not determined by external facts, but is the result of consensus or agreement, or simply an expression of power. Truth thus becomes a matter of opinion. If someone asserts that the earth is flat, and a group of people firmly believes this, then by this relativistic truth criterion the assertion is true for them by definition. Truth conditions, in Fuller's view, are determined by discursive rules. David Bloor writes in his book Knowledge and Social Imagery, a standard work of science studies: "Knowledge for the sociologist is whatever people take to be knowledge. It consists of those beliefs which people confidently hold to and live by" (Bloor 1991, p. 5). This corresponds to Fuller's conception of knowledge.
3 On the epistemic status of expert knowledge, see Zoglauer 2020, p. 79.
4 Latour also criticizes social constructivism, which is a central element of STS philosophy: he argues that constructivists cannot distinguish between good and bad constructions because all constructions are of equal value (Latour 2013, p. 159).
Barry Barnes and David Bloor see relativism as a methodological basis of sociological research: Far from being a threat to the scientific understanding of forms of knowledge, relativism is required by it. Our claim is that relativism is essential to all those disciplines such as anthropology, sociology, the history of institutions and ideas, and even cognitive psychology, which account for the diversity of systems of knowledge, their distribution and the manner of their change. (Barnes and Bloor 1982, p. 21 f.)
Barnes and Bloor see it as the task of the sociology of science to explain why scientists accept certain theories and reject others. To this end, they postulate a symmetry thesis according to which it should not matter for the explanation whether a theory is true or false (Barnes and Bloor 1982, p. 22). What matters to them is not the truth or falsity of a theory, but the reasons for its credibility. Sociologists should investigate whether the general acceptance of a theory is based on an authoritarian belief, whether political or economic interests are in play, or whether people hold a particular worldview because of their socialization. Even logic and mathematics are regarded as results of education and internalized rule-following. This sociological reductionism is based on the idea that all thoughts and categories, including logic and mathematics, have social causes. Ludwig Gumplowicz expresses this social determination very succinctly:

What thinks is not an individual subject, but the social community; the source of our thinking is not inside our head, but the social environment in which we live, the social atmosphere in which we breathe, and we cannot think otherwise than what necessarily results from the social environmental influences that concentrate in our brain. (Gumplowicz 1905, p. 268)
The thesis of the social determination of human knowledge has a long history. It goes back to the German sociology of knowledge of the first half of the twentieth century and is taken up and developed further in the Edinburgh Strong Programme of Barnes and Bloor. Social, cultural, political and economic circumstances are held responsible for why we believe in a particular theory or worldview. Barnes advocates a social determinism according to which human beings are the product of their social environment. He compares a human being to a computer that is programmed through socialization (Barnes 1974, p. 78 f.). This has implications for our understanding of truth. For if human thought and action are predetermined, then our social environment determines what we believe. The truth content of a theory is irrelevant for the sociology of knowledge or science; what matters is only what caused people to believe in the theory. For Barnes and Bloor, the standards of rationality are context-dependent, historically and socially relative (Barnes and Bloor 1982, p. 27 f.). Such a theory may seem quite plausible at first glance. It is undisputed that our education, culture and social environment influence what we believe. If we had lived 3000 years ago, we would probably have believed that the earth is flat. Focusing exclusively on the social and historical context, however, ignores a crucial question,
namely: is the theory we believe in true or false? Is the earth flat or is it a sphere? Even Barnes and Bloor cannot avoid the question of truth. For when their theory is reflexively applied to itself, the question arises: is the symmetry thesis true or false? In any case, it cannot be true in an absolute sense. It is true for the adherents of the Edinburgh Strong Programme, but not for their critics, precisely because the latter have been socialized differently. For Barnes and Bloor, reasons, arguments and empirical evidence do not matter, because they are only interested in exploring social causes. For a sociologist, a theory is true if it is generally accepted, regardless of whether it is well founded. Thomas Kuhn shows that when a paradigm shifts, scientists' beliefs change as if they had converted to another religion: "As in political revolutions, so in paradigm choice – there is no standard higher than the assent of the relevant community" (Kuhn 1970, p. 94). Nevertheless, Kuhn is not a relativist, because he believes in evolutionary progress in science. For him, the criterion of progress is the problem-solving capacity of paradigms (Kuhn 1970, p. 169 f.). Epistemological relativism, which is supported by many postmodern philosophers, contributes significantly to the weakening of the concept of truth. When truth becomes a matter of opinion, each social group can hold its own truths. Multiple truths can exist side by side, even if they contradict each other. If there is no neutral standpoint from which to compare different points of view, one cannot criticize opinions that contradict one's own. One cannot claim that Donald Trump is lying, because he is only saying what his supporters believe to be true. Such relativistic tendencies are especially prevalent in Science and Technology Studies (STS) and the Social Studies of Science (SSS), where scientific objectivity and truth claims are disputed.
In a textbook of Science and Technology Studies we find the following statement: "A fact is an idea or concept that everyone (or some subset of everyone such as a community or network) accepts as true" (Bauchspies et al. 2006, p. 19). As a consequence, a fact is whatever is accepted as a fact within a group. Trump supporters and opponents thus live in different worlds in which different facts exist. If Trump supporters believe they know that there is no climate change, then this is true for them. The sociologists of science cited above implicitly advocate a consensus theory of truth, according to which the discourse rules in science, and thus also scientific truth claims, are determined by the consensus of the research community. Some disputes in science are indeed decided by consensus, but not by an unqualified consensus resulting from a vote or a mere majority opinion; rather, only after careful consideration of arguments and empirical data. Jürgen Habermas writes:

Not every factual or prospective consensus can be a sufficient criterion for the truth of propositions. Otherwise we cannot distinguish a false from a true consensus, or naive opinions from knowledge. We ascribe truth only to propositions of which we counterfactually assume that every rational subject would have to agree, if they could examine their opinions long enough in unrestricted and unconstrained communication. (Habermas 1971, p. 223)
Habermas conceives truth as a validity claim of an assertion that can be confirmed discursively, i.e. through arguments. He distinguishes between a qualified (discursive)
and an unqualified consensus: a discursive consensus can always be criticized, while an unqualified consensus excludes any further discussion and demands obedience. Robert Brandom describes discursive practice as a "game of giving and asking for reasons" (Brandom 2000, p. 189). According to Brandom, the game of science is different from a mere power game. Scientific findings must be methodologically justified and cannot simply be declared true by voting and majority decision. The decision whether a hypothesis is true is not a simple yes-no decision, nor, given the falsifiability of hypotheses, is it ever final and unrevisable. Whether, for example, the Darwinian theory of evolution or intelligent design theory better explains natural phenomena is not a matter of religious faith, but should be the result of a careful consideration and weighing of empirical data. The idea that science can function in the same way as a democratic parliament therefore misjudges the nature of science.
1.6 Truth Relativism
There are also moderate voices within STS who warn against an excessive relativism. Michael Lynch, who is also a prominent representative of STS, clearly shows what disastrous consequences such truth pluralism would have:

If truth can be nothing more than what passes for truth, and what passes for truth is constituted by systems of power, then as those systems change, so does the truth. It follows that in the American South of the 1960s and ‘70s, African Americans really were morally and intellectually inferior to whites because that was the view of the white political power structure at the time. But this is unintuitive: surely such racist views were false then and now. (. . .) The real problem is that it pulls the rug out from under the feet of any attempt to rationally criticize the political systems of power in one’s own culture. (Lynch 2004, p. 39)
Lynch himself supports relativism, but advocates a different, weaker kind of relativism than Fuller. We must therefore distinguish between strong and weak relativism, or strong and weak perspectivism.5 Steven Hales and Rex Welshon (2000) explain the difference as follows: strong perspectivism asserts that each perspective has its own truths; for every claim, there is one perspective in which it is true and another perspective in which it is false (Hales and Welshon 2000, p. 31).6 In weak perspectivism, the universal statement is replaced by an existential statement: there is at least one assertion that is true in one perspective and false in another perspective (Hales and Welshon, ibid.). Weak perspectivism is thus local and limited to a few disputed assertions, whereas strong perspectivism posits a global dissent that concerns all assertions.

Strong perspectivism assumes a radical incommensurability, according to which there is no neutral standard against which perspectives can be compared: “The system of perspectives contains no Archimedean point, that is, it contains no neutral standpoint from which perspectives can be surveyed” (Fogelin 2003, p. 73).7 Therefore, one cannot claim that all perspectives are equally valid, since equality would presuppose a common standard of value (Kusch 2019, p. 276). Nor can one say that one perspective is better or truer than another: “No perspective is privileged in the sense of being inherently superior to others” (Fogelin 2003, p. 73). Incommensurability therefore calls for an abstention from judgment in the comparison of different perspectives: judgments can only be made within one perspective, not about other perspectives.

Like Lynch, John MacFarlane (2014) also advocates a weak perspectivism and explains the divergence of views by postulating an “assessment sensitivity”: an assertion is judgment-dependent if its truth depends on the context of assessment (MacFarlane 2014, p. 64). For MacFarlane, at least some assertions are assessment-sensitive: “Relativism about truth is the view that there is at least one assessment-sensitive sentence” (MacFarlane 2010, p. 129). MacFarlane explains this with the example of the sentence “Licorice is tasty” (MacFarlane 2014, p. 72). Taste judgments are notoriously subjective. Not everyone likes licorice; what one person likes, another may dislike. Therefore, there is at least one claim that is true in one person’s view but false in another person’s view. Such weak perspectivism, however, is trivial and entirely harmless, and it would be an exaggeration to speak of relativism here.

5 In the following I will use the terms relativism and perspectivism synonymously. In epistemology, various terms are used to designate such socially and culturally shaped perspectives. Instead of perspectives, the terms conceptual schemes, paradigms, language games, discourses, narratives, ways of life, or traditions are also commonly used. Martin Kusch (2019) prefers the term “epistemic frameworks”.
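The scope difference between the two theses can be made explicit in quantifier notation. The following formalization is added here for illustration and is not part of the original text; read \(T(p, \pi)\) as “proposition \(p\) is true in perspective \(\pi\)”:

```latex
% Strong perspectivism: every claim diverges across perspectives
\forall p \,\exists \pi_1 \,\exists \pi_2 \;
  \bigl( T(p, \pi_1) \land \lnot T(p, \pi_2) \bigr)

% Weak perspectivism: at least one claim diverges across perspectives
\exists p \,\exists \pi_1 \,\exists \pi_2 \;
  \bigl( T(p, \pi_1) \land \lnot T(p, \pi_2) \bigr)
```

The two theses differ only in the outer quantifier over propositions: strong perspectivism quantifies universally, while weak perspectivism, like MacFarlane’s “at least one assessment-sensitive sentence”, quantifies merely existentially.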
Indeed, apart from such subjective judgments of taste, there are many factual assertions about which objective judgments can be made and which receive intersubjective acceptance. Weak perspectivism is harmless because context-dependent statements can always be decontextualized by specifying the referential context. For example, the truth of the sentence “Today the sun is shining” depends on the place and time of the utterance. If one additionally specifies the place and replaces the word “today” by a precise time specification, the truth value is fixed independently of time. Moreover, what it means that the sun is shining can be made more precise by defining truth criteria (e.g. that for a normal observer the sun is visible at the particular place and time).8 Local contextual references can be eliminated in this way. The crucial point is that there is a procedure or method to determine the truth value of an assertion. Let us consider as an example the question whether there were more attendees at Donald Trump’s inauguration than at Obama’s oath ceremony. Pictures, the counting of people, and the analysis of other documents clearly show that the crowd was larger at Obama’s than at Trump’s inauguration (Hendricks and Vestergaard 2019, p. 50). This is not a matter of perspective: viewed from any perspective, we come to the same conclusion. The truth in this case can be determined simply by comparing the assertion with reality. The much criticized correspondence theory of truth proves in this case clearly superior to other truth theories.

7 On strong perspectivism, see also Boghossian 2006, p. 90.
8 One may object at this point that truth also depends on arbitrary conventions (e.g. what a “normal observer” is), but as I will show in Sect. 1.9, every language presupposes a fixed system of reference by which meanings are defined.

If one abandons the correspondence theory of truth and does not accept external truth criteria, one must regard truth as an internal relation and define it by social criteria. Truth is then only conceivable as consensus or coherence. According to the coherence theory, truth is not a property of individual propositions but of a whole belief system; coherence is a measure of the consistency, compatibility, non-contradiction, and inferential interconnectedness of the propositions within the system. Judith Renner and Alexander Spencer (2018, p. 318) define truth as a “socially dominant understanding of truth” and identify it with the majority opinion. Any oppositional opinion then automatically becomes false, and it becomes impossible to criticize the prevailing wisdom. Truth becomes a question of power: whoever has the power is right, and whoever holds a different opinion is wrong. Resistance is futile. Relativism, which actually claims to be tolerant, thus proves to be intolerant towards minorities and dissenters. The thesis of the context-dependence and discourse-relativity of statements, which is quite correct in the sense of weak relativism, suddenly turns into an all-encompassing strong relativism. In this spirit, many postmodern texts read like this:

Each discourse has its own set of procedures that govern the production of what is to count as a meaningful or truthful statement. (. . .) A discourse as a whole cannot be true or false because truth is always contextual and rule dependent. Discourses are local, heterogeneous, and often incommensurable.
No discourse independent or transcendental rules exist that could govern all discourses or a choice between them. Truth claims are in principle ‘undecidable’ outside of, or between, discourses. (Flax 1999, p. 453)
If discourses are all incommensurable and if truth exists only within discourse, then one cannot criticize other discourses. Language games within a discourse thus become solipsistic performances. Or one pretends to be tolerant and claims that everyone is somehow right: “we have to recognize that all propositions are in some sense true, and hence that even those who are wrong are, from a particular point of view, right. (. . .) everybody is, in some sense, right. (. . .) everybody has the right to be right” (Tarca 2018, p. 413, 415). If one interprets relativism so broadly, then one would also have to grant populists, racists and fascists the right to be right.
1.7 Critique of Ontological Relativism
Some postmodern thinkers go one step further. In anthropology, an “ontological turn” has recently been proclaimed, asserting that people in other cultures not only have different views of the world, but literally live in other worlds or other realities (Henare et al. 2007, p. 10).9 Relativism cannot be formulated in more radical terms. The ontological turn is directed against the allegedly “eurocentric” idea that there is one world that can be viewed from different perspectives, and in particular against the claim to superiority of Western natural science. Instead, a radical otherness is postulated: if indigenous people believe in ghosts and demons, for example, then these also exist in their world, and no empirical argument, however good, can dissuade them from their belief.

A special form of perspectivism is suggested by the Brazilian anthropologist Eduardo Viveiros de Castro when he speaks of a plurality of perspectives that do not represent different ways of interpreting one world, but rather one way of viewing different worlds: “One culture, multiple natures” is his motto (de Castro 2004, p. 6). This amounts to an ontological relativism that postulates a plurality of natures or realities; Viveiros de Castro also uses the terms “objective relativism” and “multinaturalism”. Such relativism is incompatible with our common-sense realism, since realism assumes one world and not different realities. The ethnologist Hans Peter Duerr anticipated the ontological turn already in the 1970s when he wrote that the assumption of a fundamental incommensurability leads to a “fragmentation of reality” (Duerr 1980, p. 121): “Concepts such as ‘true’ and ‘real’ now receive their meaning only within a rather compartmentalized form of life, and since the particular forms of life are seemingly incompatible with one another, there exists one reality besides the other” (Duerr 1980, p. 122).
An anthropologist is thus in a privileged position: he alone is able to travel between these incommensurable realities. But how is it possible to jump back and forth between these worlds? Anthropologists cannot simply discard their own, mostly eurocentric forms of intuition and categories, since they have learned to see things from their own perspective and to classify and interpret them according to their own ontology. How can we understand radically different ontologies at all if we are always conceptually and perspectivally confined within our own ontology? As Vigh and Sausdal (2014, p. 57) remark: “How the proponents of the ontological turn are able to connect to incommensurable worlds, and translate them into understandable anthropological text, remains a mystery.” This raises the even more fundamental question of how we can know that we are dealing with different rather than identical worlds in the first place, for the postulate of otherness presupposes a comparison that, according to the incommensurability thesis, is not possible.
9 Important representatives of the “ontological turn” are Eduardo Viveiros de Castro, Philippe Descola, Martin Holbraad, Morten Axel Pedersen, Paolo Heywood, Eduardo Kohn and Amiria Henare.
Ontological relativism is the opposite of cosmopolitanism.10 Instead of overcoming cultural borders, insurmountable mental walls are erected between cultures. There can be no intercultural understanding of peoples, because on this view other cultures not only speak a different language or hold different views of the world, but do not belong to a common world at all. How can an anthropologist ask for tolerance of other cultural practices if not as a universal demand, valid for all cultures? But this contradicts his particularist and anti-universalist stance. On the one hand, it is claimed that different ontologies are incommensurable and that therefore none is superior to the others (“no ontology is privileged over another”, Palacek and Risjord 2013, p. 12); on the other hand, the “Western”, scientifically shaped ontology is rejected (Burman 2017, p. 934). This is not compatible with tolerance and ontological equality.

The ontological turn has also been echoed in STS (Holbraad and Pedersen 2017, pp. 37–46; Sismondo 2015). Some STS scholars explore how our practices create their own realities. But it is not clear what the difference between this “ontological” view and social constructivism is supposed to be. By interacting with things, a specific way of looking at the world is constituted. The resulting thing-ontology is at best an imagined ontology: a way of looking at things and interpreting them. The ontology is projected into things qua social construction. Patrik Aspers therefore suspects that ontology is here confused with epistemology and that the ontological turn merely represents constructivism in a new linguistic disguise:

If ontologies are made through practices, discourses, or any other social processes, and we can use ‘enacted’, ‘performed’, ‘made’, ‘fashioned’, ‘organized’, or similar words to describe these processes, the study of ontology is simply what constructivist researchers have done for years. (Aspers 2015, p. 451)
The main charge against relativism is that it is self-contradictory. Anyone who claims that everything is relative is making a statement that claims universal validity in the sense of: The statement “everything is relative” is true regardless of perspective. But this would mean that there is at least one truth that is not relative, as opposed to the assertion. To avoid this accusation, the relativist would have to retreat to a weaker position and keep his opinion to himself, without trying to convince or convert his opponents. His opinion is then true in his perspective, but need not be shared by others (Hales 2006, p. 102). Such a position, while consistent, is hopelessly self-referential and autistic. Consequently, no dialogue across perspectives would be possible, since in relativism even rules of discourse, principles of argumentation, and standards of rationality are always perspective-relative and have no universal validity. Everyone would argue according to his or her own logic. It would not even be possible to understand each other, because people would follow the rules of their own language game.
10 Cosmopolitanism is the idea of a global citizenship that unites nations, according to which all people share common values and ideals and have equal rights.

1.8 Post-Factual Politics
Relativist figures of argumentation have recently also been adopted by populists and the new right to justify their view of reality. In the USA, the phenomenon of “right-wing postmodernism” is widespread (Kakutani 2018, p. 45 f.). During the impeachment trial against President Trump, many Republican members of Congress used Lyotard’s notion of narrative to discredit claims by their political opponents about manipulation of the 2016 presidential election by Russian hackers. Thanks to their majority in the Senate, Republicans were able to enforce their interpretation of the facts and their conception of truth. Postmodern philosophers cannot be held responsible for this distortion of truth, but they unintentionally provide a legitimation for post-factual politics.

Trump has often been caught lying, but this does not bother him as long as his supporters stand faithfully behind him. When journalists ask critical questions and cast doubt on the truth of his statements, Trump points out that millions of American citizens share his views and that his claims therefore cannot be false (Pfiffner 2020, p. 25). This justification strategy is audacious, but it is consistent with the postmodern conception of truth. Recall how constructivism defines facts: “A fact is an idea or concept that everyone (or some subset of everyone such as a community or network) accepts as true” (Bauchspies et al. 2006, p. 19). Trump’s claims are accepted as true by his supporters. Therefore, even a constructivist would have to accept that Trump is getting the facts right.

In 2007, an anthology was published entitled The Social Construction of Climate Change (Pettenger 2007), in which climate change is described as a social construction. Donald Trump would certainly agree with this, especially since he himself is convinced that climate change is a big hoax, a conspiracy of the Chinese to harm the American economy.
In this way, social constructivism does the bidding of climate skeptics and conspiracy theorists. For Sonja Boehmer-Christiansen, a sociologist of science, climate change is an invention of interest groups that manipulate public opinion and influence politics for their own benefit:

The global warming threat was constructed in the 1980s by a coalition of interested parties offering expertise and technologies to solve the problem. The climate threat, like the limits of growth scare beforehand, offered new opportunities to environmental bureaucracies and professionals who found the proposed solutions empowering. (Boehmer-Christiansen 2003, p. 88)
Peter Lee, a political scientist at the University of Portsmouth, accuses climate scientists of establishing a “regime of truth”11 to suppress any doubt about climate change:

Climate truth is produced using institutions, skills and interests that span a whole range of professional expertise: all focused on producing an ideologically-driven view of the world that will, in turn, shape individual behaviour within power relations as a result of political decision making. (Lee 2015, p. 45)

11 The concept of a “regime of truth” was introduced by Michel Foucault (1980), who sees truth as a social construct that is produced, disseminated and anchored in people’s minds by means of political, economic and media power.
Trump’s victory in the 2016 presidential election can thus be interpreted as a regime change from a “regime of truth” to a “regime of post-truth” (Harsin 2015). Those who, like Stephen Turner, describe scientific experts as “dangerous to democracy” (Turner 2014, p. 42) should actually welcome the fact that Trump is drastically limiting the influence of expert panels, cutting research funding, and ignoring the advice of experts. Those who see truth merely as an expression of power cannot complain when a democratically elected president redefines truth.
1.9 The Limits of Relativism
Whoever asserts that everything is relative cannot claim that this assertion is universally true. It is only one opinion among many, no better founded than the others, since in strong perspectivism all points of view are of equal value and none is superior to the others. A radical relativist thus makes herself smaller than she actually wants to be. A relativist who criticizes other opinions can only do so from a superior standpoint and must assert a perspective-independent truth claim for herself. She wants her opponents to accept her truth claim as well. She must therefore exclude her own assertion from any doubt, since otherwise she would entangle herself in a pragmatic contradiction. Ludwig Wittgenstein writes in On Certainty: “Doubt itself rests only on what is beyond doubt” (Wittgenstein 1969, p. 68e; OC § 519). Wittgenstein explains this with the hinge metaphor:

That is to say, the questions that we raise and our doubts depend on the fact that some propositions are exempt from doubt, are as it were like hinges on which those turn. That is to say, it belongs to the logic of our scientific investigations that certain things are indeed not doubted. (Wittgenstein 1969, p. 44e; OC §§ 341, 342)
In addition to the hinge metaphor, Wittgenstein also uses a river metaphor to show that our knowledge is based on a foundation. He writes that among the propositions that describe our worldview there are propositions whose meanings are permanently changing – so-called fluid propositions – and those that are fixed and unchanging, as if they were frozen, just as there are fluid and solid elements in a river (Wittgenstein 1969, p. 15e; OC §§ 96–99). The river permanently changes its shape. Parts of the riverbed are swept away by the water, other parts are deposited and remain. Without these unchanging parts the river would not exist. Wittgenstein’s point is that any meaning of words, as well as the distinction between truth and falsity, presupposes such “fixed” propositions that must be accepted as true: “The truth of certain propositions belongs to our frame of reference” (Wittgenstein 1969, p. 12e; OC § 83).
Wittgenstein therefore cannot be called a relativist, as Fuller claims. On the contrary, he shows very clearly the limits of relativism. Certain propositions must be exempted from doubt, otherwise the language game of doubting would not work. The propositions accepted as true can only be scientifically verified propositions that reflect the current state of knowledge, e.g. that there is man-made climate change or that human beings are a product of a natural evolutionary process and not the result of supernatural “intelligent design”. These scientific findings do not represent unchangeable absolute truths, because science is permanently changing: new hypotheses are added, and old, firmly believed truths are questioned and revised. Truth is therefore indeed relative and changeable. And to a certain extent it is also socially constructed, insofar as it is the result of social interactions and communications. But this should not prevent us from accepting the results of research and recognizing them at least as provisional truths. We should trust experts, because when that trust is undermined, we fall prey to the perfidious business of climate change skeptics, Covid deniers, and conspiracy theorists.
1.10 Perspectival Realism
In social constructivism, it is often said that reality is socially constructed.12 A book by Andrew Pickering (1984) is entitled “Constructing Quarks”. One wonders how the material building blocks of the universe can be socially constructed. While Pickering leaves the reader in the dark as to what his book title means, Ian Hacking explains that Pickering is not claiming that quarks are an invention of physicists, but rather that “the idea of quarks” is constructed (Hacking 1999, p. 68). Similarly, David Bloor rejects the charge that he is advocating a kind of idealism: he has no doubt, he says, that there is a material reality that exerts a causal effect on human beings (Bloor 2004, p. 926). Sally Haslanger warns against generalizing the concept of construction and seeing everything as socially constructed:

But once we come to the claim that everything is socially constructed, it appears a short step to the conclusion that there is no reality independent of our practices or of our language and that “truth” and “reality” are only fictions employed by the dominant to mask their power. (Haslanger 1995, p. 96)
Haslanger therefore advocates a weak constructivism, according to which some concepts and conceptual distinctions are socially constructed, as is the way we perceive and interpret the world, while the world itself exists independently of us. Thus, if constructivists also believe in the reality of the material world, we have to distinguish between a real ontological reality and a socially constructed epistemic reality. Ontological reality, Kant’s “thing in itself”, exists independently of our consciousness and is unknowable to us, whereas epistemic reality is a product of cognitive construction and is viewed from a particular cognitive and social perspective.

12 Berger and Luckmann state: “The sociology of knowledge has the task of analyzing the social construction of reality” (Berger and Luckmann 1991, p. 15).

The ambiguity of the concept of reality also has implications for the concept of truth. We have to ask: Do true statements refer to ontological or epistemic reality? Is truth unique, or relative to an epistemic perspective? We believe that by uttering a true statement we are stating something about the world, namely about ontological reality. For example, when we claim that the sentence “The sun is shining today” is true, we do not mean to report a subjective experience (“It seems that the sun is shining today”); we want to describe an objective fact. This common-sense conception of truth is summed up in the correspondence theory: a proposition is true if and only if the fact asserted in it corresponds to ontological reality. But the correspondence theory of truth has been repeatedly and rightly criticized. The most important points of criticism are briefly summarized here:

• How is a correspondence between a proposition and reality to be established? To do this we would need direct, unadulterated and error-free access to reality, a “god’s eye view”, so to speak, in order to recognize the truth.
• What makes a proposition true? It is not clear what the “truthmakers” are supposed to be: facts, things, events, situations, or states of the world?
• Scientific progress shows that even theories once believed to be true can later be disproved. We cannot therefore assume that we are in possession of the truth, but can only approach it step by step.
• Historical experience shows that truth is always context-dependent and changeable.

The basic problem of the correspondence theory is that it demands a comparison of the asserted fact with objective reality, which is cognitively inaccessible to us. This definition of truth suggests a finality and incorrigibility that is practically unattainable.
All scientific truths are fallible and open to doubt because we can be wrong. Truth is always relative to a language, to conceptual schemes, and to underlying theories. The truth of the sentence “The sun is shining today” depends on the meaning of the word “shining” and presupposes a method of verification. We understand truth differently than Donald Trump or a conspiracy theorist does, and a shaman has a different conception of reality than an elementary particle physicist. It is this arbitrariness and plurality of different truths and worldviews that constitutes the post-truth problem.

When two people argue about whether a proposition is true or false, there is always a way to find out the truth. If Peter believes that the Earth is bigger than Venus and Paula claims the opposite, the dispute can simply be settled by looking in an encyclopedia or searching on the internet. If in doubt, we can also ask an expert. Fact checkers use the same method. We are not satisfied with a situation in which there are two or more truths; truth must be unambiguous. The most reliable and trustworthy method of finding truth is the scientific method. Even scientists do not always agree on what is true. Truth is always relative to the current state of knowledge. But there is hope that scientific disputes can be settled over time. In the 1990s, climate scientists disagreed on whether the rise in the mean global temperature was a natural temporary phenomenon or whether it had anthropogenic causes, indicating a stable trend. Different groups of scientists held different opinions and each had their own truth. Meanwhile, climate scientists are largely in agreement on this point.

Charles Sanders Peirce (1955, p. 38) gives the following definition of truth: “The opinion which is fated to be agreed to by all who investigate, is what we mean by the truth, and the object represented in this opinion is the real.” Peirce’s hope is to arrive in the long run at a correspondence between the propositions of science and ontological reality. Hilary Putnam (1979, p. 73) justifies this approximation theory of truth with the no-miracle argument, according to which the approximate truth of our best available theories is the best explanation of scientific progress, since otherwise that progress would seem to be a miracle. Another way of defining truth is to refer to the best theories presently available and to accept that truth is variable in time: in this sense, a proposition p would be true exactly when the best available theories and empirical data assert that p is true. It is therefore reasonable to advocate a perspectivist epistemology according to which truth claims are always relative to an epistemic perspective (cf. Giere 2006, p. 81). But we cannot and should not regard all perspectives or interpretations as equally valid. Rather, we should prefer those perspectives that are scientifically well founded: “our current scientific truths are the best ones we are entitled to by our own lights as of today” (Massimi 2018, p. 173). Michela Massimi advocates a “perspectival realism” (PR) that postulates a mind-independent reality and defines truth as correspondence:

PR endorses the realist metaphysical tenet about a mind-independent (and perspective-independent) world; (. . .)
PR endorses the realist epistemic tenet in thinking that acceptance of a theory implies the belief that the theory is true (and even shares the realist intuition that truth has to be cashed out in terms of correspondence, rather than coherence, warranted assertibility and so forth). (Massimi 2018, p. 170 f.)
Correspondence, however, can only be a goal of research; it cannot be achieved in practice. We therefore have to be content with provisional truths. Possible truth criteria and indicators are: coherence, evidence, credibility, empirical confirmation, proof, probability, and practical success. These criteria give us an indication of how “close to truth” our theories are.

The search for truth gives us orientation in the face of a chaotic world of narratives, half-truths, lies and fake news. Scientists and fact checkers therefore use the above truth criteria to distinguish between good and bad hypotheses, between truth and fake. Truth is an indispensable regulative principle and has instrumental value. Robert Nozick (2001, p. 55) speaks of “serviceability.” And Harry G. Frankfurt describes the utility of truth for everyday life as follows: “Individuals require truths in order to negotiate their way effectively through the thicket of hazards and opportunities that all people invariably confront in going about their lives” (Frankfurt 2006, p. 34 f.). If we were indifferent to truth, we would be in constant danger of being wrong and failing.
Perspectival realism shows a way to take the results of STS and the sociology of science into account without slipping into a self-destructive relativism. One can critically question the judgment of experts, because they too are caught in a perspectival view and are therefore always biased. The strength of STS lies in its critical attitude towards scientific practice. But scientists cannot take a neutral standpoint outside science, because they themselves are part of this system. STS scholars must therefore become more self-critical. Those who do not want to live in a post-factual world must distance themselves from relativistic tendencies within STS and reject Fuller’s post-truth philosophy. Harry Collins and Robert Evans recognize the dangers of postmodern relativism and defend science against an epistemic egalitarianism that regards all claims as equally valid:

Most social analysts think that democracy needs protecting against scientific and technological experts; we argue that scientific and technological experts have the potential to protect democracy! (Collins and Evans 2017, p. 8)

Science’s values are democratic values. Science is important to democratic societies because it supports democracy through its very existence. (Collins and Evans 2017, p. 145)
What we need is a new Enlightenment. The Enlightenment was a movement to bring light into a dark world in which people were surrounded by superstition and belief in witches and miracles, the fake news of their time, so to speak. People had to learn to trust science more than fortune tellers and clairvoyants. As a remedy against the widespread belief in miracles of his time, David Hume recommended sticking to scientific methodology and relying on experience and evidence (Hume 2007, p. 92). It is to this trust in science that we owe the great achievements of our civilization.
References

Anders G (1994) Die Antiquiertheit des Menschen, vol 1. Beck, München
Aspers P (2015) Performing ontology. Soc Stud Sci 45:449–453
Barnes B (1974) Scientific knowledge and sociological theory. Routledge & Kegan Paul, London
Barnes B, Bloor D (1982) Relativism, rationalism and the sociology of knowledge. In: Hollis M, Lukes S (eds) Rationality and relativism. MIT Press, Cambridge, MA, pp 21–47
Bauchspies W, Croissant J, Restivo S (2006) Science, technology, and society. Blackwell, Malden/Oxford
Berger P, Luckmann T (1991) The social construction of reality. Penguin, London
Bloor D (1991) Knowledge and social imagery. University of Chicago Press, Chicago
Bloor D (2004) Sociology of scientific knowledge. In: Niiniluoto I, Sintonen M, Wolenski J (eds) Handbook of epistemology. Kluwer, Dordrecht, pp 919–962
Boehmer-Christiansen S (2003) Science, equity, and the war against carbon. Sci Technol Hum Values 28:69–92
Boghossian P (2006) Fear of knowledge. Against relativism and constructivism. Clarendon Press, Oxford
1
Truth Relativism, Science Skepticism and the Political Consequences
23
Brandom R (2000) Articulating reasons. An introduction to inferentialism. Harvard University Press, Cambridge, MA
Burman A (2017) The political ontology of climate change: moral meteorology, climate justice, and the coloniality of reality in the Bolivian Andes. J Polit Ecol 24:921–938
Carter J et al (2017) Sidelining science since day one. Center for Science and Democracy, Cambridge, MA. http://www.ucsusa.org/SideliningScience. Accessed 1 Apr 2020
Cicero MT (1991) On duties (De officiis). Cambridge University Press, Cambridge
Collins H, Evans R (2017) Why democracies need science. Polity Press, Cambridge
Collins H, Evans R, Weinel M (2017) STS as science or politics? Soc Stud Sci 47:580–586
Collins H et al (2020) Experts and the will of the people. Springer/Palgrave Macmillan, Cham
D’Ancona M (2017) Post truth. Ebury Press, London
de Castro EV (2004) Perspectival anthropology and the method of controlled equivocation. Tipiti J Soc Anthropol Lowland S Am 2(1):3–22
Duerr HP (1980) Traumzeit, 5th edn. Syndikat, Frankfurt am Main
Feyerabend P (1975) Wider den Methodenzwang. Suhrkamp, Frankfurt am Main
Feyerabend P (1978) Science in a free society. Verso, London
Feyerabend P (1981) Irrationalität oder: Wer hat Angst vorm schwarzen Mann? In: Duerr HP (ed) Der Wissenschaftler und das Irrationale, vol 2. Syndikat, Frankfurt am Main, pp 37–59
Feyerabend P (1999) Knowledge, science and relativism. Cambridge University Press, Cambridge
Flax J (1999) Disputed subjects. Routledge, London
Fogelin R (2003) Walking the tightrope of reason. Oxford University Press, New York/Oxford
Foucault M (1980) Power/knowledge. Selected interviews and other writings 1972–1977. Pantheon, New York
Frankfurt H (2006) On truth. Knopf, New York
Friedell E (2007) Kulturgeschichte der Neuzeit. Beck, München
Fuller S (2007) New frontiers in science and technology. Polity Press, Cambridge
Fuller S (2018) Post-truth. Knowledge as a power game. Anthem Press, London
Giere R (2006) Scientific perspectivism. University of Chicago Press, Chicago
Gumplowicz L (1905) Grundriss der Soziologie, 2nd edn. Manz, Wien
Habermas J (1971) Theorie der Gesellschaft oder Sozialtechnologie? In: Habermas J, Luhmann N (eds) Theorie der Gesellschaft oder Sozialtechnologie – Was leistet die Systemforschung? Suhrkamp, Frankfurt am Main, pp 142–290
Hacking I (1999) The social construction of what? Harvard University Press, Cambridge, MA
Hales S (2006) Relativism and the foundations of philosophy. MIT Press, Cambridge, MA
Hales S, Welshon R (2000) Nietzsche’s perspectivism. University of Illinois Press, Urbana
Harsin J (2015) Regimes of posttruth, postpolitics, and attention economies. Commun Cult Crit 8:327–333
Haslanger S (1995) Ontology and social construction. Philos Top 32:95–125
Henare A, Holbraad M, Wastell S (2007) Thinking through things. Routledge, London
Hendricks V, Vestergaard M (2019) Reality lost. Springer, Cham
Holbraad M, Pedersen MA (2017) The ontological turn. An anthropological exposition. Cambridge University Press, Cambridge
Hume D (2007) An enquiry concerning human understanding. Oxford University Press, Oxford
Kakutani M (2018) The death of truth. William Collins, London
Kuhn TS (1970) The structure of scientific revolutions, 2nd edn. University of Chicago Press, Chicago
Kusch M (2019) Relativist stances, virtues and vices. Aristot Soc 93(Suppl):271–291
Latour B (2004) Why has critique run out of steam? From matters of fact to matters of concern. Crit Inq 30:225–248
Latour B (2013) An inquiry into modes of existence. Harvard University Press, Cambridge, MA
Lee P (2015) Truth wars. Palgrave Macmillan, New York
Lynch M (2004) True to life. Why truth matters. MIT Press, Cambridge, MA
Lynch M (2017) STS, symmetry and post-truth. Soc Stud Sci 47:593–599
MacFarlane J (2010) Making sense of relative truth. In: Krausz M (ed) Relativism. A contemporary anthology. Columbia University Press, New York, pp 124–139
MacFarlane J (2014) Assessment sensitivity. Oxford University Press, Oxford
Machiavelli N (1998) The Prince, 2nd edn. University of Chicago Press, Chicago
Massimi M (2018) Perspectivism. In: Saatsi J (ed) The Routledge handbook of scientific realism. Routledge, Abingdon/New York, pp 164–175
McIntyre L (2018) Post-truth. MIT Press, Cambridge, MA
Nietzsche F (1999) Kritische Studienausgabe (KSA). Deutscher Taschenbuch Verlag, München
Nietzsche F (2003) Writings from the late notebooks. Cambridge University Press, Cambridge
Nietzsche F (2005) The anti-christ, ecce homo, twilight of the idols, and other writings. Cambridge University Press, Cambridge
Nietzsche F (2020) Unpublished fragments (Spring 1885 – Spring 1886). Stanford University Press, Stanford
Nozick R (2001) Invariances. The structure of the objective world. Harvard University Press, Cambridge, MA
Oreskes N, Conway E (2010) Merchants of doubt. Bloomsbury Press, New York
Palacek M, Risjord M (2013) Relativism and the ontological turn within anthropology. Philos Soc Sci 43:3–23
Peirce CS (1955) Philosophical writings. Dover, New York
Pettenger M (ed) (2007) The social construction of climate change. Ashgate, Aldershot
Pfiffner J (2020) The lies of Donald Trump. A taxonomy. In: Lamb C, Neiheisel J (eds) Presidential leadership and the Trump presidency. Springer/Palgrave Macmillan, Cham, pp 17–40
Pickering A (1984) Constructing quarks. University of Chicago Press, Chicago
Putnam H (1979) Mathematics, matter and method, Philosophical papers, vol 1, 2nd edn. Cambridge University Press, Cambridge
Renner J, Spencer A (2018) Trump, Brexit and post-truth: how post-structuralist IR theories can help us understand world order in the 21st century. Zeitschrift für Politikwissenschaft 28:315–321
Sidky H (2018) The war on science, anti-intellectualism, and ‘alternative ways of knowing’ in 21st century America. Skept Inq 42(2):38–43
Sim S (2013) Fifty key postmodern thinkers. Routledge, London
Sismondo S (2015) Ontological turns, turnoffs and roundabouts. Soc Stud Sci 45:441–448
Sismondo S (2017) Casting a wider net: a reply to Collins, Evans and Weinel. Soc Stud Sci 47:587–592
Tarca LV (2018) The right to be right: recognizing the reasons of those who are wrong. Philos Soc Criticism 44:412–425
Turner S (2014) The politics of expertise. Routledge, New York
Vigh HE, Sausdal DB (2014) From essence back to existence: anthropology beyond the ontological turn. Anthropol Theory 14:49–73
Wittgenstein L (1969) On certainty (OC). Blackwell, Oxford
Wolters EA, Steel B (2018) When ideology trumps science. Praeger, Santa Barbara
Wynne B (1989) Sheepfarming after Chernobyl. Environment 31(2):33–39
Zoglauer T (2020) Wissen im Zeitalter von Google, Fake News und alternativen Fakten. In: Klimczak P, Petersen C, Schilling S (eds) Maschinen der Kommunikation. Springer Vieweg, Wiesbaden, pp 63–83
Thomas Zoglauer, Prof. Dr., wrote the article “Truth Relativism, Science Skepticism and the Political Consequences.” He is an adjunct professor (außerplanmäßiger Professor) at the Institute of Philosophy and Social Sciences at the Brandenburg University of Technology and works in the fields of ethics, philosophy of technology, and philosophy of science.
2 Of Fakes and Frauds: Can Scientific “Hoaxes” Be a Legitimate Tool of Inquiry?
Axel Gelfert
Abstract
The history of science amply demonstrates that science is immune neither to honest mistakes nor to fraud and misconduct. While the former are minimized by discipline-specific methodologies, the latter are supposed to be kept in check by professional ethics guidelines of good scientific practice. But what happens when “fakes” are produced with the best of intentions and the greatest of scientific skill – in other words, when the scientific tools of the trade are turned into a “forger’s workshop”? In the case of hoaxes, the artful faking of scientific results is often combined with the enlightened claim to denounce scientific grievances – e.g. ideological distortions or a lack of critical awareness. In contrast to mere falsifications, hoaxes aim at their own exposure; they necessarily remain selective, yet pretend to be representative and generally valid. Using the so-called “Sokal Squared” hoax as an example, it is argued that the lofty claim of its originators is difficult to vindicate. Scientific hoaxes face a dilemma: the deliberate violation of scientific norms that makes their implementation possible in the first place runs the risk of disqualifying hoaxes as empirical tools of epistemic inquiry.
2.1 Too Good to Be True
“Welcome to the Club” – with these words Horst Störmer, the 1998 German Nobel Prize winner in physics, welcomed his young colleague in December 2001 on the occasion of the award of the highly endowed Otto Klung Prize to the up-and-coming young scientist.

A. Gelfert (✉)
Fachgebiet Theoretische Philosophie, Technische Universität Berlin, Berlin, Germany
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023
P. Klimczak, T. Zoglauer (eds.), Truth and Fake in the Post-Factual Digital Age, https://doi.org/10.1007/978-3-658-40406-2_2
Schön, he said, had researched “three things in three years”, all of them “sensational research results for the scientific community”, which had accordingly been published in quick succession in the leading scientific journals Nature and Science. In view of Jan Hendrik Schön’s productivity, further top results and awards could be expected, especially since four of the Otto Klung Prize winners – including Störmer himself – were awarded the Nobel Prize later in their careers. This was followed by a sentence which in retrospect must be classified as particularly significant: “He [=Schön] found qualities in plastics that we would never have thought possible.” (quoted from Seer 2002).

What had carried the laudator and the experts to such raptures of enthusiasm were Schön’s apparently groundbreaking results in the solid-state physics of organic materials. Using ingenious methods, he had succeeded in growing crystals from organic materials, purifying them, and then incorporating them into electronic circuits, where they could take the place of traditional semiconductors such as silicon. Schön had also succeeded in turning organic materials into optical lasers, building transistors on a molecular basis, and producing almost ready-to-use high-temperature superconductors from plastic. Schön thus seemed a worthy successor to the series of groundbreaking research results at Bell Laboratories, where he had been employed since 1997.

A year later, the hopes raised by the supposed research results were dashed. In at least 16 of 24 suspected cases, Schön was proven to have manipulated his findings. Some data had been falsified, others simply fabricated; Schön, who had previously been declared “worthy of a Nobel Prize” by the scientific press, was now under general suspicion.
The Otto Klung Prize awarded to him the year before was revoked, the parent company of Bell Laboratories, Lucent Technologies, immediately terminated his employment, and the directorship of the Max Planck Institute for Solid State Research in Stuttgart, with which the shooting star was to have been lured back to Germany only shortly before, receded into the distance. Its director emeritus, Joachim Queisser, complained that the ubiquitous scientific rat race was “spoiling scientific manners” and wistfully recalled times when “the forefathers of today’s physicists, chemists and engineers still wrote careful, somewhat long-winded publications” and “sceptical colleagues had to be able to confirm their own results” (quoted from Meichsner 2002).

There followed a brief, somewhat bashful phase of collective self-reflection, combined with appeals to the responsibility of co-authors as well – Schön had published a number of articles together with more established colleagues who, however, stated that they had never looked at the experiments described – which came to something of a conclusion with Schön’s exclusion from the scientific establishment and, finally, with the withdrawal of his doctoral degree (only confirmed by the court of last instance in 2014). In the end, Schön’s results proved to be too good to be true.

One could dismiss the Schön case as the culpable misconduct of an individual whose greed for success and social recognition led him drastically astray. For the ethical classification of Schön’s misconduct – including the falsification of data and the deliberate deception of others – this is probably the most accurate perspective. But, as Queisser suggests, the case also exposes the systemic problems of a highly competitive scientific enterprise distorted by misaligned incentives, which does not allow itself sufficient time to
subject results “that we would never have thought possible” to systematic scrutiny, and which takes no notice when a young up-and-coming scientist produces scientific papers on a weekly basis.

Moreover, in the role of an advocatus diaboli, the counterfactual question can be asked: What if? What if one or the other of Schön’s merely hypostasized mechanisms had turned out to be valid after all upon closer examination?1 Would Schön then have been rehabilitated as a brilliant visionary, albeit a sloppy experimenter? It is well known that Galileo, too, did not carry out many of his experiments in the way he described them – and not only the famous thought experiments that require no practical realization, but also alleged measurements, for example on the mechanics of inclined planes. Even when measurements are carried out, the question arises as to which measured values are later taken into account and published. Robert Millikan, the discoverer of the elementary charge, who was awarded the Nobel Prize in Physics in 1923, based the determination of its experimental value in his famous droplet experiments, according to his own statement, on the data of 28 oil droplets which were held in suspension by an electric field: “These 28 are, without exception, all the droplets examined during 60 consecutive days” (quoted in Di Trocchio 1994, p. 34). Unfortunately for Millikan, the later review of his laboratory notebooks found measurements of a total of 140 droplets, of which Millikan had quite obviously selected those 28 that came closest to the numerical values he was looking for. Schön was certainly no second Millikan, and the difference between the two cases is easily named.
While Millikan conducted genuine experiments, which – assuming minimal standards of scientific realism – brought him to some extent into close contact with the causal structures of empirical reality, Schön’s method, at least in the most blatant of the criticized cases, consisted simply in the deliberate invention of alleged data. These were not based on empirical evidence – not even evidence filtered through wishful thinking, as Millikan’s data were – but skilfully exploited gaps in existing knowledge about complex processes in organic materials in order to serve the hopes of their audience, and this, as Störmer implicitly acknowledged, by running counter to existing expectations. Precisely here lies an essential difference between scientific intuition and the mere fabrication of scientific results.
2.2 Hoaxes in the Human Sciences
The argument that a lack of “close contact” with the real causal structures of the world constitutes an essential difference between speculative fabrication on the one hand and evidence-based science on the other is based on a moderate scientific realism. The term “scientific realism” is understood here to mean nothing more than the methodological assumption that the success of science must be measured by the extent to which we can infer
1 In fact, Schön’s results could not be reproduced in other laboratories; cf. Reich (2009).
processes and structures present in reality from available evidence. The fact that we succeed in doing so with some regularity and that, moreover, our representations of reality obtained in different ways often converge – at least in partial areas – can then be regarded as a confirmation of the validity of the scientific method itself. Scientific realism gains its plausibility primarily from those “mature” natural sciences that are indeed characterized by far-reaching convergence, instrumental success and confirmation of their predictions, as is the case in physics, for example. One does not have to be a hard-boiled scientific realist to recognize that physics, as the leading natural science, can be held to different externally testable standards than may be expected, for example, from the human sciences. Even within the natural sciences, many assumptions, e.g. about the law-like character of certain processes and mechanisms, have to be weakened as soon as one turns one’s gaze from physics to, say, biology. Biological “laws of nature” are almost always subject to exceptions, and the more complex and contingent the relationships to be explained, the less generalizable our scientific attempts at explanation usually are. This is all the more true in the human sciences, where the methods and categories used have a sometimes unpredictable effect on their objects of study. For, as Ian Hacking has pointed out, generalizations about human behavior and social groups cannot have any ultimate claim to validity, if only because the corresponding categories change continuously and are retroactively interpreted, modulated, and otherwise influenced by the people thus categorized.
Categories in the human sciences are thus not structures set in stone, but “interactive kinds” (Hacking 1999), since the persons thus categorized “can make tacit or even explicit choices, adapt or adopt ways of living so as to fit or get away from the very classification that may be applied to them” (Hacking 1999, p. 34). This distinguishes them from the (in this respect passive) “natural kinds” that constitute the very object of study of the natural sciences: electrons, proteins, and metabolic cycles do not care what scientists think about them – humans do.

Hacking’s attempt to develop a taxonomy of different descriptive categories that does justice to the plurality of the branches of science – from the “natural kinds” of the basic scientific disciplines to the “interactive kinds” of the social, human and cultural sciences – can also be seen as an attempt at mediation at the end of the so-called “science wars” of the 1990s. These had ignited over the natural sciences’ claim to knowledge, which some natural scientists and philosophers of science saw as being illegitimately called into question by the fact that social and cultural approaches within science studies proceeded explicitly on the basis of a principle of symmetry (originally intended to be merely methodological), which attributed no explanatory value to the truth of scientific knowledge in accounting for its acceptance.2 The debate, initially conducted largely out of the public eye, moved into the spotlight in 1996 when the New York physicist Alan Sokal revealed that he had published a parody essay in the prominent cultural and social science journal Social
2 In David Bloor’s formulation, the symmetry principle demands “[that the] same types of causes would explain true and false beliefs” (Bloor 1976/1991, p. 7).
Text entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” even though it was merely a string of pseudoscientific half-truths, supposedly postmodern beliefs, and grammatically correct but unfortunately meaningless jargon (“my article is a mélange of truths, half-truths, quarter-truths, falsehoods, non sequiturs, and syntactically correct sentences that have no meaning whatsoever,” Sokal 1998, p. 248). As his motivation, Sokal stated that he was concerned not only about intellectual standards and an overly lax approach to scientific knowledge, but also, explicitly, about the fact that reactionary forces could ultimately be strengthened by an epistemic relativism dressed up in progressive garb.

Sokal’s hoax was successful insofar as it illustrated editorial sloppiness and an all too great tolerance for meaningless jargon in the review process of the journal Social Text; the physicist’s frustration with those representatives of the humanities and cultural studies who like to weave scientific results into their texts as decorative accessories, without being aware of their inner connections and relevance, is also understandable. To Sokal’s credit, he had submitted his essay under his own name and without giving a false affiliation, and was thus available to the editors at any time for an honest discussion of its content. And yet it can be argued that Sokal’s hoax is also based on a number of misunderstandings. For one thing, it is unclear to what extent a lax approach to scientific knowledge shatters the foundations of the theoretical edifices of the thinkers Sokal criticizes. In a later book co-authored with Jean Bricmont, which builds on Sokal’s hoax and discusses its implications, the authors strike a more moderate tone: “We make no claim that this invalidates the rest of their work, on which we suspend judgment.” (Sokal and Bricmont 1998, p.
X) Thus, in the end, all that remains is the implicit accusation that cultural studies journals such as Social Text do not subject submitted manuscripts to sufficiently rigorous scientific fact-checking. However, in a later – as the title “Le pauvre Sokal” shows, quite polemical – review of Sokal’s and Bricmont’s book, John Sturrock points out that a journal like Social Text is not in the business of conveying secure and verifiable knowledge in the manner of the natural sciences, but rather aims at keeping the intellectual discussion going: “[Social Text] has every reason to encourage adventurism in ideas as the way to keep the intellectual pot boiling.” (Sturrock 1998, p. 8).

Being able to foist crudely questionable “findings” on a receptive intellectual public simply because they fit into an existing network of beliefs, expectations and wishful thinking is not a new phenomenon. The publication of The Poems of Ossian by the Scottish poet and man of letters James Macpherson in 1760 can serve as a historical example. Macpherson claimed that these were “fragments of ancient poetry” that had been handed down orally in the Scottish Highlands, where he had painstakingly recorded them until he happened to discover an Old Gaelic manuscript containing the very songs in their entirety. In reality, Macpherson had written the poetry himself, and in the years that followed he supplied ever new forgeries, which enjoyed great popularity in the European literary world, were translated many times, and fell on fertile ground in a pre-Romantic climate that was as receptive to the idea of the national epic as it was to the dark motifs of the Ossianic songs, reminiscent of Gothic novels. The Scottish Enlightenment philosopher David Hume
advised his friend Hugh Blair, who acted as editor for Macpherson’s Fragments of Ancient Poetry and was not fully convinced of Macpherson’s “finds,” to verify their authenticity by urging the clergy in the Highlands to “have the translation in their hands, and let them write back to you, and inform you that they heard such a one (naming him), living in such a place, rehearse the original of such a passage, from such a page to such a page of the English translation [= Macpherson’s book], which appeared exact and faithful” (Hume 1932, p. 400). What is interesting here is that in the case of phenomena that rely entirely on oral tradition, authenticity can only be made plausible by expanding the base of witnesses and thus establishing the actual existence of a tradition of transmission that is widespread among the population.3

Whereas Macpherson’s Ossianic songs may have been the product of wishful thinking and a desire for recognition, Edgar Allan Poe’s intention to show off publicly before an audience eager for sensation is even clearer. Several times in his career, Poe distinguished himself with invented tales of supposed moon travelers, world travelers, or even dying men kept alive by mere hypnosis.4 Confronted with questions about the veracity of the stories, Poe would either declare that he had written “in the garb of fiction” or admit that he was indulging in a hoax (“hoax is precisely the word suited to Mr. Valdemar’s case,” Poe wrote when asked about the life-sustaining effects of hypnosis). His masterpiece – at least in terms of hoax qualities – may well have been, however, the episode known as the “Great Balloon Hoax.” On April 13, 1844, the New York Sun published on the front page of its noon edition a nearly full-page article labeled “Astounding News!” reporting the Atlantic crossing of the famed balloonist Thomas Monck Mason, who had just completed a three-day voyage originally intended to take him from London to Paris.
Due to a propeller mishap and high winds, he had then veered off course and landed safely near Charleston, South Carolina, after a 72-hour flight. The article was illustrated by a sketch of the flying machine that Poe had taken from a paper attributed to Mason (Remarks on the Ellipsoidal Balloon, Propelled by the Archimedean Screw, Described as the New Aerial Machine, 1843). What made Poe’s “Great Balloon Hoax” successful was not merely the (fictional) description of the balloon journey, embellished with many narrative details, which served the expectations of a public hungry for progress and increasingly placing its faith in technology. Rather, the account was sufficiently anchored in reality to be able to unfold plausibility and persuasiveness. For one thing, there were historical antecedents, including popular demonstrations of hot-air balloons, so that audiences could easily extrapolate from what was already known to what was merely claimed; for another, Mason was an author known to be real, who in a work published in 1836 described an eighteen-hour balloon journey from London to Weilburg, Nassau, completed that same year with Charles Green and Robert Hollond. Mason was thus no stranger to those familiar with the technical state
3 On Hume’s response to this problem, see Gelfert (2017).
4 Cf. Walsh (2006).
of early aviation experiments, and the skilful interweaving of such real facts with merely invented incidents made Poe’s article not only a crowd-pleaser, but a veritable example of modern “fake news”.
2.3 What Hoaxes Are and How They Work
The English word “hoax” has only found its way into broader usage in German since the late 1990s, in parallel with the increasing use of the Internet as a communication platform. A brief search in text corpora of German-language newspapers confirms that the term was initially used in German only for a narrowly defined phenomenon – the forwarding of those “panic-inducing virus warnings that are constantly sent around the net as chain letters” (Borchers 2000). Occasionally, fraudulent spam emails promising unexpected windfalls were also referred to as “hoaxes” in the German media – usually with a stern warning and the admonition not to take every attention-seeking chain email at face value, let alone forward it to friends and acquaintances. This use of the word in German, however, reflects only a small section of the spectrum of meaning that the word possesses in English – namely, only the very last turn in its semantic development.

The etymological connection to the expression “hocus” as a component of “hocus-pocus” is widely accepted, which in turn is considered a verbalization of the Latin formula hoc est corpus meum from the Catholic liturgy. In English, the word – both as a noun (“a hoax”) and, less commonly, in the verb form “to hoax” – has been in use since the early nineteenth century. The first documented written characterization of “hoaxing” (a “recently received . . . elegant term”) defines the expression as “contriving wonderful stories for the publick” (Malcolm 1808, p. 213), and in this meaning it must have been sufficiently common that Schwenck’s 1838 Wörterbuch der deutschen Sprache refers to it in an entry on the German word “Hokuspokus”: “da im Engl. hoax einen blauen Dunst vormachen heißt, so ist es vielleicht damit verwandt” [“since in English hoax means to delude someone, it is perhaps related to it”] (Schwenck 1838, p. 299).
Charles Darwin writes in his autobiography that in the course of his scientific career he rarely had to deal with hypotheses and theories that proved correct at the first attempt, but also only three times with deliberate deception: “I have known in the course of my life only three intentionally falsified statements, and one of these may have been a hoax (and there have been several scientific hoaxes) which, however, took place in an American Agricultural Journal” (Darwin 1958, p. 143). The case described here as a “hoax” was an article whose author claimed to have successfully bred non-sterile offspring by crossing different types of livestock – thereby spontaneously creating new animal species; the author claimed that he had corresponded with Darwin and that Darwin had expressed deep admiration for his work – a claim that Darwin rejected as “impudence”. Here, too, the anonymous author’s intention to deceive dominated – not only concerning the scientific results (of which Darwin, who was familiar with animal breeding, knew from his own experience that they could not be true), but especially as regards their claimed
recognition by scientific authorities. Yet this example shows once again that science, too, is susceptible to the manipulation of the expectations and conventions of relevant publics, which, in turn, are addressed by (and partly constituted through) specific forms of publication. If a “hoax” were merely aimed at the (temporary) deception of individuals, it would indeed not differ from the “pranks” (Streiche), “antics” (Possen) or “fibs” (Schwindeleien) by which the English expression is often translated into German. But a hoax needs publicity – as is, after all, already indicated by the phrase “for the publick” in the 1808 definition quoted above.5 One might think that hoaxes, like rumors, strive for publicity simply because they are designed to spread; accordingly, a hoax would be nothing more than “a bogus news item whose sole purpose is to spread” (Droesser 2001). Their quasi-public character would thus be merely a concomitant of the dissemination mechanism they mobilize. Confidential gossip about a mutual acquaintance does not constitute a rumor, at least as long as the claims are not passed on to third parties by the interlocutors. Likewise, artfully and secretly producing a forgery (of an archaeological artifact, for example) in order to mislead a single addressee would not constitute a hoax, but would rather amount to a “prank” (i.e., a practical joke). However, a wider reception may well change circumstances in such a way that, in retrospect, they affect the character of the deceptive act: what began as a “prank” may then end up as a “hoax”. A possible example of this is the case of the so-called “Piltdown Man”, an alleged early human whose skull – so claimed the finder, the amateur archaeologist Charles Dawson – had been found in 1912 in a gravel pit near the English village of Piltdown.
The sensational find was considered a “missing link” in the history of human evolution by the curator of the geological department of the British Museum, Arthur Smith Woodward, and was perceived as such by the general public. Despite simmering doubts about its authenticity, it took more than forty years before the Piltdown skull was exposed as a fake in 1953; an examination conducted a few years later using the then-new method of radiocarbon dating proved once and for all that the supposedly 150,000-year-old find had in fact been assembled from components only a few centuries old. Although the exact authorship remains unclear, one hypothesis is that the Piltdown forger originally set out to exploit the credulity of his immediate addressees, not expecting the find to attract such public attention or the debate to drag on for decades. The example of the Piltdown Man shows that hoaxes are not – like rumors, for example – bound to a propositional format. Material artefacts – including not only fake archaeological or palaeontological “finds” but also technical prototypes (such as Joseph Papp’s “fusion engine”, exposed by Richard Feynman in 1968) – can also be hoaxes, designed to invite certain interpretations without being exhausted by their propositional content. More importantly, a hoax, or a claim conceived as a hoax, is not
5 As literary scholar James Fredal aptly puts it, “Whereas other forms of deception are private and often intensely personal, hoaxes are characterized by their publicity and notoriety.” (Fredal 2014, p. 76).
only designed for publicity and dissemination, but also intended to be debunked as such at a given moment. Part of the success of Poe’s hoax of the Atlantic crossing was precisely that he revealed his tall tale as such on the very day of publication and educated readers about it on the steps of the Sun’s publishing building – which did not stop them from snatching the papers out of the hands of the vendors. Part of the nature of the hoax, especially in a scientific context – as in the case of Sokal’s nonsense essay – is that, in retrospect, it contains clear signs of intent to deceive – which the intended audience, had they been a little more discerning, really should not have fallen for. Unlike in the case of simple fakes, one of the conditions of a hoax’s success is therefore that it does not permanently fool its audience: “The successful hoax fools some of the people for a time and most for a while, but it does not fool everyone permanently.” (Fredal 2014, p. 77).
2.4 Dissimulation in Digital Times
The digitalization of our world of communication has created a need for new descriptive categories, including categories adequate to the new forms of simulation and dissimulation that digitalization makes possible. The possibilities for manipulating, simulating and fabricating digital information are extremely diverse, and public perception often lags behind what is actually feasible. The non-propositional (or not necessarily propositional) character of Internet hoaxes can be illustrated by early examples, some of which have become deeply engraved in the collective memory of a network community trained not least on “Internet memes”. For example, the so-called “Tourist Guy” hoax caused a furor immediately after the terrorist attacks of September 11, 2001, when an alleged photo of a tourist on the observation deck of the New York World Trade Center was disseminated via relevant Internet forums. It showed the man later referred to as “Tourist Guy” in front of the New York skyline, with the approaching plane in the background, about to crash into the tower below him. The image – manipulated, of course – quickly became the target of projections of fear of terror and death, but could also be read as an ambiguous commentary on the conspiracy theories that soon began to circulate. In fact, the image could easily have been identified as a fake: the man in the photo was wearing winter clothing that would have been completely inappropriate on a sunny September day in New York; the plane was approaching from the wrong direction; and even the type of aircraft depicted did not match the plane that actually crashed. Inconsistencies of this kind should make a hoax easily recognizable as such, but spotting them presupposes background knowledge – as well as a willingness on the part of the audience to examine supposedly unambiguous evidence (all the more so if it is presented in a visually catchy way) for its truth content.
Not only in phases of information scarcity, but also in phases of epistemic uncertainty – which can result from the dynamics of an unclear situation, but also from an oversupply of contradictory sources – rumors, hoaxes and other “non-official” sources can flourish. That the public’s willingness to critically examine its
sources is not always at its best is also shown by the recent heated debate about so-called “fake news”. There has been – and still is – much debate about whether “fake news” actually constitutes a new category of falsehood in our information environment, or whether the phenomena so labelled are merely the counterpart – accelerated by social media and other Internet formats – of classic eyewash, deliberate manipulation and traditional newspaper hoaxes. Critics complain, on the one hand, that the term, which became popular in connection with Donald Trump’s 2016 US presidential election campaign – initially referring primarily to defamatory pseudo-news about political opponents – was soon used in an inflationary manner. Not only was the term “fake news” therefore irredeemably vague (see Habgood-Coote 2019), it was also easily appropriated. Trump himself was quick to apply “fake news” not just to individual content but to entire news outlets (“the Washington Post is Fake News”). This, according to the underlying criticism, reinterpreted the term as a means of political slander and contaminated it as a descriptive category. A similar accusation is that the “fake news” label rashly strips legitimate minority opinions of their legitimacy. David Coady puts it most drastically when he writes that it is not fake news but the term “fake news” that is the problem (Coady 2019, p. 40). And yet it would be rash to assume that the “fake news” problem can be adequately and completely characterized with traditional descriptive categories. For it is not only the medium of dissemination – in this case, of course, primarily internet-based social media – that distinguishes “fake news” from familiar phenomena such as newspaper hoaxes or classic disinformation, but also the systemic character of the processes of its production and dissemination (Gelfert 2018).
Much discussed in this context is the role of adaptive algorithms that determine what information is displayed to a Facebook user in his or her “feed”, for example. Even if the danger of possible “filter bubbles” (Pariser 2011) arising through the algorithmic curation of information has possibly been overestimated, it is nevertheless a historical novelty that the supply of information and news is tailored to the individual consumer in an almost completely opaque way. Commercial “microtargeting”, which – as the Cambridge Analytica scandal of 2018 made clear – can also be used for political purposes, likewise allows “fake news” producers to achieve considerable reach. The design and layout of social media such as Facebook also contributes to this: when a post is shared by a user, his or her contacts are shown an excerpt cleansed of any contextual markers, from which it is not immediately possible to tell what kind of source it is – a private blog entry, say, or a well-researched “New York Times” article. This levels out relevant differences in a way that would take considerable effort to reproduce in the offline world. Much the same can be said of scientific communication processes: owing to the proliferation of online formats and their comparatively easy imitability, the potential for at least partially successful forgeries and hoaxes has tended to grow in recent years, and has by no means been offset by the equally expanded possibilities for verification through online research. This also applies to scientific publication culture, which continues to prove susceptible to hoaxes. Whether this susceptibility is more
pronounced in science than in other social subsystems is an open question; even from the undisputed fact of isolated successful hoaxes, no such conclusion can be drawn. At best, it can be argued that procedures such as peer review are particularly dependent on cooperative behaviour and the mutual respect of all participants. Several trends converge in the vulnerability of scholarly publishing processes: similar to the case of online news sources, barriers to access in scholarly publishing have largely been overcome through technological means. Even more so than in the case of news content, scientific publications are now primarily consumed electronically. On the one hand, this has made it possible – at least in principle – to publish scientific articles in legitimate Open Access journals, for example, without having to deal with commercial publishers. On the other hand, an industry of predatory publishing has established itself without regard for scientific standards, profiting from the general pressure to publish in science. Even those who are otherwise well versed in their sub-discipline find it difficult to keep track of all this – especially as there is a large grey area. To foist a hoax publication on a predatory journal is no great feat. But even the editors of perfectly legitimate small-scale journals can easily become targets of hoax authors, especially considering that by far the majority of them work on a voluntary basis without major staff support and have to perform a number of tasks at the same time, including selecting reviewers, deciding on publication and possibly even individual production steps. If a manuscript is submitted under a pseudonym, editors can at best check the identity of an author for plausibility. A search query on the Internet may occasionally be successful, but there can hardly be any guarantee, not least in the case of common names (“John Smith”).
The absence of a homepage on a university’s website may equally indicate that an author is fictitious or merely precariously employed. And even for fictitious authors, digital “backstories” that would withstand any plausibility check by an overtaxed editor can easily be constructed with minimal technical means – and sufficient fraudulent energy.
2.5 Instrument of Enlightenment or Calculated Breach of Trust?
“Fake News Comes to Academia” was the title of an op-ed published by journalist Jillian Kay Melchior in the Wall Street Journal on October 2, 2018. In it, the author lays out how a trio of authors – journalist Helen Pluckrose, mathematician James Lindsay, and philosopher Peter Boghossian – had spent the previous twelve months pursuing a kind of “long-term hoax”. Twenty essays, modeled on Alan Sokal’s famous example, which parodied both the jargon and the imputed ideological presuppositions of various sub-disciplines while mixing in illogical and unethical arguments, were submitted for peer review to selectively chosen (often interdisciplinary) academic journals. By the time the project made headlines – the journalistic exposure came before the hoaxers’ intended revelation – seven of the manuscripts had been accepted for publication (and some had already been published), seven were still in the review process, while six essays had already
been rejected. Of particular note in the journalistic reappraisal of the hoax was an essay published in the journal Gender, Place and Culture that declared dog parks in Portland, Oregon, to be centers of “rape culture” and attempted to establish a link between the dogs’ behavior and the gender performativity of their owners. Another essay accepted for publication, in, of all places, a feminist journal of social work, contained excerpts from Adolf Hitler’s Mein Kampf that had been stylistically modernized and altered in content by substituting “feminism” for “National Socialism” and “privileged” for “Jewish.” Had the project not been uncovered prematurely, the trio of authors would, they professed, have tried to place even more hoax essays. According to Pluckrose, Lindsay and Boghossian, the aim of their hoax project (sometimes also referred to as the “Sokal Squared” hoax) was to demonstrate that large parts of academic research on topics of social and political relevance were not truth-oriented and open-ended, but merely aimed at confirming certain ideological presuppositions. For the sub-disciplines concerned – including “(feminist) gender studies, masculinities studies, queer studies, sexuality studies, psychoanalysis, critical race theory, critical whiteness theory, fat studies, sociology, and educational philosophy” (Pluckrose et al. 2018) – the three co-authors coined the new umbrella term “grievance studies,” which is meant to suggest that these disciplines are simply concerned with making nonspecific complaints against (perceived or imagined) discrimination and trying to give them an academic underpinning.
The aim of their hoax, they said, was not to deny the existence of social phenomena such as group- and identity-specific discrimination, but to bring them back to an analysis according to the standards of rigorous scholarship: “Our recommendation begins by calling upon all major universities to begin a thorough review of these areas of study [. . .] in order to separate knowledge-producing disciplines and scholars from those generating constructivist sophistry” (ibid.). In this way, Pluckrose, Lindsay, and Boghossian, as self-professed card-carrying liberals, also wanted to avoid capture by anti-intellectualist forces of the reactionary right, which had long taken up the fight against a campus culture at American universities and colleges they deemed too progressive. That precisely such a hijacking quickly occurred following their hoax – which in turn was blamed on the trio of authors – can hardly be surprising. Without wishing to trace the debate following the unmasking of the hoax in detail, here are some general observations for the purpose of putting the “Sokal Squared” hoax project into perspective. First, it can be stated that the project would hardly have been feasible without the online tools used: from the initial research to the faking of researcher personalities to the YouTube-ready quasi-cinematic documentation of the hoax (complete with its own hashtag “#grievancestudies”), “Sokal Squared” is a product of the Internet age. Unlike Alan Sokal, who still communicated with the editors of Social Text using his full postal address, the trio of authors of “Sokal Squared” had all submitted their essays electronically using pseudonyms. Some commentators see the very use of a pseudonym and a fictitious institutional affiliation as an ethically problematic intent to deceive. For example, in a similar case, Aceil Al-Khatib and Jaime A. Teixeira da Silva argue that “[through] deceiv[ing] the journalsʼ editors with fake authorʼs name and fake institutional
affiliations, [. . .] the ethics of submission associated with submission to an academic journal was compromised” (Al-Khatib and Teixeira da Silva 2016, p. 214). As far as the mere use of a pseudonym is concerned, this verdict seems to me too harsh; not every pseudonym undermines the integrity of the scientific process. However, in at least one case, the trio of authors had submitted an essay – with his consent – under the name of an emeritus historian they were friends with. Although it could be argued that a “blind peer review” meeting scholarly standards should proceed regardless of the identity of the author of the essay under review, such a practice effectively deprives the editors of the opportunity to verify whether a submission is prima facie serious. Some objections to the actions of the “Sokal Squared” trio were based on formal allegations of academic misconduct, said to consist in not explicitly seeking the consent of editors and reviewers to participate in a human-subjects study of the integrity of academic subdisciplines. Carl Bergstrom, in a piece for The Chronicle of Higher Education, lamented that reviewers had invested time and effort without having consented to being the subject of a study and that they had been harmed as a result: “The hoax, described by its architects as a ‘reflexive ethnography,’ appears to lack IRB [= institutional review board] approval for ethnographic work with human subjects” (Bergstrom 2018). This criticism cuts both ways. On the one hand, the trio of authors may indeed be accused of not only demanding unpaid (and, given the deliberate intent to deceive, ultimately futile) labor from the peer reviewers, but also treating them – without seeking their informed consent – as mere empirical subjects of investigation.
On the other hand, it seems hardly legitimate to demand pre-approval by an independent ethics board for every performative intervention in social contexts – any more than one would expect an investigative journalist to inform his target in advance of every planned investigative move. This very tension, however, reveals a deeper dilemma of the “Sokal Squared” hoax: the scientific claim – viz., to have empirically proven a systematic distortion of the scientific process in an ostensibly “ethnographic” way – can hardly be upheld, given the deliberately accepted violation of scientific norms and procedures. What remains is ultimately a series of individual cases that at best serve illustrative purposes and for which it remains completely unclear whether they are representative of so-called “grievance studies” – especially since the latter is a novel and extremely heterogeneous category, stipulated by Pluckrose, Lindsay, and Boghossian specifically for the purposes of their hoax. There is little evidence that malfunctions of the peer review process occur statistically more frequently in the sub-disciplines lumped together in this way than in other branches of science, nor has it been demonstrated that the selectively identified misjudgments in the review process are of ideological origin. A further accusation has been that the trio of authors dishonestly faked scientific data and results. Some of the papers submitted by Pluckrose, Lindsay and Boghossian claimed to be based on extensive empirical data – such as long periods of participant observation or extensive surveys of subjects. These data, of course, had never existed; they had all been fabricated. While Sokal had limited himself to the elaboration of a theoretical pseudo-argument and in this way was able to demonstrate the inability of the
Social Text editors to distinguish between coherent arguments and pseudo-scientific gibberish, the “Sokal Squared” authors quoted extensively from alleged interviews with subjects, the validity and authenticity of which could in no way be verified by outsiders – and thus not by the reviewers either, who had to take it on trust that the interviews had actually taken place. But this calls the validity of the “Sokal Squared” hoax into question. Whereas Sokal could indeed claim to have shown that practically anyone could produce a postmodern essay evidently suitable for publication simply by adopting a certain jargon and going through the corresponding stylistic motions, Pluckrose, Lindsay and Boghossian merely illustrated the – well-known – fact that the peer-review process is not always capable of unmasking defectively produced studies as such. This might speak in favor of radically rethinking scientific peer review procedures and, for example, exchanging the anonymity of peer review for an open format in the sense of “open science”. Within the framework of existing scientific practice, however – especially in the case of those hoax essays that are based on alleged primary data (e.g. subject interviews) – the reviewers cannot fully be blamed. This is because the anonymous peer review process can, by its very nature, only address data documented in the submitted manuscript; queries or direct access to raw data are not typically part of currently established peer review procedures. If the data cited in a manuscript seem unexpected or far-fetched, this may be a warning sign in the exact sciences – as the case of Jan-Hendrik Schön illustrates; in the human sciences, on the other hand – for instance with some of the (invented) interview statements by the dog park visitors – it could legitimately be taken as a sign of how diverse and wide-ranging human attitudes and behaviors are.
In general, as Bergstrom rightly points out, the peer review process cannot be designed to stop determined fraudsters and those who merely simulate scientific results by imitating all the methodological approaches customary in their respective discipline: “Peer review is simply not designed to detect fraud. It does not need to be. Fraud is uncovered in due course, and severe professional consequences deter almost all such behavior” (Bergstrom 2018). In their professionally produced YouTube companion video (Nayna 2018), Pluckrose, Lindsay, and Boghossian explicitly describe how their first attempts at hoaxing failed quite miserably: “Our first papers were really only suited to test the hypothesis that we could penetrate their leading journals with poorly researched hoax papers. That wasnʼt the case, and we were wrong for thinking that we might be able to.” (Quoted from the YouTube transcript)
Then they changed their strategy: “So we walked back from the hoaxing and began to engage with the existing scholarship in these fields more deeply. This led us to learn a lot more about the inner workings of grievance studies” (Ibid.). As the trio of authors became familiar with the methods and internal structure of the criticized disciplines and were eventually able to write convincing (albeit fabricated) studies that formally conformed to disciplinary conventions, their success rate also increased. Yet it is precisely the large
number of submissions, with which the authors presumably wanted to increase the significance of their hoax project (which, recall, was originally designed to be even more extensive), that – it may be argued – reduces the overall effect of their undertaking: after all, two thirds of the submitted manuscripts were rejected as a result of the peer review process or were sent back to the authors with extensive requests for revision (“revise and resubmit”). Even if it would certainly be going too far to claim that the “Sokal Squared” hoax merely reflects the immanent limits of what the peer review process can achieve, it seems clear that the hoaxers’ ambitious rhetoric of enlightenment (“this is now a plea to all the progressives and minority groups”. . .“these people don’t speak for us”; ibid.) fails to stand up to closer scrutiny.
2.6 Outlook
Are scientific hoaxes a legitimate instrument of enlightenment and (scientific) criticism? A blanket answer to this question is difficult. I would venture this much: they can be, but their authors should be aware of the tough trade-offs that arise in each specific case. On the one hand, hoaxes place themselves outside what, following Thomas Kuhn, one might call the everyday practice of “normal science” aimed at problem-solving and incremental increases in knowledge. Precisely because they do not just acknowledge their own fallibility, but anticipate it in practice – i.e., they are intended by their authors, from the moment of their production, to eventually be exposed as false – they do not share the epistemic goal of science itself. The superficial criticism that scientific hoaxes are no more than simple cases of scientific misconduct, whether due to the concealment of the identity of their authors or due to the pretence of false facts, therefore falls short of the mark. Especially when the overriding aim is to criticize the scientific community (or parts of it), it is hardly possible to judge a hoaxer by the criteria of professional scientific ethics. Of course, the problem remains that this crucial difference usually stays hidden from view until a hoax is uncovered. And it is precisely here that the question arises as to whether reviewers and editors, who in good faith have devoted time and effort to constructively reviewing a hoax essay, are not being treated unfairly if they are later exposed as having been too uncritical. Even more serious, however, is the concern that hoaxes, or the reaction to them, may exacerbate pre-existing biases in the peer review process, including a marked intolerance of genuinely new and surprising insights and contexts.
If counterintuitive research results or “maverick ideas” were to be placed under general suspicion merely on the assumption that they could potentially be fakes or hoaxes, this would be a loss for science as a whole – a loss that could hardly be compensated for by any merely perceived gains in methodological rigor.
References

Al-Khatib A, Teixeira da Silva JA (2016) Stings, hoaxes and irony breach the trust inherent in scientific publishing. Publ Res Q 32(3):208–219
Bergstrom CT (2018) A hollow exercise in mean-spirited mockery (Chronicle Forum “What the ‘Grievance Studies’ Hoax Means”). In: The Chronicle of Higher Education. https://www.chronicle.com/article/What-the-Grievance/244753. Accessed 10 May 2020
Bloor D (1976/1991) Knowledge and social imagery. University of Chicago Press, Chicago
Borchers D (2000) Online. In: DIE ZEIT No. 50/2000. https://www.zeit.de/2000/50/Online. Accessed 10 May 2020
Coady D (2019) The trouble with ‘fake news’. Soc Epistemol Rev Reply Collective 8(10):40–52
Darwin C (1958) In: Barlow N (ed) The autobiography of Charles Darwin, 1809–1882. Collins, London
Di Trocchio F (1994) Der große Schwindel. Betrug und Fälschung in der Wissenschaft. Campus, Frankfurt/New York
Droesser C (2001) Das Virus ist die Nachricht. In: DIE ZEIT No. 23/2001. https://www.zeit.de/2001/23/Das_Virus_ist_die_Nachricht. Accessed 10 May 2020
Fredal J (2014) The perennial pleasures of the hoax. Philos Rhetor 47(1):73–97
Gelfert A (2017) “Keine gewöhnlichere, nützlichere und selbst für das menschliche Leben notwendigere Schlussart”: Ein neues Bild von David Hume als Theoretiker menschlichen Zeugnisses. In: Däumer M, Kalisky A, Schlie H (eds) Über Zeugen: Szenarien von Zeugenschaft und ihre Akteure. Wilhelm Fink, Paderborn, pp 19–211
Gelfert A (2018) Fake news: a definition. Informal Logic 38(1):84–117
Habgood-Coote J (2019) Stop talking about fake news! Inquiry 62(9–10):1033–1065
Hacking I (1999) The social construction of what? Harvard University Press, Cambridge, MA
Hume D (1932) In: Thomson Greig JY (ed) The letters of David Hume, vol 1. Oxford University Press, Oxford
Malcolm JP (1808) Manners and customs of London during the eighteenth century. Longman, Hurst, Rees, and Orme, London
Mason TM (1843) Remarks on the Ellipsoidal Balloon, propelled by the Archimedean Screw, described as the New Aerial Machine. Howlett and Son, London
Meichsner I (2002) Heftiger Durchzug im Elfenbeinturm. In: Kölner Stadt-Anzeiger. https://www.ksta.de/heftiger-durchzug-im-elfenbeinturm-14253876. Accessed 10 May 2020
Melchior JK (2018) Fake news comes to academia. Wall Street Journal. https://www.wsj.com/articles/fake-news-comes-to-academia-1538520950. Accessed 10 May 2020
Nayna M (2018) Academics expose corruption in grievance studies, YouTube video. https://www.youtube.com/watch?v=kVk9a5Jcd1k. Accessed 10 May 2020
Pariser E (2011) The filter bubble: what the Internet is hiding from you. Penguin, New York
Pluckrose H, Lindsay J, Boghossian P (2018) Academic grievance studies and the corruption of scholarship. Areo Magazine. https://areomagazine.com/2018/10/02/academic-grievance-studies-and-the-corruption-of-scholarship. Accessed 10 May 2020
Reich ES (2009) Plastic fantastic. How the biggest fraud in physics shook the scientific world. Palgrave Macmillan, New York
Schwenck K (1838) Wörterbuch der deutschen Sprache in Beziehung auf Abstammung und Begriffsbildung, 3rd edn. Sauerländer, Frankfurt am Main
Seer I (2002) Nobelpreisverdächtig. In: FU:Nachrichten, No. 01–02/2002. https://userpage.fu-berlin.de/~fupresse/FUN/2002/01-02/leute/leute4.html. Accessed 10 May 2020
Sokal A (1998) Transgressing the boundaries: an afterword. In: Sokal A, Bricmont J (eds) Intellectual impostures: postmodern philosophersʼ abuse of science. Profile Books, London, pp 248–258
Sokal A, Bricmont J (1998) Intellectual impostures: postmodern philosophersʼ abuse of science. Profile Books, London
Sturrock J (1998) Le pauvre Sokal. London Rev Books 14(20):8–9
Walsh L (2006) Sins against science. The scientific media hoaxes of Poe, Twain, and others. SUNY Press, Albany
Axel Gelfert, Prof. Dr., wrote the article “Of Fakes and Frauds: Can Scientific ‘Hoaxes’ Be a Legitimate Tool of Inquiry?”. He is Professor of Theoretical Philosophy at the Institute for Philosophy, Literature, History of Science and Technology at the Technical University of Berlin and works in the fields of social epistemology, philosophy of science, and history of philosophy.
3 Fiction, Fake and Fact: A Set-Theoretic Modeling Together with a Discussion of Represented Worlds

Peter Klimczak
Abstract
Following Aristotle, factuality, fictionality and fake are differentiated exclusively on the level of content. The concrete starting point for this is Michael Titzmann’s proposal to understand represented worlds as sets of ordered propositions. Accordingly, the necessary set-theoretical foundations are presented step by step and, on the basis of three exemplary represented worlds, a one-to-one differentiation as well as an exact definition of factuality, fictionality and fake is undertaken. On this basis, the limits of a one-to-one model are discussed and a possible modification is shown. Independently of this, the classification of a represented world as factual, fictional or fake depends crucially on which propositions are taken to be true in the real world, which is why, after a consideration of philosophical theories of truth, the real world is modelled as a set of sufficiently proven represented worlds. Subsequently, it is shown that indeterminacy with respect to propositions must be modelled as well, which is achieved by introducing trivalent truth values. The paper concludes with a discussion of the case-specificity and subjectivity of reality and their implications for the model presented here, showing that, thanks to its relational approach, the model remains productive even under the assumption of subjective or “alternative” realities.
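The core idea sketched in this abstract – classifying a represented world by comparing its propositions against a real world modelled with trivalent truth values – can be illustrated with a minimal code sketch. Everything in it is an assumption made for illustration only: the names (TV, truth_value, classify), the sample propositions, and in particular the simple classification rule (fake if some proposition is false in the real world, fictional if some proposition is merely undetermined, factual if all are true). The chapter itself develops the actual set-theoretic definitions.

```python
from enum import Enum

class TV(Enum):
    """Trivalent truth values, as proposed in the abstract."""
    TRUE = 1
    FALSE = 2
    UNDETERMINED = 3

def truth_value(real_world: dict, proposition: str) -> TV:
    """The real world is modelled here as a mapping from propositions to
    trivalent truth values; anything it does not decide is UNDETERMINED."""
    return real_world.get(proposition, TV.UNDETERMINED)

def classify(world: set, real_world: dict) -> str:
    """Classify a represented world (a set of propositions) relative to the
    real world. The rule below is an illustrative assumption, not the
    chapter's own definition."""
    values = {truth_value(real_world, p) for p in world}
    if TV.FALSE in values:
        return "fake"        # contradicts what is proven about reality
    if TV.UNDETERMINED in values:
        return "fictional"   # compatible with reality, but not fully provable
    return "factual"         # every proposition is true in the real world

# A toy "real world": a set of sufficiently proven (or refuted) propositions.
real = {"Rome existed": TV.TRUE, "Atlantis existed": TV.FALSE}

print(classify({"Rome existed"}, real))                            # factual
print(classify({"Rome existed", "Sherlock Holmes exists"}, real))  # fictional
print(classify({"Atlantis existed"}, real))                        # fake
```

The UNDETERMINED value corresponds to the indeterminacy the abstract addresses with trivalence: propositions that the real world neither sufficiently proves nor refutes, which is what separates the fictional from the fake in this toy rule.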
P. Klimczak (✉) Chair of Applied Media Studies, Brandenburg University of Technology, Cottbus, Germany e-mail: [email protected] © The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023 P. Klimczak, T. Zoglauer (eds.), Truth and Fake in the Post-Factual Digital Age, https://doi.org/10.1007/978-3-658-40406-2_3
3.1 Form and Content. Or: On the Distinction of Basic Categories of Media Studies
One of the fundamental distinctions in literary and media studies is that between factual and fictional speech. Matías Martínez and Michael Scheffel make the following attribution, which reflects the current consensus: factual speech claims, in non-poetic form, to report real events (cf. Martínez and Scheffel 2003, p. 10). Cited as examples are a newspaper report on a specific incident or the biography of a historical person. Fictional speech, on the other hand, tells of clearly invented events in poetic form (cf. Martínez and Scheffel 2003, p. 10), e.g. fairy tales, fables, novels, etc. Both authors treat the “non-poetic narration of invented events” (Martínez and Scheffel 2003, p. 10) – i.e. the lie, deception or false report – as a special case of factual speech. Traditionally, one speaks here of ‘fingierter Rede’ (fictitious speech); in modern German, with its preference for anglicisms, one would probably rather speak of fake.1 For Martínez and Scheffel, the decisive distinguishing feature is the difference between “poetic” and “non-poetic” speech. For them, the distinction between “real” and “non-real”, or “invented” and “not invented”, serves only to subdivide factual speech into true factual and false factual speech (“fake”). The first distinction, between “poetic” and “non-poetic”, is located on the level of form or, following the linguistic idiom of the French structuralists, on the level of ‘discours’. The second distinction, between “real” and “non-real”, concerns content or, again in structuralist terms, ‘histoire’ (cf. Titzmann 2003, pp. 3028–3103). Martínez and Scheffel are not alone in emphasizing ‘discours’ for the distinction between fictional and factual speech. One can speak of a consensus in this respect, not only in literary studies, but also in film and media studies (cf. Keppler 2006).
Even though form in audiovisual communications differs from that in purely written ones, here too it is form that is held responsible for the classification as factual or fictional. Differentiating on the basis of form, however, comes with a crucial problem: a clear classification is not possible, as two examples will show. In the long history of the Nobel Prize in Literature, for example, there have been several awards for non-fiction. The second Nobel Prize in Literature was awarded to Theodor Mommsen in 1902, with special reference to his monumental work, The History of Rome. The Nobel committee described him as “the greatest living master of the art of historical writing”.2 Since then, the Nobel Prize in Literature has also been awarded for philosophical (Bertrand Russell in 1950) as well as historical-biographical and political texts (Winston Churchill in 1953). So there are indeed “non-fiction texts” that meet the formal criteria for literature. According to the model outlined above, these would, at first glance, have to be classified as works of fiction. At the same time, there are entire genres that present fiction in the way facts are presented, e.g. mockumentaries: a mockumentary is a type of film or television program that depicts fictional events but is presented as a documentary (cf. High 2008, pp. 204–216). The basic distinction between fact, fiction and fake on the basis of form is thus quite problematic. This was already pointed out by the first media theorist of the West, Aristotle. For him, it is not the linguistic form but the truth or falsity of what is said that is the decisive criterion (cf. also Martínez and Scheffel 2003, pp. 10–16): The difference between a historian and a poet is not that one writes in prose and the other in verse – indeed the writings of Herodotus could be put into verse and yet would still be a kind of history, whether written in metre or not. The real difference is this, that one tells what happened and the other what might happen. (Aristotle 1932, 1451a-b)

1 Many things can be subsumed under the term fake, e.g. false reports (fake news), which require a corresponding intention as well as (or connected to this) a corresponding form of presentation. Here, a purely content-based definition is intended, i.e. one located exclusively on the level of ‘histoire’ and thus not including the level of discourse and the (assumed or explicit) sender intentions (with regard to the common fake news descriptors, cf. e.g. Antos 2017). Given the exclusively content-based definition of fake, counterfactual storytelling would also fall under it. This indiscriminateness can be criticized, but the point here is not to differentiate what can be called fake or fictitious speech in the broadest sense, but first of all to distinguish unambiguously between factual, fictional and fictitious speech.
In the following, I will differentiate factuality, fictionality and fake exclusively on the level of content, of ‘histoire’. Nevertheless, this will not be directly connected to Aristotle. That is to say: I will not start from the ontological difference (actuality vs. potentiality) between “what happened” and “what might happen”, which Aristotle rightly touches on. Rather, the starting point will be the notion of ‘histoire’ as a “represented world”, which is (still) current in the discourse of literature, film and media studies. Michael Titzmann formulates an interesting approach in this respect: A represented world could be represented (whether selectively and abstractly or completely) as a set of propositions ordered on the one hand according to the extent of their domains of validity, and on the other hand according to their logical, temporal, causal sequence. (Titzmann 2003, p. 3071)
What Titzmann proposes is nothing less than a set-theoretic modeling of represented worlds. The fact that he does not carry out this proposal himself and, as far as I know, no one else does either, does not mean that it is not feasible. However, some set-theoretical explanations are needed.
2 https://www.nobelprize.org/prizes/literature/1902/summary/
3.2 Sets and Worlds. Or: On the Distinction of Fact, Fiction and Fake by Means of Set Theory
As a case study, assume the following universe of discourse with three represented worlds:

(1) The statement by Sean Spicer, President Donald Trump’s press secretary, from January 21, 2017, in which he comments on the number of attendees at Trump’s inauguration the day before: first, he claims that there were more people at Donald Trump’s inauguration than at Barack Obama’s, and then immediately states that it was the largest audience ever at a swearing-in ceremony.

(2) The January 20, 2017 online report by the British newspaper The Daily Telegraph, which cites both photos and Washington Metro transportation figures to show that Donald Trump’s inauguration not only had fewer attendees than Barack Obama’s, but even fewer than George W. Bush’s.

(3) The seventeenth episode of the eleventh season of The Simpsons. First aired on March 19, 2000, the episode, titled “Bart to the Future”, depicts a future in which 38-year-old Lisa Simpson has been elected President of the United States (The Simpsons 2000). In it, she faces the challenge of rebuilding a country that has been run down by her predecessor, Donald Trump.

About Set Theory

If the elements of a set are known, the set can be specified in so-called list notation: the names of the elements are written next to each other, separated by commas and surrounded by curly brackets. The order in which the elements of a set are listed is irrelevant. If a, b and c are given as objects, the set containing b and c as elements can be specified as follows:
{b, c}, {c, b}.

Given an object and a set, the object either is or is not an element of the set; there is no such thing as being a half or a multiple element. By means of the element sign, the statement can be made that a given object is an element of a given set3:

c ∈ {b, c}.
3 Read: “small-c is an element of the set with elements small-b and small-c”.
Analogously, the fact that an object is not an element of a set is symbolized by a diagonally crossed-out element sign4:

a ∉ {b, c}.

To make it easier to refer to a set, a name can be assigned to it, generally a capital letter of the Latin alphabet5:

X = {b, c}.
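These elementary operations can also be replayed mechanically. The following sketch in Python (all variable names are mine, and the objects a, b, c are encoded as strings purely for illustration) mirrors list notation, the element relation, and the naming of sets:

```python
# A set is unordered: listing the elements in a different order
# yields the same set.
s1 = {"b", "c"}
s2 = {"c", "b"}
print(s1 == s2)       # True: the order of listing is irrelevant

# The element relation: an object either is or is not an element.
print("c" in s1)      # True: c ∈ {b, c}
print("a" not in s1)  # True: a ∉ {b, c}

# Naming a set with a capital letter
X = {"b", "c"}
print(X == s1)        # True
```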
Back to the Case Study

Only singular, simple, predicate-logical statements are allowed as objects, made up of the individuals Barack Obama, Donald Trump, Lisa Simpson and the predicates “x is/was president of the USA” and “the number of visitors to x’s inauguration exceeded that of his predecessor”:

o := Barack Obama,
t := Donald Trump,
s := Lisa Simpson,
Px := “x is/was president of the USA”,
Ax := “the number of people attending x’s inauguration exceeded that of his predecessor”.

Since the specification of singular, simple statements allows only negated or non-negated predicate-subject combinations, the basic set Ω of the universe of discourse has only 12 objects (= 3 individuals × 2 predicates × 2 truth values):

Ω = {Po, Pt, Ps, Ao, At, As, ¬Po, ¬Pt, ¬Ps, ¬Ao, ¬At, ¬As}.

Following this, the sets corresponding to each of the three represented worlds can be specified. The represented world of Sean Spicer’s press statement, DSS, then contains the statements Po,6 Pt and At as elements:

DSS = {Po, Pt, At}.

The world represented in The Daily Telegraph report, on the other hand, contains four elements: Po, Pt, Ao and ¬At7:
4 Read: “Small-a is not an element of the set containing the elements small-b and small-c”.
5 Read: “Large-X is equal to the set with elements small-b and small-c”.
6 Read: “P of o”.
7 Read: “non-A of t”.
DTDT = {Po, Pt, Ao, ¬At}.

The represented world of the Simpsons episode “Bart to the Future”, DTS, on the other hand, features only the two elements Pt and Ps in our universe of discourse:

DTS = {Pt, Ps}.

In addition to these three represented worlds, we can also specify the set whose elements “are” true in the real world.8 This shall be those objects from the basic set Ω that we intuitively or conventionally assume to be true – without first problematizing what truth means. As true in the real world, and thus as elements of the set W, one will (probably) assume the four statements Po, Pt, Ao and ¬At:

W = {Po, Pt, Ao, ¬At}.

Further, due to the form of distribution, we assume that the represented world of The Daily Telegraph is factual and the represented world of The Simpsons cartoon series is fictional. The world portrayed by Sean Spicer, on the other hand, is assumed to be fake due to the lack of trustworthiness of the Trump administration. The question is then whether set relations between the represented worlds and the real world can be formulated that allow the three represented worlds to be distinguished accordingly.

Back to Set Theory

With respect to the relation between sets, two kinds of statement can be made: one of identity and one of inclusion. Only the latter, the inclusion or subset relation, is relevant for us. Here the following holds: if all elements of large-X are elements of large-Y, then large-X is called a subset of large-Y. With the help of the universal quantifier (∀), the conditional (→) and the already familiar element sign, the subset or inclusion relation (⊂) can be represented as follows9:
X ⊂ Y :↔ ∀x (x ∈ X → x ∈ Y).

The corresponding negation of the subset relation is symbolized by means of a diagonally crossed-out subset sign and is given if at least one element of large-X is not an element of
8 For the determination of what is true, see the discussion in Sect. 3.4. For the time being, an intuitive determination of truth will be assumed.
9 Read: “X is a subset of Y precisely if for all x the following holds: if x is an element of X, then x is an element of Y”.
large-Y, which in turn requires the existential or at-least-one quantifier (∃) and the conjunction sign (∧)10:

X ⊄ Y :↔ ∃x (x ∈ X ∧ x ∉ Y).
Back to the Case Study

Knowing the elements of the four given sets, we can determine whether subset relations exist between the three represented worlds and the real world. Between the set that stands for the represented world of The Daily Telegraph and the set that stands for the real world, there is not only one, but a mutual subset relation: both sets have the same four statements as elements. What is crucial, however, is that the aforementioned represented world is a subset of the real world:
DTDT ⊂ W, i.e. {Po, Pt, Ao, ¬At} ⊂ {Po, Pt, Ao, ¬At}.

However, there is no subset relation between Sean Spicer’s represented world and the real world. The reason is the statement, contained in the represented world of Sean Spicer’s press statement, that the number of visitors to Donald Trump’s inauguration exceeded that of his predecessor, i.e. At. The set that describes the real world does not contain this statement; on the contrary, it has the negation of this statement, ¬At, as its element. In symbols:

DSS ⊄ W,
i.e. {Po, Pt, At} ⊄ {Po, Pt, Ao, ¬At}.

The represented world (set in the future) of the 243rd episode of “The Simpsons” likewise stands in no subset relation to the real world: the set representing it contains the statement that Lisa Simpson is President of the United States. A corresponding Ps, however – while not negated – is not contained in the set symbolizing the real world:

DTS ⊄ W,
10 Read: “X is not a subset of Y precisely if for at least one x the following holds: x is an element of X and x is not an element of Y”.
i.e. {Pt, Ps} ⊄ {Po, Pt, Ao, ¬At}.
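The three subset checks just carried out can be verified mechanically. In this sketch the objects of Ω are encoded as strings, with a leading “¬” marking negated statements (an ad-hoc encoding chosen for illustration, not part of the original notation):

```python
# The represented worlds and the real world as sets of statement-strings.
D_SS = {"Po", "Pt", "At"}          # Sean Spicer's press statement
D_TDT = {"Po", "Pt", "Ao", "¬At"}  # The Daily Telegraph report
D_TS = {"Pt", "Ps"}                # The Simpsons episode
W = {"Po", "Pt", "Ao", "¬At"}      # the real world

# Python's <= on sets is the subset relation.
print(D_TDT <= W)  # True (here even mutually: W <= D_TDT also holds)
print(D_SS <= W)   # False: At is an element of D_SS but not of W
print(D_TS <= W)   # False: Ps is an element of D_TS but not of W
```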
Interim Conclusion

By means of the subset relation alone, no distinction between the represented worlds of Sean Spicer and “The Simpsons” is possible. However, if we look at the reasons why the subset relation fails for Sean Spicer’s represented world and for the represented world of “The Simpsons”, we still notice a difference: while the statement that Lisa Simpson is president is simply not included in the set representing the real world, the statement that the number of people attending Donald Trump’s inauguration exceeded Barack Obama’s is not only not included in the real world – its negation is. A possible way to differentiate between the represented worlds of Sean Spicer and “The Simpsons” after all might therefore be offered by a distinction between propositional content and propositional value. But this requires a different representation of the objects.

Back to Set Theory

A fundamental property of every set considered so far was that no order was fixed for its elements, which is why one can also speak of unordered sets. If, on the other hand, we want to build more complex structures, we need ordered sets, i.e. sets for whose elements an order is fixed. The notation for ordered sets differs from that for unordered sets in that their elements (we then also speak of components) are placed between round brackets instead of curly brackets:
(a, b, c, d).

Ordered pairs are of fundamental importance. These are ordered sets with exactly two components11:

(a, b).
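The characteristic property of ordered pairs – two pairs are equal exactly if they agree component by component, in order – can itself be checked in a sketch. One standard construction, mentioned in the footnote below, is Kuratowski’s encoding of (a, b) as {{a}, {a, b}} (the function name is mine):

```python
def kuratowski(a, b):
    # Encodes the ordered pair (a, b) as the unordered set {{a}, {a, b}}.
    # frozenset is used because Python sets cannot contain mutable sets.
    return frozenset({frozenset({a}), frozenset({a, b})})

# (a, b) = (c, d) exactly if a = c and b = d:
print(kuratowski(1, 2) == kuratowski(1, 2))  # True
print(kuratowski(1, 2) == kuratowski(2, 1))  # False: order matters
```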
Returning to the Case Study

The concept of ordered pairs makes it possible to distinguish propositional content from propositional value. Instead of assuming non-negated and negated subject-predicate combinations as before, only ordered pairs are now allowed as elements of the basic set Ω′. The first component is to be drawn from the set of positive,

11 There are different definitions of the relation between ordered and unordered sets. The most common is the definition à la Casimir Kuratowski: an ordered pair with a as the first component and b as the second component is equal to an unordered set that contains as one element the unordered set with the element a and as the other element the unordered set with the elements a and b: (a, b) := {{a}, {a, b}} (cf. Kuratowski 1921, p. 171).
i.e. non-negated, subject-predicate combinations; specifically, from the basic set G, the set of propositional contents, formulated as positive propositions:
G = {Po, Pt, Ps, Ao, At, As}.

The second component is to be taken from the basic set V, the set of truth values, 0 for false and 1 for true:

V = {0, 1}.

The new basic set Ω′ thus contains, like the old basic set Ω, twelve objects, but these are now twelve ordered pairs.12

Ω′ = {(Po, 1), (Pt, 1), (Ps, 1), (Ao, 1), (At, 1), (As, 1), (Po, 0), (Pt, 0), (Ps, 0), (Ao, 0), (At, 0), (As, 0)}.

There is a 1-to-1 relationship between the twelve objects of the old basic set Ω and those of the new basic set Ω′. Accordingly, the sets representing the three represented worlds and the real world can be adapted to the new notation:

DSS′ = {(Po, 1), (Pt, 1), (At, 1)},
DTDT′ = {(Po, 1), (Pt, 1), (Ao, 1), (At, 0)},
DTS′ = {(Pt, 1), (Ps, 1)},
W′ = {(Po, 1), (Pt, 1), (Ao, 1), (At, 0)}.

Despite the new notation, the four sets do not constitute a new specification, since both notation systems are semantically equivalent (due to the aforementioned 1-to-1 relation). Accordingly, it is not surprising that when determining the subset relations of the represented worlds and the real world, one obtains the same result as before: (1) There continues to be a (reciprocal) subset relation between the represented world of The Daily Telegraph and the real world. (2) Between the represented world of Sean Spicer and the real world there is, just as before, no subset relation. (3) And the represented world of “The Simpsons” is also not a subset of the real world, despite the new modeling. Consequently, it is not sufficient to
12 Thus Ω′ is the Cartesian product of G and V (G × V), which is equal to the set of all ordered pairs with x as the first component and y as the second component such that x is an element of G and y is an element of V. In symbolic notation: Ω′ = G × V = {(x, y) | x ∈ G, y ∈ V}.
merely distinguish propositional content and propositional value in the notation; a way must also be found to treat the two differently when establishing the subset relation.

Back to Set Theory

If Z is a set containing the ordered pairs (a, d) and (b, e) as elements,
Z = {(a, d), (b, e)},

then the set whose elements are all the first components of the ordered pairs in Z is called the predomain of Z (Pre Z):

Pre Z = {a, b}.

Formally, the predomain of Z can be determined by assuming that the first components of the ordered pairs that are elements of Z are elements of a set X, e.g.

X = {a, b, c}.

The second components of the ordered pairs that are elements of Z, on the other hand, are elements of a set Y, e.g.

Y = {d, e, f}.

The predomain of Z is then the set of all x that are elements of X and occur at least once as the first component of an ordered pair that is an element of Z. The elements of X in our case are a, b and c, but only a and b are at least once assigned an element from Y as the second component (of the ordered pairs that are elements of Z), which is why the predomain contains only a and b13:

Pre Z = {x ∈ X | ∃y (x, y) ∈ Z} = {a, b}.
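The predomain operation can be written as a one-line comprehension over the first components (a sketch; the function name pre is mine):

```python
def pre(Z):
    # Pre Z = {x | ∃y (x, y) ∈ Z}: the set of all first components
    # of the ordered pairs that are elements of Z.
    return {x for (x, y) in Z}

Z = {("a", "d"), ("b", "e")}
print(pre(Z) == {"a", "b"})  # True
```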
Returning to the Case Study

If we determine the predomains of the three represented worlds as well as of the real world, we get:
13 For the post-domain of Z (“Post Z”) – the set whose elements are all second components of the ordered pairs in Z – the analogous holds, i.e. Post Z = {y ∈ Y | ∃x (x, y) ∈ Z}.
Pre DSS′ = {x ∈ G | ∃y (x, y) ∈ DSS′} = {Po, Pt, At},
Pre DTDT′ = {x ∈ G | ∃y (x, y) ∈ DTDT′} = {Po, Pt, Ao, At},
Pre DTS′ = {x ∈ G | ∃y (x, y) ∈ DTS′} = {Pt, Ps},
Pre W′ = {x ∈ G | ∃y (x, y) ∈ W′} = {Po, Pt, Ao, At}.

If, as originally intended, we ask about the respective subset relation between represented world and real world, now in terms of the respective predomains, we arrive at the following four statements:

Pre DTS′ ⊄ Pre W′, i.e. {Pt, Ps} ⊄ {Po, Pt, Ao, At},
Pre DSS′ ⊂ Pre W′, i.e. {Po, Pt, At} ⊂ {Po, Pt, Ao, At},
Pre DTDT′ ⊂ Pre W′, i.e. {Po, Pt, Ao, At} ⊂ {Po, Pt, Ao, At},
Pre W′ ⊂ Pre DTDT′, i.e. {Po, Pt, Ao, At} ⊂ {Po, Pt, Ao, At}.

While there is no subset relation between the predomain of the represented world of “The Simpsons” and the predomain of the real world, there is indeed a subset relation between the predomain of the represented world of Sean Spicer’s press statement and the predomain of the real world. And there is even a reciprocal subset relation between the predomain of the represented world of The Daily Telegraph and the predomain of the real world. A differentiation of the three worlds by means of set relations thus seems to be given, but appearances are deceptive. The mutual subset relation between the predomain of the represented world of “The Daily Telegraph” and the predomain of the real world is due solely to the limited cardinality of the basic set Ω′. For example, if one were to assume Bill Clinton as merely another individual of the universe of discourse, there would no longer be a reciprocal subset relation between the two sets: for while the presidency of the individual Bill Clinton would be an element of the real world, it would not be an element of the set that stands for the represented world of “The Daily Telegraph”. The online report of “The Daily Telegraph” makes no statement concerning the person Bill Clinton, and thus none as to whether he was American president or not.
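The four predomain comparisons just listed can be verified directly once the worlds are re-encoded as sets of (propositional content, value) pairs (a sketch; pre is a hypothetical helper computing the set of first components):

```python
def pre(Z):
    # the predomain: the set of all first components of the pairs in Z
    return {x for (x, y) in Z}

# The four worlds as sets of (propositional content, truth value) pairs.
D_SS = {("Po", 1), ("Pt", 1), ("At", 1)}
D_TDT = {("Po", 1), ("Pt", 1), ("Ao", 1), ("At", 0)}
D_TS = {("Pt", 1), ("Ps", 1)}
W = {("Po", 1), ("Pt", 1), ("Ao", 1), ("At", 0)}

print(pre(D_TS) <= pre(W))   # False: Ps has no counterpart in W
print(pre(D_SS) <= pre(W))   # True: only the value of At differs
print(pre(D_TDT) <= pre(W))  # True
print(pre(W) <= pre(D_TDT))  # True: the relation is even mutual
```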
Thus, if one assumes a universe of discourse enriched only by the individual Bill Clinton, the differentiation of the three represented worlds by this means is no longer given. However, a differentiation of the three represented worlds solely by means of the subset relation of their predomains to the predomain of the real world is not necessary. It is only decisive that in this way the represented world of “The Simpsons” can be differentiated from the other two represented worlds: only the predomain of the represented world of “The Simpsons” is not
characterized by any subset relation to the predomain of the real world. The differentiation of the represented worlds of Sean Spicer and of “The Daily Telegraph”, both of which are characterized by a subset relation of their predomains to the predomain of the real world, is possible by means of the previous step: one no longer asks about the subset relation between the predomains of the represented worlds and the predomain of the real world, but – as already done – about the subset relations of the sets representing the represented worlds and the real world as such. If, against this background, one asks about the subset relation of the two represented worlds to the real world, one obtains the following result:

DSS′ ⊄ W′,
DTDT′ ⊂ W′.

Since the set representing the represented world of “The Daily Telegraph” has the same elements as the set representing the real world, a corresponding subset relation is given. In the case of the set representing the represented world of Sean Spicer’s press statement, however, this is not the case: the set representing his represented world contains as an element the ordered pair that has the propositional content At as its first component, but the propositional value 1 as its second. Such a pair – and sets, via which ordered pairs are defined, are determined by their elements alone – is not an element of the set representing the real world, which contains (At, 0) instead. Thus, not every element of the set that represents the represented world of Sean Spicer is also an element of the set that represents the real world, so there is no subset relation. In this way, a differentiation between the represented worlds of Sean Spicer and “The Daily Telegraph” is given.
Conclusion

Based on the determination of the subset relations, both with respect to the sets representing the represented worlds and the real world and with respect to their predomains, the three represented worlds can be precisely differentiated in set-theoretic terms:
(Pre DTDT′ ⊂ Pre W′) ∧ (DTDT′ ⊂ W′) ↔ DTDT′,
(Pre DSS′ ⊂ Pre W′) ∧ (DSS′ ⊄ W′) ↔ DSS′,
(Pre DTS′ ⊄ Pre W′) ∧ (DTS′ ⊄ W′) ↔ DTS′.

Building on this set-theoretic distinction of the three represented worlds and their assumed quality as factual, fictional and fake,
(Pre DTDT′ ⊂ Pre W′) ∧ (DTDT′ ⊂ W′) ↔ DTDT′ is factual,
(Pre DSS′ ⊂ Pre W′) ∧ (DSS′ ⊄ W′) ↔ DSS′ is a fake,
(Pre DTS′ ⊄ Pre W′) ∧ (DTS′ ⊄ W′) ↔ DTS′ is fictional,

corresponding definitions of factuality, fictionality and fake are possible14:

(1) DX is factual :↔ (Pre DX ⊂ Pre W) ∧ (DX ⊂ W). A represented world is factual if it contains only propositional contents that occur in the real world (so that a subset relation between the predomains of the represented world and the real world is given) and if all propositional values of the represented world correspond to those of the real world (so that a subset relation between the represented world and the real world is given).

(2) DX is a fake :↔ (Pre DX ⊂ Pre W) ∧ (DX ⊄ W). A represented world is a fake if, firstly (as in the case of factual represented worlds), it contains only propositional contents that occur in the real world and, secondly (in contrast to a factual represented world), at least one of its statements has a different propositional value than in the real world. That statement then no longer corresponds to the corresponding statement of the real world, so that there is no subset relation between the represented world and the real world.

(3) DX is fictional :↔ (Pre DX ⊄ Pre W) ∧ (DX ⊄ W). A represented world is fictional if it contains at least one propositional content that does not occur in the real world (so that there is no subset relation between the predomains of the represented world and the real world). Which propositional value is ascribed to this propositional content is irrelevant, since its mere presence means that no subset relation between the represented world and the real world is possible any more.

Supplementary Background Information

Since the predomain is the set of all first components of all ordered pairs which are elements of a set, in the case of the non-existence of a subset relation with respect to the
14 For the formal definition of represented worlds (DX) and the real world (W) or real worlds (WX) see below, Sect. 3.4.
predomains of the sets, there can also be no subset relation with respect to the sets themselves, since the first components of the ordered pairs are part of the ordered pairs. Therefore, it holds in general:

(Pre X ⊄ Pre Y) → (X ⊄ Y),

or, related to represented worlds:

(Pre DX ⊄ Pre W) → (DX ⊄ W).

The fourth variant from a combinatorial point of view, (Pre DX ⊄ Pre W) ∧ (DX ⊂ W), is therefore not possible. Accordingly, there are systematically only three possibilities in this modeling – predicated here as factuality, fake and fictionality. This is significant insofar as the traditional triad of fictionality, factuality and fake can thus also be seen as supported by the model.
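The three definitions can be bundled into a single classification function (a sketch; the names classify and pre are mine, and worlds are again encoded as sets of (propositional content, value) pairs):

```python
def pre(Z):
    # the predomain: the set of all first components of the pairs in Z
    return {x for (x, y) in Z}

def classify(D, W):
    # (1) factual:   Pre D ⊂ Pre W and D ⊂ W
    # (2) fake:      Pre D ⊂ Pre W and D ⊄ W
    # (3) fictional: Pre D ⊄ Pre W (then D ⊄ W follows automatically)
    if pre(D) <= pre(W):
        return "factual" if D <= W else "fake"
    return "fictional"

W = {("Po", 1), ("Pt", 1), ("Ao", 1), ("At", 0)}
print(classify({("Po", 1), ("Pt", 1), ("Ao", 1), ("At", 0)}, W))  # factual
print(classify({("Po", 1), ("Pt", 1), ("At", 1)}, W))             # fake
print(classify({("Pt", 1), ("Ps", 1)}, W))                        # fictional
```

Note that the fourth combinatorial variant mentioned above never arises in the function: if the predomain test fails, the branch returning "factual" is unreachable.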
3.3 Fiction and Fake. Or: From the One-to-One to the Unambiguous Model
In terms of model theory, a represented world is already fictional if at least one propositional content of the represented world is not an element of the real world: even then, there is no subset relation between the predomains of the sets depicting the represented and the real world. The extreme case of fictionality, on the other hand, is given when the represented world is situated entirely in the future relative to its time of origin or lies beyond the realm of knowledge; think of science fiction, fantasy, or fables of alien planets or parallel worlds. In such cases, there are necessarily no equivalents in the real world for any of the statements – or more precisely: propositional contents: apart from the laws of nature, everything that happens in the future is necessarily contingent.15 And in the case of fantastic texts, not even the laws of nature known to us need have any validity. Accordingly, none of the statements of such texts is an element of the set that depicts the real world. And since this applies not only to the statements but also, and especially, to the propositional contents, such a represented world is to be classified as fictional according to the above set-theoretic definitions. What has just been stated, however, applies only to the “authentic” reading of fictional texts. Since every text, every speech act, is a product of a very specific time and a very specific culture, even the “strangest” worlds can be examined for analogies and references to the real world of their respective time of origin. The alien

15 Unless one is a follower of worldviews in which the future is predetermined.
worlds can then be read not only authentically but also inauthentically, figuratively. The degree of explicitness and concreteness of the respective references may vary from text to text, from speech act to speech act, but in most cases the analogies to the respective extra-textual circumstances will suffice to neutralize and substitute temporal, spatial or ontological differences (cf. Rastier 1974, pp. 153–190; cf. Klimczak 2020): the alien world of the text or speech act then represents merely a mirror image, distorted image or wishful image of the real, extra-textual, actual world. As soon as such an inauthentic reading, which establishes the substitution of characters, places and events, can be assumed, the specification of the set depicting the represented world of the fictional text changes fundamentally. The propositional contents of the represented world and the real world are then no longer disjoint: a subset of the propositional contents of the represented world must be regarded as a subset of the propositional contents of the real world. In many, if not most, cases, however, not all propositional contents of the represented world will become propositional contents of the real world, so that there is still no subset relation between the represented world and the real world with respect to the propositional contents. Such a relation would only obtain if there were not a single propositional content of the represented world that was not part of the real world. As long as some contents remain foreign to the real world, the represented world in question is still labeled as fictional, and not as factual or fake, despite the inauthentic reading. This labeling, however, is a product of the set-theoretic model presented here, and like any model, it can be questioned.
Thus, it could be argued that the existence of at least one propositional content within the represented world that is not an element of the real world should not, as in the previous definition, be sufficient to qualify the represented world as a whole as fictional, and thus as neither factual nor fake. That the existence of a single propositional content in the represented world that is not an element of the real world suffices to classify that world as non-factual should be evident, at least to most observers. Regarding the labeling of the represented world as non-fake, matters will probably be different: after all, the existence of a single “fictive” or “fictional” propositional content, i.e. one not occurring in the real world, would be sufficient to protect a represented world that is – from the perspective of reality – otherwise brimming with false statements from the judgment of being fake. This would provide a wonderful alibi for the proliferation of fake statements. Does this expose the set-theoretic model presented above for distinguishing fictionality, factuality and fake as useless? Not at all, in my view. The aim was to develop a model that allows classification as fictional, factual or fake on the basis of ‘histoire’ alone. Moreover, this classification was to be unambiguous, i.e. double or triple classification was to be excluded. This is exactly what the presented model achieves. But back to the – so far only theoretically envisaged – drawback: there should at least be the possibility of classifying an authentically fictional text as fake. This possibility does indeed exist, and it can likewise be modeled set-theoretically, but not within the existing,
disjunct distinction of fictionality, factuality and fake as a fourth possibility,16 but as a special case of fictionality. The idea is as follows: as already realized in the original model, a falsity can be defined in such a way that both the represented world and the real world contain the same propositional content, but with a different propositional value. Thus, the subset relation with respect to the statements is still to be determined, but no longer with respect to all statements, only with respect to those statements whose propositional content is part of both the represented world and the real world. In other words: all statements of the represented world that would classify it as fictional – because no statement is made within the real world with respect to their propositional contents – are to be disregarded. Formally, this requires the operation of intersection. The intersection of any two sets X and Y is the set of all objects x which are both elements of X and elements of Y17:

X ∩ Y := {x | x ∈ X ∧ x ∈ Y}.

Since, as just described, those statements first have to be identified whose propositional content is part of both the represented and the real world, the intersection of the predomains of DX and W, i.e. of the sets standing for the represented and the real world, has to be formed:

Pre DX ∩ Pre W.

The intersection of Pre DX and Pre W thus contains all the propositional contents which occur in both DX and W. The subset relation, however, is now to be determined not with respect to the propositional contents, but with respect to the statements containing these propositional contents.
Accordingly, on the one hand, the set of all statements from Dx whose first component is an element of the intersection of the predomains of Dx and W is to be formed,

{(x, y) ∈ Dx | x ∈ Pre Dx ∩ Pre W},

and on the other hand the set of all statements from W whose first component is an element of the intersection of the predomains of Dx and W:

{(x, y) ∈ W | x ∈ Pre Dx ∩ Pre W}.
16 As already explained above, there are only three possibilities for the determination of subset relations in terms of statement and statement content.
17 An example: two unordered sets are given. The first set contains the elements a and b, the second set the elements b, c and d: B = {a, b}, C = {b, c, d}. The intersection of B and C then contains all elements that are elements of both sets, here the object b: {a, b} ∩ {b, c, d} = {b}.
3
Fiction, Fake and Fact: A Set-Theoretic Modeling Together with a . . .
61
With regard to these two sets, the subset relation is then to be determined. If there is no subset relation, the represented world is a fictional fake (fiction fake):

{(x, y) ∈ Dx | x ∈ Pre Dx ∩ Pre W} ⊈ {(x, y) ∈ W | x ∈ Pre Dx ∩ Pre W} :⟺ Dx is fiction fake.

For example, suppose a represented world with the three statements (a, 0), (b, 0) and (c, 1),

D = {(a, 0), (b, 0), (c, 1)},

and a real world with the statements (a, 1), (b, 0) and (d, 1),

W = {(a, 1), (b, 0), (d, 1)}.

As usual, the first component represents the propositional content, the second component the propositional value, here 0 for false and 1 for true. The predomains of D and W are then on the one hand a, b, c,

Pre D = {a, b, c},

and on the other hand a, b, d:

Pre W = {a, b, d}.

The intersection of the two sets contains the two propositional contents a and b, since c is an element of Pre D but not an element of Pre W, while d is an element of Pre W but not an element of Pre D:

Pre D ∩ Pre W = {a, b}.

Then, the set of all statements from D whose propositional content is an element of the intersection of the predomains of D and W can be formed:

{(a, 0), (b, 0)}.

Similarly, the set of all statements from W whose propositional content is an element of the intersection of the predomains of D and W is to be determined:

{(a, 1), (b, 0)}.
In conclusion, {(a, 0), (b, 0)} is not a subset of {(a, 1), (b, 0)}, since (a, 0) – unlike (b, 0) – is an element of the former but not of the latter:

{(a, 0), (b, 0)} ⊈ {(a, 1), (b, 0)}.

Accordingly, D is fiction fake.
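The classification just carried out by hand can also be sketched computationally. The following Python fragment is illustrative only – the function names are mine, not the chapter's – and encodes worlds as sets of (content, value) pairs, reproducing the example above:

```python
# Sketch (illustrative; names invented): Klimczak's fiction-fake test.
# A "world" is a set of statements (propositional content, propositional value).

def predomain(world):
    """The set of propositional contents occurring in a world (Pre D)."""
    return {content for content, _ in world}

def is_fiction_fake(d, w):
    """True iff the statements of d whose contents also occur in w
    are not a subset of the corresponding statements of w."""
    shared = predomain(d) & predomain(w)          # Pre D ∩ Pre W
    d_shared = {(c, v) for c, v in d if c in shared}
    w_shared = {(c, v) for c, v in w if c in shared}
    return not d_shared <= w_shared               # no subset relation → fake

# The chapter's example:
D = {("a", 0), ("b", 0), ("c", 1)}
W = {("a", 1), ("b", 0), ("d", 1)}
print(is_fiction_fake(D, W))  # True: (a, 0) is in D but not in W
```

Note that the statement (c, 1), whose content does not occur in the real world, plays no role in the test – exactly the disregarding of "fictional" statements described above.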
3.4
Truth and World. Or: Of the Real World as a Set of Sufficiently Proven Represented Worlds
In 1912 Bertrand Russell formulated a correspondence theory of truth (cf. Russell 2001, pp. 17–24). He thus held that the truth of statements depends on the correspondence between statements and facts.18 Of interest, however, are also his immediately preceding considerations on truth and falsity: It seems fairly evident that if there were no beliefs there could be no falsehood, and no truth either, in the sense in which truth is correlative to falsehood. If we imagine a world of mere matter, there would be no room for falsehood in such a world, and although it would contain what may be called “facts,” it would not contain any truths, in the sense in which truths are things of the same kind as falsehoods. (Russell 2001, p. 18).
Russell goes on: But, as against what we have just said, it is to be observed that the truth or falsehood of a belief always depends upon something which lies outside the belief itself. If I believe that Charles I. died on the scaffold, I believe truly, not because of any intrinsic quality of my belief, which could be discovered by merely examining the belief, but because of an historical event which happened two and a half centuries ago. If I believe that Charles I. died in his bed, I believe falsely: no degree of vividness in my belief, or of care in arriving at it, prevents it from being false, again because of what happened long ago, and not because of any intrinsic property of my belief. Hence, although truth and falsehood are properties of beliefs, they are properties dependent upon the relations of the beliefs to other things, not upon any internal quality of the beliefs. (Russell 2001, p. 18 ff.)
One is inclined – especially against the background of the currently dominant “alternative facts”19 – to agree with Bertrand Russell. And yet, we subscribe here to arguments by
18 The following brief overview may also be helpful: Zoglauer 2016, pp. 28–34.
19 Kellyanne Conway, advisor to US President Donald Trump, defended Trump’s press secretary Sean Spicer’s statement that most viewers came to the inauguration of Donald Trump as President of the USA by saying that he had presented “alternative facts” (cf. NBC News 2017). Since then, a lively discourse about facts, alternative facts, fakes etc. has been underway, especially in the media public (cf. Kusch and Beckmann 2018; cf. Hendricks and Vestergaard 2018).
Rudolf Carnap and Carl Hempel. The latter was convinced that “statements are never compared with a ‘reality’, with ‘facts’” (Hempel 2000, p. 11): None of those who support a cleavage between statements and reality is able to give a precise account of how a comparison between statements and facts may possibly be accomplished – nor how we may possibly ascertain the structure of the facts. (Hempel 2000, p. 11)
It is not surprising that Hempel then asks how truth can be described from such a standpoint. Obviously, according to Hempel, what is needed is not a correspondence theory but a coherence theory of truth. The decisive step towards this is then taken by Rudolf Carnap when he states: The difference between the two concepts “true” and “confirmed” (“verified”, “scientifically accepted”) is important and yet frequently not sufficiently recognized. [. . .] The statements of (empirical) science are such that they can never be definitely accepted or rejected. They can only be confirmed or disconfirmed to a certain degree. [. . .] It follows that a scientific statement cannot simply be called true or false. (Carnap 2016, p. 89 f.)
What is decisive is not only the result, i.e. Carnap’s statement that scientific propositions cannot be proven true, but also his justification of this statement by the nature of scientific propositions. Carnap has the propositions of the empirical or inductive sciences in mind, which is hardly surprising against the background of the Vienna Circle. As laws, these propositions – put simply – have the form of universal propositions on the one hand, and on the other are obtained inductively from singular propositions, as the product of observation and experiment. Logically deducing a universal proposition from any number of singular propositions is, however, impossible. And even in retrospect, a universal proposition cannot be proved, i.e. verified (cf. Carrier 2006, pp. 98–132). Interestingly, however, sciences that, according to their self-understanding, do not aim at the formulation of laws but at the description (and interpretation) of individual cases face the same problem: their propositions cannot be proven true either. They can only be tried and tested. And so back to Bertrand Russell’s example: that Charles I died on the scaffold and not in bed is not something that we or historians can observe. All one can do is refer to sources that say that Charles I died on the scaffold. But can one trust sources just like that? Of course not. One has to examine them critically, weigh them against other sources with different claims, examine their coherence with what has been reconstructed up to that point, and so on. To speak in the words of Russell, and to contradict him at the same time: the care I have taken to arrive at a conclusion is decisive after all. A statement that refers to an event in the past (and neither follows a law of nature nor is definitionally true) cannot be proven true, only tried and tested. This is despite the fact that the event must either have occurred or not.
Thus, although there is a material world of facts, a correspondence between the facts of that world and propositions about that world cannot be established so simply.
But does this mean that we have to say goodbye to the values true and false for modeling the real world and that every statement has to be classified as indeterminate – that is, in the set-theoretic modeling presented here, we have to assume an empty set for the real world? Not at all: one merely has to bear in mind that the elements of the set representing the real world are not facts of this (material) world, but merely statements about this world that are assumed to be true because they are considered to be sufficiently confirmed. The confirmation of statements again takes place in the form of sources, texts, (moving) pictures, sound recordings, etc. The set representing the real world thus corresponds to the union set20 of those represented worlds to which one ascribes a proving quality. Formally, one assumes that the set W representing the real world is equal to the union set of the represented worlds Dx that are elements of the set A, which is the set of all represented worlds that are sufficiently confirmed:21, 22

W = ⋃x ∈ A Dx   with   A = {x | Dx is sufficiently confirmed}
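Read as an operation on sets of statements, this definition is a plain set union. A minimal Python sketch – the world names and contents are invented purely for illustration – of how W is assembled from the confirmed worlds:

```python
# Sketch (illustrative names): the real world W as the union of all
# represented worlds Dx that count as sufficiently confirmed.
represented_worlds = {
    "chronicle": {("a", 1), ("b", 0)},   # each Dx encoded as a set of
    "novel":     {("a", 0), ("e", 1)},   # (content, value) pairs
    "study":     {("b", 0), ("d", 1)},
}
A = {"chronicle", "study"}  # worlds ascribed sufficient confirmation

# W = union of Dx for all x in A
W = set().union(*(represented_worlds[x] for x in A))
# W now contains ("a", 1), ("b", 0) and ("d", 1); the novel's
# statements play no part in determining the real world.
```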
Whether a represented world can be considered sufficiently confirmed then depends on whether the derivation of its statements satisfies certain criteria. Criteria to be mentioned would be: lack of contradiction, evidence, method, citation, depth of research, authority, etc.23 In the context of the above (Sect. 3.2) modeling of factuality, it is important to point out that ascribing sufficient confirmation is not the same as stating factuality. If it were the same thing, one would be caught in a circle: a represented world would be factual precisely if it were a subset of the union set of all factual represented worlds. In this way, no extension of knowledge about the real world would be possible: statements that are not elements of the set representing the real world could never become elements of the world set, since the represented world of which they are elements could not be classified as factual. And if their represented world cannot be classified as factual, their world cannot become part of the union set of all factual worlds that constitutes the real world. Thus, whether the propositions of a represented world count as sufficiently confirmed depends not on whether the represented world can be classified as factual, but on whether the derivation of its propositions satisfies the criteria listed above. Since a sufficiently confirmed represented world is a subset of the union set of all sufficiently confirmed represented worlds, a sufficiently
20 By definition, the union of any two sets X and Y is equal to that set which contains as elements all and only those elements which are contained in X or in Y or in both X and Y: X ∪ Y := {x | x ∈ X ∨ x ∈ Y}. For example, again given the two unordered sets B and C already known from the intersection, their union set contains all elements of B and C: {a, b} ∪ {b, c, d} = {a, b, c, d}.
21 Read: “W is equal to the union of Dx, where x is an element of A”.
22 Read: “A is equal to the set of all x such that the represented world Dx is sufficiently confirmed”.
23 On truth criteria, but especially on the differentiation of truth criterion and truth definition, see the contribution of Thomas Zoglauer (esp. Sect. 1.10) in this volume.
confirmed represented world is always also factual. However, the converse does not hold: not every represented world that is factual is also sufficiently confirmed – although it is a subset of the real-world set and thus of the union set of all sufficiently confirmed represented worlds.
3.5
Indeterminacy and Trivalence. Or: Of the Contradiction in (Represented) Worlds
Let us return to the set-theoretic representation of the real and the represented world. As can easily be seen, the modeling was based on two-valued statements. This was true for the first, classical representation of statements by means of predicate, subject and negation sign as well as for the more complex representation of statements as ordered pairs with the propositional content as the first component and the propositional value as the second component. Only two elements were assumed for the propositional value: 0 for false and 1 for true. Despite this “classical” restriction to two truth values, the fact that a statement is an element or a non-element of a set representing a represented world or the real world seems to allow the representation both of being true as well as false and of being neither true nor false. First of all, the supposedly simpler case of being neither true nor false: if a propositional content p is an element of the set Dx or W neither in affirmative, (p, 1), nor in negated form, (p, 0), then this is regarded in the model presented above as a representation of the fact that no statement is made in the represented or real world concerning the corresponding fact. The propositional content p in question is then not part of the represented world, or not part of a sufficiently proven represented world (underlying the set W representing the real world). In concrete terms, this means that in the respective text or speech act there is no talk about the propositional content p, and thus, inevitably, no propositional value is assigned to it. Against this background, does it make sense at all to move from two truth or propositional values to three?
Absolutely – to which end only the above interpretation of a propositional content not being an element, whether with affirmed or negated propositional value, has to be considered more closely: no propositional value is assigned to the propositional content p only because there is no mention of p at all in the underlying speech act or text. But what about the case in which a propositional content is indeed, or even especially, mentioned and thematized in a speech act or text, yet no propositional value is assigned to it, whether explicitly (by thematizing this openness) or implicitly (by simply leaving it open)? Such a case may seem absurd at first sight, until one thinks of skeptical philosophical discourses (cf. Albrecht 1995) or postmodern literature (cf. Petersen 2010). Without a third propositional value, a differentiation between not talking about a propositional content and talking about it while leaving its truth value indeterminate is impossible. Thus, a third value is certainly needed. In order to map such indeterminacy in a three-valued modeling of statements, in the notation by means of ordered pairs only a third element has to be added to the basic
set V, the set of all statement values, besides 0 for false and 1 for true, e.g. U for indeterminate:24

V = {0, 1, U}.

Instead of U, any other symbol could have been used, although a numerical value (traditionally 0.5) is to be discouraged in order to avoid misunderstandings: due to the number line, 0.5 could suggest that this third value is an in-between, a state of half-false and half-true, which would ultimately presuppose a continuum between 0 and 1 with an infinite number of truth values in between. Such an interpretation does not do justice to the indeterminacy here (as well as to the indeterminacy in most discourses of cultural studies), since the indeterminacy or thirdness is precisely not a mere reduction of a continuum between true and false to a middle value of linear true-as-well-as-false or neither-true-nor-false. But now back to the second case, which results from the condition of statements being or not being elements of the set Dx or W: the true-as-well-as-false-ness due to the fact that a propositional content p is an element of the set Dx or W both in affirmed, (p, 1), and negated form, (p, 0). Against the background of the third propositional value U, which has just been introduced as a required addition, it would have to be specified that in a set at least two propositional values are assigned to one propositional content:

{(p, 0), (p, 1)} or {(p, 0), (p, U)} or {(p, 1), (p, U)} or {(p, 0), (p, 1), (p, U)}.

In the two-valued as well as in the three-valued model the respective statements contradict each other. Formally, however, there is no contradiction in any of the cases, since the statements are elements of sets and elements of a set do not contradict each other per se. What is formally possible is not necessarily possible in “reality”, though. But at least for represented worlds, the assumption of the possibility of “contradicting” statements makes
24 Incidentally, even in the case of the notation of statements without the aid of ordered pairs, i.e. in the case of the classical mapping of statements by means of statement constants and statement variables, a trivalence can be mapped. For this purpose, additional operators are usually used to set a statement as indeterminate, true or false. A good introduction to the subject (especially for humanities scholars) is provided by Blau 1978.
sense – at least when it comes to fictional speech or fake speech.25 People contradict themselves, and not only across several speech acts, but also in relation to one and the same speech act or text. But can factually represented worlds contain “contradictory” statements? If so, the set representing the real world should also be able to contain “contradictory” statements, since a factual represented world is a subset of the real world (cf. Sect. 3.4). But does the assumption of “contradictions” in the set representing the real world make sense? And since the real world is based on special factual represented worlds, namely those sufficiently confirmed represented worlds (cf. Sect. 3.4), the question also arises whether two represented worlds can at the same time be regarded as sufficiently confirmed, although one (with respect to a fact) states the opposite of the other? Especially the latter offers a starting point for answering both questions: due to the quality criterion of consistency, a sufficiently confirmed represented world should not contain two contradictory statements. But can two represented worlds contradict each other and still satisfy the criteria for confirmed represented worlds? Since even in science there is almost invariably a dissenting minority opinion26 and since scientific texts can normally be assumed to satisfy the criteria for sufficiently confirmed represented worlds, the question must probably be answered in the affirmative. What would be the alternative? Should two represented worlds that meet the criteria for confirmed represented worlds and yet contradict each other both be regarded as not sufficiently confirmed because of such an external contradiction? Should, then, the criterion of external consistency be adopted as a criterion for sufficiently confirmed represented worlds in addition to the criterion of internal consistency? 
The number of sufficiently confirmed texts would shrink rapidly and with them also the number of sufficiently confirmed statements, since in each case the entire represented world could no longer be considered sufficiently confirmed. And what about the idea that in the case of a contradiction of two sufficiently confirmed represented worlds with respect to a statement,27 the propositional value of the relevant propositional content in the union set of both represented worlds would have to be assumed to be indeterminate? This would also involve a new problem: admittedly, the represented worlds in question would still have to be assumed as sufficiently confirmed, since they had a direct influence on the determination of the statements of the real world. But they would be – except in the case that they assign the truth value Indeterminate to the respective propositional content – no longer a subset of the set representing the real world, thus no longer factual (but fake). This in turn, i.e. the circumstance that a sufficiently confirmed
25 The fact that fictional texts, or rather literature, can contain contradictory statements, but that the scientific description of these contradictions must be free of contradictions, has also been widely explained by Michael Titzmann (cf. Titzmann 1993).
26 See, for example, Vickers 2013.
27 This applies not only to cases where the truth values True and False collide, but also to the truth values True and Indeterminate and False and Indeterminate.
represented world qualifies as sufficiently confirmed but not as factual, however, seems completely absurd. Accordingly, there is no way around the admission of contradictory statements in the union set that represents the real world.
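Such “contradicting” assignments within one set can be made concrete in a short Python sketch (illustrative only; the contents and the helper name are mine, not the chapter's): a world is again a set of (content, value) pairs with values 0, 1 and U, and a content is contested precisely when it carries more than one value.

```python
from collections import defaultdict

# Sketch (illustrative): detecting "contradicting" statements in a world
# whose propositional values are 0 (false), 1 (true) or "U" (indeterminate).
def contested_contents(world):
    """Return the propositional contents assigned more than one value."""
    values = defaultdict(set)
    for content, value in world:
        values[content].add(value)
    return {content for content, vals in values.items() if len(vals) > 1}

# p occurs both affirmed and negated; q is merely indeterminate, r true.
D = {("p", 0), ("p", 1), ("q", "U"), ("r", 1)}
print(contested_contents(D))  # {'p'}
```

As the sketch shows, set membership itself raises no formal objection to (p, 0) and (p, 1) coexisting – the "contradiction" only appears once one reads the pairs as truth-value assignments.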
3.6
World and Worlds. Or: On the Case Specificity and Subjectivity of Reality
If the real world can only be determined relative to a certain set of represented worlds, as their union set (cf. Sect. 3.4), it is obvious that the set of these represented worlds can vary, and in two respects: on the one hand, a variation is possible with respect to the size of the set W, or with respect to the number of represented worlds underlying the union set W, respectively. Even if the number of represented worlds to which that sufficient confirmation is attributed is not formally infinite – because there is only a finite number of texts and thus also of represented worlds and thus also of sufficiently confirmed represented worlds – it can be assumed to be potentially a very large set. Thus, at first sight, the determination of the real world confronts one with a very large, indeed practically (if not formally) unsolvable problem.28 In practice, however, at least in the determination of a represented world as factual, fictional or fake, there is no problem: for the necessary determination of the two subset relations between the represented and the real world, the set representing the real world only needs to contain information about those propositional contents which are part of the represented world. The determination of the statements can thus proceed from the represented world to be classified: propositional contents that occur in the represented world under examination are to be checked for their existence or non-existence in the real world and, where they exist, determined according to their propositional value (true/false/indeterminate). The verification as well as the determination is then done by means of represented worlds to which that special, sufficient confirmation is ascribed.
In this way, one would have to deal with different sets Wx, which, however, would all have in common that their union set is a subset of the potentially determinable set W, the union set of all sufficiently confirmed represented worlds (cf. the definition of W in Sect. 3.4). However, as mentioned at the beginning, the set W may not only vary with respect to its size, or with respect to the number of represented worlds underlying the union set W, respectively. Variation is also possible with respect to which represented worlds are considered sufficiently confirmed. While in the case of the first variation all case-specific real (partial) worlds, individually as well as their union set, are subsets of a – albeit only potentially determinable – single real (total) world W, in the case of this second
28 Incidentally, this is a problem that is not alien to computer science or computational linguistics, since all attempts to establish a suitable representation of world knowledge must be regarded as having failed. See Lobin 2017 for an introduction, while Thar 2015 provides more detailed information.
variation no total world W exists anymore. Such a case comes about if, for the determination of what counts as a sufficiently confirmed represented world, the above-mentioned “objective” criteria are no longer used, but subjective ones instead. Thus, for example, for certain members of faith communities, texts or speech acts of God/gods or his/her/their messengers have the quality of absolute confirmation.29 Even beyond religions and ideologies, people assign to certain distributors and authors a special or stronger confirmation that is not objectively comprehensible in every case. And quite a few people, quite possibly atavistically conditioned, regard as sufficiently confirmed especially those represented worlds whose statements essentially correspond to their own views.30 Even in the field of science there are schools and their followers who do not accept the findings of others, despite the latter’s better arguments, and try to defend their own views as better confirmed through a variety of mechanisms.31 All this is well known, and one could regard this behaviour as non-objective, as purely subjective and thus not admissible for the determination of W or Wx; or one could regard the existence of objectivity per se as a mere construct and assume the general impossibility of both the determination and the existence of W.32 The third possibility, on the other hand, would be to leave this question undetermined, as we have done here, since our approach has been descriptive and not normative, and to establish that the model presented here excludes neither the one nor the other. On the contrary, through the relational determination of factuality, fictionality and fake on the basis of the real world or the respective assumption about the real world, the model is able to determine what qualifies for individual groups or people as factual, fictional and fake or,
29 In most cases, such contents that are decreed to be irrefutably true are called dogmas. Cf. for a historical as well as systematic overview: Wickert and Ratschow 1982.
30 The fact that people unconsciously seek out information that is essentially consistent with their pre-existing beliefs is referred to in psychology and communication science as selective exposure. This is by no means a new insight and has not only been virulent since the emergence of the buzzword filter bubble (cf. on the theories of selective exposure: Stroud 2017). However, the phenomenon of selective exposure has experienced a new popularity in the course of the algorithmization of communication. The filter bubble created by algorithms is mostly assumed to harbor a risk that cannot really be validated (cf. Thies 2017).
31 “With sufficient resourcefulness and some luck, any theory can be defended ‘progressively’ for a long time, even if it is false.” (Lakatos 1978, p. 111). This phenomenon is impressively described in the course of the history of science in Kuhn 1967.
32 The first view would have to be held by radical proponents of the correspondence theory of truth. Some of the moderate adherents of the correspondence theory of truth, however, advocate a “perspectival realism” that recognizes that truth claims are dependent on the epistemic perspective in question (on this, see Giere 2006; Massimi 2018). Arguably, the second view would have to be held by representatives of epistemological relativism, as found within the sociology of science and science and technology studies (cf. e.g. Barnes and Bloor 1982; Bloor 1991; Bauchspies et al. 2006). Cf. on both the first and the second also the remarks by Thomas Zoglauer in this volume (Sects. 10 and 5 respectively).
conversely, on the basis of their attribution of represented worlds as factual, fictional and fake, to extrapolate the subjective reality underlying this attribution.
References
Albrecht M (1995) Skepsis; Skeptizismus. In: Ritter J, Gründer K, Gabriel G (eds) Historisches Wörterbuch der Philosophie 9: Se–Sp. Schwabe, Basel, pp 938–974
Antos G (2017) Fake News. Warum wir auf sie reinfallen. Oder: ‚Ich mache euch die Welt, so wie sie mir gefällt‘. Der Sprachdienst 1:3–22
Aristotle (1932) Poetics. In: Aristotle in 23 volumes. Harvard University Press, Cambridge
Barnes B, Bloor D (1982) Relativism, rationalism and the sociology of knowledge. In: Hollis M, Lukes S (eds) Rationality and relativism. MIT Press, Cambridge, MA, pp 21–47
Bauchspies W, Croissant J, Restivo S (2006) Science, technology, and society. Blackwell, Malden/Oxford
Blau U (1978) Die dreiwertige Logik der Sprache. Ihre Syntax, Semantik und Anwendung in der Sprachanalyse. de Gruyter, Berlin
Bloor D (1991) Knowledge and social imagery. University of Chicago Press, Chicago
Carnap R (2016) Wahrheit und Bewährung (1936). In: Skirbekk G (ed) Wahrheitstheorien. Eine Auswahl aus den Diskussionen über Wahrheit im 20. Jahrhundert. Suhrkamp, Frankfurt am Main, pp 96–108
Carrier M (2006) Wissenschaftstheorie zur Einführung. Junius, Hamburg, pp 98–132
Conway K (2017) Press secretary Sean Spicer gave ‚alternative facts‘. In: NBC News. https://www.youtube.com/watch?v=VSrEEDQgFc8. Accessed 18 Apr 2020
Giere R (2006) Scientific perspectivism. University of Chicago Press, Chicago
Hempel C (2000) On the logical positivists’ theory of truth (1935). In: Hempel C (ed) Selected philosophical essays. Cambridge University Press, Cambridge, pp 9–20
Hendricks VF, Vestergaard M (2018) Postfaktisch. Die neue Wirklichkeit in Zeiten von Bullshit, Fake News und Verschwörungstheorien. Blessing, München
High C (2008) Mockumentary: a call to play. In: Austin T, de Jong W (eds) Rethinking documentary: new perspectives, new practices. Open University Press, Berkshire, pp 204–216
Keppler A (2006) Mediale Gegenwart. Eine Theorie des Fernsehens am Beispiel der Darstellung von Gewalt. Suhrkamp, Frankfurt am Main
Klimczak P (2020) Fremde Welten – Eigene Welten. Zur kategorisierenden Rolle von Abweichungen für Fiktionalität. Medienkomparatistik 2:113–137
Kuhn T (1967) Die Struktur wissenschaftlicher Revolutionen. Suhrkamp, Frankfurt am Main
Kuratowski C (1921) Sur la notion de l’ordre dans la Théorie des Ensembles. Fundam Math 2:161–171
Kusch R, Beckmann A (2018) Wahrheit oder Lüge? Eine Kulturgeschichte ‚alternativer Fakten‘. In: Deutschlandfunk. https://www.deutschlandfunk.de/eine-kulturgeschichte-alternativer-faktenwahrheit-oder.1148.de.html?dram:article_id=407821. Accessed 18 Apr 2020
Lakatos I (1978) History of science and its rational reconstructions. In: Worrall J, Currie G (eds) The methodology of scientific research programmes, vol 1. Cambridge University Press, New York, pp 102–138
Lobin H (2017) Sprachautomaten. In: Spektrum.de SciLogs. https://scilogs.spektrum.de/engelbartgalaxis/sprachautomaten. Accessed 18 Apr 2020
Martínez M, Scheffel M (2003) Einführung in die Erzähltheorie. Beck, München, p 10
Massimi M (2018) Perspectivism. In: Saatsi J (ed) The Routledge handbook of scientific realism. Routledge, Abingdon/New York, pp 164–175
Petersen C (2010) Der postmoderne Text. Rekonstruktion einer zeitgenössischen Ästhetik am Beispiel von Thomas Pynchon, Peter Greenaway und Paul Wühr. Ludwig, Kiel
Rastier F (1974) Systematik der Isotopien. In: Kallmeyer W (ed) Lektürekolleg zur Textlinguistik, vol 2. Fischer Athenäum, Frankfurt am Main, pp 153–190
Russell B (2001) Truth and falsehood. In: Lynch M (ed) The nature of truth. Classic and contemporary perspectives. MIT Press, London, pp 17–24
Stroud NJ (2017) Selective exposure theories. In: Kenski K, Jamieson KH (eds) The Oxford handbook of political communication. Oxford University Press, Oxford, pp 531–548
Thar E (2015) „Ich habe Sie leider nicht verstanden.“ Linguistische Optimierungsprinzipien für die mündliche Mensch-Maschine-Interaktion. Peter Lang Verlagsgruppe, Bern
The Simpsons („Bart to the Future“, Director: Michael Marcantel, USA 2000)
Thies B (2017) Mythos Filterblase. In: Kappes C, Krone J, Novy L (eds) Medienwandel kompakt 2014–2016. Springer, Wiesbaden, pp 101–104
Titzmann M (1993) Strukturale Textanalyse. Fink, München
Titzmann M (2003) Semiotische Aspekte der Literaturwissenschaft. In: Posner R, Robering K, Sebeok T (eds) Semiotik. Ein Handbuch zu den zeichentheoretischen Grundlagen von Natur und Kultur, vol 3. de Gruyter, Berlin, pp 3028–3103
Vickers P (2013) Understanding inconsistent science. Oxford University Press, Oxford
Wickert U, Ratschow CH (1982) Dogma. In: Müller G, Krause G (eds) Theologische Realenzyklopädie 9. de Gruyter, Berlin/New York, pp 26–41
Zoglauer T (2016) Einführung in die formale Logik für Philosophen. Vandenhoeck & Ruprecht, Göttingen
Peter Klimczak, Prof. Dr., wrote the paper “Fiction, Fake and Fact: A Set-Theoretic Modeling Together with a Discussion of Represented Worlds.” He is an adjunct professor (außerplanmäßiger Professor) at the Brandenburg University of Technology and was Feodor Lynen Fellow of the Alexander von Humboldt Foundation at the University of Wroclaw. He works in the fields of digital media, artificial intelligence, and media and cultural theory.
4
Stranger than Fiction: On Alternative Facts and Fictional Epistemologies Christer Petersen
Abstract
Starting from the term ‘alternative facts’ and its epistemological implications, three fields of discourse are brought together: that of the public rhetoric of a new political ‘Generation Fake’, that of postmodern or post-structuralist media theory, and that of fictional epistemologies as found in examples of postmodern literature and contemporary film. The aim of this is to examine the epistemological content of the new political rhetoric of the postfactual, on the one hand with regard to the theory of reality on which it is based, and on the other hand with regard to the theory of truth that it challenges.
4.1
Introduction: The Conway Case
Background Information The fact that the following refers to Donald Trump as the current US President and Kellyanne Conway as his advisor is due to its having been written in June 2020, and thus before the 59th presidential election on November 3, 2020. However, in December 2020, when the article was finally edited, Trump was still in
C. Petersen (✉) Chair of Applied Media Studies, Brandenburg University of Technology, Cottbus, Germany e-mail: [email protected] # The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023 P. Klimczak, T. Zoglauer (eds.), Truth and Fake in the Post-Factual Digital Age, https://doi.org/10.1007/978-3-658-40406-2_4
office.1 He had just suffered a narrow electoral defeat at the hands of Joe Biden, but refused to concede, filing voter fraud lawsuits – completely baseless and mostly immediately dismissed – in several states. The slogan Trump propagated from the first projections onward was now 'fraud' instead of the hitherto more frequent 'fake news': election fraud perpetrated on him instead of supposed falsehoods uttered about him. And even after the official confirmation of the victory of his Democratic competitor Joe Biden by the electoral college on December 14, 2020, Trump still clung to the accusation of election fraud. Conway, for her part, had already abandoned the sinking ship at the end of August 2020 in the run-up to the 59th presidential election, resigning from her advisory position and, as she announced at the time, leaving the presidential staff for private life. In January 2017, Kellyanne Conway, the Senior Counselor to the incumbent US President Donald Trump, brought a term into the world that landed – not only in the media – like a bomb: during “an interview on the American political talk show Meet the Press,” Conway used the term ‘alternative facts’ “to justify false statements made by [then] White House Press Secretary Sean Spicer about audience size during Donald Trump’s inauguration in front of the Capitol” (Wiki 2020).2 Spicer had claimed that there was a significantly larger audience at Trump’s inauguration than had been at Barack Obama’s. However, aerial photos of both events, as well as counts by Washington’s mass transit system, proved otherwise. In Germany, for example, Alternative Fakten was not only promptly chosen as the Unwort, the ‘un-word’ or ‘non-word’, of the year 2017;3 ‘alternative facts’, or more precisely the implications of the term, have also led to fundamental epistemological irritations.
For even though Donald Trump already diligently discredited unfavorable reports about himself as fake news during the 2016 election campaign and continues to do so to this day, and while he just as diligently produces news that can justifiably be called fake, lies and deceptions, these do not have the subversive quality of Conway’s alternative facts: unlike deception, lies and fiction, which still imply the existence of a truth and a reality, alternative facts seem to call into question precisely those facts that make it possible to
1 In the course of the translation of the original German-language article – in September 2022 – no more content revisions were made, so that, among other things, no mention is found here of Trump’s more than dubious role in the Capitol riots.
2 Since this article originally appeared in German, I cite here a German-language Wikipedia entry, which – like all subsequent citations from German-language sources – has been translated into English.
3 See the un-words since 2010 (Unwort des Jahres 2019). The year before, 2016, postfaktisch (postfactual) was chosen as word of the year by the German Language Society (Gesellschaft für deutsche Sprache 2016), while in the UK ‘post-truth’ was chosen as word of the year in the same year (Oxford Languages 2016); these terms certainly paved the way for Conway’s alternative facts.
distinguish lies from truth, fiction from reality. Whereas the term ‘alternative facts’ in the discourse of physics, for example, simply denotes other facts, i.e. verified data, which demand a different theoretical description of the material world, Conway’s alternative facts apparently unsettle an entire epistemology, namely a materialist, mirror-image-realistic or correspondence-theoretical epistemology. Alternative facts unsettle not only, but especially, an epistemology of mass media, which in any case no longer seems to be in a very good state. Thus, for example, media sociology announced shortly before the turn of the last century: In the fifties [of the 20th century], the idea still prevailed that with the help of the mass medium of television, the world could be reflected as it is [. . .]. The medium was considered an incorruptible eyewitness [. . .] and an instrument of comprehensive information. At present we are experiencing – triggered certainly by the debate about the new interactive media – a lasting disruption of this understanding of the media. An awareness of the possibilities of influencing media realities is increasing and threatens the ‘ontological certainty’ that had resulted from the invisibility of media construction principles. (Wehner 1997, p. 171)
When Conway justifies the fake news of Donald Trump’s Press Secretary by referring to alternative facts, ontological certainty is indeed a thing of the past. In doing so, Conway implies no less than that the world, about which the news is supposed to provide information, has simply become obsolete as a fixed, intersubjective point of reference: a fact is now whatever one wants to believe to be true. And a news item is no longer true – as the old, for Conway seemingly obsolete, epistemology would have it – on the mere strength of its relating to an observable event in reality. Conversely, fakes, lies and deception can no longer be identified as such because they can no longer be disproved by events of reality.
4.2
The Case of Baudrillard: Postmodern Theory
What a coup Conway pulls off here! Everything becomes fact and fake in equal measure, and with the loss of any real referent, all news now moves beyond the realm of true and false. However, all this is not so new: it has been circulating in philosophy, in literary and media studies, in art and cultural studies since the 1970s under the label of postmodernism. The turn of phrase ‘Beyond True and False’ not only echoes Nietzsche’s moral nihilism,4 it can also be found, for example, as a direct quotation in Jean Baudrillard’s Agonie des Realen (Agony of the Real) from 1978.5 Baudrillard’s radical constructivist media
4 Namely, Friedrich Nietzsche’s Jenseits von Gut und Böse. Vorspiel zu einer Philosophie der Zukunft from 1886.
5 See also Baudrillard’s short article, published in German in 1986, “Jenseits von Wahr und Falsch, oder Die Hinterlist des Bildes”.
epistemology, his immanence theory of the media, is derived from a concept that, at least in media theory, can no longer be thought of without Baudrillard – the concept of simulation. Thus we read in the Agony of the Real: “To dissimulate is to pretend not to have what one has. To simulate is to feign to have what one doesn’t have. One implies a presence, the other an absence” (Baudrillard 1994, p. 3).6 If this is applied to the relation between a real event and a media event, a real phenomenon and its media representation, then the operation of dissimulation does not yet call the old epistemology into question: by covering up the existence of a real event, dissimulation still remains related, even if only negatively, to a reference event. Or as Baudrillard puts it:

[P]retending, or dissimulating, leaves the principle of reality intact: the difference is always clear, it is simply masked, whereas simulation threatens the difference between the ‘true’ and the ‘false’, the ‘real’ and the ‘imaginary’. (Baudrillard 1994, p. 3)
Insofar as a media event merely simulates something – like the malingerer feigns the symptoms of a disease7 – the media event frees itself from its real reference event. This real event may exist, but does not have to, so that the relationship between event and media event ultimately becomes contingent; the event dissolves as witness, reference and residue in the stream of media events:

Today abstraction is no longer that of the map, the double, the mirror, or the concept. Simulation is no longer that of a territory, a referential being, or a substance. It is the generation by models of a real without origin or reality: a hyperreal. (Baudrillard 1994, p. 1)
Welcome to Baudrillard’s media hyperreality, in which Conway, Spicer, and ultimately Trump seem to swim like fish in water. This, at any rate, is the latest state of the philosophical debate on alternative truths – alternative facts, post-truth and fake news – as read, among others, by Tom Nichols (2017), Steve Fuller (2018) and, in the German-language discourse, most recently by Thomas Zoglauer (2020).8 In particular, Zoglauer identifies a radical Foucaultian social constructivism as the core and fatal legacy of postmodernism:

Talk of a social construction of facts levels the ontological difference between the world and our knowledge of the world. If facts are valid only because a social group recognizes them as
6 In the original publication of this article, I cite from Die Agonie des Realen (Baudrillard 1978), an early German translation of La Précession des Simulacres, whereas in the present version, I cite from Simulacra and Simulation (Baudrillard 1994), a later English translation with revisions by Baudrillard.
7 This is also the example in Baudrillard (1978, 1994).
8 While Zoglauer and Nichols argue against a radical constructivism of postmodern provenance, Fuller makes a strong case for a radical social constructivism and thus argues for truth relativism. See also the contribution by Thomas Zoglauer in this volume.
facts and not because they exist independently of individual consciousness and human culture, then reality becomes a construct. (Zoglauer 2020, p. 71)
And further, referring to Trump’s media practices, Zoglauer states: If truth is based solely on social acceptance, then we must note with alarm that Donald Trump’s Twitter messages meet with more acceptance in broad quarters of the American citizenry than warnings about climate change. Then there is no longer any difference between scientific and pseudoscientific theories, between truth and fake news. [. . .] Each social group can construct the facts as it sees fit. (Zoglauer 2020, p. 71)
A poststructuralist or postmodern theory9 in the person of Jean Baudrillard, Jacques Derrida, Jean-François Lyotard and, last but not least, Michel Foucault is now supposedly – if not necessarily the cause, then certainly – the enabler and facilitator of all this (Zoglauer 2020, p. 73).
4.3
The Strange World of Harold Crick: Fictional Epistemologies
If one accepts this for the time being and takes it as prologue and theoretical framing for one of the guiding questions of this article, then this question is: what do fictional formats enable, accomplish, even cause, when it comes to designing not alternative facts, but alternative epistemologies? What models of the world are presented to the audience in narrative media, in literature and film, in terms of what counts as fact and truth as opposed to illusion, deception and lies within the worlds presented there? A first clue is offered by Stranger than Fiction, a 2006 comedy by Marc Forster (director), where right at the beginning of the film one can observe protagonist Harold Crick, a bachelor and tax official inclined to obsessive-compulsive disorder, making a strange discovery. Harold stands in front of the bathroom mirror and brushes his teeth:

Voiceover: “If one had asked Harold, he would have said that this particular Wednesday was exactly like all Wednesdays prior. And he began it the same way he . . .” Harold pauses, takes the toothbrush out of his mouth, looks sceptically to the side, and then continues to brush his teeth.
Voiceover: “And he began it the same way he always did.” Harold takes the toothbrush out of his mouth again, looks at it and turns it in his hand.
Harold: “Hello?” He holds the toothbrush to his ear, shakes the toothbrush, shakes his head, and proceeds to brush his teeth.
Voiceover: “He began it the same way he always did. When others’ minds would . . .”
Harold: “Hello, is someone there?” He turns to the side, looks back and continues brushing his teeth.
Voiceover: “When others’ minds would fantasize about their upcoming day or even try to grip onto the final moments of their dreams, Harold just counted brushstrokes.” Harold spits the toothpaste into the sink. He is seen in the mirror looking back and forth.
Harold: “All right, who just said, ‘Harold just counted brushstrokes’? And how do you know I’m counting brushstrokes?” He looks around, turns to the back ...
Harold: “Hello?” . . . and discovers no one there either. (Forster 2006, 4:44–5:42)

9 See Petersen (2020, p. 84 ff.) and Petersen (2003, p. 199 ff.).
In fact, Harold hears the female voice of an authorial narrator who not only comments on Harold’s life, but, as it turns out, literally dictates it. The rest of the cinematic plot ultimately amounts to Harold having to stop the writer – who is writing the novel that is his life – from killing him, both literarily and factually. Not only does Harold succeed, he finds the love of his life, conquers his compulsions and becomes a hero by saving the life of a child. Harold is, however, only able to save his own life by persuading the author of his life story to rewrite her novel against her better judgment.10 What is this strange relationship between fiction and reality that is presented to the audience here? If we assume with Aristotle, in a traditional mirror-image-realist way, that art reproduces the world mimetically,11 then this is precisely the other way round in the world depicted in the film: reality is not a model for the novel about Harold Crick, but rather the novel is a model, or even more: a strict causal agent and instigator of Harold’s life. And indeed, again since Aristotle, we know the idea that fiction somehow has a performative effect on reality12: we can imitate art itself, take it – willingly or unwillingly, intentionally or negligently – as a model for our real actions. However, artistic fiction does not have such a direct effect on real action as in the case of Harold Crick. In this way, the film addresses the relationship between fiction and reality in a thoroughly self-reflexive way by playing out a reverse mirror-image realism: a correspondence theory of reality in which reality must follow fiction unconditionally. However, such a ‘correspondence theory of fiction’ does not fundamentally call into question a correspondence theory of reality, since it is, after all, still a correspondence
10 This is only in passing, although the film spends some narrative effort discussing the extent to which the literary quality of the novel suffers by the author not letting Harold die at the end. She discusses this with, among others, a professor of literature, whom she can only convince by ultimately rewriting the entire novel to fit the new ending (cf. Forster 2006, 95:05–95:25, 97:43–99:02, and 99:18–99:21).
11 In his Poietike, for example, Aristotle says: “Epic and tragic poetry, furthermore comedy and dithyramb poetry, as well as – for the most part – flute and zither playing; they are all, considered as a whole, imitations” (Aristoteles 1994, p. 5).
12 One can also ‘read this out of’ the Poietike.
theory. The epistemological confusion caused by the film is also limited, precisely because the world depicted in the film is constructed in every respect on the basis of correspondence theory: Harold is indeed the protagonist of the film, but he is also the exception, the deviation from the norm of the depicted world. As an utter and total stranger, Harold is and remains a solitary curiosity in an epistemologically quite normal world. And it is precisely this curiosity that we, as viewers, are allowed and expected to laugh at – certainly as an act of containment of the deviant and the concomitant consolidation of normality. In contrast, there is nothing to laugh about in the next film example, the 2015 drama Room by Lenny Abrahamson (director). The film tells the story of Jack, a boy born in captivity, and his exploration of the world. Jack’s mother Joy is kidnapped as a teenager and locked in a twelve-square-meter shed, where Jack is also born. But let us hand over to the dust jacket blurb of the German-language edition of the novel, which sketches out the world depicted quite succinctly:

For Jack, Room is the whole world. It’s where he and his Ma eat, play and sleep. Jack loves to watch TV because he can see his ‘friends’, the cartoon characters. But he knows that the things behind the screen are not real – only Ma, he and the things in Room are real. Until the day comes when Ma explains to him that there is a world out there after all, and that they must try to escape from Room . . . (Donoghue 2018)
The constellation is at first reminiscent of Plato’s Allegory of the Cave: prisoners, passing shadows on the wall in front of them. A closer look, however, reveals that it is not a matter here of the beings behind the apparitions, as in Plato, but that a media epistemology is actually being created: while the people and things in the room are perceived as real, to Jack the objects on the screen appear as purely fictional. They are no more than images and sounds from the television set that have no correspondence with real people and things, which, in order to be considered real, would have to be in the room. This is not only suspiciously reminiscent of Baudrillard’s media hyperreality of referentless simulacra, it is – at least structurally – the same media epistemology. However, in Room, construction is accompanied by deconstruction, and the creation of this epistemology is accompanied by its discarding. Shortly after Jack’s fifth birthday, Joy confronts him with ‘the truth’ in a long conversation,13 which costs her some effort and several attempts:

Joy: “Hey, Jack, do you remember Mouse?”
Jack: “Yeah.”
Joy: “Yeah? You know where he is?”
Jack: He shakes his head.
Joy: “Mmm. I do. He’s on the other side of this wall.” She looks toward the wall.
Jack: “What other side?”
Joy: “Jack, there’s two sides to everything.”
Jack: “Not on an octagon.”
Joy: “Yeah, but . . .”
Jack: “An octagon has eight sides.”
Joy: “But a wall, okay, a wall’s like this, see?” She holds up her outstretched left hand, the fist of her right hand to the right. “And we’re on the inside and Mouse is on the outside.” She now holds the fist of her right hand to the left of her outstretched left hand.
Jack: “In Outer Space?”
Joy: “No, in the world. It’s much closer than Outer Space.”
Jack: “I can’t see the outside-side.”
Joy: “Listen, I know that I told you something else before, but you were much younger. I didn’t think that you could understand, but now you are so old, you’re so smart, I know that you can do this.”
Jack: Shakes his head.
Joy: Takes another run at it. “Where do you think that Old Nick gets our food?”
Jack: “From TV by magic.”
Joy: “There is no magic. What you see on TV, those are pictures of real things, of real people. It’s real stuff. [. . .] That’s real oceans, real trees, real cats, dogs.”
Jack: “No way. Where would they all fit?”
Joy: “They just do. They just fit. They just fit out in the world. Jack, come on, you’re so smart. I know that you’ve been wondering about this.”
Jack: “Can I have something else to eat?”
Joy: Joy tries again to convince Jack by pointing to the skylight. “There’s a leaf. Do you see that?”
Jack: “Where?”
Joy: “Look!”
Jack: “I don’t see a leaf.”
Joy: “Come here. I want you to see.” She lifts Jack up to the skylight. “Come have a closer look. You can see that? See!”
Jack: “Dumbo, Ma. That’s not a leaf. Leaves are green.”
Joy: “Yeah, on trees, but then they fall, and they rot like salad in the fridge.”
Jack: “Where is all the stuff you said? Trees and dogs and cats and grass?”
Joy: “We can’t see it from here because Skylight looks upwards instead of sideways.” She puts Jack back on the floor.
Jack: “You’re just tricking me.”
Joy: “No, I’m not.”
Jack: He yells angrily. “Liar, liar, pants on fire!”
Joy: “Jack, I couldn’t explain it before because you were too small. You were too small to understand, so I had to make up a story. But now I’m doing the opposite, okay? I’m doing the opposite of lying. I’m un-lying because you’re five now. You’re five, and you’re old enough to understand what the world is. You have to understand.” She grows increasingly annoyed. “You have to understand. We can’t keep living like this. You need to help me.”
Jack: “I want to be four again.”
Joy: She gives Jack a disappointed look and takes another run at convincing him . . . (Abrahamson 2015, 25:25–28:12)

13 From which, for reasons of space, I can only quote excerpts, as the dialogue (Abrahamson 2015, 25:23–30:00) lasts about 5 min.
Since Joy has lied to her son about the nature of the world in order to make life bearable for him in the limited world of the room – lying is followed by un-lying – Jack’s whole epistemology is carried ad absurdum in retrospect: they do exist, all the things and people outside of the room. They are the referents of television programs. And television – here we are back in the correspondence-theoretical paradigm – quite simply represents the world.14 The film also doesn’t leave the slightest doubt about that when Jack and Joy finally manage to escape from their prison. Everything that now happens to the characters, everything they experience and the viewer observes, takes place within the framework of a correspondence-theoretical epistemology.
4.4
The New Generation Fake: Political Reality
If the films always mark the models in them that deviate from a correspondence theory as curiosities or as lies, deceptions and fakes, then the question may be asked once again as to what extent a current political practice of the fake and of alternative facts actually questions common epistemologies at all. First of all, it should be noted that fake news do not substantially challenge, let alone overcome, a correspondence theory of reality. Fake news are, namely, lies, or in the words of Axel Gelfert (2018, p. 86), “false or misleading claims [. . .] by design”. And as lies, fake news – following Baudrillard’s terminology – function like dissimulations in that they still refer, even if only negatively, to facts. So Donald Trump may be a liar, a populist, and the representative of the “new political Generation Fake” (Petersen 2020, p. 88), but he is by no means a radical constructivist or immanence theorist trained in postmodernism. He – quite strategically – is no stickler for the truth, lying and discrediting others, preferably journalists, as liars. But he does not question the principle of reality itself, nor a correspondence theory of reality.

But what about Kellyanne Conway? Here it is worth taking another close look at what Conway is actually talking about when she brings up alternative facts: as previously mentioned, this was in regard to attendance figures for the inaugurations of Obama and Trump. When press secretary Sean Spicer’s estimate turns out to be wrong, Conway justifies Spicer’s miscalculation of crowd numbers by claiming, “there is no way to really quantify crowds”.15 Purportedly, this is the only reason for Spicer’s misassessment, which is not a deliberately misleading statement, not a lie, but rather just a case of Spicer referring to his alternative facts. However, there is a way to quantify crowds. You simply have to count the people, in this case the ones in the photos. So what is Conway’s claim? Is she actually looking to question the countability of entities? Apparently not. For, in other contexts, Conway herself repeatedly invokes figures and facts, figures as facts.16 Conway is simply trying to talk herself and Spicer out of a corner. She wants to create confusion with her musings on alternative facts and uncountable viewers, so as to absolve Trump’s press secretary of the charge of spreading fake news. What Conway does not, however, succeed in doing with her alternative facts is to advocate a radical constructivism in the wake of postmodern theory – even though she has been repeatedly credited with doing so ever since. She does so no more than Trump, who simply spins out targeted lies and brazen untruths.

Indeed, Conway would then be a radical constructivist in the best postmodernist manner if she were consistent enough to attack the regime of the countable itself: why do we measure an audience’s response to and enthusiasm for a new president by the size of the audience and not – beyond numbers and counts – by, say, the impression left by the enthusiastic audience? It is only in approaching the world, Conway might proclaim, that the world is constructed. And the approach itself is socially constructed, not determined by realities and facts. As it stands, however, Kellyanne Conway and Donald Trump appear at times as brazen liars, at times as alleged victims of supposed falsehoods, at other times as populists wielding seemingly simple political solutions and, in essence, as demagogues of a new Generation Fake. But even just accusing them of being representatives of a new epistemology – a radically and socially constructivist, post-factual and postmodern epistemology – is to grant them far too much weight.

14 At the same time, Jack’s initial epistemology is anything but simple. When Jack explains the world to himself and thus to the audience in an inner monologue, it is full of speculative ad hoc constructions and inconsistencies: “There’s Room, then Outer Space, with all the TV planets, then Heaven. Plant is real, but not trees. Spiders are real, and one time the mosquito that was sucking my blood, but squirrels and dogs are just TV, except Lucky. He’s my dog who might come someday. Monsters are too big to be real, and the sea. The TV persons are flat and made of colors, but me and you are real. [. . .] Old Nick, I don’t know if he’s real, maybe half” (Abrahamson 2015, 10:21–11:08). Thus, the kidnapper Old Nick does not quite fit into Jack’s ontology and, because he is the only one who can enter and leave the room and thus keeps appearing and disappearing from Jack’s perspective, is considered ‘maybe half-real’ by Jack. In contrast, Jack ascribes a reality to his imaginary dog Lucky, even though he is not – according to Jack’s logic, not yet – in the room: “He’s my dog who might come someday” (Abrahamson 2015, 10:48).
15 Conway’s interview on Meet the Press can be found all over the web, including at NBC itself (NBC News 2017). The quote begins at 5:34; the keyword ‘alternative facts’ is mentioned before at 4:16.
16 She even does so in the same interview at 4:31 and elsewhere.
Thus one encounters such things not in political but in an artistic practice, not in the factual but in fiction, especially in that of a literary postmodernism. The novels of Thomas Pynchon are the example of this. Background Information None other than Friedrich Kittler – and thus probably the German-language literary scholar at the end of the twentieth century – confirms, precisely in his ironic rejection of a literary postmodernism, the central role that Pynchon’s work plays for it, if one wishes to adhere to the category of postmodernism at all (as the author of the present article does). Kittler (2003, p. 123) writes: “Pynchon’s novels are readily treated among literary scholars as model cases of so-called postmodernism. If Thomas Pynchon did not exist in obscurity, one would almost have to invent him in order to substantiate postmodernism altogether.” Today, on the other hand, the discussion about a literary postmodernism, which was still being conducted with all vehemence and polemic into the 2000s, seems to have come to an inconclusive end.17 At the same time, there seems to be widespread agreement among philosophers on the well-founded assumption of a postmodernist or post-structuralist philosophy (in the person of its above-mentioned protagonists, among others).
4.5
Pynchon’s Conspiracies: Postmodern Epistemology
Pynchon’s works, some of which run to over 1000 pages, hardly seem readily comprehensible any more. Thus, one is warned – by competent sources, to be sure – that the exegetes of Thomas Pynchon’s prose run the risk of encountering “facts and circumstances” before which “the well-rehearsed tools” of not only literary studies, but of the entire “humanities miserably fail” (Kittler 2003, p. 123). And although Pynchon’s novels do indeed employ a myriad of heterogeneous themes and forms of expression, there are in Pynchon’s œuvre recurring plot motifs and narrative techniques that can be discerned both at the level of narrative objects and narrative modes. One central motif complex in particular proves to be recurrent: conspiracies that structure the depicted world of the respective text and are accompanied by paranoid conspiracy theories, which in turn dominate the consciousness of the central characters right on into the narrative modes. Thus, for example, the depicted world of Gravity’s Rainbow, Pynchon’s major work from 1973, is also constructed along the reversible figure of such a paranoid conspiracy.18
17 See for example Petersen (2007, p. 9 ff.).
18 See Petersen (2003, p. 42 ff.) on this and the following.
The plot of Gravity’s Rainbow is structured as well as dissected by a so-called “They-system” over its 760 pages.19 This ominous system, in its ubiquitous influence on all economic, technical and military spheres, appears as a web of control mechanisms to which the entire depicted world of the novel seems subjected. Just as it controls the life of the novel’s hero, Tyrone Slothrop, from his earliest childhood on, so it also spans all topographical, political and ideological boundaries as an international cartel. Accordingly, the They-system is referred to in the novel as, among other things:

A Rocket-cartel. A structure cutting across every agency human and paper that ever touched it. Even to Russia . . . Russia bought from Krupp, didn’t she, from Siemens, the IG. . . . Are there arrangements Stalin won’t admit . . . doesn’t even know about? Oh, a State begins to take form in the stateless German night, a State that spans oceans and surface politics, sovereign as the International or the Church of Rome, and the Rocket is its soul. IG Raketen. (Pynchon 2000, p. 566)
The They-system thus functions on the one hand as a kind of ordering system of the depicted world, when it is spoken of as a “structure”. On the other hand, at the center of this structure is the rocket – first the V2 of the Nazi regime, then the nuclear missile of the Cold War – as a symbol of ultimate destruction. Thus, not only does the system already bear within itself the destruction of the world it seems to order, but in fact its ontological status within the depicted world remains indeterminate, since the They-system is staged simultaneously as a conspiracy and as a paranoid conspiracy theory of Slothrop as well as of the other characters. And not least because Gravity’s Rainbow correlates the figural modes of perception with the state of paranoia, the novel arrives at a fundamental critique of empirical-rational perception, since the concept of paranoia describes a psychopathological phenomenon which, according to Scott Sanders, attributes two fundamental misconceptions to the paranoiac:

Clinical paranoia is zealously self-referential: the paranoid asserts that (1) there is an order to events, a unifying purpose, however sinister, behind the seeming chaos; and (2) this purpose is focused upon the self, the star and victim. Thus the paranoid individual becomes a hero once again, he stands at the center of the plot. (Sanders 1976, p. 145)
By moving the perception of its characters into the realm of the pathological and discrediting the characters’ attempts at sense-making as a clinical phenomenon, the novel criticizes a rationalism that believes it can grasp the world in a presupposed system. The attempt to subjugate things to a rational system of order appears as a circular and ultimately immanent act, since the subject, completely detached from any existence independent of consciousness, constructs an order of the world and of perception that is created solely to confirm the subject in its empirical dispositions and to elevate it to the center of the world conspiracy it has devised.

19 This, at least, is the length of the English edition from the year 2000 quoted here.
The epistemological discourse that Thomas Pynchon conducts in Gravity’s Rainbow, as in all his novels of the 1960s and 1970s, thus does not just resemble an immanence-theoretical epistemology as developed by Baudrillard in his theory of medial hyperreality at the end of the 1970s. Rather, Pynchon’s novels follow precisely the same paradigm, an immanence-theoretical one in which real objects, events and facts evaporate, as it were, in and with their mediatization. All that remains is a self-referential and, as such, immanent system of referentless signs and images. As we have seen, this is not something we encounter either in current film or in a new post-factual discourse in politics; rather, it is a poetic and philosophical discourse of the 1960s, 1970s and 1980s. The new Generation Fake, on the other hand – with all its reflexes and strategies of cover-up and obfuscation, deceptions and lies – is still based on a correspondence-theoretical paradigm that, while continually unsettled and pushed to its limits, remains without alternative. And so the real problem posed by the new political Generation Fake is also not located on the level of an epistemology or a theory of reality, but on that of a theory of truth.

Background Information
In actuality, the statement that no immanence-theoretical or other alternative epistemologies are staged in current film is a rather bold thesis, insofar as it cannot be ruled out that a film might yet be found that stages precisely such alternative epistemologies. A cursory review of contemporary US and Western European feature films, however, reveals nothing of the sort, so that the thesis can claim validity at least until it is disproved.
Thus, even the usual suspects, such as Benny’s Video, directed by Michael Haneke (AT/CH 1992), The Truman Show, directed by Peter Weir (US 1998), The Matrix, directed by Lana and Lilly Wachowski (US 1999), or Adaptation, directed by Spike Jonze (US 2002), despite all their apparent mixing of fiction – be it as video recording, TV production, screenplay or virtual reality – and reality, ultimately leave no doubt that they, or the worlds they depict, are based on a paradigm of correspondence theory.
4.6 Conclusion: Agony of Credibility
The real problem then, as posed by the ‘media politics’ of a Donald Trump, a Kellyanne Conway or, in Europe, a Boris Johnson and Viktor Orbán, lies in the fact that they build their very image on undermining a paradigm of correspondence theory from within, as it were, through deceptions and lies, through the delegitimization of referential events and facts, by repeatedly and self-indulgently questioning the credibility of news and thus also the credibility of sources and messengers. This is disastrous insofar as we, as recipients and users of news, are in almost all cases unable to observe for ourselves the events and facts
that are (re)presented in the news. We must therefore rely on the credibility of the media and its representatives as witnesses, mediators and messengers of events and facts. This remains simply without any alternative within the framework of a correspondence-theoretical epistemology, a correspondence theory of reality, in which we as media users circulate just as Trump and the others do. And so there is a need for a theory of truth based on this epistemology, i.e. on a theory of reality, which in turn makes it possible to establish criteria according to which the credibility of the sources and messengers of news can be checked and thereby, not least, defended against the attacks of the populists of fakery. In actuality, in the current post-factual discourse of populist media politics we are not dealing with the agony of the real implicitly invoked by Pynchon and explicitly by Baudrillard, but with an agony of credibility. In this vein, one can read in Peter Klimczak,20 who, following Rudolf Carnap,21 argues for a testing of statements on the basis of a correspondence-theoretical epistemology, by supplementing a correspondence theory of reality precisely not with a correspondence theory of truth, but with a coherence theory of truth:

Although there is a material world of facts, a correspondence cannot simply be established between the facts of this world and the statements about this world. But does this mean that we have to say goodbye to the values true and false for modelling the world and that every statement has to be classified as indeterminate? Not at all: one merely has to keep in mind that the elements of the set representing the real world are not facts of this (material) world, but merely statements about this world that are assumed to be true because they are considered to be sufficiently tried and tested [bewährt]. (Klimczak 2020, p. 136 f.)
And whether statements about the world “can be regarded as tried and tested” depends in turn “on whether the derivation” of these statements “satisfies certain criteria. Criteria to be mentioned would include: consistency, evidence, method, citation, depth of research, authority, etc.” (Klimczak 2020, p. 137). A statement about the world can therefore be considered true if and only if it satisfies certain quality criteria. The specific quality criteria of news production in turn generate the credibility of news as well as that of its media and reporters. Thus one knows, though of course only ever with provisional certainty, which correspondent, reporter and journalist, and even more so which format, one can trust, because their news has proven itself in the past. Conversely, those whose news has not stood the test of time, who have covered up, obfuscated and demonstrably lied, are not to be trusted, not even and especially not in their attacks on the credibility of others. This, in turn, should apply not only to journalists but also to press secretaries, political counsellors and politicians. And so, in the end, the fact that someone who gambles with their own credibility time and time again, and so has
20 For the following, see also the contribution by Peter Klimczak in this volume.
21 In particular, Rudolf Carnap’s 1936 article “Wahrheit und Bewährung (Truth and Confirmation)” (Carnap 1977).
long since squandered it away, is elected to political office and is tolerated there remains a mystery from the point of view of democratic reason – unless we want to deny the voters and their representatives, and thus ourselves, that very faculty of reason.

Background Information
As already noted in the introduction, Donald Trump is today, in mid-December 2020, facing the end of his presidency. The US electorate thus evidently does not wish to tolerate him for another term as US president – that is, the majority of voters do not. After all, according to the preliminary results of the 59th presidential election (as of December 16, 2020), 74.2 million votes, or 46.9%, still went to Trump – more votes, albeit with a much higher turnout, than in the previous presidential election, in which Trump prevailed over Hillary Clinton with around 62 million votes and 46.1% of the vote.
References

Abrahamson L (2015) Raum. Liebe kennt keine Grenzen [Room, IE/CA/UK/US]. DVD, Universal Pictures Germany, Hamburg, 118 min
Aristoteles (1994) Poetik [Poietike]. Übersetzt und herausgegeben von Manfred Fuhrmann. Reclam, Stuttgart
Baudrillard J (1978) Agonie des Realen. Merve, Berlin
Baudrillard J (1986) Jenseits von Wahr und Falsch, oder Die Hinterlist des Bildes. In: Bachmayer M, van de Loo O, Rötzer F (eds) Bildwelten – Denkbilder. Texte zur Kunst, vol 2. Boer, München, pp 265–268
Baudrillard J (1994) Simulacra and simulation (trans: Glaser SF). University of Michigan Press, Ann Arbor
Carnap R (1977) Wahrheit und Bewährung [1936]. In: Skirbekk G (ed) Wahrheitstheorien. Eine Auswahl aus den Diskussionen über Wahrheit im 20. Jahrhundert. Suhrkamp, Frankfurt am Main, pp 89–95
Donoghue E (2018) Raum. Roman [Room. Novel, 2010]. Aus dem Amerikanischen von Armin Gontermann, 7th edn. Piper, München
Forster M (2006) Schräger als Fiktion [Stranger than Fiction, US]. DVD, Columbia Pictures/Sony Pictures Home Entertainment, München, 108 min
Fuller S (2018) Post-truth: knowledge as a power game. Anthem Press, London/New York
Gelfert A (2018) Fake news: a definition. Informal Logic 38(1):84–117
Gesellschaft für deutsche Sprache (2016) GfdS wählt ‘postfaktisch’ zum Wort des Jahres 2016. https://gfds.de/wort-des-jahres-2016/. Accessed 3 Apr 2020
Kittler F (2003) Pynchon und die Elektromystik. In: Siegert B, Krajewski M (eds) Thomas Pynchon. Archiv – Verschwörung – Geschichte. medien, vol 15. Verlag und Datenbank für Geisteswissenschaften, Weimar, pp 123–136
Klimczak P (2020) Fremde Welten – Eigene Welten. Zur kategorisierenden Rolle von Abweichungen für Fiktionalität. Medienkomparatistik 2:113–137
NBC News (2017) Full Conway interview: presidents ‘aren’t judged by crowd size’. NBC News, Meet the Press. https://www.nbcnews.com/meet-the-press/video/full-conway-interview-presidents-aren-t-judged-by-crowd-size-860134979785. Accessed 3 Apr 2020
Nichols T (2017) The death of expertise. The campaign against established knowledge and why it matters. Oxford University Press, New York
Nietzsche F (1886) Jenseits von Gut und Böse. Vorspiel einer Philosophie der Zukunft. Naumann, Leipzig
Oxford Languages (2016) Word of the year 2016. https://languages.oup.com/word-of-the-year/2016/. Accessed 3 Apr 2020
Petersen C (2003) Der postmoderne Text. Rekonstruktion einer zeitgenössischen Ästhetik am Beispiel von Thomas Pynchon, Peter Greenaway und Paul Wühr. Ludwig, Kiel
Petersen C (2007) Von der Moderne zur Postmoderne: Aspekte des Epochenwandels. In: Sagmo I et al (eds) Moderne, Postmoderne – und was noch? Osloer Beiträge zur Germanistik, vol 39. Peter Lang, Frankfurt am Main, pp 9–27
Petersen C (2020) Generation Fake. Täuschung, Lüge und Fiktion nach der Postmoderne. In: Ächtler N et al (eds) Generationalität – Gesellschaft – Geschichte. Schnittfelder in den deutschsprachigen Literatur- und Mediensystemen nach 1945. Verbrecher Verlag, Berlin, pp 83–95
Pynchon T (2000) Gravity’s rainbow [1973]. Vintage, London
Sanders S (1976) Pynchon’s paranoid history. In: Levine G, Leverenz D (eds) Mindful pleasures. Essays on Thomas Pynchon. Little, Brown and Company, Boston, pp 139–159
Unwort des Jahres (2019) Unwörter ab 2010. http://www.unwortdesjahres.net/index.php?id=112. Accessed 3 Apr 2020
Wehner J (1997) Das Ende der Massenkultur? Visionen und Wirklichkeit der neuen Medien. Campus, Frankfurt am Main
Wiki (2020) Alternative Fakten. In: Wikipedia. https://de.wikipedia.org/wiki/Alternative_Fakten. Accessed 25 Aug 2020
Zoglauer T (2020) Wissen im Zeitalter von Google, Fake News und alternativen Fakten. In: Klimczak P, Petersen C, Schilling S (eds) Maschinen der Kommunikation. Interdisziplinäre Perspektiven auf Technik und Gesellschaft im digitalen Zeitalter. Springer Vieweg, Wiesbaden, pp 63–83
Christer Petersen, Prof. Dr., wrote the paper “Stranger than Fiction: On Alternative Facts and Fictional Epistemologies.” He holds the Chair of Applied Media Studies at the Brandenburg University of Technology and works in the fields of media and cultural semiotics and the materiality and technicality of media.
5 The Marxist-Leninist Definition of Fascism and the Berlin Wall: Individual Consequences of Fake News and Conspiracy Theories Spread by the Media

Andreas Neumann
Abstract
Dieter Hötger was imprisoned in 1962 in the detention hospital of the central MfS (Ministry of State Security) detention center Berlin-Hohenschönhausen because he had tried to smuggle his wife and her children to West Berlin with the help of a tunnel – bypassing the Anti-Fascist Protection Wall. Based on the media coverage of his case, the verdict of the trial and the internal investigation records, the essence of fake news is elaborated. This is placed in the context of the definition of fascism according to Georgi Dimitrov that prevails in Marxism-Leninism – the core element of the charges against Hötger – and its conspiracy-theoretical potential is thus demonstrated. The text then describes anti-fascism, as derived from the Dimitrov thesis, as the GDR’s founding myth, before discussing in the concluding sections its media treatment in news media as well as in fictional film and series productions of the GDR, specifically in relation to the building of the Anti-Fascist Protection Wall.
5.1 On Behalf of “Terrorists”, “Warmongers” and “Fascist Creatures” – The Public Presentation of the Dieter Hötger Case
On June 28, 1962, less than a year after the erection of the Anti-Fascist Protection Wall, Dieter Hötger is hit by seven bullets, including in the lungs and face, while exiting a 35-metre tunnel on East Berlin territory. One of Hötger’s comrades-in-arms, Siegfried Noffke, who is still in the tunnel entrance, will later even die of his bullet wounds. Shortly before Dieter Hötger is to be operated on in the prison hospital in the central detention
center of the Ministry for State Security in Berlin-Hohenschönhausen, a Stasi interrogator storms into the operating room and insists on interrogating Hötger first. The interrogator threatens Hötger that there will be no operation without a statement. Hötger then lies in the MfS detention hospital for 3 months, subjected to constant interrogation (BStU, MfS, AU 14699/69, vol. 8, sheet 99; cf. Voigt and Erler 2011, pp. 22–24).

In October 1962, Dieter Hötger was sentenced to 9 years in prison for acts of violence endangering the state in combination with incitement to leave the German Democratic Republic (BStU, MfS, AU, 14699/69, vol. 4, sheet 116). In justification, the verdict states that the defendant was “an enemy of our workers’ and peasants’ state” who had been “ideologically poisoned” in the “West Berlin front city swamp” and had allowed himself to be “corrupted” by the “imperialists” residing there:

After the government of the German Democratic Republic implemented measures to protect our state and to maintain peace at our state border on August 13, 1961, it was no longer possible for him to conduct further speculative business in democratic Berlin without hindrance. He was very angry about this and had a hatred for the German Democratic Republic because he believed that his ‘freedom’ was inhibited. (BStU, MfS, AU, 14699/69, vol. 4, sheet 117)
Hötger is introduced right at the beginning of the written judgment as an ideologically blinded opponent of the GDR, who is also said to have made his living by dubious methods. The term freedom is discredited in this context not only by the quotation marks, but also by the suggested proximity to criminal activity. After he had broken through the state border in Berlin on April 24, 1962, the document continues, and thus left the GDR, he established contact in West Berlin “with incorrigible enemies of our state in order to harm the German Democratic Republic” (BStU, MfS, AU, 14699/69, vol. 4, p. 118). That the citizens of the GDR were in this case regarded as the state’s property becomes apparent when the reason for Hötger’s plans, and thus the cause of the damage to the GDR, is directly linked to Hötger’s attempt to bring his wife and their two children to join him in the West (BStU, MfS, AU, 14699/69, vol. 4, p. 118). When Hötger and his helpers finally broke through the floor slab of the house in East Berlin at the end of the tunnel, the East German officers, according to the verdict, asked

the terrorists [. . .] to surrender to the border security forces. Immediately after this request, the terrorist who had escaped to West Berlin and remained in the tunnel opened fire, seriously injuring a member of the border security forces. (BStU, MfS, AU, 14699/69, vol. 4, p. 122)
The fact that this is not true is already stated in the final report of the MfS on the operational procedure “Moles” of June 28, 1962, which states in this regard:

After that, the cellar door was pushed open by an operative and the three bandits present were ordered to surrender. Comrades Lehmann, Drabandt and Merkel were standing in the doorway and behind them was Comrade Captain Schulz. Before the bandits could surrender, comrade
Lehmann’s nerves failed, and he opened fire on the bandits. All three bandits were injured. Comrade Lehmann was also injured, although these were obviously caused by ricochets or the carelessness of another comrade. (BStU, MfS, AOP 9745/65, sheet 507)
Furthermore, Hötger was accused in the verdict, as the alleged “boss of a terrorist group [. . .]”, of having sought “to induce a total of 13 citizens of the German Democratic Republic to leave our workers’ and farmers’ state. Among them were 3 children” (BStU, MfS, AU 14699/69, vol. 4, sheet 123). The fact that these were exclusively family members of those involved in the construction of the tunnel is not mentioned. Instead, Hötger is accused of having acted on behalf of, among others, the “front city authorities”, the “warmonger Brandt” and “fascist creatures of the ilk of Marshal-Major Duensing and the SS butcher Graurock”, as well as Bonn Chancellor Adenauer, who himself was also “one of these murderers and warmongers” (BStU, MfS, AU 14699/69, vol. 4, pp. 123–125). Instead of a man who simply wanted to bring his family to West Berlin, Dieter Hötger is portrayed as an agent of fascist-imperialist Western powers.

While the motive of smuggling people out is at least mentioned in the verdict, there was not a word about it in the report on the incident in Neues Deutschland of July 8, 1962. Instead, “a gang of antisocial and criminal elements” dug a tunnel “to enable armed terrorists and agents of West Berlin underground organizations to enter the GDR’s territory unhindered” (Neues Deutschland 1962/07/08, p. 1 f.).

Court rulings and newspaper articles illustrate very clearly how events can be pressed into a certain friend-foe or good-versus-evil pattern through framing. Indeed, the battle of good versus evil forms the basic narrative of all conspiracy theories (Götz-Votteler and Hespers 2019, p. 169). Framing emphasizes some aspects of a perceived reality while omitting others. This serves, for example, to establish specific problem definitions or moral evaluations and thus to charge an event with a certain ideological meaning (Entman 1993, p. 52).
Not only are certain facts emphasized and others omitted; the newspaper article even adds new facts, which is why one can already speak of targeted disinformation, so-called fake news.1 But what is the actual narrative underlying Hötger’s indictment and conviction?
5.2 Fascism in the Marxist-Leninist World View
At the seventh World Congress of the Comintern in Moscow from July 25 to August 20, 1935 – at which time the NSDAP had been in power in Germany for over 2 years – the General Secretary of the Communist International, Georgi Dimitrov, established the theory
1 For the definition of fake news, see below in the text or, e.g., Gelfert (2018).
of fascism2 that was henceforth valid in Marxism-Leninism. In the process, the movement’s own previously valid understanding of the nature of fascism was also revised. Some evaluate Dimitrov’s postulate, binding for the Marxist-Leninist world movement, as a radical about-turn in communist politics, which from the early 1920s onward had not only regarded the social democrats – who recruited from the same constituency as the communists – as the “main social support of the bourgeoisie” and a moderate wing of fascism, but had declared them the actual main enemy of communist politics (Sabrow 2016, p. 86).
Dimitrov, at any rate, now characterized fascism as

the open, terrorist dictatorship of the most reactionary, chauvinist, most imperialist elements of finance capital. [. . .] Fascism is the power of finance capital itself. It is the organization of the terrorist reckoning with the working class and the revolutionary part of the peasantry and the intelligentsia.
The ruling bourgeoisie, according to Dimitrov, turns to fascism in times of “revolutionization of the working masses” due to the aggravation of the general crisis of capitalism,

to carry out the worst plundering measures against the working people, to prepare an imperialist war of robbery, to prepare the invasion of the Soviet Union, the enslavement and partition of China, and by all these measures to prevent the revolution (Dimitroff 1958, p. 523 f.)
and in this way to assert their power (Dimitroff 1958). The gradual deviation from Moscow’s – and thus the Comintern’s – previously valid image of fascism lay in the fact that from now on a decisive qualitative difference between bourgeois democracy and fascism was asserted. The new definition was thus demarcated from Moscow’s own, less differentiated use of the term in the 1920s and 1930s, under which not only the German Nazi movement and Mussolini’s Italy had been subsumed, but also bourgeois authoritarian and presidential systems (“Brüning fascism”, “Papen fascism”) or even social democrats (“social fascists”). A distinction was now made by no longer equating the normal form of bourgeois rule with the terror openly evident in Germany under the Third Reich, but classifying it only as latently or potentially fascist (Blank 2014, p. 20). However, this correction did not change the fact that the Dimitrov thesis, or agent theory, understands both bourgeois democracy and fascism as merely two different forms of rule by capitalism, which

2 This paper follows a general and non-Marxist understanding of fascism, according to which the term denotes all extremely nationalist, leader-oriented, anti-liberal, and anti-Marxist movements, ideologies, or systems of rule that sought to abolish parliamentary democracies in the period following the First World War. However, the term as a description of a certain type of politics is controversial, since numerous regimes or movements so described, such as German National Socialism, did not name themselves fascist at all. The goals and methods of its different manifestations also varied widely (cf. Der Brockhaus 1997, p. 133).
can be replaced depending on necessity. Thus, every state organized as a market economy and every parliamentary-democratically constituted state system is per se in a pre-fascist stage, which runs the risk of being transformed into an openly fascist dictatorship at any time. The precondition for the assumption that both bourgeois democracy and fascism are only two different instruments of domination of the finance capitalists is the Marxist axiom according to which capitalism is a political system and not merely a certain way of organizing the economy. This in turn can be traced back to Marx’s philosophical materialism, according to which “the economic structure is the basis on which the political superstructure rises” (Lenin 1976, p. 13). Meanwhile, even contradictions between theory and practice, such as the cooperation at the trade-union level between the Comintern-controlled Revolutionary Trade Union Opposition and Nazi factory cells during the Weimar Republic (cf. Weber 1983, pp. 199 f.; Winkler 1990, pp. 765–773), were unable to shake the supposed validity of this friend-foe dogma. The clearest contradiction, finally, was the Hitler-Stalin Pact of 1939, with which, according to Wolfgang Leonhard, all anti-fascist argumentation and rhetoric disappeared from the Soviet press and all other public pronouncements from one day to the next (Leonhard 2014, pp. 81 f.). In Marxism-Leninism, the rule of abstract and independent relations in capitalism is interpreted as the personal rule of a certain group that exploits the working proletarians out of pure greed. Admittedly, the Dimitrov thesis is not itself anti-Semitic, since Jewish people are not directly described as the secretly acting string-pullers of capital.
Nevertheless, the basic form of thought – the bipolar dichotomy of the owning bourgeoisie, in whose hands lie the means of production, and the proletariat, which must sell its labor power to the former – is reminiscent of the distinction the National Socialists made between good, creative labor and evil, rapacious finance capital, which they clearly associated with Jewry (TKA 2018, p. 14). As early as the 1930s, Stalin’s rival Leon Trotsky, who was perceived worldwide as Jewish, and his (alleged) followers were branded as insidious internal enemies of the Soviet Union who would undermine the country through conspiratorial networks and social demagogy in order to hand it over to international finance capital. This was structurally very similar to Nazi ideologemes of Jewish Bolshevism (Koenen 2019, p. 98) and thus served anti-Semitic narratives. In the late-Stalinist show trials in Eastern Europe, anti-Semitic resentments were then woven directly into the pattern of the Dimitrov thesis. In the Slánský trial, held in Czechoslovakia in 1952 and marked by a campaign against cosmopolitanism and Zionism, 11 of the 14 main defendants received the death penalty. It was considered proven that they had formed a “Trotskyist-Titoist, Zionist, bourgeois-nationalist” conspiratorial center “under the guidance of hostile Western spy services”. It was repeatedly emphasized during the trial that the majority of the defendants were Jewish, which, according to the indictment, made them nationally unreliable: rather, they were cosmopolitans, conspirators, Zionists, and agents of imperialism (Gerber 2016, p. 11 f.). Even if not explicitly described as string-pullers, the accused political officials are placed, with direct reference to their Jewish roots – and thus to Zionism – in a context of meaning with imperialism, bourgeois ideology, the Western capitalist states,
Leon Trotsky, who was also associated with Judaism, as well as a cosmopolitan, i.e. world-spanning, conspiracy. A dictionary compiled in 1980 at the Institute for International Relations at the Academy of Political Science and Law of the GDR then already shows clear similarities between this understanding of Zionism and Dimitrov’s definition of fascism from 1935: it describes Zionism as a “chauvinist ideology”, as “the widely ramified organizational system and the racist, expansionist political practice of the Jewish bourgeoisie, which forms a part of international monopoly capital” (Authors’ Collective 1980, p. 703). The dictum cultivated for decades by the Soviet Union and the GDR that Israel was a new Third Reich (Herf 2016, p. 9) finally fully confirms the convergence of Marxist-Leninist anti-Semitism, disguised as anti-Zionism, with the Dimitrov thesis. One need not agree with all of Karl Popper’s analyses of Marx and Marxism, but he accused numerous exponents of Marx’s doctrine of (consciously) misinterpreting historical materialism when they described “wars, depressions, unemployment, hunger in the midst of plenty” as “the result of a cunning conspiracy on the part of the ‘big capitalists’ or the ‘imperialist warmongers’”. On the contrary, Popper argues, Marx described these rather as “undesirable [. . .] social [. . .] consequences of actions [that were] directed towards other ends, by people caught up in the network of the social system”. He called such interpretations of historical materialism – which certainly include the Dimitrov thesis – “vulgar Marxist conspiracy theory” (Popper 1992, p. 119). At the very least, the monocausal accusation that finance capital deliberately evoked fascism justifies attributing a conspiracy-theoretical character to the Dimitrov thesis, or agent theory.
The conspiracy in a conspiracy theory can be defined “as the secret collaboration of a (usually manageable) group of persons whose agreements and actions are aimed at influencing events to their own advantage (and thus at the same time to the disadvantage of the general public)” (Hepfer 2015, p. 24). Conspiracy theories are not robust theories, but “the result of a subjective interpretation of selective perceptions” (Götz-Votteler and Hespers 2019, p. 36), which is why terms such as conspiracy ideologies or conspiracy myths are often considered more appropriate. They are often expressions of existential threats or far-reaching political changes, because the belief in knowing what lies behind complex developments gives a sense of control and security. It also creates a sense of superiority over other parts of the population, which in turn is a characteristic of groups prone to political or religious extremism; in fact, these same groups tend to believe in conspiracy theories. This is not just an assumed superiority in knowledge and understanding, but also a moral superiority, coupled with emotional indignation and anger toward others. These others – certain ethnic, social, political, or religious groups – are stigmatized, and violence towards them tends to be legitimized. In this way, conspiracy theories also serve to assert the power interests of those who circulate them. They are not intended to clarify facts at all, but serve, among other things, to discredit political opponents (cf. Götz-Votteler and Hespers 2019, pp. 31–45). All this can be said of the Marxist-Leninist definition of fascism, which not only ignores historical facts and, by simplifying the explanation for the rise of fascism, trivializes the phenomenon, but also delegitimizes the market economy and parliamentary democracy per se.
5
The Marxist-Leninist Definition of Fascism and the Berlin Wall: Individual . . .
5.3 Anti-Fascism as a Founding Myth of the GDR
Whether it was really such an “impudence” of Hitler’s party to “call itself National Socialism, although it has nothing in common with socialism,” as Dimitrov claimed (1958, p. 223), or whether this self-description rather goes back to the origins of the fascist movement in the 1920s (cf. Neumann 2019, p. 97), for example under Mussolini in Italy – origins that were then also reflected in the social-revolutionary program of the NSDAP re-founder Gregor Strasser, who was influential in northern Germany and toward whom Joseph Goebbels, for example, also leaned at times (von Bilavsky 2009, pp. 30–37) – may remain an open question. Perhaps such vehement indignation over the use of the term sought only to conceal structural and anti-liberal similarities between the allegedly contrary ideologies. However, the anti-Semitic undertones in the struggle against Trotskyism (in this article), the Soviet silence on the cruel persecution of Jewish people in Germany and later (at the time of the Hitler-Stalin Pact) in Poland (cf. Rapoport 1992, p. 74 f.), as well as the 1935 invitation to German (young) communists to join the Nazi mass organizations (cf. Sabrow 2016, pp. 264–267), speak for themselves. Last but not least, the instructions from Moscow to the French communists to come to terms with the German occupying power after the Wehrmacht’s invasion of France, according to Gerd Koenen, “do not amount to a simple convergence of the two systems, but they do point to parallelisms in the construction of enemy and even world views” (2017, p. 928 f.). The Dimitrov understanding of fascism remained, in principle, a lasting ideological cornerstone of the GDR until the end of the state’s existence, even though the classification of certain events, such as the people’s uprising of June 17, 1953, as a fascist act was not made quite so directly in later times (cf. Beleites 2013, p. 53). This is because the GDR viewed itself as the anti-fascist German state, while West Germany allegedly acted in a (pre-)fascist manner.
This can be seen, among other things, in a project discussed at a meeting of the secretariat of the Central Committee of the Socialist Unity Party (SED) in 1966, which planned a “world campaign about the GDR and to expose neo-Nazism in West Germany,” in the context of which the “anti-fascist, humanist peace policy of the GDR on the one hand and the aggressive neo-Nazi, anti-peace policy of the Bonn government on the other hand should be convincingly presented” (quoted from Leide 2005, p. 81 f.). Anti-fascism was omnipresent in the political and social everyday life of the GDR because it was the founding narrative par excellence, meant to legitimize the existence of a socialist Germany. Political communities have always resorted to political myths in order to present themselves to the outside world and to integrate themselves internally. They often draw on legendary narratives, or actual historical events are mythically exaggerated and reshaped. According to Herfried Münkler, myths differ from mere historiography in that they are not interested in the historical event itself, but in its significance for the continuation of history. “In this respect, political myths do not report on events, but on caesuras of time and punctuations of history.” Following this logic, the SED leadership understood the GDR as a state that emerged from the experiences and struggles of broad anti-fascist segments of the population, led by the heroically
sacrificing communists in the resistance, against the Nazi regime (Münkler 2009, pp. 32–34). Typically for communism, however, these myths were always embedded in the postulates of historical materialism, i.e. they corresponded to a socio-historical theory of evolution (Koenen 2017, pp. 48–50): according to the Marxist-Leninist understanding of history, anti-fascist socialists were not only better people per se, but also members of a more highly developed stage of society. The belief of a more or less large group of people in the accuracy of news is a necessary precondition for the success of both fake news (Keil and Kellerhoff 2017, p. 19) and ideologies in general. But the belief in this bipolar world-explanation of (pre-)fascism vs. anti-fascism, with its decidedly religious salvific echoes, also offered practical benefits for many people who were not themselves unconditional disciples of the pure doctrine. With this narrative of anti-fascism, GDR citizens were not only allowed to “cling to the fiction of the blamelessness of the majority of the population in the German catastrophe and to historically situate the GDR as a workers’ and peasants’ state outside the context of entanglement with National Socialism,” i.e. to absolve themselves from it (Danyel 1993, p. 134). At the same time, by shifting responsibility for Nazi crimes to West Germany, the SED state was able to set itself morally apart from the Federal Republic of Germany (FRG). Thus, anti-fascism was the only supposedly unassailable legitimation for the existence of the GDR in domestic and foreign policy terms. Because there was no actual political legitimacy, neither through free elections, the rule of law, or real pluralism, nor through the satisfaction of the population’s consumer needs, anti-fascism served as the substitute legitimation in the GDR (Leide 2019, p. 13 f.).
It was intended to stylize the anti-fascist struggle of the communists during the Third Reich into a fixed point in the cultural memory of GDR citizens. This term, coined by Jan Assmann, describes how the past is not preserved as such, but rather transformed into symbolic figures:

For cultural memory, it is not factual history that counts, but only remembered history. One could also say that in cultural memory factual history is transformed into remembered history and thus into myth. Myth is a founding story, a story that is told in order to narrate a present from its origin. [. . .] This does not make [history] unreal, but on the contrary real in the sense of a continuing normative and formative force. (Assmann 2002, p. 52)
5.4 The Anti-Fascist Protection Wall and the State Information Media
However, the focus on anti-fascism as a state obligation did not materialize only in the invocation of the anti-fascist resistance fighters and their self-sacrificing struggles – in obligatory interviews with contemporary witnesses in schools, in street names, in countless commemorative events, etc. – in order to pass on the anti-fascist spirit to the next generations. In addition to this mythological component, which at least superficially
aimed at the political and ideological education of believing, i.e. faithful, GDR citizens, the ideological reference to anti-fascism could also entail serious consequences for life and limb, as not only the example of Dieter Hötger and his comrades-in-arms illustrates. After the GDR leadership had reacted to the mass exodus in May 1952 by sealing off the inner-German border, escape to the FRG was criminalized and the possibilities of getting to the West narrowed considerably. In the divided but not yet sealed-off city of Berlin, however, it was still feasible, and people took advantage of this opportunity. In July 1961 alone, 30,415 citizens of the GDR applied for admission to the FRG. By the day the Wall was built, around 2.5 million people had left the country (Heidemeyer 2008, p. 87). On the night of August 13, 1961, Action Rose was launched – not to be confused with the Action Rose of 1953 for the nationalization of private holiday facilities, especially on the Baltic coast (cf. Marxen and Werle 1999, p. 42) – in which the crossings between East Berlin and the western sectors were closed and a total of 143.6 kilometers of West Berlin were fenced in, initially with 473 tons of barbed wire. Subsequently, the border facilities were fortified and expanded over the years. By agreeing to this action, which Walter Ulbricht had long requested, Nikita Khrushchev gave his consent to separating friends and families for a long time and keeping people away from their jobs. Ulbricht’s request was tantamount to an admission of failure for the SED, as he painted the future of a socialist Germany in the bleakest of colors should no action be taken: with the sector border open, he could no longer guarantee the GDR’s existence. Although the construction of the Wall was staged as a sovereign step by the SED leadership, actual control lay with Marshal Ivan Konev, i.e.
with the Soviet authorities, who had, after all, increased their military presence in Central Europe by 25% in the run-up to the action. At Ulbricht’s instigation, the Warsaw Pact finally classified the border closure as a defense against warmongering subversion and thus as a case for the alliance – which Klaus-Dietmar Henke describes as the birth of the legend of the protection wall (2011, pp. 15–17). The following morning, Neues Deutschland, the central organ of the SED, promptly delivered the official announcements of the Council of Ministers of the GDR and the Warsaw Pact states on the nocturnal action. Although West Germany is not directly described as fascist in the declaration of the Council of Ministers, it was, according to the ND, trying to continue what had been started between 1939 and 1945. The government in Bonn was thus, after all, taking a stand according to which the Second World War was not yet over, which was why it was constantly undertaking military provocations and civil war measures against the GDR.

This imperialist policy, conducted under the mask of anti-communism, is the continuation of the aggressive aims of fascist German imperialism at the time of the Third Reich. From the defeat of Hitler’s Germany in World War II, the Bonn government has drawn the conclusion that the predatory policy of German monopoly capital and its Hitler generals is to be tried again by renouncing a German nation-state policy and turning West Germany into a NATO state, a satellite state of the U.S. (Neues Deutschland 1961a/08/13, p. 1)
A. Neumann
While in the Second World War it was German monopoly capital that pulled the strings in the background, it is now integrated into a larger context: the entire Western world (Neues Deutschland 1961a/08/13). What is also interesting about this statement is that the West German nationalists are thereby accused of a certain kind of internationalism, while the socialist internationalists present themselves as the guardians of the grail of an independent German policy. This is certainly a reply to the not entirely unjustified verdict from West Germany that the GDR was merely a fifth column of the Soviet Union (cf. Adenauer 1950). At the same time, in the understanding of Eastern European communism, there were, from the 1950s onward, various forms of internationalism, among which the accusation of so-called cosmopolitanism in particular shows a clear proximity to anti-Semitic conspiracy theories because of its anti-Jewish patterns of argumentation (in this article). However, the government declaration regarding the construction of the Berlin Wall argues that by closing the sector borders, the citizens of the GDR, who were “increasingly exposed to terrorist persecution” during visits to West Germany, would be protected from harm. The following explanation comes closer to the actual reason for the building of the Berlin Wall, even if the choice of residence or job in West Berlin is presented here as the consequence of a criminal act: “A systematic poaching of citizens of the German Democratic Republic and a regular human trafficking is organized by West German and West Berlin agent headquarters” (Neues Deutschland 1961a/08/13, p. 1). The portrayal of people as commodities to be traded clearly points to the GDR’s prevailing understanding of individuals and their autonomous decisions.
In this context, parliamentary democratic institutions are also discredited as clumsy tricks for warmongering: “The West German militarists want to use all kinds of fraudulent maneuvers, such as ‘free elections,’ to first extend their military base to the Oder River, and then start the big war” (Neues Deutschland 1961a/08/13). In this light, it seems only right that “revanchist politicians and agents of West German militarism” were denied entry to East Berlin. The related announcement that “citizens of the capital of the German Democratic Republic require a special certificate to cross the border into West Berlin,” while the “visit of peaceful citizens of West Berlin to the capital of the German Democratic Republic (democratic Berlin) [. . .] is possible upon presentation of the West Berlin identity card,” completes the overall argumentation, which seems richly contrived but reveals against whose border crossings the Wall was actually directed (Neues Deutschland 1961a/08/13). The formula of the Anti-Fascist Protection Wall did not function as an official language regulation from the beginning, one whose use in social practice also signaled political good conduct (Demke 2011, p. 96). At the end of 1961, the East Berlin Committee for German Unity published the brochure Die Wahrheit über Berlin (The Truth about Berlin) for West Berlin. It was intended to explain to West Berliners why it had become necessary to build a wall: “At the GDR’s state border with West Berlin, an anti-fascist protection wall was erected and the fire source of the war in West Berlin was brought under control” (quoted from Kubina 2011, p. 83). The formula of the Anti-Fascist Protection Wall already appears here, although it is not officially mentioned until July 31, 1962, in a submission to the Politburo (Holzweißig 1995, p. 1700). This word construction was thought up by Horst
Sindermann, then head of the Agitation Department at the Central Committee of the SED, who was chairman of the Council of Ministers between 1973 and 1976 and then president of the People’s Parliament until 1989 – positions in which he was, at least pro forma, the second or third most powerful man in the GDR. In 1990, in an interview with the German news magazine Der Spiegel, he indirectly admitted that the Berlin Wall had of course not protected against fascists and other external enemies. However, since the GDR was not supposed to “bleed out,” which would have put an end to the “antifascist-democratic order,” even 29 years later he still felt the term was appropriate (Harenberg 1990, p. 60). The brochure from 1961 pointed out the following to the citizens of West Berlin: “Walled into West Berlin were all those forces and organizations that for years had been increasingly disrupting the peaceful construction of the GDR and preparing for the hot war” (quoted from Kubina 2011, p. 83). Correct in terms of geography, because a wall was indeed built around West Berlin, but functionally wrong, because it was the East Germans who were now finally cut off from the (Western) outside world, this formulation suggests that socialist humanism had walled in the Western imperialist aggressors, i.e. transported them like criminals into a prison. In reality, however, citizens of West Berlin were still able to reach West Germany via the transit routes. The front page of the August 14, 1961, issue of Neues Deutschland (Fig. 5.1) was meant to testify that the clandestinely planned action of the previous day was in the interests of the people of the state. Not only were GDR citizens quoted repeating the slogans of state propaganda and, referring to the separated families, acquaintances, etc., saying that one could “accept any inconveniences that may arise” (Neues Deutschland 1961b/08/14, p. 1) – which is already a contradiction in terms.
Workers also expressed their support for Action Rose by announcing that they would achieve a higher work target (Neues Deutschland 1961c/08/14, p. 1), a clear reference to the Stakhanov and Hennecke movements.3 It was also typical of the staging of the building of the Wall from the East Berlin perspective to emphasize the closing of the border as the alleged will of the working class by highlighting the significant part played by the combat groups of the working class. This, too, is already evident on the front page of August 14 in the photograph of two members of this paramilitary unit of the Publicly Owned Enterprises (VEB), with clearly identifiable industrial plants in the background. The caption then also explains that the two pictured are the bar puller Klaus Abraham (left) from the Berlin metal smelting and semi-finished
3 On October 13, 1948, miner Adolf Hennecke at the Oelsnitz coal plant Gottes Segen (God’s Blessing) produced 387% of his normal daily output, emulating the Soviet miner Alexei Stakhanov, who in 1935 had produced well over 1000% of his normal daily coal output in a single day in the Donetsk Basin. Like the Stakhanov movement in the Soviet Union, the Hennecke movement in the GDR was intended to urge workers to invest their labor even more intensively in building up the country. In the weeks that followed, the newspapers were full of reports of numerous activists setting new work records. Many miners, however, saw in this staging only a means by which the rulers raised work norms (Satjukow 2002; Hartmann and Eggeling 1998, pp. 114–123).
Fig. 5.1 Neues Deutschland of August 14, 1961, p. 1
product and the setter Harald Gündel (right). The latter is quoted as saying: “With my machine pistol, just as at the lathe, I represent a completely normal and just cause: we protect our state, peace and socialism” (Neues Deutschland 1961d/08/14, p. 1). This reference to the armed branch of the working people was intended to demonstrate once again the harmony between the people of the GDR, which claimed to be the state of workers and peasants, and the measures taken on August 13 – another euphemism for the building of the Berlin Wall. Thus, the official coverage mainly showed the combat groups of the working class (see Fig. 5.1) in a posture of armed vigilance, but occasionally also erecting the barbed wire fence. The photo icon of the Anti-Fascist Protection Wall par excellence was a photograph in which members of these combat groups close off the Brandenburg Gate with their bodies as a human wall (Demke 2011, p. 98 f.). As late as 1989, during the May Day celebrations on Karl-Marx-Allee in Berlin, a parade of the combat groups was reviewed by the state and party leadership to the sound of The Internationale. It was prominently featured in the Aktuelle Kamera, the main news program of the GDR, with the off-screen commentary:

Traditionally, these hours are concluded by delegations of the combat groups from Berlin factories. Some of them already wore this uniform on August 13, 1961. What happened then is well known. And it is also known that since that day, Europe has been able to live more calmly, despite many staged tensions, that European relations have developed on a sovereign basis. The GDR will continue to work for every honest disarmament agreement. But it will also continue to know how to defend its socialist achievements at all times. (16:29–17:54 min)
The peace-keeping consequences of the events of August 13, 1961, are once again emphasized. By contrast, the reference to a fundamental willingness to disarm honestly seems contradictory while uniformed paramilitaries armed with Kalashnikovs march through the picture. Ultimately, the threat to continue defending the GDR at gunpoint is directed just as much against the GDR’s own population as the construction of the Anti-Fascist Protection Wall was primarily directed against the inhabitants of the GDR.
5.5 From Fiction to Reality. The Anti-Fascist Protection Wall in Cinema and on the Screen
However, the Berlin Wall, or the Anti-Fascist Protection Wall, was not only made a topic in daily newspapers, political brochures, and news broadcasts, i.e. in media for which recipients assume a certain journalistic duty of care with regard to truthfulness or authenticity. The subject was also addressed in the realm of fiction. The 1963 DEFA (state-owned film studio) agent thriller For Eyes Only – Streng Geheim by János Veiczi, based on a screenplay by Harry Thürk with dramaturgy by Heinz Hafke, is set in 1961. The film tells the prequel to the building of the Wall from the perspective of SED ideologues. Indeed, the film opens with the faded-in premise: “The plot of the film is fictitious –
similarities to actual events and living persons are intentional” (00:01:27 h). Such remarks suggest a certain documentary claim, which, because recipients cannot distinguish between the two, leads to fiction and reality being interwoven and thus merging into a new truth. The claim to truth of the events portrayed is also emphasized by fading in front pages of the international press at the end of the film (Fig. 5.2), a stylistic device also used for this purpose in GDR television (cf. Neumann 2019, p. 86 f.). This documentary character was confirmed as early as the film’s release in 1963 by MfS Lieutenant Colonel Gerhard Kehl, deputy head of the General/Agitation Department and one of the numerous expert advisors for what was probably the closest collaboration between DEFA and the State Security, in the Volkswacht of July 20, 1963, published in Gera: although the plot was fictitious, it resulted “from many facts, findings and experiences of a long struggle with imperialist secret services” (Stöver 2006, p. 69). The plot of For Eyes Only, in which Hansen, a scout for peace of the MfS in the Federal Republic, works for the US-American MID (Military Intelligence Division), the intelligence service of the US Army, in Würzburg, is woven together from several real incidents. Hansen, whose real name is Lorenz and who, because of his cover, is considered a fugitive from the Republic in the GDR, manages after several twists and turns and an action-packed chase to transport important documents about Western spies in East Germany and the imperialists’ attack plans for the conquest of GDR territory across the inner-German border. This provided incontrovertible proof of the bellicose intentions of the Western powers, and the state leadership was able to initiate appropriate countermeasures, i.e. (unspoken) the building of the Wall.
The aspect of the plot in which an MfS spy smuggles file material on Western agents in the GDR to East Berlin goes back to the MfS double agent Horst Hesse, who in 1956 stole a safe containing documents on agents from the Würzburg MID headquarters and transported it to the GDR. The operation led to the arrest of around 140 employees of Western services in the GDR. However, the safe stolen by Hesse did not contain any plans of attack (Iken 2016). That part of the story goes back to the strategy paper DECO II of the Bundeswehr, the West German armed forces, which had already reached the MfS in 1955 and which – at least according to the allegations of the SED – unmistakably proved an imminent attack by the German armed forces on the GDR (Wenzke 2013, p. 192). According to Bernd Stöver, the film constitutes, in terms of audience, the most successful fictional contribution to the thematic complex of liberation policy4 (Stöver 2006, p. 62). Although the uprisings in the
4 Liberation Policy, or simply Rollback, was an addition to and intensification of the United States’ already existing Containment Policy: not only was the further growth of the Soviet sphere of power to be prevented, but Soviet satellite states were to be destabilized through political, economic, propagandistic, and secret service activities, and their populations were to be encouraged to (passively) resist the communist regimes. The line of intervention was to be drawn where there was a danger of direct confrontation between U.S. military forces and the Soviet Red Army, which could easily have grown into a nuclear war (Stöver 2014, pp. 216–218). On the Rollback Policy, see Bischof 2003, pp. 111–117.
Fig. 5.2 For Eyes Only – Streng geheim (Screenshot DVD Icestorm Entertainment Hamburg 2013) (1:33:52 h)
Soviet sphere of power in 1953 and 1956 were not initiated from the outside, as the SED leadership never tired of emphasizing until its end, the demonstrators nevertheless hoped for the fulfillment of the promises of the Liberation Policy. Thus, the Liberation Policy also remained a sword of Damocles constantly hovering over SED rule (Stöver 2006, p. 51). But regardless of whether this fear of a direct invasion of the GDR by the Bundeswehr or NATO troops was rationally justified or not, it is important to note that the so-called Anti-Fascist Protection Wall – if one analyzes how it actually functioned – was clearly directed against the GDR’s own population, so the closure of the sector borders was certainly not a response to this supposed threat scenario. Here, the DEFA film simply repeats the lies already used in the official press releases of 1961. In For Eyes Only it is mentioned that all 35 group leaders in the Würzburg MID staff came from the Gestapo, the SD or the SS (56:07 to 56:15 min). The German MID headquarters is located in Frankfurt am Main, the German financial center since the beginning of the early modern period and the founding of the Frankfurt Stock Exchange, which means that monopoly capital also seems to be at least indirectly involved in the plans of the imperialists. This highlights the connection between capitalism, fascism, and parliamentary democracy (West Germany) that goes back to Marxist-Leninist ideology, i.e. the triad of the Dimitrov thesis. It does not even take the US-American doctor, reminiscent of a concentration camp doctor, who administers the lie detector test: he becomes abusive towards women and reveals himself as an anti-Semite (1:00:15 to 1:00:35 h). On the one hand, there is the always friendly agent Hansen, who drinks milk and is particularly suited to his job, i.e.
extremely professional (even a lie detector does not expose him), and moreover a family man who actually only wants to visit his son in the GDR; on the other hand, there are the Western agents, who constantly drink alcohol, cheat on their spouses, and murder unscrupulously. This contrast shows the viewer not only which political system is the more humanistic, but also that For Eyes Only is a particularly successful image film for the Ministry for State Security. Even years after the Berlin Wall was built, GDR television continued to justify the erection of the Anti-Fascist Protection Wall. Another example is the 11-part series Rendezvous mit Unbekannt (Rendezvous with the Unknown), first broadcast in 1969.
Fig. 5.3 Rendezvous mit Unbekannt (Part 11) (Screenshot DVD Studio Hamburg 2016 (00:12 min))
Produced in the DEFA Studio for Feature Films, the series again brought together János Veiczi (director) and Harry Thürk (screenwriter). The lead actor of For Eyes Only, Alfred Müller, once more played the MfS protagonist, this time Major Wendt. The series deals with “reports from the pioneer days of the GDR’s defense organs” (Fig. 5.3), as the subtitle vividly puts it. The quasi-documentary character of the events depicted, set between 1952 and 1954, is further reinforced by the note under each title: “The plot reenacts true events” (Fig. 5.4). In addition, a high degree of authenticity is suggested by showing original footage of historical events and personalities, e.g. from newsreels, as well as front pages of daily newspapers at the beginning of each episode, so that viewers know how to place the following plot within the political events of the time. An off-screen narrator ensures that they adopt the correct point of view regarding the class position. At the beginning of the eighth episode, for example, this commentary points out: “The reactionary policy of the imperialist powers is practicing the hot war in Korea and the cold war in Europe. Acts of diversion against the GDR are on the agenda. The front city of West Berlin serves as a springboard for this. Our part of the city is considered the next target!” (Episode 8, 0:39 to 0:57 min). Moreover, at the end of Episode 4, documentary shots of wrestling women, wildly dancing youths, and scary movie posters point to the decline of (cosmopolitan) culture in the West. Neglected-looking and worried people are shown, while a hand takes numerous banknotes from the table, referring to capitalist class society. Military parades of the Western allies stand for the militarism and imperialism prevailing in the West. In between, there are images that suggest Western-directed incitement against the young GDR, such as the RIAS logo or a poster, presumably
Fig. 5.4 Rendezvous mit Unbekannt (Part 11) (Screenshot DVD Studio Hamburg 2016 (01:08 min))
of a person living in West Berlin, that reads: “Passivity is betrayal of Berlin”. The sequence then concludes with what these inflammatory conditions are portrayed as leading to: a parallel montage of shots of a military parade in Berlin’s Olympic Stadium and of German signposts pointing to now Eastern European cities, including Gliwice and Wroclaw, but also to the East German city of Leipzig, is meant to testify to the revanchism prevailing in the West (25:04 to 25:58 min). In each of the eleven episodes, Major Wendt and his protégé Lieutenant Faber put a stop to people whose activities are directed against the GDR. They are often former SA men, Einsatzgruppen members or members of notorious SS divisions, “werewolf people, HJ leaders and Napola pupils”, who are now supposed to cause damage on behalf of the MID – which corresponds to the agent theory, according to which fascists act on behalf of monopoly capital. Actions of the KgU (German abbreviation for “Combat Group against Inhumanity”) are also thwarted by the employees of the Ministry for State Security. Every case that the two MfS comrades investigate turns out to be orchestrated from the West, which sends its spies and agents across the sector border. “Hundreds of thousands of people flit back and forth between East and West Berlin every day” (Episode 9, 04:12 min), Lieutenant Faber notes, which is why the building of the Berlin Wall and the final closing of the Iron Curtain are legitimized as an absolute necessity. That, in the series, illegal border crossings are always a matter of entering one’s own territory – an intrusion, as it were – and never of leaving it, is made clear in Episode 9, in which an exiled Pole is supposed to cross the Oder River illegally by diving, on behalf of the MID, and enter the People’s Republic of Poland in order to carry out certain actions on behalf of the USA.
In the eleventh and last episode, military installations and landscapes are spied on behalf of the Gehlen Organization (the predecessor of the Federal Intelligence Service BND, founded in 1956) – “cartography for a military action” (17:04 min), as the two MfS employees explain – thus indirectly but clearly pointing to the supposedly necessary construction of the Wall. In this context, it is not without a certain irony that Ingolf Gorges, who in the role
of Lieutenant Faber protected the GDR from harmful imperialist influences, moved to West Berlin in 1977 in connection with the protests against Wolf Biermann’s expatriation and became, among other things, a spokesman for RIAS. In episodes 8 (Mörder machen keine Pause) and 9 (Sieben Augen hat der Pfau) of the well-known GDR agent series Das unsichtbare Visier (The Invisible Sight), broadcast in 1976, the erection of the Wall is again presented as a direct consequence of capitalist-fascist plans for a rollback of the GDR. Already in the previous episodes of the series, MfS agent Werner Bredebusch, disguised as the former air force fighter pilot Achim Detjen, had been active in Western countries. He established contacts with former World War II military officers who were building up the Federal Republic’s Ministry of Defense, which is always referred to as the “War Ministry” in the series. Some of them are now leaders in the weapons industry and, in the form of ODESSA (Organization of Former SS Members), subordinate only to the CIA, exert strong social and political influence in West Germany from behind the scenes. At the beginning of episode 8 – both episodes form a common story arc – Colonel Brinkmann, Detjen’s superior in the Federal German Ministry of Defense, is found dead in his hunting lodge in the early summer of 1961. He had opposed a plan by his old ODESSA comrades to provoke a military incident on the territory of the GDR in connection with an interzonal train on the route between West Germany and West Berlin. The consequence of this staged incident would be the occurrence of the NATO casus foederis. In this way, the revanchists could undo the results of World War II with the help of the Western powers. This scenario had not only been rehearsed beforehand in a Bundeswehr maneuver; the exact procedure is also demonstrated to the Bundeswehr officers involved – and thus to the viewers – vividly and in great detail using a model railway layout.
After Detjen manages to secure the missing microfilm recordings of Brinkmann's notes in a lengthy game of cat-and-mouse and thus to prove the intentions of the revanchists, the MfS's "scout for peace" finally returns to East Berlin. Shortly after crossing the border, he is greeted by a comrade who immediately presents him with the latest issue of Neues Deutschland. Its headline reads: "Measures for the Protection of Peace and the Security of the German Democratic Republic Come into Effect". It is the issue of August 14, 1961, the day after the Berlin Wall was built (Fig. 5.5). Again, the authenticity of the events is attested by depicting a real news medium. Yet this is a circular media reference, shielded from outside scrutiny: not only does the series suggest truthfulness with the help of the ND, but this type of (fictional) media representation is also intended to help the officially prevailing GDR narrative of the fascist-imperialist danger from the West gain more acceptance among the population, and thus ultimately to carry the ND report over into reality in the consciousness of the people living in the GDR. The debates on fake news, especially since the election of Donald Trump as President of the USA, often give the impression that this is a new phenomenon. The construction of the Berlin Wall and the associated media coverage show that such phenomena existed well before the digital era. After all, the definition of fake news often used in the social science research literature as a knowingly manufactured
Fig. 5.5 Das unsichtbare Visier (2009) (Part 9) (Screenshot DVD Studio Hamburg (1:16:21 h))
or disseminated piece of false information that appears in a context with a claim to truth (Müller and Denner 2019, p. 6) applies to the political statements of SED members and the press reports on the building of the Wall as well as to the coverage of the Dieter Hötger case. Even the fictional-dramatic justifications of the erection of the "Anti-Fascist Protection Wall" in the films and television series discussed here can be subsumed under the defining attributes of fake news because of their suggested claim to documentation and truth. The targeted disinformation about the cause of the construction of the Wall, however, unfolded a particular effect because it could build on the capitalism-fascism conspiracy theory, which had been established for decades, at least in communist circles. The media examples also showed that the narrative of an imminent attack by the Western powers remained present for years. This makes perfect sense, as frequent repetition over a long period of time leads to a cumulative consolidation of narratives (Schulze 2019, p. 7). In the case of representatives of the SED party and the GDR state, especially the employees of the MfS, which always saw itself as the party's shield and sword, the question must also be asked to what extent they themselves were captive to this worldview. For on the one hand, fake news and conspiracy theories usually do not convince anyone who fundamentally disagrees, but rather have a reinforcing effect on those who already share the fundamental view behind the disinformation (Schulze 2019, p. 7). Thus, individuals who believed in the narrative of the agent theory are unlikely to have doubted
the truthfulness of the official justification for the Berlin Wall. On the other hand, it should be borne in mind that, unlike other socialist states, the GDR was by no means shielded from Western media. A large part of the GDR population had access to West German media offerings and was thus able to gain a different perspective on the GDR. Many residents made extensive use of this opportunity, which is why the average GDR citizen certainly did not believe the narrative in its entirety. For those individuals who believed in the necessary and singular connection between capitalism and fascism, for whom it represented an ideological truth, the accusations brought against Hötger actually made sense. By seeking to smuggle people out of the GDR, and thus to withdraw them from the work of socialist construction, he harmed the GDR and, from the point of view of the dichotomous Marxist-Leninist worldview, thus automatically at least aided and abetted fascism. How deeply rooted the belief in this narrative really was among people living in the GDR, however, requires further investigation.

Acknowledgements I would like to thank Stephanie Roth, Freya Tasch and Dr. Michael Schäbitz for constructive criticism and stimulating comments.
References

Adenauer K (1950) Regierungserklärung des Bundeskanzlers Adenauer in der 98. Sitzung des Deutschen Bundestages vom 08. November 1950. https://www.konrad-adenauer.de/quellen/erklaerungen/1950-11-08-regierungserklaerung. Accessed 13 Mar 2020
Aktuelle Kamera vom 01. Mai 1989. https://www.youtube.com/watch?v=hS5w6cVypik. Accessed 13 Mar 2020
Assmann J (2002) Das kulturelle Gedächtnis. Schrift, Erinnerung und politische Identität in frühen Hochkulturen. C.H. Beck, München
Autorenkollektiv (1980) Zionismus. In: Institut für Internationale Beziehungen an der Akademie für Staats- und Rechtswissenschaften der DDR (ed) Wörterbuch der Aussenpolitik und des Völkerrechts. Dietz, Berlin (Ost)
Beleites J (2013) Versuche über die Konterrevolution. Der 17. Juni 1953 in den Geschichtslehrbüchern der DDR. Horch Guck 79(1/2013):49–53
Bischof G (2003) Eindämmung und Koexistenz oder „Rollback“ und Befreiung. Die Vereinigten Staaten, das Sowjetimperium und die Ungarnkrise im Kalten Krieg, 1948–1956. In: Schmidl EA (ed) Die Ungarnkrise 1956 und Österreich. Böhlau, Wien/Köln/Weimar, pp 101–127
Blank B (2014) „Deutschland einig Antifa?“. „Antifaschismus“ als Agitationsfeld von Linksextremisten. Nomos, Baden-Baden
BStU, MfS, AOP 9745/65
BStU, MfS, AU 14699/69, Bd 4
BStU, MfS, AU 14699/69, Bd 8
Danyel J (1993) Die geteilte Vergangenheit. Gesellschaftliche Ausgangslagen und politische Disposition für den Umgang mit Nationalsozialismus und Widerstand in beiden deutschen Staaten nach 1949. In: Kocka J (ed) Historische DDR-Forschung. Aufsätze und Studien. Akademie Verlag, Berlin, pp 129–147
Das unsichtbare Visier (2009) 16 Teile. DDR: Fernsehen der DDR, 1973–1979, Regie: Peter Hagen. DVD, Studio Hamburg
Demke E (2011) „Antifaschistischer Schutzwall“ – „Ulbricht-KZ“. Kalter Krieg der Mauerbilder. In: Henke K-D (ed) Die Mauer. Errichtung, Überwindung, Erinnerung. dtv, München, pp 96–110
Der Brockhaus – Die Enzyklopädie (1997) In 24 Bänden, vol 7. Brockhaus, Leipzig/Mannheim
Dimitroff G (1958) Die Offensive des Faschismus und die Aufgaben der Kommunistischen Internationale. Bericht auf dem VII. Weltkongress der Kommunistischen Internationale (2. August 1935). In: Institut für Marxismus-Leninismus beim ZK der SED (ed) Ausgewählte Schriften, vol 2. Dietz, Berlin (Ost), pp 523–558
Entman RM (1993) Framing. Toward clarification of a fractured paradigm. J Commun 43(4):51–58
For Eyes Only (Streng Geheim) (2013) DDR: DEFA-Studio für Spielfilme, 1963, Regie: János Veiczi. DVD, Icestorm Entertainment
Gelfert A (2018) Fake news: a definition. Informal Logic 38:84–117
Gerber J (2016) Ein Prozess in Prag. Das Volk gegen Rudolf Slánský und Genossen. Vandenhoeck & Ruprecht, Göttingen
Götz-Votteler K, Hespers S (2019) Alternative Wirklichkeiten. Wie Fake News und Verschwörungstheorien funktionieren und warum sie Aktualität haben. Transcript, Bielefeld
Harenberg W (1990) „Wir sind keine Helden gewesen“. Der frühere Volkskammer-Präsident Horst Sindermann über Macht und Ende der SED. Der Spiegel vom 07. Mai 1990 (19/1990):53–64
Hartmann A, Eggeling W (1998) Sowjetische Präsenz im kulturellen Leben der SBZ und frühen DDR 1945–1953. de Gruyter, Berlin
Heidemeyer H (2008) „Antifaschistischer Schutzwall“ oder „Bankrotterklärung des Ulbricht-Regimes“? Grenzsicherung und Grenzüberschreitung im doppelten Deutschland. In: Wengst U, Wentker H (eds) Das doppelte Deutschland. 40 Jahre Systemkonkurrenz. Ch. Links, Berlin, pp 87–109
Henke K-D (2011) Die Berliner Mauer. In: Henke K-D (ed) Die Mauer. Errichtung, Überwindung, Erinnerung. dtv, München, pp 11–31
Hepfer K (2015) Verschwörungstheorien. Eine philosophische Kritik der Unvernunft. Transcript, Bielefeld
Herf J (2016) Undeclared Wars with Israel. East Germany and the West German Far Left. Cambridge University Press, New York
Holzweißig G (1995) Die Presse als Herrschaftsinstrument der SED. In: Deutscher Bundestag (ed) Materialien der Enquete-Kommission „Aufarbeitung von Geschichte und Folgen der SED-Diktatur in Deutschland“, Bd II/3. Nomos, Baden-Baden, pp 1689–1722
Iken K (2016) Doppelspion Horst Hesse. Der James Bond des Ostens. Spiegel-Online vom 11. Mai 2016. https://www.spiegel.de/geschichte/for-eyes-only-vorbild-horst-hesse-der-james-bond-des-ostens-a-1091394.html. Accessed 13 Mar 2020
Keil L-B, Kellerhoff SF (2017) Fake News machen Geschichte: Gerüchte und Falschmeldungen im 20. und 21. Jahrhundert. Ch. Links, Berlin
Koenen G (2017) Die Farbe Rot. Ursprünge und Geschichte des Kommunismus. C.H. Beck, München
Koenen G (2019) Mythen des 19., 20. und 21. Jahrhunderts. In: Heilbronn C et al (eds) Neuer Antisemitismus? Fortsetzung einer globalen Debatte. Suhrkamp, Berlin, pp 90–127
Kubina M (2011) Die SED und ihre Mauer. In: Henke K-D (ed) Die Mauer. Errichtung, Überwindung, Erinnerung. dtv, München, pp 83–95
Leide H (2005) NS-Verbrecher und Staatssicherheit. Die geheime Vergangenheitspolitik der DDR. Vandenhoeck & Ruprecht, Göttingen
Leide H (2019) Auschwitz und Staatssicherheit. Strafverfolgung, Propaganda und Geheimhaltung in der DDR. BStU, Berlin
Lenin WI (1976) Drei Quellen und drei Bestandteile des Marxismus. In: Institut für Marxismus-Leninismus beim ZK der SED (ed) Karl Marx, Friedrich Engels. Ausgewählte Schriften in zwei Bänden, vol 1. Dietz, Berlin (Ost), pp 11–16. [urspr. 1913]
Leonhard W (2014) Die Revolution entlässt ihre Kinder. Kiepenheuer & Witsch, Köln. [urspr. 1955]
Marxen K, Werle G (1999) Die strafrechtliche Aufarbeitung von DDR-Unrecht. Eine Bilanz. de Gruyter, Berlin/New York
Müller P, Denner N (2019) Was tun gegen Fake News. Friedrich-Naumann-Stiftung für die Freiheit, Potsdam/Babelsberg
Münkler H (2009) Antifaschismus als Gründungsmythos der DDR. Abgrenzungsinstrument nach Westen und Herrschaftsmittel nach Innen. In: KAS (ed) Der Antifaschismus als Staatsdoktrin der DDR. KAS, Sankt Augustin/Berlin, pp 31–47
Neues Deutschland (1961a/08/13) Beschluss des Ministerrates der Deutschen Demokratischen Republik. In: Neues Deutschland vom 13. August 1961, p 1
Neues Deutschland (1961b/08/14) Richtig und notwendig. In: Neues Deutschland vom 14. August 1961, p 1
Neues Deutschland (1961c/08/14) Schwarzer Tag für die Kriegstreiber. Die Weiche ist auf Frieden gestellt. Wir werden noch einen Zahn zulegen. Arbeitertat: Produktionsrekorde. In: Neues Deutschland vom 14. August 1961, p 1
Neues Deutschland (1961d/08/14) Bildunterschrift. In: Neues Deutschland vom 14. August 1961, p 1
Neues Deutschland (1962/07/08) Informationen für Herrn Rusk und Lord Home und für die Außenminister aller neutralen Staaten. In: Neues Deutschland vom 08. Juli 1962, pp 1 f
Neumann A (2019) Von Indianern, Geistern und Parteisoldaten. Eskapistische DDR-Fernsehmehrteiler der 1980er-Jahre. bebra wissenschaft, Berlin
Popper KR (1992) Die offene Gesellschaft und ihre Feinde, vol II. UTB, Tübingen. [urspr. 1957]
Rapoport L (1992) Hammer, Sichel, Davidstern. Judenverfolgung in der Sowjetunion. Ch. Links, Berlin
Rendezvous mit Unbekannt (2016) 11 Teile. DDR: Deutscher Fernsehfunk (DFF), 1969, Regie: János Veiczi. DVD, Studio Hamburg
Sabrow M (2016) Erich Honecker. Das Leben davor. 1912–1945. C.H. Beck, München
Satjukow S (2002) „Früher war das eben der Adolf . . .“. Der Arbeiterheld Adolf Hennecke. In: Satjukow S, Gries R (eds) Sozialistische Helden. Kulturgeschichte von Propagandafiguren in Osteuropa und der DDR. Ch. Links, Berlin, pp 115–132
Schulze M (2019) Desinformation – Vom Kalten Krieg zum Informationszeitalter. In: Bundeszentrale für politische Bildung: Dossier Digitale Desinformation (02.05.2019). https://www.bpb.de/gesellschaft/digitales/digitale-desinformation/290487/desinformation-vom-kalten-krieg-zum-informationszeitalter. Accessed 10 Mar 2020
Stöver B (2006) „Das ist die Wahrheit, die volle Wahrheit“. Befreiungspolitik im DDR-Spielfilm der 1950er- und 1960er-Jahre. In: Lindenberger T (ed) Massenmedien im Kalten Krieg. Akteure, Bilder, Resonanzen. Böhlau, Köln/Weimar/Wien, pp 49–76
Stöver B (2014) Politik der Befreiung? Private Organisationen des Kalten Krieges. Das Beispiel Kampfgruppe gegen Unmenschlichkeit (KgU). In: Creuzberger S, Hoffmann D (eds) „Geistige Gefahr“ und „Immunisierung der Gesellschaft“. Antikommunismus und politische Kultur in der frühen Bundesrepublik. de Gruyter Oldenbourg, München, pp 215–228
TKA (Theorie, Kritik und Aktion) (2018) Überlegungen zur Kontinuität von Antisemitismus in der deutschen Linken. In: (K)eine Diskussion! Antisemitismus in der radikalen Linken, Berlin, pp 11–21
Voigt T, Erler P (2011) Medizin hinter Gittern. Das Stasi-Haftkrankenhaus in Berlin-Hohenschönhausen. Jaron, Berlin
von Bilavsky J (2009) Joseph Goebbels. Rowohlt, Hamburg
Weber H (1983) Kommunismus in Deutschland 1918–1945. WBG, Darmstadt
Wenzke R (2013) Ulbrichts Soldaten. Die Nationale Volksarmee 1956–1971. Ch. Links, Berlin
Winkler HA (1990) Der Weg in die Katastrophe. Arbeiter und Arbeiterbewegung in der Weimarer Republik 1930–1933. Dietz, Bonn
Andreas Neumann, Dr., wrote the article “The Marxist-Leninist Definition of Fascism and the
Berlin Wall: Individual Consequences of Fake News and Conspiracy Theories Spread by the Media.” He is a research assistant at the Berlin-Hohenschönhausen Memorial and works in the fields of history and worldview of communism, media representation of ideology, and anti-Semitism.
Caution: Possible “Fake News” – A Technical Approach for Early Detection
6
Albert Pritzkau and Ulrich Schade
Abstract
This chapter supplements the discussion of fake news in this book with an explanation of whether, and under what conditions, it is possible to automatically detect fake news or at least to mark corresponding posts with a warning. An important goal of this presentation is to keep the explanations, especially the information-science aspects, as generally understandable as possible. The explanations are given with reference to a tool which, within a limited thematic domain, performs such marking.
6.1
Introduction
The propagation of so-called fake news is not a new phenomenon, but the associated effects are intensified by the fact that fake news is (also) spread via social media and can thus potentially reach and influence a great many consumers in a very short time. In this chapter, however, the focus is not on the phenomenon of fake news per se or on the problems of its dissemination, but on the question of whether and how a warning against fake news can be technically realized. A corresponding approach is presented. The technical basics are explained as comprehensibly as possible. This also applies to the limits and the boundary conditions to which such an approach is subject.

The chapter is divided into five sections. Following this introduction, the second section defines and establishes what we mean by "fake news". Then, in the third section, we explain our application for warning against fake news and how it works. In the fourth section, we discuss how the application can be integrated into a larger overall system and what the overall system needs to contribute in order for the fake news detection to work. In the concluding fifth section, we again show the limitations to which the application is subject.

A. Pritzkau (✉) · U. Schade, Forschungsgruppe Informationsanalyse, Fraunhofer-Institut für Kommunikation, Informationsverarbeitung und Ergonomie FKIE, Wachtberg, Germany. E-mail: [email protected]; [email protected]
6.2
Fake News – A Definition
The Duden gives the meaning of "fake news" as follows1: In den Medien und im Internet, besonders in sozialen Netzwerken, in manipulativer Absicht verbreitete Falschmeldungen. (False reports propagated with manipulative intent in the media and on the Internet, especially on social networks.)
This corresponds to the definition in the German Wikipedia, which refers to the Duden definition2: Als Fake News [. . .] werden manipulativ verbreitete, vorgetäuschte Nachrichten oder Falschmeldungen bezeichnet, die sich überwiegend im Internet, insbesondere in sozialen Netzwerken und anderen sozialen Medien zum Teil viral verbreiten. Zunehmend wurde Fake News auch zu einem politischen Schlagwort und Kampfbegriff. Der Rechtschreibduden, der den Begriff 2017 in die 27. Ausgabe aufnahm, definiert ihn als „umgangssprachlich für in den Medien und im Internet, besonders in den Social Media, in manipulativer Absicht verbreitete Falschmeldungen.“ (Fake news [. . .] is the term used to describe manipulatively disseminated, feigned news or false reports that spread predominantly on the Internet, especially on social networks and other social media, in some cases virally. Increasingly, fake news has also become a political buzzword and fighting term. The Duden spelling dictionary, which included the term in its 27th edition in 2017, defines it as “colloquial for false news spread with manipulative intent in the media and on the Internet, especially on social media”.)
We would like to adopt the definition proposed by the Duden to a large extent, but not without referring to the discussion between Zimmermann and Kohring on the one hand (Zimmermann and Kohring 2018; Kohring and Zimmermann 2019) and Scholl and Völker (2019) on the other, which profitably supplements this definition. We thus assume that the following four points apply: (a) fake news uses the format of news, (b) it is not true in terms of content, (c) it achieves a high degree of dissemination through the use of "social media", and (d) it is disseminated to
1 https://www.duden.de/rechtschreibung/Fake_News
2 https://de.wikipedia.org/wiki/Fake_News
achieve a certain effect and to manipulate the recipient. Not all definitions of fake news include all four of these points. For example, Lazer et al. (2018, p. 2) limit themselves to points (a) and (c) ("We define 'fake news' to be fabricated information that mimics news media content in form but not in organizational process or intent."). Point (d) is explicitly attributed only to "disinformation" ("Fake news overlaps with other information disorders, such as misinformation (false or misleading information) and disinformation (false information that is purposely spread to deceive people)", Lazer et al. 2018, p. 1). Since there are different definitions of fake news, we explain the four points mentioned above in more detail below. The use of the news format implies that the content is postulated as true and is accordingly linked to a claim of truth. This is not the case, for example, for narratives, even if they are about real people. Nor is it the case if the information is explicitly marked as uncertain ("I picked up the following rumour about this: . . ."). In the course of this chapter, however, we will have to qualify the assumption that fake news uses the news format: although at first glance, and as a rule, fake news seems to adhere to the news format, its choice of words and the language patterns it uses often deviate from it, which ultimately contributes to the possibility of detecting fake news in an automated way. In any case, what is characteristic of fake news is that, contrary to the truth claim derived from the use of the news format, the content is not true. We use an intuitive notion of truth here, under which a statement is false if it contradicts the factual situation. For a deeper and more detailed discussion of truth, we refer to Kolmer (2017).
In addition to the claim of truth, which cannot be sustained under scrutiny, we presuppose for fake news that it is spread virally via social media, since without such spreading the potential harm, or, from the point of view of its authors, the benefit, of fake news would not arise. A further element of our understanding of the term is the intention associated with it. The term "fake news" must be distinguished from numerous other terms. Fake news is not an accidentally produced error (example: "Bayer Leverkusen dementierte inzwischen die Meldung (. . .), dass Michael Jackson zu Bayer Leverkusen wechselt." (Bayer Leverkusen has since denied the report (. . .) that Michael Jackson is moving to Bayer Leverkusen. – Ulrich von der Osten on June 25, 2010, in the early news of n-tv)). Errors are produced without the intention of deceiving the recipient. This ultimately also applies to what Frankfurt (2005) calls "bullshit", i.e. when someone "reports" something without caring whether it is true or not. In contrast, the dissemination of "disinformation" and "conspiracy myths" is intentional. Disinformation campaigns and conspiracy myths use fake news: "Desinformation ist niemals nur die Verbreitung einer einzelnen Nachricht oder Information. Vielmehr machen mehrere – in den meisten Fällen viele – Einzelteile das Puzzle aus, das dazu führt, dass ein neues, bestimmten Interessen entsprechendes Bild entsteht. Eine einzelne Nachricht kann man als wahr oder falsch bezeichnen, doch die Kombination von mehreren Nachrichten bildet ein Geflecht, das die Macht hat, Meinungen langfristig und oft unbemerkt zu beeinflussen." (Disinformation is never just the spread of a single piece of news or
information. Rather, several – in most cases many – individual pieces make up the puzzle that results in the creation of a new picture that suits particular interests. A single news item can be described as true or false, but the combination of several news items forms a web that has the power to influence opinions in the long term and often unnoticed. – Becker 2003, p. 100). Disinformation campaigns also use hate speech for their dissemination, for example by using hate speech in a tweet to stir up sentiment against another group and then attaching a link behind which, say, fake news can be found as part of a disinformation campaign. People who tend to agree with the hate speech can thus be persuaded to click on the disinformation link and to spread it.
6.3
A Tool to Warn About Possible Fake News
In this section, we present our tool, which can warn about possible fake news under certain constraints, and explain how it works technically. An overview of work on fake news and its detection is provided by Zhou and Zafarani (2018). They explain that there are four starting points for detecting fake news: (a) via the inaccurate information it contains, (b) via the way it is written, (c) via the pattern underlying its spread, and (d) via its source and that source's credibility. We treat fake news detection as a classification task, taking into account the text and the available metadata; we thus use the starting points (b) and (c). Classification involves categorizing posts as "fake" or "non-fake". The categorization of e-mails as "spam" or "non-spam" is likewise a classification task, as is categorizing texts according to the language in which they are written, asking whether texts contain hate speech, or asking whether the sentiment of a text is neutral, positive, or negative (sentiment analysis). All these tasks can be understood as classification tasks. The tool we use works with machine learning methods and algorithms. It was originally developed to investigate which of these methods produce the best results for which classification task. Accordingly, the tool has been used for hate speech detection (Kent 2018; Krumbiegel 2019), named entity recognition (Claeser et al. 2018), and affective content detection (Claeser 2019). There are different machine learning methods, but we have restricted ourselves to the so-called "supervised" approaches. "Unsupervised" methods try to use similarities between texts to arrange them in "clusters"; see for instance Blei (2012) on "topic modeling". "Supervised" procedures, on the other hand, use corpora of examples and generally work as follows: In a first step, corpora have to be created.
These corpora consist of examples that are used to train the application; this training is what is referred to here as learning. For each category or class to which objects are to be assigned later, one needs a corpus of corresponding examples. In the case of fake news classification, these are corpora of news, more precisely two corpora: one with examples of genuine news and one with examples of fake news. The creation of the corpora is done by humans and therefore involves effort.
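The common shape of all these classification tasks, labeled example corpora in, a trained classifier out, can be illustrated with a minimal sketch. This is not the authors' tool: the Naive-Bayes-style scoring, the class labels, and the tiny corpora below are all invented for illustration, and a real system would train on thousands of examples with a proper ML library.

```python
from collections import Counter
import math

def train(corpora):
    """Train a tiny Naive-Bayes-style classifier.
    corpora maps each class label to a list of example texts."""
    counts = {lbl: Counter(w for t in txts for w in t.lower().split())
              for lbl, txts in corpora.items()}
    total = sum(len(txts) for txts in corpora.values())
    priors = {lbl: len(txts) / total for lbl, txts in corpora.items()}
    vocab = {w for c in counts.values() for w in c}
    return counts, priors, vocab

def classify(text, model):
    """Return the class whose word statistics best explain the text."""
    counts, priors, vocab = model
    def score(lbl):
        n = sum(counts[lbl].values())
        s = math.log(priors[lbl])
        for w in text.lower().split():
            # add-one smoothing: unseen words must not zero out a class
            s += math.log((counts[lbl][w] + 1) / (n + len(vocab)))
        return s
    return max(counts, key=score)

# The same machinery handles spam detection ...
spam_model = train({
    "spam":     ["win money now", "free money offer"],
    "non-spam": ["meeting agenda attached", "see you at the meeting"],
})
print(classify("free money now", spam_model))         # → spam

# ... and fake-news detection; only the training corpora change.
news_model = train({
    "fake":     ["shocking truth they hide", "corrupt liar exposed shocking"],
    "non-fake": ["parliament passed the budget", "the committee met on tuesday"],
})
print(classify("shocking corrupt liar", news_model))  # → fake
```

The point of the sketch is that "fake vs. non-fake" is, from the algorithm's perspective, the same problem as "spam vs. non-spam": only the example corpora differ.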
The extent of this effort depends on which machine learning algorithm is to be used to train the system. Algorithms of the latest type, which rely on deep learning, require many more training examples than classical algorithms such as SVMs (support vector machines). More information on deep learning can be found in Patterson and Gibson (2017); details on classical algorithms are given in Manning et al. (2008). The "deep" algorithms thus have the disadvantage that one needs many more examples in order to use them. In return, they can determine for themselves, from these examples, the features, such as properties of the text, on which the decision "fake news or not" is ultimately based. With classical algorithms, on the other hand, these features have to be predefined. In a second step, the algorithms learn, from the self-extracted or predefined features, feature patterns that are typical for one of the categorization classes. In the simplest case, these patterns state for each feature whether or not it is present in the represented class. The presence or absence of the features can then be represented in vectors, with which the methods compute. Once the feature patterns of the categorization classes have been trained, the application is evaluated. For the evaluation, parts of the example corpora are "withheld" beforehand, i.e. not used for training. These evaluation examples, for which it is known how they are to be classified since they are taken from the example corpora ("target evaluation"), are processed by the trained classifier ("actual evaluation"). The comparison between the results provided by the classifier, the actual evaluations, and the target evaluations shows how well the trained classifier performs. If the evaluation results satisfy a predefined application-related requirement, the classifier can be used. If they do not, training must be continued, if necessary with larger training corpora.
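The cycle just described, predefined features, binary presence vectors, training, and evaluation on withheld examples, can be sketched in a few lines. The feature list, the toy corpora, and the held-out examples below are invented for illustration; the point is only to show how "target" and "actual" evaluations are compared to obtain an accuracy figure.

```python
# Hypothetical feature words; a real application would choose them per topic.
FEATURES = ["shocking", "liar", "corrupt", "budget", "committee", "minister"]

def vectorize(text):
    """Binary presence/absence vector over the predefined features."""
    words = set(text.lower().split())
    return [1 if f in words else 0 for f in FEATURES]

def train(examples):
    """Learn, per class, how often each feature is present (the 'pattern')."""
    patterns = {}
    for label, texts in examples.items():
        vecs = [vectorize(t) for t in texts]
        patterns[label] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return patterns

def classify(text, patterns):
    vec = vectorize(text)
    # pick the class whose learned pattern agrees most with the observed vector
    def agreement(p):
        return sum(pi if vi else 1 - pi for vi, pi in zip(vec, p))
    return max(patterns, key=lambda lbl: agreement(patterns[lbl]))

train_set = {
    "fake":     ["shocking liar exposed", "corrupt shocking claims"],
    "non-fake": ["budget passed by committee", "minister presents budget"],
}
# Withheld examples with known labels ("target evaluation"):
held_out = [("corrupt liar caught", "fake"),
            ("committee approves budget", "non-fake")]

patterns = train(train_set)
actual = [classify(text, patterns) for text, _ in held_out]   # "actual evaluation"
accuracy = sum(a == t for a, (_, t) in zip(actual, held_out)) / len(held_out)
print(accuracy)  # → 1.0 on this tiny, hand-picked evaluation set
```

If the accuracy fell below a predefined threshold, one would, as described above, continue training with larger corpora rather than deploy the classifier.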
The brief outline of how the application works and how it is trained already reveals a major limitation: the performance of the application depends on the corpora that are set up. These corpora must not only be large enough to provide the algorithms with the necessary data; they must also provide suitable examples so that the right thing can be learned. A corpus of fake news relating to, say, celebrity scandals, be they about royals or Oscar-winning actors, is certainly not suitable for identifying fake news intended to influence an election. This can be stated in general terms: if the corpora differ in whether their examples contain a phenomenon A (corpus 1) or not (corpus 2), then the classifier trained with these examples can be used to classify by phenomenon A. Apart from an assignment in the run-up to the 2017 German federal election, we initially used our tool not to detect fake news or disinformation but to participate in classification competitions at scientific conferences (Claeser et al. 2018; Kent 2018; Claeser 2019; Krumbiegel 2019), which required classifying according to other phenomena. Then, as media attention focused more on the possibilities of fake news detection, we investigated the extent to which the tool could compete in this area as well (Pritzkau et al. 2021). In all these cases, training was carried out with corpora specified by the respective competition. A thematic restriction of the application, for example the detection of fake news that threatens to influence a certain election, allows the user to define the features already mentioned, which in addition has the advantage that classical ML methods
can be chosen for the application. On the one hand, this reduces the number of necessary examples in the corpora; on the other hand, the results of the application, as well as its operation, are easier to interpret. This is briefly illustrated by the example of the fake news detection contest in which we participated with the deep learning variant of our tool (Pritzkau et al. 2021). In a contest, the initial corpora are provided by the organizer, as mentioned above. These corpora are used to train the applications that participate in the competition. The trained applications are then uploaded to a platform run by the organizer and confronted with objects that were not included in the training corpora, but for which the organizer knows the correct assignment. The application that correctly assigns the most of these new objects wins. The problem with the competition in question, however, was that in the corpora, both the training corpora and the evaluation corpora, the genuine news items had been created at a different time than the fake news, and all examples carried a timestamp. Since deep learning methods determine for themselves the features from which they create categorization patterns, the applications, ours as well as those of most other participants, learned during training that the timestamp is the best feature for distinguishing genuine news from fake news. According to the organizer, our system achieved an "area under the ROC curve" (AUC) score of 0.924 on the evaluation data without use of the timestamp, and an AUC score of 0.996 after including it. If we work with the classical variant of the tool and accordingly predefine the features ourselves, from which the application then learns the categorization patterns, we use a mixture of linguistic features and metadata features. In the following, we briefly explain which types of features can be considered, using an example from the 2016 US election.
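Before turning to the feature example, the timestamp effect just described is easy to reproduce in miniature. The sketch below implements the AUC as the probability that a randomly chosen positive example is ranked above a randomly chosen negative one, and applies it to invented toy data in which the (hypothetical) timestamps happen to separate the classes perfectly, just as in the competition corpora.

```python
def auc(labels, scores):
    """Area under the ROC curve, computed as the probability that a
    positive example receives a higher score than a negative one
    (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    pairs = [(p, n) for p in pos for n in neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Invented toy data: label 1 = fake, 0 = genuine. Suppose the fake examples
# in the corpus were all created later than the genuine ones.
labels        = [1,    1,    1,    0,    0,    0]
content_score = [0.9,  0.4,  0.7,  0.2,  0.6,  0.3]   # imperfect text-based model
timestamps    = [2012, 2013, 2012, 2009, 2010, 2009]  # leaks the label perfectly

print(auc(labels, content_score))  # ≈ 0.889: decent but imperfect
print(auc(labels, timestamps))     # 1.0: the leaked feature "wins"
```

A learner free to pick its own features will latch onto the timestamp, which is exactly why the inflated score says nothing about detecting fake news in the wild.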
We did not examine the US election with our tool, but analogous German-language examples; the US election, however, provides examples via the Mueller Report (Mueller 2019) that illustrate very well the features we want to consider. The example shown in Fig. 6.2 is an advertisement directed against Hillary Clinton. It is taken from the Mueller Report (Mueller 2019, p. 25 and footnote 57) and was distributed via Facebook to all US citizens aged "18–65+" "who like Being Patriotic" as well as to "Friends of people who are connected to Being Patriotic" (cf. Fig. 6.1). That the post is not the view of a private individual but a form of advertisement can be seen from the label "Sponsored" (Fig. 6.2, top left). This label is slightly greyed out and is not necessarily noticed by readers in the target audience, so the post is easily taken for a private opinion. In total, the Russian Internet Research Agency placed 3500 such ads on Facebook for about $100,000 (Mueller 2019, p. 25). Among the linguistic features that could have been used to identify fake news in the context of the 2016 US election is the term "patriot". In the example itself, "patriotic" is used. This shows that it is useful to look at word stems instead of the actual word forms that occur. This is even more true for German than for English, because German has richer declension and conjugation. At this point, however, it should be noted that the linguistic work of determining word stems (lemmatization) only yields better results when classical methods are used. Modern deep learning methods that determine their own features use pre-trained language models called embeddings, such as ELMo (from Allen
6
Caution: Possible “Fake News” – A Technical Approach for . . .
119
Fig. 6.1 Distribution-specific metadata with target group definition
AI) (Peters et al. 2018) or BERT (from Google) (Devlin et al. 2018). In these language models, declension and conjugation are already represented. It should also be noted that we only want to show here with our examples what kind of features are used for the classical methods. Which features are actually used depends on the application and what one wants to achieve with this application. In general, a single characteristic does not say very much. Only when a large number of features point in the same direction, the classifier can come to a good decision. It is obvious that the relevance of lexical features, i.e. the presence of certain words or word stems, is strongly linked to the topic on which the fake news is to be found. This includes specifically offensive terms, such as the term “liar” in the example. While the use of potentially offensive terms violates the news format which is a characteristic of fake news as well, it contributes to emotionalization, which in turn increases the likelihood of dissemination. Specific offensive terms in German-language fake news against persons or groups that can be assigned to the left-wing spectrum are, for example, “Linksmade” (leftfacing maggot) or “antifa terrorist”. Conversely, “Fascist” or “Nazi” are used against persons or groups of the right-wing spectrum. In fake news, which is generally directed
Fig. 6.2 Example of political advertising distributed via Facebook during the 2016 US election campaign
against our democratic order, "corrupt" is also frequently used as an adjective to characterize persons, groups or organizations that stand up for democracy. These examples show particularly clearly that the occurrence of specific words alone is not sufficient to identify fake news, since such words naturally also occur in regular news. Only when other features also point in the direction of fake news will the corresponding classification hold up.

The factual style, which is one of the characteristics of the news format, uses fewer descriptive elements and especially fewer adjectives than texts that are intended to appeal to readers emotionally rather than to inform them. Since fake news in some cases de-emphasizes the factual style of the news format in favor of a more emotional style, and thus also in favor of a higher probability of dissemination, fake news should on average contain more descriptive elements and thus more adjectives, especially those in
the superlative. The frequency of adjectives, say in relation to the frequency of nouns, can thus be used as a feature. It should be noted here that descriptions in fake news sometimes take an external perspective, so that things taken for granted in Germany are named in an extravagant manner ("the current Federal Chancellor Angela Merkel and her CDU and CSU Union"), and that some of the words used are simply superfluous ("the mass murder by the murdering refugee and asylum seeker Anis Amri").

The consideration of errors as possible features is not about a general spelling check ("Why do you discriminate against people who cannot spell?"), but about finding circumstantial evidence that a news item was written by a person who does not have German as their mother tongue but pretends to. This is often the case with fake news intended to shake confidence in our democratic order. People who write in German without having learned it as their mother tongue make specific mistakes; see, for example, Böttger (2008) on the mistakes that native speakers of Russian typically make when writing in German. In general, such errors concern those aspects of German that are acquired without difficulty in first language acquisition but not in second language acquisition, such as correct declension, correct conjugation, or the choice of the article or its contraction with a preposition. Since second language acquisition is easier when principles of the mother tongue also apply to the second language, but harder when the two contradict each other, certain errors point to specific mother tongues. Two simple examples illustrate this: the hotel review "Alles süper!", for instance, indicates that the author's native language is Turkish, which, together with the sender's indication "Heidi from Heidelberg", raises doubts about the correctness of the review.
The omission of the article in "gegenüber Deutsche Tageszeitung", for example, together with other features, points to a Slavic mother tongue, in this case Russian.

In addition to the linguistic features, meta features can and should also be considered; they often have high informative value. In the given example from the 2016 US election, there are two very clear meta features that indicate fake news. First, as mentioned, the post is labeled "Sponsored". However, this labeling is kept comparatively unobtrusive and may not have been noticed by many readers, so the post looks like an opinion piece by an ordinary US citizen, aimed at a specific group of fellow citizens, namely those who consider themselves patriots. Secondly, the payment for this advertisement was made in rubles, although this is not visible to the user. We can only speculate about the reasons for the payment in rubles; it could just as easily have been made in dollars.

An interesting piece of metadata in general is the time at which a post is published on social media. Such a timestamp may indicate, for example, that the post was made in a remote time zone. However, such a conclusion only gains significance when the sending behavior of a participant is considered as a whole. Individuals who create fake news for a living often have regular working hours, which may be inferred from their sending behavior. For example, not posting on holidays specific to a particular country may indicate where the posts are sent from. The sending pattern of an account is also of particular interest if it broadcasts virtually all the time, every hour of the day and every day of the week. In such a case, it can be assumed that the account is a bot. Bots as senders indicate that there is
an intention behind the posts, an intention that the dissemination of fake news also serves. Bots that are professionally set up to spread fake news, for example to influence elections in other countries, will not necessarily be detectable via their sending pattern, as that pattern can be shaped by whoever sets the bot up. Other parameters typical of bots are, however, less easy to obfuscate. These can be detected via network analyses that evaluate who has whom as a friend or follower, or who quotes whom (the @-mention network). Bots have few friends and also few followers. To disguise this, entire networks of bots are created whose members list each other as friends or followers and quote each other. The resulting structures are, however, visible via special network analyses, so that such analyses can deliver results that can in turn be used as features in the detection of fake news.
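The always-on sending pattern mentioned above lends itself to a simple heuristic. The following sketch is illustrative only; the function name and thresholds are our own assumptions, not part of the deployed tool:

```python
from datetime import datetime

def looks_like_bot(timestamps, hour_coverage=0.9, min_posts=100):
    """Heuristic sketch: an account that posts in nearly every hour of
    the day, on every day of the week, shows no plausible human sleep or
    work pattern. Thresholds are illustrative, not empirically calibrated."""
    if len(timestamps) < min_posts:
        return False  # too little data to judge
    hours = {t.hour for t in timestamps}      # distinct hours of day used
    days  = {t.weekday() for t in timestamps} # distinct weekdays used
    return len(hours) >= 24 * hour_coverage and len(days) == 7

# A round-the-clock account vs. an account with office-hour habits.
bot_times   = [datetime(2021, 3, d, h) for d in range(1, 8)  for h in range(24)]
human_times = [datetime(2021, 3, d, h) for d in range(1, 29) for h in range(9, 18)]
```

As the text notes, a professional operator can shape this pattern deliberately, which is why such a feature only carries weight in combination with others, such as the network features discussed above.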
6.4
The Tool as Part of a System for Viewing Social Media
The discussion of metadata shows that the tool for detecting fake news must be integrated into an overall system. In our case, we use a system that monitors Twitter as an example of a social media platform. The monitoring is based on an exploratory search, which is characterized by successive queries interspersed with phases of meaning assignment or interpretation (Pirolli and Card 1999). For the queries, the overall system allows the specification of search terms, similar to a Google search. It then extracts from the stream of tweets those that contain one or more of these search terms. Currently, the overall system uses the Twitter API v1.1 to retrieve data based on the given search terms. The interface is limited to 1% of the total daily data volume of approximately 500,000,000 tweets. This limitation is not a problem, however, as long as one avoids the very most common terms, such as "Trump" or "Corona", in the search. Those of the retrieved tweets that link to web pages are stored together with the text of the linked web page, analyzed, evaluated, and ultimately presented to the user together with the "fake news" rating.

For the phases of meaning assignment and interpretation, the system supports the user with statistics and visualizations. For example, the user is shown terms that occur particularly frequently in the posts the system found for the given search terms. Using these, the user can often identify search terms, such as hashtags, with which relevant posts can be found more precisely, i.e. without non-relevant posts being displayed. Further filtering is supported in the same way.

The fake news detection tool is responsible for rating the posts. The rating is shown to the user via a kind of configurable traffic light (green = probably no fake news / yellow = possibly fake news / red = probably fake news). Before its integration into the overall system, the rating tool is trained with the sample corpora as explained above.
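The traffic-light rating can be sketched as a simple threshold mapping. The default threshold values below are made up for illustration; in the system they would be set via the configuration described next:

```python
def traffic_light(fake_probability, yellow_at=0.5, red_at=0.8):
    """Map the classifier's fake-news score (0..1) to a configurable
    traffic-light rating. Threshold defaults are illustrative only."""
    if fake_probability >= red_at:
        return "red"     # probably fake news
    if fake_probability >= yellow_at:
        return "yellow"  # possibly fake news
    return "green"       # probably no fake news

print(traffic_light(0.92))                 # red
print(traffic_light(0.6))                  # yellow
print(traffic_light(0.6, yellow_at=0.7))   # green under stricter thresholds
```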
The rating system can be configured via threshold values that determine how similar the feature pattern of an incoming tweet must be to the feature pattern for fake news, or to the pattern for correct news, in order to be displayed in green, yellow or red. If the tool is integrated into another overall system, one that looks at posts from other platforms, evaluates web pages via a crawl, or
has another type of input channel, the tool can be adapted to that environment as well. It is important, however, that the respective overall system can compute the feature patterns for contributions. This requires two analyses in the overall system: a content-linguistic analysis and a metadata analysis. We briefly discuss both types of analysis in the following. The merged result of both analyses is the feature pattern of the received post, which is then evaluated by the actual tool for the detection of fake news.
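The merging of the two analysis results into a single feature pattern might look like the following sketch; all feature names here are hypothetical:

```python
def build_feature_pattern(linguistic, metadata):
    """Merge the results of the content-linguistic analysis and the
    metadata analysis into one flat feature pattern for the classifier.
    The prefix records which analysis each feature came from."""
    pattern = {}
    for prefix, features in (("ling", linguistic), ("meta", metadata)):
        for name, value in features.items():
            pattern[f"{prefix}:{name}"] = value
    return pattern

# Hypothetical analysis outputs for one incoming post.
pattern = build_feature_pattern(
    {"adjective_ratio": 0.31, "offensive_terms": 2},
    {"sponsored": 1, "sender_is_bot_suspect": 0},
)
```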
6.4.1
The Linguistic Analysis of the Content
The content-linguistic analysis is intended to capture which linguistic features are present in a post for the application of fake news detection: Are words used whose occurrence serves, in the given use case, as a feature for the classification? Are other features fulfilled, for example, are a particularly large number of adjectives used, or does the post contain specific linguistic errors? The content-linguistic analysis is carried out using a so-called NLP pipeline (NLP stands for Natural Language Processing), which provides numerous results for the work of the overall system, including answers to the questions about the presence of the features.

The NLP pipeline consists of several modules that are executed sequentially on the text of the post, or more precisely, on its sequence of letters and other characters. In a first module, tokenization takes place: parts of the sequence are combined into tokens. Tokens are words, punctuation marks, and other characters and sequences, such as smileys or links. Determining words as tokens already allows their statistical registration and thus an answer to the question of whether a word like "Linksmade" is part of the text. If this question is addressed directly after tokenization, however, only word forms can be considered and counted: "murder" and "murders" are then two different sequences and would have to be counted separately.

After tokenization, it must be determined in which language the post was written. This can be done lexically (a text with many occurrences of "the" is probably written in English) or via character strings (a text with many occurrences of "ij" is probably written in Dutch). Language determination is straightforward when the entire text is written in only one language and when the choice can be made from a set of pre-specified languages. In our system, however, language changes within a post can also be detected for specific language pairs, such as English-German (cf. Claeser et al.
2018). Once the language is known, POS-tagging and lemmatization modules can be applied to the sequence of tokens. Both are specific to the language used in the contribution. POS tagging (POS stands for "part of speech") is the first step of syntactic analysis: determining the grammatical category of each word in the text. This is less straightforward than it sounds; "light", for example, can be a noun, a verb, or an adjective. For German, POS tagging is on the one hand easier than for English, because nouns (should) start with a capital letter, which simplifies the distinction between, say, "fliegen" ("to fly", verb) and "Fliegen" ("flies", the insects, noun). On the other hand, POS tagging in German has to be able to
handle the splitting off of verb particles: in "ich holte meinen Freund am Bahnhof ab" ("I picked my friend up at the station"), the verb is "abholen" ("to pick up"). Good applications for POS tagging exist for languages such as English and German; the best of them are statistically based (Toutanova et al. 2003). POS tagging makes it possible to answer questions such as how many adjectives are used relative to the number of nouns. The lemmatization module determines the basic forms of the word forms, e.g. "murders" → "murder". This makes it possible to count basic forms (lemmata). In many cases, POS taggers already determine the basic forms for the word tokens.

NLP pipelines also allow further linguistic content analyses. These include the identification of so-called named entities (through a Named Entity Recognition (NER) module), the determination of the tone of a text (neutral vs. positive vs. negative; this is important for evaluative texts) through a sentiment analysis module, comprehensive and deeper syntactic analyses, or the determination of the intention (What is asked for in this question?). Since the analyses of such modules do not contribute to determining values for the features needed for the detection of fake news, we will not discuss them further here; Jurafsky and Martin (2009) provide an overview.

Determining with the NLP pipeline whether linguistic features are present in a text is only necessary for classifiers that have been trained classically. The classifier trained with "Deep Learning", in contrast, is a neural network at whose input layer the text to be evaluated is fed in. Its output layer consists of two nodes, one for the presence of fake news and one for the presence of real news. Since the training of such classifiers involves considerable effort, e.g. in terms of computational resources, one starts with pre-trained networks that represent language models ("embeddings") of the target language (cf.
Sect. 6.3). Freely available embeddings have become better and better in recent years, which is why it is no longer possible to win competitions with classifiers trained in the classical way. For the classifier used in Pritzkau et al. (2021), we used BERT as the embedding and source network and then added nodes to represent the metadata.
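A drastically simplified sketch of the classical pipeline stages discussed in this section (tokenization, lemmatization, POS-based counting) is shown below. The tiny hand-written lexicons stand in for trained modules such as the statistical taggers of Toutanova et al. (2003); a real system would not use such tables:

```python
import re

# Toy lexicons standing in for trained POS-tagging and lemmatization
# modules; purely illustrative, not the tool's actual resources.
POS   = {"the": "DET", "corrupt": "ADJ", "liar": "NOUN",
         "terrible": "ADJ", "refugee": "NOUN", "murders": "VERB"}
LEMMA = {"murders": "murder"}

def tokenize(text):
    # Words and punctuation become separate tokens.
    return re.findall(r"\w+|[^\w\s]", text.lower())

def analyze(text):
    tokens = tokenize(text)
    lemmas = [LEMMA.get(t, t) for t in tokens]       # "murders" -> "murder"
    tags   = [POS.get(t, "X") for t in tokens]       # unknown tokens -> "X"
    nouns, adjs = tags.count("NOUN"), tags.count("ADJ")
    return {"lemmas": lemmas,
            "adj_noun_ratio": adjs / nouns if nouns else 0.0}

result = analyze("The corrupt liar murders!")
```

Counting lemmas rather than raw tokens is what lets "murder" and "murders" contribute to the same lexical feature, and the adjective-to-noun ratio is one of the style features discussed in Sect. 6.3.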
6.4.2
The Metadata Analysis
In parallel to the analysis of content and language, the metadata of a post to be classified is evaluated, with the aim of deriving further indicators for assessing the quality, credibility and reliability of posts, which can likewise be used as features for the automatic detection of fake news. In addition to the assessment of the linguistic content of a contribution, as explained in Sect. 6.4.1, the metadata analysis enables an evaluation of the source. The data obtained can be used to characterize individual actors or to derive dissemination patterns. The structure of the resulting dynamic diffusion networks is used to identify and evaluate controlled information campaigns. The analysis is performed in two steps. Simple metadata, such as the timestamp, are taken directly from the post. For other data, a database is also accessed in which user profiles, their sending behavior and their network data (friends and followers as well as @-mention behavior) are
stored during the training phase, in parallel to the training of the classifier. From this, it can be learned during training what the user profiles of propagators of fake news typically look like.

Network data form the basis for network analysis, which measures relationships at the micro level, such as who quotes whom, in order to derive the existence of social structures at the macro level. At the micro level, individual nodes and the connections between them are examined. Various forms of centrality analysis (degree, closeness, and eigenvector centrality) are applied to first characterize individual nodes in a network, for example as sources (publish new posts), bridges (forward others' messages), or sinks (consume messages but do not disseminate them). The micro-level patterns then allow macrostructures, such as cliques, to be inferred. Figure 6.3, for example, illustrates reference behavior. By visualizing the resulting network, with Twitter accounts as nodes and references (@-mentions) as edges, synthetic structures can be identified, as in the lower left area of the figure, which are characterized by extraordinarily dense linkage. Such structures are indicators of bots.

In addition to the existing content-based approach, we expect the network-based approach to provide further features that can be used in the automatic detection of fake news, as well as useful insights that pave the way for the future development of an overall system for the detection of misleading and harmful information that assesses the credibility of both content and source. We are particularly interested in those
Fig. 6.3 Visualization of synthetic structures from the reference behavior on Twitter
features that indicate strategic maneuvering, such as controlled influence on an upcoming election (Guess et al. 2020). The analysis of information diffusion in online networks has long been the focus of many social network studies (Leetaru 2011). The algorithms presented have already been applied to a number of social phenomena, including political and social movements (Sakaki et al. 2010), mobilization and protest behavior (Colbaugh et al. 2010), and the dynamics of user behavior in social media (Colbaugh and Glass 2012; Lerman and Hogg 2012).
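The mutual-linking structures visible in Fig. 6.3 can be probed with a minimal micro-level measure. The following sketch computes, per account, the out-degree and the share of reciprocated @-mentions; account names and thresholds are invented for illustration:

```python
from collections import defaultdict

def mention_stats(edges):
    """For a directed @-mention edge list (src, dst), compute per-node
    out-degree and the share of mentions that are reciprocated. Densely
    mutual clusters (everyone mentioning everyone back) are one
    indicator of a synthetic bot network."""
    out = defaultdict(set)
    for src, dst in edges:
        out[src].add(dst)
    stats = {}
    for node, targets in out.items():
        mutual = sum(1 for t in targets if node in out.get(t, set()))
        stats[node] = {"out_degree": len(targets),
                       "reciprocity": mutual / len(targets)}
    return stats

# Three bots mentioning each other in a ring vs. one organic user.
edges = [("b1", "b2"), ("b2", "b1"), ("b2", "b3"), ("b3", "b2"),
         ("b1", "b3"), ("b3", "b1"), ("alice", "b1")]
stats = mention_stats(edges)
```

In a real analysis, such values would feed into the centrality measures named above rather than being used in isolation.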
6.5
Conclusion
In this article, we have presented a tool that can be used, under limited conditions, to detect fake news or to point out possible fake news. This tool is integrated into an overall system that allows thematic monitoring of social media. In our article, we have limited ourselves to explaining the technical functioning of the tool and those parts of the overall system that are important for its function.

At this point, however, we should once again point out the limitations to which the tool is subject. One major limitation is that no fact check is performed. The tool does not check whether a news item is true or false; it only assesses whether it could be fake news, on the basis of linguistic features such as word choice and the occurrence of specific errors, as well as on the basis of metadata. The tool thus looks for indications which, if present in larger numbers, lead to a warning about possible fake news. Another limitation, which has also been discussed in detail, is the thematic limitation of the content. The tool learns to detect fake news from example corpora and only reacts to what it could, at least in similar form, extract from the examples. If the corpora contain examples of election interference by fake news, it will warn about fake news related to elections, but not about fake news concerning, say, the origin and spread of the coronavirus.

The tool presented here is an application of machine learning. Such applications operate digitally and on the basis of what could be extracted from the training corpora. Both characteristics can be exploited to deceive the application. What such deceptions can look like can be seen in the simpler example of spam filters: if a spam filter is trained to weed out mail containing the word "potency", the "o" in that word can be replaced by a zero.
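This deception, and a simple counter-measure in the spirit of automated spelling correction, can be sketched as follows; the look-alike table and function names are illustrative, not any real filter's implementation:

```python
# Map common look-alike characters back to the letters they imitate.
LOOKALIKES = str.maketrans({"0": "o", "1": "i", "3": "e", "5": "s", "@": "a"})

def contains_keyword(text, keyword):
    # Naive keyword filter operating on the raw character sequence.
    return keyword in text.lower()

def normalized(text):
    # Normalization step that undoes the look-alike substitution.
    return text.lower().translate(LOOKALIKES)

msg = "Buy p0tency pills now"
assert not contains_keyword(msg, "potency")          # naive filter is fooled
assert contains_keyword(normalized(msg), "potency")  # normalization recovers it
```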
People then still read "potency", just as we can read and understand sentences in which the letters in the middle of each word have been scrambled (Grainger and Whitney 2004, with the telling title "Does the huamn mnid raed wrods as a wlohe?"). For digital processing, however, these are letter sequences that initially have nothing to do with the words that constitute the features for the feature patterns of the decision. Only the use of additional modules resembling those for automated spelling correction would change this. In our case of election influencing, if the application has learned that the label "Sponsored" is a good feature for potential fake news, then this label can be printed in
a letter-spaced manner ("S p o n s o r e d"), for example, which would deceive the application. This type of miscategorization and deception is being discussed as a research question in AI, especially in the field of image recognition, including the question of how to harden applications against such manipulation (Goodfellow et al. 2018). The development of fake news detection systems will lead to fake news being made more sophisticated in order to avoid detection, which will in turn be followed by further development of the detection systems: a kind of developmental spiral. Ultimately, then, fake news detection systems cannot prevent fake news from spreading; they do, however, increase the effort required to generate and spread fake news, which could ultimately (perhaps) lead to less fake news.
References

Becker K (2003) Die Politik der Infosphäre: World-Information.Org. Verlag für Sozialwissenschaften, Bonn
Blei DM (2012) Probabilistic topic models. Commun ACM 55(4):77–84
Böttger K (2008) Die häufigsten Fehler russischer Deutschlerner: Ein Handbuch für Lehrende. Peter Lang, Münster
Claeser D (2019) Affective content classification using convolutional neural networks. In: Proceedings of the 2nd workshop on affective content analysis @ AAAI (AffCon2019), Honolulu
Claeser D, Felske D, Kent S (2018) Token level code-switching detection using Wikipedia as a lexical resource. In: Rehm G, Declerck T (eds) 56th annual meeting of the Association for Computational Linguistics, Melbourne
Colbaugh R, Glass K (2012) Early warning analysis for social diffusion events. Secur Inform 1(1):18
Colbaugh R, Glass K, Gosler J (2010) Some intelligence analysis problems and their graph formulations. Sandia National Laboratories. https://www.researchgate.net/publication/267413817. Accessed on 21.04.2021
Devlin J, Chang M-W, Lee K, Toutanova K (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv. https://arxiv.org/pdf/1810.04805.pdf. Accessed on 07.04.2021
Frankfurt HG (2005) On bullshit. Princeton University Press, Princeton
Goodfellow I, McDaniel P, Papernot N (2018) Making machine learning robust against adversarial inputs. Commun ACM 61(7):56–66
Grainger J, Whitney C (2004) Does the huamn mnid raed wrods as a wlohe? Trends Cogn Sci 8:58–59
Guess AM, Nyhan B, Reifler J (2020) Exposure to untrustworthy websites in the 2016 US election. Nat Hum Behav:1–9. https://doi.org/10.1038/s41562-020-0833-x. Accessed on 03.04.2021
Jurafsky D, Martin JH (2009) Speech and language processing: an introduction to natural language processing, speech recognition, and computational linguistics, 2nd edn. Prentice-Hall, Upper Saddle River
Kent S (2018) German hate speech detection on Twitter. In: Konvens – GermEval 2018, shared task on the identification of offensive language, Wien
Kohring M, Zimmermann F (2019) Die wissenschaftliche Beobachtung aktueller Desinformation. Eine Entgegnung auf Armin Scholls und Julia Völkers Anmerkungen in "Fake News, aktuelle Desinformationen und das Problem der Systematisierung" in M&K 2/2019. Medien Kommun 67(3):319–325
Kolmer P (2017) Wahrheit – Ein philosophischer Streifzug. Aus Polit Zeitgesch 67(13/2017):40–44
Krumbiegel T (2019) FKIE – offensive language detection on Twitter at GermEval 2019. In: Konvens – proceedings of the GermEval 2019 workshop, Erlangen
Lazer D, Baum M, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D, Schudson M, Sloman SA, Sunstein CR, Thorson EA, Watts DJ, Zittrain JL (2018) The science of fake news. Science 359:1094–1096
Leetaru K (2011) Culturomics 2.0: forecasting large-scale human behavior using global news media tone in time and space. First Monday. https://doi.org/10.5210/fm.v16i9.3663. Accessed on 03.04.2020
Lerman K, Hogg T (2012) Using stochastic models to describe and predict social dynamics of web users. ACM Trans Intell Syst Technol 3:1–33
Manning CD, Raghavan P, Schütze H (2008) Introduction to information retrieval. Cambridge University Press, Cambridge
Mueller RS (2019) Report on the investigation into Russian interference in the 2016 presidential election. U.S. Department of Justice, Washington, DC. https://cdn.cnn.com/cnn/2019/images/04/18/mueller-report-searchable.pdf. Accessed on 07.04.2021
Patterson J, Gibson A (2017) Deep learning: a practitioner's approach. O'Reilly, Sebastopol
Peters ME, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L (2018) Deep contextualized word representations. arXiv. https://arxiv.org/pdf/1802.05365.pdf. Accessed on 07.04.2021
Pirolli P, Card S (1999) Information foraging. Psychol Rev 106(5):643–675
Pritzkau A, Winandy S, Krumbiegel T (2021) Finding a line between trusted and untrusted information on tweets through sequence classification. In: 2021 international conference on military communications and information systems (ICMCIS). IEEE
Sakaki T, Okazaki M, Matsuo Y (2010) Earthquake shakes Twitter users. In: Proceedings of the 19th international conference on world wide web – WWW '10, New York, p 851
Scholl A, Völker J (2019) Fake News, aktuelle Desinformationen und das Problem der Systematisierung. Anmerkungen zum Aufsatz von Zimmermann F, Kohring M (2018) 'Fake News' als aktuelle Desinformation – systematische Bestimmung eines heterogenen Begriffs, Medien Kommun 66(4). Medien Kommun 67(2):206–214
Toutanova K, Klein D, Manning CD, Singer Y (2003) Feature-rich part-of-speech tagging with a cyclic dependency network. In: Proceedings of the 2003 conference of the North American chapter of the Association for Computational Linguistics on human language technology – NAACL '03. Association for Computational Linguistics, Morristown, pp 173–180. https://doi.org/10.3115/1073445.1073478
Zhou X, Zafarani R (2018) Fake news: a survey of research, detection methods, and opportunities. ACM Comput Surv 1(1)
Zimmermann F, Kohring M (2018) 'Fake News' als aktuelle Desinformation – systematische Bestimmung eines heterogenen Begriffs. Medien Kommun 66(4):528–541
Albert Pritzkau, Dipl. Inform., co-authored the paper “Caution: Possible ‘Fake News’ – A Technical
Approach for Early Detection” with Ulrich Schade. He is a research associate at the Fraunhofer Institute for Communication, Information Processing and Ergonomics (FKIE) and works in the areas of text classification by NLP and ML, identification and assessment of information campaigns and infrastructures for strategic communication.
Ulrich Schade, Prof. Dr., co-authored the paper “Caution: Possible ‘Fake News’ – A Technical Approach for Early Detection” with Albert Pritzkau. He is a research group leader for information analysis at Fraunhofer FKIE and an adjunct professor (außerplanmäßiger Professor) at the Institute for English, American and Celtic Studies at the Rheinische Friedrich-Wilhelms-Universität Bonn, working in the field of computational linguistics.
Countering Fake News Technically – Detection and Countermeasure Approaches to Support Users
7
Katrin Hartwig and Christian Reuter
Abstract
The importance of dealing with fake news has increased in both political and social contexts: while existing studies mainly focus on how to detect and label fake news, approaches that help users make their own assessments are largely lacking. This article presents existing black-box and white-box approaches and compares their advantages and disadvantages. White-box approaches in particular show promise in counteracting reactance, while black-box approaches detect fake news with much greater accuracy. We also present the browser plugin TrustyTweet, which we developed to help users evaluate tweets on Twitter by displaying politically neutral and intuitive warnings without generating reactance.
7.1
Introduction
K. Hartwig (✉) · C. Reuter Lehrstuhl Wissenschaft und Technik für Frieden und Sicherheit, Technische Universität Darmstadt, Darmstadt, Germany e-mail: [email protected]; [email protected] # The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023 P. Klimczak, T. Zoglauer (eds.), Truth and Fake in the Post-Factual Digital Age, https://doi.org/10.1007/978-3-658-40406-2_7

For some time now, social networks such as Facebook and Twitter have increasingly served as important sources of news and information. The result is a dissemination of information that is partially independent of professional journalism. The large amounts of available data and information can be overwhelming; in this context, the term "information overload" was coined (Kaufhold et al. 2020). At the same time, this facilitates the dissemination of dubious or fake content. Steinebach et al. (2020) cite "high speed, reciprocity, low cost, anonymity, mass dissemination, fitfulness, and invisibility" as characteristics that
favor the spread of disinformation on the Internet and especially in social networks. Furthermore, similar phenomena, such as the spread of false rumors or clickbaiting, can also occur in professional journalism, favored by the highly attention-driven online market.

Since the 2016 presidential election in the United States, the term fake news has become widespread and has been taken up both in academic contexts and in public debates. Fake news is defined by the EU Commission as "all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm" (European Commission 2018). Allcott and Gentzkow (2017, p. 213) define fake news as "news articles that are intentionally and verifiably false, and could mislead readers." Studies have shown that fake news often results from minor changes in wording that alter, for example, the basic sentiment of a report, rather than from stories being completely made up (Rashkin et al. 2017).

In Germany, too, the 2017 federal elections were accompanied by discussions about the influence of fake news. However, the results of a study by Sängerlaub (2017) show that there was no significant fake news during the election campaign that would have influenced the election results. These observations suggest that the public's perception of fake news differs from its actual influence. People often find it difficult to distinguish between fake news and true news, as hardly any fake news is completely false and true news can also contain errors (cf. Potthast et al. 2018). In addition, even more recent events have been accompanied by a flood of misinformation. In particular, problematic clips about the spread of the coronavirus have been viewed hundreds of thousands of times on the video platform TikTok. To counteract this, TikTok users are "increasingly reminded to report content" (Breithut 2020), and videos with misleading information are deleted by the company accordingly.
Recent research continues to show that only a limited number of individuals are actually vulnerable to being influenced by fake news (Dutton and Fernandez 2019). A Twitter analysis in the US found that “only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news” (Grinberg et al. 2019). Although the actual impact of fake news remains controversial and research suggests that only a few users are susceptible to it, large parts of the population seem to have already encountered fake news. A representative survey in Germany from 2017 shows that fake news plays a significant role in the perception of the population: 48% stated that they had already encountered fake news; furthermore, 84% were of the opinion that fake news posed a danger and could manipulate the opinion of the population; 23% stated that they had already deleted or reported fake news; in contrast, only 2% said they had ever created fake news themselves (Reuter et al. 2019). An overview of the results is shown in Fig. 7.1. In summary, fake news can certainly have negative effects, for example on democracy and public trust (Zhou et al. 2019). In fact, there have already been cases where the spread of fake news has caused significant damage. In 2013, for example, a fake tweet from the hacked account of the US news agency Associated Press, falsely reporting explosions at the White House, caused $130 billion in stock market losses (Rapoza 2017).
Fig. 7.1 Results of the representative survey on the perception of fake news (Reuter et al. 2019). Contact with fake news: 48% perceive fake news; 13% comment on fake news; 2% post fake news. Attitude towards fake news: 84% fake news poses a threat; 84% fake news manipulates the opinion of the population; 68% fake news harms democracy; 56% social bots pose a threat. Countermeasures: 81% rapid response by the authorities; 81% operators must delete malicious and invented content; 81% operators should flag fake news; 76% transparent and self-critical journalism; 72% establishment of state IT defense centers. Source: www.peasec.de/2019/fake-news (funded by the Federal Ministry of Education and Research)
7 Countering Fake News Technically – Detection and Countermeasure . . .
Table 7.1 Steps for technical support in dealing with fake news
1. Detection: Detecting disinformation; for example, identifying from a set of tweets those tweets that contain fake news
2. Countermeasure approaches: Taking measures to protect users from the effects of fake news and to empower them to evaluate content themselves
Fake news related to the #PizzaGate conspiracy theory led to a shooting at a pizzeria in Washington, D.C. (Aisch et al. 2016). Technical solutions for dealing with fake news, especially in social networks, have great potential to counteract the influence of fake news with little user effort. In principle, two steps are necessary in the development of technical support approaches for dealing with fake news: detecting fake news, and taking countermeasures to protect and support users (Potthast et al. 2018). These steps are explained in more detail in Table 7.1. It is also important to consider who is ultimately responsible. The representative study by Reuter et al. (2019) investigated the opinions of the German population on how to deal with fake news. Among other things, participants were asked to rate the following suggestions for dealing with fake news on a five-point Likert scale: quick reactions by the authorities, operators must delete malicious and invented content, operators should flag fake news, transparent and self-critical journalism, and the establishment of state IT defense centers. Most participants agreed with all the proposed measures. The idea of setting up state IT defense centers to combat fake news showed the lowest level of acceptance (72%) compared to the other items (Reuter et al. 2019).
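The two-step structure of Table 7.1 can be sketched as a minimal pipeline. The detector heuristic and warning text below are invented placeholders for illustration, not the methods discussed in this chapter:

```python
# Illustrative two-step sketch: (1) detection, (2) countermeasure.
# `looks_like_fake` is a hypothetical stand-in for a real detector.

def looks_like_fake(tweet: str) -> bool:
    """Step 1 (detection): flag tweets with crude surface heuristics."""
    return tweet.isupper() or "!!!" in tweet

def attach_warning(tweet: str) -> str:
    """Step 2 (countermeasure): annotate instead of deleting, so users can judge."""
    return "[warning: possible disinformation] " + tweet

def process_feed(tweets):
    # Flagged tweets receive a warning; all others pass through unchanged.
    return [attach_warning(t) if looks_like_fake(t) else t for t in tweets]

feed = ["SHOCKING NEWS ABOUT THE ELECTION", "Parliament passed the bill today."]
print(process_feed(feed)[0])
# → '[warning: possible disinformation] SHOCKING NEWS ABOUT THE ELECTION'
```

A real system would replace both placeholder functions with the detection and countermeasure approaches surveyed in the following sections.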
7.2 Detection of Fake News in Social Media
7.2.1 Approaches
Since, according to Vosoughi et al. (2018), fake news spreads faster than true news, interdisciplinary approaches are essential to address the complex challenges involved. Various methods already exist to detect fake news on social media. For example, platforms can allow their users to report suspicious content. Furthermore, professional fact-checkers can manually verify or refute the reported content. In addition, the research field of automated fake news detection is growing through technical solutions, such as style-based, propagation-based and context-based fake news detection (Potthast et al. 2018; Zhou et al. 2019). A good overview of common detection approaches for fake news is provided by Steinebach et al. (2020). The authors distinguish between the detection of misinformation regarding texts, images and bots. Zhang and Ghorbani (2020) further differentiate automatic detection methods according to three categories: component-based, data mining-based and implementation-based approaches. In this context, component-based detection
methods examine, for example, the authors of fake news or users of social media on the basis of sentiment analysis. Sentiment analysis belongs to the field of text mining and uses signal words, for example, to automatically determine which sentiments and moods prevail in the texts of certain authors. Furthermore, component-based detection methods examine news content on the basis of linguistic (e.g., a particularly large number of exclamation marks), semantic (e.g., particularly attention-grabbing titles that conflict with the body of the text in terms of content), knowledge-based (e.g., websites that draw on expert knowledge), or style-based (e.g., a writing style with a particularly high number of emotional words) features, as well as the social context on the basis of user network analyses or distribution patterns. The category of data mining-based detection methods, on the other hand, distinguishes between supervised and unsupervised learning. Further, the category of implementation-based approaches distinguishes between real-time and offline detection of fake news (cf. Zhang and Ghorbani 2020). The categorization of fake news detection methods can be found in Fig. 7.2. Many approaches focus on characteristics of text content (Granik and Mesyura 2017; Gravanis et al. 2019; Hanselowski et al. 2019b; Potthast et al. 2018; Rashkin et al. 2017; Zhou et al. 2019). The annotated corpus of Hanselowski et al. (2019a) provides a foundation for machine learning approaches to automated fact-checking. Others study user interaction (Long et al. 2017; Ruchansky et al. 2017; Shu et al. 2019b; Tacchini et al. 2017) or content propagation within social networks (Monti et al. 2019; Shu et al. 2019a; Wu and Liu 2018). Other work addresses the relationship of the headline to the body of the text (Bourgonje et al. 2018), argumentation (Sethi 2017), and conflicting perspectives on a topic (Jin et al. 2016).
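As an illustration of the signal-word sentiment analysis just described, the following minimal sketch scores a text against small word lists; the lists themselves are invented for the example and far smaller than any real sentiment lexicon:

```python
# Minimal signal-word sentiment scorer in the spirit of the component-based
# methods described above. The word lists are illustrative only.
POSITIVE = {"good", "great", "honest", "true"}
NEGATIVE = {"shocking", "disaster", "lie", "outrage"}

def sentiment_score(text: str) -> int:
    """Positive minus negative signal-word hits; >0 positive, <0 negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("A shocking disaster and an outrage"))  # → -3
```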
Following existing approaches for identifying spam messages, Naive Bayes classifiers are often used for the detection and probability calculation of fake news. Here, objects are assigned to the class (e.g., (a) fake news or (b) correct information) that they most likely resemble, based on Bayes’ theorem. Since articles that contain fake news often share the same word groups, Naive Bayes classifiers can be used to calculate the probability that articles contain fake news (Granik and Mesyura 2017). Both Pérez-Rosas et al. (2017) and Potthast et al. (2018) resort to linguistic and semantic features (e.g., certain N-grams, sentence and word proportions) for fake news detection. In this context, Potthast et al. (2018) focus in particular on stylistic features of news containing left- or right-wing extremist content. It is noticeable here that, despite very different political orientations, the writing styles used are very similar. Furthermore, superlatives and exaggerations are used with increased frequency in fake news (Rashkin et al. 2017). Algorithmic machine learning approaches, e.g., the Perspective API, are used to recognize certain speech patterns that are essential for detecting fake news. These check statements and messages, for example, for short sentences and certain tenses. Fact-checking websites such as PolitiFact rank verified articles on a scale ranging from “true” to “absolutely false” (Rashkin et al. 2017, p. 2931). Gravanis et al. (2019) argue that identifying fake news requires a tool that can also detect the profiles of people who create fake news. Castillo et al. (2011) likewise use various characteristics of user profiles (e.g., registration age) to identify fake news. In addition to content and stylistic verification
Fig. 7.2 Automatic detection methods of fake news (According to Zhang and Ghorbani 2020)
Fig. 7.3 Visualization of the black box (top) and white box approach (bottom)
mechanisms, there is the propagation-based fake news detection approach. This examines how news is propagated in social networks (Zhou et al. 2019).
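The Naive Bayes calculation described earlier in this section can be sketched with toy data. The headlines and labels below are invented for illustration; a real system such as that of Granik and Mesyura (2017) would train on a large annotated corpus:

```python
import math
from collections import Counter

# Toy training data (invented): (headline, label) pairs.
train = [
    ("SHOCKING truth they hide from you", "fake"),
    ("you will not BELIEVE this miracle cure", "fake"),
    ("parliament passes new budget law", "real"),
    ("court rules on data protection case", "real"),
]

def tokenize(text):
    return text.lower().split()

# Word counts per class and class priors.
word_counts = {"fake": Counter(), "real": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(tokenize(text))

vocab = set().union(*word_counts.values())

def log_posterior(text, label):
    total = sum(word_counts[label].values())
    logp = math.log(class_counts[label] / sum(class_counts.values()))
    for w in tokenize(text):
        # Laplace smoothing so unseen words do not zero out the probability.
        logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return logp

def classify(text):
    return max(("fake", "real"), key=lambda c: log_posterior(text, c))

print(classify("SHOCKING miracle they hide"))  # → fake
```

The classifier picks the class whose words best explain the input, which mirrors the observation above that fake articles often share the same word groups.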
7.2.2 Black Box Versus White Box
However, the aforementioned detection algorithms have a significant drawback: they are black-box based and accordingly do not provide end users with an explanation of the automated decision-making. Users can observe the input (e.g., a tweet) and the output (e.g., the marking of the tweet as fake news), but receive no information about what happens in between (e.g., why a tweet was marked as fake news). The counterpart of the black-box approach is the white-box approach, in which the internal processes between input and output can be observed. In the context of fake news, white-box approaches make the indicators of false content traceable. Accordingly, users here have access to all the information necessary to understand why the algorithm generated a specific output. A corresponding visualization is shown in Fig. 7.3. In other contexts where machine learning is applied, the need for “interpretability, explainability and trustworthiness” is already highlighted and increasingly discussed (Conati et al. 2018, p. 24). Explainable machine learning sets out to build user trust in the results of systems (Ribeiro et al. 2016). So far, however, there are few approaches to fake news detection that use explainable machine learning; the approaches of Reis et al. (2019) and Yang et al. (2019) are worth mentioning. Other white-box approaches focus on user education with the aim of improving media literacy. Studies have shown that improved media literacy can have promising counteracting effects in dealing with fake news (Kahne and Bowyer 2017; Mihailidis and Viotty 2017). If the ability to autonomously evaluate online content is improved through white-box approaches, this can reduce reactance and prevent the backfire effect. Nyhan and Reifler (2010) describe the backfire effect as the emergence of anger and defiance when political content in particular is given a warning label.
Users then tend to believe the content all the more, as they “perceive the correction as an illegitimate persuasion attempt” (Müller and Denner 2017, p. 17). Hartwig and Reuter (2019) designed a browser plugin that provides politically neutral and transparent cues about characteristics of a tweet on Twitter that indicate untrustworthy content. In a similar approach, Bhuiyan et al. (2018) present a browser plugin designed to help users on Twitter better assess the credibility of
news articles through nudging. Targeted questions (e.g., does the post tell the whole story?) serve as a nudge to encourage users to think reflectively. Further, Fuhr et al. (2018) present an approach in which they label online texts in terms of, for example, facts and emotions, similar to nutritional information on food labels, to help readers make informed judgments. Instead of pure black-box or white-box approaches, platforms usually use combinations of different strategies to detect fake news. For example, Facebook offers users the possibility to report suspicious content and at the same time applies algorithms to detect and prioritize fake news, which are then examined by independent fact-checkers (McNally and Bose 2018; Mosseri 2016).
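The black-box versus white-box contrast described above can be illustrated with a toy detector. Both variants below use the same trivial rules, which are stand-ins rather than real classifiers; the difference is only in what the user gets to see:

```python
# Black box: the user sees only the verdict.
def blackbox_detect(tweet: str) -> bool:
    return "!!!" in tweet or tweet.isupper()

# White box: the user additionally sees which indicators fired and why.
def whitebox_detect(tweet: str):
    reasons = []
    if "!!!" in tweet:
        reasons.append("excessive punctuation ('!!!')")
    if tweet.isupper():
        reasons.append("continuous capitalization")
    return {"flagged": bool(reasons), "reasons": reasons}

verdict = whitebox_detect("They are LYING!!!")
print(verdict["reasons"])  # → ["excessive punctuation ('!!!')"]
```

The same input yields the same verdict in both cases, but only the white-box variant gives the user the traceable indicators that Sect. 7.2.2 argues are needed to prevent reactance.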
7.3 Countermeasures to Support Users
A large body of academic work has already explored ways to automatically detect fake news. However, less attention has been paid to the next step: what to do once misinformation has actually been detected in social media. When misinformation has been successfully identified, various approaches exist for dealing with it. As misinformation spreads primarily through social media, platforms such as Facebook, Twitter and Instagram have begun to counteract it. Many of these approaches are directly visible to users and influence the experience on social networks. Facebook, in particular, has employed a number of practices as potential countermeasures since 2016 (Tene et al. 2018). For example, after the 2016 US election, Facebook began displaying warnings under controversial posts (Mosseri 2016). However, according to media reports, this feature was withdrawn after persistent criticism. Since then, Facebook has used more subtle techniques to limit the reach of controversial posts, such as reducing the post size, listing fact-check articles, and lowering the post ranking in the newsfeed (McNally and Bose 2018). These countermeasures appear to have roughly the desired effect of reducing the spread of fake news on the social network: since their introduction in 2016, interaction with fake news on Facebook has fallen by more than 50% (Allcott et al. 2019). Kirchner and Reuter (2020) provide an overview of the different techniques used by social media platforms. They further compare the effectiveness and user acceptance of different measures, such as displaying warnings or related articles and providing additional information. However, flagging and deleting false content may not be effective and can sometimes even be counterproductive. In contrast, many researchers see media literacy training as a promising strategy (Müller and Denner 2017; Stanoevska-Slabeva 2017; Steinebach et al. 2020).
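The "lower the post ranking" countermeasure mentioned above can be sketched as a simple re-ranking step. The penalty factor and the feed data are invented for illustration; real newsfeed ranking involves many more signals:

```python
# Hypothetical sketch of ranking demotion: disputed posts keep their
# engagement score but are multiplied by a penalty instead of being deleted.

def rank_feed(posts, penalty=0.5):
    """posts: list of (text, engagement_score, disputed_flag) tuples."""
    def score(post):
        text, engagement, disputed = post
        return engagement * (penalty if disputed else 1.0)
    return sorted(posts, key=score, reverse=True)

feed = [("Viral hoax", 100, True), ("Local news report", 60, False)]
print(rank_feed(feed)[0][0])  # → Local news report
```

With `penalty=1.0` the demotion is switched off and the hoax would rank first again, which makes the effect of the countermeasure easy to inspect.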
Studies have shown that people with high media literacy are able to easily identify much German-language fake news on the basis of various factors such as text structure, as misinformation often exhibits more than two spelling mistakes, continuous capitalization, or punctuation errors in the body of the text (cf. Steinebach et al. 2020). However, since most approaches to automatically detect and label fake news use black-box algorithms, as do many widely used machine learning techniques, they usually cannot
clarify why they label certain content as fake news. Presenting users with a label can even lead to reactance if it does not match their own perception. This effect is generated by the so-called confirmation bias, which occurs when news is considered true precisely because it corresponds to one’s own ideology (Kim and Dennis 2018; Nickerson 1998; Pariser 2011). Bode and Vraga (2015) investigated the possibility of combating misinformation with corrective information in the “Related Articles” section under the respective article. Researchers have previously shown that warnings about misinformation reduce its perceived accuracy (Ecker et al. 2010; Lewandowsky et al. 2012; Sally Chan et al. 2017), but such warnings can also fail (Berinsky 2017; Nyhan and Reifler 2010; Nyhan et al. 2013). For example, Garrett and Weeks (2013) compared immediate and delayed corrections of misinformation. They found that immediate correction had the most significant impact on perceived correctness. However, when misinformation confirms users’ opinions, the potential for a backfire effect is greater (Kelly Garrett and Weeks 2013). Pennycook et al. (2018) show that a related phenomenon, the illusory truth effect – repeated exposure to misinformation increases its perceived accuracy – can also be applied to fake news on social media. In addition, they found that warnings can decrease the perceived accuracy of content. Pennycook et al. (2019) confirm the positive effect of such warnings. Using a Bayesian model of the implied truth effect, they argue that showing warning notifications for false news not only reduces belief in its accuracy, but also increases belief in the accuracy of news without an attached warning. Clayton et al. (2019) compare several types of warnings. In addition to specific warnings about false headlines, they also test a general warning without reference to a specific post.
Facebook had displayed such a warning across users’ newsfeeds in April 2017 and May 2018, warning against misinformation in general. In addition, Clayton et al. (2019) examine two different ways of phrasing specific warnings about headlines: “disputed” and “rated false”. Their results show that general warnings have only a minimal effect, whereas specific warnings have a significant effect. They thus confirm the findings of Pennycook et al. (2019). The researchers also concluded that “rated false” warnings are significantly more effective than those labeled “disputed”.
7.4 TrustyTweet: A Whitebox Approach to Assist Users in Dealing with Fake News
As shown in Sect. 7.3, increasing media literacy is a promising strategy for dealing with fake news. By providing transparent and identifiable indicators of fake news, users can be supported in forming opinions about online content. In this context, it is important to distinguish assistance systems that give neutral advice based on transparent indicators from systems that provoke reactance and thus risk a backfire effect. Using a white-box instead of a black-box approach is an important step towards reducing or preventing reactance. In the following, the browser plugin TrustyTweet is presented, which aims to support users in dealing with fake news on Twitter by providing politically neutral, transparent and
Table 7.2 Potential indicators for fake news (indicator; example; literature)
1. Continuous capitalization (e.g., “CONTINUOUS CAPITALIZATION”): Steinebach et al. (2020); Wanas et al. (2008); Weerkamp and De Rijke (2008); Weimer et al. (2007)
2. Excessive use of punctuation (e.g., “Excessive use of punctuation!!!”): Morris et al. (2012); Wanas et al. (2008)
3. Wrong punctuation at the end of a sentence (e.g., “Wrong punctuation at the end of the sentence!!1”): Morris et al. (2012); Weimer et al. (2007)
4. Excessive use of emoticons, especially attention-grabbing emoticons: Wanas et al. (2008); Weerkamp and De Rijke (2008)
5. Use of the standard profile picture: Morris et al. (2012)
6. Lack of official account verification, especially for celebrities: Morris et al. (2012)
intuitive advice (Hartwig and Reuter 2019). In particular, this approach aims to be a helpful assistant without provoking reactance; users are thus not deprived of their own judgment. The aim is to bring about a learning effect regarding media literacy that makes the plugin redundant after prolonged use. In contrast to other approaches, TrustyTweet is therefore based on a white-box technology. The plugin was developed in a user-centered design process following the “design science” approach. Potential indicators of fake news were identified by weighing approaches that have already proven promising in scientific work. The focus is on heuristics that people intuitively and successfully use and that are easy to understand. However, it is important to emphasize that this approach cannot encompass all relevant indicators of fake news. The characteristics used as potential indicators are listed in Table 7.2. TrustyTweet was developed for the Firefox web browser. Its main components are a text box that contains all the indicators detected in a tweet and serves as a warning notification, two different icons to indicate whether indicators have been detected in the tweet and, finally, another icon to access the settings, which open in a popup window. Next to each indicator is a link to access general information about that indicator in a popup window. Moving the mouse over an indicator dynamically highlights the corresponding component in the tweet (see Fig. 7.4). This allows users to immediately see why a warning is displayed. The main icon of the plugin serves as a toggle button for the text box. Users can decide whether they want to see all detected indicators next to the respective tweet or just an icon, switching to the text box if needed to see why the current warning is displayed. A key feature of TrustyTweet is the configuration popup. Using checkboxes, users can turn individual indicators on and off when investigating tweets.
In this way, our plugin provides a stronger sense of autonomy and counters paternalism.
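Three of the text-level indicators from Table 7.2 can be checked with simple rules, roughly in the spirit of TrustyTweet. This is an illustrative re-implementation sketch, not the plugin's actual code, and the regular expressions are invented approximations of the indicators:

```python
import re

# Rule-based checks for three text-level indicators from Table 7.2.
def detect_indicators(tweet: str):
    indicators = []
    # Continuous capitalization: a run of at least three all-caps words.
    if re.search(r"\b[A-Z]{2,}(?:\s+[A-Z]{2,}){2,}\b", tweet):
        indicators.append("continuous capitalization")
    # Excessive punctuation: three or more ! or ? in a row.
    if re.search(r"[!?]{3,}", tweet):
        indicators.append("excessive use of punctuation")
    # Wrong punctuation at the end of the sentence, e.g. '!!1'.
    if re.search(r"[!?]+1\b", tweet):
        indicators.append("wrong punctuation at the end of the sentence")
    return indicators

print(detect_indicators("WAKE UP PEOPLE this is a cover-up!!! Share now!!1"))
# → ['continuous capitalization', 'excessive use of punctuation',
#    'wrong punctuation at the end of the sentence']
```

In a white-box plugin, each returned indicator would be shown to the user together with an explanation and a highlight of the matching span, as described for TrustyTweet above.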
Fig. 7.4 Sample output from TrustyTweet. (Hartwig and Reuter 2019)
The usability and user experience of the plugin were evaluated in initial qualitative think-aloud studies with a total of 27 participants. The support tool was largely rated as helpful and intuitive. Furthermore, the findings of our study point to the following design implications for support tools in dealing with fake news:

1. Personalization to maintain personal autonomy: The configuration feature is important to increase autonomy and prevent reactance.
2. Support users by providing transparent and objective information: The indicators need detailed descriptions that make clear why they are relevant for detecting fake news. According to our testers, it is of great importance that the descriptions are politically neutral and formulated objectively.
3. Clear mapping of alerts: Highlighting the relevant components of a tweet on hover when a warning has been triggered was deemed one of the most helpful plugin features and is indispensable for achieving a learning effect.
4. Personalized perceptibility: The toggle feature for the warnings was also positively received. Many participants liked being able to display detailed text boxes only when needed and otherwise mainly pay attention to the color of the icon.
5. Minimizing false alarms: As in many other contexts (e.g., warning apps), it is very important to minimize false alarms, otherwise users might stop paying attention to the plugin or uninstall it before a learning effect has occurred. To improve the plugin in this respect, some respondents suggested displaying gradual warnings (for example, in traffic-light colors) as a possible alternative.
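The gradual, traffic-light-style warnings suggested under implication 5 could, for example, map the number of detected indicators to a warning level. The thresholds below are arbitrary choices for illustration, not values from the study:

```python
# Sketch of gradual warnings: indicator count mapped to a traffic-light level
# instead of a binary alarm (thresholds are illustrative).
def warning_level(num_indicators: int) -> str:
    if num_indicators == 0:
        return "green"
    if num_indicators <= 2:
        return "yellow"
    return "red"

print([warning_level(n) for n in (0, 1, 3)])  # → ['green', 'yellow', 'red']
```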
7.5 Conclusion and Outlook
Dealing with fake news is currently a major challenge for society and politics (cf. Granik and Mesyura 2017). Studies have shown that there is a great need for assistance systems to support social media users. So far, research has focused in particular on using machine learning algorithms to detect and label fake news. For example, Gupta et al. (2014) present a browser plugin that automatically assesses the truthfulness of content on Twitter. Other approaches (e.g., Fake News AI) also use machine learning. Still others rely on whitelists and blacklists (e.g., B.S. Detector) to detect fake news. However, black-box methods run the risk of causing reactance, as they cannot give reasons for their fake news alerts. In our view, and in line with other studies (Müller and Denner 2017; Stanoevska-Slabeva 2017), improving individual media literacy is a central strategy in dealing with fake news. The initial empirical results of the study conducted show that our indicator-based white-box approach to supporting Twitter users in dealing with fake news is potentially promising if the following five design implications are considered: personalizability to increase autonomy, transparent and objective information, unambiguity of warnings, personalized perceptibility, and minimization of false alarms. For future studies, a combination of automatic detection of fake news and the subsequent use of TrustyTweet as a support measure is planned. Here, the advantages of both methods could be combined: the transparent and easy-to-understand indicators of the white-box approach and the detection accuracy of black-box methods. A corresponding representative online experiment on the effectiveness of TrustyTweet in combination with automatic detection procedures, supplementing the qualitative study already conducted, is being planned.
Acknowledgements Funded by the German Research Foundation (DFG) – SFB 1119 – 236615297 (CROSSING) as well as by the German Federal Ministry of Education and Research (BMBF) and the Hessian Ministry of Science and the Arts (HMWK) in the context of their joint funding for the National Research Center for Applied Cyber Security ATHENE. This article is partly based on the article “Fake News Perception in Germany: A Representative Study of People’s Attitudes and Approaches to Counteract Disinformation” (Reuter et al. 2019) and “TrustyTweet: An Indicator-based Browser Plugin to Assist Users in Dealing with Fake News on Twitter” (Hartwig and Reuter 2019). Moreover, it is partly based on the conference paper “Countering Fake News: A Comparison of Possible Solutions Regarding User Acceptance and Effectiveness” (Kirchner and Reuter 2020). We thank Jan Kirchner for his support.
References

Aisch G, Huang J, Kang C (2016) Dissecting the #PizzaGate conspiracy theories. New York Times. https://www.nytimes.com/interactive/2016/12/10/business/media/pizzagate.html. Accessed on 18.04.2020
Allcott H, Gentzkow M (2017) Social media and fake news in the 2016 election. J Econ Perspect 31(2):211–236. https://doi.org/10.1257/jep.31.2.211
Allcott H, Gentzkow M, Yu C (2019) Trends in the diffusion of misinformation on social media. Res Politics 6(2):205316801984855. https://doi.org/10.1177/2053168019848554
Berinsky AJ (2017) Rumors and health care reform: experiments in political misinformation. Br J Polit Sci 47(2):241–262. https://doi.org/10.1017/S0007123415000186
Bhuiyan MM, Zhang K, Vick K, Horning MA, Mitra T (2018) FeedReflect: a tool for nudging users to assess news credibility on Twitter. In: Companion of the 2018 ACM conference on computer supported cooperative work and social computing – CSCW ’18, pp 205–208. https://doi.org/10.1145/3272973.3274056
Bode L, Vraga EK (2015) In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J Commun 65(4):619–638. https://doi.org/10.1111/jcom.12166
Bourgonje P, Moreno Schneider J, Rehm G (2018) From clickbait to fake news detection: an approach based on detecting the stance of headlines to articles. In: Proceedings of the 2017 EMNLP workshop: natural language processing meets journalism, pp 84–89. https://doi.org/10.18653/v1/w17-4215
Breithut J (2020) Falschinformationen im Netz: So reagieren Facebook, Google und TikTok auf das Coronavirus [False information online: how Facebook, Google and TikTok are reacting to the coronavirus]. Spiegel Online. https://www.spiegel.de/netzwelt/web/coronavirus-wie-facebookgoogle-und-tiktok-auf-falschinformationen-reagieren-a-6bc449fc-2450-4964-a6757d6573316ad9. Accessed on 03.02.2020
Castillo C, Mendoza M, Poblete B (2011) Information credibility on Twitter. In: Proceedings of the international conference on World Wide Web, Hyderabad, pp 675–684
Clayton K et al (2019) Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit Behav 42:1073–1095. https://doi.org/10.1007/s11109-019-09533-0
Conati C, Porayska-Pomsta K, Mavrikis M (2018) AI in education needs interpretable machine learning: lessons from open learner modelling.
In: Proceedings of the 2018 ICML workshop on human interpretability in machine learning (WHI 2018). http://arxiv.org/abs/1807.00154. Accessed on 22.04.2021
Dutton WH, Fernandez L (2019) How susceptible are internet users? InterMedia 46(4). https://doi.org/10.2139/ssrn.3316768
Ecker UKH, Lewandowsky S, Tang DTW (2010) Explicit warnings reduce but do not eliminate the continued influence of misinformation. Mem Cogn 38(8):1087–1100. https://doi.org/10.3758/MC.38.8.1087
European Commission (2018) A multi-dimensional approach to disinformation. Report of the independent High Level Group on fake news and online disinformation (Vol 2). https://doi.org/10.2759/0156
Fuhr N et al (2018) An information nutritional label for online documents. ACM SIGIR Forum 51(3):46–66. https://doi.org/10.1145/3190580.3190588
Granik M, Mesyura V (2017) Fake news detection using naive Bayes classifier. In: 2017 IEEE 1st Ukraine conference on electrical and computer engineering, UKRCON 2017 – proceedings, pp 900–903. https://doi.org/10.1109/UKRCON.2017.8100379
Gravanis G, Vakali A, Diamantaras K, Karadais P (2019) Behind the cues: a benchmarking study for fake news detection. Expert Syst Appl 128:201–213. https://doi.org/10.1016/j.eswa.2019.03.036
Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D (2019) Political science: fake news on Twitter during the 2016 U.S. presidential election. Science 363(6425):374–378. https://doi.org/10.1126/science.aau2706
Gupta A, Kumaraguru P, Castillo C, Meier P (2014) TweetCred: real-time credibility assessment of content on Twitter. In: International conference on social informatics, pp 228–243. http://arxiv.org/abs/1405.5490. Accessed on 22.04.2021
Hanselowski A, Stab C, Schulz C, Li Z, Gurevych I (2019a) A richly annotated corpus for different tasks in automated fact-checking. In: Proceedings of the 23rd conference on computational natural language learning, pp 493–503. https://doi.org/10.18653/v1/k19-1046
Hanselowski A et al (2019b) UKP-Athene: multi-sentence textual entailment for claim verification. In: Proceedings of the first workshop on fact extraction and verification (FEVER), pp 103–108. https://doi.org/10.18653/v1/w18-5516
Hartwig K, Reuter C (2019) TrustyTweet: an indicator-based browser plugin to assist users in dealing with fake news on Twitter. In: Proceedings of the international conference on Wirtschaftsinformatik (WI). http://www.peasec.de/paper/2019/2019_HartwigReuter_TrustyTweet_WI.pdf. Accessed on 18.04.2020
Jin Z, Cao J, Zhang Y, Luo J (2016) News verification by exploiting conflicting social viewpoints in microblogs. In: 30th AAAI conference on artificial intelligence, AAAI 2016, Phoenix, pp 2972–2978
Kahne J, Bowyer B (2017) Educating for democracy in a partisan age: confronting the challenges of motivated reasoning and misinformation. Am Educ Res J 54(1):3–34. https://doi.org/10.3102/0002831216679817
Kaufhold M, Rupp N, Reuter C, Habdank M (2020) Mitigating information overload in social media during conflicts and crises: design and evaluation of a cross-platform alerting system. Behav Inform Technol 39(3):319–342
Kelly Garrett R, Weeks BE (2013) The promise and peril of real-time corrections to political misperceptions. In: Proceedings of the ACM conference on computer supported cooperative work, CSCW, pp 1047–1057. https://doi.org/10.1145/2441776.2441895
Kim A, Dennis A (2018) Says who?: how news presentation format influences perceived believability and the engagement level of social media users. In: Proceedings of the 51st Hawaii international conference on system sciences.
https://doi.org/10.24251/hicss.2018.497
Kirchner J, Reuter C (2020) Countering fake news: a comparison of possible solutions regarding user acceptance and effectiveness. In: Proceedings of the ACM on human-computer interaction (PACM): computer-supported cooperative work and social computing. ACM, Austin, USA
Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N, Cook J (2012) Misinformation and its correction: continued influence and successful debiasing. Psychol Sci Public Interest 13(3):106–131. https://doi.org/10.1177/1529100612451018
Long Y, Lu Q, Xiang R, Li M, Huang C-R (2017) Fake news detection through multi-perspective speaker profiles. In: Proceedings of the eighth international joint conference on natural language processing (Vol 2), Taipei, pp 252–256
McNally M, Bose L (2018) Combating false news in the Facebook news feed: fighting abuse @scale. https://atscaleconference.com/events/fighting-abuse-scale/. Accessed on 24.01.2020
Mihailidis P, Viotty S (2017) Spreadable spectacle in digital culture: civic expression, fake news, and the role of media literacies in “post-fact” society. Am Behav Sci 61(4):441–454. https://doi.org/10.1177/0002764217701217
Monti F, Frasca F, Eynard D, Mannion D, Bronstein MM (2019) Fake news detection on social media using geometric deep learning. [Preprint]
Morris MR, Counts S, Roseway A, Hoff A, Schwarz J (2012) Tweeting is believing? Understanding microblog credibility perceptions. In: Proceedings of the ACM conference on computer supported cooperative work, CSCW, pp 441–450. https://doi.org/10.1145/2145204.2145274
Mosseri A (2016) Addressing hoaxes and fake news. https://about.fb.com/news/2016/12/news-feedfyi-addressing-hoaxes-and-fake-news/. Accessed on 24.01.2020
Müller P, Denner N (2017) Was tun gegen „Fake News“? [What to do about “fake news”?]. Friedrich Naumann Stiftung Für die Freiheit, Bonn
7
Countering Fake News Technically – Detection and Countermeasure . . .
145
Katrin Hartwig, M.Sc., co-authored the paper "Countering Fake News Technically – Detection and Countermeasure Approaches to Support Users" with Christian Reuter. She is a research associate at the Chair of Science and Technology for Peace and Security (PEASEC) at TU Darmstadt and works in the fields of human-computer interaction, disinformation in social media, and usable security.

Christian Reuter, Prof. Dr., co-authored the paper "Countering Fake News Technically – Detection and Countermeasure Approaches to Support Users" with Katrin Hartwig. He holds the Chair of Science and Technology for Peace and Security (PEASEC) at TU Darmstadt and works in the fields of security-critical human-computer interaction, IT for peace and security, and resilient IT-based (critical) infrastructures.
8 NewsDeps: Visualizing the Origin of Information in News Articles

Felix Hamborg, Philipp Meschenmoser, Moritz Schubotz, Philipp Scharpf, and Bela Gipp
Abstract
In scientific publications, citations allow readers to assess the authenticity of the presented information and verify it in the original context. News articles, however, for various reasons do not contain citations and only rarely refer readers to further sources. As a result, readers often cannot assess the authenticity of the presented information as its origin is unclear. In times of "fake news," echo chambers, and centralization of media ownership, the lack of transparency regarding origin, trustworthiness, and authenticity has become a pressing societal issue. We present NewsDeps, the first approach that analyzes and visualizes where information in news articles stems from. NewsDeps employs methods from natural language processing and plagiarism detection to measure article similarity. We devise a temporal-force-directed graph that places articles as nodes chronologically. The graph connects articles by edges varying in width depending on the articles' similarity. We demonstrate our approach in a case study with two real-world scenarios. We find that NewsDeps increases efficiency and transparency in news consumption by revealing which previously published articles are the primary sources of each given article.

F. Hamborg (✉), Department of Computer and Information Science, University of Konstanz, Konstanz, Germany; e-mail: [email protected]
P. Meschenmoser, School of Computer Science and Information Technology, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland; e-mail: [email protected]
M. Schubotz, Department of Mathematics, FIZ Karlsruhe - Leibniz Institute for Information Infrastructure, Karlsruhe, Germany; e-mail: moritz.schubotz@fiz-Karlsruhe.de
P. Scharpf, AI4 Future Dataconsulting, Konstanz, Germany; e-mail: [email protected]
B. Gipp, Institute of Computer Science, University of Göttingen, Göttingen, Germany; e-mail: [email protected]

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023
P. Klimczak, T. Zoglauer (eds.), Truth and Fake in the Post-Factual Digital Age, https://doi.org/10.1007/978-3-658-40406-2_8
8.1 Introduction
The rise of online news publishing and consumption has made information from various sources and even other countries easily accessible (Hamborg et al. 2018b), but has also led to a decrease in reporting quality (Scheufele 2000; Marchi 2012). The increasing pace of the publish-consume cycle leaves publishers with less time for journalistic investigation and information verification. At the same time, the pressure to publish a story soon after the event has happened rises, as competing outlets will do so likewise. Also, journalists routinely copy-edit or reuse information from previously published articles, which increases the chance of spreading incorrect or unverified information even further (Hamborg et al. 2019b; Hamborg 2020). In 2010, a study showed that over 80% of articles reporting on the same topic did not add any new information, but merely reused information contained in articles published previously by other outlets (The Media Insight Project 2014). Currently, regular news consumers cannot effectively assess the authenticity of information conveyed in articles. Understanding where information stems from, and how the information differs from the used sources, could help readers to assess the authenticity of such information and ease further verification. For the same reasons, documentation of the origin of information is a fundamental standard in academic writing.¹ News articles, however, do not contain citations and only rarely refer readers to further articles (Christian et al. 2014). The objective of our research is to identify and visualize information reuse (or content borrowing) in articles. These news dependencies manifest themselves as text snippets reused from prior articles. A field that aims at finding instances of text and information reuse is plagiarism detection. The relatedness of plagiarism detection (PD) to our project can be seen directly in the definition of plagiarism: "[...] the use of ideas and/or words from sources [...]" (Meuschke and Gipp 2013). These news dependencies should be found between articles, e.g., how similar two articles are content-wise, and within articles, e.g., which piece of information in one article was taken from another article. We call this task
¹ Of course, in academic publishing, additional means are implemented to increase authenticity, for example the peer-review process.
news information reuse detection (NIRD). By showing how articles reuse information from other related articles, NIRD helps users to assess the articles’ authenticity. Additionally, NIRD increases the efficiency of news consumption as users can quickly see if an article contains novel information or is a mere copy of other articles. In Sect. 8.2, we give a brief overview of related work. In Sect. 8.3, we propose NewsDeps, a NIRD approach that integrates analysis and visualization of news dependencies. Our main contribution is a visualization that reveals which articles reuse information from other articles. We demonstrate this functionality in a case study in Sect. 8.4. Our study shows that NewsDeps also provides an overview of current topics, a common use case in regular news consumption. The article concludes with a discussion of future work (Sect. 8.5) and a summary (Sect. 8.6).
8.2 Background and Related Work
This section first describes fundamental forms of information reuse in the context of our research objective. Second, we discuss previous methods used to measure semantic textual similarity and approaches using them to identify information reuse.
8.2.1 Forms of Information Reuse
Information reuse in news is common and often necessary, e.g., due to resource limitations of publishers (Frank 2003). In other cases, information reuse is even intended, e.g., by press agencies, which write articles that licensed publishers may copy-edit and publish. Copy-edited articles are often nearly identical and represent the only dependencies that established PD methods detect reliably (Kienreich et al. 2006; Ryu et al. 2009; Pera and Ng 2011). The PD community calls this form of information reuse (1) copy and paste (Maurer et al. 2006). Figure 8.1 shows a real-world example of copy and paste information reuse in news. Three articles published on November 7, 2014 reported on an event that happened during the Ukraine crisis. The colored boxes in each rectangle represent individual paragraphs in the article. The figure shows that CNBC (2014) took most of their text from an article previously published by Reuters (blue) (Croft 2014). In addition to the copy and paste text reuse, the CNBC article also contains novel information (green). The article published by TASS (TASS Russian News Agency 2014), a news agency owned by the Russian government, contains different information on the event (yellow), which, however, was not mentioned by any of the Western news outlets. Forms of information reuse that are more complex than copy and paste also exist in news articles. For instance, journalists may (2) paraphrase other articles; (3) tightly copy and merge text segments with slight adjustments (shake and paste) (Weber-Wulff 2010), e.g., by substituting words with synonyms; and (4) translate articles written in other
[Fig. 8.1 depicts three article boxes: TASS, "Statements on alleged Russian troops near Ukrainian border provocative — Defense Ministry"; Reuters, "NATO sees increase in Russian troops along Ukraine border"; CNBC, "Tank column crosses from Russia into Ukraine: Kiev military".]
Fig. 8.1 An example of content dependencies between news articles reporting on the same event. Each transparent box represents a single news article; each colored box within a transparent box represents one or more paragraphs of the article. Paragraph boxes from different articles having the same color contain (almost) identical information determined by manual inspection
languages, e.g., from publishers in other countries (Weber-Wulff 2010). In practice, information reuse in news coverage is a mixture of these four main forms. While, to our knowledge, no research has been published detailing the different types of information reuse in news, Meuschke and Gipp provide an in-depth discussion of information reuse in the context of plagiarism detection in academic writing (Meuschke and Gipp 2013).
8.2.2 Methods to Detect Information Reuse
Established plagiarism detection methods can reliably find copy and paste, i.e., the most basic form of information reuse in news articles (Sanderson 1997; Kienreich et al. 2006; Pera and Ng 2011). In the following, we give a brief overview of related techniques. An in-depth discussion can be found in (Meuschke and Gipp 2013). To detect disguised forms of plagiarism in academic documents, including paraphrases, translations, and structural plagiarism, researchers have proposed approaches that use syntactic analysis, semantic analysis, cross-language analysis, or a combination thereof, often using machine learning (Mozgovoy et al. 2010; Moreau et al. 2015). Syntax-based methods examine syntactic
features to compare the structure of two documents (Uzuner and Katz 2005; Elhadi and Al-Tobi 2009). Because parsing and part-of-speech (POS) tagging are computationally expensive, syntax-based approaches are commonly not applied for plagiarism detection use cases in which the potentially suspicious document is compared to a large collection. Semantic detection approaches typically consider related terms (Bao et al. 2007; Alzahrani and Salim 2010; Tsatsaronis et al. 2010; Vu et al. 2014). Often, semantic approaches employ pairwise comparisons of sentences and use a semantic network, such as WordNet (Miller et al. 1990), to retrieve terms that are semantically related to the terms in the sentences being compared. Using the set of exactly matching and related terms, the detection approaches derive similarity scores and flag documents as suspicious if the texts exceed a given similarity threshold. Some prior work has gone beyond comparing term-based semantic similarity by also considering similarity in the sentence structure (Kent and Salim 2010; Osman et al. 2012a). Such approaches apply semantic role labeling to identify the arguments of a sentence, e.g., the subject, predicate, and object, as well as how they relate, using a pre-defined set of roles from linguistic resources, such as PropBank (Palmer et al. 2005), VerbNet (Schuler 2005), or FrameNet (Baker et al. 1998). Existing detection approaches then typically combine the information on semantic arguments with the term-based semantic similarity. For instance, Osman et al. only consider exactly matching words and WordNet-derived synonyms for the similarity assessment if they belong to the same argument in both sentences (Osman et al. 2012b). Semantic plagiarism detection approaches have been shown to be more effective in identifying disguised plagiarism when compared to text matching approaches (Osman et al. 2012b).
However, the computational effort of semantic approaches is significantly higher than that of character-based approaches. This makes them infeasible for large collections, and hence unsuitable for most practical use cases of plagiarism detection. For example, Bao et al. showed that considering WordNet synonyms, which exemplify a relatively straightforward semantic analysis, increased processing times on average by a factor of 27 compared to character-based approaches (Bao et al. 2007). Cross-language plagiarism detection uses machine translation or other cross-lingual information retrieval methods (Potthast et al. 2011). Since machine-translating text is computationally expensive, and thus infeasible for large document collections, cross-language plagiarism detection methods typically extract only keywords from the input text and query these keywords against an index of keywords extracted from documents in the reference collection. One or both sets of keywords are machine-translated prior to being matched. Despite recent advances, e.g., the use of word embeddings (Moreau et al. 2015; Ferrero et al. 2017; Thompson and Bowerman 2017), cross-language plagiarism detection is currently not reliable enough for practical use (Potthast et al. 2011). Aside from plagiarism detection, semantic textual similarity (STS) methods are useful to identify instances of information reuse. Many of the advances in semantic textual similarity research are due to the SemEval series, where in the corresponding STS track the task is to measure the semantic equivalence of two sentences (Agirre et al. 2016). State-of-the-art semantic textual similarity methods use basic approaches, such as n-gram overlap,
WordNet-based node-to-node distance, and syntax-based comparisons, e.g., comparing whether the predicate is identical in two sentences (Šarić et al. 2012). More advanced methods combine various techniques using deep learning networks and achieve a Pearson correlation to human coders of 0.78 (Rychalska et al. 2016) and F1 scores of up to 91.6 on the STS datasets of the SemEval series (Yang et al. 2019). Each example in the STS datasets consists of two sentences or text segments and a similarity score that was obtained by asking human coders to rate both sentences' relatedness and similarity. Given two sentences, STS methods need to yield a similarity score identical or similar to the human-rated score. Since these semantic textual similarity methods focus on sentence similarity, they are useful for the detection of complex, paraphrased instances (see Sect. 8.2.3). Yet, for news there are currently neither training nor evaluation datasets, making the use of deep learning or language models difficult.
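The term-based semantic comparison described above, exact matches extended by related terms from a semantic network, can be sketched in a few lines. The following is a minimal illustration, not a production detector: the tiny `SYNONYMS` lexicon stands in for a full resource such as WordNet, and whitespace tokenization replaces real preprocessing.

```python
# Toy synonym lexicon standing in for a semantic network such as WordNet.
SYNONYMS = {
    "troops": {"soldiers", "forces"},
    "increase": {"rise", "growth"},
    "border": {"frontier"},
}

def expand(term):
    """Return the term together with its (toy) synonyms."""
    related = {term}
    for key, syns in SYNONYMS.items():
        if term == key or term in syns:
            related |= {key} | syns
    return related

def semantic_overlap(sent_a, sent_b):
    """Fraction of terms in sent_a that match a term in sent_b exactly
    or via the synonym lexicon; a score above a threshold would flag
    the sentence pair as a reuse candidate."""
    terms_a = sent_a.lower().split()
    terms_b = set(sent_b.lower().split())
    hits = sum(1 for t in terms_a if expand(t) & terms_b)
    return hits / len(terms_a) if terms_a else 0.0
```

With this expansion, "troops cross the border" fully matches "soldiers cross the frontier", whereas a purely character-based comparison would miss the synonym substitutions, which is exactly the disguised-reuse case semantic approaches target.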
8.2.3 Approaches to Detect Information Reuse in News
Few NIRD approaches have been proposed, but news dependencies are still non-transparent to users because none of the approaches visualizes the results. One approach employs methods from PD to find unauthorized instances of information reuse in articles written in Korean (Ryu et al. 2009). The approach computes the similarity of each article pair in a set of related articles. A high similarity between two articles suggests that the latter article contains information from the former article. To assess the similarity of two articles, the approach uses fingerprinting (see Sect. 8.2.2). A similar approach is qSign, which finds instances of information reuse in news blogs and articles, also using fingerprinting (Kim et al. 2009). However, since none of the approaches visualizes dependencies, the origin of information remains unclear in regular news consumption.
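The fingerprinting idea used by these approaches can be sketched as follows: hash every word n-gram of a document and keep only a deterministic subset of the hashes as its fingerprint; two documents sharing reused text then share fingerprint entries. This is a simplified illustration under our own assumptions (MD5 hashing, modulo selection); real systems use more careful selection schemes such as winnowing.

```python
import hashlib

def fingerprint(text, n=3, keep_mod=4):
    """Hash every word n-gram and keep a deterministic subset of the
    hashes as the document's fingerprint (simplified sketch)."""
    words = text.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    hashes = {int(hashlib.md5(g.encode()).hexdigest(), 16) for g in grams}
    return {h for h in hashes if h % keep_mod == 0}

def resemblance(fp_a, fp_b):
    """Overlap of two fingerprints (Jaccard on the kept hashes)."""
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)
```

Copy-and-paste reuse yields a high resemblance because long identical passages produce many identical n-gram hashes, while unrelated articles share practically none.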
8.3 System Overview
NewsDeps is a NIRD approach that aims to reveal information reuse at the article level, i.e., to identify and visualize the main sources of information for each of the articles. When applied to a set of related articles, NewsDeps shows which articles use information from previous articles, and which articles are unrelated. The workflow consists of three phases: (1) news import, (2) similarity measurement, and (3) visualization.
8.3.1 Import of News Articles
NewsDeps can import articles from JSON files, from the Common Crawl News Archive (also referred to as CC-NEWS in the literature), or by URL import. The system accepts any document that contains a title, main text, publisher, and publishing date and time.
Additional fields, such as the background color of each article in the visualization, can also be processed. The URL import allows users to import a list of URLs referring to online articles. The system then retrieves the required fields using news-please, a web crawler and information extractor tailored for news (Hamborg et al. 2017). News-please is also used to import news articles from the Common Crawl News Archive (Nagel 2016). To do so, users define filter criteria, such as a date range for the publication date, search terms that the headlines or the articles' main text must contain, or a list of outlets. These filter criteria are passed to news-please, which subsequently searches for relevant news articles in the Common Crawl News Archive.²
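A minimal record for an imported article could look as follows. The field names and the `extras` dictionary are illustrative assumptions for this sketch, not NewsDeps' actual schema; the system only requires that title, main text, publisher, and publishing date are present.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Article:
    """Minimal article record for import (illustrative field names)."""
    title: str
    main_text: str
    publisher: str
    published: datetime
    extras: dict = field(default_factory=dict)  # e.g., a display color

# Example: one record as it might be parsed from a JSON object.
art = Article(
    title="Tank column crosses from Russia into Ukraine: Kiev military",
    main_text="NATO has seen an increase in Russian troops ...",
    publisher="CNBC",
    published=datetime(2014, 11, 7, 12, 0),
)
```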
8.3.2 Measurement of Information Reuse in Articles
Upon user request, NewsDeps computes a k × k document-to-document similarity (d2d) matrix, where k is the number of imported articles. Specifically, we compute only the upper right diagonal half of the d2d matrix, as information can only flow from previous articles to articles published afterward. Which article existed first, and thus may have served as a source of information for articles published afterward, is determined by the publishing date (see Sect. 8.3.1). NewsDeps currently supports two basic text similarity measures, (1) term frequency and inverse document frequency (TF-IDF) & cosine similarity and (2) Jaccard, and two plagiarism scores provided by the well-established approaches (3) Sherlock (Joy and Luck 1999) and (4) JPlag (Prechelt et al. 2002), which commonly serve as baseline similarity measures in PD. The user can choose which similarity measure to apply. TF-IDF is a means to represent a text document as a numerical vector of size n, where n is the number of terms in a pre-defined vocabulary, which is typically chosen to be the set of all terms occurring in all documents that are to be processed or analyzed. For each term t in a document d given a document corpus D, TF-IDF balances how often t occurs in d (term frequency or TF) against how specific t is for d given all documents in D (inverse document frequency or IDF). The higher TF-IDF is for a specific term in a specific document, the more "representative" the term is for the document. Documents represented in such a vector space, e.g., by using TF-IDF to retrieve the previously described numerical representation, can then be compared using the cosine similarity, which uses the angle between the two vectors representing two documents to determine the documents' similarity. Sherlock and JPlag try to estimate how likely one of two documents was "plagiarized" from the other (first case in Sect. 8.4), while the text similarity measures TF-IDF & cosine and Jaccard are more suitable to group similar articles, e.g., by topic (second case). We linearly normalize all similarity values between 0 and 1, where 1 indicates maximum similarity between two articles.

² News-please currently accesses the raw archive provided by the Common Crawl project. To speed up the search process, we are planning to preprocess the archive and to import the extracted articles into a database. A first step towards this goal has been implemented in the POLUSA dataset (Gebhard and Hamborg 2020).
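The two basic text similarity measures and the upper-triangular d2d matrix can be sketched as follows. This is a self-contained illustration under simplifying assumptions (whitespace tokenization, raw TF and log IDF, our own function names), not NewsDeps' implementation.

```python
import math
from collections import Counter

def tokens(text):
    return text.lower().split()

def tfidf_vectors(docs):
    """Build one sparse TF-IDF vector per document over the shared vocabulary."""
    toks = [tokens(d) for d in docs]
    df = Counter(t for doc in toks for t in set(doc))  # document frequency
    n = len(docs)
    return [{t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf}
            for doc, tf in ((doc, Counter(doc)) for doc in toks)]

def cosine(u, v):
    """Cosine of the angle between two sparse vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def jaccard(a, b):
    """Overlap of the two documents' term sets."""
    sa, sb = set(tokens(a)), set(tokens(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def d2d_matrix(docs, sim):
    """Upper-right half only: with docs ordered by publishing date,
    information can only flow from earlier to later articles."""
    k = len(docs)
    return {(i, j): sim(i, j) for i in range(k) for j in range(i + 1, k)}
```

For k articles sorted by publishing date, `d2d_matrix(docs, lambda i, j: jaccard(docs[i], docs[j]))` yields the k·(k-1)/2 pairwise scores that the visualization consumes.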
8.3.3 Visualization of News Dependencies
NewsDeps visualizes the dependencies of the imported articles in a directed graph (see Fig. 8.2). The articles are ordered temporally on one axis (by default the x-axis, but the user can swap the function of both axes). Each node represents a single article. We use the width of the edge between two articles as a visual variable to encode the pair's similarity, i.e., the more similar, the thicker the edge. Pairs below a user-defined similarity threshold are not connected visually. The other axis in the graph, by default the y-axis, allows placing the articles in a way that avoids occlusion and visual clutter, e.g., due to crossing edges. Technically, the graph is created by first placing articles at their temporal position. Afterwards, we apply force-directed node placement (Fruchterman and Reingold 1991) in the other dimension, i.e., the articles are moved along the second axis until all articles are placed in a way that their distances to each other best represent their similarities. We call the resulting graph a temporal-force-directed (TFD) graph. The primary purpose of the TFD graph is to reduce overlap of articles and edge crossings while adhering to chronological order. Note that most visualizations use force-directed graphs to show (dis-)similarity of elements, but the TFD graph's temporal placement of nodes may skew perceived distances between nodes. Therefore, we encode article similarity using the width of the edges. NewsDeps follows the Information Seeking Mantra (Shneiderman 1996): overview first, zoom and filter, then details on demand. Therefore, the approach offers four levels of detail (LOD) to display articles: none (the graph shows each article as a point only), source (the node displays an article's publisher), title (title only), and detailed (title, publisher, main picture, and lead paragraph). An additional, automated mode chooses a proper LOD by balancing between showing more details and reducing overlap of visuals.
This way, the visualization can be used in a broad spectrum of use cases, ranging from closely investigating only a few articles (for which the automated mode chooses the most detailed LOD, i.e., "detailed") to getting an overview of hundreds of articles (where LOD "none" is used). NewsDeps provides various interaction techniques (Shneiderman 1996) to support the visual exploration by the user. For example, both axes and the graph can be dragged and zoomed to adjust the position and granularity of the time frame, details on demand are shown by hovering over important elements, such as articles and edges, and clicking on an article opens a popup showing the full article (see Fig. 8.2). Each article has a linked axis indication on each of the axes, which allows users to easily find the article that was published at a specific time (by hovering over the article, the indications on both axes are highlighted), and vice versa. Users can set the opacity of nodes, edges, and axis indications, which helps with occlusion that can occur in analyses of many articles. The
Fig. 8.2 NewsDeps shows articles as nodes in a temporal-force-directed graph. The x-axis encodes the publishing date, whereas nodes’ placement on the y-axis is used to improve readability of the visualization, e.g., by reducing crossing edges
edges are naturally directed with respect to the chronological order, since articles can only reuse information from previously published articles. While other visualization techniques, such as the arc diagram or hierarchical edge bundling, could also be used to visualize the results of the system's analysis, we think that especially non-expert users may find it difficult to navigate and use these visualizations due to their complexity, e.g., caused by their increased functionality and capabilities. The temporal-force-directed graph, on the other hand, is a comparatively simple visualization, which we think allows users to quickly understand its visualization concept and how to use it.
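The two-step layout, fixing one coordinate by publishing time and relaxing the other with similarity-driven forces, can be sketched as follows. The force model, constants, and function names here are illustrative assumptions for a 1-D relaxation, not the parameters NewsDeps uses: similar articles attract in proportion to their similarity, and a small constant repulsion keeps nodes from overlapping.

```python
def tfd_layout(times, sims, iterations=200, step=0.05):
    """Sketch of a temporal-force-directed placement.
    times: publication time per article (fixed x positions, sorted).
    sims:  dict {(i, j): similarity in [0, 1]} for i < j.
    Returns (x, y) positions: y starts spread out and is then relaxed so
    that similar articles end up close together on the free axis."""
    k = len(times)
    ys = [float(i) for i in range(k)]  # initial vertical spread
    for _ in range(iterations):
        forces = [0.0] * k
        for i in range(k):
            for j in range(i + 1, k):
                d = ys[j] - ys[i]
                s = sims.get((i, j), 0.0)
                # attraction proportional to similarity, constant repulsion
                f = s * d - 0.1 * (d / (abs(d) + 1e-9))
                forces[i] += step * f
                forces[j] -= step * f
        ys = [y + f for y, f in zip(ys, forces)]
    return list(zip(times, ys))
```

With a high similarity between the first two of three articles and none to the third, the first two converge to neighboring y positions while the unrelated article drifts away, mirroring how the TFD graph separates unrelated coverage while keeping the time axis untouched.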
8.4 Case Study
We analyze the strengths and weaknesses of our approach in a theoretical case study of two real-world news scenarios. In the first case, we investigate how NewsDeps supports the exploration of news dependencies. Specifically, a non-expert news consumer seeks to find out which articles in a set of articles reporting on the same topic reuse information, and which information they reuse from which other articles. Figure 8.2 shows the first case: six articles reporting on the same event during the Ukraine crisis on November 11, 2014. We use JPlag, as we are interested in the likelihood that one article reused information from another. The edges in Fig. 8.2 show that Western outlets (blue articles) reused information from other Western outlets but not, or to a lesser extent, from Russian outlets (red articles), and vice versa.3 Hence, Western and Russian articles form separate, mostly unconnected groups. The thick edge in the middle reveals that the CNBC article contains a large portion of information from the Reuters press release: the authors only changed the title and a few other paragraphs towards the end of the article; the lead paragraph is a 1:1 copy (see the labels of both nodes). NewsDeps additionally reveals content differences through the detailed LOD: Western media reported that Russian troops invaded Ukraine, while the Russian state news agency TASS stated, to the contrary, that even the claims that tanks were near the borders were false.

In the second case, we investigate how NewsDeps supports regular news consumption. Specifically, a non-expert news consumer seeks to get an overview of the main news topics in a set of articles from the same day. We use TF-IDF & cosine, since we are interested in grouping articles reporting on the same topic, not in finding instances of information reuse. Figure 8.3 shows three clusters of interconnected articles, each cluster reporting on a separate topic on May 19, 2017.
With the LOD set to “title,” the visualization helps users to quickly get an overview of the different topics. The TFD graph helps to see when coverage of each topic began and when new articles were published. The temporal placement also allows users to track when and which information is added to the coverage.
3 This phenomenon is typically studied as media bias by source selection in the social sciences.
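The TF-IDF & cosine measure used in the second case can be sketched in a few lines. The following is a generic textbook formulation over pre-tokenized documents, not the exact implementation in NewsDeps:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a sparse TF-IDF vector (dict) for each tokenized document."""
    n = len(docs)
    df = Counter()  # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] / len(doc) * idf[t] for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity of two sparse vectors represented as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Articles on the same topic share rare terms and thus score higher, which is why this measure groups topical clusters rather than detecting literal reuse.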
Fig. 8.3 Three groups of articles, each representing a different topic. The popup shows details of a selected article
For example, early articles of the blue topic state that a Turkish passenger was detained on a flight, while the last article adds more detailed information on how the passenger was restrained. Finally, the full-article popup allows users to read the whole story.
8.5 Discussion and Future Work
NewsDeps is a first step in the direction of semi-automated NIRD. The first case in our study demonstrated that NewsDeps helps readers to differentiate between articles that add new information or present an event from a different perspective and others that merely repeat previously published information. Hence, the visualization increases transparency, because users can effectively follow the dissemination and determine the origin of information in a group of related articles. NewsDeps can also be helpful to researchers concerned with the relations and dependencies of articles and publishers, such as in the social sciences. The second case showed that NewsDeps also allows getting an overview of multiple topics, which is the primary purpose of popular news aggregators, such as Google News. However, to quickly get an overview of many articles, the visualization currently lacks cluster labels. By summarizing a topic with one label, readers would also be able to get an overview of large-scale news scenarios.

The first case of our case study showed that NewsDeps detects simple forms of information reuse. In the future, we plan to investigate how to also identify more complex forms, such as paraphrased articles or cases where authors omitted single sentences or paragraphs. To this end, we plan to analyze the events described in articles: an event can be described by the answers to the five journalistic W questions and one H question (5W1H), i.e., who did what, when, where, why, and how (Sharma et al. 2013; Hamborg et al. 2018a, 2019a). Instead of analyzing all tokens in the text, we plan to conceive similarity measures that analyze and compare the 5W1H of the events described in two articles. Another research direction will be to train a language model, such as BERT (Devlin et al. 2018), and devise a neural architecture to identify information reuse in news. While datasets for training exist and have successfully been used in other domains (Yang et al. 2019), for the news domain such a dataset must first be created.

NewsDeps currently allows users to analyze dependencies at the article level. Following the “overview first, details on demand” mantra (Shneiderman 1996), we plan to add a visualization in which users can select an article and compare it with its main sources. The visualization will then enable users to view which information has been added or omitted compared to the sources. PD visualizations commonly allow for a detailed comparison of a selected document and suspicious documents (Gipp 2014), in our case the main sources.
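A similarity measure over extracted events, as envisioned above, could look roughly like the following sketch. We assume the 5W1H answers have already been extracted (e.g., by a Giveme5W1H-style system) and normalized into sets of phrases; the function name, the per-question weighting, and the Jaccard overlap are illustrative choices, not a description of an existing implementation:

```python
def w5h1_similarity(event_a, event_b, weights=None):
    """Sketch of an event-level similarity: compare the answers to the
    journalistic 5W1H questions of two articles instead of all tokens.
    Each event is a dict mapping 'who'/'what'/'when'/'where'/'why'/'how'
    to a set of normalized phrases."""
    questions = ("who", "what", "when", "where", "why", "how")
    weights = weights or {q: 1.0 for q in questions}
    score, total = 0.0, 0.0
    for q in questions:
        a, b = event_a.get(q, set()), event_b.get(q, set())
        if a or b:
            # Jaccard overlap of the answer phrases for this question
            score += weights[q] * len(a & b) / len(a | b)
            total += weights[q]
    return score / total if total else 0.0
```

Two articles that agree on who acted where but disagree on what happened would receive a medium score, which is exactly the kind of partial agreement that token-level measures tend to blur.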
8.6 Conclusion
In this paper, we described NewsDeps, the first information reuse detection system for news, which combines analysis and visualization of news dependencies at the article level. With the help of state-of-the-art similarity measures, our system determines the main sources of information for each of the analyzed articles. The visualization shows all articles in a temporal-force-directed (TFD) graph, which reveals whether and how strongly articles relate to one another. By showing how related articles reuse information, NewsDeps helps users to assess their authenticity. Additionally, NewsDeps increases the efficiency of news consumption, as users can see whether an article contains novel information or is a mere copy of other articles. In a case study, we demonstrated that NewsDeps reveals which articles repeat known information, and which articles contain new information or show a different perspective than previously published articles. Hence, the TFD graph helps to understand the dependencies in news coverage at the article level.

Acknowledgments This work was supported by the Carl Zeiss Foundation, the Zukunftskolleg program of the University of Konstanz, and the Heidelberg Academy of Sciences and Humanities (Heidelberger Akademie der Wissenschaften). We thank the anonymous reviewers for their valuable comments, which significantly helped to improve this article.
References

Agirre E, Banea C, Cer D et al (2016) SemEval-2016 Task 1: semantic textual similarity, monolingual and cross-lingual evaluation. In: Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pp 497–511
Alzahrani S, Salim N (2010) Fuzzy semantic-based string similarity for extrinsic plagiarism detection: lab report for PAN at CLEF 2010. In: CEUR workshop proceedings, pp 1–8
Baker CF, Fillmore CJ, Lowe JB (1998) The Berkeley FrameNet project. In: Proceedings of the 36th annual meeting of the Association for Computational Linguistics, Stroudsburg, pp 86–90
Bao J, Lyon C, Lane PCR et al (2007) Comparing different text similarity methods. Technical report. University of Hertfordshire
Christian D, Froke P, Jacobsen S, Minthorn D (2014) The Associated Press stylebook and briefing on media law. The Associated Press
CNBC (2014) Tank column crosses from Russia into Ukraine: Kiev military. http://www.cnbc.com/id/102155038. Accessed 22 Aug 2018
Croft A (2014) NATO sees increase in Russian troops along Ukraine border. http://www.reuters.com/article/us-ukraine-crisis-nato-idUSKBN0IR1KI20141107. Accessed 23 Aug 2018
Devlin J, Chang M-W, Lee K, Toutanova K (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805
Elhadi M, Al-Tobi A (2009) Duplicate detection in documents and webpages using improved longest common subsequence and documents syntactical structures. In: 2009 4th international conference on computer sciences and convergence information technology. IEEE, Seoul
Ferrero J, Agnes F, Besacier L, Schwab D (2017) Using word embedding for cross-language plagiarism detection. In: Proceedings of the 15th conference of the European chapter of the Association for Computational Linguistics, pp 415–421
Frank R (2003) “These crowded circumstances”: when pack journalists bash pack journalism. Journalism 4:441–458. https://doi.org/10.1177/14648849030044003
Fruchterman TMJ, Reingold EM (1991) Graph drawing by force-directed placement. Softw Pract Exp 21:1129–1164. https://doi.org/10.1002/spe.4380211102
Gebhard L, Hamborg F (2020) The POLUSA dataset: 0.9M political news articles balanced by time and outlet popularity. In: Proceedings of the ACM/IEEE joint conference on digital libraries (JCDL), virtual event, pp 1–2
Gipp B (2014) Citation-based plagiarism detection. Springer Vieweg
Hamborg F (2020) Media bias, the social sciences, and NLP: automating frame analyses to identify bias by word choice and labeling. In: Proceedings of the 58th annual meeting of the Association for Computational Linguistics: student research workshop. Association for Computational Linguistics, Stroudsburg, pp 79–87
Hamborg F, Meuschke N, Breitinger C, Gipp B (2017) news-please: a generic news crawler and extractor. In: Proceedings of the 15th international symposium of information science. Verlag Werner Hülsbusch, pp 218–223
Hamborg F, Lachnit S, Schubotz M et al (2018a) Giveme5W: main event retrieval from news articles by extraction of the five journalistic W questions. In: Proceedings of the iConference 2018, Sheffield
Hamborg F, Meuschke N, Gipp B (2018b) Bias-aware news analysis using matrix-based news aggregation. Int J Digit Libr. https://doi.org/10.1007/s00799-018-0239-9
Hamborg F, Breitinger C, Gipp B (2019a) Giveme5W1H: a universal system for extracting main events from news articles. In: Proceedings of the 13th ACM conference on recommender systems, 7th international workshop on news recommendation and analytics (INRA 2019), Copenhagen
Hamborg F, Donnay K, Gipp B (2019b) Automated identification of media bias in news articles: an interdisciplinary literature review. Int J Digit Libr 20:391–415. https://doi.org/10.1007/s00799-018-0261-y
Joy M, Luck M (1999) Plagiarism in programming assignments. IEEE Trans Educ 42:129–133. https://doi.org/10.1109/13.762946
Kent CK, Salim N (2010) Web based cross language plagiarism detection. In: 2010 second international conference on Computational Intelligence, Modelling and Simulation (CIMSiM), pp 199–204
Kienreich W, Granitzer M, Sabol V, Klieber W (2006) Plagiarism detection in large sets of press agency news articles. In: 17th international workshop on Database and Expert Systems Applications (DEXA ’06)
Kim JW, Candan KS, Tatemura J (2009) Efficient overlap and content reuse detection in blogs and online news articles. In: Proceedings of the 18th international conference on World Wide Web, pp 81–90
Marchi R (2012) With Facebook, blogs, and fake news, teens reject journalistic “objectivity”. J Commun Inq 36:246–262. https://doi.org/10.1177/0196859912458700
Maurer H, Kappe F, Zaka B (2006) Plagiarism – a survey. J Univ Comput Sci 12:1050–1084. https://doi.org/10.3217/jucs-012-08-1050
Meuschke N, Gipp B (2013) State-of-the-art in detecting academic plagiarism. Int J Educ Integr 9:50. https://doi.org/10.21913/IJEI.v9i1.847
Miller GA, Beckwith R, Fellbaum C et al (1990) Introduction to WordNet: an on-line lexical database. Int J Lexicogr 3:235–244. https://doi.org/10.1093/ijl/3.4.235
Moreau E, Jayapal A, Lynch G, Vogel C (2015) Author verification: basic stacked generalization applied to predictions from a set of heterogeneous learners. In: CEUR workshop proceedings
Mozgovoy M, Kakkonen T, Cosma G (2010) Automatic student plagiarism detection: future perspectives. J Educ Comput Res 43:511–531. https://doi.org/10.2190/ec.43.4.e
Nagel S (2016) News dataset available. http://web.archive.org/save/http://commoncrawl.org/2016/10/news-dataset-available. Accessed 3 Mar 2020
Osman AH, Salim N, Binwahlan MS et al (2012a) Plagiarism detection scheme based on semantic role labeling. In: International conference on information retrieval & knowledge management (CAMP). IEEE, Kuala Lumpur, pp 30–33
Osman AH, Salim N, Binwahlan MS et al (2012b) An improved plagiarism detection scheme based on semantic role labeling. Appl Soft Comput 12:1493–1502. https://doi.org/10.1016/j.asoc.2011.12.021
Palmer M, Kingsbury P, Gildea D (2005) The proposition bank: an annotated corpus of semantic roles. Comput Linguist 31:71–106. https://doi.org/10.1162/0891201053630264
Pera MS, Ng YK (2011) SimPaD: a word-similarity sentence-based plagiarism detection tool on Web documents. Web Intell Agent Syst 9(1):27–41. https://doi.org/10.3233/WIA-2011-0203
Potthast M, Barrón-Cedeño A, Stein B, Rosso P (2011) Cross-language plagiarism detection. Lang Resour Eval 45:45–62. https://doi.org/10.1007/s10579-009-9114-z
Prechelt L, Malpohl G, Philippsen M (2002) Finding plagiarisms among a set of programs with JPlag. J Univ Comput Sci 8:1016–1038. https://doi.org/10.3217/jucs-008-11-1016
Rychalska B, Pakulska K, Chodorowska K et al (2016) Samsung Poland NLP Team at SemEval-2016 Task 1: necessity for diversity; combining recursive autoencoders, WordNet and ensemble methods to measure semantic similarity. In: Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pp 602–608
Ryu C-K, Kim H-J, Cho H-G (2009) A detecting and tracing algorithm for unauthorized internet-news plagiarism using spatio-temporal document evolution model. In: Proceedings of the 2009 ACM symposium on applied computing, pp 863–868
Sanderson M (1997) Duplicate detection in the Reuters collection. Technical report. Department of Computing Science, University of Glasgow. http://eprints.whiterose.ac.uk/4571/1/Duplicates.pdf
Šarić F, Glavaš G, Karan M et al (2012) TakeLab: systems for measuring semantic text similarity. In: Proceedings of the first joint conference on lexical and computational semantics, pp 441–448. https://www.aclweb.org/anthology/S12-1060.pdf
Scheufele DA (2000) Agenda-setting, priming, and framing revisited: another look at cognitive effects of political communication. Mass Commun Soc 3:297–316
Schuler KK (2005) VerbNet: a broad-coverage, comprehensive verb lexicon. University of Pennsylvania
Sharma S, Kumar R, Bhadana P, Gupta S (2013) News event extraction using 5W1H approach & its analysis. Int J Sci Eng Res 4:2064–2068
Shneiderman B (1996) The eyes have it: a task by data type taxonomy for information visualizations. In: Proceedings 1996 IEEE symposium on visual languages, pp 336–343. https://doi.org/10.1109/VL.1996.545307
TASS Russian News Agency (2014) Statements on alleged Russian troops near Ukrainian border provocative – Defense Ministry. http://tass.com/russia/758504. Accessed 23 Aug 2018
The Media Insight Project (2014) The personal news cycle: how Americans get their news. http://web.archive.org/web/20200220173318/https://www.americanpressinstitute.org/publications/reports/survey-research/personal-news-cycle/. Accessed 23 Aug 2019
Thompson V, Bowerman C (2017) Detecting cross-lingual plagiarism using simulated word embeddings. CoRR abs/1712.1
Tsatsaronis G, Varlamis I, Giannakoulopoulos A, Kanellopoulos N (2010) Identifying free text plagiarism based on semantic similarity. In: Proceedings of the 4th international plagiarism conference. Citeseer, Newcastle upon Tyne
Uzuner O, Katz B (2005) Capturing expression using linguistic information. In: Proceedings of the 20th national conference on artificial intelligence. AAAI Press, Pittsburgh, pp 1124–1129
Vu HH, Villaneau J, Saïd F, Marteau PF (2014) Sentence similarity by combining explicit semantic analysis and overlapping n-grams. In: Sojka P et al (eds) Text, speech and dialogue. Lecture notes in computer science, vol 8655. Springer, pp 201–208
Weber-Wulff D (2010) Test cases for plagiarism detection software. In: Proceedings of the 4th international plagiarism conference
Yang Z, Dai Z, Yang Y et al (2019) XLNet: generalized autoregressive pretraining for language understanding. Adv Neural Inf Process Syst 32:5753–5763
Felix Hamborg, MSc, is coauthor of the paper “NewsDeps: Visualizing the Origin of Information in News Articles.” After his time as a scholarship holder of the Carl Zeiss Foundation, he is doing research as a research associate at the Heidelberg Academy of Sciences on the automated identification of certain forms of media bias in news articles as well as in the fields of deep learning and natural language processing (NLP).

Philipp Meschenmoser, MSc, is coauthor of the paper “NewsDeps: Visualizing the Origin of Information in News Articles.” He is a PhD student at the Department of Data Analysis and Visualization at the University of Konstanz and works in the fields of temporospatial visual analytics and open science.

Moritz Schubotz, PhD, is coauthor of the paper “NewsDeps: Visualizing the Origin of Information in News Articles.” He is a researcher in the Department of Mathematics at FIZ Karlsruhe – Leibniz Institute for Information Infrastructure and works in the areas of mathematical information retrieval and open science.

Philipp Scharpf, MSc, is coauthor of the paper “NewsDeps: Visualizing the Origin of Information in News Articles.” He is a researcher at the University of Konstanz and the University of Wuppertal and works in the areas of information retrieval, natural language processing, and machine learning.

Bela Gipp, Prof. Dr., is coauthor of the paper “NewsDeps: Visualizing the Origin of Information in News Articles.” He is Professor of Data and Knowledge Engineering at the University of Wuppertal and works in the areas of information retrieval, distributed ledger technologies, and open science.
Index
A
Anti-fascism, 95, 96
Anti-Fascist Protection Wall, 89, 98, 101, 103, 107
Aristotle, 47

B
Barnes, Barry, 9, 11
Baudrillard, Jean, 75–77, 79, 81, 85, 86
Berlin Wall, 98, 99, 101, 103, 105, 106, 108
Black box, vii, 137, 138
Bloor, David, 9, 11, 19

C
Carnap, Rudolf, 62, 63, 86
Coherence theory, 14, 63, 86
Common Crawl News Archive, 154
Consensus theory, 11
Conspiracy theory, v, 8, 83, 84, 91, 94, 98, 107, 134
Conway, Kellyanne, 73–75, 82, 85
Copy and paste, 151
Correspondence theory, 4, 5, 14, 20, 62, 63, 78, 79, 81, 82, 85, 86

D
Darwin, Charles, 33
Deep learning, 118, 124
Dimitrov thesis, 92–94, 103
Disinformation, v, vii, 2, 3, 36, 107, 115–117, 132

E
Edinburgh Strong Programme, 10
Expertise, 5, 6, 8

F
Facts, alternative, v, vi, 2, 4, 62, 74–77, 81, 82
Factuality, v, vi, 47, 57–60, 64, 69
Fake news, v–vii, 2–4, 6, 21, 33, 36, 37, 74, 76, 81, 82, 91, 96, 106, 107, 113–118, 120–122, 124, 125, 127, 132, 134, 135, 137–140, 142
Fascism, vii, 92, 94, 95, 103, 108
Feyerabend, Paul, 6, 7, 9
Fiction, v, vi, 1, 2, 47, 58, 75, 78, 102
Fictionality, v, vi, 47, 57–60, 69
Fiction fake, 62
Filter bubble, 1, 36
For Eyes Only, 101–104
Foucault, Michel, 3, 77
Framing, 91
Fuller, Steve, vi, 4–6, 9, 22, 76

G
German Democratic Republic, 90, 91, 98, 106
Grievance studies, 38–40

H
Habermas, Jürgen, 11
Hacking, Ian, 19, 30
Hempel, Carl, 63
Histoire, 46, 47, 59
Hoax, vi, 17, 31–35, 37–41
Hötger, Dieter, 89, 91, 97, 107, 108

I
Incommensurability, 13, 15
# The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023 P. Klimczak, T. Zoglauer (eds.), Truth and Fake in the Post-Factual Digital Age, https://doi.org/10.1007/978-3-658-40406-2
K
Kuhn, Thomas, 5, 11, 41

L
Latour, Bruno, 9
Lynch, Michael, 12

M
Machiavelli, Niccolò, 6
Machine learning, 116, 126, 135, 137, 138, 142, 152
Marxism-Leninism, vii, 92, 93
Media literacy, 137, 139, 142
Metadata analysis, 124
Misinformation, 132, 138, 139
Mockumentary, 47
Mueller Report, 118

N
Natural language processing, 123
NewsDeps, viii, 151, 154–156, 158, 160, 161
Nietzsche, Friedrich, 2, 75

O
Obama, Barack, 48, 49, 52
Ontological turn, 15, 16

P
Perspectivism, 12, 13, 15, 18
Plagiarism, 150, 152, 153, 155
Poe, Edgar Allan, 32, 35
Politics, post-factual, vi, 3–5
POS tagging, 123
Post-factualism, vi
Postmodernism, 3, 17, 76, 82, 83
Post-truth, v, 1, 3–5, 18, 22, 76
Publish-consume cycle, 150
Pynchon, Thomas, 83, 85, 86

R
Realism, 15
realism, perspectival, 21, 22
realism, scientific, 29
Reality, v, vi, 4, 15, 19, 20, 30, 59, 63, 66, 70, 75, 78, 85, 102
Relativism, vi, 3, 9, 11–16, 19, 22
truth relativism, v, 6
Russell, Bertrand, 47, 62, 63

S
Schön, Jan Hendrik, 28, 29
Science and Technology Studies, vi, 3, 8, 9, 11
Science skepticism, vi
Semantic textual similarity, 153
Shake and paste, 151
Simpsons, 48, 50–53, 55
Social constructivism, 17, 19, 76
Social media, v, vii, 2, 113–115, 126, 134, 138, 142
Sokal, Alan, vi, 30, 31, 35, 37–39
Sokal Squared, 38–41
Spicer, Sean, 48–53, 55, 56, 74, 82
Symmetry thesis, 10, 30

T
Titzmann, Michael, vi, 47
Trump, Donald, 3, 5, 7, 11, 13, 17, 18, 20, 36, 48–52, 73–75, 77, 81, 82, 85, 87, 106
TrustyTweet, vii, 139, 140, 142
Truth, v, vi, 1, 4, 5, 9, 11, 13, 17, 19–21, 62, 63, 65, 75, 77, 79, 85, 86, 102, 108, 115
Turner, Stephen, 6, 7, 18

W
White box, vii, 137
Wittgenstein, Ludwig, 5, 18, 19

Z
Zoglauer, Thomas, vi, 76