
Science and Democracy: Controversies and conflicts
Edited by Pierluigi Barrotta and Giovanni Scarafile

Controversies (CVS), Volume 13
John Benjamins Publishing Company

Controversies (CVS): Ethics and Interdisciplinarity
issn 1574-1583

Controversies includes studies in the theory of controversy or any of its salient aspects, studies of the history of controversy forms and their evolution, case studies of particular historical or current controversies in any field or period, edited collections of documents of a given controversy or a family of related controversies, and other controversy-focused books. The series also acts as a forum for ‘agenda-setting’ debates, where prominent discussants of current controversial issues take part. Since controversy necessarily involves dialogue, manuscripts focusing exclusively on one position will not be considered. For an overview of all books published in this series, please see http://benjamins.com/catalog/cvs

Editor: Giovanni Scarafile, University of Salento
Founding Editor: Marcelo Dascal, Tel Aviv University

Advisory Board
Jens Allwood, University of Gothenburg
Han-Liang Chang, National Taiwan University
Frans H. van Eemeren, University of Amsterdam
Adriano Fabris, University of Pisa
Gerd Fritz, University of Giessen
Alan G. Gross, University of Minnesota
Geoffrey Lloyd, Cambridge University
Kuno Lorenz, University of Saarbrücken
Stephen Toulmin, University of Southern California
Ruth Wodak, Lancaster University / University of Vienna
Quintín Racionero, UNED, Madrid
Yaron M. Senderowicz, Tel Aviv University
Thomas Gloning, University of Giessen
Chaoqun Xie, Fujian Normal University
Ghil’ad Zuckermann, University of Adelaide

Volume 13
Science and Democracy: Controversies and conflicts
Edited by Pierluigi Barrotta and Giovanni Scarafile

Science and Democracy: Controversies and conflicts

Edited by
Pierluigi Barrotta, University of Pisa
Giovanni Scarafile, University of Salento

John Benjamins Publishing Company
Amsterdam / Philadelphia


The paper used in this publication meets the minimum requirements of the American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ansi z39.48-1984.

doi 10.1075/cvs.13
Cataloging-in-Publication Data available from Library of Congress: lccn 2018003758 (print) / 2018016997 (e-book)
isbn 978 90 272 0074 7 (Hb)
isbn 978 90 272 6404 6 (e-book)

© 2018 – John Benjamins B.V. No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher. John Benjamins Publishing Company · https://benjamins.com

Table of contents

About the contributors · vii
Foreword: Like dwarfs on the shoulders of giants (Giovanni Scarafile) · 1
Introduction: The relationship between science and democracy: Harmonic and confrontational conceptions (Pierluigi Barrotta) · 7
Chapter 1. The dam project: Who are the experts? A philosophical lesson from the Vajont disaster (Pierluigi Barrotta and Eleonora Montuschi) · 17
Chapter 2. Rational decisions in a disagreement with experts (István Danka) · 35
Chapter 3. Rethinking the notion of public: A pragmatist account (Roberto Gronda) · 53
Chapter 4. The expert you are (Not): Citizens, experts and the limits of science communication (Selene Arfini and Tommaso Bertolotti) · 71
Chapter 5. Decisions without scientists? Two case studies about GM plants and invasive acacia in Hungary (Anna Petschner) · 87
Chapter 6. Save the planet, win the election: A paradox of science and democracy, an Israeli perpetuum mobile and Donald Trump (Aviram Sariel) · 109
Chapter 7. Science and the source of legitimacy in democratic regimes (Oded Balaban) · 127
Chapter 8. The ethics of communication and the Terra Terra project (Giovanni Scarafile and Maria Elena Latino) · 145
Chapter 9. The political use of science: The historical case of Soviet cosmology (Mauro Stenico) · 165
Chapter 10. The dialectical legacy of epigenetics (Flavio D’Abramo) · 185
Index · 197

About the contributors

Selene Arfini is a Ph.D. candidate in Philosophy at the University of Chieti and Pescara and a member of the Computational Philosophy Laboratory at the University of Pavia. Her current work concerns the foundation of an epistemology of ignorance, framed in the dialogue between extended cognition, cognitive niche construction, and the project of the “Naturalization of Logic.”

Oded Balaban is Professor Emeritus at the University of Haifa. His fields of specialization include the theory of knowledge, political philosophy, and the history of philosophy.

Pierluigi Barrotta is Full Professor of Philosophy of Science and Head of the Department of Civilizations and Forms of Knowledge at the University of Pisa. He is currently carrying out research on the relationships between science and democracy. His book Scientists, Democracy and Society: A Community of Inquirers is forthcoming from Springer.

Tommaso Bertolotti is a post-doctoral fellow and adjunct professor of cognitive philosophy at the University of Pavia. His main research areas include philosophy of technology, applied and social epistemology, and the philosophy and cognitive science of religion. In addition to several articles in peer-reviewed journals, he authored “Patterns of Rationality” (2015) and co-edited, with Lorenzo Magnani, the “Springer Handbook of Model-Based Science” (2017).

Flavio D’Abramo is Marie Curie Fellow at Freie Universität and Charité – Universitätsmedizin Berlin. He has worked at the University of Rome La Sapienza, the Ecole des Hautes Etudes en Sciences Sociales in Paris, Imperial College London, and Ruhr University Bochum. His work is based on highly interdisciplinary methodologies and focuses on the history, philosophy, sociology, and anthropology of biomedical research.

István Danka is Associate Professor at John von Neumann University, Kecskemét, Hungary, and Assistant Professor at the Budapest University of Technology and Economics, Hungary. He earned his PhD in Philosophy at the University of Leeds, UK, and held various scholarships at the Universities of Pécs (Hungary) and Vienna (Austria), LMU Munich, the Hungarian Academy of Sciences, and the Wittgenstein Archives Bergen (Norway).


Roberto Gronda. I am currently a postdoctoral researcher at the University of Pisa. After graduating from the University of Torino, I received my Ph.D. from the Scuola Normale Superiore of Pisa. My main field of expertise is the philosophy of science. I recently edited a book entitled “Pragmatismo e filosofia della scienza” (PUP), and I am now working on a book on the notion of articulation.

Maria Elena Latino graduated with honors in Management Engineering at the University of Salento in 2010 and has been a Research Fellow at the Department of Innovation Engineering of the University of Salento since 2012. Her research has a cross-disciplinary focus, with major interests in technology for food traceability, technology for marine and aquaculture applications, the new product development process, and the methodology and technology of complex systems.

Eleonora Montuschi is Associate Professor in Philosophy of Science at the Department of Philosophy and Cultural Heritage of Ca’ Foscari University of Venice. She is also a senior research fellow at the London School of Economics and Political Science. She currently works on scientific objectivity, on the theory and practice of evidence, and on issues of expertise in scientific research and policy.

Anna Petschner is currently a PhD student at the Budapest University of Technology and Economics, Hungary. In the Doctoral School of Philosophy and History of Science, she is examining scientific blogs and science communication theories in practice. Her research interests also include controversies, especially concerning the use of genetically modified plants and nuclear power plants.

Aviram Sariel is a doctoral candidate in Philosophy at the University of Tel Aviv. His current work includes theories of polemic exchanges (following Dascal’s Theory of Controversies), Early Modern Gnosticism, and occasional contributions to invention/patent theory; the present paper combines them all.

Giovanni Scarafile, PhD in philosophy (Lecce, 2001), is a Senior Lecturer in Philosophy at the University of Salento (Lecce, Italy), where he teaches Ethics of Communication. He also serves as vice president of the IASC (International Association for the Study of Controversies). His research interests include applied ethics and the relation of ethics to the arts. He is editor and co-editor of several books and the editor of the CVS Series, John Benjamins, Amsterdam.

Mauro Stenico is currently mayor of the municipality of Fornace and Cultore della materia at the Università degli Studi di Trento. He is interested in cosmology, the history of science, and theoretical and moral philosophy. After a degree in “Philosophy and the Languages of Modernity”, he received an international PhD (Italy-Germany) in “Political communication: from antiquity to the 20th century” in 2013.

Foreword
Like dwarfs on the shoulders of giants

Giovanni Scarafile



It is in the moment of controversy, of the critique and defense of opposed ideas, that doing philosophy is revealed in the presence of the observer in all its magnitude, beauty and difficulty. Marcelo Dascal (2017: XII)

1. Archeology of sense

In general terms, philosophy is the search for the orientation necessary to find the sense of things. For this reason, going back to the original meaning of the term orientation is not irrelevant, because it allows us to come closer to the definition of philosophy itself. As is well known, the word orientation comes from the Latin oriens, present participle of the Latin verb oriri, whose meaning is announced in an ancient saying: «Oriri apud antiquos surgere frequenter significat»; thus, for the ancients, oriri meant to arise frequently. Oriens is, therefore, what repeatedly arises and, as such, it escapes the transience of the elements that cannot claim the same constancy for themselves. The “surgere frequenter” is the principle of identifying what is constant and what, as such, becomes a point of reference. The term orientation, therefore, by derivation, means: to discern one’s position in relation to reference points, identified on the basis of the constancy of presentation of what arises. What is constantly present, in fact, has a distinctive value compared to the other elements. The oriens represents things bringing themselves out of fusional immanence, by virtue of a transcendence affirmed on the basis of constant presentation. This is why the early Greek and Roman temples were oriented towards the oriens.

doi 10.1075/cvs.13.01sca © 2018 John Benjamins Publishing Company


2. Sense as intelligibility

Orientation is the recognition of a difference in the order of things, from which it is possible to make choices – that is, opting for a thing considered to have a value, while at the same time renouncing others. Sense, grasped in the originality of its manifestation, is the intelligibility of the path that directs the gaze towards what is more constant, isolating it from what is not. Thinking about sense means remembering this original passage every time, reliving what lies at the base of it, feeling the need to separate oneself from the kingdom of indistinction, and going further, towards what is other than ourselves. Seeking sense is philosophizing, that is to say recognizing ourselves in the movement that projects us towards what is not yet at our disposal and which from that distance convokes us.

The archaeology of sense then represents the mythical place to return to when one is confused, disoriented. This is a frequent and physiological condition in philosophy. Sometimes, in fact, we lose sight of what directs our gaze and so the path to be taken becomes unintelligible. When this happens, we are led to see that “there is no sense”.

The numerous metaphors of movement used so far show the tribute that the definition of philosophy owes to the meaning of orientation, and therefore the connection between the search for meaning and the ability to draw maps. This is a non-extrinsic connection, so we can say that drawing a map is not only an activity aimed at creating an object functional to explorations. It presents, in fact, structural analogies with that search in which, lato sensu, every authentic philosophizing is summarized. Explicating the characteristics of a map, therefore, can help to find further features of philosophical research.

In general terms, a map is, first of all, the testimony that something has been found. Faced with the possibility that a search may lead to a blind alley, the map is an attestation of the intrinsic achievement of sense. Consequently, each map, while it indicates the route already travelled, intrinsically invites us to continue the journey; that is to say, to put ourselves in the same direction. The testimony of sense given by a map is first of all negative. Maps in fact tell us that not every route makes sense. The more objective a representation is, the more clearly it is able to show which is the best path to reach a given goal. Given the indirect testimony that not all roads are the same, a method is needed to achieve what one wants.

The existence of maps, moreover, demonstrates the affirmation of a specific conception of man as that entity which, in order to represent the world, is able to bring it back to its own measure. This is Heidegger’s well-known thesis in his essay “The Age of the World Picture”:


Beings as a whole are now taken in such a way that a being is first and only in being insofar as it is set in place by representing-producing humanity. […]. The being of beings is sought and found in the representedness of beings (2002: 67–68)

This goal becomes fully attainable in the modern age, when the representative function, fully deployed, focuses on the subject, the condition of possibility of representation. Talking about the subject in these terms means referring to monology, to being a law unto oneself. Drawing maps in monological terms means attributing the narration of the path of approximation to sense to the monotony of a single voice. In these terms, sense is traced back to individual enterprises, whether it is drawing maps or doing the reverse operation, returning from the map to reality, to the thing itself.

3. The role of controversies

Now, if we want to see better the territory indicated by the map – in other words, if we really want to consider the interweaving of philosophical questions – then the gaze of the individual is not enough. It is precisely this warning that underlies the theoretical proposal identified by Marcelo Dascal, founding editor of the CVS Series, with the well-known theory of controversies. We should instead be able to look at the world through a scaffolding that structurally knows how to place and enhance the gaze of the other. From this point of view, Dascal is, once again, authentically Leibnizian. It was indeed Leibniz himself who claimed that the other person’s point of view was the best place from which to see the world, when he said: “The other’s place is the true point of view both in politics and in morals” (Leibniz, 2006: 164).

Actually, these indications of the Leipzig philosopher can easily be misunderstood. It is quite easy to understand these words as a generic invitation to savoir vivre. I am convinced, however, that Dascal’s words allude to a very different challenge, the character of which is first and foremost heuristic. Therefore, it is not a general invitation to give space to others within a vision of the world that would remain firmly in our hands, but a key for reading and viewing the world that takes us far beyond the monological approach.

Each element of a map, of reality, and of a philosophy is formed by a complex set of relationships. That is the point. To effectively see a map, to see reality, to philosophize means to grasp these connections. This intrinsic dialogicity is an eminent condition for interpreting the data before our eyes and thus arriving at grasping the very meaning of what we seek. Thinking well means first of all seeing well; and to see well, we need to resort not only to ourselves.


Dascal’s theory of controversies offers a morphology of such connections and is therefore essential to research work. Through it, we are pushed towards reality with a different intentionality, one that aims to frame things through their relationships. Reality must therefore be seen and understood in the right way, through appropriate means of interpretation. This is the meaning of Dascal’s appeal to pragmatics, in the formulation offered by Paul Grice.

Recourse to the specificity of the other is not only a means of seeing reality better while remaining anchored in one’s own point of view. In Dascal’s proposal, the other is a value in himself, and that is why his prerogatives must be protected and taken care of. This protection is primarily, albeit not exclusively, achieved at the moment when the forms of interaction with others are studied and brought to light more and more.

4. The CVS series

Studying controversies responds to this primary objective, to which the CVS series, founded by Marcelo Dascal in 2005, is dedicated. It has so far seen twelve volumes published, rightly considered milestones in the context of contemporary philosophical production. The first element that, without triumphalism, should be emphasized is the innovative approach that Dascal has been able to imprint on his “creature”. Faced with philosophical methodologies that, in a very traditional way, univocally favour historiographic or philological reconstructions, the series focuses on controversies, considering the forms of encounter between people and ideas. Today, the CVS Series intends to remain faithful to this inspiration, continuing to be a point of reference for scholars and readers of controversies.

Fidelity to the Dascalian approach becomes concrete in the extension of the name of the series, with its two references to ethics and interdisciplinarity. These were originally included in the description of the series; now their role is made explicit. On the one hand, it is clear that a research approach within which otherness is not a marginal element but a structural dimension can be declared ipso facto ethical. On the other hand, it is inevitable that the climate of listening and dialogue initiated by this constitutive openness to otherness becomes concrete when it is translated into dialogue between disciplines. The ability to investigate controversies requires the participation of several disciplines. It is by developing the fundamental postulates of the theory of controversy that ethics and interdisciplinarity today become our travel companions.


5. Viaticum

As I prepare to continue the editing of this prestigious series of publications, I cannot deny sensing the weight of an overwhelming responsibility. For the support and encouragement received, I would like to thank Varda Dascal and all the members of the Advisory Board who have renewed, or expressed for the first time, their willingness to contribute to this collective scientific enterprise.

As a good omen for the continuation of the CVS series’ editorial activities, I would like to draw inspiration from an ancient expression of Gregorio Magno, who in his Regula Pastoralis (I, 3) writes: “Pro Veritate Adversa Diligere”, that is, “In seeking the truth, prefer adversity”. Without useless and misleading illusions, the maxim confronts us with the most demanding of choices: to be authentic, the experience of meaning must be entrusted to a centrifugal thrust, which projects us towards what is different from us, up to the level of incommensurability where every analogy between the ego and the other is rarefied. Faced with such an ascetic and ambitious destination, we can only feel all our inadequacy. For this reason, my hope, addressed to the readers and authors of this series, is that as dwarfs on the shoulders of giants we can continue to follow the path indicated by Marcelo Dascal and draw philosophical maps with which to orient ourselves in the urgent questions that await us.

References

Dascal, M. (2017). Foreword. In J. Navarro, How to Do Philosophy with Words (pp. xi–xiii). Amsterdam: John Benjamins.
Heidegger, M. (2002). The Age of the World Picture. In Off the Beaten Track. Cambridge: Cambridge University Press.
Leibniz, G. W. (2006). The Art of Controversies (translated and edited, with an introductory essay and notes, by M. Dascal with Q. Racionero & A. Cardoso). Dordrecht: Springer.

Introduction
The relationship between science and democracy: Harmonic and confrontational conceptions

Pierluigi Barrotta

The relationship between science and democracy has become a much-debated issue. In recent years, we have even seen an exponential growth in the literature on the subject. No doubt, the interest has partly been justified by the concern of public opinion over the technological repercussions of scientific research. Moreover, there are scientific theories that, if they were accepted, would allegedly imply the adoption of policies with wide social consequences, as well as a rethinking of deeply-rooted habits on the part of citizens. These considerations alone allow us to understand the reasons for the interest in the at times troublesome relationships between science and public opinion which characterize democratic societies.

Of course, the relationship between science and democracy has never been entirely peaceful. To give just one example among many, think of the fierce controversy generated in the 1950s by Velikovsky’s bizarre theories. On that occasion, to quote the sociologist Alfred de Grazia (1978: 16), “the scientific establishment rose up in arms, not only against the new Velikovsky theories, but against the man himself”. Part of public opinion stood up for Velikovsky and against what seemed an unjustifiable arrogance. However, today the relationship between science and democracy is certainly more complex. In this regard, it is sufficient to examine, even superficially, the disputes related to the use of nuclear power plants or GMOs, or the disputes on global warming and the limits to economic growth. These disputes cannot be easily brushed off as a clash between scientific knowledge and pseudo-science, as could have been done in the Velikovsky affair.

Thus, the present historical phase at least partially explains the interest in the topic; however, it is indeed only a partial explanation. In fact, we must carefully consider other phenomena if we want to understand the present debate. Along with conflicts between science and public opinion, we have to consider with equal attention the incessant controversies surrounding the nature of these conflicts. The title of the book – Science and Democracy. Controversies and Conflicts – refers to these two different levels, which jointly explain the interest that the topic arouses today.

doi 10.1075/cvs.13.02bar © 2018 John Benjamins Publishing Company


We have said that conflicts between scientific communities and public opinion surface with some continuity in the contemporary world. These conflicts tend to penetrate the very legal structures and economic conditions of democracies, and consequently require reflection on the way they should be understood. The academic and scientific world does not appear to be unanimous. In particular, it appears divided on a central question: do these conflicts belong to the “pathology” or the “physiology” of the relationships between science and democracy?

The first view claims that science and democratic societies cannot in principle conflict. In fact, a clear-cut division of tasks would exist: science aims to explain and predict facts (or, in short, it concerns the growth of knowledge), while democratic societies would have the task of making the best use of this knowledge in accordance with the purposes and values that they freely choose. Therefore, conflicts belong to the pathology of the relationship, since they can only take place when either scientific communities or public opinion infringes this clear division of tasks. The second interpretation of the nature of the conflicts challenges this view. Conflicts are inevitable because science claims to offer genuine knowledge of facts, while it is actually imbued with social values. In its most radical version, this interpretative line claims that social values, once they are unfolded, show why the alleged neutrality of science conceals power relations and negotiations among different social interests. If this interpretation is accepted, conflicts between scientific communities and public opinion become physiological, since they are inherent in the nature of scientific inquiry itself.

The Seventies and Eighties in all probability represent the turning point. In those years, the need for careful reflection on science and democracy came to the fore, since we witnessed particularly bitter conflicts between science and public opinion. Alarmed by public hostility toward new technologies, the Royal Society and the American Association for the Advancement of Science developed the programme of Public Understanding of Science, in which for the first time the scientific community addressed the problem in a conscious and deliberate way. As we will see, the attempt was not successful, but it was nonetheless a serious try. Along with these social conflicts, throughout these years we also witnessed the birth of claims that have given rise to extensive controversies in the academic world, since they radically questioned the above-mentioned clear division of tasks between science and society. We find, for example, the first formulations of the strong programme in the sociology of knowledge (cf. Bloor, 1976) and the publication of the influential book by Carolyn Merchant (1980), which marked the development of feminist epistemology. Their theses had vast cultural echoes.

The identification of a precise period as a turning point is largely arbitrary (on this point see also the periodization defended by H. Collins and R. Evans, 2002, and 2017). Both conflicts and controversies took place well before those years. Suffice it


to think of the conflicts following the birth of the environmental movements in the Sixties, or of the controversy surrounding the provocative work of Paul Feyerabend, who, starting from research carried out at the turn of the Fifties and Sixties, ended up defending a society free from the “authoritarianism” of science. Certainly, the epistemological debate that saw both Thomas Kuhn and Paul Feyerabend as protagonists has exerted a lasting influence. The current debate is the product of a cultural and social environment that should not be reduced in schematic ways to a few causes. Simplification is necessary in an introduction, but it should be taken with caution.

If the harmonic vision of the relationship between science and democratic societies was once predominant among scholars, today this vision has been so strongly criticized as to put its supporters on the defensive. The collection of essays edited by Noretta Koertge (1998) or the book by Hugh Lacey (1999), to mention a couple of examples, could be seen as manifestos in favour of the harmonic, non-confrontational vision of the relationship between science and democratic societies. Some brief remarks will show how the pendulum has oscillated between these two positions.

The conception of the intrinsic harmony

The conception of the intrinsic harmony between science and democracy has a long tradition. The alleged distinction of roles and tasks does not preclude the existence of beneficial mutual influences and fruitful analogies between the proper functioning of science and the functioning of democratic societies. Henri Poincaré, cited with approval by Lacey, is an example. In the essay La morale et la Science, Poincaré (1917) defends the ideal of value-free science, but at the same time he claims that the scientific community is an example that should be followed by the whole of society. In fact, as he writes (Poincaré, ibid., Eng. trans. p. 105): “Science keeps us in constant relation with something which is greater than ourselves […]. He who has tasted of this, who has seen it, if only from afar, the splendid harmony of the natural laws will be better disposed than another to pay little attention to his petty, egoistic interests”. Therefore, values such as intellectual honesty, openness to criticism and tolerance are essential prerequisites for both the growth of knowledge and civil progress. In a book edited by Noretta Koertge (2005), many scholars underline the thesis that scientific inquiry requires many virtues on the part of the scientist, which should also characterize democratic societies.

This is a fascinating claim. It is of course obvious that in various respects science and democracy follow different procedures. It is sufficient to recall that science makes decisions through experiments and rigorous observations, while democracy decides through the majority principle. However, beyond these obvious differences,


we should also note deeper and more interesting similarities, which contribute to a better understanding of the operation of both. In this regard, we must certainly mention two scholars whose influence has had considerable historical importance.

The first example is offered by Michael Polanyi. Almost at the culmination of a long philosophical reflection, supported by profound scientific experience as a scholar of chemistry and physics, at the beginning of the Sixties Polanyi (1962) proposed an idea that turned out to be very influential: science is organized as a republic, without any centralized authority. Polanyi noticed that both science and the economic market, which is typical of western democracies, work thanks to an “invisible hand” process. In this process, everyone proceeds by keeping an eye on those acting in neighbouring areas, thereby creating a network of partial overlaps. In the market process, the mechanism to which Polanyi refers should be clear: it is the mechanism of perfect competition, where each agent is a price-taker. In science, it is the process of peer review (to which Polanyi does not refer explicitly). At the basis of both mechanisms lies the division of knowledge and competence, which leads to spontaneous coordination.

Again in the last century, the philosopher Karl Popper (1966) proposed an idea that turned out to be equally influential, that of an “Open Society”. Here too, the close similarities between science and liberal and democratic societies are evident: for their progress, both science and democratic societies need critical discussion. In a nutshell, here we find the idea behind the critical rationalism which Popper developed in both the epistemological context and the social sphere. With regard to scientific activity, Popper maintained that science proceeds through bold conjectures and refutations. This is the well-known methodology of falsificationism, which for Popper is the epistemological version of critical rationalism: scientists propose theories for solving empirical and theoretical problems and then they test those theories by means of experiments and observations, which is how they attempt to eliminate false theories. Regarding democratic societies, Popper suggested that a similar procedure should be followed in the field of social philosophy. In the logic of social research, the task is the elimination of suffering or “unhappiness” in general: governments present programs for the resolution of social problems, and citizens, through democratic elections, have the duty of checking whether the objectives have been achieved in a satisfactory manner. Therefore, for Popper, in both science and society the process of learning from experience is basically eliminative.

These ideas are as simple as they are fruitful, not only from the point of view of epistemology, but also for social philosophy. Polanyi used his idea of a Republic of Science to fight the claim, at the time fairly common, that scientific research should be planned in order to better serve social progress. That was a historical period in which scientists who were only concerned with the truth were often accused of


being “bourgeois” and therefore enemies of socialism. In the Soviet Union, this idea became an integral part of the research policy adopted by Stalin. Polanyi, with great lucidity, denounced its dangers: the search for truth, without concern for its possible practical applications, is a necessary condition for scientific progress. The attempt, typical of socialist democracies, to plan not only economies but also scientific research would condemn those societies to backwardness.

Popper’s ideas proved fruitful as well. Like Polanyi, Popper’s polemical target was totalitarian societies. An upholder of humanitarian socialism, at least at the time in which he wrote The Open Society, he saw Soviet totalitarianism as a regime not only antithetical to the aspirations of freedom and equality, but also as a regime that had completely misunderstood the logic of research, both scientific and social. These ideas fascinated both conservatives and social democrats (cf. Magee, 1973). Prominent political leaders, such as Margaret Thatcher and Helmut Schmidt, explicitly claimed they were inspired by Popper in their manifestos for the democracies in their respective countries.

From the point of view of the contemporary world, the ideas of Polanyi and Popper, although certainly fascinating, show a clear shortcoming. They focus on the analogies between science and democratic societies. However close they might be, analogies emphasize mutual similarities, never potential conflicts and tensions. In fact, Polanyi and Popper never took into account possible contrasts between science (represented by the scientific expert) and democracy (represented by the public). Today, in an age very far from that in which they lived, it is instead the emergence of potential conflicts that has mainly attracted the attention of philosophers, scientists, lawyers and sociologists.

The conception of the unavoidable conflict

Once more, the Public Understanding of Science programme is probably the turning point leading from the “harmonic” to the “confrontational” conceptions. As mentioned, alarmed by the hostility toward new technologies, the Royal Society and the American Association for the Advancement of Science suggested that a widespread effort of scientific information was needed. In their opinion, science was too difficult and the public showed clear misunderstandings of its reliability and scope. The remedy consisted in inviting scientists to leave their laboratories and universities in order to engage in the work of improving the clarity and dissemination of the results achieved by scientific research.

This programme, which had obvious paternalistic connotations, failed, and today only a few would propose it, at least in its original intentions. Many reasons explain the failure. First of all, the assessment of technological risks is not reducible to a mere matter of information. Even though information is very


important, we must also consider the importance of social and moral values, about which scientists are no better experts than laypeople. An example is often reported to illustrate the role played by values. At the time of the massive construction of new nuclear power stations, it was argued that smoking a few cigarettes per day is much more dangerous than living next to a nuclear power plant. This example was advanced to show that nuclear power stations were entirely safe, but it is far from convincing. In fact, this line of argument disregards the values of freedom and autonomy. People might freely decide to risk their lives for the pleasure of smoking, while the construction of a nuclear power plant, maybe next to home, is most likely imposed on them. In a similar way, think about the value of justice. Not only is the amount of risk important, but also the way in which it is distributed (including the intergenerational distribution of risk: suffice it to consider the problem of nuclear waste, which can be dangerous for tens of thousands of years, far beyond the lifetime of any nuclear power plant).

So far, I have given examples from technological research. It could be argued that in this way I have fallen into a clear misunderstanding: basic or “pure” science is something quite different from technology. While the former is concerned only with the explanation of facts or the search for truth, the latter is more closely connected with social and moral values. Yet today this distinction is not that obvious and cannot be accepted without caution. Many sociologists intentionally avoid speaking of “science” and “technology”. Instead, they would rather speak of “technoscience”, in order to emphasize their close connection in contemporary research. Popper (1994) himself, toward the end of his life, noticed that the happy times when scientists could confidently claim to be involved only in the search for truth were over.

The emphasis on the potential conflicts between science and democracy emerges not only in the behaviour of the public but also in large sectors of the academic world. The philosophy of radical constructivism, feminist epistemologies, the strong programme in the sociology of knowledge, and the movements of deep ecology represent examples in which science is perceived to be in conflict with democratic societies. The alleged objectivity of scientific research and the honest search for truth are seen as dangerous illusions that must be unmasked. In particular, the most radical constructivists emphasize how scientific theories are actually “social constructs”, namely the outcome of social negotiations and power relations, the asymmetries of which should be denounced. For this current of thought, the acceptance of a theory is not the result of experiments and observations, but the fruit of authoritarian power. Accordingly, the most urgent duty today would be to denounce the presumed objectivity of science through the “deconstruction” of its concepts and theories, in order to elicit the social relations on which they are based.

Scientists, and with them many philosophers of science, have sharply distanced themselves from this. In this regard, it suffices to recall the so-called “science wars”,


when in the Nineties some scientists – in particular physicists – along with many philosophers of science polemically opposed constructivists and postmodernists, whom they judged to be windbags unable to understand the difficulties inherent in scientific inquiry, which requires extreme rigour to steal secrets from nature.

Beyond harmonic and confrontational conceptions

Today, debates oscillate between the thesis of the intrinsic harmony between science and democracy and that of their radical and inevitable conflict. Both theses appear to have obvious shortcomings. However, it is not easy to find a “balance” or, to use a typically philosophical terminology, to locate their “sublation” through a third point of view which avoids the dichotomy between intrinsic harmony and inevitable conflict. This challenge is not only philosophical, but also, in the broadest sense, cultural and social. It is in fact the prerequisite for understanding the connection between scientific growth (the objective increase of our knowledge of nature) and social progress (the development and affirmation of the values characterizing our democratic societies).

Yet we believe that something clearer, though not exhaustive, can be said. In our view, a prerequisite for overcoming the dichotomy is to reject an implicit presumption shared by both theses: that science and society are two clearly identifiable, separate blocs or entities. Stated this way, the presumption is clearly false. It is hardly questionable that science is not unanimous in its assessment of scientific research. One could object that sometimes scientists are not unanimous simply because evidence is not sufficient. Yet it is worth noticing that insufficient evidence would not necessarily prevent scientists from achieving unanimity. In fact, in the case of insufficient evidence scientists might agree on the scale of uncertainty and wait for more empirical findings. This usually does not happen. On the contrary, we have heated disagreements in which each scientific party shows certainties contradicting the certainties of the other party. As a consequence, it is far from surprising that scientific disagreements are sometimes closely related to social conflicts, because each party is supported by different sections of society. This is the case in the controversies over the use of nuclear power, the safety of genetically modified organisms, the experimental use of embryonic stem cells, or the controversy about the limits to growth.

There are also curious reversals of alliances. In the Sixties, the founder of the environmental movements, Rachel Carson, strongly criticized the science of her time. For Carson, official science was incapable of seeing the consequences of human actions on the food chain because of its narrow specializations. Since then, environmental movements have been wary of science. Today, with the controversy over


anthropogenic global warming, we are witnessing an alliance between environmentalism and official science, whose consensus is represented by the Intergovernmental Panel on Climate Change.

Therefore, science and society are both heterogeneous and fragmented, showing variable and shifting alliances between components of science and society. We take this to be a mere sociological fact, which is difficult to question. As such, it is only the starting-point for more satisfactory views. Nonetheless, we believe it is a good starting-point. For instance, it makes the harmonic views implausible. The very same could be said for the confrontational views, even though in this case their upholders would probably claim that the fragmentation of science and society is what should be expected if existing power relations affect both science and society at the same time. Yet confrontational views claim something more: they claim that power relations in society explain (or make us understand) scientific disputes. This is a one-way model of explanation or understanding that goes well beyond sociological facts. The fragmentation of science and society we refer to shows the existence of mutual influences between science and society. At the very least, nothing suggests that science is merely a passive mirror of what takes place in society.

In our view, the main shortcoming of the two views is that they are too general and philosophically loaded. The essays in this book avoid strong philosophical perspectives and show how detailed analyses of both historical cases and specific problems are needed to better elucidate the relationships between science and democratic societies.

The structure of the book When speaking of science and democracy, the relationships between public opinion and scientific experts come inevitably to the fore. This is a multi-faceted issue, which actually addresses several interconnected problems that should be carefully scrutinized. Chapters 1 to 4 focus on the scope and limits of experts’ knowledge and the role of experts in democratic societies. It is in this context that it has been noted that public opinion is sometimes right to claim to have a say in the way science is applied to local circumstances. Pierluigi Barrotta and Eleonora Montuschi (Chapter 1) analyze the differences and the connections between two types of knowledge: “scientific” knowledge and “local” knowledge. Through a case-study (the Vajont disaster, which took place in Italy at the beginning of the Sixties) they discuss a theoretical framework of conditions and practical requirements that should be articulated to allow scientific information and informal experience held by laypeople to combine suitably. István Danka (Chapter 2) defends and clarifies the relevance of the distinction between


evidence (the evaluation of which concerns the scientific experts) and reason and normative claims, which do not require expertise, but rather competence in argument evaluation and moral judgment. As such, the latter can also be achieved by educated laypeople. Given this distinction, Danka argues that a dialectical approach is more suitable than other approaches to collective epistemology about group decisions. In Chapter 3, following some Deweyan insights, Roberto Gronda constructs a pragmatist conception of expertise in the attempt to overcome the dominant revolt against the elites. Finally, Selene Arfini and Tommaso Bertolotti (Chapter 4) discuss the limits of science communication. They argue that, while encouraging the diffusion of a general “love for science” should inspire an appetite for more robust scientific knowledge, it also fosters the emergence of problematic cognitive situations, with the propagation of so-called epistemic bubbles or the progressive belittlement of the role of experts in society.

Other essays equally focus on the relationship between experts and public opinion, but with greater emphasis on the role of public opinion. Anna Petschner (Chapter 5) presents two case studies, one on the reputation of research on GM plants and their experts in Hungary, and the other on the status of invasive acacia, showing that the media partly disregard the scientific standpoint in decision-making processes. Aviram Sariel (Chapter 6) dwells on the processes underway in electoral campaigns which, on the one hand, seem to represent a moment when scientific budgets are decided, but, on the other hand, may register a ritualisation of the violation of rules and standards. Oded Balaban (Chapter 7) too addresses the role of science in political decision-making. Balaban argues that a discussion of practical issues regarding the scope and limits of democracy, such as the danger of the tyranny of the majority, is only possible if there is awareness of the absence of a source of authority. Given this awareness, the principles of tolerance, freedom and equality before the law appear as fundamental democratic principles no less than the value of decisions by the majority. Finally, Giovanni Scarafile and Maria Elena Latino (Chapter 8) illustrate the “Terra-Terra Project”, which is the result of an interdisciplinary collaboration. Several components converge in it, and they can be traced back to two macro-categories: on the one hand, the need for renewed food traceability in response to the demands of the food democracy movements; on the other hand, the need to provide specific and personalized information to consumers in compliance with ethical standards.

The last two chapters concern more specific issues. Mauro Stenico (Chapter 9) presents the strategies used by Soviet astronomy to oppose Western cosmology and the aspects of this debate after 1953. Finally, Flavio D’Abramo (Chapter 10) analyses the controversy about the way organisms develop, evolve and interact in the environment. Over the centuries, this controversy has taken different shapes, from the schism between epigenesis and preformation, running through


Lamarckian inheritance versus a neo-Darwinian approach, to the current battle between those defending genetic determinism and scholars embracing a co-determination between environments and organisms in which the nature/culture dichotomy fades away.

References

Bloor, D. (1976). Knowledge and Social Imagery (2nd ed. 1991), London and Boston.
Collins, H. and Evans, R. (2002). The Third Wave of Science Studies: Studies of Expertise and Experience, reprinted in E. Selinger and R. P. Crease, Eds., (2006): 39–110.
Collins, H. and Evans, R. (2017). Why Democracies Need Science, Cambridge: Polity Press.
Grazia, A. De. (1978). The Velikovsky Affair, 1st ed. 1966, London: Sphere.
Koertge, N., Ed., (1998). A House Built on Sand. Exposing Postmodernist Myths about Science, Oxford: Oxford University Press. doi: 10.1093/0195117255.001.0001
Koertge, N., Ed., (2005). Scientific Values and Civic Virtues, Oxford: Oxford University Press. doi: 10.1093/0195172256.001.0001
Lacey, H. (1999). Is Science Value Free? Values and Scientific Understanding, 2nd ed. 2005, London and New York: Routledge.
Magee, B. (1973). Karl Popper, New York: Viking.
Merchant, C. (1980). The Death of Nature. Women, Ecology, and the Scientific Revolution, San Francisco: Harper & Row.
Poincaré, H. (1917). La Morale et la Science, in Dernières Pensées, 1st ed. 1913, Bibliothèque de Philosophie Scientifique, E. Flammarion, Paris, Ch. viii; Eng. trans. Ethics and Science, in Mathematics and Science: Last Essays, New York: Dover.
Polanyi, M. (1962). The Republic of Science: Its Political and Economic Theory, Minerva, 38, 2000: 1–32.
Popper, K. (1966). The Open Society and Its Enemies, 5th ed., London: Routledge and Kegan Paul.
Popper, K. (1994). The Moral Responsibility of the Scientist, in Popper, The Myth of the Framework. In Defence of Science and Rationality, London: Routledge: 121–129.

Chapter 1

The dam project: Who are the experts?
A philosophical lesson from the Vajont disaster

Pierluigi Barrotta and Eleonora Montuschi
University of Pisa / Ca’ Foscari University of Venice

In 1963 a huge landslide covered the Vajont valley (north-east of Italy), where one of the tallest arch dams in the world had been put in place (completed in 1959). More than 2000 people died. The locals had repeatedly warned the scientists that the sides of the valley were too fragile to withstand significant impact, and publicly raised their concerns. The ensuing media debate surrounding issues of safety in the valley was soon manipulated for political purposes, and the important message was lost. With the help of this case study we analyse how two types of knowledge (official science and local experience) may confront each other and why they fail to interact. We then draw some lessons concerning how the use of expert knowledge becomes effective and valuable in the context of non-expert knowledge.

Keywords: experts, local knowledge, public opinion, inductive risk, fact/value dichotomy

1. Preliminaries

On October 9, 1963, shortly after 10.30pm, a massive landslide detached from Monte Toc and fell into the reservoir of Vajont, 1 where the tallest (at the time) arch dam in the world had been built just a few years earlier (1959). For many good reasons the dam was considered a scientific and technological masterpiece. At the end of the construction work the engineer who designed it, Carlo Semenza, filmed a short documentary where he explained the many challenges scientists had had to face in building it, and how ingeniously they had dealt with them. 2 Semenza’s pride was justified. Even today the dam arouses a sense of admiration. We went to see it in person: driving along the narrow road leading to the small village of Erto, overcoming yet another bend in the road, all of a sudden the dam materialised in front of our startled eyes. One of the pictures we took gives some idea of its majestic size (see Picture 1).

Picture 1.

The look of the landslide of that infamous October 9, 1963, was, to say the least, impressive. Picture 2 proves it: the debris that can be seen behind the dam is not part of the mountain; it is the very landslide that filled up the basin in a matter of seconds.

Picture 2.

1. The reservoir of Vajont is part of the Piave valley in the Dolomites, in north-east Italy.
2. The documentary can easily be found online. See, for instance, http://temi.repubblica.it/corrierealpi-diga-del-vajont-1963-2013-il-cinquantenario/il-cortometraggio-del-59/

doi 10.1075/cvs.13.03bar © 2018 John Benjamins Publishing Company

The dynamic of the Vajont disaster was somewhat astonishing: the dam resisted the impact of the landslide – a demonstrable sign that it was an excellent specimen of engineering work – but a wave with peaks over 200 metres tall overflowed all the way down to the valley, reaching the town of Longarone, located at the far bottom of it. It was a catastrophe, with over two thousand victims. From being the symbol and pride of Italian engineering, the dam turned into something altogether different and sinister. Part of the Italian media and public opinion unanimously pointed the finger to science – namely, to its ambiguous involvement with political power, and to the fact that the scientific experts employed by the company in charge of building the dam (the Adriatic Energy Corporation – or SADE from its Italian name ‘Società Adriatica di Elettricità’) had – to many unjustifiably – dismissed the locals’ concerns over the stability of Monte Toc, neglecting several signs of danger and warnings reported by locals well before the disaster occurred. No doubt, the case of Vajont would easily fall into the wide array of case studies that sociologists of science use to identify and analyse the deep asymmetries in power relations between experts and laypeople. 3 In the case of the Vajont disaster, 3. To mention just one, in discussing the antagonistic relationship between government appointed scientists and Cumbria sheep farmers over the safety of lamb meat for human

20 Pierluigi Barrotta and Eleonora Montuschi

the tainted relationship between science and power was emphatically raised by Tina Merlin, a journalist who played a central role in this story, well before the disaster occurred. In her book (Merlin, 2001) we find a most unreserved condemnation of the science involved: truth was evident from the outset, Merlin claims, but it was ignored because official science was totally at the mercy of the political and economic power of the Adriatic Energy Corporation. Merlin’s reconstruction indeed brings to the fore some undeniable evidence. Yet, the story is more complex than the way she recounts it. In this paper, we do not intend to deny the relevance of sociological and political analyses. However, we believe that these analyses do not satisfactorily address equally pressing and relevant issues in stories of expertise and social decision making – for example, in what sense scientific knowledge sometimes proves inadequate as the sole adjudicator of what course of action to undertake; or how to assess the cognitive strengths of different types of knowledge, even those that are deemed to be ‘non-expert’, and whether/how they can fruitfully combine. These types of issues, we believe, can be more appropriately and effectively addressed by adopting a philosophical/ epistemological perspective. In what follows, we offer an illustration of what such perspective can achieve. With the help of some conceptual tools drawn from the philosophy of science, we will detect two crucial errors in the experts’ formulation of scientific judgement in the Vajont story. The first error is epistemological (scientists’ views were based on poor evidence). The second error is moral (they did not sufficiently take into account the villagers’ well-being). These claims might appear to be rather obvious. However, the main thesis of this paper is far from obvious: both errors share a common root: a neglect of local knowledge. In a nutshell, by neglecting local knowledge scientific experts made both an epistemic and a moral error. Analyzing this double-edged error will lead to a better understanding of what was at stake in the Vajont disaster, what was overlooked, and why experts ‘qua experts’ felt entitled (or at least, epistemologically justified) to act the way they did – with the tragic consequences that unfolded. In this essay we will proceed as follows. In Section 2 we will provide a brief historical reconstruction of the events that led to the disaster. In Section 3 we will examine the epistemological reasons that explain why the type of knowledge official science relied on proved insufficient and inadequate to circumvent the disaster. We will also suggest that those epistemological reasons were intertwined with moral considerations. As mentioned above, the Vajont story shows that the scientific community was at fault both epistemically and morally: by overlooking local knowledge consumption post the Chernobyl disaster in 1986, Brian Wynne identifies the reason of the dispute with the clash between two cultures of knowledge and intervention (Wynne, 1996).




(in the two meanings we will qualify) scientists showed both poor judgement and moral ineptness. This does not imply that non-expert knowledge is more effective than official science, least of all that it should replace it. Rather, in cases such as the Vajont’s, both experts and so-called non-experts should be fundamental components of a united research community. As we will point out in the final section, this leads us towards the idea of a community of inquirers extended to both scientists and laypeople.

2. The case study: Historical background

The first feasibility studies on the Vajont dam project date back to the 1920s. 4 The location of the water reservoir was eventually suggested by the geologist and academic Giorgio Dal Piaz, a close collaborator of Carlo Semenza, the talented engineer who designed the dam and supervised the works until his death in October 1961. It is important to note that the studies on the structural stability of the valley were confined to the abutment area and its hydraulic properties. Nowadays it may seem foolhardy to embark on a work of such magnitude without first studying the inner constitution and resistance of the slopes of the valley. Yet, at the time, the formidable engineering challenges of the project took precedence over the geological problems posed by the natural environment – a choice partly justified by the existence of well-supported knowledge of the nature and behaviour of the rocks typical of that area, namely limestone (cf. E. Semenza, 2001: 32 ff.; Carloni, 1995: 13 ff.). “From a geological point of view – Carlo Semenza wrote – the rocks [of the Veneto region] are generally very good […]. Overall, limestone is honest because it reveals its flaws on its surface” (cited in Gervasoni, 1969: 11). In-depth geological studies were therefore considered unnecessary because the typical rocks of the area did not raise any visible concern. With hindsight, we are now in a position to say that this was an overly optimistic judgement: contrary to Semenza’s expectations, the Vajont valley would unfortunately prove to be of “exceptional singularity” (E. Semenza, 2001: 29), and as a result a number of potentially relevant chains of causally linked observations were left unexplored (as will be discussed below), with devastating consequences to come. It seems appropriate at this point to summarize briefly the sequence of steps that led to the disaster, in order to provide some useful background to the forthcoming discussion.

4. For a previous historical reconstruction of the Vajont disaster, see also Barrotta (2016), Section 3.5.


At first glance, the Vajont valley appeared to possess all the characteristics that would make it an ideal location for the construction of a dam. Indeed, its designers had set their eyes on it ever since the 1920s, though the actual excavation work only began in 1957. At that time, the inhabitants of the valley had already started campaigning against the construction of the dam. The campaign was motivated partly by opposition to the considerable pressure exerted by SADE to acquire large plots of land so that work could commence on a legally secure footing, and partly by deep concern regarding the safety of the small town of Erto, located dangerously near the construction site. The locals knew that the area was subject to landslides. The very origin of the name ‘Monte Toc’ bore witness to it. In the local dialect ‘toc’ (short for ‘patoc’) means rotten, deteriorated, prone to disintegration. The whole area was in fact geologically fragile. This fact is also documented by Carloni (1995: 13, italics added) in his historical reconstruction of the disaster: “Monte Toc overlooking the left bank of the artificial basin standing at 1921 metres is a heavily tectonic limestone relief in which fractures and surface movements of the earth are visible.” One of the most important landslides took place as far back as 1647 and destroyed the nearby village of Casso. The inhabitants’ worries found a voice in the local and national press thanks to the journalist Tina Merlin, an outspoken critic of the Adriatic Energy Corporation. In an article published on May 5, 1959 in the national newspaper L’Unità, Merlin denounced the impending danger. She wrote: “the villagers […] sense a serious danger to the very existence of a town situated close to where a reservoir of 150 million cubic metres of water is being built, which will eventually erode a terrain prone to landslides and plunge the housing complex into the lake.” The town mentioned by Merlin in her article was Erto. SADE commissioned some investigations to test the stability of the slopes surrounding the town, but ruled out any danger. They were partly proven right, in that the village was almost entirely left untouched by the landslide that provoked the disaster. However, ruling out danger for one village was indicative of SADE’s more general attitude of not taking seriously the possibility that the entire area was subject to landslides and was therefore unsuitable for the construction of a dam of such a size. Unfortunately, evidence of the instability of the surrounding slopes was to mount steadily. On March 22, 1959, a landslide occurred at the nearby Pontesei dam. SADE scientists began to monitor the area and consulted leading experts. On June 10, 1960, the geologist Edoardo Semenza (Carlo’s son) delivered a report in which he warned of the existence of an ancient landslide on Monte Toc which – he claimed – could be set in motion by the construction of the dam. Though Edoardo’s claim was not unanimously accepted, the area began to be monitored more closely. Residents in the meantime experienced several light earthquakes, and in November 1960 a small landslide of 700,000 cubic metres fell into the artificial




lake. Carlo Semenza immediately decided to build a by-pass in order to control any dangerous rise in water levels in the event of a new and bigger landslide. Possible consequences of landslides were also studied by means of a purpose-built model of the whole Vajont reservoir, commissioned from Augusto Ghetti, Director of the Institute of Hydraulics at the University of Padua. By the time Carlo Semenza died on October 31, 1961, the situation had grown out of control, but Semenza’s successor, Alberico Biadene, was determined to complete the work. On October 9, 1963, the catastrophe took place. In the trial that followed, Biadene was sentenced to six years in prison. One of his main collaborators, Mario Pacini, committed suicide. While Biadene’s liability was at least acknowledged by the trial (though its extent may be disputed), Carlo Semenza’s responsibility is still the subject of lively discussion. Was Semenza reckless and culpable? As we have already remarked, at the time neither the existing practice within the scientific community nor legal norms and regulations required accurate in situ geological investigations prior to the construction of artificial lakes. So Semenza’s expert decision making was neither criminal nor unusual – yet, at least with hindsight, it remains poor. How can we account for what appears as gross shortsightedness, if not plain misconduct? We suggest reformulating the question differently: what was missing from the judgement that brought Semenza to his purportedly reckless and irresponsible decision to rely on existing background knowledge without testing it in situ? In the rest of our essay we identify two crucial areas of neglect in Semenza’s use of his expert knowledge, which led him to underestimate a number of aspects that, we believe, would have proved crucial to a correct formulation of his expert judgement. These two areas can be described and questioned in both epistemological and moral terms. As we remarked above, the two types of neglect share a common root. Let’s see how this is the case.

3. Two areas of neglect in expert judgement

The first area of neglect we consider concerns the relation between expert and local knowledge. In order to pursue a project of such magnitude as that of the Vajont arch dam, the scientific community needed accurate general knowledge about the chemical and physical characteristics of the rocks forming the terrestrial crust of the area. Semenza and his fellow scientists had all or most of this knowledge. What they failed to see was that (and how) this knowledge should demonstrate its real worth (proof of effectiveness) in the particular circumstances where it was called upon. Possessing general knowledge did not ipso facto entail that the scientists could make it useful/usable in the specific circumstances (e.g. what could the impact of the specific environment be on the rocks in question? what could the particular reaction


to that impact of these particular rocks in that particular location be? etc.). In other words, what was missing from Carlo Semenza’s judgement was local knowledge. Semenza seems to have been inclined to believe that a good general theory automatically has all the resources and logical/technical tools needed to apply it to specific circumstances. However, shifting from the general to the particular is not just a question of logical implication of the kind ‘if the situation is X, theory T will tell us that Y happens’ (Cartwright, 1999: 183). It is instead a matter of building an empirical judgement of relevance with the help of local circumstances. In what sense? Let us look at the concrete case of the Vajont. In using general knowledge of limestone to inform a specific situation (the rocks in the Vajont valley) not all knowledge about limestone is required. But this is not only a question of quantity. Some facts will matter more (or less) than others in assessing the behaviour of these rocks in situ. In weighing facts against each other, a situational range (and arrangement) of assumptions and contextual factors plays a crucial part. On one side, there will be local information; on the other, an empirical assessment of how local information directs our attention to what matters in the circumstances. This is why deciding relevance is not an automatic given of well-established theories (theories well corroborated by evidence). Even the best theory is not ipso facto a relevant theory in a given set of circumstances. Deciding on relevance is partly a judgement informed by local knowledge. Part of the problem in our story was that some of this local knowledge came from the locals (the inhabitants of the valley – peasants and mountaineers), and was expressed in forms that the scientists felt entitled to disregard (experience, tradition, even feelings towards the mountains). The scientists believed they had enough good general knowledge to enable them to control what occurred in the area. To do their job properly they did not need, on top of that knowledge, local information (either in the form of fresh geological observation or, even less so, of fuzzy laypeople anecdotes). Knowledge of the locals and knowledge of local facts – so they appeared to believe – were redundant or confusing. 5 The reason behind this first area of neglect had nothing to do with criminal responsibility. As mentioned above, there was no legal requirement at the time to pursue in situ geological investigation in advance of undertaking such grand works, nor was it current scientific practice to pursue it. Still we can say, with hindsight, that Semenza’s judgement was poor, and it is by using an epistemological perspective that we can see why. Semenza took for granted the well-established geological background knowledge existing at the time and believed that possessing that knowledge was sufficient to undertake the construction of the

5. In what follows, when speaking of ‘local knowledge’ we should understand it in both its meanings: knowledge of local facts and knowledge of the locals.




dam in the chosen area. Thus, the neglect of local knowledge led Semenza to accept poor evidence – more specifically, poor evidence in view of its use. This is an epistemic mistake. Facts do not come with a tag attached to them that says ‘evidence’. To say that certain facts count as evidence means that they count as evidence for a certain situation or problem. This entails a construction of their relevance as evidence in view of understanding the specific situation or problem they are intended to provide evidence for. Furthermore, as we will see shortly, this mistake at the same time points to a second area of neglect, the nature of which is not epistemic but moral. Back on March 22, 1959, approximately three million cubic metres of material had slid down into the artificial lake of Pontesei, close to the Vajont valley. One man died because of the overflowing water, even though the lake was 13 metres below its maximum capacity. For the locals, that was a confirmation of their concerns; for SADE scientists, it was an alarm bell. In fact, the latter were struck not only by the magnitude, but also by the compactness and the speed of the landslide. They began to realise that a more detailed geological analysis of the area would prove useful. Carlo Semenza asked Leopold Müller, a distinguished scientist and a pioneer of geomechanics, to carry out a survey. In order to write his report Müller went to the Vajont valley on July 21 and asked a young geologist, Edoardo Semenza (Carlo’s son), to investigate the phenomena of instability that appeared to pervade the valley. Together with his colleague Franco Giudici, Edoardo Semenza (1960) came to an astonishing conjecture: the presence on Monte Toc of a large pre-existing ancient landslide that could be set in motion again by the construction of the artificial lake – in particular by the incremental series of fillings of the lake run to test the reservoir’s safety levels. The conjecture did not include a confirmed estimate of the size of this ancient landslide, though a guess of 50 million cubic metres was put forward. Such an estimate was already big enough to raise serious concern, but the actual size of the landslide that eventually fell into the reservoir was to prove five times bigger (260 million cubic metres, spread over an area of 2 sq. km). The scientific community did not unanimously accept either Edoardo’s hypothesis or his estimate. Müller was inclined to accept the conclusion supported by Edoardo (though at the time he did not explicitly and officially endorse the hypothesis of an ancient landslide), while experienced geologists such as G. Dal Piaz, F. Penta and P. Caloi overtly disagreed with Edoardo. In particular, Caloi, who carried out a geoseismic investigation, came to the conclusion that fractured rocks existed only at the surface of Monte Toc. For him, no ancient landslide had ever existed (see Caloi, 1966). Situations of disagreement are not unusual in science. Evidence can prove uncertain and contested, and often scientific disciplines are not established enough to settle disputes. Carlo Semenza found himself right in the middle of one of


these thorny situations. At the time, geology was not a well-established paradigm. Geologists used different methods of investigation, the results of which were not always compatible with each other. As a consequence, he had to face extreme uncertainty about the evidence and deep disagreement among experts. The way he made his decision about what to do and how to resolve the difficult and complex situation he was confronted with can be reconstructed in the following terms: he made an epistemological error of judgement that carried with it a clear moral implication. To anticipate the structure of the error: by neglecting local knowledge, Semenza considerably underestimated the probability that the construction of the dam could set in motion the ancient landslide and, consequently, a series of devastating geological phenomena in the valley. There is here a clear entanglement between the epistemic and the moral: an epistemic error (neglecting local knowledge) led the scientists to commit a moral error (they appeared to be willing to accept too high a risk, jeopardizing the lives of the inhabitants). In the 1950s the philosopher Richard Rudner (1953) had already pointed out that when scientists decide to accept or reject a hypothesis they implicitly make a moral decision. This is because the strength of evidence “is going to be a function of the importance, in the typically ethical sense, of making a mistake in accepting or rejecting the hypothesis” (Rudner, 1953: 2). 6 Rudner’s view puts us on the right track to address the problem of the entanglement of the epistemic and the moral in the case of the Vajont disaster, though in this case the entanglement takes a different, and in a sense more radical, form. In Rudner the entanglement between the epistemic and the moral occurs because in accepting (or rejecting) a hypothesis the scientist needs to make both an epistemic decision (a probability assignment, i.e. the probability of the hypothesis being true) and a morally risky decision (a risk evaluation, i.e. what risk is involved in wrongly accepting or rejecting the hypothesis). Thus, following Rudner, in principle we could avoid the entanglement by claiming that scientists are only concerned with the probability assignment problem. 7 Whether the scientist is, or should be, interested in accepting or rejecting a hypothesis is in principle a separate/separable issue.

6. Here we have touched upon the distinction between type I error (the error of accepting a hypothesis when it is false) and type II error (the error of rejecting a hypothesis when it is true). We will not discuss this issue here.

7. This is the line of reasoning suggested by Jeffrey (1956) and, more recently, by Betz (2013). Here we have simplified the argument proposed by Rudner. In all likelihood, Rudner would not accept the way we have reconstructed it (see Rudner, 1953: 3–4). However, the simplification is justified here, as the problem we raise is in any case different from the one raised by Rudner.
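Rudner’s point can be restated in elementary decision-theoretic terms. The following is a minimal sketch, not Rudner’s own formalism, in which p stands for the probability assignment P(H | E) and the two losses stand for the morally loaded costs of the two errors distinguished in note 6:

```latex
% Minimal sketch, not Rudner's own formalism.
% p  = P(H | E): the probability assignment for hypothesis H given evidence E.
% L_a: loss from wrongly accepting H (type I error, in the sense of note 6).
% L_r: loss from wrongly rejecting H (type II error).
\[
  \text{accept } H
  \;\iff\; (1-p)\,L_a \;\le\; p\,L_r
  \;\iff\; p \;\ge\; \frac{L_a}{L_a + L_r}.
\]
```

On this reading, how strong the evidence must be before acceptance is rational – the threshold L_a/(L_a + L_r) – is fixed by the relative moral weight of the two possible mistakes; this is one way of spelling out Rudner’s claim that the required strength of evidence is a function of the ethical importance of making a mistake. It also suggests why, if the cost of wrongly dismissing the landslide hypothesis includes the lives of the inhabitants, only very strong evidence could have justified treating the valley as safe.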




The Vajont disaster shows us that the entanglement between the epistemic and the moral cannot be avoided, not even in principle, since in the final analysis it is rooted in the very problem we raised in the first part of this essay: the neglect of the locals’ views. In fact, probability assignment appears to have a moral import from the start, since it crucially depends on a different kind of risky acceptance, namely one that concerns what kind of evidence should be considered relevant. As argued above, this is where local knowledge comes to the fore. The locals in the Vajont case were pertinent knowers, since their opinion was germane to the gathering of relevant evidence (the importance of local knowledge in making relevant use of general knowledge). Besides, it should be noted that in this case the locals were also stakeholders and understandably asked for the risks of accidents to be minimized. In short, Semenza’s error consisted in not involving the locals in the search for the best line of action to undertake, in the light of morally loaded epistemic considerations. It is interesting to examine how Semenza faced up to the situation when the ancient landslide hypothesis acquired further support. He knew that the artificial lake could affect the stability of the landslide and put not only the dam, but also the inhabitants of the valley in serious danger. He clearly had to avoid a human catastrophe. Yet, he acted as if the problem was only (or primarily) technical and epistemic. He looked for scientific/technical solutions for what he took to be a specifically scientific/technical problem. In November 1960, he started building a by-pass (Picture 3). In the event of a landslide, it would prevent the rise of upstream water that would otherwise endanger the town of Erto. In the same month, a landslide of 700,000 cubic metres fell into the basin: it was a small part of the ancient landslide that had set itself in motion. Edoardo’s hypothesis was closer to reality than ever, and growing evidence also lent further support to the devastating prospect that the conjectured ancient landslide could be much bigger than Edoardo first estimated (only Dal Piaz and Penta continued to be optimistic at that point). 8

8. Caloi carried out a second investigation, which led him to change his previous view dramatically. According to his new results, the area of fractured rocks now appeared to be much deeper and wider. We must also mention the 15th report drafted by Müller (1961). That report was probably the strongest blow to Carlo Semenza’s hopes. According to Paolini and Vacis’s (2013: 84) historical reconstruction, Müller suggested that SADE should have abandoned the whole project. This is actually not entirely correct, but the report was all the same rather disturbing. Müller (1961: 14–16) mentioned several measures that could have been taken in order to control the landslide, though none of them was feasible given the technology available at the time.


Picture 3. 

Before dying, Semenza played yet another card. He asked Augusto Ghetti (Director of the Institute of Hydraulics, University of Padua) to build a model of the dam and of the whole reservoir – a highly innovative move at the time. It was the first purpose-built model of its kind in Italy, and one of the first in the world (Picture 4). It was meant to assist in understanding the effect of a landslide on the reservoir. Since scientists were unsure about the stability of the slope on which the landslide lay, Ghetti carried out several experiments using the purpose-built model. Given the worst-case scenario, he came to the conclusion that there was no danger provided that the level of water in the basin did not exceed 700 metres and that the dam resisted the water impact. 9

9. As Ghetti wrote (quoted in Zanolli, 2013: 49): “This seems sufficient to conclude that, starting from the maximum reservoir flooded top water level, the fall of the expected mass landslide could get to produce an overflow of about 30,000 cubic metres/sec. and a rising wave of 27.5 metres only in catastrophic conditions – that is, when the landslide occurs in an exceptionally short time, from 1 to 1.30 minutes. By just doubling this time the phenomenon is attenuated and we expect an overflow below 14,000 cubic metres/sec. and a rising wave of 14 metres. Decreasing the initial level of the reservoir, these effects of overflow and rising wave rapidly declined, and 700 metres above sea level could already be considered to be of absolute safety with regard to even the most catastrophic of the expected events.” Ghetti’s full report has been published by Zanolli (2013: 48–50).




Picture 4. 

None of these measures averted the catastrophe. As a matter of fact, Semenza continued, with good reason, to feel uneasy. In a well-recorded letter to his old master Vincenzo Ferniani, dated April 21, 1961, he wrote: “After so many lucky works and imposing constructions, I am facing something of such a size that it appears beyond my control” (C. Semenza, 1961: 2). Furthermore, Alberico Biadene (Semenza’s successor) made one further mistake, which should be clear to those who are acquainted with the philosophical debate on the use of models in science. A model predicts on the basis of its own assumptions, which do not necessarily comply with the factors and conditions occurring in the actual situation. Drawing conclusions about real situations by relying on the results offered by a model is a risky business. Semenza’s successors showed a lack of understanding of the logic of models. Biadene took the experiments carried out by Ghetti at face value. To give an example, he did not consider that in the model the experimenters used gravel, which proved to be less compact than the material of the real landslide. So the calculation of the speed of the landslide proved inaccurate (unfortunately, the real landslide would prove much faster). 10 Biadene grasped the conclusion (up to a height of 700 metres

10. Ghetti himself warned that his results should not be taken at face value: “the final sentence on safety level is like a foreign body in the context of the report. The experiments were conducted with the original data provided by SADE, which do not adhere to reality” (quoted in Zanolli, 2013: 50).


there would be no danger), but not the process through which the conclusion was reached. Though Ghetti insisted on carrying out more experiments, Biadene stopped making use of the model’s results, and in the weeks before the disaster he even decided to exceed the water safety level of the lake. 11 So we come to the end of our analysis. The description of the two areas of neglect sheds light on what ultimately went wrong in this dramatic episode. They point to what was crucially missing from expert scientific judgement: an awareness of the importance of local knowledge, in both of the above-mentioned meanings (knowledge of local facts and knowledge of the locals). This lack of awareness led scientists to make mistakes in the probability assignment (they disregarded relevant evidence) and in the moral consequences it implied (what should have been factored in while assessing the risk involved). Furthermore, the locals were neglected as stakeholders, since they were never appropriately informed – let alone consulted – about the impending risks. Although we have pitched our philosophical reconstruction at the level of Semenza’s decision-making strategies, this should not lead us to believe that the Vajont disaster comes down to one individual’s mistakes. Indeed, we should not disregard the ‘bigger picture’. Poor individual judgement is often the result of a ‘systemic failure’. Put in these terms, understanding what went wrong in some individual instance leads us to trace the causal chain that ends in a disastrous outcome not so much back to the individual as to the structural failure of a system, which explains why the individual committed the error. Errors often occur within systems that are ill equipped (both theoretically and practically) to prevent them. This is said not to abrogate individual responsibility, but to put us in a position to understand it better, at both the individual and the system level, and to act on those aspects of the system that allow mistakes to emerge – as we will point out in some concluding remarks.

4. By way of conclusion: A community of inquirers

The Vajont story is a story of huge gaps in communication among relevant areas of expertise. Firstly, within geology. At the time, as we have already mentioned, geology

11. It is interesting to mention here that the Italian authorities were only partially informed about the decisions being made, and the locals were completely left in the dark. The latter seemed to be neglected twice – not only as contributors to the understanding of the geological features of the area, but also as stakeholders, whose lives were at risk as a consequence of what knowledge it was decided to take into account.




was not a paradigm. Geologists belonged to different schools and did not routinely communicate much with each other. In his parliamentary report, MP Pietro Vecellio mentioned the geologist Schnitter, who specifically raised this issue when providing his explanation of the disaster. Vecellio (1965: 3) writes: “It is interesting to note that Prof. Schnitter stresses that in the future the newly born science of rock mechanics will have to develop alongside descriptive geology, which is now considered, in itself, insufficient when facing such complex situations.” Secondly, there was a lack of communication between engineers and geologists. Although they provide radically different accounts of the disaster, both Edoardo Semenza (2001) and Paolini and Vacis (2013) note how geologists were not sufficiently consulted by engineers. In particular, Paolini and Vacis (2013) maintain that engineers were so confident in their knowledge and expertise as to believe that they were able to cope with any challenging situation without consulting others (and indeed disagreements among geologists did not boost confidence in their potential contribution). Thirdly, there were huge information gaps between the scientific and local communities. Locals were not involved, not listened to and not factored in when the hypotheses and predictions of scientists were tested and assessed. The risks taken by science were dealt with primarily in epistemic terms, as if the epistemic could take care not only of itself but also of the moral. Risks were not looked at as primarily moral, nor was the degree of epistemic acceptance of a hypothesis treated as a consequence of the moral nature of those risks. This inevitably led to a progressive detachment of the local communities from the decision-making procedures of the scientists, and to an increasing generalised distrust of the scientific community. From within each of these domains of faulty communication it becomes understandable how a whole series of factors, considerations and pieces of evidence (as partly exposed in this essay) were easily neglected or interpreted in inadequate ways – gradually and inexorably leading to a catastrophe that still haunts and shames the memory of a nation. However, faulty communication is not only the product of individual or corporate neglect (more or less intentional), especially in the case of the disconnection between the locals and the communities of experts. What is also missing is an appropriate framework of conditions and linguistic tools for communicating effectively (cf. Montuschi, 2011). The difficulties of communication among different scientific communities and different types of experts have been widely studied in philosophy and the history of science, and in science studies – starting, for example, from Galison (1997),


who used the metaphor of a ‘trading zone’ 12 to explain how physicists belonging to different paradigms managed to collaborate with each other and with engineers to develop high-energy physics particle detectors and radar. Collins (cf. Collins and Evans, 2007) put forward the idea of the ‘interactional expert’: someone who can train in different disciplines and areas of expertise and who is able to exchange information and communicate with different experts. These various views of how crucial communication and cooperation prove to be among different knowledge bearers finally lead us to envisage the idea of a ‘community of inquirers’ which, in cases like the Vajont disaster, should extend membership to all stakeholders, including laypeople (cf. Barrotta, 2016). Working out the details and implications of this idea surely cannot be crammed into some concluding remarks. Nonetheless, the literature mentioned above might be a suitable starting point for addressing, in a careful and considerate manner, the strenuous problem of creating effective and collaborative communication between distant parties in a democratically conceived idea of public debate.

12. Literally, it refers to a real situation in which different peoples are able to exchange goods despite differences in their language and their culture. In Galison’s own words: “Two groups can agree on rules of exchange even if they ascribe utterly different significance to the objects being exchanged; they may even disagree on the meaning of the exchange process itself. Nonetheless, the trading partners can hammer out a local coordination, despite vast global differences” (Galison, 1997: 783).

Acknowledgments

We thank the audience at the IASC conference ‘Science and Democracy’ (Pisa, 26–28 October 2017) for interesting questions and insight. Special thanks also go to Giulia Bossi (IRPI, University of Padua) for assistance with technical terminology.

References

Barrotta, P. (2016). Scienza e democrazia. Roma: Carocci. English translation: Scientists, Democracy, and Society: A Community of Inquirers. Berlin and New York: Springer, forthcoming.
Betz, G. (2013). In Defence of the Value Free Ideal. European Journal for Philosophy of Science, 3, 207–220. doi: 10.1007/s13194-012-0062-x
Caloi, P. (1966). L’evento del Vajont nei suoi aspetti geodinamici. Annali di Geofisica, 19(1).
Carloni, G. C. (1995). Il Vaiont trent’anni dopo. Esperienza di un geologo. Bologna: Clueb.
Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science. Cambridge: Cambridge University Press. doi: 10.1017/CBO9781139167093
Collins, H. & Evans, R. (2007). Rethinking Expertise. Chicago and London: The University of Chicago Press. doi: 10.7208/chicago/9780226113623.001.0001
Galison, P. (1997). Image and Logic: A Material Culture of Microphysics. Chicago: The University of Chicago Press.
Gervasoni, A. (1969). Il Vajont e le responsabilità dei manager. Milano: Bramante editrice.
Jeffrey, R. (1956). Valuation and Acceptance of Scientific Hypotheses. Philosophy of Science, 22, 197–217.
Merlin, T. (1959). La SADE spadroneggia. L’Unità, May 5. Also available at http://tinamerlin.it/pubblicazioni/articoli-di-tinamerlin/vajont/
Merlin, T. (2001). Sulla pelle viva. Come si costruisce una catastrofe. Il caso del Vajont (4th ed.). Verona: Cierre edizioni.
Montuschi, E. (2011). Oggettività ed evidenza scientifica. Ricerca empirica, politiche sociali, scienza responsabile. Roma: Carocci.
Müller, L. (1961). Rapporto geologico per conto della SADE. Documenti utilizzati dal 1° gruppo di lavoro della Commissione, Senato Italiano (2003).
Paolini, M. & Vacis, G. (2013). Il racconto del Vajont. Nuova edizione con due saggi inediti. Milano: Garzanti.
Rudner, R. (1953). The Scientist qua Scientist Makes Value Judgments. Philosophy of Science, 20, 1–6. doi: 10.1086/287231
Semenza, C. (1961). Lettera a Vincenzo Ferniani. Documentazione tecnica, Senato Italiano (2003).
Semenza, E. & Giudici, F. (1960). Studio geologico sul serbatoio del Vajont. Documentazione tecnica, Senato Italiano (2003).
Semenza, E. (2001). La storia del Vajont, raccontata dal geologo che ha scoperto la frana. Ferrara: K-flash.
Senato Italiano (2003). Commissione parlamentare d’inchiesta sul Vajont. Archivio storico. Soveria Mannelli: Rubbettino.
Vecellio, P. (1965). Relazione su studi apparsi in pubblicazioni straniere dopo la sciagura del Vajont. Documenti redatti dal 1° Gruppo di lavoro della Commissione, Senato Italiano (2003).
Wynne, B. (1996). May the Sheep Safely Graze? A Reflexive View of the Expert-Lay Knowledge Divide. In S. Lash, B. Szerszynski, & B. Wynne (Eds.), Risk, Environment & Modernity (pp. 44–83). London: Sage Publications.
Zanolli, R. (2013). Vajont. Cronaca di una tragedia annunciata. Vittorio Veneto: Dario De Bastiani Editore.

Chapter 2

Rational decisions in a disagreement with experts

István Danka

Budapest University of Technology and Economics

In a ‘post-truth’ society, expert opinion is often taken to have only a minor impact on public decisions. This paper considers recent developments in collective epistemology about group decisions, arguing that a general assumption of recent trends, here called the Summative View, makes them insufficient for responding to this problem properly. At least two important aspects are missing from the accounts discussed: the diversity of relevant expertise, and the fact that disagreement implies debating, the latter making a dialectical account applicable to the situation. I shall build mainly on the latter line, discussing different notions of rational moves in a debate that can occasionally render prima facie irrational decisions rational.

Keywords: post-truth, expert opinion, collective epistemology, disagreement, summativism, argumentation, dialectics, pragma-dialectics, strategic manoeuvring, rationality

The problem

We live in an age in which expertise has gradually become worthless. Postmodernism had taken truth to be a matter of opinion, but in 2017, the year of ‘post-truth’, 1 truth has straightforwardly become intentional deception. In public decisions, decision-makers often seem to be uninterested in truth and knowledge, even though the decisions they make are at least partly affected by scientific issues, and hence disregarding expert knowledge in these matters is prima facie irrational. In this paper, I shall investigate some aspects of this phenomenon, arguing that a conflict among different senses of rationality lies in the background of irrational decisions of this kind. Hence, being rational in one sense can imply being irrational in another sense.

1. https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016. Last Accessed: 05. 04. 2017


This explanation is of course no excuse for the irrationality of the decision. But it may help develop better strategies for public decisions of scientific relevance. In particular, a public evaluation of reasons seems to me preferable to a simple voting system, as the latter is more vulnerable to deception. Prioritising among different sorts of rationality and properly evaluating expert arguments can help decision-makers make more rather than less rational choices. In order to understand the situation, let us first imagine that a group of people (a local community, a nation or something in between) has to make a decision on a matter of urgency. The options are clear but the possible effects of the choices are much less so. Many group members do not feel qualified enough to make a responsible decision. They decide to ask the opinion of those who are better qualified. A number of group members can be considered experts on the topic: they have preliminary knowledge of decisions made in similar situations and their consequences; they have the competence to evaluate evidence for (dis)similarities between past situations and the present one; finally, they also have the competence to apply their background knowledge to the present situation in accordance with the identified (dis)similarities. To put things in epistemological terms, experts have knowledge of the matter of their expertise; they have a set of skills and methods relevant to that matter; and they are successful in posing new questions in the field (Goldman, 2001). One can, of course, be an expert in some matters but a layperson in others. Expertise is relative to the specific field of expertise. Hence, the group decides to ask for expert opinion on the matter, so as to have better grounds for coming to their decision. The experts provide their pro and contra arguments. The majority of experts comes to an agreement on the basis of their shared evidence and reasons. Now the group has expert opinion on the matter, with all evidence and arguments evaluated properly (within time and resource limitations). The group then votes on what to do; and despite all the expert opinion, they decide to the contrary. What happens in cases like this? This is not a fictional scenario. A great variety of examples can be provided to illustrate it. The first may be the ‘Brexit’ phenomenon (also recurring, to some extent, in the latest US presidential election). It is widely assumed that leaving the European Union is a serious disadvantage for the United Kingdom. Economists argued extensively, both prior to and after the referendum, that pro-EU arguments significantly outweigh anti-EU arguments, the latter consisting mostly of irrational wishes and emotionally heated prejudices. Disregarding expert opinion, the British made their decision on the basis of unreasonable personal values rather than on economic or other scientifically measurable grounds. 2 Expert opinion was not

2. From the countless analyses, see esp. Eric Kaufmann’s: http://blogs.lse.ac.uk/europpblog/2016/07/09/not-economy-stupid/. Last Accessed: 05. 04. 2017




something taken as relevant for their decision by a significant portion of British voters. They disregarded valid reasoning and made their decision on (as many would say) purely irrational grounds. Politics is perhaps a complex domain in which values may occasionally count for more than truth, facts and hence expert opinion. This is hardly a valid reason in matters of a purely scientific nature, though. The following example, about the legitimacy of alternative medicines, is precisely of that nature. In Switzerland, a referendum in 2009 decided that alternative medicines, including homeopathy, acupuncture and traditional Chinese medicine, were to be supported by health insurance with the same status as conventional (Western) medicine. The decision was made despite extended research ordered by the Swiss government, which had concluded in 2005 with the experts’ (almost) unanimous opinion that the efficacy of alternative medicines lacks scientific proof (the only dissenters were committed proponents of these medicines). The decision involved a six-year trial period for alternative medicines to prove their efficacy (with health insurance coverage in the meantime). This trial having been unsuccessful, the conclusion of the Swiss interior ministry was that it was impossible to prove the efficacy of alternative medicines in scientific terms, and hence that scientifically demonstrable efficacy was to be taken as irrelevant to their health insurance coverage. Expert opinion, even in a matter of purely scientific relevance, was disregarded. 3 Let me clarify that my intention is not to take a stance on these very complex issues. What I intend to demonstrate is that even a wide agreement among experts does not force the public to make a decision in accordance with expert opinion. So what happens in situations like these? Why does a group decide contrary to all arguments, evidence and expertise? The only prima facie reason that comes to mind is that they make an irrational decision. But is it really so? Are these decisions necessarily to be taken as irrational? If so, making a decision contrary to expert opinion makes the decision irrational. But, as I shall argue, no such general implication validly holds, even if the examples above seem to suggest it. The main reason why disregarding expert opinion is not necessarily irrational is as follows. Let us modify the scenario a bit and imagine that in a certain decision, the nature of the issue implies that expert opinion is significantly divergent. Experts make claims that even contradict each other, as one group of experts supports a position and another supports its opposite. Let us suppose that the level of expertise of the experts in disagreement, as far as it can be judged (e.g. on the criteria presented by Goldman, 2001), is approximately the same. Also suppose that, from a layperson’s perspective, the evidence and arguments for each side are equally valid, coherent and complete. Yet, given that expertise is taken to be an important factor

3. See e.g. http://www.swissinfo.ch/eng/society/complementary-therapies_swiss-to-recognisehomeopathy-as-legitimate-medicine/42053830. Last Accessed: 05. 04. 2017


in the decision, and experts disagree on the matter, decision-makers seemingly have no rational ground to decide which option to choose. There are equally strong arguments and evidence for each alternative, and the authority of their epistemic superiors prevents them from excluding any of the options. These circumstances much better justify decision-makers’ listening to their own values, interests or gut feelings. What differs in the original situation is that even though evidence and reasons are well balanced from the layperson’s point of view, they show a pretty clear imbalance from an expert point of view. Expertise is justifiably ignored by the group when experts disagree, as expert disagreement about the matter indicates that expertise does not help in deciding the issue. Even if the same factor does not occur in cases of expert agreement, there certainly are cases when expert opinion can be rationally ignored. Hence, expertise does not necessarily imply an adequate answer to a question, because experts can and often do disagree on matters of their expertise. But how can expertise be irrelevant (or at least not relevant enough to outweigh personal interests, values and emotions), e.g. in the case of decisions with strong economic impact? How can expertise be irrelevant (or not relevant enough) in the case of decisions with implications for medical and health care issues? And how can expert opinion be disregarded in issues where that opinion is (almost) unanimous? This is much less clear at first glance, but developing a suitable theoretical framework for understanding collective decisions might help. This is what this paper aims at.

Standard responses Before turning towards those questions, let us develop a model for cases when experts themselves disagree. It is certainly not a rare case, and can serve as a reason why expert opinion is to be disregarded. If their expertise does not provide them final evidence or sound reasoning for their decision, what makes them epistemically superior to their peers in matters of their expertise? Even if superiority applies, this is not relevant, or not sufficient, for making a univocal decision in question. Nevertheless, if, as often, urgency is also involved in decision-making, it is also irrational not to choose any of the options. Decision-makers sometimes simply cannot wait until science decides what to do. Disagreements are sometimes irresolvable. Irresolvability here does not necessarily mean irresolvability in theory; practical irresolvability (due to limits in time, resources, etc.) also counts. In irresolvable disagreements, pro and contra sides do have (approx.) the same support: the same amount of evidence, equally strong arguments, equal authoritative power due to experts standing behind the sides with




all their knowledge and competence to evaluate evidence and arguments. These disagreements are (ideally) taken to be faultless: neither side is at fault, because both have done everything in their power to reach their conclusion, and there is simply nothing further to be done to make their point more reliable. Assume, however, that their disagreement does not disappear after all this considerable effort. In such cases, three fundamental stances can be taken, all of which seem to be a matter of decision rather than expertise. Irresolvable or faultless disagreements cannot be decided on the ground of evidence or reasoning about the subject matter; how to take a stance in them is subject to an evaluation of faultless disagreement situations. This requires a sort of expertise in decision theory, collective epistemology or social psychology; i.e., expertise in a meta-theoretical question about decisions under circumstances of this specific kind. This sort of expertise cannot be expected from the participants themselves but only from the analyst reconstructing the situation. In the collective epistemology of disagreement, three directions have been developed as possible responses to faultless disagreement (for taxonomies, see esp. Elga, 2010, Siegel, 2013 and Lalumera, 2015). First, the standard answer to group decision in a disagreement (under idealised circumstances) is the Steadfast View (van Inwagen, 1996, Kelly, 2005, Goldman, 2010). The Steadfast View claims that it is irrational to give up one’s point if there are no decisive arguments to the contrary. The reasons supporting this claim may differ from case to case and from theory to theory; the under-determination of the available evidence, as well as different epistemic standards of acceptability, can serve as possible grounds for faultless disagreement. Given that experts in a faultless disagreement do not have decisive counter-arguments (that is at least one reason why they have remained committed to their particular view), if they take the Steadfast View to be rational advice, the disagreement will not disappear. Furthermore, if the Steadfast View is right, it is rational that the disagreement does not disappear, given that both sides are faultless, having no decisive arguments against their point (and still having good arguments and evidence supporting it). A second option can be called the Suspension View (Feldman, 2006, Christensen, 2007, Kornblith, 2010). It says that in a faultless disagreement, a suspension of the opposing views is the most rational choice. The reasoning behind this is that if one, despite all her expertise, cannot fully convince an epistemic peer of hers that her point is right, then there is no (available) conclusive argument for the view she is supporting. Hence, in order for her not to commit to a position that is not supported well enough, a suspension of her views is in order until better support is found. As can be seen, both views build on the lack of a sound argumentation for either side; but while the Steadfast View is biased towards the position accepted earlier, the Suspension View is biased against the view accepted. They take the question of


the burden of proof differently, but they agree that a clear-cut stance must be taken towards one’s original position. But while the Steadfast View also implies a response to the original question, the Suspension View does not help in making decisions in practice. For practical decisions, the Suspension View is inapplicable for obvious reasons. Making a decision requires taking a stance; suspension may be good for preparation, but not for a decision. If a decision is urgent, it has to be made, regardless of insufficient support for the side chosen. Following the Steadfast View is not always a good option either, as it cannot be applied to expert–laypeople situations. As the differing options are supported by experts (i.e., epistemic superiors of the decision-makers regarding the issue), decision-makers are, prima facie at least, supposed to accommodate their views to expert opinion. Suppose that some experts support the point a decision-maker also holds. Regardless of counter-arguments, the Steadfast View suggests that she stay committed to her point if there is faultless expert disagreement about the issue. But this is what she would do if she disregarded expert opinion altogether. The Suspension View has been further developed in a somewhat different, third direction called the Conciliatory View (Elga, 2010, Lackey, 2010, Christensen and Lackey (Eds.), 2013). A main reason behind conciliatorism is that while the Steadfast View does not allow enough space for progressive ideas, the Suspension View leads to an all-out scepticism regarding undecided questions, which is to be avoided (Christensen, 2009). Conciliatorists take the best solution to be some sort of harmonisation between the extremes: if neither is supported by sound argumentation, so they claim, the answer must lie somewhere in between. For this, probability calculations seem to offer the best solution: giving up a binary notion of (dis)belief, conciliatorists build on a Bayesian epistemology and consider further relevant factors, such as the connectedness or isolatedness of the beliefs to be revised, the number of peers evaluating the evidence as having a further effect on probability, etc. (a toy numerical illustration of this probabilistic reading is given at the end of this section). An important aspect disregarded by the Conciliatory View is that an evaluation of reasons normally happens through an evaluation of inferential and/or dialectical structures rather than through an evaluation of the probabilities of isolated pieces of reasoning. To put it simply, the Conciliatory View takes decision processes to be like negotiations rather than argumentation (for the difference, see esp. Provis, 2004). Argument evaluation requires a qualitative analysis of how reasons for a claim can be inferentially connected to other reasons one has already accepted. Probability calculations lack this characteristic, making decisions isolated and hence purely opportunistic: for example, decision-makers are not required to maintain a coherence of reasoning (and hence a consistent moral character) if probability is the main criterion in decision-making. If inferential and dialectical relations among items of reasoning are important, a system of calculation and voting is to be replaced with dialectics. While a probabilistic account can offer important insights




for an understanding of how group decisions can or should work in a conciliatory way, it is rather one-sided in disregarding the argumentative and dialectical aspects of conciliation. A further general problem with all the above-mentioned accounts (as applied to the present problem) is that they seemingly take beliefs to be subject to voluntary revision. Introducing an argumentative/dialectical aspect of conciliatory processes into the account implies seeing reasons as forcing arguers to stay committed to, or revise, beliefs in accordance with the arguments. It is, so the claim goes, not a choice for them to stay steadfast, suspend or reconcile their views; their choice lies rather in the argumentative and dialectical nature of the discussion they take part in. An important characteristic of that nature is that a debate finishes rationally if one and only one reasonable position remains. Hence, a debate between opposing positions has to be continued as long as pro and contra arguments can be provided and arguers are dialectically forced to stay committed to their views. This makes it, of course, harder rather than easier to come to a decision if reasonable arguments seem to support more than one position. But that does not affect the claim that it is irrational to give up a debate while it is undecided. Recent developments in collective epistemology define clear directions for how to manage peer disagreement. The models taken as their ground are, however, not sophisticated enough to grasp the problem I have introduced. Below I shall argue that considering the dialectical aspects of disagreement not only increases the complexity of the model but also offers a rational explanation for prima facie irrational decisions. Let us continue by asking why an increase in complexity is required at all.
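Before doing so, here is the toy illustration promised above. It is not drawn from the authors cited; the credences are hypothetical, and simple equal-weight averaging stands in for the more refined Bayesian machinery that conciliatorists actually use.

```python
# Illustrative only: the three responses to a faultless disagreement, in toy form.
# The credence values are hypothetical, not taken from the chapter.

def equal_weight(credences):
    """Conciliatory ('equal weight') revision: move to the average credence."""
    return sum(credences) / len(credences)

credence_a, credence_b = 0.9, 0.3       # two peers' credences in the same hypothesis

steadfast   = (credence_a, credence_b)  # Steadfast View: each peer keeps her credence
suspension  = None                      # Suspension View: no credence left to act on
conciliated = equal_weight([credence_a, credence_b])

print(conciliated)                      # 0.6 -- a credence neither peer held before
```

Even in this toy form, the limitation noted above is visible: the revision operates on isolated numbers and is blind to how each credence is inferentially and dialectically connected to everything else the peer believes.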

Increasing complexity I: Epistemic peerhood and expertise

An important structural characteristic of the group that is worth clarifying is how members relate to each other in terms relevant for making a good decision. At the most abstract level, groups are analysed on the supposition that they consist of epistemic peers: each member has the same (shared) evidence and the same epistemic virtues, such as being equally open-minded; unbiased; charitable; fair-minded; rigorous in argument evaluation; reflective; intellectually serious; truth-loving and truth-seeking; etc. They also have the same epistemic abilities, such as skilful argument evaluation; evidence assessment; understanding inferences and the logical and dialectical relations of claims and positions; sensitivity to context, such as a cultural, political or historical background; etc. No less importantly, they also believe that they are peers:


they acknowledge that each other’s evidence, virtues and abilities are the same as theirs; so they take each other to be equally qualified to make a competent decision in questions covered by their shared expertise (or lack thereof). This model, though widely accepted as a starting point (e.g. Kelly, 2005, Elgin, 2010, Lackey, 2014), is overly idealistic. It is subject to counter-arguments on each point: a full equality of peers is unrealistic and cannot describe the phenomena properly (Siegel, 2013). Siegel’s criticism shows that a refinement of the original model is in order. In response to Siegel’s (2013) objections, a neutrality of expertise rather than an equality of peers can be introduced. Epistemic peers in this model are not considered equals in epistemic terms; they do not (necessarily) share all evidence, and they may differ in epistemic virtues and abilities. The only criterion to be applied is their belief that their peers are equally competent or, if not, that the difference in competence is irrelevant to their decision. This is a classical democratic model in which experts do not claim a right to a greater voice in decisions relevant to their own field, insofar as the decision counts as a matter subject to a democratic decision of the whole society. They offer their opinion as a possible reason for the decision, but they do not claim that their votes count for more than the vote of any layperson. In this sense, a model of groups involving epistemically neutral rather than epistemically equal peers requires considering arguments and reasons, rather than positions or votes, as what is to be summed up in order to reach the best decision. Under standard circumstances, democracies apply a somewhat different model, with a space for public debates in which reasons are discussed and a separate space in which individuals decide how to vote. In this way, reasons do not force a decision; aspects important for individual decision-makers but not covered by expert debates can and usually do appear in the outcome of decisions. All the same, as is obvious from the above, a sort of hierarchy of expertise among group members is at least allowed in neutrality models, and its introduction seems unavoidable anyway. The reason is not only that groups in real life are always heterogeneous regarding expertise, but also that differences in levels of expertise are relevant for making rational decisions. The more expertise one has regarding the issue, the higher the probability that she makes a well-grounded decision. So the model should reflect the matter of expertise, also taking into account that expertise is context-dependent, as no general expertise can be provided that is applicable to all possible fields. For a rational group decision, expertise is of central importance if careful consideration is taken to be a guiding principle in decisions: a collective decision can be better than an individual decision precisely on the ground that different aspects of the problem can be taken into consideration from different angles and by different evaluation methods. The ideal would be for an aggregation of reasons and an aggregation of decisions to result in the same group decision. Harmonising members’ reasons with their
decisions in a collective framework can be a possible solution, so that the two operational orders would result in the same outcome. One way to harmonise reasons and decisions is a dialectical exchange of reasons in which individuals try to convince each other, rather than simply voting in a dialectical vacuum.

Evidence and reasons: Expertise and competence

Ideally, a dialectical exchange results in a convergence of opinions due to the logically forcing and/or persuasive character of reasons. Besides logic and rhetoric, an important factor is evidence, a category normally taken to be a matter of scientific investigation. Evaluating the outcome of scientific investigations requires a specific sort of competence. Regarding particular questions of a scientific nature, scientists working in the relevant fields are epistemically superior to other group members: they have more preliminary knowledge and competence, and also better access to the available evidence, for deciding a question relevant to their field. Which expertise is relevant is often taken to be a matter of choice rather than a matter of (meta-)expertise. Once a division between experts and laypeople is introduced, the scenario of groups consisting of epistemic peers can be rewritten. Experts presumably have better reasons than laypeople, so their reasons should count for more than laypeople's reasons. From this, two possible modifications can follow. First, experts (on a certain topic) can have a greater voice in decisions relevant to their expertise. Second, laypeople can be expected to listen to relevant expert arguments more than to lay arguments in debates. The first is common in many decisions (this is the place for, e.g., scientific committees to decide certain questions), but the decisions that have the most social impact are normally made in an equal-voice scenario. In these cases, better reasons rather than more votes can make a choice more rational. Laypeople behaving rationally should listen to expert arguments more than to lay arguments in order to make the best decision in cases where expertise of some kind seems to matter to them. Their listening to expert arguments should therefore somehow be represented in the model of decision. Ideally, argumentation leads to an agreement, even though this is not always the case. But even when no agreement is reached, expert opinion can be evaluated by laypeople in order to make decisions. Though a model on which laypeople accept expert opinion by relying on it blindly may seem a good starting point (Hardwig, 1985), there are reasons not to accept it. Goldman (2001) argues that even where the level of knowledge needed for evaluating expert arguments seems inaccessible to laypeople, other factors are available to support their evaluation, including an understanding of logical and dialectical relations within and between positions, possible evidence of expert bias, and evidence of expert track records and credentials. Insofar


as expert opinion is taken seriously on grounds that are largely indicated by these factors, and there is nothing field-specific about evaluating them, competent laypeople can rely on their own judgment of expert opinion in public debates. A condition for this evaluation is that the debates can be followed publicly. Experts normally do not form their opinion in a dialectical vacuum: in order to be acknowledged as a piece of scientific work, their results must relate to an existing body of scientific knowledge and arguments. Expert opinion relevant to public decisions is normally formed not internally to science but in a (more or less) publicly accessible dialectical space. Hence, an analysis of some general characteristics of the dialectics of public debates on scientific issues is a helpful way for laypeople to form their opinion on the issue. In order to do so, two parallel distinctions are to be made. Distinguishing evidence from reasons, experts are taken to be responsible for evaluating the former, as evidence is subject-specific and hence requires relevant expertise. Reasons and normative claims, in contrast, are not subject-specific, and evaluating them does not require expertise but competence in argumentation, argument evaluation and moral judgment. The latter can be expected from the educated public as well. Therefore, in an ideal scenario, debates about evidence should be managed within scientific debates, whereas public debates should be about reasons accessible to laypeople as well. Precisely due to laypeople's lack of expertise, conducting debates about scientific issues in the public sphere is deceptive; it is a misunderstanding, rather than a proper application, of democratic principles to let everyone decide how to judge evidence of scientific relevance. A clear separation of scientific issues from science-related public issues is of course a hard task in itself. But the difficulty of the task is no excuse for disregarding its importance.

Increasing complexity II: The summative view and the totality view

What has been said above implies a revision of how group decisions are to be made. Normally, models of group decisions build on the idea of democratic voting. What shall be offered here instead is the idea of a (no less democratic) debate in which reasons rather than positions count. In this respect, two approaches to collective decisions can be taken. They will be called the Summative View and the Totality View, respectively. The first takes group decisions to be analysed as an aggregation of the votes of members (see esp. Lackey, 2014b). The second, an approach which is rather disregarded in contemporary debates, takes the group to be a separate agent making its own decisions; even though its decision is intimately related to the decisions of its members, their relation cannot be grasped by a single aggregation.




In their metaphysical background, a distinction between unity and totality can be identified as follows. A unity of x is a sum of x's parts (where a sum is, usually but not necessarily, a simple algebraic addition). Knowing all features of the parts of x results in knowing all features of x as a sum or unity. Understanding unity as a sum of parts can be taken to be the standard mereological view. Unity as totality, in contrast, is taken to be something more than a sum of parts: in a totality, qualitatively new features emerge (as happens, e.g., in Hegelian dialectics). Hence, knowing the sum of all features of the parts of x does not result in knowing x as a totality. In group decisions, the group stands for x in the above model, members stand for its parts, and their reasons for a decision stand for some features of the parts of x. On the Summative View, group decisions are made up from the votes of group members. Considering their reasons for one option or the other, group members make their individual decisions and vote accordingly. An aggregation of votes results in the decision of the group. The Totality View understands decisions differently. It (usually, but, as will be shown, not necessarily) takes the group to be a separate agent, making its decision on the basis of its own reasons. Consequently, members do not contribute to a group decision through their votes but directly through their reasons. The reasons of individual members constitute the reasons of the group, and those shared and evaluated reasons, rather than member votes, contribute to the group decision. How this constitution actually works is often thought to be mysterious by summativists. As mentioned, a quasi-Hegelian supposition of a group agent seems to be one possible route, but it is not a viable option for most collective epistemologists. Building on pragma-dialectics, I shall take the space for that constitution to be a public space of discourse, a space in which arguments, as speech acts of contributors, are actively involved in a dialectical way of making up decisions. Just as it is absurd for a summativist to think that the reasons of individuals can constitute the reasons of a group, it is absurd for a totalist to think that votes rather than reasons can constitute a valid agreement within the group. Obviously, a group is not a real agent. No one expects a group to make "its own" decisions. Decision-making can be taken as a purely algorithmic calculation, an aggregation of reasons (just as it is an aggregation of votes). The Totality View does not necessarily suppose a group agent separate from the group members. A shared dialectical space for publicly accessible reasons can play the role attributed to a group agent. Such a dialectical space is, in my view, also worth supposing, because some phenomena are hardly explicable within the Summative View alone. Perhaps the most pressing is the so-called discursive dilemma, which demonstrates that summing up members' reasons for a decision and summing up their votes do not always lead to the same group decision. If priority is given to reasons, as cases involving expert opinion should imply, then decisions cannot be made on the ground of votes if the latter diverge from a decision made on the ground of the reasons of group members.


A classical scheme illustrating the problem (Bird, 2014; Pettit, 2014; Briggs, Cariani, Easwaran, & Fitelson, 2014) is as follows. Let there be a group of three members A, B and C, and a decision that can be described in the form (p&q). A, B and C are epistemic peers; i.e., their preliminary knowledge, competence and available evidence for deciding whether to do (p&q) are taken to be approximately the same. Let A have reasons for p and reasons for q, B reasons for p and against q, and C reasons for q and against p. Summing up their reasons into individual decisions, A supports both p and q and hence decides to do (p&q); B and C decide against (p&q) because each fails to support one component or the other. Summing up their votes results in a group decision against (p&q). Summing up their reasons into a group reason, both p and q receive two votes in favour and only one against, which makes the group decide for (p&q). As the two results differ depending on the order of the operations, there is a difference between making a decision on the ground of group reasons and making a decision on the ground of votes (a small worked sketch of this divergence follows below). Despite this standard argument against it, the Summative View is more attractive to contemporary collective epistemologists. Supposing a group agent simply seems too high a price to pay for avoiding problems like the discursive dilemma. Hence, a standard strategy is to dissolve the discursive dilemma within the framework of the Summative View rather than to abandon that framework. But once a version of the Totality View is developed which supposes no group agent, the main worry of summativists becomes groundless. That is what shall be aimed at below by a (pragma-)dialectical approach.
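To make the two aggregation orders concrete, here is a minimal sketch of the scheme above; it is my own illustration, not part of the chapter, and the member profiles, the simple-majority rule and the function names are assumptions chosen only to reproduce the A, B, C example.

```python
# Minimal sketch of the discursive dilemma described above (illustration only).
# Three peers judge the premises p and q; the group decision concerns (p and q).
# Aggregating votes on the conclusion and aggregating reasons premise by premise
# yield different group decisions.

from typing import Dict, List

# Each member's reasons: attitudes towards the premises p and q.
members: List[Dict[str, bool]] = [
    {"p": True,  "q": True},   # A: reasons for p and for q
    {"p": True,  "q": False},  # B: reasons for p, against q
    {"p": False, "q": True},   # C: reasons against p, for q
]

def majority(values: List[bool]) -> bool:
    """Simple majority over boolean attitudes (an assumed aggregation rule)."""
    return sum(values) > len(values) / 2

# Summative View: each member first concludes (p and q), then the votes are aggregated.
individual_conclusions = [m["p"] and m["q"] for m in members]  # [True, False, False]
vote_based_decision = majority(individual_conclusions)         # False: group rejects (p & q)

# Reason-based aggregation: majority on each premise, then conjoin at the group level.
group_p = majority([m["p"] for m in members])                  # True (2 of 3)
group_q = majority([m["q"] for m in members])                  # True (2 of 3)
reason_based_decision = group_p and group_q                    # True: group accepts (p & q)

print(vote_based_decision, reason_based_decision)  # False True -> the two orders diverge
```

Running the sketch prints `False True`, reproducing the divergence between vote-based and reason-based group decisions that the discursive dilemma points to.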

Strategic manoeuvring and different senses of rationality

The claim that disagreement generates debates, which are to be explained in a dialectical framework, may sound all too obvious; hence, one may take my point to be too trivial. The reason why I take it to be worth emphasising is that the above-mentioned collective epistemologist views on peer disagreement do not take this aspect of disagreement into account. The level of abstraction at which they stand makes disregarding these aspects reasonable; but in order to apply their approaches to the present topic, a more detailed explanatory framework is required. As mentioned, recent collective epistemologist accounts tend to disregard the argumentative and dialectical aspects of decision making. Dialectical approaches, on the other hand, are normally not applied to specifically epistemological problems of disagreement, as their primary focus is on the general dialectical and argumentative characteristics of debates. A dialectical approach, if applied to this field, may bring novel insights to the surface. Steps towards this are taken in this section. My suggestion is to overcome a simplistic view that takes group
decisions to be some sort of aggregation of equal group members' decisions, and to do so on the neglected ground of the Totality View. The Totality View implies, in my understanding, that when reasons rather than votes are taken into account, an additional characteristic emerges at the group level that is not apparent at the level of individuals. This additional characteristic is explicable in dialectical rather than purely logical terms because it is to be taken as irreducibly social and processual, and its essence lies in a shared argument evaluation: prioritising reasons on the ground of the different perspectives of group members, and of expert opinion in particular. First of all, for simplicity, some further idealisations are in order. Let the expert participants of the debate be epistemic peers of each other as above (i.e., approximately equal in expertise) and also intellectually virtuous. Equality in expertise, as Siegel (2013) argues, is an unsatisfiable condition. For the present purposes, however, a suitable qualification can play the (authoritative) role of distinguishing experts from laypeople in the relevant fields, so that assessing evidence can be taken as a task of experts, whereas laypeople's task is evaluating the experts' line of reasoning. Intellectual virtuosity is a set of characteristics that arguers are ideally expected to have. It includes an intention to reach truth (rather than purely to persuade others of a personal opinion); an intention to do so in accordance with certain rules of logic and dialectics (rather than arriving at true conclusions by a mere chance of guessing); and an intention to convince their opponents of the truth reached (rather than just accessing truth while remaining in disagreement with their fellows). The first is an epistemic criterion of intellectual virtuosity, the second is a dialectical criterion, and the third is a rhetorical criterion. 4 The first aims at truth; the second aims at a fair debate; and the third aims at winning the debate. Reaching a true conclusion is a necessary aim for intellectually virtuous arguers (epistemic criterion). Under these conditions, the dialectic of a debate is a progress from disagreement towards an expected agreement (dialectical criterion). Finally, arguers also intend to win the debate, as taking part in a debate is not rational if persuading the opponent(s) and/or a neutral audience is not an aim of the arguer (rhetorical criterion). This idealisation is less demanding than the other because intellectually non-virtuous arguers are also expected to comply with these rules, and even if they have no inner compulsion to do so, their fellows monitor their moves and penalise them for rule-breaking, e.g. by withdrawing authoritative power from their reasoning.

4. Rhetoric occurs here as merged with dialectic through the pragma-dialecticians' conception of strategic manoeuvring, rather than as rhetoric in the classical (ancient) sense. For strategic manoeuvring, see esp. van Eemeren and Houtlosser (1999). How pragma-dialectics relates to epistemic accounts – and hence to the epistemic criteria of a debate – is summarised in Biro and Siegel (2006).


The three criteria of intellectual virtuosity in a debate set up three means and three ends of the debate. The ends, as idealised goals of virtuous arguers, are as follows. The epistemic goal of a debate is accessing truth (about a matter); it is pursued through scientific inquiry as its means. The dialectical aim is a resolution of the dispute; its corresponding means is following the rules of logic and dialectics. The rhetorical aim is winning the debate; the means leading towards it is exploring the persuasive possibilities available to the arguer. These three aims, and the three means leading towards them, determine the possible moves of intellectually virtuous arguers within the dialectical space of a debate. The argumentative moves must lead towards these ends as the debate progresses. As said, in order to aim at these different ends, different sorts of means are to be applied. Conforming to the normativity of these means defines the rational moves within the debate. Rationality is taken to be normative in this sense: if something is rational to do under some circumstances, it ought to be done under all relevantly similar circumstances. The three above-mentioned means-end structures imply three different notions of rationality. Since each is normative, each implies its corresponding ought-claims. They can, however, be in conflict with each other, and in some systematically classifiable cases they necessarily are. If so, following the norms implied by one sense of rationality will necessarily result in breaking the norms implied by another sense of rationality. This makes decisions in such cases necessarily irrational in one sense or the other. This is, however, no excuse for irrational decision-makers, insofar as there is some sort of hierarchy among the different senses of rationality, and decision-makers are required always to break lower-level rather than higher-level rationality. Winning a debate is inferior to reaching an agreement, and reaching an agreement is inferior to truth. Though the relations among these three elements are much more complex, as an abstract guide this simplification is mostly worth following. But precisely this complexity implies that no general rule can be provided for the priority of these aspects. In the case of conflicting pieces of rational advice applicable to a situation, it is not a matter of expertise but a matter of decision (or, ideally, of argument evaluation) which option is to be chosen. Rationality is to be understood here in three separate but related senses. The first may be called, following the terminology suggested above, epistemic (or inquiry-driven) rationality: a norm of doing x in order to have access to truth. The second may be called dialectical (or rule-following) rationality: doing x in order to be consistent with the rules of elementary logic and the dialectics of a debate. The third may be called rhetorical (or instrumental) rationality: doing x in order to satisfy the arguer's self-interest (i.e., winning the debate). Different criteria of rationality sometimes support each other, but sometimes they are in conflict. Keeping committed to dialectical rules is sometimes the
best available way to reach truth, because a wide agreement about falsities is possible but hardly accessible. But in some cases (namely, in the case of faultless disagreements), truth is inaccessible via a resolution of the dispute because a resolution itself is also inaccessible. Even in such cases, one of the mutually exclusive options is nonetheless true insofar as realism about truth is accepted; disregarding the dialectical rules that aim at an agreement and remaining steadfast may sometimes be a better strategy for maintaining truth than aiming at conciliation. Epistemic and rhetorical aims can also be in conflict; in fact, in all controversies, at least one participant's rhetorical aim (winning the debate) is in conflict with truth, as two conflicting views cannot be true at the same time. Rhetorical aims are also often in conflict with dialectical rules, as holders of two conflicting views cannot both win the debate either. But ideally, one of the positions is true and supportable by appropriate arguments, so in ideal debates one of the intellectually virtuous arguers can reach all three of their goals: accessing truth, reaching an agreement, and winning the debate. Needless to say, this is the exception rather than the rule (Goodwin, 2007).

Conclusion

I have argued that disregarding expert opinion in public decisions is not necessarily irrational. There are at least three senses of rationality applicable to public debates, and they are often in conflict; following rationality in one sense often implies being irrational in another. Furthermore, even though a priority among these senses of rationality can be provided, it cannot be applied to each individual decision, because the priority among rational choices is not a matter of expertise but a matter of decision. These relevant aspects of rationality cannot be grasped, however, in recently developed frameworks of collective epistemology, as these frameworks disregard two important aspects of group decisions: expertise on the one hand, and dialectics on the other. This paper has intended to open up new directions for extending existing models towards these topics. It seems that these extensions cannot provide a final answer to the question raised in this paper. A dialectical approach can, however, either offer guidance for a good decision or at least explain why there is not one single good choice. What decision-makers are expected to do in the latter case is subject to a further extension of the model.


Acknowledgments

This paper builds on two presentations of mine, one held at the Decision, Rational and Joint Action conference (Budapest, Hungary, 14–15 April 2016), the other held at the IASC Science and Democracy: Controversies and Conflicts conference (Pisa, Italy, 26–28 October 2016). I am grateful to the organisers and participants of both events, and to Ákos Gyarmathy in particular, for their valuable comments and inspiration. Research leading to this paper was supported by the Hungarian National Research Fund (OTKA K-109456, OTKA K-116191).

References

Bird, A. (2014). When Is There a Group that Knows? In J. Lackey (Ed.), Essays in Collective Epistemology (pp. 42–63). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199665792.003.0003
Biro, J., & Siegel, H. (2006). Pragma-Dialectic Versus Epistemic Theories of Arguing and Arguments: Rivals or Partners? In P. Houtlosser & A. van Rees (Eds.), Considering Pragma-Dialectics: A Festschrift for Frans H. van Eemeren on the Occasion of his 60th Birthday (pp. 1–11). New York: Routledge.
Briggs, R., Cariani, F., Easwaran, K., & Fitelson, B. (2014). Individual Coherence and Group Coherence. In J. Lackey (Ed.), Essays in Collective Epistemology (pp. 215–249). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199665792.003.0010
Christensen, D. (2007). Epistemology of Disagreement: The Good News. Philosophical Review, 116(2), 187–217. doi: 10.1215/00318108-2006-035
Christensen, D. (2009). Disagreement as Evidence: The Epistemology of Controversy. Philosophy Compass, 4(5), 756–767. doi: 10.1111/j.1747-9991.2009.00237.x
Christensen, D., & Lackey, J. (Eds.) (2013). The Epistemology of Disagreement: New Essays. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199698370.001.0001
Elga, A. (2010). How to Disagree About How to Disagree. In R. Feldman & T. Warfield (Eds.), Disagreement (pp. 175–186). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199226078.003.0008
Elgin, C. Z. (2010). Persistent Disagreement. In R. Feldman & T. Warfield (Eds.), Disagreement (pp. 53–68). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199226078.003.0004
Feldman, R. (2006). Epistemological Puzzles about Disagreement. In S. Hetherington (Ed.), Epistemology Futures (pp. 218–227). Oxford: Oxford University Press.
Goldman, A. (2010). Epistemic Relativism and Reasonable Disagreement. In R. Feldman & T. Warfield (Eds.), Disagreement (pp. 187–215). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199226078.003.0009
Goldman, A. I. (2001). Experts: Which Ones Should You Trust? Philosophy and Phenomenological Research, 64(1), 85–110. doi: 10.1111/j.1933-1592.2001.tb00093.x
Goodwin, J. (2007). Argument Has No Function. Informal Logic, 27(1), 69–90.
Hardwig, J. (1985). Epistemic Dependence. Journal of Philosophy, 82, 335–349. doi: 10.2307/2026523
Kelly, T. (2005). The Epistemic Significance of Disagreement. In T. Szabo Gendler & J. Hawthorne (Eds.), Oxford Studies in Epistemology, Vol. 1 (pp. 167–196). Oxford: Oxford University Press.
Kornblith, H. (2010). Belief in the Face of Controversy. In R. Feldman & T. Warfield (Eds.), Disagreement (pp. 29–52). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199226078.003.0003
Lackey, J. (2010). What Should We Do When We Disagree? In T. Szabo Gendler & J. Hawthorne (Eds.), Oxford Studies in Epistemology (pp. 274–293). Oxford: Oxford University Press.
Lackey, J. (2014a). A Deflationary Account of Group Testimony. In J. Lackey (Ed.), Essays in Collective Epistemology (pp. 65–97). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199665792.003.0004
Lackey, J. (Ed.). (2014b). Essays in Collective Epistemology. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199665792.001.0001
Lalumera, E. (2015). Overcoming Expert Disagreement in a Delphi Process: An Exercise in Reverse Epistemology. Humana.Mente Journal of Philosophical Studies, 28, 87–103.
Pettit, P. (2014). How to Tell if a Group Is an Agent. In J. Lackey (Ed.), Essays in Collective Epistemology (pp. 97–121). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199665792.003.0005
Provis, C. (2004). Negotiation, Persuasion and Argument. Argumentation, 18(1), 95–112. doi: 10.1023/B:ARGU.0000014868.08915.2a
Siegel, H. (2013). Argumentation and the Epistemology of Disagreement. OSSA Conference Archive, Paper 157. http://scholar.uwindsor.ca/ossaarchive/OSSA10/papersandcommentaries/157. Last accessed: 05.04.2017.
Van Eemeren, F. H., & Houtlosser, P. (1999). Strategic Maneuvering in Argumentative Discourse. Discourse Studies, 1(4), 479–497. doi: 10.1177/1461445699001004005
Van Inwagen, P. (1996). It Is Wrong, Everywhere, Always, for Anyone, to Believe Anything upon Insufficient Evidence. In J. Jordan & D. Howard-Snyder (Eds.), Faith, Freedom and Rationality (pp. 137–154). Savage, Maryland: Rowman and Littlefield.

Chapter 3

Rethinking the notion of public
A pragmatist account
Roberto Gronda
University of Pisa

Recent times have shown that democracy and science may easily come into conflict. The goal of this paper is to reframe the issue of the relations between democracy and science in a way that makes it possible to preserve the distinction between the two, while rejecting the pessimistic view that their conflict is harmful. To achieve this goal, I will refine Dewey's concept of public so as to develop a semantic interpretation of it grounded in the notion of articulation. My proposal is to conceive of the public as a logical space not reducible to that of science, in which the truths discovered by scientists are renegotiated through a process of interaction between experts and citizens.

Keywords: philosophy of expertise, pragmatism, technical democracy, science, public, articulation

A prominent feature of contemporary political debate is the revolt against the elites. The conflict between the people and the ruling classes is getting stronger and stronger, and is spreading into almost every area of public life. It is not only that politicians, bureaucrats, political analysts and journalists are viewed as unreliable sources of information; scientific experts too are perceived as not credible by the public. As Michael Gove – one of the leaders of the Leave campaign for Brexit – has recently said, "People in this country [Great Britain] have had enough of experts." And people in many other countries share the same feeling. One of the consequences of this wholesale distrust of expertise is that democratic deliberation is constantly on the verge of collapsing into irrationality, thus exacerbating the tension between science and democracy. This scenario seems therefore to undermine the plausibility of any view that takes democracy and science to be essentially – and fruitfully – connected. There are several arguments that can be brought to bear in favor of the thesis that there is no essential connection between science and democracy. First of all, one can say that science would be better conceived of as a republic than as a democracy: the idea that lies behind its allegedly 'democratic' character is rather the
republican principle that everybody can recognize herself in the laws that rule the community to which she belongs. In addition, democracy and science are governed by different logics, with different relations to time: indeed, there are temporal constraints that limit democratic deliberation, which do not affect scientific investigation. One of the consequences of the institution of laboratory life is the creation of a protected environment in which the demands of practice are somehow suspended or sublimated. Now, it is clear that, being a form of life, laboratory life cannot be entirely subtracted from the imbroglio that characterizes everyday transactions with the social milieu. In their seminal book, Latour and Woolgar have insisted precisely on this point, stressing the fact that the production of scientific truths is part of an ‘earthly’ and extremely concrete set of actions and practices (Latour and Woolgar, 1979). Yet, all these qualifications notwithstanding, it remains true that the image of science that most scientists and philosophers of science hold is centered around the idea of a fallible, revisable and progressive activity of refinement of a certain body of propositions. Suspension of judgment and delay of decision have therefore different meanings in democratic and scientific practice: in both cases they are legitimate moves, but while in the latter they are part of the process of construction – or confirmation – of truths, in the former they amount to a sort of rejection of the plans of action under examination. Put in more pragmatist terms, doubt arises when belief cannot lead action anymore. In the light of what we have just said, we can well state that scientific doubt is different from democratic doubt because of their different ways of blocking action. Peirce was well aware of the difference between the two, and he tried to avoid possible confusion by stressing the distinction in his Cambridge Conferences (Peirce, 1992). 1 Accordingly, it is a fact that democracy and science, people and experts, are often in conflict. Any a priori warrant of their agreement is therefore to be discarded, and their recomposition is to be achieved on a local basis, from time to time. Now, I believe that a complication of the theoretical landscape is most welcome. It is true that such complication knocks out, so to say, all the attempts to get rid of the conflict between science and democracy by defusing – through essentialist and once-and-for-all arguments – their alterity. However, the challenge that it raises to philosophical thought is liberatory since it helps us to free ourselves from the temptation to take theoretical shortcuts, and to radically reframe the way in which we think their relation. It compels us to reach a higher level of reflexivity. In the present essay my aim is to bring pragmatism to that higher level of reflexivity. Among the many traditions of thought that attempted to solve the conflictual tension between science and democracy, pragmatism is probably the most nuanced and refined in that it does not embrace any reductionist strategy. Pragmatism, 1. On this point, see (Misak, 2004: 158ff.).
especially John Dewey’s version, was confident that democratic deliberation could take the form of a scientific (intelligent) process of inquiry. Dewey believed that the task of philosophy was to broaden the scope of inquiry so as to cover moral and political subject matters, thus breaking them free from the hold of passions and tradition. He did not believe, however, that the difference between democracy and science should be completely erased or denied. Dewey’s scientification of morals and politics consists in the extension of the scientific method – though it would be better to say ‘scientific attitude’ – to the analysis of human affairs; it does not amount to the idea that the solution of human problems should be delegated to groups of experts who know better than lay people what should be done. The problem that haunted Dewey throughout his whole life was how to discover educational practices that enabled citizens to learn how to participate in an intelligent way in the process of problem solving. Dewey’s social philosophy has often been charged with being too optimistic, mainly because of its faith in the possibility of transforming democracy in a community of inquirers. Nonetheless, I believe that Deweyan pragmatism has the theoretical resources to provide the means to develop a consistent theory of expertise that could pave the way for a different way of framing the relation between science and democracy. The key notion here is that of public, as sketched by Dewey in his The Public and Its Problems (1927). Obviously, classical pragmatism being a late nineteenth-century philosophical tradition which originated in a different, and less refined, academic milieu, it has to be radically renewed and reshaped. In particular, Dewey was not clear about the composition of the public and the role that experts play in it. 2 In this paper I will suggest that technical democracy – with its emphasis on the ideas of hybrid forum, research in the wild and co-production of science and society – may be helpful to correct and integrate Dewey’s conception of public, so as to transform it into a viable theoretical option suited to define what an expert is. To avoid any possible confusion, it should be borne in mind, while reading this essay, that my account of expertise is an idealization of the different factors involved in the process of technical decision-making; 3 it does not purport to be a descriptive analysis of 2. For an interesting and well-documented analysis of this issue, see (Brown, 2009: 44–83). 3. For a definition of the notion of ‘technical decision-making’, see Collins and Evans (2002: 236):“By ‘technical decision-making’ we mean decision-making at those points where science and technology intersect with the political domain because the issues are of visible relevance to the public: should you eat British beef, prefer nuclear power to coal-fired power stations, want a quarry in your village, accept the safety of anti-misting kerosene as an airplane fuel, vote for politicians who believe in human cloning, support the Kyoto agreement, and so forth. These are areas where both the public and the scientific and technical community have contributions to make to what might once have been thought to be purely technical issues.”
what actually takes place in concrete processes of deliberation. In this sense, my approach is distinctively philosophical rather than sociological. In the first section, which is mainly historical and reconstructive, I will outline Dewey’s conception of public and highlight its strengths – what can be preserved in a pragmatist theory of expertise – as well as its weaknesses – what should be radically rethought and revised. In the second and third sections, I will present my pragmatist conception of expertise. In the second section, I will sketch a pragmatist ontology of the public, and account for what I call ‘the cognitive autonomy of the public’. With this label I mean to refer to the fact that the public is a cognitive agent that can yield new knowledge – that is, knowledge that could not be achieved by other agencies such as natural and social sciences. Finally, in the third section I will elaborate on that idea to clarify the role and function of experts in the process of public deliberation.

1. Dewey’s conception of public Inspired by the reading of Walter Lippmann’s Public Opinion (1922) and The Phantom Public (1925), in 1927 Dewey published his first book on political philosophy, The Public and its Problems. His aim was to defuse the threat posed by Lippman, who had argued that in modern times participatory democracy was impossible. Participatory democracy rests on the assumption that citizens are capable of forming a reliable opinion on political issues. Yet, because of the growing influence of mass media on society, public opinion is perverted, and people act according to what they are made to believe to be real. The only available solution to this predicament is to delegate decisions to groups of experts and decision-makers who know how reality actually works and, consequently, know how to transform it so as to conform to our desires and hopes. Dewey immediately felt that Lippmann had hit a major point. Dewey was well aware that democracy was on the verge of transforming into something else: he was no less worried than Lippmann about the effects of propaganda on American society. 4 Nonetheless, he thought that Lippmann’s solution was too simplistic. It is true that experts are in charge of producing knowledge, and that for a political deliberation to be rational and effective it has to be directed and constrained by scientific truths – or, at least, by what is held by a consistent part of the scientific community to be the most accurate descriptions of objective reality. However, this does not amount to a denial of the possibility of a genuine – which means, not perverted by propaganda and mass media – public opinion.

4. On this point, see Gronda (2015a).




The conceptual tool that Dewey forged in order to answer Lippmann's criticisms was his functional theory of public. Dewey's line of argumentation is rather simple: the only possible way in which public deliberation can be saved from the tyranny of experts is by acknowledging that people know better than the experts what they want, what they feel, and what they are going through. As Dewey summarizes this point, even though the shoemaker is the best judge of how to remedy the shoe pinching the foot, the man who wears the shoe knows better than anyone else that it pinches and where it pinches. This means that public opinion cannot be erased from public deliberation, both because it represents the starting point of the process of inquiry and because it sets the criteria and standards for the activities of experts and decision-makers. But how is this possible? What are the assumptions that lie behind the idea of the centrality of the public in the process of technical decision-making? And who or what is the public? Dewey tackled all these issues from a radical perspective which refuses to take the public as a fixed entity. According to this approach, the public is not a thing but rather a function: "the public", he wrote, "consists of all those who are affected by the indirect consequences of transactions to such an extent that is deemed necessary to have those consequences systematically cared for" (LW2: 245). The key notion here is that of indirect consequences. Dewey distinguishes between the consequences of an action which affect only those who participate in it and those that have effects on other people who do not share any responsibility for that action. In some particular cases, the indirect consequences of an action may be harmful (actually or potentially) to a certain group of people. So, for instance, a firm may decide to build a dam which would dramatically change the natural and social environment of the community living in that area. The effects of that action are therefore in no way restricted to the economic and juridical transactions between the firm and the owners of the parcels of land that have to be bought to build the dam. The transactions between the firm and the owners belong to the category of 'direct consequences'. Beyond these consequences, there are many other effects that affect the life of the community. These are the indirect consequences of those transactions. In cases like this, if the members of a group – which, at this level of development, can be named 'group' only proleptically, since a group has not yet been formed, properly speaking – perceive that those consequences may have some effects on their lives, and accordingly organize themselves as a unity of inquiry, they constitute a public. 5

5. Here a word may be needed in order to avoid a possible misunderstanding. As I read him, Dewey does not maintain that the public should be conceived exclusively as a cognitive agent. It may well be that other relevant features of the public exist that do not belong to the cognitive dimension – think, for instance, of the use of religion or mythological images to create bonds within a community. A complete theory of the public should therefore take all these different aspects into account.


The public is therefore a functional entity that originates from a cognitive act of apprehension of the possible effects of a certain set of actions. It is their perception of the relevance of those effects that gives birth to a new kind of issues, namely public or political issues. In doing so, by insisting on the importance of cognitive mediation for the constitution of the public, Dewey avoids the Scylla of ‘deterministic’ realism and the Charybdis of extreme relativism. The public is not necessarily determined by the consequences of a certain action. It may well be that a group does not perceive the consequences of an action to be potentially or actually harmful to the life of its members or to their well-being; in this case, the public simply does not arise. At the same time, however, the consequences that a certain group of people may come to perceive as harmful, thereby constituting a public, are grounded on the objective conditions of the social and natural world. Such relation of ‘grounding’ excludes any conversational – à la Rorty – or relativistic approach according to which public issues are just a matter of imagination, deception, illusion or fancy. Stated in a more technical way, I would say that the public is an articulative function, in that it articulates – from its particular perspective point – some factors actually existing and operating in the world. In the pragmatist perspective that I endorse, to articulate means to express in a different medium – no matter whether it be linguistic or material, as in the case of technological artifacts – some aspect that is effective in a certain practice.6 As I read it, the goal of The Public and Its Problems is to show that the public is an autonomous articulative agency, where ‘autonomous’ here means that the articulative process which defines and constitutes the public qua public cannot be boiled down to those of other articulative agencies such as, for instance, social sciences or the perspectives and interests of the agents directly involved in the course of action. This is an important point which we will come back to in the last section of the essay. As should be evident now, Dewey’s functional account of the public has many points of strength. Not only does it provide an account of the genesis of the public; it also clarifies the factors that operate in such a process of constitution, and indicates how they should be understood. It has, however, some remarkable weaknesses. We may divide them into three broad classes. Some of its weaknesses depend on an insufficient analysis of the features of the factors involved: as is typical with Dewey, he was more concerned with painting the theoretical landscape with a broad brush than with a clear and detailed description of the particular elements that make up the whole scenario. Another class of weaknesses is strictly related to some ambiguities connected to Dewey’s functional vocabulary. Still other weaknesses are a direct consequence of the time in which Dewey wrote The Public and Its Problems: indeed, many of the problems that we are facing now – think, for instance, of the 6. I have presented my pragmatist conception of articulation in Gronda (2017, 2015b).




proliferation of experts stemming from the enormous specialization of intellectual work, or the entanglement of public and private agencies in scientific research – were completely unknown to Dewey. No surprise, therefore, that many of his remarks may seem to us completely unsatisfactory. I am not interested in the first and third class of weaknesses since I am not concerned here with offering an overall assessment of Dewey’s position – let alone with defending his views against possible objections raised by possible critics. I will rather focus on the terminological and conceptual difficulties that stem from Dewey’s functional approach to the public, with the aim of highlighting a possible line of conceptual development of some of its features. In particular, I would like to call attention to an ambiguity in Dewey’s analysis. Dewey often speaks of the public without further qualification. It is therefore not clear whether he believes that there is just one public or whether he is ready to admit that it is possible to have many different publics, one for every possible set of indirect consequences that may be perceived by a particular group of people. For my purposes, I will assume a pluralistic view of the public; I will hold therefore that any perceived set of indirect consequences form a public, and that the public is defined by the set of consequences that makes its constitution possible. Such an interpretation is not in contrast with the letter of Dewey’s text, after all.7 The main difference is rather one of emphasis. I want to stress more strongly than Dewey does the local, temporal and contextual character of the publics. Publics originate in a specific situation, at a specific time, for specific purposes; their components may change during the processes of investigation through which the problems originating the public are assessed and solved. Publics are not fixed entities, as their boundaries are being defined from time to time by the partial results of the articulative process. So, for instance, a group of people that initially did not perceive itself to be affected by the indirect consequences of some act may later become a relevant part of the public. The public is ontologically unstable: it is in constant need of affirmation and redefinition. In addition, I think it is important to emphasize the semantic import of the articulative process that leads to the establishment of the public. This is a point strangely neglected by Dewey, who was usually very attentive to the semantic

7. The point is that, in Dewey’s hands, the notion of public becomes a tool to account for the nature and logical origin of the State. If taken in a narrow sense, it is evident that a genuine plurality of publics is somehow impeded by the fact that to a public corresponds one and only one State. However, Dewey is quite explicit that the State too should be defined in functional terms. In this sense, the possibility of a plurality of publics is preserved. As often with Dewey, it would have been better if he had chosen a different vocabulary to convey his views, so as to avoid unhappy terminological confusions.


dimension of human experience.8As has been remarked above, the public is first and foremost an articulative agency: through the articulation of the potentialities of the situation, the meaning of the latter is made explicit and available for further reflection and analysis.9 Put in different terms, the public is an autonomous dimension of reality that provides a clarification of the meaning of the indirect consequences from which it has originated. Such conclusion follows directly from the pragmatist theory of meaning, according to which the meaning of a thing or a concept is the set of the conceivable consequences that a certain thing or concept may possibly generate. This means that because of its entering into new relations, the object acquires new meanings and, consequently, undertakes a process of redefinition. So, for instance, the implicit public meaning of the private transactions between the firm planning to build a dam and the owners of the parcels of land on which the dam would be built is made explicit and articulated for the first time when the public perceives the indirect consequence of that action. This particular form of semantic emergentism – which is, at least in my view, the conceptual kernel of Dewey’s theory of public – lies at the basis of the pragmatist conception of expertise that I will lay out in the next two sections.

2. Toward a pragmatist philosophy of expertise: The ontology of the public

Now that Dewey's conception of public has been roughly sketched, let's go back to our starting point – that is, the process of technical decision-making. The question at stake here is that of understanding whether and how experts and citizens can cooperate toward the production of a rational solution to a technical problem – that is, a public problem which is characterized by the presence of a scientific element amongst its defining factors. Because of the growing complexity of natural and social environments, almost all the problems that a society, through its representatives, is asked to face are technical in nature. Economic, environmental and health issues – to name only the most relevant ones – are all strictly dependent, for their solution, on the best scientific knowledge available.

8. In this essay, I will use the term semantic to label everything which is related to meaning. Such a choice may seem awkward since that term is usually employed in a different sense. However, I think that there are some theoretical advantages in using semantic in this way. In particular, it highlights the primacy of meaning over our cognition of it. This is strictly related to Dewey's anti-subjectivism. For a further clarification of this point, see Section 2 below.

9. In a pragmatist framework, potentialities are different from possibilities because while the latter belong to the realm of logical conceivability, the former are actual forces that act in reality, even though only in an implicit and undeveloped manner. For an analysis of this notion of potentiality, see Gronda (2015b).




One traditional answer to that question is to say that for a technical decision to be truly rational, decision makers should rely on the body of knowledge provided by scientists. The rationale behind this view is that scientists are in charge of telling what there really is in the world, what is going on out there, so to say; indeed, being ignorant about matters of fact, citizens are in a sort of defective position where the production of knowledge is concerned. In this sense, democracy – here conceived not as a particular form of government, but rather as the ultimate source of the values that govern the choices that a community makes 10 – is cognitively irrelevant: its task is rather to establish the ends to be reached by applying scientific knowledge. According to this view, science is charged with assessing what is real, while democracy is charged, through its organs, with setting the goals and finding the best way to achieve them. There are some theoretical advantages to this account of the differences between science and democracy. First of all, it seems to conform well to our common-sense intuitions about the relations that hold between them. Secondly, it preserves their distinction, thus escaping scientism or reductionism, on the one side, and anti-scientism or populism, on the other. Thirdly, it avoids reductionism without implying any ascription of cognitive relevance to democracy: the legitimate space of democracy is that of decision, not that of knowledge-production. That account of the relation between science and democracy – conceived as ideal types, in the sense specified above – is therefore modeled on the dichotomy between facts and values. As is well known, pragmatism has traditionally been critical of any dualism, the dichotomy between facts and values being one of its favorite targets. In the present context I will expand on that idea, showing that the pragmatist suspicion of this family of dichotomies paves the way for a different account of the role of experts and citizens in the process of the production of knowledge. My thesis is that it is wrong to assume that the conflict between science and democracy can be solved by distinguishing their reciprocal fields of pertinence. The point that I would like to call into question is therefore the view that science – that is, groups of experts – is in charge of the production of factual knowledge, while democracy – that is, groups of citizens – is the only legitimate source of moral authority. I hold, on the contrary, that groups of citizens can actively and effectively participate in knowledge production processes. Some studies have been done to illustrate actual cases of fruitful interaction between experts and groups of citizens – arguably, the most famous example of that trend of research is Epstein (1995). I will take a different approach, which will end in a substantial redefinition

10. This is the way in which Deweyan pragmatism has usually conceived democracy. According to Dewey, democracy is not a form of government, but rather a form of life. On this point, see Frega (2017).
of the terms of the question. The argument that I am going to present is a priori, as it derives the possibility of such a fruitful interaction from the ontological constitution of the public. In order to prove that, I will adopt a constructivist perspective, proceeding from the simplest to the most complicated. I assume that, in the case of technical decision, scientific facts are structurally and ontologically simpler than those issues that are to be settled through processes of technical decision making – issues that, by definition, have as one of their constitutive features those very scientific facts. Take a scientific fact like GMOs, for instance. 11 Science tells us what GMOs are, how they can be produced, what their pros and cons are, what their impact on the environment (social and natural) is. If we are ready to accept a distinction between essential and accidental properties of a thing, we could say that science provides us with all the relevant factual knowledge about GMOs. So, there is a sense in which it seems plausible to state that science tells everything one has to know about an object. If the scientific description of GMOs were complete, one would be tempted to say, nothing else remains to be known. However, if we move from here, from the level of scientific knowledge, to the public context in which scientific knowledge has to be applied, things change dramatically. The point is that, by entering into new relations with other objects, scientific facts acquire new meanings that are irreducible to those of origin. When it comes down to deciding whether or not to allow the use of GMOs in agriculture, for instance, GMOs enter into a wide and thick net of relations with social actors (farmers, landowners, producers of agricultural machines and equipment, and so on), economic agents, political and social organizations (political parties, trade unions, lobbies) and many other agencies – for example, cultural associations for the preservation or restoration of the landscape. They also enter into relation with the particular constitution of the soil, as well as with the climate of the region and the possible effects on the behavior of the species living in it. As Wynne has convincingly shown (Wynne, 1996), these aspects of the whole situation are better known by people living and working in the area than by scientists and experts. In all those cases in which the features of a specific context are factors influencing the elements of the problem at stake, local knowledge is likely to be more effective and reliable than scientific knowledge. The point that I would like to stress is that this is possible not because scientific knowledge is somehow defective or biased, but because local and scientific knowledge have different objects.

11. It may be argued that this example is badly chosen since GMOs are technical products rather than scientific facts. I do not think this remark hits the mark. It relies on an image of science which is deeply anachronistic, as if it were possible now, in the epoch of technoscience, to clearly distinguish between pure and applied sciences. Is the study of the human genome pure or applied science? I am indebted to Pierluigi Barrotta for calling my attention to this point.




The latter remark lies at the basis of my pragmatist ontology of the public. The grounding idea is that public issues cannot be boiled down to scientific issues because the former are much more complex objects than the latter, being constituted by all the relations that scientific facts entertain with relevant aspects of social reality, the criteria of relevance being obviously contingent and contextual. If we use Dewey’s terminology, we could say that the relations that a certain thing entertains in a laboratory context are its direct consequences – as a scientific entity – while its relations with social and political affairs amount to its indirect consequences. This account is backed by the pragmatist conception of scientific objects, which is in turn grounded in the pragmatist theory of meaning. According to this view, scientific objects are idealizations defined by the operations that can be performed on them in a laboratory – no matter whether real or imaginary, as in the case of mathematics. Since an object is defined – at least as far as its cognitive import is concerned (remember that the pragmatic maxim was originally conceived as a tool for the clarification of the meaning of scientific concepts) – by the consequences that follow from the operations performed on it, its semantic nature undergoes a change when it is taken out of the aseptic laboratory environment and brought into the open air. Clearly, there is a continuity between the scientific object and the public object of which the former is one of the constitutive elements. It is such continuity that allows us to say that scientific knowledge grounds, and puts constraints on, technical decision. But continuity does not mean identity: scientific and public issues are indeed essentially different. I argue that the irreducible difference between the two is what makes it possible for the public to be an autonomous cognitive agent capable of yielding new knowledge. The point is that if we assume that experts and citizens are concerned with the same object, it is difficult to avoid the conclusion that democratic deliberation is only a second-best option: laypersons will always be less competent than the experts – no matter how much expert knowledge is communicated to them by newspapers and information media – and their judgments and decisions will always be less accurate than those made by the latter. On the contrary, if experts and citizens are concerned with different objects, then it is possible to conclude that the public is an autonomous cognitive agent, and that it is the sole possible ‘knower’ of the public issue. Such a conclusion follows directly from the Kantian idea that subject and object are mutually interdependent: since the object – as far as its cognitive content is concerned – is not a thing-in-itself, but coincides with the concept of that object, it is clear that the object is necessarily the object-of-a-subject. The other side of the relation – the subject being the subject-of-an-object – is made clear when we accept the pragmatist view that the subject is not an entity but rather a function which originates from, and within, a problematic situation and whose goal consists in developing an intelligent solution to the issue at stake.


Even though one may question the general idea, this is surely the case with the public, as it has been defined above. Reflexivity provides the clearest example of the essential interdependence of subject and object in the case of the public. By reflexivity I mean the fact that actors can take the content of a theory as one of the elements that influence their future actions. Accordingly, the ‘representation’ of reality that an agent articulates in order to cope with the world transforms the nature of the reality with which she is concerned, since it introduces a new factor into the general problem, a factor that was not present when the representation was made. 12 Now, the (specific) object that the (specific) public has to address – what I have called ‘the public issue’ – is constituted by the perception of the consequences of a certain set of actions or facts – in the case under consideration, of the possible consequences of a scientific fact (broadly conceived) on the public itself. So, it is clear that the process of meaning clarification that constitutes the life of a public is a process of mutual construction of subjectivity (as a function of problem-solving) and objectivity (as the conceptual content of the problem at stake, which establishes the conditions that have to be met by the action). Through the growing understanding of the possible reactions of its members to the transformations brought about by scientific facts, the public structures itself as well as its object. Reflexivity is not only an ontological category; it also provides a proof of the autonomy of the public as a cognitive agent. This is because it seems plausible to argue that nobody knows better than the agent what she is going to do. So, if this corollary of the idea of autonomy is correct, we can conclude that nobody knows – which means, no group of experts can know – better than the public how the latter will act to solve the public issue that calls for its existence. And since the actions and plans of the public are what define the public issue at stake, the autonomy of the public as a cognitive agent is grounded in its very ontology. The public issue in its whole complexity is accessible only to the public that constitutes it. There is another argument that supports – even though in a slightly different manner – the idea of the cognitive autonomy of the public. While the argument from reflexivity points to a somehow constitutive condition of possibility of knowledge (and reality) – and in this sense it is an ontological argument, ontology being conceived here in a pragmatist way – this second argument is an argument from prudence. It is therefore less compelling than the first one, even though I think that it lends some confirmation to that idea, especially if one is willing to endorse a fallibilistic account of reason which acknowledges its limits as its distinctive character.

12. This is related to the idea of transaction as defined by Dewey and Bentley (1949). For an interesting analysis of the application of that idea to philosophy of science, see Barrotta (2016).




This second argument is related to the principle of radical uncertainty, which reminds us that the complexity of a technical decision is so high that it is not possible to predict with certainty which consequences will follow from a certain action. This argument is directed against a possible objection that can be raised by the defenders of a strong scientism. According to this view, science can provide, at least in principle, adequate knowledge of all the forces at play in the situation, including the intentions of the actors that compose the public. If this were true, the cognitive autonomy of the public would be strongly challenged: indeed, it would be possible to restate the identity of the subject matter of science and public deliberation, and consequently the primacy of the former over the latter. I do not think that that objection can be easily addressed; on the contrary, I do not think it can be conclusively defused. It is always possible, against all possible counterarguments, to stick to the idea that science has the theoretical resources to provide a complete representation of reality: at the end of the day, that objection is an ‘in principle’ objection that cannot be falsified by single, specific counterexamples. This objection is rather complex. There are at least two different aspects involved in the argument: on the one hand, the fact that it is not possible to forecast all the consequences of a possible action because of the indefinite richness of the effects that it may bring about; on the other hand, the fact – related to the principle of reflexivity – that the members of the public are factors that influence the course of the action. Let us start with the second point. In this form, the objection amounts to saying that the social sciences can offer a faithful and satisfactory account of the intentions of the actors. In my view, this objection relies on a troublesome assumption; that is, that the intentions of the agents that constitute the public are there and remain unchanged and unmodified from the beginning to the end of the whole articulative process of deliberation. That assumption is therefore grounded in the adoption of a mentalistic vocabulary, according to which the most accurate description of action is in terms of mental states. It is that vocabulary that pragmatists suggest we call into question: 13 pragmatists work from a different theoretical perspective, assuming (a) that temporality is the distinctive dimension of experience, (b) that time affects and changes meaning, so that things undergo a constant process of transformation and redefinition, and (c) that mental states are, at best, useful abstractions from a more complex situation which encompasses both the subject and the object. From a pragmatist point of view, mental states are not there from the very beginning since there are no mental states at all. From a descriptive point of view, what we have is an organism – or a group of organisms – which interacts with an environment that is laden with meanings, meanings cutting across the distinction between mental and physical.

13. Things are more complicated than that. Even within the pragmatist tradition there are approaches that are more mentalistically oriented. For a lucid analysis of this point, see Levi (2010).


I won’t elaborate on this line of reasoning since I think it is too demanding. It asks the objector to accept something which is theoretically very costly, and which she is probably not ready to accept. I will rather adopt a more cautious strategy centered around the idea of the opacity of intentions. I argue that, since intentions are presumably not stable throughout the whole process, but undergo a process of clarification, it is wiser and more prudent to ask people to say what they think about a situation, and what they would do if something happened. Again, this conclusion can be read as following from the principle of autonomy. However, I think it is possible to add something more to this point. Since meanings are not fixed, but are produced in the articulative process of public deliberation, it is likely that, through participation in that process, the public may generate new meanings – and, consequently, new factors influencing the action – that would not be present if the process of meaning-clarification were completely delegated to science. In any case, this is an empirical thesis that needs factual confirmation: it is advanced here as a theoretical suggestion. As to the other part of the objection, regarding the objective uncertainty of the consequences of a certain act, I think that the principle of prudence may be usefully brought to bear on this issue too. The rationale behind this proposal is that, since it is constitutively impossible to take into account all the possible consequences of action, it is better to give all the members of the public the opportunity to share their knowledge. Local knowledges should be allowed to take part in the articulative process of public deliberation. Again, participation in that process may foster the production of new (and unpredictable) knowledge that could bring to light some previously neglected aspect of the whole situation. In both cases, the cognitive autonomy of the public is reaffirmed and strengthened by focusing on the creativity of the process of meaning-formation.

3. The autonomy of the public and the role of experts

In the previous section I have outlined what I have called a pragmatist ontology of the public. The goal of that section was to argue for the cognitive autonomy of the public as a function of reorganization and reconstruction of problematic situations, called public issues. However, no attempt has been made to clarify the internal composition of the public. In particular, the question about the role of experts in this scenario has not been addressed so far. This section will be devoted to tackling precisely this issue, starting from a terminological clarification. First of all, it is important to avoid a possible misunderstanding. Contrary to Dewey, by ‘public’ I do not mean to refer to the group of citizens that are affected by the indirect consequences of a certain action.




Clearly, that group is part of the public, but it does not coincide with it. Experts too are part of it. In the account of the relation between science and democracy that I propose, the public encompasses citizens and experts, lay and expert knowledge, the democratic process of legitimation of the ends to pursue and the process of knowledge production which provides the criteria to assess the validity of those ends, as well as the means to achieve them. This is not only because, in the case of technical decision, scientific knowledge is necessary to clarify and make the course of action intelligent. This remark is trivially true, unless one is willing to reject completely the very idea of expertise; consequently, it is useless for our purposes. Something more should be said, therefore, to prove the validity of that approach. To do that we should take the idea of the cognitive autonomy of the public seriously. As has been remarked above, the autonomy of the public is grounded on the fact that public issues are constitutively different from – and more complex than – scientific issues. Their difference also consists in the fact that public issues concern specific and individual situations; when a public is asked to deliberate, it deliberates on a specific issue, temporally and spatially located. It does not aim to discover universal laws or general regularities of natural and social events. This means that the public cannot perform that act of idealization which lies at the basis of scientific objectivity: things are to be taken in their concrete reality, with no possibility of escaping their messy complexity. This is another way to express the idea that the public is concerned with the indirect consequences of a scientific fact: it is those consequences that define the singularity of a technical decision. Stated in other terms, it is those consequences that make it possible for a scientific fact to pass from the realm of abstraction and idealization to the dimension of concreteness. The public is a process of articulation of meanings. What is important to remark is that the process of meaning articulation does not leave the ‘original meanings’ untouched. Scientific facts are transformed – more prudently, may be transformed – by taking part in this process. Contexts are not neutral to the life of meaning: what is true in a laboratory context may reveal itself to be inadequate when doing research in the wild, calling for further refinement. Indirect consequences retroact on direct consequences. Such a process of retroaction – strangely neglected by Dewey in The Public and Its Problems – clarifies why experts are a constitutive part of the public. The process of articulation that scientific facts undergo when used in a technical decision is something which may affect them, thus setting the stage for further scientific research. 14 Accordingly, scientists could legitimately say “res nostra agitur” when scientific knowledge comes to be applied in public deliberation.

14. For an interesting analysis of the productive relation that holds between science and society, see Barrotta (2016).


In the light of what we have just said, we should thus distinguish between scientists and experts as far as their function in the process of inquiry is concerned. Scientists turn into experts when they take part in public deliberation. As scientists, experts contribute to that articulative process of meaning clarification by providing a description of some of the fundamental traits of the public issue. However, they do not provide a complete description of those traits: the gap between universality and individuality warrants the creativity of the public. From an idealized perspective, scientific knowledge puts constraints on public deliberation, but those constraints are open to constant redefinition. It is the public taken as a whole, as an articulative process, that is the ultimate source of cognitive legitimation, expertise and democracy being its internal components. Any attempt to acknowledge a source of legitimation that is given outside and independently of the process of the public is a denial of the autonomy of the public. In conclusion, I would like to add a few words about the kind of solution to the conflict between democracy and science that has been advanced in the present essay. It is worth noting that mine does not purport to be an overall and definitive account of the relation between science and democracy. It may well be that many other features of their relation are not grasped by this account. My goal was simply to call attention to the possibility of defusing their tension by developing a pragmatist conception of expertise, grounded on a functionalist theory of the public. In doing so, I argue, it is possible to recompose the conflict between democracy and science on a local basis, in a piecemeal way.

References

Barrotta, P. (2016). Scienza e democrazia. Verità, fatti e valori in una prospettiva pragmatista. Roma: Carocci.
Brown, M. (2009). Science in Democracy. Expertise, Institutions, and Representation. Cambridge: MIT Press.  doi: 10.7551/mitpress/9780262013246.001.0001
Collins, H. M. & Evans, R. (2002). The Third Wave of Science Studies: Studies of Expertise and Experience. Social Studies of Science, 32, 2, 235–296.  doi: 10.1177/0306312702032002003
Dewey, J. (1927). The Public and Its Problems. In J. A. Boydston (Ed.), Later Works of John Dewey, Vol. 2, 1925–27 (pp. 235–372). Carbondale: Southern Illinois University Press, 2008.
Dewey, J. & Bentley, A. F. (1949). Knowing and the Known. In J. A. Boydston (Ed.), Later Works of John Dewey, Vol. 16, 1949–52 (pp. 1–279). Carbondale: Southern Illinois University Press, 2008.
Epstein, S. (1995). Impure Science. AIDS, Activism, and the Politics of Knowledge. Berkeley: University of California Press.
Frega, R. (2017). The Normativity of Democracy. European Journal of Political Theory, forthcoming.
Gronda, R. (2017). Mezzi della ragione: pragmatismo e tecnologia. In M. Negrotti (Ed.), Uomini e macchine (pp. 73–90). Roma: Armando.




Gronda, R. (2015a). What Does China Mean for Pragmatism? A Philosophical Interpretation of Dewey’s Sojourn in China (1919–1921). European Journal of Pragmatism and American Philosophy, Vol. 7, No. 2, 45–70.
Gronda, R. (2015b). Normativity and Objectivity: The Semantic Nature of Objects and the Potentiality of Nature. European Journal of Pragmatism and American Philosophy, Vol. 7, No. 1, 115–129.
Latour, B. & Woolgar, S. (1979). Laboratory Life: The Construction of Scientific Facts. Beverly Hills: SAGE Publications.
Levi, I. (2010). Dewey’s Logic of Inquiry. In M. Cochran (Ed.), The Cambridge Companion to Dewey (pp. 80–100). Cambridge: Cambridge University Press.  doi: 10.1017/CCOL9780521874564.005
Lippmann, W. (1922). Public Opinion. New York: Harcourt, Brace and Company.
Lippmann, W. (1925). The Phantom Public. Piscataway: Transaction Publishers.
Misak, C. (2004). C. S. Peirce on Vital Matters. In C. Misak (Ed.), The Cambridge Companion to Peirce (pp. 150–174). Cambridge: Cambridge University Press.  doi: 10.1017/CCOL0521570069.006
Peirce, C. S. (1992). The Cambridge Conferences Lectures of 1898. In K. L. Ketner (Ed.), Reasoning and the Logic of Things (pp. 105–268). Cambridge: Harvard University Press.
Wynne, B. (1996). May the Sheep Safely Graze? A Reflexive View of the Expert-Lay Knowledge Divide. In S. Lash, B. Szerszynski & B. Wynne (Eds.), Risk, Environment & Modernity. Towards a New Ecology (pp. 44–83). London: SAGE Publications.

Chapter 4

The expert you are (not) Citizens, experts and the limits of science communication Selene Arfini and Tommaso Bertolotti

University of Chieti and Pescara / University of Pavia

Considering any democratic government, it goes without saying that the more knowledgeable the citizens are, the better the democratic process will work. Therefore, leveraging scientific information among laypeople is intuitively linked to the growth of an educated population; some factors, though, taint this positivist account: amateurization as an explicit stance on the one hand, and “edutainment” matched with the ever-growing complexity of scientific matters on the other. In this paper we argue that while encouraging the diffusion of a general “love for science” should inspire an appetite for more robust scientific knowledge, it can also foster the emergence of problematic cognitive situations, such as the propagation of so-called epistemic bubbles or the progressive belittlement of the role of experts in society.

Keywords: science communication, democratic culture, amateurization, expert epistemology, epistemology of ignorance, on-line communities, cognitive niches, affordance, epistemic bubble, black box arguments

Introduction

With respect to any democratic government, it goes without saying that the more knowledgeable the citizens are, the better the democratic process will work: whether through direct participation (for instance in referendums) or through representation (e.g. elections), citizens should be enabled to vote for the most sensible policies (still according to their political views). Leveraging scientific information among laypeople is intuitively linked to the likelihood of one voting for the best, just as sheer ignorance is responsible for poor political choices. Knowledge leverages the ownership of our own destiny.



Some factors, though, taint this positivist account: amateurization as an explicit stance on the one hand, and “edutainment” matched with the ever-growing complexity of scientific matters on the other. Web 2.0 fostered the emergence of what Keen (2007) defined as the “cult of the amateur”, a positive obsession for user-generated material as opposed to establishment-generated material. This led to a systematic mistrust of the traditional expert in favor of self-appointed or crowd-appointed ones. Social networking websites increased the ranks of users producing and reproducing content, and so the diffusion of pseudo-scientific or incomplete information that is often confused with the bona fide sharing of scientific results. Also, the ever-more complex nature of the scientific endeavor and its results turns scientific information into ever more extreme forms of “edutainment”, sometimes striving to show exciting tidbits that can be enjoyed within the attention span of casual web-surfing, as in the case of the famous blog “I fucking love science”. We argue that while the encouraging aim of this attitude is to grow an appetite for more robust scientific knowledge (also at a layperson’s level), the risk is to trigger the opposite result and foster the emergence of epistemic bubbles and ignorance bubbles: situations in which an agent is unable to tell the difference between what she knows and what she does not know. Thus, instead of (in)forming better citizens, edutainment might fall short of its aim and produce citizens who think they are experts in science, medicine, ecology, economics and so on: they reject the advice and indications of recognized experts and can be manipulated by politicians who, as in La Fontaine’s fable of the Crow and the Fox, praise them as knowledgeable repositories of the “truth”.

1. The golden age of electoral democracies?

Most nations known as “Western liberal democracies” are electoral democracies. This means that the democratic form is indirect, as opposed to the direct model of democracy (for instance the historical Athenian model): in a direct democracy, citizens hold the power to the extent that they have their say on everything concerning the life of the state. It is so, and it must be so. In an indirect democracy, citizens perform their democratic right/duty by electing the lawmakers (and, in certain frameworks, the heads of state as well): in turn, the democratic process in its direct sense involves the elected lawmakers. In electoral democracies, once the will of the people has been expressed in the elections, decision making rests in the hands of the elected representatives. By “decision making” we do not merely refer to the executive branch in Montesquieu’s famous separation of powers, but in general to the process of deciding on the life of the country, which is instantiated by the legislative and judicial branches as well.




Elected members of the government can be seen as epistemic representatives, since – together with their mandate – they are supposed to get informed, and be knowledgeable in the relevant areas, on behalf of their citizens. Of course, representatives are human beings and cannot be expected to be knowledgeable about everything, and that is why they typically rely on advisors, permanent domain-competent bodies, state-funded research and, occasionally, ad hoc panels. Some forms of direct democratic involvement still remain in electoral democracies, for instance referendums. On certain matters that are perceived to transcend the mandate of the elected government and legislators, citizens may be asked to express their opinion by answering a precise question, usually in the form of a YES/NO answer. Depending on the juridical framework, referendums may or may not be binding for the government of the nation. The increased volume of science communication circulated in the mass media, and the ensuing increase in scientific literacy, seem extremely beneficial to electoral democracies because they resonate with Condorcet’s famous Jury Theorem, often assumed as a mathematical foundation of democracy. The theorem famously demonstrates that if each voter’s probability of reaching the correct decision is greater than 50%, the probability that the group as a whole, deciding by majority, reaches the correct decision increases with the number of participating voters (the sketch at the end of this section illustrates the effect numerically). If we loosen up Condorcet’s framing, allowing for situations where there is not a right and a wrong outcome at stake (for instance, political elections do not have a wrong and a right candidate to vote for, in the absolute sense), the increase of science communication should in any case raise the probability that a voter makes the “right” decision. This is true as far as political elections are concerned, and all the more true when referendums are called for: a more scientifically literate electorate should be expected to make better and better choices at referendums, elect better politicians who would in turn raise the standards of policies and new laws, and so on. But is it really so? Are we really living in the golden age of electoral democracies thanks to the impact of science communication? Without engaging in political judgements, the results of the BREXIT referendum confounded the expectations (and the recommendations) of most competent scientists and intellectuals; furthermore, the election of Donald Trump as the 45th President of the United States, after a campaign that witnessed an unprecedented role of information circulated via social media, raises epistemological doubts about many aspects of his presidential program, and has boosted the popularity of the expression “post-truth” to indicate the state of current politics. Given these premises, in the rest of the paper we will attempt to answer the following question: assuming that the information is accurate, does hyper-exposure to scientific information make one scientifically literate? In order to do so, we will take a look at the quality of actual science communication, and then at how scientific information, diffused in the current fashion, can be absorbed.
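
To make the aggregation effect behind the Jury Theorem concrete, here is a minimal Python sketch (ours, offered for illustration only and not part of the chapter’s apparatus): it computes the probability that a simple majority of n independent voters is correct when each voter is individually correct with probability p. The value p = 0.55 and the group sizes are arbitrary.

    # Minimal illustration of Condorcet's Jury Theorem (illustrative sketch only):
    # if each voter is independently correct with probability p > 0.5, the
    # probability that a simple majority of n voters is correct grows with n.
    from math import comb

    def majority_correct(n: int, p: float) -> float:
        """Probability that more than half of n independent voters are correct."""
        # Sum the binomial probabilities of k correct votes for every k > n/2.
        return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 11, 101, 1001):
        print(n, round(majority_correct(n, 0.55), 3))
    # With p = 0.55 the majority is correct roughly 55%, 63%, 84% and 99.9% of
    # the time as n grows: the aggregation effect appealed to above.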


2. The asymmetry of science communication

Intuitively, science communication to laypeople should represent the bridge between the complex, esoteric, and domain-specific world of scientists and the larger, general, and civically interested public. This definition usually comes with a sketch of an ideal-society scenario, in which scientists have the time, energy, and resources to communicate the outcomes of their work to non-specialists in the most intuitive and straightforward way, without disregarding methodological explanations, trial-and-error results, and timelines. In the same ideal world, the lay public stands for a diverse group of people with a medium-high level of education, who care for the welfare of the community over and above their self-interest. Notwithstanding the well-grounded reasons to hope for the upcoming realization of this scenario, in the actual world the terms “science communication”, “scientific community”, and “lay public” have very different meanings. First of all, because communication between scientists and the lay public is rarely direct, and is moreover rather overlooked or harshly critiqued by the scientific and academic community. It is hardly new that, today, hard-core science is a monologue given by and transmitted to a very specific audience. The main aim of scientists is to achieve significant results, do ground-breaking research and publish it in the top journals of their fields. These publications are valued only if they are shared, verified and approved by specialists in the same discipline, and are rarely meant for or addressed to other readers. Very few important articles, published in influential journals, are transmitted to the public with their original contents preserved. This happens for two main reasons. The first depends on the academic publication system, which is too expensive for non-academics, imposes a long delay between the submission of an article (the end of a scientific project) and its publication, and requires an academically specialized audience to examine and understand the products of the research. The second, consequently, depends on what Christie Wilcox (2012, p. 87) well describes as “jargon walls – the barriers that keep the people we want to become more scientifically literate from understanding what we do because they do not know the terminology”. A scientific work may be understood by the public only when the technical process that led to the results is widely comprehensible; unfortunately, the terminology used in a specific field of research is seldom even vaguely accessible to a generally educated public. Scientific sectors are now so specialized that, even within the same field of research, two teams of scientists may ordinarily use different definitions for the same object.




Moreover, even if the improvement and extension of science communication to a vaster audience would benefit the citizens of any democratic state, learning activities beyond the years of compulsory education depend on individual choices and lifestyles. Thus, in order to reach as many people as possible, science journalists apply certain cognitive principles to make science communication appealing to a miscellaneous audience: they distribute data in a fast way (short articles, brief videos, etc.), use easy or self-explanatory terms and concepts, and highlight the potential applications of scientific products and research to make them interesting for most people. These features describe the cognitive requirements of science communication, but they do not require it to meet basic epistemic prerequisites, such as accuracy and comprehensiveness. If science communication can be described, in the words of Burns et al. (2003, p. 183), as “the use of appropriate skills, media, activities, and dialogue to produce one or more of the following personal responses to science (the AEIOU vowel analogy) Awareness, Enjoyment, Interest, Opinion-forming, and Understanding”, then the balance between cognitive requirements and epistemic demands should represent the main target for science communicators. On the contrary, it is easy to imagine that by satisfying the communicative requirements and disregarding the epistemic features, public media could answer the demands of a general public more effectively. Thus, with these far-from-trivial constraints at hand, what is currently transmitted to the public by the media?

3. Scientific facts as black box arguments

According to Miller (2010), civic scientific literacy is a concept that aims to capture the level of understanding of science and technology that citizens need in order to act freely and responsibly in a modern industrial society. It defines a threshold level rather than an ideal level of understanding, and it is seen as the result of a continuous updating of citizens’ education within and beyond the standard educational channels. To make sure that the lay public achieves this level, the mass media are the most convincing tool for communicating scientific and technological advancements to the lay public. But since civic scientific literacy is a goal constantly in progress, and the media have to consider as potential targets people of different generations and environments (both college freshmen who have recently studied the properties of neutrino particles and men over 65 who once nearly finished high school), they need to apply cognitive strategies that can appeal to a broad and diverse audience rather than a narrow and specialized one: using, for example, the so-called black box arguments analyzed by Jackson. Specifically, she defines a black box argument:


[…] a metaphor for modular components of argumentative discussion that are, within a particular discussion, not open to expansion. […] A black box argument is very like any other appeal to authority, and what might be said about any particular form of black box will turn out to be a particularized version of what might be said about evaluating arguments based on authority. In another way of looking at black box arguments, they are a constantly evolving technology for coming to conclusions and making these conclusions broadly acceptable. Black boxes are to argumentation what material inventions are to engineering and related sciences. They are anchored in and constrained by fundamental natural processes, but they are also new things that require theoretical explication and practical assessment.  (Jackson, 2008, p. 437)

Thus, by using black box arguments instead of complete explanations, public media offer the lay public what the latter is looking for: mere information wrapped in an authoritative fashion. Black box arguments embed the function of informing without fully explaining, and are able to deliver the same amount of data to academics and retired carpenters alike. Thus, the oversimplified narratives that are used by mass media to cover some technological and scientific advancement are appreciated because they help update the beliefs of the public quickly and clearly every time they switch on the TV or read a newspaper. This holds even if they do not explain the complicated process that led scientists to some conclusion. Therefore, the easy adoption of black box arguments in mass media is a double-edged sword that can be described in two points:

a. it increases knowledge of some sophisticated scientific and technological topics in a vast population;
b. it increases the sense of knowing of the public, making some information seem affordable to common people, as it displays some issues in ordinary terms and without extensively displaying the complicated generation and defense of some theories.

Point (a) implies that, in certain curated environments (such as national news, accredited newspapers, and scientifically oriented journals), committed science journalists will provide the broadest public with accurate (even if not open to expansion) data that depend on the reliability of the expert figure. Moreover, while the transmission of data is limited by the cognitive constraints of good communication (it has to be as fast, easy, and interesting as possible), in curated environments science journalists are considered accountable for the information they display, and so they are also responsible for the level of accuracy and comprehensiveness of the data.




Black box arguments employed by these specific figures who manage the mass media are just rhetorical tools that permit a balance between the cognitive affordability and the epistemic features of scientific information for a vast public. The positive features of the employment of black box arguments to discuss scientific issues disappear, instead, when people are no longer accountable for the information they discuss and share. Then the consequences of point (b) emerge without the possibility of controlling their effects. Indeed, in communication environments where there are no epistemic constraints that hold the distributors of information to a certain level of accountability, the leveled relationship between public and scientists (only apparently leveled: black box arguments are still not open to expansion, even if they are discussed in laymen’s terms) makes scientific issues appear debatable, and not only by specialists. In other words, even if black box arguments rarely increase actual comprehension of complicated topics, they raise the participation of laymen in the discussion, belittling the difference between scientific facts and personal opinions. Thus, while science communication in the mass media should aim at promoting commitment to scientific knowledge, especially by engaging the experts in the field, the use of black box arguments in de-authorized environments promotes the emergence of overconfidence in laypeople, together with a fallacious reliance on decontextualized data. De-authorized environments are, for example, on-line communities, which are now the major substitutes for traditional mass media devices in the distribution of scientific information. 1 Indeed, the Internet is one of the most powerful resources of information currently available. Last year, the Pew Research Centre reported that more than two thirds of the American population use social media, the vast majority to get news about politics, science and technology. At the same time, disinformation is also distributed in these networks. In the next section, we will discuss the consequences of the distribution of scientific information as black box arguments in on-line communities, and the cognitive features that make these digital environments dangerous media for science communication.

1. To clearly represent the distribution of information in on-line structures, we should briefly recall a terminological clarification that we introduced in (Arfini et al., 2017). We use the term on-line communities as the most general definition, embracing different types of Internet-based frameworks such as social network websites, newsgroups, forums, blogs, and microblogs. We use this term in order to take a target broad enough to cover different referents, such as social media, digital frameworks, and social networks, without being so general as to include traditional media, such as newspapers and television programs.


4. Discovering information on-line: Produsers, filter bubbles, and self-made experts

It is easy to think about social networking websites and on-line communities by focusing only on their role as social aggregators. Originally, as noted in (Bertolotti et al., 2017), social media were indeed designed as personal spaces for gossiping and sharing personal information, but the amount of news, scientific data and political statements now distributed on their platforms should force even the most skeptical person to consider them common venues for sharing – and consuming and commenting on – external content with one’s (actual and virtually extended) network. Indeed, on-line communities could be powerful instruments for education, but the current circulation of fake or, at best, “oversimplified” scientific reports, political facts, and news on on-line platforms is the main reason to consider social networks actual ignorance spreaders. 2 Indeed, on-line communities distribute misinformation as well as news, and the problem with this double diffusion is the lack of epistemological tools that users have in order to distinguish what is relevant and accurate from what is not. But how did socially oriented tools develop into mechanisms for sharing news and data that can also easily distribute misinformation and hoaxes? First of all, on-line communities do not equally distribute the same amount of information to all their users.




Ranking algorithms work beneath the fabric of these websites in order to deliver specific information to specific types of user, according to their previously manifested choices and preferences. These algorithms increase the personalization of websites, filtering the information that users can actually access. The employment of this kind of software was an answer to the increasing amount of information distributed on-line, and it was initially implemented by the programmers of web search engines, with Google being the first to establish personalized filters for searches. Since the implementation of this feature in 2009, different people have been accessing different contents when googling the same term, depending on the more or less personal information Google has stored (where the users were logging in from, what browser they were using, their browsing history, etc.). In on-line communities, when ranking software was used in order to compact social network feeds into personalized frames, it affected not only the sense of social gathering that these websites promoted, but also the contents that users shared. On Facebook, for instance, the algorithm that now implements the personalization of the default page of the site is EdgeRank, which ranks every interaction on the site. In other words, the more contact you have with a person through Facebook, and the more attention you pay to her profile – chatting with her, commenting on her posts, liking her photos, spending time checking her profile, and so on – the more likely it is that Facebook will show you more of her updates (the toy sketch below illustrates the general logic of this kind of ranking). This tool amplifies the influence of peer opinion on these websites and the sense of being part of an actual community, making it preferable for users to acquire socially filtered news. Moreover, with this implementation, users not only see the updates of all their “friendliest” friends but, given that consuming information that conforms to one’s ideas is easy and pleasurable, they are more and more pushed to see them, rather than information that challenges their opinions and questions their assumptions. In this sense, it created what Eli Pariser (2011) calls a “filter bubble,” which is an extension of the confirmation bias through the means of social networks and on-line communities. The confirmation bias is the tendency to consider and accept only the information that confirms one’s prior beliefs and opinions. Through personalized on-line platforms, this psychological fallacy is reiterated in a web-space constructed for social aggregation but developed into an information-sharing site. On the consequences of this phenomenon, Pariser wrote:

Partisans are more likely to consume news sources that confirm their ideological beliefs. People with more education are more likely to follow political news. Therefore, people with more education can actually become mis-educated. And while this phenomenon has always been true, the filter bubble automates it. In the bubble, the proportion of content that validates what you know goes way up.  (Pariser, 2011, pp. 51–52)
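
EdgeRank’s actual implementation is proprietary; what has been publicly described are its ingredients: an affinity score between users, a weight per type of interaction, and a time decay. The toy Python sketch below is ours and purely illustrative (every name and number in it is invented), but it shows how a scheme of that general shape concentrates a feed on the sources a user already engages with, which is the mechanism behind the filter bubble just described.

    # Toy sketch (not Facebook's code) of an affinity x weight x time-decay feed
    # ranker, the scheme publicly attributed to EdgeRank. All values are invented.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        kind: str          # e.g. "photo", "status", "link"
        age_hours: float

    # Hypothetical per-user affinities built from past likes, comments and clicks.
    affinity = {"close_friend": 0.9, "acquaintance": 0.3, "news_outlet": 0.1}
    # Hypothetical weights: some kinds of content count more than others.
    weight = {"photo": 1.5, "status": 1.0, "link": 0.8}

    def score(post: Post) -> float:
        decay = 1.0 / (1.0 + post.age_hours)       # newer posts score higher
        return affinity.get(post.author, 0.05) * weight[post.kind] * decay

    feed = [Post("close_friend", "photo", 5.0),
            Post("news_outlet", "link", 1.0),
            Post("acquaintance", "status", 2.0)]

    # The feed is reordered so that already-favored sources float to the top,
    # regardless of the epistemic quality of what they share.
    for post in sorted(feed, key=score, reverse=True):
        print(f"{post.author:13s} {score(post):.3f}")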


The distinction between the proportion of what the agent sees because it is validated by many sources, and what she sees because her friends share her opinion, is no longer visible. And the visibility of this distinction is very important for how science is communicated. For example, in “The Panic Virus”, the journalist Seth Mnookin argues that Andrew Wakefield, a British gastroenterologist who alleged that the measles-mumps-rubella vaccine might cause autism, was still very successful in disseminating misleading information on vaccines through social media, where he garnered fame for it, even after losing his medical license (Mnookin, 2011). His fame has been spread by supporters of this argument and, through the mediation of a confirmation-driven network, it produced a sense of validation through emotional coherence for the hypotheses of concerned parents. According to a UNICEF report, the anti-vaccination sentiment is hard to take down, notwithstanding the many scientific studies confirming that there is no connection between inoculations and the occurrence of cases of autism: the networks that spread this information are hardly penetrable by contrary opinions. Moreover, the fact that information navigates through social media and spreads by “homophily” (that is, the drive to like what is similar to us) bears out what Pariser (2011, p. 7) highlights: “With Google personalized for everyone, the query ‘stem cells’ might produce diametrically opposed results for scientists who support stem cell research and activists who oppose it. ‘Proof of climate change’ might turn up different results for an environmental activist and an oil company executive”. Thus, the fact that news consumption (as information receiving and sharing) is increasing on social media platforms such as Facebook and Twitter renders this situation more and more dangerous from an epistemological perspective, primarily because the visibility of information does not depend on its epistemic strength, but on the popularity of a particular argument in a particular community, or on the popularity of the person who shared it. As also pointed out by Oeldorf-Hirsch and Sundar:

The key factor is that news is coming from a trusted personal source: most news links on Facebook (70%) are from friends and family rather than news organizations that individuals follow on the site.  (Oeldorf-Hirsch and Sundar, 2015, p. 240)

Indeed, in a social network, users play the role of what Bruns and Highfield (2012) call “produsers”: they are not simply consumers of news content, nor producers, but exhibit a hybrid role in on-line media networks that permits them to share information created by another source as if it were their own. User-generated elaborations of news and the sharing activities on the network have been shown to boost the “sense of agency” of users, the feeling that the agents have some control over the information they share.




And this belief may not be utterly wrong, since the important thing about content shared in an on-line community is who shared it, not what has been shared. In this sense, the feeling of agency and control over the information shared on on-line media can be experienced as “epistemological power” over that information in that particular community. The emergence of “produsers” also gave birth to the phenomenon of self-proclaimed experts, who are the acclaimed leaders in socially driven networks. People who have shared a number of posts regarding the recent discovery of gravitational waves (posts that contain fancily disguised black box arguments) may believe that they effectively know something more than those who did not. But knowing something and believing that one knows it are two different cognitive states and, while believing is a pleasurable condition, it is also a fallible state not always recognized from the first-person perspective of the agent. This phenomenon is at the core of the shallow understanding bubbles that abound on the net, which derive from the role of produsers that users play and from the black box arguments that are distributed in on-line communities. These conditions generate forms of “epistemic bubbles”.

5. The appeal of ignorance and the epistemic bubble

The connection between the pleasure of believing that one knows something and the incapacity to distinguish this belief from actual knowledge is the core of Woods’ (2005) idea of the epistemic bubble. It describes an agent’s incapacity to distinguish her own ignorance from her knowledge. An epistemic bubble is a phenomenon of epistemic self-deception, by which the agent becomes unaware of the difference between knowing something and believing that she knows the same thing. It derives from the fact that believing oneself to have some knowledge is a pleasurable condition for the individual: it permits her to act according to her beliefs and to relieve the irritation that the lack of some important information may raise. Since relief is experienced when knowledge is acquired, “feeling relieved” is taken as a clue to knowledge acquisition. Posting information on an on-line community indeed provokes a sense of control and agency over it: this may also cause the delusion of having a special epistemic privilege over it, as if one had acquired actual knowledge. This can explain why there is a multiplication of self-proclaimed experts in on-line communities on a variety of topics. To give an example, we can speak again about the diffusion of anti-vaccine sentiments in Europe. One problem that agencies like UNICEF have to face is the diffusion of medically unqualified opinion leaders who guide the anti-vaccine crusades. They often have no college education, but they appear to have been well trained in alternative medicine.


Some are just popular show-business figures, such as Jenny McCarthy, who has presented herself as an educated, “Internet-savvy” mother who aims to defy the medical establishment’s information about vaccinations. Others proclaim themselves “experts” about vaccinations because of their experience as religious authorities, political experts or “well-informed” parents: they especially present vaccinations as religiously problematic or as part of a conspiracy, also because they believe themselves to be well-informed experts on religious matters and conspiracy theories. Often, parents who proclaim themselves experts in the correlation between vaccinations and the onset of autism highlight negative stories that focus on individual cases. These cases, such as religious strictures concerning vaccinations and conspiracy schemes, are black box arguments that delude opinion leaders into believing they have acquired a particular knowledge of a sensitive issue, without any reference to the medical understanding of the practice of vaccination. They are in epistemic bubbles that entrap them in the self-delusion of possessing relevant knowledge about an issue, without actually possessing it. On-line networks offer them the possibility of acting as competent opinion leaders: by asking the network’s opinion, targeting specific people and sharing sensitive information, the users elicit greater involvement with the relevant content from the network and feel as if they are at the center of the movement for vaccination control. Therefore, entrapped in epistemic bubbles, sharing black box arguments and fomenting anti-vaccine sentiments, they instead only diffuse ignorance and misinformation in their on-line networks. Summing up, the diffusion of black box arguments, the epistemic constraints imposed by the filter bubble, and the generation of epistemic bubbles in on-line communities can effectively distribute a variety of misinformation and hoaxes that compromise the epistemic judgement of users and multiply phenomena of ignorance distribution. This obviously also alters the perception of the legitimately entitled “experts” in these digital environments: in this sense, it seems that the major science communication media are slowly turning the public into a confused, self-assured group of self-made experts.

6. Desultory scientific information

Since its massive diffusion, the Internet (and the related technological devices) has been triggering a particular cluster of effects that, as a matter of fact, seem to fall on two sides of the same fence. On the one hand, it has empowered people to access any content, service, or tool on demand, right away, wherever they are – aspects that are generally greeted as positive advancements. On the other hand, it can be correlated with reduced attention spans and a general unwillingness to spend much effort in achieving what one is trying to get.




Bertolotti et al. (2011), analyzing the impact of the Internet on activism, suggested that one of its main shortcomings is the “desultory” nature of Internet activism, that is, mistaking actual engagement (with its costs) for one-off clicks supporting this or that cause. Desultory clicktivism can make people feel profoundly engaged, inasmuch as they sign (frequent) petitions and read Internet posts by political groups, NGOs and other organizations, but again, they do not display the kind of costly commitment implied by traditional forms of activism: for instance, there is not always a satisfactory correlation between the media resonance of a campaign and the funding the campaign actually receives. As we already saw, a 2015 Pew survey revealed that 65% of internet users use social networking websites: 63% of Twitter users and 64% of Facebook users now say each platform serves as a source for news about politics, science and technology. This means that science communication is displayed in the user’s “feed” together with friends’ posts, celebrity gossip, kitten gifs, advertisements and so on and so forth. This entails a twofold constraint. As we saw earlier, on the producers’ side, the news must compete in relevance and entertainment value with a lot of different content: first of all, science news has to be “new.” It also has to be compelling, and in order to achieve this it must bypass the fact that the reader does not necessarily have the background knowledge to appreciate the content in all of its scientific accuracy, nor is he necessarily inclined to acquire it. At this point, it seems possible to further explore the analogy with Internet-based activism: just as the latter elicits people’s desire to have a positive impact on some causes they deem noble (for instance humanitarian or environmental ones), yet without the cost of full-on activist engagement, so science communication relayed on the Internet answers human curiosity about scientific matters without asking, in return, for a commitment to costly scientific literacy. From this perspective, the evaluation of Internet science communication is akin to that of Internet activism: it is clearly not an evil to be demonized; quite the opposite, it is positive to share science news just as it is positive to share information about humanitarian or environmental causes, if only because, at a relatively low cost, such activities spread commitment and knowledge. Things get awkward, though, as far as the recipients of the information are concerned: a social media user who is very keen on click-supporting causes and sharing posts might be unable to tell his own engagement from that of an actual civic worker, or that of an activist in the field (and, for instance, not feel compelled to help a cause financially, feeling that he has already done his share). Similarly, desultory access to science communication can have negative side-effects if he thinks that his opinion, informed by occasional readings, is as authoritative as that of an expert who was trained in the field. Such an evaluation must not be taken as an ivory-tower claim that the world of knowledge is black or white: either you are an expert who knows and can have an opinion, or you are a layperson who should mind your own business and stick to a blissful scientific ignorance.



A 500-word blog post on infectious diseases will give the reader some information, and it would be dangerously foolish to claim that it would be better if she did not read it at all. Still, if she watches a two-hour TV show on the same topic, she will gain a different, probably deeper, knowledge of the matter. If she were to follow a series of YouTube lectures given by an intern in epidemiology, she would gain more knowledge still, as she would if she read one or more non-fiction books about infectious diseases. In some cases, a layperson could easily become an "expert" on the topic that interests her, even if it is not her professional occupation. One just needs to avoid "epistemic populism" and be honest with oneself in accepting the self-evident truth that the level of expertise one gains is a function of what one spends on the matter in terms of time and intellectual resources.

7. Concluding remarks

Nobel prize winner, physicist and science communicator Richard Feynman did not believe in philosophy. More specifically, he did not believe in philosophy of science: he is famously reported to have quipped that philosophy of science is about as useful to scientists as ornithology is to birds. Ornithologists cannot teach birds how to fly, and we can only speculate about the gratitude a bird may feel towards an ornithologist patching its broken wing, but ornithology is dramatically useful for birds, for instance where conservation efforts are at stake. You might say that birds would not need conservation if humans had not put their environment in jeopardy: point taken, but one had better set off from how things are rather than from how they ought to be. In an ideal Atlantis, where people could dedicate themselves freely to science, had all the time and resources needed to share their findings, and could make sure every interested layperson gets things the way they are, the conservationist effort of philosophy of science would be redundant. Considering how things actually are, though, with scientists busy doing science or struggling for funding, media (public or social) trying to build an audience, and laypeople wanting to get it with the least possible effort, science might need help from philosophy of science, at least as far as its diffusion is concerned. The role philosophy of science can claim in this picture is embodied, for instance, by the stance presented in this paper. It is about fostering a minimal understanding of science communication and the trade-off between its cognitive and epistemic constraints, about recognizing the unavoidable presence of black box arguments, and about actually understanding how the media work and how they tamper with our cognitive expectations. It also comes down to applying the tools of critical thinking and




understanding our troublesome relationship with ignorance, and the fallacious ways in which we try to avoid facing it. This is not only good for science, and for respecting the curiosity of human beings by serving them the best epistemic product according to the cognitive price they are willing to pay. It is vital because liberal democracies work on the assumption that citizens have a basic scientific literacy to help them navigate (and vote on) extremely complicated matters. A matching level of real literacy, obtained by any possible means, is therefore a goal to be achieved to make sure that democratic assumptions are not merely wishful thinking.

References

Arfini, S., Bertolotti, T., & Magnani, L. (2017). Online communities as virtual cognitive niches. Synthese, online first. doi: 10.1007/s11229-017-1482-0
Bertolotti, T., Arfini, S., & Magnani, L. (2017). Of cyborgs and brutes: Technology-inherited violence and ignorance. Philosophies, 2(1), 1–14.
Bertolotti, T., Bardone, E., & Magnani, L. (2011). Perverting activism: Cyberactivism and its potential failures in enhancing democratic institutions. International Journal of Technoethics (IJT), 2(2), 14–29. doi: 10.4018/jte.2011040102
Bruns, A. & Highfield, T. (2012). Blogs, Twitter, and breaking news: The produsage of citizen journalism. In R. A. Lind (Ed.), Produsing Theory in a Digital World: The Intersection of Audiences and Production in Contemporary Theory (pp. 15–32). New York: Peter Lang Publishing Inc.
Burns, T. W., O'Connor, D. J., & Stocklmayer, S. M. (2003). Science communication: A contemporary definition. Public Understanding of Science, 12, 183–202. doi: 10.1177/09636625030122004
Jackson, S. (2008). Black box arguments. Argumentation, 22, 437–446. doi: 10.1007/s10503-008-9094-y
Keen, A. (2007). The Cult of the Amateur: How Today's Internet is Killing Our Culture and Assaulting Our Economy. London: Nicholas Brealey Publishing.
Miller, J. D. (2010). Civic scientific literacy: The role of the media in the electronic era. Science and the Media, 40, 44–63.
Mnookin, S. (2011). The Panic Virus. New York: Simon & Schuster.
Oeldorf-Hirsch, A. & Sundar, S. S. (2015). Posting, commenting, and tagging: Effects of sharing news stories on Facebook. Computers in Human Behavior, 44, 240–249. doi: 10.1016/j.chb.2014.11.024
Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. London: Penguin.
Wilcox, C. (2012). It's time to e-volve: Taking responsibility for science communication in a digital age. The Biological Bulletin, 222, 85–87.
Woods, J. (2005). Epistemic bubbles. In S. Artemov, H. Barringer, A. Garcez, L. Lamb, & J. Woods (Eds.), We Will Show Them: Essays in Honour of Dov Gabbay (Volume II) (pp. 1–39). London: College Publications.

Chapter 5

Decisions without scientists?
Two case studies about GM plants and invasive acacia in Hungary

Anna Petschner

Budapest University of Technology and Economics

In my paper I will present two case studies showing that politics partly disregards the scientific standpoint in decision processes. I will examine the reputation of research on genetically modified (GM) plants and its experts in Hungary, as well as the status of invasive acacia, by analyzing the articles on these cases in the most popular daily newspapers and online news portals through rhetorical and content analysis. I also conducted interviews with experts on GM plants to get to know the research in practice. I will show that scientific viewpoints in these cases were mostly treated in an imbalanced manner in the policy-making process, although the opinions of experts should have been given equal weight.

Keywords: scientific policy, science journalism, media, controversies, Hungary, genetically modified plants, invasive acacia, rhetoric, content analysis

1. Introduction

Whether scientists enjoy controversies in scientific policy or not, these conflicts do exist. And the objects of these disputes are not just whether the funding of a piece of research is sufficient or whether scientists have achieved proper results, but, for example, whether science is beneficial at all or whether science has gone too far, to a point where nobody is capable of judging its possibly dubious outcomes. Do we need science or not? Answering this question is one challenge of Science and Technology Studies (STS for short). In this introduction I present some of the most influential studies of this discipline that are relevant for my topic.




1.1 The beginning of STS and the positivist approach

From a historical perspective, STS began with Vannevar Bush's work (1945). His report was inspired by the experience of World War II, especially the Manhattan Project, which enlarged the importance and prestige of science. Research attained an organizational and institutional system that had been unknown before. Light was also thrown on the utility and expense of science, and it became necessary to treat it as an independent institution. Bush's work described the 'linear model', which dominated science and scientific policy until the 1990s. It divides scientific research into two levels. The first is basic research, which dominates the academic sphere and whose purpose is to reveal the truth. The second level is applied research, which uses the results of basic research to solve practical problems. The researchers and the community itself are autonomous entities: they accept the tacit rules and they serve the interests of society. In turn, politics supports science and leaves its autonomy in peace. In this system, the Mertonian norms prevail: the norms of universalism, 'communism', disinterestedness and organized skepticism (Merton, 1973). According to universalism, "truth claims, whatever their source, are to be subjected to pre-established impersonal criteria: consonant with observation and with previously confirmed knowledge" (Merton, 1973, p. 270). 'Communism' claims that "the substantive findings of science are a product of social collaboration and are assigned to the community" (Merton, 1973, p. 273). According to disinterestedness, science has to stay independent from any kind of interest, while organized skepticism says that scientific claims have to be critically scrutinized before being accepted. In this approach, a positivist notion of science emerged. Science is capable of saying how things work in nature; it can reveal the truth about nature, humanity and society, and it is independent from the scientists' personal beliefs. The success of science is ensured by its method; thus, science constantly advances, it accumulates knowledge and it brings prosperity to society.

1.2 Constructivism, 'Mode 2' and boundaries

From the 1950s, a growing number of philosophers and historians began to give up their commitment to the positivist image of science and gradually shifted towards a (social) constructivist approach. The main participants in this change were Karl Popper (1963), Michael Polányi (1958), Norwood Russell Hanson (1958), Thomas Kuhn (1962), Imre Lakatos (1977) and Paul Feyerabend (1975). According to constructivism, scientific claims are necessarily connected to metaphysical beliefs and to the social (religious, cultural, political, etc.) spheres of their age.




It is impossible to elaborate a unique scientific method capable of ensuring the truth of scientific claims regardless of time and other local circumstances. Science is always influenced by changing hypotheses, paradigms, research programs and commitments of the scientific community. On this approach, scientific claims are constructed by scientists and arrived at after debates and compromises. Thus, science itself is not beyond criticism, contrary to the positivist approach, and many social institutions have the right to scrutinize it. Nowadays it is very rare for someone working in STS not to hold a constructivist view, whereas researchers in the natural and technical sciences often still hold a positivist image of science.

Not only the image of science but also scientific practice has changed. Already at the beginning of the 1970s, Jerome Ravetz drew a line between science as a quest for truth and science that resembles an industrial enterprise; he regarded scientific knowledge as a manufactured product (Ravetz, 1971). Michael Gibbons and his colleagues elaborated the notion of 'Mode 2' knowledge production, in contrast with 'Mode 1'. The latter is academic, investigator-initiated and discipline-based knowledge production. In contrast, 'Mode 2' knowledge is a product of problem-solving that is created within the context of application, which comprises the whole environment of the scientific problem, its methodology, outcomes and uses. It breaks with the linear model: there is no basic or applied research; basic knowledge is constructed during problem-solving (Gibbons et al., 1994).

Contract has become an important notion, not only in its social context but also as a legal term. Governments, industrial companies and other actors finance scientific research, and in turn science constructs knowledge as a product. Because of this, politics can treat science in the same legal and institutional manner as other spheres of governance, a process which terminates the uncriticizable position of science (Guston, 2000a, 2000b). This approach involves the so-called principal-agent theory, which describes principals in politics interacting with agents in science, contracting with them to solve a scientific problem. This relation generates tension between the two participants; for example, there is no guarantee that the principals employ the best agent for a task. The thousands of links between science and politics have to be managed by 'boundary organizations' (Gieryn, 1995), whose central goal is to generate and maintain mutually beneficial and meaningful relations between knowledge users and producers.

'Mode 2' knowledge and the use of contracts in science infringe on its independence, autonomy, impartiality and the Mertonian norms (Ziman, 1996). This new image of science is about the renegotiation of boundaries and finding an optimal balance between science and governance. The balance is essential, because if science moved too far from politics, it would become isolated, an 'ivory tower' problem would emerge and politics would be incapable of using its benefits.


But if science moved too close to politics, it would become unreliable and corrupt, and society would not trust a corrupt science. Thus, science and politics have to respect their mutual boundaries. But sometimes politics has come too close to science, controlling its quasi-independence, harming scientists' rights and making overly rigid laws. A good example of this situation is the case of genetically modified plants and invasive acacia in Hungary.

In Section 2 I examine the communication and reputation of research on GM plants. This paper does not commit to a moral evaluation or judgment of scientific innovations. Its aim is to present the debate and the media reaction to the current status of genetically modified plants in Hungary. In Section 3 I describe the case of invasive acacia. To summarize, I found that policymakers limited the freedom of GM plant research and did not take the researchers' opinion into account in the case of acacia, although The Fundamental Law of Hungary, which is the foundation of Hungarian democracy, should have obliged them to do so. This is disadvantageous not just for science and the media, but also for politics, because it cannot use the benefits of this research.

2. A case for genetically modified plants

2.1 What should we know about GMOs and GM plants?

2.1.1 Genetic modification: spread

Genetically Modified Organisms (GMOs for short), or transgenic organisms, carry artificially generated DNA sequences inserted into the genome of the host cells of a given organism. After the insertion, these cells begin to produce new proteins, or the inserted sequences are capable of influencing the functions of the original proteins. The history of genetic modification is more than 50 years old. The description of the DNA structure by James D. Watson and Francis Crick was a milestone, for which they also received a Nobel Prize in 1962 (Watson & Crick, 1953). Ten years later, Paul Berg and his colleagues successfully combined the DNA of two different organisms, namely the DNA sequence of a bacteriophage (a virus that infects bacteria) and that of an animal virus (SV40), work for which Berg later won another Nobel Prize (Jackson, Symons & Berg, 1972). In 1973, Stanley Cohen and Herbert Boyer built a foreign sequence into the DNA of a host organism (Cohen, Chang & Boyer, 1973). Nine years later, Richard Palmiter and Ralph L. Brinster were the first to succeed in inserting a foreign gene sequence into animal, and later human, cells (Palmiter & Brinster, 1982). Since then, genetics and genomics have not only been among the most progressive areas of medical research but have also been broadly used in other practical (for example industrial) fields.




The application of GM microbes is common not only in the dairy and brewing industries and in crude oil purification, but also in the pharmaceutical industry. In 2014, 1.25 million Americans were living with type 1 diabetes, including 200,000 youths (less than 20 years old) and more than a million adults (CDC National Diabetes Statistics Report, 2014). For them, the treatment of the disease is multiple daily injections of insulin produced by genetically modified bacteria. Although the regulation of the genetic modification of microbes and animals is almost as rigid as that of GM plants, the former have attracted fewer critics than the latter. The cultivation of genetically modified plants is primarily important because it reduces damage in agriculture (damage caused by environmental factors – for example drought – or by pests and weeds). Because of these benefits, the area under GM plant cultivation has grown linearly since 1996, reaching 181.5 million hectares in 2014 (James, 2014).

2.1.2 Regulation of GM plants

Already in 1990, the European Union introduced an obligatory impact assessment to be carried out before the use of a GM plant (European Directive 90/220/EEC). This was later supplemented with public engagement and environmental impact assessment (European Directive 2001/18/EC). In 2004, the EU permitted the cultivation of the MON810 maize variety, produced by Monsanto, whereas the introduction of other varieties needed strict authorization. Thus, the multinational biotechnology companies gained the right to cultivate or sell GM plants in every member state, and member states had to present scientific and economic arguments in order to prohibit the appearance of these plants in their countries. But the European Union – citing insufficient arguments – mostly rejected these proposals. Hence, in 2008, Hungary introduced a moratorium on the cultivation of GM plants in agricultural areas and later, in 2010, together with other countries (Austria, Bulgaria, Cyprus, the Netherlands, Ireland, Latvia, Lithuania, Malta and Slovenia), demanded that each country be allowed to decide this question for itself. In 2011, the Hungarian parliament wrote into The Fundamental Law of Hungary that genetically modified plants had to be excluded from Hungarian cultivation. This is what we call 'zero-tolerance' towards GM plants. The new regulation came into effect on January 1st, 2012:

(1) Everyone shall have the right to physical and mental health. (2) Hungary shall promote the effective application of the right referred to in Paragraph (1) by an agriculture free of genetically modified organisms, by ensuring access to healthy food and drinking water, by organising safety at work and healthcare provision, by supporting sports and regular physical exercise, as well as by ensuring the protection of the environment. (Article XX, The Fundamental Law of Hungary) 1

1. Emphasis added by the author.



These paragraphs generated a heated debate, for the following reason: "It seems from the constitution that genetic modification in agriculture carries some kind of health risk, although it is just a technology that has different effects in different plants. It is unacceptable to pass sentence upon them uniformly." (Index.hu, 2011) This quotation comes from János Györgyey, who was a senior research fellow at the Biological Research Centre in Szeged, Hungary.

2.1.3 Support of GM plants

Because of the heated debate, the strict regulation and the government's antipathy towards GM plants, a question arises: what is the opinion of society? According to the Eurobarometer results, Hungarian society is not so strongly opposed to GM plants that the introduction of 'zero-tolerance' against them would have been justified by public opinion. The 2005 Eurobarometer, based on the answers of 25,000–26,000 people, approximately 1,000 from every EU country, showed that 37% of the Hungarian population supported GM plants. The mean support across all EU member states was 41.28%, a somewhat higher value, but the difference between the two is not large (Gaskell, 2006). In 2010, Eurobarometer put another claim to respondents, namely that GM food is not good for you and your family. In Hungary, 56% of those surveyed agreed with the claim, 31% disagreed and 13% did not answer. In the European Union, the mean was almost the same: 54% agreed, 30% disagreed and 16% did not answer (TNS Opinion, 2010). These results show that attitudes towards GM plants were not so negative that they could explain the parliament's action.

2.1.4 Conflicts about GM plants

The advancement of the recombinant DNA technique continued unbroken from the 1960s, but critics of genetic modification also emerged in this period. The fact that "anybody" could unite different sequences of bacteria or viruses, which could survive in the environment with imponderable consequences, made scientists uncertain. One of the skeptical voices was that of Paul Berg, the Nobel-prize winner for the development of the recombinant DNA technique (see above), who was worried about genetic modification. He wanted to suspend the research until new regulations could guarantee its safety. That was in 1974, and one year later, at the Asilomar Conference on Recombinant DNA, the participants concluded that the research had to be suspended for 16 months (Nyitray, 2013). It is important to see that not just society or the government but the researchers themselves wanted the regulation of genetic modification.




Twenty-three years later, on August 10th, 1998, in the TV show World in Action, a two-and-a-half-minute statement by Árpád Pusztai, who was one of the most acknowledged lectin researchers in the world, upset the whole scientific world. According to his results, feeding young rats with potatoes containing lectin genes stunted the development of the animals' liver, lungs, kidneys and brain. He recommended further research because, in his view, genetically modified plants had not been examined in enough depth. He said, "I find that it's very, very unfair to use our fellow citizens as guinea pigs. We have to find guinea-pigs in the laboratory." Although the leader of the research centre congratulated him on the successful appearance, two days later Pusztai was banished from his workplace, he was not allowed to consult with his colleagues and his research documents were confiscated. A supervisory committee, the Audit Committee, was established, but later the case reached the British parliament, which asked the Royal Society to look into the documents. The Society later released a four-point bulletin about its stand. It held that Pusztai had no right to draw such far-reaching conclusions: there was no significant difference between the examined group and the control group of the kind necessary for reliable results. In his research, only one gene had been modified in one plant and tested on only one species, which was incapable of proving that the same effect would occur in humans. The Society also claimed that in future cases it would be necessary for a researcher facing this kind of contested question to consult first with his or her colleagues, not with the public. But even after this bulletin the controversies remained. Pusztai – after he was allowed to see his documents again – sent the results to other colleagues. In Hungary there was a debate in the journal Biokémia (Biochemistry) involving many researchers (Sajgó, 1999; Dudits, 1999; Darvas, 1999; Venetianer, 1999; Baintner, 1999). In the end, in 2000, Pusztai wrote an article in this journal in which he criticized both sides of the conflict and offered a new test program for GM plants (Pusztai, 2000). To this day, many people think that Pusztai never wanted to be a vanguard fighter against GM plants; he was rather the victim of circumstances.

The participants in this debate are very heterogeneous. One group is the producers, comprising multinational companies, dealers and farmers. The motivations of multinational companies are mostly based not on ideology but on business interests. While some of them are pro-GM plants (for example Monsanto) and others are against them (e.g. SPAR International), they sometimes conflict with each other, not just with political or social organizations. The positions of farmers are also polarized; they too have business interests. The second big group is the regulators, comprising the government and other authorities. The third group is the manipulators: non-governmental organizations, farmers' organizations, the Church, science and the media. We can say that in Hungary one of the most important participants in the debate is the media, in which provocative voices mostly appear on the contra side.


For example, in one of the most popular newspapers, a woman wrote an article with the title The Frankensteins in the larder (Torkos, 2011). She called the scientists who worked with GM plants Frankensteins, but also said that "It's going to be good if we not just prohibit cultivation of genetically modified plants on our fields, but with greater rigor we would obstruct that the Frankensteins' genetically contaminated muck get into our food chain." I think the most important part of this paragraph is the mention of the Frankensteins' muck, which I take to be a good example of the hot-tempered attitude in the GM plants debate. The fourth group of participants is the consumers, comprising not just the customers but also the farmers. For them, the most important aspect is the quality of food.

2.2 Method of the GM plants examination

2.2.1 Cases of GM plants

Since articles, news items and gossip about GM plants appeared in the press almost every month, I narrowed the examination down to two occasions, chosen according to two criteria: I wanted to choose occasions that happened in the recent past, and it was essential that they had some connection to the scientific world. These criteria led to two occasions concerning GM plants. The first received international publicity, so I could examine other countries' communication too; the other was specifically Hungarian.

The first scandal concerned the research of the molecular biologist Gilles-Éric Séralini and his colleagues. In 2012, an article was published in Food and Chemical Toxicology. In a two-year-long study, the research group fed a group of so-called Sprague-Dawley rats with Roundup-tolerant maize, a Monsanto product, whereas another group of rats was given water containing the Roundup herbicide itself. They found that huge tumors developed in the rats, which were presented in frightening pictures and videos. The publication caused a stir, but many criticisms were made of the results. For example, it was claimed that the number of rats was insufficient for demonstrating a significant difference. According to the company supplying the rats, these animals were already disposed to cancer because of their age and were near the end of their lives; 60–80% of them had tumors in their bodies. Some said that Séralini wanted to promote his new book and film about GM plants. Despite the fact that the article was retracted, some conspiracy theories have remained popular among those who thought that the retraction was caused by a lobby (Séralini et al., 2012).

The second example was the scandal of GM maize. As I previously mentioned, in Hungary a so-called 'zero-tolerance' policy exists, which means that it is forbidden to sell or cultivate GM plants. Despite this law, in July–August 2011 an official investigation found GM maize in crops in the fields of South-West Hungary.




The seeds had been sold by Monsanto and Pioneer, the biggest biotechnology companies. Because of the GMO content and the fear of cross-contamination with GMOs, approximately five thousand acres of maize in the so-called 'isolation zone' were destroyed, causing losses to the farmers. The destruction affected many farmers because of the small-plot structure of farming in Hungary. But according to the scientists, this huge amount of damage was unnecessary, because maize is a self-pollinating plant and thus the defined 400-meter 'isolation zone' was unjustified. The scandal continued when, in January 2012, 5% of the GM maize crop was used as pig feed. The government claimed it had been treated with propionic acid and had thus become safe. It is true that this compound is commonly used as a preservative in animal feed because of its fungicidal and bactericidal effects. But the scientific fact is that, apart from destroying microorganisms, it has no other effect, and it is incapable of making GM plants "GM-free".

2.2.2 Media

An important aspect in the choice of media outlets was that both offline and online media should be represented. Thus I examined the 3 most popular newspapers and the 3 most popular news portals. Metropol is the Hungarian version of Metro International, a free daily newspaper. It has existed since September 7th, 1998, and in that period its name was Metro. In 2008, its name changed to the final Metropol. 2 In 2011, 400,000 copies were printed per day, meaning that at the time it was the most popular newspaper in Hungary. 3 The second is Népszabadság, which has existed since 1956 and at that time had 65,000 readers. This newspaper was widely held to oppose the government. 4 The last one is Magyar Nemzet. It has roots going back to 1899; its present-day form is the outcome of a union with another newspaper called Új Magyarország. It had 45,000 readers at the time and was widely held to be closer to the government.

2. In June 2016 Metropol ceased publication; it was replaced by another newspaper, called Lokál, whose owner is a government-friendly person.
3. Just for comparison, approximately 9 million people live in Hungary.
4. Népszabadság ceased publication on October 8, 2016. The termination caused a scandal because the journalists working for the paper were locked out of their office overnight and were not allowed to access their documents. In the first few weeks the explanation was that the editorial office was undergoing an internal transformation because of the change of ownership. Today only its website works, but it is not updated.


Among the online media outlets, origo.hu was the largest. The origin of this news portal goes back to 1997, when the editorial team established the base of the website. In 2015 it became the most popular internet portal in Hungary, with about 600,000 readers. 5 The second biggest news portal was index.hu, which was established as iNteRNeTTo and got its final name on May 17th, 1999. In 2015 it had 500,000 readers per day. The last website is hir24.hu, which began to operate later, but in 2015 its number of readers was approximately 350–400,000.

2.3 Results of the media examination in the case of GM maize

Four main questions were examined in my research. My first question was which words were used in the articles: did they use the expression "genetically modified" or "genetic modification", which is neutral and a correct scientific concept, or did they use the expression "genetically manipulated" or "genetically contaminated", which carries a negative implication? My second question concerned the abbreviation itself. The abbreviation GMO means genetically modified organisms, which can be not only plants but also animals, bacteria and so on. Thus when a journalist writes "GMO plants", this means "genetically modified organism plants". This is wrong because the abbreviation unnecessarily duplicates the noun, and it seems to imply that the journalist authoring the article does not know the topic, since she uses the wrong words. My third question was how many interviewees appeared in the article. This question was important in the sense that journalists should present both sides of the conflict, the pro and the contra arguments. My last question was whether the scientific facts were used correctly in the article.

Examining the offline and online media outlets, I found that in the offline media Metropol and Magyar Nemzet are similarly negative in word usage: approximately 65% of the mentions of GM plants use "genetically manipulated" or "genetically contaminated" (see Figure 1a), while Népszabadság uses the fewest negative words. Among the online media outlets, origo.hu is the most neutral, using a negative word only once, while index.hu and hir24.hu use these terms in 30–40% of mentions (see Figure 1b).

In order to answer my third question in the case of GM maize, I made a distinction between the pro and the contra side. Those who argued against the destruction of the maize were farmers, researchers and Monsanto, the biotechnology company. On the other side, those who argued for destruction were environmental protectionists and politicians. Among the examined offline media outlets, both Magyar Nemzet and Népszabadság presented three participants, although Magyar Nemzet leans towards the pro side, while Népszabadság leans towards the contra side. Metropol was balanced in viewpoints (see Figure 2a). Regarding the online media, none of them achieved a good balance between pros and contras, but origo.hu was the best of them (see Figure 2b).

5. At the end of 2016, origo.hu got a new owner, who is a government-friendly person.




A further result was that the losses suffered by the farmers during the scandal of GM maize were critical. They suffered serious damage because of the destruction, especially because many of them had debts which would have been settled after the harvest. Despite this serious fact, only one newspaper, Népszabadság, paid much attention to the farmers' opinion.

The other examined case was the Séralini affair. Among the offline media outlets, Magyar Nemzet used words with a negative implication in 60% of cases (see Figure 3a). Among the online media outlets, origo.hu and index.hu were neutral in their word usage (see Figure 3b). It can be seen that in both cases Magyar Nemzet used more words with a negative implication than the others.

Figure 1.  Word usage in the offline and online media outlets in the case of GM maize (panel a: offline outlets Metropol, Magyar Nemzet and Népszabadság; panel b: online outlets origo.hu, index.hu and hir24.hu; categories: "genetically modified" vs. "genetically manipulated/genetically contaminated"). Next to the name of the news portals or daily newspapers I show how many articles were examined.

Figure 2.  Participants in the offline and online media outlets in the case of GM maize (panel a: offline outlets Metropol, Magyar Nemzet and Népszabadság; panel b: online outlets origo.hu, index.hu and hir24.hu; opponents of destruction: farmer, researcher, biotechnological company; supporters of destruction: environmental protectionist, politician). Next to the name of the news portals or daily newspapers I show how many articles were examined.


Figure 3.  Word usage in the offline and online media outlets in the case of the Séralini affair (panel a: offline outlets; panel b: online outlets; categories: "genetically modified" vs. "genetically manipulated/genetically contaminated"). Next to the name of the news portals or daily newspapers I show how many articles were examined. In Metropol and on hir24.hu there was no article about the Séralini affair.

Figure 4.  Participants in the offline and online media outlets in the case of the Séralini affair (panel a: offline outlets Metropol, Magyar Nemzet and Népszabadság; panel b: online outlets origo.hu, index.hu and hir24.hu; supporters of Séralini: researcher, environmental protectionist, politician (pro); opponents of Séralini: Monsanto, journal, politician (contra)). Next to the name of the news portals or daily newspapers I show how many articles were examined. In Metropol and on hir24.hu there was no article about the Séralini affair.

As in the case of GM maize, I also made a distinction among the participants of the Séralini affair in order to answer my third question. Those on the pro side were Séralini and his colleagues, environmental protectionists and Hungarian politicians. Those on the contra side were Monsanto, politicians and the editors of the journal which published the article. Among the offline media outlets, Magyar Nemzet has the most interviewees, but it is biased, with most participants coming from the side of Séralini's supporters (see Figure 4a). Among the online media outlets, origo.hu is the most balanced and it also has the most interviewees (see Figure 4b).

My second question was about the abbreviations GM and GMO and their possibly incorrect usage, but I did not find any surprising values: every media outlet (online and offline) uses these terms correctly.
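Although the chapter reports these tallies only in aggregate, the underlying counting step of such a content analysis is simple enough to sketch in code. The snippet below is a purely illustrative reconstruction: the term lists, sample texts and function name are assumptions for the sake of the example and do not reproduce the actual (Hungarian-language) corpus or coding scheme used in this study.

```python
from collections import Counter

# Hypothetical coding scheme: illustrative English term lists, not the
# actual Hungarian vocabulary coded in the chapter.
NEUTRAL = ("genetically modified", "genetic modification")
NEGATIVE = ("genetically manipulated", "genetically contaminated")

def tally(articles):
    """Count neutral vs. negative GM-plant terms across a list of article texts."""
    counts = Counter(neutral=0, negative=0)
    for text in articles:
        low = text.lower()
        counts["neutral"] += sum(low.count(term) for term in NEUTRAL)
        counts["negative"] += sum(low.count(term) for term in NEGATIVE)
    return counts

# Invented snippets, for illustration only (not real newspaper text):
print(tally([
    "The genetically modified maize was destroyed on five thousand acres.",
    "Critics warned that genetically contaminated muck could enter the food chain.",
]))
# -> Counter({'neutral': 1, 'negative': 1})
```

The per-outlet percentages shown in the figures would then follow by normalizing such counts over the total number of mentions in each outlet's articles.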




I found two interesting results from the examination of the representation of scientific facts. Only one newspaper, Népszabadság, mentioned the fact that, because of the 'isolation zone', many hectares of maize were destroyed. The other controversial situation, namely that GM maize crops were used as pig feed after being treated with propionic acid, was mentioned only by origo.hu and Népszabadság.

2.4 Results of interviews

In 2012, during an interview, the deputy director of the Hungarian Research Institute of Agricultural Economics claimed that GM plants are useful for Hungary and for providing food for mankind. After the interview, he was dismissed on the grounds that he had been lobbying for GM plants. The case caused consternation among GM researchers, who thought the decision was unfair and unfounded (Index.hu, 2012). Since then most researchers have been afraid of giving interviews, especially under their own names, and the atmosphere in GM plant research is held to be hostile. But I wanted to supplement my examination with some interviews in order to see the effect of politics not just in the media but also on research in progress. According to the results, the milieu is indeed hostile: the interviewed researchers claimed that it is hard to deal with GM plants in Hungary. It is important to note that the following quotations are not necessarily in accordance with my opinion; I just present the situation through the eyes of the researchers.

My first interviewee works at the Agricultural Institute, Centre for Agricultural Research, Hungarian Academy of Sciences, and his research is selective maize breeding. When I attempted to find out about the attitude in GM plant research, he responded as follows:

I do not suggest discussing this genetic modification topic. Not just because of the politics, but here [i.e., in Hungary] a proper milieu and social circumstances do not exist to discuss this calmly in a normal way. (personal interview, 2014) 6

6. The interviewees demanded anonymity; thus it is not allowed to make the audio records public. For consultation, please send an email to the author ([email protected]).
7. Available online: http://www.pannonbiotech.hu/_upload/editor/book-small_angol-javitott_VEGSO_1_.pdf

My second interviewee was a researcher with fifteen years of experience, from 1995 to 2010. Since 2010 he has been prohibited from making any statement about GM plants; he cannot do research in his discipline and can only teach about genetically modified plants. For an environmental council of ministers, he wrote a book with other scientists entitled Plain facts about GMOs. 7 It is also known


as the Hungarian White Book. The topics of the book included a discussion of GM plants, their history, the technique, and aspects of, for example, ethics and religion. But this book was not presented at the council. Instead, another book was presented, the Hungarian background on views of 1st generation genetically modified plants, 8 the so-called Hungarian Yellow Book. It was a "counter-book": it contains reports of previous councils and it claimed unambiguously that GM plants are dangerous.

Another story in the interview was related to a grant application. The main argument of the contra side was that the government had spent a lot of money on GM plant research and there was no outcome. These seeds were not planted in fields, so nobody knew whether they were useful or dangerous; nobody did research with them and nobody knew about their safety. So my interviewee wrote a grant application, because he wanted to answer this question. After the review, he was informally told that although his was the best grant application they had, the committee would never support him because of the milieu in Hungary (personal interview, 2014). 9

2.5 Conclusions for the case of GM plants

What is the conclusion of the GM plants case? First, the Hungarian media landscape is mixed, but most news portals and newspapers are reliable. In the online media origo.hu was the most reliable and unbiased outlet, whereas in the offline media Népszabadság was the best in this sense. Magyar Nemzet was the least reliable, using many words with negative implications. It is important to know that 'zero-tolerance' towards GM plants was accepted unanimously by the whole parliament (the government and the opposition alike). Because of this unified decision, one might expect every media outlet to be against GM plants. But the outlet most hostile to GM plants was Magyar Nemzet, mentioned above as a government-friendly newspaper, while the other newspapers remained mostly unbiased.

To supplement my examination, I also examined articles in American and British newspapers during the Séralini affair. In the American media I examined the most popular ones: The Washington Post, The Los Angeles Times and The New York Times. The most striking observation was that these American newspapers always tried to present both sides of the conflict. 10

8. Available online: http://bdarvas.hu/download/pdf/GenetE.pdf
9. The interviewee asked for anonymity; thus it is not allowed to make the audio record public. For consultation, please send an email to the author ([email protected]).
10. An example from The New York Times: http://www.nytimes.com/2012/09/20/business/energy-environment/disputed-study-links-modified-corn-to-greater-health-risks.html




In the Séralini affair, they presented the opinion of the researcher but also those of Monsanto and the politicians. Another interesting outcome was that the examined British newspapers, the Daily Mail and The Guardian, published much more biased articles. Although they presented the pro side of GM plants, they often said something negative about the interviewees. For example: ""This study appears to be without scientific merit," said Martina Newell-McGloughlin, director of the International Biotechnology Program at the University of California/Davis, which has close links to Monsanto and other GM companies." (The Guardian Online, 2012) As I see it from the articles, Hungarian journalism is moderate. I do not want to say that American journalism is the best and British journalism the worst, but between these two, Hungarian journalism lies in the middle.

Another outcome is that, according to my interviewees, because of the Hungarian milieu in politics and society it is almost impossible to do GM plant research. This is a problem in its own right, but The Fundamental Law of Hungary also states that Hungary shall ensure the freedom of scientific research:

(1) Hungary shall ensure the freedom of scientific research and artistic creation, the freedom of learning for the acquisition of the highest possible level of knowledge and, within the framework laid down in an Act, the freedom of teaching. (Article X, The Fundamental Law of Hungary) 11

3. Case of invasive acacia

3.1 What should we know about invasive species and acacia?

The development of biodiversity is a consequence of the isolated evolution of organisms as they adapt to the local environment. The arrival of a new species in an ecosystem is not a novelty; it is a natural process. Most foreign species are incapable of surviving in the new environment: they disappear within a few years or they take their place among the native species, 12 becoming an integral part of the flora and fauna. An invasive species is a new species in a given area that suddenly appears in large quantities, upsetting the balance of the local ecosystem and, in serious cases, causing the extinction of native species. The seriousness of this problem can be seen from the fact that every year six new non-native species appear in Europe. After climate change, invasive species are regarded as the second most important cause of the reduction of biodiversity.

11. Emphasis added by the author.
12. A native species is a species which has lived in a given area since the Ice Age.


Approximately 6,000 invasive plant species exist in the European ecosystem; half of them come from other continents, and three-quarters of them ended up there accidentally. In the last 25 years, the number of invasive plant species has tripled. In Hungary, two species of acacia are widespread. White acacia (Robinia pseudoacacia) occupies the greater area as an invasive species. It comes from Mexico and North America, but nowadays it grows on 3.25 million hectares worldwide. It was brought to Hungary in 1700–1720 to establish parks. Its first important planting was around 1750, when 290 hectares of acacia were planted in a Hungarian area for the purpose of afforestation. From 1923 to World War II, almost 37,000 hectares of acacia were planted as part of an afforestation program. During another afforestation project in 1949, this species was used again; thus, in 2003, 22.2% of Hungarian forests consisted of acacia, and according to a current estimate the figure is already 24% (approximately 457,000 hectares). Although it is an invasive species, its importance in forestry and in the economy is significant, because it is easy to plant and raise, it grows fast and it is drought-tolerant. On the other hand, it is very harmful from the viewpoint of environmental protection. It changes the nitrogen content of the soil, thus modifying its environmental conditions. Furthermore, the eradication of acacia is very complicated, because it can grow from its roots and it has long-lived seeds. Despite these negative features, it is an important tree in forestry because it is a good raw wood material and firewood, and it is similarly good for honey production (Csiszár, 2012). Honey production in the European Union amounted to 268,000 tons of honey in 2015, 30,700 tons of which came from Hungary (only Romania and Estonia had a greater share). In Hungary, approximately half of this production came from acacia flowers (European Commission, 2015). Furthermore, acacia ensures a livelihood for more than 100,000 people, almost 1% of the country's population (Csiszár, 2012).

3.2 Conflict of invasive acacia in Hungary

In 2014, the European Union made a proposal against invasive species in order to protect native species. The project emphasized that invasive species can cause serious losses to native biodiversity, that they are capable of spreading disease-carrying microorganisms, and that they can increase economic damage, especially to crops. The purposes of the program are to prevent invasive species from reaching further territory and to reduce the size of the areas already occupied by invasive species. This project would have benefits, especially for economic actors: it would mean lower mitigation expenditure for public administrations and would help small enterprises (for example plant producers and fish farms) to reduce their losses.




The description emphasized the importance of unified action among EU member states against invasive species. Although the initiative did not contain a precise list of these species, the government made great efforts to save the acacia, the biggest honey-producing tree in Hungary. This was hard to explain, since after the EU announcement several countries asked for protection for certain invasive species; for example, the United Kingdom asked for protection for the water lily, because it is a common plant in English gardens. Most media outlets dealt with this case for months, featuring several environmental protectionists, foresters and honey producers. The debates gained new impetus when, in May 2014, acacia and acacia honey became so-called 'Hungarikum', a very prestigious Hungarian status acquired by certain foods, products, etc.

The conflict has two main sources. First, The Fundamental Law of Hungary implies that in scientific questions, as in the case of invasive acacia, a scientific viewpoint has to be applied. Second, a previous law states that only native and bred species can possess the status 'Hungarikum'. According to the opposition, the case has two main causes. According to the first, the government wants to present the situation as another struggle against the European Union, which intends to uproot the acacia and thereby cause serious damage to Hungary. According to the other, by focusing on the acacia problem the government is only trying to cover up more serious problems, for example in education or healthcare. Although the case of invasive acacia generated as large and heated a debate as genetically modified plants did, some critical voices filled with emotion were also heard in the media. On March 1st, 2014, Magyar Nemzet ran an article with the title "Acacia as 'Hungarikum'?!", containing the following sentence: "It is not favored that in this question which has important effects on the lives of future generations, only the current interest of lobby of firewood tradespeople and honey producers be decisive." (Magyar Nemzet, 2014) Maybe this is not as rough a phrasing as what I showed in the case of GM plants, but it is also open to emotional, subjective opinion-making.

3.3 Method of the invasive acacia examination

In this examination I used the same sources as in the case of GM plants. Thus the daily newspapers were Metropol, Magyar Nemzet and Népszabadság, and the news portals were index.hu, origo.hu and hir24.hu. Two of the three questions were also the same, namely whether the relevant terminology was applied correctly, and how many interviewees appeared in the article and whether each side of the conflict was represented (for example ecologists, honey producers or foresters). My last question was whether the article mentioned the conflict between The Fundamental Law of Hungary and the other law.


3.4 Results

My first question concerned the relevant terminology in the articles on invasive acacia. My results show that the newspapers and online news portals used the terminology correctly in this case. In order to answer my second question, I separated the supporters of acacia from its opponents. On the pro side, those who wanted to protect acacia were the government, the foresters and the honey producers. On the contra side were the opposition and the environmental protectionists. Among the offline media outlets, Magyar Nemzet and Népszabadság are the most objective in this sense; furthermore, Magyar Nemzet includes the opinion of foresters (see Figure 5a). Among the online media outlets, both origo.hu and hir24.hu present all sides, mostly without bias (see Figure 5b). My last question was whether the media outlets mentioned the conflict between The Fundamental Law of Hungary, the Hungarikum Law and the decision which declared acacia and acacia honey to be 'Hungarikum'. Only one media outlet, origo.hu, mentioned this conflict. The conclusion of this case study is that during the scandal of invasive acacia the online media outlets, especially origo.hu and hir24.hu, presented the most balanced view of the conflict, and only origo.hu mentioned the legal conflict.

Figure 5.  Participants in the offline and online media outlets in the case of invasive acacia (panel a: offline outlets Metropol, Magyar Nemzet and Népszabadság; panel b: online outlets origo.hu, index.hu and hir24.hu; supporters of invasive acacia: politician (pro), forester, honey producer; opponents of invasive acacia: politician (contra), environmental protectionist). Next to the name of the news portals or daily newspapers I show how many articles were examined.



3.5 Conclusion

At the beginning of this paper it was claimed that an optimal distance between science and politics needs to be maintained. This distance is essential because too much distance can result in an isolated science, while too much closeness can result in an unreliable and corrupt science. But in the cases of GM plants and invasive acacia, politics put pressure on the media, used them for political purposes and in this way also influenced the opinion of citizens, strengthening the biased milieu in Hungarian society. Furthermore, these actions contradicted The Fundamental Law of Hungary and other regulations that should guarantee the freedom of scientific research and the unrestricted opinion-making of researchers. With these decisions the Hungarian government has cut itself off from potential future inventions and research developments that could be beneficial not just for society but also for politics.

Acknowledgments

This work was supported by OTKA 109456 (Hungary).

References

Baintner, K. (1999). A genetikai módosítás és a félremódosított tájékoztatás. [The Genetic Modification and 'Mis-modified' Information.] Biokémia, 23: 64–67.
Bush, V. (1945). Science the Endless Frontier: A Report to the President by the Director of the Office of Scientific Research and Development, July 1945. Washington: United States Government Printing Office.
CDC (2014). National Diabetes Statistics Report.
Cohen, S. N., Chang, A. C., & Boyer, H. W. (1973). Construction of Biologically Functional Bacterial Plasmids In Vitro. Proceedings of the National Academy of Sciences, 70(11), 3240–3244. doi: 10.1073/pnas.70.11.3240
Csiszár, Á. (2012). Inváziós növényfajok Magyarországon. [Invasive Plant Species in Hungary.] Sopron: Nyugat-magyarországi Egyetem Kiadó.
Darvas, B. (1999). Nézőpontok, ha különböznek. [If Viewpoints are Different.] Biokémia, 23(4): 99–102.
Dudits, D. (1999). A géntechnológia szerepvállalása a növénynemesítésben: A Pusztai-botrány üzenete. [The Role of Gene Technology in Plant Breeding: The Message of the Pusztai Scandal.] Biokémia, 23(2): 41–43.
European Commission (2015). Honey Market Presentation. Agriculture and Rural Development. https://ec.europa.eu/agriculture/sites/agriculture/files/honey/presentation-honey-2015_en.pdf
Feyerabend, P. (1975). Against Method. London: New Left Books.
Gaskell, G. (2006). Europeans and Biotechnology in 2005: Patterns and Trends. European Commission's Directorate-General for Research.
Gibbons, M. et al. (1994). The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage Publications.
Gieryn, T. F. (1995). Boundaries of Science. In S. Jasanoff et al. (Eds.), Handbook of Science and Technology Studies (pp. 393–443). London: Sage Publications. doi: 10.4135/9781412990127.n18
Guston, D. (2000a). Between Politics and Science: Assuring the Integrity and Productivity of Research. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511571480
Guston, D. (2000b). Retiring the Social Contract for Science. Issues in Science and Technology, Summer 2000.
Hanson, N. R. (1958). Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science. Cambridge: Cambridge University Press.
Index.hu (2011, September 21). GMO: az alkotmány dönt a tudósok helyett. [GMO: The Constitution Decides instead of Scientists.] Retrieved from http://index.hu/gazdasag/magyar/2011/09/21/gmo/
Index.hu (2012, March 29). GMO-élelmiszerek mellett érvelt, kirúgták. [He Argued in Favor of GM Foods and Was Fired.] Retrieved from http://index.hu/tudomany/2012/03/29/genmodositott_elelmiszerek_mellett_ervelt_kirugtak/
Jackson, D. D., Symons, R. H., & Berg, P. (1972). Biochemical Method for Inserting New Genetic Information into DNA of Simian Virus 40: Circular SV40 DNA Molecules Containing Lambda Phage Genes and the Galactose Operon of Escherichia coli. Proceedings of the National Academy of Sciences of the United States of America, 69(10), 2904–2909. doi: 10.1073/pnas.69.10.2904
James, C. (2014). 2014 ISAAA Report on Global Status of Biotech/GM Crops. International Service for the Acquisition of Agri-biotech Applications (ISAAA). https://www.isaaa.org/resources/publications/briefs/49/pptslides/pdf/B49-Slides-English.pdf
Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago: Chicago University Press.
Lakatos, I. (1977). The Methodology of Scientific Research Programmes: Philosophical Papers, Volume 1. Cambridge: Cambridge University Press.
Magyar Nemzet (2014, March 1). Az akác mint Hungarikum?! [Acacia as 'Hungarikum'?!]
Merton, R. K. (1973). The Normative Structure of Science. In R. K. Merton, The Sociology of Science (N. Storer, Ed.) (pp. 267–279). Chicago: Chicago University Press.
Nyitray, L. (2013). Géntechnológia és fehérjemérnökség. [Gene Technology and Protein Engineering.] Budapest: ELTE TTK Biológiai Intézet.
Palmiter, R. D., & Brinster, R. L. (1982). Dramatic Growth of Mice that Develop from Eggs Microinjected with Metallothionein–Growth Hormone Fusion Genes. Nature, 300, 611–615. doi: 10.1038/300611a0
Polányi, M. (1958). Personal Knowledge. London: Routledge and Kegan Paul.
Popper, K. (1963). Conjectures and Refutations. New York: Harper.
Pusztai, Á. (2000). Gondolatok a génmódosított élelmiszerek kapcsán kialakult vitáról. [Thoughts about the Debate on Genetically Modified Foods.] Biokémia, 26(2): 51–56.
Ravetz, J. M. (1971). Scientific Knowledge and its Social Problems. Oxford: Clarendon Press.
Sajgó, M. (1999). Biotechnológia: a szellem már kívül, de megvan-e még a palack? [Biotechnology: the Genie is Out, But Do We Still Have the Bottle?] Biokémia, 23(2): 38–40.
Séralini, G.-E., Clair, E., Mesnage, R., Gress, S., Defarge, N., Malatesta, M., Hennequin, D., & de Vendômois, J. S. (2012). Long Term Toxicity of a Roundup Herbicide and a Roundup-Tolerant Genetically Modified Maize. Food and Chemical Toxicology, 50(11): 4221–4231. (Retraction published January 2014, Food and Chemical Toxicology, 63: 244.) doi: 10.1016/j.fct.2012.08.005
The Guardian Online (2012, September 28). Study linking GM maize to cancer must be taken seriously by regulators. Retrieved from https://www.theguardian.com/environment/2012/sep/28/study-gm-maize-cancer
TNS Opinion (2010). Eurobarometer 73.1 Biotechnology. Belgium: European Commission.
Torkos, M. (2011, September 15). Frankensteinék a spájzban. [The Frankensteins in the Larder.] Retrieved from https://mno.hu/vezercikk/frankensteinek-a-spajzban-886910
Venetianer, P. (1999). A nézőpontok valóban különböznek. [Viewpoints are Truly Different.] Biokémia, 23(4): 102–104.
Watson, J. D., & Crick, F. H. (1953). A Structure for Deoxyribose Nucleic Acid. Nature, 171, 737–738. doi: 10.1038/171737a0
Ziman, J. (1996). "Postacademic Science": Constructing Knowledge with Networks and Norms. Science Studies, 9(1): 67–80.

Chapter 6

Save the planet, win the election

A paradox of science and democracy, an Israeli perpetuum mobile and Donald Trump

Aviram Sariel

Tel Aviv University

Science and scientific institutions are not necessarily passive players in the power and manipulation games which constitute democratic elections. In this paper, I shall present a seeming paradox, in which a party well aligned with science is vulnerable to an attack which claims access to an alternative and superior science. The argument is demonstrated by the Israeli elections of 1981, won by a party which presented the public with a Perpetuum Mobile two days before the elections. My analysis, indebted to Hans Jonas's model of interregnums of values and norms, seems to justify this maneuver and render it rational, in a sense restricted to election campaigns. The analysis was used to predict the victory of Donald Trump in the US elections of 2016.

Keywords: science and democracy, science and politics, controversies, Donald Trump, interregnum

Introduction

The overall theme of this paper is that science and democratic politics do relate, and that the relation is both politically important and theoretically counter-intuitive. Here, 'importance' is a political measure: I shall argue that this relation decides certain election campaigns. 'Counter-intuitiveness' is an attribute of the theory, as well as of its content: I shall propose that politicians may use preposterous claims, which fly in the face of accepted science, to expose a limitation of established knowledge and to inform a rational decision to vote against a party aligned with science. I start by observing two types of constraints imposed on scientists and politicians, which, together, suggest that scientific speculation provides political room for maneuver, constrained neither by political systems of checks and balances nor by the scientific method of peer review. I then define two ideal types of political parties, one which pledges alliance to science and the establishments of knowledge, and another which operates outside and in many ways against existing science: its success therefore constitutes a refutation of science, or a clear identification of the limits of its claim to knowledge. Leaders of the latter party are therefore in possession of a skeptical meta-view of what knowledge and science are, and may attempt to communicate their worldview to the electorate by ironic challenges, which serve to expose inherent limitations in the alliance with science to which their adversaries are committed. This type of performance is illustrated, I propose, by an Israeli electoral anecdote known as Meridor's Invention: during the Israeli elections of 1981, Jacob Meridor, a senior member of the Likud party, claimed to have developed a thermodynamic cycle with over-unity efficiency, otherwise known as a perpetual motion machine, or Perpetuum Mobile. With this invention, claimed Meridor and the Likud party, they could overcome any energy crisis, for the benefit of the nation and the planet. As shall be seen, the claim of an unlikely scientific surprise did not damage the Likud; the party won the elections not despite presenting this preposterous claim but, among other reasons, because of it. In election periods, when norms are partially yet significantly questioned, such a move serves as a performance that fits the antinomian logic associated, according to Hans Jonas, with periods of normative interregnum. In Jonas's analysis, the antinomian logic is not irrational; rather, it is a rational possibility highlighted by a certain set of human experiences. Radical acceptance of this possibility transforms the meaning of the symbol of trusted knowledge into a symbol of error and vanity. While Jonas considered this situation an important step towards the emergence of innovative religiosity, it also seems relevant to democratic politics in which the alliance with knowledge is strongly asymmetric. As such, the model may justify the presentation of a Perpetuum Mobile two days before election day in the Israeli elections of 1981, or the tactics of Donald Trump throughout the US elections of 2016.

Scientists and democratic politicians

My interest is in democratic politics, where it is science that tallies the votes. In other words, it is elementary mathematics that informs us which party won an election. Science also informs candidates that, in order to win an election and govern a state, such-and-such a percentage of the votes must be won. The prescription is sometimes complex: in the United States, presidential candidates are required to win a majority in the US Electoral College, which is different from winning a simple majority among the electorate. Therefore, science informs the strategies of democratic politicians, and the mode of providing advice can be as complex as statistical inference: it involves means, standard distributions, maximum likelihood estimates, data collection methodologies, and so on. Yet science also holds a promise to advise after the elections, and a party may therefore present the public with its views about science, politics and what constitutes a scientifically qualified program of political action. Naturally, scientists would be called upon to provide testimony that the presentation is reasonable. Therefore, election campaigns may involve not only the specific brands and ideologies chosen by the parties that participate in them, but also internal images of science and politics.

Let us begin with a rather standard view of an ideal type of the self-image of current science. In this self-image, trained experts are engaged in the responsible pursuit of discoveries and innovations, reasoned in ways developed by fellow experts, and subject to exhaustive criticism, also by fellow experts. In this image, or worldview, democratic politics seems to form an opposite pole: the realm of non-expertise, save perhaps for the expertise associated with mobilizing the electorate at the right moment and with the power games in its wake. The first mode of political expertise pertains to a relationship between political experts and political non-experts, while the second pertains to the internal discourse within the group of political experts. Indeed, politicians view themselves as experts in these special fields. However, politicians seem to emphasize the formal aspects of their expertise: coping with rules, laws, norms, interests, counter-interests, checks and balances, a system developed explicitly to keep them on a leash (scientists would tend to agree here, for the formal aspects of science are often called 'politics', as in academic politics). From the politicians' point of view, on the other hand, science may seem to hold a privilege to speculate, propose unlikely outcomes, and innovate on a large scale. That is, from what I take to be a politician's view of science, it is not held in check. More precisely, democracy puts emphasis on making its politicians less political, while attempting to allow its scientists to be more speculative. Within this view, democratic politicians may consider science (and technology) a potential way out of constraining situations. The position is somewhat Kantian: enlightenment, often equated with science, is a way out of the chains in which 'we' have chained ourselves. As such, scientific assertions may provide a valuable resource of speculation which political parties may use in their craft. Thus, parties may include scientific speculation in their campaigning agenda, for both negative and positive messages. However, it does not follow that all parties would use the license in the same way. Furthermore, as shall be argued immediately below, there is a difference between aligning with science and applying the speculative license.


Parties and elections

In general, one party may ally itself with certain pieces of science, such as free-market economics, while the other may team up with social-democratic economics. The first party would cite Milton Friedman; the second would credit Thomas Piketty. Sometimes, however, parties are centered on meta-views regarding science, or science-as-presented. Specifically, one party may be aligned, at least in image, with science per se, while the other is aligned with a certain revolt against scientific discourse, again at least in image and at large, notwithstanding specific topics on which it is decidedly scientific. As such, these parties do not exactly represent voters. Rather, they provide voters with alternative worldviews, from which the electorate may choose. The former party, the one that aligns with science, represents the opinion of the scientific establishment. The latter, the party that contests science, or aspects of science, or the contemporary scientific establishment, presents a view centered on the thoughts and experience of its leadership. In the following, I shall argue that this party necessarily also presents the public with a meta-scientific position. I shall call the former a 'type A' party, or 'A' party, while the latter would be a 'type B' party, or 'B' party. These terms do not refer to different levels of command over an electorate, but to the relationship of the parties with science and with authoritative knowledge establishments in general, i.e., including responsible and truthful media channels. At least in Israel, these distinctions are easy to recognize: the Israeli left is composed, by and large, of 'A' parties, while the right wing leans more towards my designation of 'B' parties.

The Israeli elections of 1981

The Israeli elections of 1981 are often noted for their unprecedented intensity (Lehman-Wilzig, 1983: 191; Bradley, 1985: 108–111; Greilsammer, 1986: 93–94). The contender on the left was the Israeli Labor party, led by Shimon Peres (1923–2016). The Labor party did in fact found the state of Israel, and at the time its leadership was composed of people whose public record was directly relevant to Israel's establishment, survival, and successes (including, of course, Peres himself). Among other points of strength, the left was extremely well established in the academy: the founder of Tel Aviv University, Professor Zvi Yavetz, was fond of saying that the political opinions of his faculty members ran the gamut from the extreme left to the Labor party, but no further (Kleinberg, 2016). The Israeli left had a similar presence in the military (important in Israel), in the state establishment, and in the media. The Labor party was therefore an 'A' party.

The other contender, on the right wing, was the Likud party, led by Menachem Begin (1913–1991). The Likud had been in power for four years, but its coming to power was not considered as conclusive or foundational as it seems today, after forty years of nearly uninterrupted Likud governance. The victory of the Likud in the 1977 elections was considered to be rooted in social factors, and was often attributed to a general climate of disappointment after the unsuccessful war of 1973, as well as to a certain contingent, or political, luck (Bradley, 1985: 73–77). In retrospect, these explanations are often grouped together under a single heading: that a constantly growing part of the Israeli electorate felt underrepresented in the establishment and by the establishment. Below, I shall attempt to elaborate somewhat more on the general state of affairs. However, it should be noted that the sense of alienation from Labor developed over time, for in the first decades after Israel was founded, the Labor party constantly won the elections. Thus, the growth of alienation paralleled the development of the Israeli establishment itself, including a progressive level of academic sophistication. Since the Israeli left occupied the academy to such an overwhelming degree, the Likud – and the Israeli conservative, religious and right-wing parties in general – were anything but proportionally represented there. However, perhaps because of the past domination of Labor, the Likud was also under-represented in the army, the state establishment and the media. This meant that the leadership of the Likud was less conversant in the vocabulary of military tactics developed in Israel's early years. At the time, Israeli tactics were considered revolutionary, and thus the military elite had a justified claim to knowledge, often highly technical.1 A vision of market economy serves as a counter-example, in which the Likud did indeed have ties with a dominant and evolved theory, known today as neo-liberalism. But while socialist, the Labor party had been cooperating with capitalism ever since it was founded (Sternhell, 2009), and its gift for opportunistic pragmatics allowed it to switch sides quickly: Peres was to become a prophet of privately owned enterprises, in line with the dictates of Israel's knowledge elite (Peres, 1999: 89–90, 100–102). Thus, although market-economy ideology was clearly on a rising curve in Israel, as elsewhere, in the early 1980s, the Likud was still a 'B' party.2 Therefore, it may have been natural that it was the Labor party which led the polls. The Likud, however, was anything but defensive. Under Begin's leadership, the Likud ran an intensive, high-volume campaign, focused on identity politics and the politics of religion, and narrowed the gap between itself and Labor (Bradley, 1985: 91–94).

1. Begin was aware of the deficiency, and in his previous government made successful attempts to enlist the services of military leaders with a high public profile: Ariel Sharon, Ezer Weizmann, and Moshe Dayan. However, the 1981 elections were occasioned, among other factors, by the resignation of Weizmann and Dayan from Begin's government.

2. These categories describe the general public identity of a party, as opposed to the actual biographies of its leadership. Although he never practiced law except as a member of parliament, Begin had a university degree in law and classics (oratory and rhetoric), and was the first Israeli Prime Minister to hold a university degree at all. Peres, on the other hand, did not spend a single moment of his immensely fruitful life on academic studies. In 1981 Begin was already a Nobel laureate, awarded for the peace process with Egypt (1978). Peres's Nobel Prize, for his part in developing a peace process with the Palestinian Liberation Organization, was yet in the future (awarded in 1994).

Meridor's invention

On June 28, two days before the elections, the Likud party presented the public with a scientific breakthrough in energy engineering: a thermodynamic process supposedly efficient enough to produce 17,000 kilocalories from an input of 23 kilocalories. The announcement was made by Jacob Meridor (1913–1995), a senior party member who had invested in it privately. He claimed the invention would transform energy production worldwide. To make things particularly clear, he explained – incorrectly – that the invention was equivalent to using a single light bulb to light the entire city of Ramat-Gan, a mid-size city about twice the size of Pisa. And so, the headlines described his invention as 'lighting Ramat-Gan with a single light bulb'. But he also claimed that science as we know it is erroneous to a huge degree, since the production of 17,000 kilocalories out of 23 kilocalories is an energy gain of 73,913%, which necessarily refutes the second law of thermodynamics (a simple check of this arithmetic is given below). I have tried to find an equivalent, yet so far have failed: Meridor's invention is probably the boldest Perpetuum Mobile ever presented. And it worked: the Likud won the elections, by an advantage of a mere 10,000 votes. The bulk of the victory probably traces to long-term factors, and the bulk of the rest is surely due to Begin's intensive and effective campaigning for the Likud. However, some part of the victory – perhaps the crucial addition of necessary votes – is due to the invention.

After the elections, Meridor was appointed Minister of Economics and Inter-Ministry Coordination. But the public retained its interest in the invention, and popular accounts of thermodynamic analysis were for a while a standard element of political discourse. The media went a step further and exposed the invention as a hoax, engendered by the fruitful mind of Daniel Berman, advisor to the Faculty of Arts at Tel Aviv University (Avneri, 1993: 82–84). Berman, it was discovered, had obtained this position assisted by a counterfeit PhD degree in acoustic engineering. The faculty's personnel still remember, and not fondly, the mess he made in equipment procurement.3
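For readers who wish to verify the figure quoted above, a quick arithmetic check is possible; on the assumption that the percentage is read as output relative to input, the claimed yield is

$$\frac{17{,}000\ \text{kcal}}{23\ \text{kcal}} \approx 739.13, \qquad 739.13 \times 100\% \approx 73{,}913\%.$$

Any ratio above 100% would already describe an over-unity machine; a factor of roughly 739 simply makes the claimed violation spectacular.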

3. I am indebted to Liviu Carmely from the Tisch School of Film and Television, the Faculty of Arts in Tel Aviv University.




Berman's machine was researched thoroughly. It was found to be, in fact, a relatively minor yet effective improvement in the performance of existing heat engines, which used ammonia as a working fluid while disregarding the dangers associated with its toxicity (Fruend-Avraham, 2015). In short, heat and energy experts knew all about the thermodynamics of ammonia, and the excellent reasons why it is not used.4 They could also prove, albeit in ways less accessible to the general public and to fellow experts from other disciplines, why the second law of thermodynamics is not challenged by an ammonia heat cycle. Meridor lost his seat in the 1984 elections and never returned to politics: his reputation was damaged beyond repair, and left-wing 'A party' politicians tended to use his claim of a Perpetuum Mobile to mock the Likud's command of technological and scientific theories, and to further the identification of the Likud as a 'B party'. Scientific reason was restored, the planet was not saved, the A/B partitioning remained in power, and so did the Likud.

Surprises are not uncommon in election campaigns. In the United States, where elections are scheduled for November, such surprises are termed 'October surprises' in American political jargon (Parry, 1993). It is often alleged that these form a campaigning tool for incumbent governments, which display diplomatic or technological achievements. Oppositions also use last-minute surprises, often attempting to expose facts embarrassing to the other parties (say, a sex scandal). However, Meridor's Invention seems unique, or at least rare, in its attack on the core of science. Further, since the general tactic of a last-minute 'surprise' is well known (as it is), the invention could have been prepared in advance (as it was). However, let us make two observations: first, that the dark horse, i.e. the B party, the Likud, did win the elections and, with short interruptions, has ruled Israel ever since; second, that Meridor's invention took aim at the second law of thermodynamics itself: the Likud did claim to have invented the most radical of all Perpetuum Mobile machines. Why did these savvy campaigners make such a ridiculous and potentially damaging claim? More theoretically, what kind of consideration allows a B party to design and perform such an act? Can we assume that the very situation of being outside the sphere of established knowledge provides a performative privilege, which consists in its rejection?

4. “[T]he present preliminary study gives a decided advantage to ammonia as a choice for the working fluid in a solar sea power plant. However, it is recognized that the selection of an optimum working fluid will require careful evaluation of many factors such as toxicity, materials compatibility, fire and explosive hazards, and plant maintenance and operational factors. Ammonia has problems in several of these areas… an exhaustive search for other media that may approach the performance of ammonia should be made” (Olsen, Dugger, Shippen and Avery, 1973: abstract).


A symbol of radical interregnum

Let us begin with the condition of possibility for acting especially silly while still being taken seriously. On my proposal, election campaigns actualize a time of ritualized breaking of rules and norms. The 'ritualization' portion means it is not wholly radical, since it does follow a pattern and is enacted only once every few years. However, some potential level of antinomy is clearly present, since the previous governance is questioned. In other words, election campaigns do lend themselves to meta-discourses regarding truth and power. If we attempt a very wide view, election campaigns are somewhat like the medieval quodlibeta, a term derived from 'quodlibet' – "what pleases you", or "as you wish", or "whatever you want". In the thirteenth and fourteenth centuries, the term referred to free public disputes held by reputed universities twice a year, in which a question could be raised by anyone: the parties could dispute essentially any topic, regardless of agreement on method or on a concept of truth. In the quodlibeta, the dispute would be decided by a master who presided and provided the decision on the following day (Leff, 1968: 171–173, 184). In elections, the dispute is occasioned every few years, lasts for months, and then we tally the votes. Unlike carnivals, what happens in the elections does not necessarily stay there, and breaking the norms is judged by the potential success or failure of its effect within the electorate. That is, every utterance of any candidate, of any party, is performative: to an extent, elections are an arena designed to amplify autonomous choices, so that they impact reality.

As such, election periods – the heart of the democratic procedure – are also periods of interregnum. Interregnum, 'between kings', was originally used to denote a time-lag separating the death of one royal sovereign from the enthronement of the successor. These used to be the main occasions on which past generations experienced (and customarily expected) a rupture in the otherwise monotonous continuity of government, law, and social order. Roman law put an official stamp on such an understanding of the term (and its referent) when it accompanied interregnum with the temporary suspension of laws heretofore binding, presumably in anticipation of new and different laws being possibly proclaimed (Bauman, 2012: 49–50; Agamben, 2005: 41–51). As Bauman notes, the term is useful for understanding much of Gramsci's notion of crisis: a period in which "the old is dying and the new cannot be born" (Gramsci, 1971: 276). Thus, interregnum is a time gap between two regimes, or between two different systems of values which govern the state (or two systems of combinations of facts and values, if we reject fact/value distinctions).

Interregnum between periods of different systems of values and concepts of truth informed much of Hans Jonas's work on the existential roots of Gnosis, which in a sense is centered on the internal dynamics of interregnums and their lasting impact. Writing in Weimar Germany, Jonas proposed that an antinomian attitude may result, directly, from an experience of a break in the unity of a system of norms. In a famous paragraph (famous because of its usage in Scholem's 'Redemption through Sin'), Jonas wrote:

… [W]e are confronted both with a total and overt rejection of all traditional norms of behavior, and with an exaggerated feeling of freedom that regards the license to do as it pleases as a proof of its own authenticity and as a favor bestowed upon it from above… and, inasmuch as it implies a positive realization of this freedom, [t]his uninhibited nihilism fully reveals the crisis of a world in transition: by arbitrarily asserting its own complete freedom and pluming itself on its abandonment to the sacredness of sin, the self seeks to fill the vacuum created by the 'interregnum' between two different and opposing periods of law. (Jonas, 1934: 234, translated in Scholem, 1995: 133–134)

Therefore, to Jonas, interregnums are marked by an imperative to misbehave. Some interpretative context may be in order: for Jonas, the situation is due to a commitment to the Correspondence Theory of Truth, that is, to the view which still often governs science and considers truth to consist in a correspondence, or match, between ideas and facts. Jonas argued that this view is fundamentally ungrounded: that totality can be understood only as an interconnected wholeness (Johnson, 1974: 38–40). However, some if not most elements of totality do work by the dualist logic of the Correspondence Theory of Truth (Johnson, 1974: 213). Therefore, claims Jonas, if a large enough crisis challenges the relation between ideas and facts – between theory and experiments, if you will – the Dasein (or its partial incarnation as part of totality) would not be able to sustain the idea of the unified, harmonious correspondence which constitutes the previous set of correspondences between ideas and facts. The image of harmony and unity would therefore be inverted and turned into a symbol of error. To illustrate, this is the reason why, according to Jonas, the ancient Gnostics inverted the image of the biblical God, YHVA, and transformed it into the figure of an evil, arrogant and ignorant deity, the Gnostic 'demiurge'. YHVA signified unity and wholeness, but the Gnostics reacted to a crisis, a period of normative or conceptual interregnum, which started with the Alexandrian conquest of the East and its exposure to highly refined Greek thought, while at the same time ending the political freedom of the Greek cities (Jonas, 2001: 3–23). The combination of these wide-ranging events created, in Jonas's view, a paradigmatic interregnum, which led to the transformation in the image of YHVA.

So far, the process is similar to Popperian theories of science, where a theory is rejected once it fails to perform. However, in Jonas's analysis, the worldview or theory is concerned with the characteristics and attributes of a cognizant entity which governs totality, such as the biblical God. That is, assuming there exists a god that created the world and governs it, Jonasian agents constantly attempt to maintain an account of what such a deity is like, and where it is located in the divine hierarchy, by observing worldly events considered relevant to understanding the deity. This examination relates to the logical problem of evil. Following Epicurus, the standard exposition of the logical problem of evil runs as follows: (a) if an omnipotent, omniscient, and omnibenevolent god exists, then evil does not; (b) there is evil in the world; therefore, (c) an omnipotent, omniscient, and omnibenevolent god does not exist. 'Evil', here, is a worldly event considered crucial for the performance of a governing deity. It is pertinent to note that it is judged undesirable by standards to which the deity is supposedly committed, since different deities might hold widely varying accounts of goodness, considered undesirable by other deities or other systems of belief. Therefore, what is at stake is the performance of a sovereign governor (its omniscience and omnipotence) matched against its account of goodness, or the truthfulness of that account of goodness, considered against the vast resources available to a governor, or a ruler. Concluding that the governing deity does not exist at all is but one potential solution, and a radical one. The Gnostics, qua Jonasian subjects, do not go as far as concluding that there is no god to begin with. Rather, and more prudently, they conclude that the god of this world is not omnipotent, omniscient, and omnibenevolent.5 Nonetheless, there is still a god, who does govern. Therefore, although not in possession of infinite power and knowledge, this god still has access to vast amounts of information and still commands huge operational resources. Therefore, something about this god might fall significantly below human standards: the deity may be ignorant by common measures of human knowledge, or evil by common measures of human goodness. Jonas's Gnostics concluded that YHVA was both ignorant and evil. They then developed complicated accounts which illustrated the existence of another god, superior to YHVA, and a chain of emanated deities, in which YHVA occupied the lowest level. 'Demiurge', the Greek term accorded by the Gnostics to YHVA, also reads as 'clerk', a fairly low rank in an administrative hierarchy.

5. Jonas never specified how the image of YHVA came to be important to the Greeks: for him, unity was the actual state of affairs, and therefore any symbol of unity would be rejected.




Jonasian politics

In a political setting, the existence of a party is unquestioned. Therefore, upon refutation of its claim for knowledge, it can be cast as possessing lesser knowledge, lesser ability to perform, or lesser commitment to the interests of the electorate. But if the party is aligned with the institutions of power and knowledge, then, although limited, it should still have had access to vast amounts of information and have commanded huge resources. Therefore, a refutation implies that the leadership of this party is somehow dysfunctional in ways unacceptable by the common standards of the voters. As shorthand for this thought, its image might be transformed from a symbol of unified knowledge and power into a symbol of unacceptable ignorance and inexcusable misuse of power, perhaps biased by faulty or outdated ideology. If one concurs that the leadership of this party is composed of rational and capable persons, to a level which implies that they also hold an effective and updated worldview, these persons are likely to be considered corrupted, in the sense that their actions are effective, and indeed informed by correct knowledge, albeit not to the benefit of the electorate but to the benefit of their milieu. While the links in this argumentative chain may be questioned and rebutted, they do make an argument which we would often consider rational, at least in the sense of being available and functioning as a reason for designing and accounting for action (albeit rational in the face of a refutation of the rational). That is, political planners who trust in this type of argument do not consider their audience to be simplistic intellectual underachievers, but perhaps the contrary: perceptive towards skeptical argumentation and radical conclusions.

As such, Jonasian interregnums can serve as an inexact model, or illustration, for the type of thought which I associate with B parties during election campaigns, periods in which a license to doubt is normative. That is, B parties can apply science, even though they are not branded as scientific. Yet their public image when conducting election campaigns would not benefit from it: commitment to science is an essential characteristic of their adversaries, the A party. Therefore, B parties may ignore science altogether. However, B parties may also adopt a meta-scientific approach, which would mean that, like Jonas, they would analyze pre-theoretic phenomena composed of groups of people who do have theoretical commitments, sometimes of considerable skill: the A party, its supportive establishment, and the portion of the electorate in which the intimate relationship between science and the A party is internalized. In other words, the leadership of the B party, which in its political role can access science only minimally, may find itself – if its leadership is sufficiently savvy, perhaps trained in skepticism – in possession of a certain scientific fact, necessarily accompanied by a pre-theoretic attitude, which is also a valuable meta-theoretic framework.


The process, as I see it, would develop as follows: the B party leadership comes to understand that they are indeed supported by people whose choice runs against science as we know it. An analysis of the reasons for the existence of the growing core of B voters is beyond the scope of this paper. Briefly, (a) the situation might be due to social factors: people who as a group have lesser access to the academy are less trained in and by its curriculum; thus, they are less proficient in its terms, which therefore command less authority among them. Alternatively, we may apply a license to philosophize: (b) since these groups did not participate in institutional learning of such-and-such concepts, the terms which denote these concepts do not mean the same thing in their conceptual language. One way or the other, in the supportive core of B voters, science fails as a language of politics. As such, the A/B divide implies an A/B fallacy, which in turn means that the A leadership, or the A coalition in the wide sense, cannot penetrate the supportive core of B voters and convert it to A voters. Therefore, the specific science which supports A is not only false but dysfunctional, at least during elections. Perhaps a future election would rely on a new science which would explain the phenomena, but in the meantime science as a whole is not an image of truth, for if it were, the A party would hold an unquestioned appeal for all voters. Therefore, autonomous decision in the face of the scientific establishment is a fact which science does not explain, and as such, science is refuted. This is not to say that the leadership of B parties rejects the very idea of science: according to the illustration, they reject a specific incarnation of science. But they do reject, and indeed refute, the notion of science as an unquestionable wholeness of truth. For, again, if it were such, they would not have a single supporter in the electorate. Thus, the leadership of B parties is always concerned with a new discovery: the limitation of instituted knowledge, Gramsci's 'old'. It is important to note that this situation is not wholly contradictory to the general idea of science: the fact results from refutation, and thus follows a standard meta-scientific analysis, i.e., the Popperian theory of enlightenment as a march of refutations and falsifications. Therefore, analysis of the pre-theoretic may serve as meta-theoretic analysis, and in our case as meta-scientific and anti-scientific analysis, in the sense of science-as-currently-instituted. In other words, the B parties' leadership is intrinsically associated with an image of science which is decidedly different from science-as-established: they consider it falsified, and they consider themselves to have performed the falsification. That is, they view the scientific establishment as their erroneous peers, not only because the B leadership operates outside the scientific establishment, but because they refuted it. While the refutation is partial, it clearly centers on the fact most important to the B leadership, the possibility condition for their existence. And therefore, they would gradually come to consider themselves, to a degree, as enjoying the scientific privilege to speculate outside legal chains, while at the same time not being subjected to scientific peer opinion. This achievement, of course, is potentially dangerous.




On the side of the electorate which receives the radical – and false – communication channeled by the preposterous challenge, the failure to answer clearly is demonstrative of Jonas's analysis: if the symbol of unity is rejected, it is possible that it is actually a symbol of a unified falsehood. Now, this may not be a necessary conclusion for all varieties of scientific knowledge, yet it is a possible conclusion: something does not cohere with the kingdom of knowledge and power represented by the A party.6

It may seem like a huge virtue of the Likud leadership that they did in fact subject their claim of a Perpetuum Mobile to standard scientific investigation, and accepted its results. Other behaviors, let us recall, drove nations into apocalyptic calamities: the Nazi science of racism, the Soviet experiment to re-engineer its public, or the Cambodian application of an internally sufficient economy serve as warning signs of where the road not taken by the Likud could have led. However, the Likud was not promoting a doctrine (racism, Marxism-Leninism and agrarian socialism, respectively) but a refutation of a doctrine. The party was therefore in power not because it had a secret B doctrine, superior to the whole of science-as-established, but because it refuted science-as-established in some key fields of human behavior. This situation seems general in the A/B divide discussed here: B leadership does not represent an alternative science, but an alternative to being rooted in science and representing science. 'A' parties, on the other hand, side with science-as-instituted: therefore, they cannot explicate why anyone would support the B party. Further, they do not apply pre-theoretic analysis, because they are committed to theoretic analysis. And therefore, they are limited in their choice of meta-theoretic analysis. As such, they are limited in their ability to predict the actions of the B party, which are rooted in the very form of meta-theory unavailable to them, inasmuch as they are an A party. This situation is another element of the aggregate risk: B parties that are able to apply this analysis may be able to exploit situations their A-party opponents would not. This advantage of the B party is not overwhelming, for we should consider science to be very effective, also when it comes to predicting electoral behavior and public opinion. However, within the limited arena of skepticism about science, B parties may surprise and trump A parties over and over again: their very existence is a surprise inexplicable to their opponents.

6. As an illustration for this type of consideration in a situation similar to that analyzed by Jonas, consider the advice of Bernard Gui, a famed Medieval Inquisitor. Gui suggested that inquisitors avoid public interrogations, since these could damage the support of the “faithful laity” who “believe that we have at our command… arguments so clear and obvious that no one may oppose us”, and “to some extent [be] weakened in the faith by observing the learned men are thus mocked” (Gui, 1991: 377).


Thus, Jonasian phenomenology is helpful for considering the relation between B leadership and science, their analysis of their adversaries as representing an establishment and being limited by it, and the specific antinomian mode in which that leadership is likely to reside. However, this does not imply that the antinomian mood would be used as radically as the Likud applied it. Let us consider a performative option derivable from Jonas's analysis of value and symbol reversal: that is, the inversion of symbols as dominant as the law of conservation of energy. This is a bold and provocative move, as bold and provocative as the invention of a perpetual motion machine. However, as we saw, there is nothing institutional which constrains it: the political checks and balances imposed on politicians do not relate to their acting as heralds of a new science.

The general outlook of the move is threefold. First, it should provide a vision of an alternative future which fits with the ideal of science per se: save the planet, in Meridor's case. Second, it should claim to have advanced towards the achievement of this goal by overcoming scientific barriers associated with science as culture. Last, it should appeal to a topic at the center of the brand associated with the adversary and its vision. Or, since the Labor party in general was so heavily invested in classic engineering (perhaps due to its socialist ties), including energy machines and advanced energy solutions, and since Peres himself was (and is) credited as the father of Israel's nuclear power program, the challenge should rest within this specific topic. Thus, the general outlook is of an attempt to compete with A on its own grounds: its own vision, and its field of reputed achievements, assuming that such an impact would be especially significant. The challenge consists not so much in the concrete proposal as in its function as a test of the A party's response – and science's response – to a proposal of disruptive innovation. The intended response, as it seems, would consist of a poor articulation of the A leadership's command over the scholarly challenge: while connected with science-as-established, they represent it rather than make it. Thus, the A leadership is portrayed as essentially equal to the B leadership: both are political professionals and neither of them is a scientist. Second, the very attempt to innovate on a personal level exemplifies the crisis of values of the type described by Jonas: for, if the A leadership represents the 'old' establishment, while interregnum is marked by individuated antinomies, the B leadership performs an action which fits and exemplifies the interregnum. These two characteristics are not related to the concrete situation, but to the accentuation of a certain type of transition between old and new: A is entrenched in science and the establishment, while B provides an essentially barbarian yet vital and individuated alternative. For the B party leadership, such an analysis is harmonious with their pre-theoretic/meta-theoretic view of the world. For the A party leadership, the very possibility of the event is unavailable. They would need to refute the claims by addressing their scientific base of support. This would take time and effort. Thus, for a brief duration, an extravagant claim may rule supreme, and signal that a revolution in values has already taken place.

As we saw, Meridor's Invention was presented just two days before the election: its refutation was total, yet required time. And until it happened, Labor was aligned not with science per se but with a version of science, which it failed to comprehend independently of its scientific board of advisors. Again, while one of Shimon Peres's lifelong achievements was the foundation of Israel's nuclear program, he was anything but a scientist in his own right. What he could claim – with great accuracy – was that he was in possession of exceptional skills in interpersonal relations, which allowed him to work with scientists as people with whom he could cooperate. Considering the complexities of knowledge, this claim is not anecdotal but typical of what A leadership can claim in all cases. However, knowledge which pertains to people is exactly where B leadership excels over established wisdom. Peres, like all A leadership, was therefore incapable of answering such a challenge: he could not, say, claim that he was not sure about the exact details of the second law of thermodynamics but trusted the people who do know. What he could do, and did, was to ask these people to provide a slow yet authoritative analysis, and lose the elections.

Donald Trump, post-truths and alternative truths

As we saw, ritualized rebellion against norms relevant to an opposing party, justly identified with a norms-enforcing establishment, may be part and parcel of a specific scenario of elections and election parties. I have discussed 1981 in Israel, but it sometimes seems as if Donald Trump read the same book Meridor read, and to an extent – although Menachem Begin and Jacob Meridor were anything but rude – they are on the same page.

The previous paragraph ended the presentation of this paper at the IASC Pisa conference, two weeks before the US elections, which were supposed to serve as a crucial test of the theory described above: under Trump's leadership, the US Republican party was an obvious B party. Led by Hillary Clinton, the Democrats were an A party. As described above, the Democrats often exhibited wonder at the very existence of the Trump phenomenon. Their most common explanation was that it was rooted in human weakness, which tends towards bestiality: "When They Go Low, We Go High" served as Clinton's response to the Republican campaign. Clinton was supported, of course, by the knowledge establishment as a whole, from the Ivy League to private knowledge magnates such as Mark Zuckerberg and Sergey Brin. According to the analysis above, and given the extreme level of A/B asymmetry, Donald Trump had a surprising advantage, and might well win those US elections. His victory and some of the developments which followed call for some additional comments.


First, Trump's victory is now a historic fact, and a troubling one. Second, unlike the move illustrated by Meridor's Invention, Trump did not produce a single event of the type analyzed above, an October surprise or a last-minute invention of a Perpetuum Mobile. Instead, Trump lied throughout the campaign on a huge number of issues, and created a truth-challenge of a different type: each of the non-truths could have been checked and refuted, but the sheer number of distortions was overwhelming even for the capable staff on the Democratic side. Therefore, the inability of an A party to provide a systematic defense of its supposedly factual alliance was constantly present to all voters: if the utterances were not checked, they were accepted and exerted influence through the conclusions drawn from them; if they were refuted, these utterances demonstrated the Democratic dependency on teams of fact-checkers. Third, Trump practiced other forms of value-reversal, notoriously so when his attitudes towards women were raised. Here, the challenge exposed how hard it is to school anyone in politically correct practices. Clearly, Clinton was significantly slowed when she had to provide apt explanations as to why exactly it is morally wrong to talk in a certain way. In other words, this specific form of challenge proved how complex progressive theory is, and how little its arguments, statistics, and theoretical assertions are publicly known, even to the political leadership. Meridor's Invention proved that Shimon Peres was no nuclear physicist; Trump's consistent challenges to women's dignity proved that the Democratic Party, and Hillary Clinton, failed as eloquent and effective representatives of contemporary feminism.

Last but not least, the Trump controversy after the elections brings the Correspondence Theory of Truth into the very heart of political discourse. Cast as 'post-truth' vs. 'alternative truth', the campaign against Trumpism again presupposes that the other side is anything but effective in this field. It seems to attempt to cast the Republican leadership as essentially ignorant of, or indifferent to, what truth is, not noticing that this situation necessarily means they have some concept of meta-knowledge which they find useful or truthful, and probably both useful and truthful. If my analysis is correct, this approach would miss its political point entirely: it would only serve to extend the horizon of the interregnum, and provide a convenient vehicle to initiate the next campaign, already in the present, simply by proving once more how pretentious is the claim for knowledge held by an A party. As of now (2017), two demonstrations supportive of this conclusion seem already to be available: the US economy did not tumble, and anti-Semitic incidents were traced not to Republican voters supposedly influenced by Trump but elsewhere. Both attest to the limits of the established knowledge used for interpretation. So, if the trend continues, then Trump would not only occupy the White House for a second term, but might institute a longer period of Republican domination. This is not the worst of all evils in itself, yet single-party domination does constrain the potential of democracy. If the Israeli example is taken to the extreme, then the A/B divide may create a constant state of A/B mélange, in which the B party always retains political power, yet governs over and with an establishment essentially committed to the A party, or to the A opposition. At the same time, and although in power, at least some of the B-party supporters would necessarily feel alienated from established knowledge, and even persecuted by it. Again, this situation may not be the worst of all evils, but it is an evil, as well as a form of single-party governance which democracy is supposed to challenge.

In conclusion

I have tried to present a paradox in which an asymmetry in affiliation with the scientific establishment might be a reason for the electoral failure of parties which align with science, and for the victory of parties which do not. Remarkably, the conclusion of the analysis presented above could be that serious treatment of B parties and their practices could actually strengthen their ties with the scientific establishment, and deprive them of the particular bag of tricks I have tried to sketch. Or, A parties should acknowledge the possibility of a Meridor Invention, and prepare for it: A parties could, in principle, attempt to fill their leadership with people from a scientific background relevant to their specific agenda (say, Dr. Angela Merkel). Alternatively, A parties' leadership could reconsider the degree to which it represents the establishment, as opposed to providing a more autonomous figure for the judgement of the public. For if not, the election interregnum would continue to serve as a blind spot for A parties' leadership and a hunting field for B parties' leadership: as noted above, the Israeli experience suggests that this advantage may endure for decades.

References

Agamben, G. (2005). The State of Exception. Chicago: University of Chicago Press.
Avneri, A. (1993). The Defeat: The Fall of the Likud Government 1977–1992. Tel Aviv: Dani Books.
Bauman, Z. (2012). Times of Interregnum. Ethics & Global Politics, 5(1). doi: 10.3402/egp.v5i1.17200
Bradley, C. P. (1985). Parliamentary Elections in Israel: Three Case Studies. Shoe String Press Inc.
Fruend-Avraham, Y. (2015, April 4). Redeeming Zion: The Invention That Almost Changed Israel. Retrieved April 9, 2017, from NRG: http://www.nrg.co.il/online/1/ART2/690/555.html
Gramsci, A. (1971). Selections from the Prison Notebooks. (Q. Hoare & G. Nowell-Smith, Eds.) London: Lawrence & Wishart.
Greilsammer, I. (1986). The Likud. In H. R. Penniman & D. J. Elazar (Eds.), Israel at the Polls, 1981: A Study of the Knesset Elections. Indiana University Press.


Gui, B. (1991). The Conduct of the Inquisition of Heretical Depravity. In W. L. Wakefield & A. P. Evans (Eds.), Heresies of the High Middle Ages (pp. 375–445). New York: Columbia University Press.
Johnson, R. A. (1974). The Origins of Demythologizing: Philosophy and Historiography in the Theology of R. Bultmann. Leiden: Brill.
Jonas, H. (1934). Gnosis und spätantiker Geist. Teil 1: Die mythologische Gnosis. Göttingen: Vandenhoeck & Ruprecht.
Jonas, H. (2001). The Gnostic Religion: The Message of the Alien God and the Beginnings of Christianity. Beacon Press.
Kleinberg, A. (2016, October 18). Zvi Razi, In Memoriam. Retrieved March 29, 2017, from Zvi Yavetz School of History, Tel Aviv University: http://humanities1.tau.ac.il/history-school/he/people/in-memoriam/2-uncategorised/4220-razi.html
Leff, G. (1968). Paris and Oxford Universities in the Thirteenth and Fourteenth Centuries: An Institutional and Intellectual History. London: John Wiley & Sons.
Lehman-Wilzig, S. (1983). Thunder Before the Storm: Pre-Election Agitation and Post-Election Turmoil. In A. Arian (Ed.), The Elections in Israel, 1981. Tel Aviv University: Ramot Publishing.
Olsen, H. L., Dugger, G. L., & Shippen, W. (1973). Preliminary Considerations for the Selection of a Working Medium for the Solar Sea Power Plant. Johns Hopkins University, Silver Spring, MD (USA), Applied Physics Lab.
Parry, R. (1993). Trick or Treason: The October Surprise Mystery. New York: Sheridan Square Publications.
Peres, S. (1999). The Imaginary Voyage: With Theodor Herzl in Israel. New York: Arcade Publishing.
Scholem, G. (1995). Redemption Through Sin. In G. Scholem, The Messianic Idea in Judaism and Other Essays on Jewish Spirituality (pp. 78–141). New York: Schocken Books.
Sternhell, Z. (2009). The Founding Myths of Israel: Nationalism, Socialism, and the Making of the Jewish State. Princeton, NJ: Princeton University Press.

Chapter 7

Science and the source of legitimacy in democratic regimes

Oded Balaban

University of Haifa

… the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using the word if it were tied down to any one meaning. Words of this kind are often used in a consciously dishonest way.
George Orwell (1968, pp. 132–3)

Democracy admits no source of authority. It assumes that values are not derived from facts, and facts are not derived from values. This runs contrary to Plato's "virtue [values] is knowledge." According to Plato's logic, experts should rule the republic. Contrary to his view, democracy assumes that there are no experts on values. Therefore, democracy means the "decision" to rule by means of formal procedures like suffrage or the rotation of rulers. Unlike in science, what prevails is not reason and authority based on knowledge. This is the ground for the measures needed against the tyranny of the majority, to assure tolerance, freedom and equality before the law.

Keywords: facts, values, tolerance, practical knowledge

Democracy, as an ideal form of government in which citizens have the capability to rule and to be ruled in turn (cf. Aristotle, Politics, 6.2, 1317a40–b3, 1317b18–20), is an historical invention, though it has a fictional character. The invention and establishment of democracy, at least as a formal system, where most issues are decided by deliberation and voting, is attributed to Kleisthenes, whose constitution, according to Aristotle, was more democratic than that of Solon (Aristotle, Athenian Constitution, Chapter 41). It took place in the wake of the liberation from the Peisistratid tyranny, ca. 510 BC. Whether it was a deliberate act or a gradual transition is not clear (see Ostwald, 1969, pp. 20–56; Fornara, 1991). However, what has been established since then is rather the idea of democracy than democracy itself. Rousseau contends that "If the term is taken in its strict sense, true democracy has never existed and never will" (Rousseau, 1994, p. 101).


And Giovanni Sartori adds that democracy can be defined as “a high-flown name for something which does not exist,” and that it “is a misleading term for what it claims to designate” (Sartori, 1962, p. 3). The reason is that the concept of democracy was not created to describe any given fact. It is a normative concept that never shows itself as such but is disguised as a notion about certain facts that can be described and explained. It is intended to be used as a practical guide, as a regulative principle. The different definitions of democracy have an Aristotelian character, namely, they are definitions by genera and species that define what something ought to be, rather than what it is. Alternately, at least, we cannot draw a clear-cut distinction between the use of the term with a cognitive or, on the contrary, with a normative intention. Descriptions and illustrations of real democracies will only be looked upon as approximations to those ideal definitions. Democracy is defined in the same way as existing whites are defined by their proximity to whiteness, or existing horses are defined by their proximity to the “horseness,” the ideal horse. It should be noted that the non-existence of democracy is only the first in a long line of scandals related to this notion. The concept of democracy has many and even opposite meanings. There are also attempts to keep a delicate balance between opposite meanings, so that it becomes easy to promote, in its name, recommendations opposite to those expected. For an analysis of the different meanings of democracy and a succinct historical account of democracy, see Benoist (2003). Victor Massuh notes that Democracy attempts to satisfy the will of the majority without sacrificing the minorities, to favour equality without ignoring differences, to make room for civil society without devaluing the role of the State, to preserve the rights of the individual without neglecting the general interest. It encourages a subtle electoral mechanism by taking pains not to dampen democratic enthusiasm or its vitality; it sees to it that private and public interests interact without tension, ruptures or corruption.  (Massuh, 1998, p. 67)

All those contradictions are supposed to be reconciled living together in the idea of democracy. The question is whether one or more of those contradictory meanings of democracy, or democracy itself as a synthesis of contradictory meanings, has a rational justification or foundation. My thesis is that all those attempts made in the past and the present to find a foundation for the support of democracy are not sustainable. The result, or rather, what is assumed beforehand in democracy, generally with complete unawareness, is that either real or ideal, democracy lacks, as a matter of principle, a source of legitimation. The difference between democracy and other regimes lies in that it is the only one resulting from the awareness that all actual and possible regimes lack a source




of legitimacy. The case of the matter is not that first democracy was invented, and later it was discovered that it lacks a source of legitimation. Democracy was invented as a result of this awareness. The great discovery was, then, that there are no governments that possess such a source. Its first expression, in Athens, was when, facing policy-deadlocks, Athenians did not decide to settle disputes by killing or exiling opponents, as was done before by tyrants. Athens reacted to discrimination, stigmatization, ostracism and other forms of interpersonal rejection by turning to a procedure where each side will be ready to assume the responsibility of sustaining the rights of minorities to live with the decisions of the majority. Namely, Athenians accepted as legitimate holding public discussions and making final decisions by vote or rotation in office, on those issues depending on values, not on expertise and professional knowledge. In his Protagoras, Plato shows how clear in his days was the distinction between issues discussed in democratic assemblies, that are issues concerning values and ends, and issues concerning knowledge of means. Public assemblies consult and take advice from experts, and are ready to give them the legal authority to make decisions on the fields of their specialization. Professional issues depending on knowledge are not discussed in general assemblies (see Plato, Protagoras, 319b–320b). This does not in any way contradict the historical fact that democracy started as “a kind of extended aristocracy” (Ehrenberg, 1960, p. 50). My description rather expresses the adaptation of aristocracy to the new regime, which emerges from the lack of a source of legitimacy.

The attempts to find a source of legitimacy

Many attempts have been made to find a source of legitimacy even for democracy, a regime that in fact arose from the recognition that such attempts are futile. The attempts themselves run against the origins of the idea of democracy. I will give a few examples.
First, let me examine the shortcomings of the Will of the people, which is a widely accepted source of legitimation for democracy. I expect to draw some valid conclusions from this example for all other cases. The problem can be addressed as follows: let me start with the Individual Will. Regarded as a source of legitimation, it is held to be immanent, since the possession and use of the Will belongs to each of the individuals. The individual Will is, precisely, the principle of individuation, or the principle of differentiation between individuals. The Will is something that I do not share with others and others do not share with me. For this reason, no political regime will function if it is based only on the individual Will. It will rather be the ground for chaos or for a state under the laws of nature, as defined by Spinoza, where "fish are determined by nature to swim and big fish to eat little ones, and therefore it is by sovereign natural right that fish have possession of the water and that big fish eat small fish" (Spinoza, 2007, 195).
Now the question is, what do we hold in common if not the empirical Will? As individuals, we have a common Reason as opposed to a Will, as Descartes rightly held. He wrote that the source of error is the premature decision of the Will, which puts limits to the activity of our understanding (Descartes, 2008, Fourth Meditation). It is premature indeed, but only from the point of view of theory. For practical needs, we need the intervention of the judgment, or the Will, which is responsible for decision-making. Without the intervention of the Will, we would remain unable to decide but only to think, including thinking about decisions, though without deciding. If we did not share the same reason, a common understanding capacity, we would be unable to hold a dialogue in which each understands the arguments of the other. Reason is the only trans-individual or inter-individual capacity that might be claimed as a source of legitimation for democracy. Remember what the problem is: democracy means the recognition of values that I do not share, which is the reason why we cannot, in principle, agree regarding decision-making. If we were to have the same Will and share the same values and preferences, we would need neither democracy nor a State nor laws or legislation, but computers would make better decisions for us. The case is that we have different Wills and share the same Reason. We cannot agree on our values, but we can understand each other, including our disagreement, and agree about this disagreement – namely, understand it in the same way and with the same meaning. That means that the sum-total of Wills does not create the factor that causes and guides normative political activity. Another kind of Will is required, one that is not identical with the arithmetic sum of wills, but a will oriented towards the political good.
As a step to overcome the individual Will, an opposite idea of Will has been conceived – the idea of a General Will, which is Rousseau's version of the romantic Volksgeist. While the Will of all is real, empirical, representing the particularity of interest, the General Will is a normative universal idea – it is not real but ideal. It is justified as a fictional transcendent Will, since it is supposed to impose itself upon the individuals from outside. It appears when we look at ourselves from without, in reflective thinking. Yet since it is not really extrinsic, because it does not come from the heavens, this General Will has to be regarded, like the individual Will, as an attribute of the individual, but as an attribute of the impersonal aspect of the individual – namely, it is supposed not to be part of the empirical, factual individuality – it is supposed not to be part of the individual's individuality. It is rather supposed to be the representative of the individual's generality or, better, an individual's universality. It refers to the citizen, not to the individual, or, rather, to the individual insofar as it is regarded only from the point of view of its being a citizen of the State, a kind of transformation of a concrete individual into an abstract citizen. It refers to what people would want if they were able to make decisions only as citizens and not as individuals (Rousseau, 1994, pp. 66–7). Kant and Rousseau tried to go down this path.
Note that the problem was to find a source of legitimation as a guiding principle for all the citizens about what has to be decided; namely, the question is to find some "ought" that should be derived from an "is" while the "is" itself should not be empirical. If it were empirical, it would be impossible to find a source of legitimation, since the source has to take precedence over the empirical citizens. It has to be something placed at the origins of a natural, given, process and, therefore, not part of it. This has to be the place of Freedom against natural necessity. It is what is called the distinction between de facto and de jure, which is also the passage from being a natural being to becoming a social being, from being a human being to becoming a citizen. Real freedom, indeed, does not mean following one's inclinations or given nature, but precisely being a subject that posits itself in front of itself as a natural being. It is the proof that the subject can act upon itself and change its own natural character, becoming a social and historical being. This is the work of Reason when applied to the Will, or the work of a Rational Will. Thus, Rousseau and Kant extended Reason to the Will. The General Will is supposed to be a Rational Will. Reason, according to Descartes, is the source of shared understanding though not of shared values. This is the problem, therefore: how to turn a principle of understanding into a normative principle intended for making decisions. The General Will has this capacity only when it makes decisions that are proposed, or guided, by universal Reason. Otherwise it remains individual Will. The General Will, or the Rational Will, is assumed to be a source of legitimation, as volonté générale, as clearly differentiated from the empirically given sum of individual Wills, the volonté de tous, or when it becomes one with Reason. As such, however, as General Will, if it is regarded as the source of legitimation, it is merely because, actually, it is the precondition for law-making, in the sense that it in fact legislates, so that we become involved in a petitio principii where the empirical existence of the General Will is the proof of its existence. In short, it explains law-making, but does not justify it. Reason itself is able to understand, but does not make decisions. Thus, the very attempt to regard the Will as the source of legitimation is a tacit admission that there is no source, since the Will, in order to be universal and to be really a source, cannot be derived from empirically given facts, in this case, the fact of legislation. Nor can Reason help the Will to make decisions, since Reason represents, in the is/ought distinction, the side of the "is" alone. Reason cannot tell us what to do, but only what is; therefore, it can only be used as a means for decision-making.


There is not a real source of legitimacy

What I said regarding the General Will, as I argued elsewhere, is true for any attempt to find a source of legitimacy (for more cases, see Balaban, 2004). Now it should be clear that the real problem of legitimation is that we are trying to find something unattainable. It amounts to finding a value derived from facts, or an "ought" that is derived from an "is," though generally disguised as if this were not the case, as if it were instead a real foundation. This way it does not work, or works only in appearance. The problem here is that if the Will is not derived from experience, if it is disinterested, if it is universal, how can it be the source of legitimation for any concrete, given, democracy? At most it might be the source in the sense that it can explain a type of government, but it cannot be a legitimating source. This is the core of the problem: if it is not derived from experience, how can experience be derived out of it, namely, based on it? Laws are at least in accordance with a principle, but are not derived from it. Does all this not simply mean evading the question by means of a wrong answer? Does it not mean that we are dodging the question? Isn't this a justification of something merely by the assertion that it is as it is? Isn't this appeal to the Will as a source of legitimation similar to any attempt to justify something merely by means of recognizing its existence? Isn't this a circular positivism or empiricism, in the sense that what is, is at the same time and for this reason, also what ought to be? If the Will explains a given democratic society, how can such an explanation become its legitimation?
Joseph Raz seems to take this misleading path, mixing together de facto and de jure authority, or taking together authority and its legitimacy. So, when a legal system is in force, when it is a de facto authority, it also means that it possesses legitimate authority or is held to possess it, namely, it claims to have it (Raz, 1994, p. 215). He, therefore, does not make a distinction between the exercise of power and its justification. Power, on the contrary, can be exercised without claiming justification, or at least, independently of this claim. Authority and legitimation are very distinct things. The first is only a fact, an "is"; the second implies an "ought." There is something similar in the writings of Max Weber, the most influential writer on legitimacy questions. Weber contends that the legitimacy of an authority lies solely in the fact that people believe in the existence of a legitimate order, and this belief is the source of the validity of the order (Weber, 1968, p. 31). Actually, Weber expresses, though in positive terms, the idea that there is no source of legitimacy. Legitimacy should be derived from a general criterion, and not from given facts. It involves a valuation of facts, not merely their recognition. Weber denies the transcendence of legitimacy, and so, perhaps contrary to all appearances, totally denies the existence of a source of legitimacy. Let me put this in a less outrageous




style: he denies the existence of a valid legitimacy but recognizes only a factual one, which is but a contradictio in terminis. Now, I will take a further step. I think that even Reason cannot be a source of legitimation for decision-making. Because calling upon Reason as a source assumes that the problem was solved beforehand. Resorting to Reason assumes that we all share the same values and have the same preferences. So, although at first glance rational deliberation may seem to be a more democratic way for the resolution of conflicts than to decide by majority, deciding by majority at least does not deny possible antagonisms between values but acknowledges their existence. Deliberation or the idea of direct participatory democracy, assumes, on the contrary, that such discord is merely ostensive or that it is based upon misunderstandings that can be solved through negotiations. This way, assuming that people share the same values, they are supposed to be able to arrive at consensual decisions after discussing their differences openly and without fear, and even to reach a “win-win” solution where each participant gets more than it would have gotten if turning to other means. This last way is preferred by Jürgen Habermas. He recognizes that there are two concepts of legitimacy that have to be rejected, the empiricist and the normative. The first is analyzed by social sciences, since it is about facts, from which values cannot be deduced, disregarding the question of legitimacy or, what seems a tautology, disregarding the validity of legitimacy. It treats legitimacy as a social fact, without assuming the validity of validity or of legitimacy. The second, the normative, is, according to him, unsustainable because it disregards empirical elements, but is, according to Habermas, “untenable because of the metaphysical context in which it is embedded” (Habermas, 1979, p. 204; see Balaban, 1990). In this classification, Habermas assumes that there is no place for only one of them –the empiricist or the normative concept – in order to understand legitimacy. It is because Habermas himself thinks about legitimacy by asking at the same time for the validity, or normativity, of legitimacy, namely, when coming to analyze the meaning of legitimacy, he does not distinguish between a normative and a cognitive perspective. It is remarkable that in another order of things, when asking about the justification of law (in the normative meaning) in the framework of the principle of democracy, he contends that the law is not something that “one could ‘justify’, either epistemically or normatively” (Habermas, 1996, p. 112) However, since he believes in a process of communicative shared understanding as the basis for making decisions, he proposes a third position to solve the conflict between facticity and legitimacy (or normativity), the idea of a reconstructive or a critical legitimacy, that mixes together both points of view. However, Habermas adds that along his hermeneutic approach, he does not arrive at “a judgment of the legitimacy that is believed (factually, O.B.) in” (Habermas, 1979 pp. 204–5). He adds immediately, however, that “assuming that idea and reality do not split apart,


what is needed is rather an evaluation of the reconstructed justificatory system itself ” (p. 205). Habermas needs the assumption about the union of idea and reality in order to propose his own normative result: the possibility to achieve a rational consensus, meaning a shared consensus for a shared decision. That means that reason, or a rational agreement, is the primary mediator between facts and norms. Moreover, reason bestows legitimacy to legislation, as if legislation itself were devoid of self-legitimacy, by its being what it is, an act of free will devoid of a further legitimacy or foundation. Yet Habermas is not really ready to separate knowledge from value-judgment. The problem of legitimacy of legislation takes in general the form of the distinction between law or administrative law, and constitution or constitutional law. The first, represents factual legislation, and the second is an attempt to bestow legitimacy to legislation, it is an attempt to ask the question of the validity of law, namely a justification based on more general principles that, in themselves, are still normative. This is an impossible and perhaps unnecessary step, since legislation is already a normative notion. To ask for a further justification in a constitution means to regard laws as if they were in need of further foundation, as if they were devoid of legitimacy for the very reason that they are only given by order of the legislators. It seems that those who look for a foundation for laws want unconsciously to return to the patterns of thought prevalent in ancient Greece. To understand while taking sides and taking sides while understanding, is the typical way of thought in ancient Greek philosophy, where values and facts are mixed in such a way that the reader cannot ask, Plato (for example), whether he is asking in his dialogues about what people do or what they should do, or whether the Idea has a cognitive or a normative character. In his philosophy, Truth and Good are indeed one and the same. It seems that Habermas, like Plato though at a higher level, does not draw the distinction in order to understand without taking sides. The very turn to the concept of rationality is a value-charged turn like justice, love, courage, etc. When referring to someone as rational, it is intended to mean that he thinks as he ought to think, not as he actually thinks. To use the faculty of reason “properly” is to use it as it should be used. To call someone “irrational” is a way to say that he does not think as he should. In such cases people judge based on a norm about how people should think. Hegel was right when he referred to the so-called “laws of thought” as normative laws that do not fit human thinking but that are taken as if they “lie at the base of all thinking; to be inherently absolute and indemonstrable but immediately and indisputably recognized and accepted as true by all thought upon grasping their meaning” (Hegel, 2010, p. 354). It seems that Habermas was also trapped by this normativity. His project is to give validity to arguments by means of a discourse oriented to promote mutual understanding, to reach a valid accord, not only between those who participate




in the dialogue, but also for any possible rational subject. A rational subject is one who is able to justify his acts with normative valid orders and, mainly, one who acts attempting to be able to judge without biases (see Habermas, 1981, p. 39). One of the interesting theoretical results in this regard, is an inversion of terms. Instead of asserting that democracy is the only regime that recognizes the lack of a source of legitimacy, it is asserted that only democratic regimes can be legitimate. For example, Allen Buchanan regards legitimacy as being the same as morally justifiable (see Buchanan, 2002). Bernard Manin believes that democracy is a regime based on deliberation, presupposing that reasons and argumentation are the basic character of democracy (see Manin, 1987). It is my conviction that the very decision by voting contradicts this rational principle. Kenneth May argues that the source of legitimacy lies in the formal conditions represented by democracy. For him, the capacity of the regime to make decisions is the key issue (May, 1952). In the same vein, Joseph Raz maintains that democracy is necessary for legitimacy, though only if it brings about desired outcomes (Raz, 1995, pp. 31, 161, 367). It seems that theorists of democracy are not ready to accept the democratic principles as a result of the absence of a source of legitimacy, believing that democracy is its very source.

On the role of science in political decision-making

When the lack of legitimacy for democracy (or for any other regime) becomes clear, a second question arises: can democratic decisions be taken on the basis of knowledge of given situations, or do they also lack a source of legitimacy? Jeremy Waldron maintains that even if there is a disagreement on values, citizens can agree on what sorts of argument would make a proposition more likely to be true, shifting the focus to epistemological arguments in order to justify the authority of law (Waldron, 1999, p. 73). Legislation is for Waldron, at least in principle, a matter of knowledge. The basic assumption, which coincides with a widely held popular belief, is that political decisions have to be taken by experts. Decisions are mainly based on what is called "Rational Choice Theory." This assumes that citizens share the same values, so that it begs the question. It assumes that a rational actor is motivated by "expected utility" rather than by other values like altruism or direct pleasure. Utility is taken as the sum of an agent's preferences. If Rational Choice Theory is an instrument for deciding what ought to be a rational choice, such a theory, if it recommends some course of action, has to take into account the subjects' cultural or ethical norms. If the question is how rationality is bounded by cultural or ethical norms, or by cognitive constraints (see Simon, 1955), in this case, nobody will be able to propose some pattern of behavior unless the pattern is socially accepted in each


case, which is the opposite of a normative rational choice approach, that should not be deduced from actual social facts. Under the same assumptions, there is even the contention that our political problems arise because we lack professional politicians, meaning that those who are not are ignorant on the relevant questions about which they are called to make decisions. Rationality under this approach, does not mean to choose means for given (neither rational nor irrational) ends, but concerns those definitions of rationality that put into question the very ends, values or beliefs, arguing that there are beliefs that can be true, namely, epistemically justified. Amartya Sen supports this approach. For him, rational choice “is primarily a matter of basing – explicitly or by implication – our volitional choices on sustainable reasoning… Rational choices … is foundationally connected with bringing our choices into conformity with the scrutiny of reason” (Sen, 2007, p. 340). Sen, ultimately and in a rather sophisticated way, attempts to reduce values to knowledge. The appeal over centuries to Plato’s Republic, which is the utopia (or to my taste a dystopia) of a country governed by philosophers, has its origins in this tendency to believe in knowledge as the source for decision making. The belief is that the production of scientific knowledge includes the knowledge of the values that determine what has to be decided, implying that this knowledge includes making recommendations on what to do. My second contention, answering the question of whether democratic decisions can be taken on the basis of knowledge of given situations, or whether perhaps they also lack a source of legitimacy, is thus, that the need of knowledge for decision making does not run against the normative non-scientific basis for the determination of ends, since it refers only to finding the means that must be taken before the given ends. Ends are determined by values and the conflicts between values cannot be resolved by knowledge of relevant facts, though this knowledge plays a central role that needs yet to be determined. The normative character of the ends can be shown but not grounded on something else. They are determined by unfounded systems of values. Knowledge enters into play when we ask what are the best available means to realize our values. Once acknowledged the lack of a source of legitimacy, the question is about the role of science (or knowledge in general) in democracy. Science deals with knowledge of facts, even with knowledge of democracy as a given fact and with the knowledge of values. The knowledge of values is the subject-matter of axiology, the science that studies values though without taking a stand for or against them. It is a value-neutral study of values. Values have, for axiology, the status of a special kind of “facts.” Even in this case, however, values are regarded as the stands taken on facts, and they are applied to facts, either to merely value them, expressing approval or rejection of them, or are applied to facts in order




to change them or to prevent their being changed by someone else. They are merely used for the valuation of facts, or as a guide to be applied to facts. Under this value-neutral perspective, values are regarded, in principle and without exception, as not derived from facts, just as facts are not derived from values. David Hume was the first to understand this distinction, though he limited the issue to moral questions (see Hume, 2007, p. 302). Values are the stands taken on facts, though they are not a property of those facts, but something that lies beyond them. As a matter of principle, values are derived from other values, and facts are derived from other facts. The deductive chain of values is independent of the causal chain of facts. Believing that each one can be derived from the other, values from facts or facts from values, most likely results from incorrectly assuming that values, since they are applied to facts, are also derived from that to which they are applied – from facts.
These distinctions do not prevent people from deriving "ought" from "is." However, in this case, facts are grasped from the perspective of their status. This means considering a fact as already having a value property belonging to it by its very being given. Deriving "ought" from "is" means regarding facts as an authority. This, however, is not a derivation. It is instead a sanctification of facts by their mere status as facts. Regarding facts as authority means assuming that "What is, is what ought to be" – this is the only normative conclusion, and it contradicts the very distinction between "is" and "ought".
Coming back from this journey to the origins of democracy, we may be better prepared to understand the swing of democracies toward formal procedures for decision-making. There are several principles that result from the awareness that democracy lacks a source of legitimacy and from the need to ensure the co-existence of different, controversial and even opposite values. The awareness of the lack of a source of legitimacy leads democracies to support principles intended to prevent their degradation into a belief in experts, which would replace the primary implicit understanding that value disputes cannot be solved unless they cease to exist, namely, unless people share the same values and the same order of value priorities. Here are two of those principles, succinctly explained:

1. Suffrage

The central expression of the awareness of the lack of a source of legitimacy is the formality of decisions by universal right to vote. Voting, indeed, is the most conspicuous expression of irrationality. Instead of arguments, there is a showing of hands or, better, the secret ballot procedure. That means rule by the decision of the majority, ignoring who is wise and who ignorant of relevant facts. Citizens are expected neither to be rational agents nor to be experts on the issues subjected to voting.


Universal suffrage, however, has its critics. Bryan Caplan complains that voters in democracies are not merely ignorant about the issues under consideration, but are also irrational in their voting (see Caplan, 2006, pp. 114–41; see also Huemer, 2013, pp. 209–14). Others, on the contrary, like James Surowiecki, have sought to defend the intelligence of the masses (Surowiecki, 2004). There is a vast literature on the issue (my choices are: Le Bon, 2002; MacKay, 1932). Both points of view share the same assumption: whether criticized or acclaimed, knowledge and ignorance are, ultimately, the basis for voting, which is but an inversion of the principle of the lack of legitimacy.

2. The separation of powers

The separation of powers has become axiomatic for modern constitutional democracies. Separation of powers means the division of the functions of government from one another, and counsels against the concentration of too much political power in the hands of any one person, group or agency (see Waldron, 2013, p. 438). The discrete branches are generally called the executive, legislative and judicial branches of government. The separation determines the role of each branch in the process of making laws, ruling according to laws, and their interpretation and application to particular cases. The legislature makes the laws, the judiciary interprets them, and the executive makes decisions according to them. The name of the executive should rather be the "ruling" or "governing" power, because governments do not execute laws but rule according to laws or in the framework of laws. Like voting, the separation of powers is also an irrational principle. It is adopted because of distrust of the rational power of the ruler, who was elected by general suffrage. It is a request for controls, checks and balances in order to avoid tyranny. If ruling were a rational question deduced from knowledge, there would be no need for the separation of powers. It is interesting to note in this regard Locke's argument about the separation of powers. It comes in order to restrict the temptation
for the same Persons who have the power of making Laws, to have also in their hands the power to execute them, whereby they may exempt themselves from Obedience to the Laws they make, and suit the Law, both in its making and execution, to their own private advantage. (Locke, 1980, p. 76)

The idea is that people will not act rationally or for a common good unless they are obliged to be subjected to the laws that they have themselves promulgated. It should be an extrinsic force that obliges free individuals that, without the constraint of the separation of power, will not be concerned with laws. Locke’s point of view can be criticized, obviously, but my sole intention here is to show that nothing will




prevent humans from legislating laws and regulations that could run against the general interest, whatever it might be. Simply because of the lack of a source of legitimacy for political decisions, just because of the arbitrariness of laws, to ensure lesser evil or greater justice, or to ensure the best possible free society, a democratic regime will be reinforced by the separation of powers. It is indeed a formal device alone, but one that imposes restrictions on political government. However, as in many other cases, also in this case there is similarly a tendency of the mind to invert the terms. Instead of trying to understand the idea of the separation of powers as a result, the tendency is to justify its “rational” arguments as if it were a precondition for the normal function of the state. Hegel applied this inversion taking it to the extreme. He distinguished between three powers according to his understanding of the logical division of concepts in general. The concept is divided into three parts: universal, particular, individual (Hegel, 2010, II, Section I, Chapter 1). So, accordingly, the division of powers, where each power represents another aspect of the concept: the legislative power determines the universal side of the State, the governing power represents the particular side, and the power of the crown the individual side. By means of this division, Hegel supports a constitutional monarchy where the three together constitute the perfection of the State. As in Locke, the judicial power is not part of the formation of the State but lies beyond its constitutive sphere. The judicial only keeps the civil order; it is an administrative function acting from outside (cf. Hegel, 1995, § 272 § 273). The division is rational and constitutive of the State, so that it is not regarded as a result but as a principle, something typical of processes of a-posteriori rationalization. The separation of powers into three separated branches of government, though originated in ancient Greece, was theoretically established by Montesquieu (1989, p. 157). It was supported by John Locke and opposed by Thomas Hobbes (for a discussion on the issue, see Manning, 2011; Waldron, 2013).

Decision-making and science

The separation of powers and suffrage are devices resulting from the incapacity to reach agreement on value disputes, struggles, and controversies by rational means, or by knowledge of certain facts. In fact, values lack foundation, and this acknowledgment is the starting point for the idea of democracy. The separation of powers remains unjustified and can be perfectly presented as the result of the impossibility of finding a commonly accepted basis for values and political decisions. Under those conditions, at least, the damage done by political decisions to those who do not agree with them will remain a lesser evil.


Those principles result from the acknowledgment that no decisions can be taken collectively with arguments that are accepted by all, as opposed to arguments that can be accepted by all as true or false, right or wrong. In a context where decisions need to be taken in real time, democracy expresses the acknowledgment that no argument whatsoever will become a winning argument. This is important since it shows the difference between decision-making and science. Science is not in a hurry to make decisions and therefore it does not need voting. In politics, as is the case with any practical issue, we cannot be philosophers or scientists; we cannot afford to sit back and do nothing besides thinking and understanding. Science is a case that represents concern with the "is", namely, the knowledge of facts, while politics is a case representing concern with the "ought", namely, the attempt to change facts according to values.
The lack of a source of authority is essential also for understanding the limits of democracy. In order to ensure the co-existence of conflicting values, there is a need to determine which issues require legislation and which issues should better remain beyond the intervention of the State. Furthermore, given that nobody has a well-grounded right to impose his values on others, democratic regimes should defend the limitation of the intervention of the State in individual affairs, as much as possible. Giovanni Sartori warns against direct democracy, which means a "frantic politicalization" of life that he claims was characteristic of Athenian democracy, and that runs contrary to modern democracies, which are intended to free the citizens from politics (Sartori, 1962, p. 255). To put it in Constant's words, such democracy leads to "the complete subjection of the individual to the power of the whole" (Constant, 1997, p. 594). Only when this becomes impossible, only when there are values that are incompatible with what a society regards as fundamental values, only then should the force of law be applied. Democracy, in short, calls, by its very nature, for a maximum tolerance toward values that are not shared and that are usually held by minorities (those that will never be able to become a majority, like ethnic minorities).
Since the invention of democratic elections by votes, philosophers have been trying to establish the rules for an impossible task: to find a correct majority judgment in democracy. To find it implies assuming that there are true and false judgments on what ought to be the best policy at a given time and place. Values are not based on facts and cannot be justified, since they are not derived from any ground other than being mere stands, neither true nor false, which, therefore, do not oblige anyone to follow them. My point is, let me stress, that the lack of a source of legitimation, instead of being in need of explanation, is in itself the very explanation of democracy.




Knowledge of facts and application of values

I would like, finally, to add a few words about the relationship between the knowledge of facts and the application of values, or, in short, the relationship between facts and values. The encounter between values and facts takes place only in practical thinking, which can be defined as a synthesis of knowledge of facts and the application of values. But even in this case, values and facts remain clearly distinguished from each other. The distinction between values and facts can help to clearly separate practical from non-practical thinking. Let me start by asking what it means to have non-practical thinking or to be non-practical. Non-practical thinking separates values from facts without bringing them into a synthesis. It can be of two kinds.
1. A thinking that Spinoza calls Infinite Intellect (Spinoza, 1994, Prop. 4, Part I; Props. 4, 7, 11, 16, Part II) and Max Planck calls Ideal Intellect. Planck describes it as an intellect "intimately familiar with the most minute details of physical processes occurring concurrently everywhere" (Planck, 1968, p. 147). Such an intellect cannot derive what to do solely from its knowledge, because it only refers to "what is"; it knows and understands alone, without changing anything in its object of thought.
2. A thinking with an ordered and clear value system about the political regime (or any other fact being thought of) in which its possessor would like to live, but lacking knowledge about the actual political and social situation in which he spends his life. The subjects possessing this kind of thought are unable to recommend what to do because they lack the required knowledge about their actual state of affairs. They cannot point, for instance, at the necessary means to be taken in their real world – of which they are ignorant – in order to achieve their goals or ideals.
Both are opposite extreme instances of ideal minds that are unable to make practical decisions – the first mind because it has no values, the second because it has no relevant knowledge. Keeping this distinction in mind, we can better understand that democracy is a regime resulting from value struggles, while science means the commitment to know, explain, describe and understand facts – not only natural facts, but also regimes in general and democratic regimes in particular. The study of democracies is not a matter of values. Only supporting democracy or rejecting it is a question of values, of taking stances for or against it.
That nobody has authority to impose values on anybody else is understandable only when coming to the conclusion that knowledge and taking a stand are radically different. Only then is it possible to "decide" to establish formal procedures for decision-making. That means that no rational arguments prevail in decision-making. People can explain and try to convince, but the decisions are made by counting votes, or even by drawing lots. This awareness is, in itself, a call for tolerance of unshared values. However, is my conclusion not a refutation of the idea that values are not derivable from knowledge, which was one of my central arguments?

References

Aristotle. (1932). Politics (bilingual edition). Cambridge, Mass.: Harvard University Press.
Aristotle. (1935). Athenian Constitution (bilingual edition). Cambridge, Mass.: Harvard University Press.
Balaban, O. (1990). On Justice and Legitimacy: A Critique of Jürgen Habermas' Concept of 'Historical Reconstructivism.' Zeitschrift für Philosophische Forschung, 44, 273–277.
Balaban, O. (2004). Democracy and the Limits of Tolerance. Jahrbuch für Recht und Ethik / Annual Review of Law and Ethics, 12, 349–381.
Benoist, L. de. (2003). Democracy revisited: The Ancients and the Moderns. The Occidental Quarterly, 3, 47–58.
Buchanan, A. (2002). Political Legitimacy and Democracy. Ethics, 112, 689–719. doi: 10.1086/340313
Caplan, B. (2006). The Myth of the Rational Voter. Princeton: Princeton University Press.
Constant, B. (1997 [1849]). De la Liberté des Anciens comparée à celle des Modernes. In Écrits politiques. Paris: Gallimard.
Descartes, R. (2008 [1641]). Meditations on First Philosophy. Oxford: Oxford University Press.
Ehrenberg, V. (1960). The Greek State. Oxford: Blackwell.
Fornara, C. W., & Samons, L. J., II. (1991). Athens from Cleisthenes to Pericles. Berkeley: University of California Press.
Habermas, J. (1979). Legitimation Problems in the Modern State. In Communication and the Evolution of Society. Cambridge, UK: Polity Press.
Habermas, J. (1981). Theorie des kommunikativen Handelns 1. Frankfurt am Main: Suhrkamp.
Habermas, J. (1992). Faktizität und Geltung. Frankfurt am Main: Suhrkamp.
Habermas, J. (1996). Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Cambridge, Mass.: MIT Press.
Hegel, G. W. F. (1995 [1821]). Grundlinien der Philosophie des Rechts. Hamburg: Felix Meiner.
Hegel, G. W. F. (2010 [1813]). The Science of Logic. Cambridge: Cambridge University Press.
Huemer, M. (2013). The Problem of Political Authority. New York: Palgrave Macmillan.
Hume, D. (2007 [1740]). A Treatise of Human Nature. Oxford: Clarendon Press.
Le Bon, G. (2002 [1895]). The Crowd. New York: Dover.
Locke, J. (1980 [1690]). Second Treatise of Government. Indianapolis: Hackett Publishing.
MacKay, C. (1932 [1852]). Extraordinary Popular Delusions and the Madness of Crowds. Boston: L. C. Page.
Manin, B. (1987). On Legitimacy and Political Deliberation. Political Theory, 15, 338–368. doi: 10.1177/0090591787015003005




Manning, J. F. (2011). Separation of Powers as Ordinary Interpretation. Harvard Law Review, 124, 1939–2040.
Massuh, V. (1998). Democracy: A Delicate Balance and Universality. In C. Bassiouni (Ed.), Democracy: Its Principles and Achievement (pp. 67–71). Geneva: Inter-Parliamentary Union.
May, K. O. (1952). A Set of Independent, Necessary, and Sufficient Conditions for Simple Majority Decision. Econometrica, 20, 680–684. doi: 10.2307/1907651
Montesquieu, C. de. (1989 [1748]). The Spirit of the Laws. Cambridge: Cambridge University Press.
Orwell, G. (1968). Politics and the English Language [1946]. In S. Orwell & I. Angus (Eds.), The Collected Essays, Journalism and Letters of George Orwell: In Front of Your Nose, IV (pp. 127–139). London: Secker & Warburg.
Ostwald, M. (1969). Nomos and the Beginnings of the Athenian Democracy. Oxford: Clarendon Press.
Planck, M. (1968 [1949]). The Concept of Causality in Physics. In Scientific Autobiography and Other Papers (pp. 121–150). New York: Greenwood Press.
Plato. (1952). Protagoras (bilingual edition). Cambridge, Mass.: Harvard University Press.
Raz, J. (1994). Authority, Law and Morality. In J. Raz, Ethics in the Public Domain: Essays in the Morality of Law and Politics (pp. 210–237). Oxford: Clarendon Press. (First published in The Monist, 68, 295–324, 1985.)
Raz, J. (1995). Ethics in the Public Domain. Oxford: Clarendon Press.
Rousseau, J. J. (1994 [1762]). The Social Contract. In J. J. Rousseau, Discourse on Political Economy and The Social Contract. Oxford: Oxford University Press.
Sartori, G. (1962). Democratic Theory. Detroit: Wayne State University Press.
Sen, A. (2007). Rational Choice: Discipline, Brand Name, and Substance. In F. Peter & H. B. Schmid (Eds.), Rationality and Commitment (pp. 339–361). Oxford: Oxford University Press.
Simon, H. A. (1955). A Behavioral Model of Rational Choice. The Quarterly Journal of Economics, 69, 99–118. doi: 10.2307/1884852
Spinoza, B. (1994 [1677]). The Ethics. In E. Curley (Ed.), A Spinoza Reader: The Ethics and Other Works. Princeton: Princeton University Press.
Spinoza, B. (2007 [1677]). Theological-Political Treatise (J. Israel, Ed.). Cambridge: Cambridge University Press.
Surowiecki, J. (2004). The Wisdom of Crowds. New York: Anchor Books.
Waldron, J. (1999). Law and Disagreement. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780198262138.001.0001

Waldron, J. (2013). Separation of Powers in Thought and Practice. Boston College Law Review, 54, 433–468.
Weber, M. (1968 [1922]). Economy and Society. Berkeley: University of California Press.

Chapter 8

The ethics of communication and the Terra Terra project
Giovanni Scarafile and Maria Elena Latino
University of Salento

The Terra Terra project is the result of an interdisciplinary collaboration. Several components converge in it, and they can be traced back to the following two macro-categories: on the one hand, the need for renewed food traceability in response to the demands of the food democracy movements; on the other hand, the need to provide specific and personalized information to consumers in compliance with ethical standards.

Keywords: agrifood system, blockchain, communication, ethics, food citizenship, gamification, readability indexes, smart labelling, traceability

1. Ethics, a bridge between theory and practice

The Terra Terra Project has given us the opportunity to implement a series of theoretical acquisitions based on the ethics of communication. In particular, we tested: (a) the practical relevance of communication skills, (b) the value of context in the delineation of communication modes, and (c) the role of alterity in the definition of communication practices. Furthermore, two general observations have accompanied the implementation of our project.
1. In general, while the first two areas (communication skills and the value of context) require a reference to practice for their theoretical definition, the third area – alterity – is often considered mainly from a theoretical point of view. Unlike most traditional approaches, our research has shown how the notion of alterity can help to guide communicative behaviors, thus becoming an intrinsic criterion for verifying the appropriateness of communication processes. For these reasons, none of the three above-mentioned areas can be neglected. They are indispensable conditions for identifying when communicative modes are not only effective, but also ethical.



In general, we have followed a problem-oriented approach. Moving from problems arising from contexts, we tried to find the best theoretical framework able to suggest a solution to those problems. This theoretical framework was consulted according to its versatility, that is, its ability to provide solutions to the problems we started from. The chosen inductive approach consisted not so much in considering the practical problem as a corollary of a theoretical question, but in verifying the level of adaptability of the theoretical indications to the contexts, considered as conditions of possibility for verifying the theoretical notions.
Even though we do not go into specifics, we would still like to emphasize that such an indication is consistent with what Aristotle suggested about the moral norm, which, like the rule of Lesbos, must be flexible. As the Stagirite writes in Book V of the Nicomachean Ethics, this result is achievable when a reconciliation is possible between the universal and the particular dimensions. The moral operator can find the value, which is universal, without excluding the particular situation (i.e., the context). That is why praxis, even if it is often degraded to the mere scope of abstract knowledge, is the hermeneutic horizon in which man forms himself in the act of making choices.
As a result of the chosen approach in our research, two solutions were identified: the use of readability indexes and the informative cube – discussed in Section 3.4. In the light of what has been said so far, such solutions should not be understood as simple appendices, but as the bridge that crosses the paths of theory with those of practice.




which it is obtained. The latter, however, considers first of all the occurrences of that knowledge in context. By recalling the phrònesis, the wise is able to conciliate eidetics and facticity, so avoiding the disembodied erudition. The communicative skills have the merit of filling the gap between theory and practice. If one wants to integrate the theory of communication with the practice, one should face, on the one hand, the snobbery of those who would like to preserve philosophical studies from any possible contamination with the facticity of the concrete situations of the human being, and on the other hand, the merely functionalistic approach of those who believe that communication is actually the result of the acquisition of “techniques.” In general terms, it can be said that the ethical task is to point out the paths to be followed in order to live well. The origin and source of legitimacy of the rules for living well can vary, but what remains constant is the practical approach, not being content that the solutions identified can be valid only at a theoretical level. From this perspective, therefore, ethics brings within it a tendency towards the applicability of its own principles. The point of connection between ethics, above all in its variant of communication ethics, and effectiveness is rediscovered in this evidence. It is, as we said, an intrinsic connection. Precisely in this light, it is probably possible to deduce new characteristics for the efficacy that indicates, starting from its etymological root, the ability to obtain what one wants. The ethical situation described above presents an important analogy with a characteristic of communication. In fact, communication is ad alterum in its execution. This means that it is expressed, concrete, becomes significant in function of the others, as a kind of bridge able to connect distant territories. The maximum expression of the effectiveness of communication is the ability to define as precisely as possible the characteristics of the interlocutor. If one wants to use a metaphor, the difference between effective communication and ordinary communication is similar to the difference between a dress made by a tailor based on the customer’s personalized measurements and a dress made with ordinary measurements. Outside metaphor, starting from the customer’s measurements is equivalent for the philosopher with being able to configure as exactly as possible his interlocutor or the recipient of his message. However, even such an ability would not escape the risk of being the expression of a functionalistic theory of communication in which the central element is, as we know, constituted by the idea that communication itself equals the transmission of a message. Precisely for this reason, the ability to configure the interlocutor as exactly as possible should not be separated from the ability to see in the other that one is faced with an otherness.


In philosophy, referring to otherness means alluding to a radical unavailability, that is to say to a resistance opposed to every attempt at prehension. The prehension can be either cognitive or practical. What remains valid in all these situations is that otherness expresses a kind of enigma. Precisely in this sense, Lévinas (1998, p. 141) writes “Meeting a man means being awakened by an enigma”. The already mentioned radical unavailability does not mean indifference. On the contrary, otherness, what we might otherwise call the riddle, is of fundamental interest to the I. It is a question of making sure that the prerogatives of the I are not overwhelming with respect to the prerogatives of the other. That is why we should take care of the other so that in these conditions he can express himself without any conditioning on the part of the I. This is precisely why authentic communication should start from listening. This is not so much an acoustic dimension, but an existential mode. When it is activated, communication becomes a communicative relationship. It is in listening that I prepare myself to make room for the other’s enigma. It is listening that, as Lipari writes, “enacts an infinite surplus of welcoming invitation and reception, no matter what is said or heard. The listening, in contrast to the heard, is an enactment of responsibility made manifest through a posture of receptivity, a passivity of receiving the other into oneself without assimilation or appropriation. The listening is a process of contraction, of stepping back and creating a void into which the other may enter” (Lipari, 2012, p. 237). In the final analysis, then, an appropriate relationship is a communicative relationship with an interlocutor, which cannot be reduced to the categories of the ego. It is for this reason that Gadamer (2007, p. 355) notes that “it is not that we have found out something new that makes a conversation a conversation, but that we have encountered something in the other that we have not encountered in the same way in our own experiences of the world”. The above indicates the need to define specific communicative skills: it is not, in fact, a question of simply being informed about the theoretical possibilities offered by a communication respectful of the prerogatives of the interlocutors, but of implementing strategies to communicate in an ethical and effective way: “Whoever has the capacity to get in touch with the other in a direct, creative and appropriate way, being able to start and maintaining a constructive relationship has developed an effective competence in communicating” (Giannelli, 2006, p. 34).




2. The communicative options

2.1 The readability indexes

The most famous readability index was developed in 1948 by Rudolf Flesch. It considers two linguistic variables: word length (the average number of syllables per word) and sentence length (the average number of words per sentence). Generally, there are two opposing but connected attitudes towards these techniques. On the one hand, one may risk attributing to them a sort of thaumaturgical power, capable of solving any problem internal to the text. On the other hand, an aprioristic distrust can develop towards the ability of a formula to make a text comprehensible. In our research experience, as well as in the feedback received at the international conferences where we presented the Terra Terra project, we have encountered both attitudes. Although the scientific literature has by now consolidated the field of readability indexes, in our view the perplexities we experienced can be addressed by taking up some aspects of the original debate, when scholars discussed the potential of the indexes in the attempt to fully define their features. To be explicit, in the article “A Formula for Predicting Readability,” published in 1948 by Edgar Dale and Jeanne S. Chall, the scope of the readability indexes was already well understood. With reference to the formula that would best address the need for simplification of texts, the authors write:

We do not claim that the formula developed here is definitive. The nature of multiple-correlation coefficient makes this point rather obvious. We do believe, however, that it is [a] short cut in judging the difficulty of written materials. The formula can also be used as an aid to text simplification.  (Dale & Chall, 1948, p. 19)

In other words, the same authors who supported the importance of adopting readability indexes did not fail to highlight their limits, noting that “The formula can also be used as an aid to text simplification” since “Writing should not be any harder to read and understand than it needs to be” (Dale & Chall, 1948, p. 19). Based on what we have directly experienced, we have maintained a critical attitude in adopting readability indexes, without, however, dismissing their usefulness. Readability is in fact often neglected, with the result that the prose of many scientific articles is unreadable, because each author is convinced either that readability is an irrelevant factor or, conversely, that he is a talented writer. An additional element of caution can be found in a document titled Automated Readability Index, commissioned in 1967 by the Aerospace Medical Research Laboratories. After explaining that the Air Force is interested in methods for easily extracting information from documents,


and after examining the most established methods of the time, the report comes to the following conclusions, which still seem very realistic today:

There are many factors involved in applying any readability index. A major consideration especially relevant when considering the adult reader is his background in the content area. If the written material is in his area of competency, readability would be less important than if it were in a subject matter area with which he had had little previous contact. […]. Additionally, the intent of the reader is possibly the most important factor. A person reading for recreation or general interest would probably prefer books with a relatively low readability index. The same reader searching for the solution to a specific problem of concern to him might successfully undertake the reading of a much more difficult source. Generally, the readability of a book as determined by the Automated Readability Index can only account for a portion of the factors involved in selecting appropriate written material. The background, interests and motivation of the reader and the writing style and skill of the author are possibly more important but beyond the scope of this, or any other known mathematical formula.  (Smith & Senter, 1967, p. 13)

The previous text is significant for several reasons:

a. The role of the reader’s experience is emphasized. In other words, there is no indiscriminate exaltation of abstruse formulas; rather, it is argued that the reader’s experience (what is defined, for example, as his area of competence) should be taken into account as a determining factor for the success of the indexes;
b. It is recognized that the scope of the indexes is limited.

At a more general level, we find it significant that Smith and Senter’s text refers to documents about readability indexes written as early as the 1920s. In hastening to oppose those indexes, one risks not realizing that this field of study has an important background, which probably deserves greater consideration even before being criticized. It is with this conviction that we have decided to adopt the readability indexes in the Terra Terra project.
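As an illustration of how such an index is computed, the following minimal sketch implements the Flesch Reading Ease formula (206.835 − 1.015 × average sentence length − 84.6 × average syllables per word). The syllable counter is a crude vowel-group heuristic of our own, so the sketch should be read as an approximation rather than as a reference implementation of any published tool.

import re

def count_syllables(word):
    # Crude vowel-group heuristic; real tools use dictionaries or finer rules.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease (1948): higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / max(1, len(sentences))   # average sentence length
    asw = syllables / max(1, len(words))        # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

# Prints a high score, i.e. an easy sentence.
print(round(flesch_reading_ease("Writing should not be harder to read than it needs to be."), 1))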

2.2 Gamification

Gamification applies game dynamics and gameplay mechanics in non-game contexts in order to solve problems and to engage and influence the behavior of recipients. The term is fairly recent and was popularized in 2010 by Jesse Schell during the DICE Conference in Las Vegas. 1

1. Jesse Schell’s talk is available at: https://www.ted.com/talks/jesse_schell_when_games_invade_real_life




In 2013, gamification received empirical support from a group of scholars at the University of Basel, who documented its benefits and its role in enhancing user performance (Mekler, Brühlmann, Opwis, & Tuch, 2013). In particular, these scholars conducted experiments examining the effects of three specific game design elements: points, leaderboards, and levels. The results suggest that the implementation of these game elements significantly increased the performance of users.

The mechanism of gamification works by creating a strong motivation for the user, responding to his needs and natural inclinations and exploiting the specific elements of the game (badges, challenges, rewards, etc.). Game-related concepts are designed to enhance the meaning of the experience the user wants. Gamification can be imagined as a pyramid, with the experience perceived by the user at the top. Werbach and Hunter (2012) speak of three levels of gamification: dynamics, mechanics and components. The “PBL” triad (Points, Badges, Leaderboards) comprises the components used to build the dynamics that shape the user’s gaming experience. The mechanisms that create rules, environments, gaming systems, and motivation in games are now being “dismantled” from ludic contexts in order to enter other contexts. Gaining points in order to achieve a goal is a personal challenge and has a particular effect on competitive players, while levels stimulate the challenge of gaining status and social recognition. Lastly, a reward is proof of victory, of having passed a test.

Within the Terra Terra project, we have adopted some gamification solutions to make the reading experience of product labels more engaging. In other words, just as the spectator’s involvement is central in the experience of an artwork (Scarafile, 2016), we have tried to find a way to involve the consumer when he has to acquire the information provided by the new labels sponsored by the Terra Terra project. In a sense, therefore, we have applied Huizinga’s theoretical indication, according to which a game, once played,

[it] endures as a new-found creation of the mind, a treasure to be retained by the memory. It is transmitted, it becomes tradition. It can be repeated at any time, whether it be “child’s play” or a game of chess, or at fixed intervals like a mystery. In this faculty of repetition lies one of the most essential qualities of play. It holds good not only of play as a whole but also of its inner structure. In nearly all the higher forms of play the elements of repetition and alternation (as in the refrain), are like the warp and woof of a fabric.  (1949, pp. 9–10)
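To give a concrete, if deliberately simplified, idea of how the PBL components can be combined, the following sketch models points, badges and a leaderboard in a few lines of Python. The point values, the badge names and the idea of rewarding the consultation of label views are our own illustrative assumptions, not part of the Terra Terra specification.

from dataclasses import dataclass, field

# Illustrative point thresholds for levels (hypothetical values).
LEVEL_THRESHOLDS = [0, 50, 150, 300]

@dataclass
class Player:
    """Tracks the PBL (points, badges, leaderboards) state of one user."""
    name: str
    points: int = 0
    badges: set = field(default_factory=set)

    def award(self, points, badge=None):
        # Points reward single actions; badges mark milestones.
        self.points += points
        if badge:
            self.badges.add(badge)

    @property
    def level(self):
        # Levels translate accumulated points into status.
        return sum(self.points >= t for t in LEVEL_THRESHOLDS)

def leaderboard(players):
    """Ranking by points: the 'L' of the PBL triad."""
    return sorted(players, key=lambda p: p.points, reverse=True)

# Hypothetical usage: a consumer earns points for exploring label views.
anna = Player("Anna")
anna.award(20, badge="first-scan")
anna.award(40)
print(anna.level, anna.badges, [p.name for p in leaderboard([anna, Player("Marco")])])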


3. The Terra Terra project

Terra Terra is a voluntary traceability project involving the agrifood system. “Traceability” means the possibility of looking inside the route of food from farm to fork, exploring each step in reverse. The process also concerns animal feed and animals used for the production of food or other substances, and it extends through all phases of production, processing and distribution. In industrialized countries, traceability plays an important role as part of the measures aimed at ensuring the safety and protecting the quality of food products (Banterle, 2008). The identification of all the companies involved in the processing of a given resource (from primary production to processing and marketing) allows the consumer to attribute the respective responsibilities to all the actors involved in the achievement of the final product, and to know the origin of all the raw materials making up the products, the production methods, the processes, and the modes of transport adopted. At the same time, the adoption of a traceability system effectively supports companies in all certifications, from environmental impact to product and process certification.

Within the European Community market it is possible to distinguish two levels of traceability: mandatory and voluntary. The differences between them, in terms of rules and procedures, are marked. In the mandatory traceability system, the management of information mainly involves limited procedures aimed at identifying the suppliers and customers of the companies operating at the different stages of the supply chain, thus outlining all the economic agents that make up the supply chain itself. In order to obtain a greater level of information on the route followed by products along the supply chain, from production to distribution to the end consumer, companies can turn to voluntary traceability, where information management is more detailed and complex. In voluntary systems, in fact, information does not concern only the economic agents involved in the chain: it is associated with the product, allowing us to reconstruct its history (Banterle, 2008).

The idea of Terra Terra comes from a market need, which can be summarized in the concept of “food citizenship” (see Section 3.1) and the related attention to food that leads to new market trends. However, even if product information is incorporated into the food value chain, there is no adequate communication methodology able to explain it. The traditional label approach is unsatisfactory because it shows information in a static way, whereas the modern food citizen asks for information linked to personal interests, so the appropriate product information will not be the same for everyone. From this issue arises the necessity of involving ethical communication skills in an Information Technology project.



3.1 Food citizenship and traceability

According to WTCD (2013), in the last 200 years a persistent concern has emerged about the possibility of increasing agrifood production in step with world population growth. However, the food supply has increased considerably through the adoption of capital-intensive agrifood techniques (Loewenson, 1992), biotechnology and the genetic sciences (Twardowski, 2010), the ability to protect crops from parasites, and the capability to safeguard food during the transport phase (Pandey & Agrawal, 2014). The question arises as to whether this improvement process can continue to satisfy the needs of a growing world population. The answer is probably “yes” from the point of view of quantity, but “maybe” from the point of view of product quality. Indeed, the market reveals a substantial change in the model of food consumption. The customer appears careful about personal health and worried about foods’ properties, origin and freshness. For example, such modern science-based farming techniques as genetic modification are often discussed and negatively perceived. Consumers are troubled about the sustainability of food production, and the situation is aggravated by the quantity of resources needed along the agrifood value chain. A new social behaviour called “sustainable consumption” has emerged, concerning

the possibility of the use of goods and related products which respond to basic needs and bring a better quality of life, while minimising the use of natural resources and toxic materials as well as the emissions of waste and pollutants over the life cycle.  (Seyfang, 2006, p. 384)

The aim is not to compromise the future of the next generations. Matching sustainable consumption with the current principles of production is a formidable challenge, in which the consumer plays the main role: basing his choices on the value of social responsibility, he represents a new target for the agrifood market. The dualistic view proposed by Sagoff (1988) is overcome: “consumption choices” and “consumer interests” are no longer divided or in competition as respective parts of the public and private spheres, but become parts of the same sphere (Dobson, 2003). The figure of the “citizen consumer” is born (Coff, Korthals, & Barling, 2008; Johnston, 2008; Lockie, 2009; Scammell, 2003; Schröder & McEachern, 2004): a responsible person, ethically motivated to modify their lifestyle in order to reach a model of sustainable consumption. The literature proposes various instances of this model, each emphasizing certain values – environmental, social, and food awareness – and defining, respectively, ecological citizenship (Dobson, 2003), planetary citizenship (Alexander, 2004), and food citizenship. In particular, the latter is defined as


“the practice of engaging in food-related behaviours that support, rather than threaten, the development of a democratic, socially and economically just, and environmentally sustainable food system” (Wilkins, 2005, p. 269). Food citizens need several kinds of information related to personal interests, which result in new and growing market trends such as organic food, vegetarianism, local food with short supply chains, gluten-free foods, sustainability, and so on. Market analyses confirm these trends: for example, according to IFOAM (2015), in the last fifteen years the world market for organic food has increased fivefold, reaching 80 billion dollars by 2015. Currently, sustainability standards and food certifications are used as trust protocols enabling greater consumer awareness in consumption. But in the end the result of certification is a symbol on the product label whose actual meaning is hard to know and to verify. Moreover, guaranteeing the reliability of certificates involves laborious audits, with high costs for the company that increase with the range of the selling market regions (Provenance, 2015). But is this enough? Until a few years ago, when “the web” had a different role in the lives of consumers, companies were not directly involved in their relations with consumers. Selling was the only possible task; opinion sharing was less “viral” and communication followed a top-down model. Today it is different. Websites are more than simple shop windows; user-generated content reigns supreme; and relationships on social networks are the seed point for all business, products and services (Chua, 2014). Spreading information is the key to winning on the market.

3.2 A new framework for traceability

The agrifood system includes companies (farmers or industries) that follow national regulations and use marketing strategies to promote their products. As a result, product information and market information are unconnected. Product information is located within the company in several places and formats (Grant, 1996; Spender, 1996), but marketing strategies are rarely founded on it. So, on the one hand we have companies that praise their products, on the other the skepticism of consumers. Two main issues foster this skepticism: the first concerns marketing campaigns with a low level of correspondence to reality; the second concerns the high level of complexity of the agrifood industry, which is characterized by a high variety of products and a high number of actors playing several roles in the value chain. Terra Terra, placed in the middle, aims at creating a virtual gate that enables the consumer to discover the product. The proposed framework encompasses methodological and technological solutions that synergically pursue the same goal: agrifood product valorization using traceability as a market strategy.




Realizing a technological platform able to collect data from the entire process of the agrifood value chain requires a multidisciplinary approach that leverages Industry 4.0 principles. In particular, business process modeling, the Internet of Things, and mobile technologies will allow the several value chain actors to collect information automatically, encouraging traceability. Not only is this information useful to companies in order to make their business processes more efficient, it will also be useful in order to promote their products. Indeed, the consumer will be able to consult the information collected by Terra Terra during all the phases needed to bring food from field to fork. How? Consumers, using their smartphone to photograph the QR code placed on the product’s label, will access a virtual world describing the product through an interactive and attractive strategy. Furthermore, they will be able to share the food experience with a community of specialists and friends. Tomorrow, using Terra Terra solutions, all the product information spread across the company will be collected into a single IT platform. This information will be used to create a marketing strategy based on trust and transparency, and it will be shown to the consumer using a mobile app.

3.3 Data collection methods and IT architecture

Terra Terra aims to collect information in all the phases needed to bring the agrifood product from farm to fork: farming, processing, packaging, distribution and selling. An analysis of the business process, carried out using Business Process Modeling techniques, aims to clearly define the information flow between the several roles within the company and between the several actors along the agrifood value chain. It is thus possible to act at several levels, both intra- and inter-company. Applying process modeling, it is possible to define each activity carried out by each actor and to identify the information that can be collected. Data will be collected (i) automatically by sensors, (ii) retrieved automatically from other software, or (iii) entered manually by operators. Sensors will be used following the Internet of Things paradigm, based on two key features:

–– The presence of heterogeneous non-technological items, equipped with technologies (e.g., sensors, RFID, etc.) and connected to one another in a distributed network;
–– The possibility to extract, from this network, data and information useful to realize a specific aim.  (Acquati & Bellini, 2016, p. 51; our translation)

The items require intelligence, sharing of data, and access to data analysis. The sensors will be distributed along fields, processing plants, and transportation vehicles, transforming these components into smart items able to identify, locate, measure and connect,


and implementing a new mode of communication between people and people, things and things, and things and people (Dohr, Modre-Opsrian, Drobics, Hayn, & Schreier, 2010). For example, sensors placed in a farm field can monitor the environmental conditions in which the food is produced, measuring pollution levels in the water, soil and air. Each product (or raw material) is identified by a unique code. All the information collected during the product’s farming, processing, packaging and delivery will be related to this code in order to create its history. So, a mechanism to recover and manage data is needed. But this is not enough. Even if sensor technology infuses intelligence into our wallets, clothing, automobiles, buildings, cities and biology, the Internet has serious limitations for business and economic activities: online, we cannot establish the identity of one another, or the reliability of the data exchanged (Tapscott & Tapscott, 2016). So, in order to achieve reliability of the data collected and exchanged within the proposed platform, a trust protocol is needed.

A question arises: can a single entity manage all the data about every product along the agrifood supply chain and at the same time guarantee trust? If the product data along the supply chain were gathered by a third party, that party could be impartial and able to deliver the technical capability needed to manage the whole system. However, the risk that this single entity becomes a point of weakness is high, since it represents a target for bribery or hacking. At the same time, distributing the trust protocol among various third parties implies an increase in costs and operative difficulties (Provenance, 2015). A new technology, called Blockchain, allows the use of a centralized system with a distributed governing mechanism managed automatically by third entities on the network. Thus, it is possible to certify the data collected by sensors in order to guarantee that the data avoid forgery. Blockchain technology uses a global peer-to-peer network to realize an open platform that can deliver neutrality, reliability and security of the exchanged information. The logic and the data are distributed over several machines (nodes), creating a public and shared “book.” This book is automatically updated on the network nodes. Each piece of information is accepted only if it is authenticated: a data packet is verified using a private-public cryptographic key pair following a distributed “mining” process. The digital signature allows someone to prove their identity, and each node contributes to the data validation according to the logic provided by the Blockchain system. No specific behavior is required of supply chain actors or customers; the technology itself offers a solution to provide trust and integrity. So, the data collected by distributed sensors will not be locked within a single server of a specific value chain actor; rather, the data will be stored in a shared distributed database. The information will be accessible and verifiable by all actors, and not only by the entities that issue the certificates.
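To make the idea of a verifiable product history more concrete, the following sketch shows, under our own simplifying assumptions, how events could be hash-chained to a product code so that any later alteration breaks the chain. Digital signatures and the distributed consensus (“mining”) described above are deliberately omitted, and the product code, actors and event fields are hypothetical.

import hashlib, json, time

def _hash(record):
    # Deterministic hash of a record's contents (illustrative, SHA-256).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProductHistory:
    """Hash-chained event log for one product code (e.g. a QR-coded lot)."""
    def __init__(self, product_code):
        self.product_code = product_code
        self.events = []

    def add_event(self, actor, phase, data):
        prev_hash = self.events[-1]["hash"] if self.events else "0" * 64
        event = {
            "product_code": self.product_code,
            "actor": actor,          # farm, processor, packager, carrier...
            "phase": phase,          # farming, processing, packaging, delivery
            "data": data,            # e.g. sensor readings
            "timestamp": time.time(),
            "prev_hash": prev_hash,  # links each event to the previous one
        }
        event["hash"] = _hash({k: v for k, v in event.items() if k != "hash"})
        self.events.append(event)

    def verify(self):
        """Recompute the chain; any tampered event breaks the links."""
        prev = "0" * 64
        for e in self.events:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or e["hash"] != _hash(body):
                return False
            prev = e["hash"]
        return True

# Hypothetical usage along the value chain.
history = ProductHistory("IT-TOMATO-0001")
history.add_event("Farm A", "farming", {"soil_pesticides": "no"})
history.add_event("Plant B", "packaging", {"material": "plastic"})
print(history.verify())  # True unless an event has been altered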




According to the “iceberg theory” of Polanyi (1966), knowledge can be represented as an iceberg, where explicit knowledge is just the tip, while tacit knowledge is the body of the iceberg. So, during a business process (farming, processing, packaging, delivery, etc.) not all information can be collected using sensors. Manual data collection is therefore foreseen, in order to capture the tacit knowledge of employees related to actions, behaviors and experiences. This data too, once entered into the system, will be managed with Blockchain technology; however, it is not possible to guarantee that employees will declare the truth. Finally, it is possible to recover data from other software used in the company through IT interfaces developed ad hoc.

4. How to communicate product information?

Information needs on the consumer side regarding product quality characteristics originate from personal needs. Thus, identifying a set of standard information able to satisfy these various needs is not simple. Moreover, considering that the consumer information collected by the system (concerning sex, age, history, interests, education, etc.) could itself become a target, it is necessary to develop a good communication strategy based on the principles of ethical communication. The classical approach known as “clean labelling” (Hillman, 2010) will evolve, encompassing concepts that on the one hand leverage a technological solution for traceability, and on the other are founded on communication models able to spread product information efficiently to all types of customers. The conceived communication model acts on two layers. The first is the definition of a multi-level content model able to aggregate the various information collected by the IT solution. The second is the identification of the right communication tools, able to illustrate the multi-level content model.

When customers want information about a product, they can frame the QR code on the product label using an app on their smartphone. Terra Terra displays an interactive cube that summarily describes the product. The cube is the first communication tool, able to illustrate the base level of the content model. The six faces of the cube reflect the choices offered to the customer and the possibility of exploring the product from several points of view. Each point of view represents a class of interests that emerged from a national survey conducted by our research group. First, graphical information shows an overview. This level is conceived to give a quick answer about the product’s goodness through a specific communication tool: an info-graph. Clicking on one cube face, the customer enters the second level of the content model, where the point of view is explained through several indicators.


A third level is available in order to give detailed information about the product. This level is conceived to answer the customer’s curiosity in the moments after purchase.

4.1 The “Terra Terra” communication model

The communication model of Terra Terra starts from the choice to explain the product from several points of view related to different consumer needs. It is therefore conceived to give several aggregated levels of information, and to give an answer both when a customer has limited time to explore the product and when the customer has more time. The communication model thus leverages three levels of information:

–– First level: synthetic information about the product from six points of view;
–– Second level: synthetic information about several issues related to each point of view;
–– Third level: detailed information about the product and the farmer’s history.

The model therefore has a pyramidal shape, with the level of detail increasing from tip to base (a schematic sketch of this content model is given at the end of this subsection). The six points of view that describe the product are the following:

Environmental footprint
This view describes the impact of the product on environmental sustainability. For example, in the environmental footprint view the customer can see the packaging material, the carbon footprint, polluting substances in the soil, and so on. In order to explain the role of indicators, we consider, as an instance, one of the various indicators constituting the environmental footprint view: the carbon footprint. This indicator is a measure of the exclusive total amount of carbon dioxide emissions that is directly and indirectly caused by an activity or is accumulated over the life stages of a product (Wiedmann & Minx, 2008). When buying a food product, in addition to wanting to know whether the production process was chemically intensive, a consumer may wish to know the carbon footprint of the product. Food can be energy-intensive, given factory processing and distribution methods powered by fossil fuels (the latter also known as “food miles”) (Czarnezki, 2011).

Product composition
In the product composition view the customer can see the value of the single elements in the composition of the product, such as carbohydrates, sugar or omega-3. Following the EU product labeling rules for goods sold within the Single Market, some standard basic information must be shown to the consumer.




The purpose of the product composition view is to provide complete information on the content and composition of products in order to protect the consumer’s health and interests. Therefore, it is in the best interest of both consumers and manufacturers that all product information is kept up to date with the requisite legal standards, as well as with consumers’ needs.

Wellness
In the wellness view the customer can obtain information about calories, allergens, intolerances and so on. Moreover, there is a section called “What science says” reporting the highlights of international research essays. These reports will be written using the readability indexes, in order to make sure that the text is clear to all consumers.

Product acceptance
The product acceptance view shows what the community says about the product. In this section, consumers can evaluate products or companies. The presence of a community allows the grouping of people who share a common interest, passion or point of view, encouraging public virtual relationships among users in order to expand their network, activating the participation of different users and involving more consumption classes.

Lifestyle
The lifestyle view describes the certifications obtained by the farmer. From the consumers’ point of view, certifications are currently a guarantee of the quality and sustainability of the agrifood value chain. A simple symbol on the label or packaging can lead a consumer to choose a specific product. This section brings together all the certifications of the product (gluten-free, organic food, slow food, halal certification, quality, and so on).

Take responsibility
Lastly, the “take responsibility” section represents a showcase where the farmer tells about himself, using several information tools, such as traditional text, photos or storytelling, and data about the production process. In this section, the company has the possibility to communicate its values to the community. The purpose of this view is to enable companies to upload various contents in order to detect the needs and emotions of consumers. The marketing tools are many, and in this section they can be put into practice.
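A schematic sketch of the three-level content model may help to visualize it. The dictionary layout, the view keys and the example indicator values below are our own illustrative assumptions, not the actual Terra Terra data schema.

# A minimal sketch of the three-level content model; layout and values are illustrative.
CONTENT_MODEL = {
    "product_code": "IT-TOMATO-0001",      # hypothetical QR-coded identifier
    "views": {                              # first level: the six points of view
        "environmental_footprint": {
            "score": 1,                      # synthetic 1-5 value shown on the cube face
            "indicators": {                  # second level: punctual indicators
                "packaging": "plastic",
                "carbon_footprint_kg": 3,
                "pesticides_in_soil": False,
            },
            "details": [                     # third level: detailed contents
                "What science says: research highlights (readability-checked)",
                "traceability events collected along the value chain",
            ],
        },
        # "product_composition", "wellness", "product_acceptance",
        # "lifestyle" and "take_responsibility" follow the same structure.
    },
}

def cube_faces(model):
    """First level: one synthetic score per point of view, as shown on the cube."""
    return {name: view["score"] for name, view in model["views"].items()}

print(cube_faces(CONTENT_MODEL))   # {'environmental_footprint': 1}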


4.2 Terra Terra communication tools

As explained above, the communication model offers the customer several communication layers. Each layer uses one or more tools to describe the contents. The first tool shows the contents related to the six product points of view in the cube (see Figure 1). As we mentioned above in Section 2.2, the cube represents a ludic tool, which will be used to capture the interest of the consumer, who, by turning it on the smartphone’s screen, can explore the product.

Figure 1. The interactive cube presenting the six points of view of the product.

Each face of the cube is dedicated to a single point of view and shows, using a graphical representation, the goodness of the product. Figure 2 shows the graphical representation of the environmental footprint view.

Figure 2. Graphical representation of the environmental footprint view: a scale ranging from Low to High, here showing a value of 1/5.




Using a graphical range from 1 to 5, where 1 represents a low environmental footprint and 5 a high one, the consumer can obtain immediate information about the product. This summarized graphical layer is conceived to give information quickly during the purchasing activity: when the consumer is in a market choosing between several products, this solution immediately provides the information needed to make the right choice. Clicking on the face of the cube, the customer can see several indicators. These are tools able to give punctual information about the major issues of the related view. For example, for the environmental footprint (see Figure 3), the customer can find out that the product’s impact on the environment is low, with information about the packaging material, the carbon footprint, and the presence of pesticides and polluting substances in soil and water.

Figure 3. The environmental footprint view (value 1/5) with its indicators: packaging: plastic; carbon footprint: 3 kg; pesticides in the soil: no; polluting substances in the soil: no; pesticides in the water: no; polluting substances in the water: no.
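How each face of the cube condenses its indicators into a single value on the 1–5 scale is not specified here; purely as an illustration, the following sketch shows one possible aggregation rule, with invented weights and thresholds that are not calibrated to the values shown in Figure 3.

# Hypothetical aggregation rule for illustration only; weights are made up.
def footprint_score(indicators):
    """Map environmental indicators to a 1 (low impact) .. 5 (high impact) value."""
    penalty = 0
    penalty += 1 if indicators.get("packaging") == "plastic" else 0
    penalty += min(2, indicators.get("carbon_footprint_kg", 0) // 5)  # 1 point per 5 kg, capped
    penalty += sum(1 for key in ("pesticides_in_soil", "pesticides_in_water",
                                 "polluting_substances_in_soil", "polluting_substances_in_water")
                   if indicators.get(key))
    return min(5, 1 + penalty)

print(footprint_score({"packaging": "plastic", "carbon_footprint_kg": 12,
                       "pesticides_in_soil": True}))  # -> 5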


Figure 4 summarizes the Terra Terra communication model.

Figure 4. The Terra Terra communication model. First level: synthetic information about the six points of view of the product (tools: the cube and synthetic info-graphs). Second level: synthetic information about the several issues related to each point of view (tool: indicators). Third level: detailed information about the product and the farmer’s history (tools: highlights of international research written with readability indexes, storytelling, images, videos, and the traceability information collected in the several activities of the agrifood value chain).

Conclusion

Terra Terra represents a strategy to connect several actors with different roles, giving support to communication. Terra Terra will thus act as a process of science democracy, in which communication evolves in order to satisfy new customer needs through a new communication paradigm for food, based on a model of ethics and transparency and able to transmit scientific contents using technologies familiar to customers. The Terra Terra project is the result of a concrete dialogue between disciplines. Usually, one of the main difficulties in this field is not the theory of such a dialogue, but its practice: it is objectively difficult for practices of dialogue to be started. Our project managed to accomplish this task. 2 Philosophy comes out of its niche and meets science. Science finds that the way to get results is to open up to dialogue with philosophy, which is sometimes regarded as an antagonist.

2. We are grateful to Prof. Angelo Corallo, who gave us the possibility to work together in the Core Lab at the University of Salento.




For this reason, in conclusion, we would like to quote Popper’s words: “My point [is] that the classification into disciplines is comparatively unimportant, and that we are students not of disciplines but of problems” (Popper, 1962, p. 67).

References

Alexander, G. (2004). Welcome to the Planetary Citizenship stream of T171 on the PlaNet weblog! Posted 18/2/04, PlaNet, http://www.planetarycitizen.open.ac.uk, last access 04/06/04.
Banterle, A. (2008). Tracciabilità, coordinamento verticale e governance delle filiere agroalimentari. Agriregionieuropa, 4(15).
Chua, T. S. (Ed.). (2014). Mining user generated content. Chapman and Hall/CRC.
Coff, C., Korthals, M., & Barling, D. (2008). Ethical traceability and informed food choice. In Ethical traceability and communicating food (pp. 1–18). Springer Netherlands.  doi: 10.1007/978-1-4020-8524-6_1
Czarnezki, J. J. (2011). The future of food eco-labeling: Organic, carbon footprint, and environmental life-cycle analysis.
Dale, E., & Chall, J. S. (1948). A formula for predicting readability. Educational Research Bulletin, 27(1), 11–20, 28.
Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). Gamification: Toward a definition. CHI 2011 Gamification Workshop (manuscript).
Dobson, A. (2003). Citizenship and the environment. Oxford University Press.  doi: 10.1093/0199258449.001.0001
Dohr, A., Modre-Opsrian, R., Drobics, M., Hayn, D., & Schreier, G. (2010, April). The internet of things for ambient assisted living. In Information Technology: New Generations (ITNG), 2010 Seventh International Conference on (pp. 804–809). IEEE.
Gadamer, H.-G. (2007). The incapacity for conversation. Continental Philosophy Review, 39, 351–359.
Giannelli, M. T. (2006). Comunicare in modo etico: Un manuale per costruire relazioni efficaci. Milano: Raffaello Cortina.
Grant, R. M. (1996). Toward a knowledge-based theory of the firm. Strategic Management Journal, 17(S2), 109–122.  doi: 10.1002/smj.4250171110
Hillman, J. (2010). Reformulation key for consumer appeal into the next decade. Food Rev, 37(1), 14–16.
Huizinga, J. (1949). Homo ludens: A study of the play-element in culture. London: Routledge & Kegan Paul.
IFOAM. (2015). Into the future. Annual report 2015. http://www.ifoam.bio/sites/default/files/annual_report_2015_0.pdf, last access 09/02/17.
Johnston, J. (2008). The citizen-consumer hybrid: Ideological tensions and the case of Whole Foods Market. Theory and Society, 37(3), 229–270.  doi: 10.1007/s11186-007-9058-5
Lévinas, E. (1998). La rovina della rappresentazione. In E. Lévinas, Scoprire l’esistenza con Husserl e Heidegger (pp. 141–154). Milano: Raffaello Cortina.
Lipari, L. (2012). Rhetoric’s other: Levinas, listening and the ethical response. Philosophy & Rhetoric, 45(3), 227–245.
Lockie, S. (2009). Responsibility and agency within alternative food networks: Assembling the “citizen consumer”. Agriculture and Human Values, 26(3), 193–201.  doi: 10.1007/s10460-008-9155-8
Loewenson, R. (1992). Modern plantation agriculture: Corporate wealth and labour squalor. Zed Books.
Mekler, E. D., Brühlmann, F., Opwis, K., & Tuch, A. N. (2013). Do points, levels and leaderboards harm intrinsic motivation? An empirical analysis of common gamification elements. In Gamification ’13: Proceedings of the First International Conference on Gameful Design, Research, and Applications (pp. 63–73).
Pandey, D., & Agrawal, M. (2014). Carbon footprint estimation in the agriculture sector. In Assessment of carbon footprint in different industrial sectors, Volume 1 (pp. 25–47). Springer Singapore.  doi: 10.1007/978-981-4560-41-2_2
Polanyi, M. (1966). The tacit dimension. London: Routledge & Kegan Paul.
Popper, K. R. (1962). Conjectures and refutations. New York, London: Basic Books.
Provenance. (2015). Blockchain: The solution for transparency in product supply chains. https://www.provenance.org/whitepaper, last access 08/02/17.
Sagoff, M. (1988). The economy of the earth: Philosophy, law and the environment.
Scammell, M. (2003). Citizen consumers. In Media and the restyling of politics: Consumerism, celebrity and cynicism, 117.
Scarafile, G. (2016). Etica delle immagini. Brescia: Morcelliana.
Schröder, M. J., & McEachern, M. G. (2004). Consumer value conflicts surrounding ethical food purchase decisions: A focus on animal welfare. International Journal of Consumer Studies, 28(2), 168–177.  doi: 10.1111/j.1470-6431.2003.00357.x
Seyfang, G. (2006). Ecological citizenship and sustainable consumption: Examining local organic food networks. Journal of Rural Studies, 22(4), 383–395.  doi: 10.1016/j.jrurstud.2006.01.003
Smith, E. A., & Senter, R. J. (1967). Automated Readability Index. Aerospace Medical Research Laboratories, Wright-Patterson Air Force Base, Ohio.
Spender, J. C. (1996). Making knowledge the basis of a dynamic theory of the firm. Strategic Management Journal, 17(S2), 45–62.  doi: 10.1002/smj.4250171106
Tapscott, D., & Tapscott, A. (2016). Blockchain revolution: How the technology behind Bitcoin is changing money, business, and the world. Penguin.
Twardowski, T. (2010). Chances, perspectives and dangers of GMO in agriculture. Journal of Fruit and Ornamental Plant Research, 18(2), 63–69.
Werbach, K., & Hunter, D. (2012). For the win: How game thinking can revolutionize your business. Wharton Digital Press.
Wiedmann, T., & Minx, J. (2008). A definition of ‘carbon footprint’. Ecological Economics Research Trends, 1, 1–11.
Wilkins, J. L. (2005). Eating right here: Moving from consumer to food citizen. Agriculture and Human Values, 22(3), 269–273.  doi: 10.1007/s10460-005-6042-4
WTCD. (2013). https://www.webportglobal.com/en-US/Blog-Community/WTC-Dublin-Blog/March-2013/Getting-into-the-Food-and-Beverage-Industry, last access 09/02/17.

Chapter 9

The political use of science
The historical case of Soviet cosmology
Mauro Stenico

University of Trento

From 1927 Stalin imposed several restrictions on the scientific community, as well as the ideological distinction between ‘progressive’ and ‘bourgeois’ science. Dialectical materialism became a dogma for Soviet astronomy: only the cosmological model of an infinite and eternal Universe seemed to be really ‘scientific’, since it appeared to exclude God. Claiming a beginning of space and time, the Big Bang theory was generally interpreted as a ‘bourgeois’ invention supporting creationism. Ideological influence on cosmology increased after 1945. The situation changed with Stalin’s death (1953), when Soviet astronomers undertook a serious confrontation with Western cosmology. My aim is to expose the strategies used by Stalinian astronomy to oppose Western cosmology and the aspects of the new debate with it after 1953.

Keywords: Big Bang, politics, cosmology, diamat, ideology

Introduction

The multiple relationships between science and dialectical materialism (diamat) have been the subject matter of many past publications (e.g. Graham, 1993; Haley, 1983; Kojevnikov, 1996; Vucinič, 2001). The aim of my contribution, based on a reworked version of a chapter of my doctoral research (Stenico, 2015), is to bring out and highlight the main philosophical and political aspects of the complex history of Soviet cosmology from its beginning until 1991. The historical articulation of my analysis will be the following:

1. 1917–1928. Leninian era: relative freedom for Soviet astronomers.
2. 1928–1953. Stalinian era: radicalization of the opposition between the ‘progressive’ and the ‘bourgeois’ cosmologies.
3. 1953–1964. Čruščëvian era: gradual liberalization of cosmological research.
4. 1964–1982. Brežnëvian era: modernization of Soviet cosmology.
5. 1982/85–1991. Gorbačëvian era: end of the ‘progressive’ cosmology.


The premises

1.  The cosmological background

At the beginning of the 20th century, the Universe was conceived as infinite, eternal and Euclidean. General relativity, the discovery that the Milky Way was not the only galaxy, and the role of such empirical data as the extragalactic redshifts threw traditional cosmology into a crisis. During the Twenties, the Universe began to be conceived as a finite, curved and expanding reality, thanks to innovative contributions by the Russian mathematician Aleksandr Friedmann and the cosmologist and Catholic priest Georges Lemaître. Lemaître hypothesized the origin of the Universe in the form of a primeval atom whose violent explosion had caused the cosmic expansion. He interpreted the extragalactic redshifts – i.e. the fact that the light emitted by most of the observed galaxies revealed a displacement toward the red end of their luminous spectrum – in the sense of the Doppler-Fizeau effect: in other words, galaxies were flying away. This was not the result of a free motion of galaxies in space, but the product of an expansion of the space-time substratum making the galaxies themselves move (Lemaître, 1927, 1946). An international competition for the interpretation of the empirical data was opened and generated a number of models. Step by step, the theory of the ‘dynamic’ Universe took the form of the modern Big Bang theory, which explains the cosmic expansion as the result of a catastrophic explosion that happened 13–14 billion years ago. However, despite the most considerable empirical discoveries of 20th-century cosmology – the cosmic microwave background radiation (CMBR, 1964) and the observations of the COBE satellite (1992), generally but not unanimously interpreted in favor of the Big Bang – the debate is not over.

2.  The dialectical materialism (diamat)

To get an idea of the core of Soviet cosmology, one has to know the main aspects of the diamat, the philosophical outlook of communism. Since its origins, materialism has seen matter as the sole principle explaining everything in the Universe. In the 19th century the German philosopher Friedrich Engels elaborated its ‘dialectical’ variant. All material dynamics were guided by three laws: the passage of quantitative changes into qualitative ones; the unity and conflict of opposites; the negation of negation. In Engels’ opinion the laws of dialectics came from observations and legitimized truly ‘scientific’ research. “Matter without motion is unthinkable” (Engels, 1952a, p. 62): matter and motion (evolution) were bound together from all eternity,




and they constantly gave birth to new finite and corruptible material entities in an infinite qualitative variety and succession. No intervention from ‘outside’, no Prime Mover: matter and its intrinsic property of motion were self-sufficient. Engels admired Immanuel Kant’s cosmogony of 1755, because it affirmed the plurality of the astronomical systems and cosmic infinity: its sole ‘defect’ was the initial necessity of a Creator to bring matter and the physical laws into existence. The distinction between materialist and ‘idealist’ philosophers depended on their acceptance or refusal of an eternal and uncreated Universe (Engels, 1947, p. 14). In 1906 the young Stalin defined the diamat as the sole philosophy capable of understanding the dynamics of Nature and society (Stalin, 1949). In 1909 Lenin wrote that in the Universe there was nothing but “matter in motion” (Lenin, 1946, p. 137) and incited all the materialist scientists of the world to a general struggle against ‘fideistic’ science. A clear definition of the diamat is given by the Bolšaia sovetskaia entsiklopedya (Great Soviet Encyclopedia):

A scientific worldview; a universal method of cognition of the world; the science of the most general laws of the movement and development of nature, society and consciousness. Dialectical materialism is based on the achievements of modern sciences (…) It is constantly developed and enriched as they progress. It constitutes the general theoretical foundation of Marxist-Leninist teaching. Marxist philosophy is materialistic, since it proceeds from the recognition of matter as the sole basis of the world (…) It is called dialectical because it recognizes the universal interrelationship between objects and phenomena and stresses the importance of motion and development in the world as the result of the internal contradictions operating in the world itself. Dialectical materialism is (…) the total sum of the entire preceding history of the development of philosophical thought.  (Dialectical materialism, 1975, p. 187)

3.  Partiinost’ (партийность)

Marxism refuses the conception of a ‘neutral’ science: science is not an abstruse product of the human mind, but a product of a given social and historical context. Thus ‘progressive’ science is not indifferent toward class struggle and socialist edification. On the contrary, its declared aim is to demolish all the frauds introduced by ‘bourgeois’ scientists in order to perpetuate the exploitation of the people. The word ideinost’ (идейность) indicates the loyal attitude of the man whose actions and thoughts are directed by Marxism; bezydeinost’ (безыдейность) is ideological indifference. Since the Communist Party is the political expression of the people, the instructions coming from this supreme ‘moral’ authority must be faithfully followed in order to edify socialism: partiinost’ (Party spirit) is then a fundamental component of Marxist life and represents


the ideological tendency expressing the interests of particular classes or social groups and manifested both in the social orientation of scientific and artistic creativity and in the individual viewpoints of scientists, scholars, philosophers, writers, and artists (…) Communist partiinost’ organically combines a genuinely scientific analysis of reality with a consistent defense of the interests of the proletariat (…) The idea of nonpartisanship is cultivated consciously as a hypocritical disguise for the selfish interests of the bourgeoisie. (Partiinost’, 1978, pp. 295–296)

From the new economic policy (NEP) to the first five-year plan (1928)

The NEP was officially adopted by the 10th Congress of the Party (1921) in order to stop the violence of War Communism. At the beginning of the Twenties, the diamat was not yet imposed on the sciences in its most rigid way. Some Marxist philosophers were already attacking such aspects of Western cosmology as the conception of a spatially finite Cosmos. Nevertheless, the Russian-American physicist Paul Epstein defines the NEP period as a golden age for Soviet scientists, who as private citizens could accept the diamat “or treat it with skepticism but, on the whole, were free of conscientious scruples in their profession because the scientific philosophy (…) had not yet crystallized” (Epstein, 1952, p. 31). Working on Albert Einstein’s relativistic equations, Friedmann developed the first models of a dynamic Universe (Friedmann, 1922): one of them assumed a cosmic beginning in the form of a mathematical ‘singularity’. Under Stalin such a proposal would be practically banned from science. In 1924 the astronomer Vasiliy Fesenkov founded the prestigious Astronomičesky žurnal (Astronomical journal). In 1928 the 4th Astronomical Congress was held in Leningrad, and the astronomer Grigory Šain described his time as a very interesting period in the history of astronomy (Ičsanova, 1995, p. 111). In the same year, thanks to the intervention of the Dutch astronomer Willem de Sitter, then president of the International Astronomical Union (IAU), four Soviet astronomers were allowed to attend the IAU General Assembly in Leiden (Netherlands). The Pulkovo Observatory, near Leningrad, became an international center of attraction. Still, one should never forget that Lenin was laying the premises for the subsequent definitive partition between ‘progressive’ and ‘bourgeois’ science. He indicated the edification of a ‘proletarian’ culture as one of the most significant missions of the young generation. Materialist science had to serve communism by helping the edification of a classless society and destroying the false bourgeois theories supporting religious ‘opium’ (Lenin, 1969).




The Stalinian age (1928–1953)

1.  The change of atmosphere

A great change happened in 1927, when Stalin’s strategy of Socialism in One Country triumphed at the 15th Congress of the Party. Lev Trotsky was exiled, and the other political ‘antagonists’ would be purged within a few years. Stalin’s idea was clear: before even thinking of a world revolution (Kamenev, Zinoviev), the Soviet people had to defend and strengthen the Revolution in the sole country in which it had been achieved so far. The people of this country were the only true proponents of Marxist values in science: scientific autarky required the elimination of every philosophical ‘deviation’. No mercy for the ‘enemies of the people’, especially toward religion (Stalin, 1929). Stalin’s first act of force was to stop the NEP, which had given too much freedom to the nepmany and the kulaki, who were liquidated in the name of the Revolution. The beginning of the first five-year plan (1928–1932) marked the beginning of the cultural (Stalinian) revolution. The 16th Congress of the Party (1930) made official the condemnation of the ‘saboteurs’ who were threatening the Soviet Union (Stalin, 1930). The rapid ideologization (bolshevization) of the sciences became possible thanks to the political control exerted on all publications and the infiltration of both the scientific institutions and the editorial boards of the scientific journals with ideologues appointed by the Party. For some years, first-class astronomical observatories such as Pulkovo were directed by non-astronomers. At the beginning of the Thirties the Astronomical Committee was established by the Narkompros (People’s Commissariat for Education). The professional astronomer Boris Numerov was appointed as its director. In the meantime, Stalin’s sermons against religion were exerting a certain influence on Soviet astronomers, who in 1930 wrote a letter to the Pope (Ter-Oganezov, 1930). The document was signed by such notable astronomers as Fesenkov, Pavel Parenago, Boris Vorontsov-Velyaminov and many others. Its main promoter was the astronomer and fanatical ideologist Vartan Ter-Oganezov, who had practically no technical publications on astronomical subject matter but was very loyal to the Party, a circumstance which permitted him a certain scientific career (Bronšten-McCutcheon, 1995). The accusations against the Catholic Church included collusion with capitalism and refusal of the ‘true’ diamatist conception of Nature. In 1932 the Astronomičesky žurnal published an article devoted to the tasks of astronomy during the second five-year plan (1933–1937), which included “the furthering of scientific propaganda for combating all sorts of prejudices, mainly of a religious nature” (On the planning, 1932, p. 318). 1

1. As in other cases in Soviet literature (e.g. the Great Soviet encyclopedia), here it is not possible to identify the author of the article.


A general appeal to all the scientists of the world to undertake the Revolution was then presented in the same year by the same journal. Sure, the revolutionary way required considerable sacrifices, but only “those who willfully shut their eyes believe that a revolution, whose equals has not been known in any of the preceding phases of historical development, can be a process without pain” (The appeal, 1932, p. 127). Stalin’s personal help was assured, and the appeal was signed also by Nikolai Bucharin, Aleksandr Karpinsky, president of the Soviet Academy of Sciences, and the physicists Sergei Vavilov and Abraham Ioffe. The cult of Stalin’s personality was growing rapidly. At the 1st Congress of Soviet Writers (1934) the politician Andrei Ždanov invited artists to “create works of high ideological content (…) to remold the mentality of people in the spirit of socialism” (Ždanov, 1935, p. 24). Stalin was the genius and teacher, the only leader who could help men of culture to get rid of the last “survivals of capitalism in the consciousness of people” (Ždanov, 1935, p. 17). The American astronomer of Ukrainian origin Otto Struve was horrified by the celebration of Lenin and Stalin as great scientists (Struve, 1935, p. 251). However, this was only the beginning. In 1947 Ždanov would define Stalin’s teachings as “the teachings of the entire people”: following them, “our philosophy should flourish” (Ždanov, 1950, p. 112). In 1951 Parenago would call Stalin “the greatest genius of all mankind” (Parenago, 1951, p. 109). After his return from Europe to Russia in 1931, the astrophysicist Georgy Gamow realized that the general Soviet atmosphere had changed:

When I visited friends from the Moscow University, they looked at me in bewilderment, asking why on earth I had come back (…) During the almost two years of my absence great changes had taken place in the attitude of the Soviet government toward science (…) Whereas earlier in the history of the post-Revolutionary reconstruction the government was anxious to re-establish contact with science “beyond the borders” and was proud of Russian scientists who were invited to scientific gathering in Western Europe and America, Russian science now had become one of the weapons for fighting the capitalistic world (…) Stalin created the notion of capitalistic and proletarian science. It became a crime for Russian scientists to “fraternize” with scientists of the capitalistic countries (…) Any deviation from the correct (by definition) dialectical-materialistic ideology was considered to be a threat to the working class and was severely persecuted.  (Gamow, 1970, pp. 92–93)

In 1933 Gamow and his wife were able to leave the Soviet Union forever. His former compatriots would call him a ‘traitor’ and a ‘renegade’ until the end of the Fifties. As in all dictatorships, the tendency to reinterpret the history of science in the light of the coeval ideology became a common element in the scientific journals.




In 1933 a group of scientists from all fields celebrated the diamat in Priroda (Nature), declaring it had been the instrument for the greatest discoveries in the history of all the sciences (Soviet scientists on Marx, 1933). Western science was decadent: the physicist Dmitry Blokhintsev affirmed that the time had come to get rid of the “idealistic concepts which soak into our physics from bourgeois science” (Blokhintsev, 1937, p. 549). In cosmology the sole ‘true’ model was the Engelsian one: an eternal, infinite and self-sufficient Cosmos. The curvature of cosmic space on a large scale and its global expansion were nothing but the result of Western ‘mathematical formalism’, i.e. the use of formulas without connection to physical reality. Robert McCutcheon explains that at that time Soviet astronomers were participating in the international cosmological debate only on the philosophical level, not on the instrumental one, given their material poverty:

The cosmological revolution must have taken Soviet astronomers by surprise. Telescope-poor, they simply did not have the types of instruments needed for extragalactic astrophysics, and thus they were observers, not participants, in the advances leading to the expanding universe (…) With the exception of Gerasimovič, Fesenkov, and a handful of other young astrophysicists, most Soviet astronomers probably did not have the training necessary for this type of research.  (McCutcheon, 1985, p. 100)

Even if the cosmological implications of Einstein's relativity were under attack in many Marxist circles, the theoretical physicist Matvei Bronštein was publishing in favor of the cosmic expansion (Bronštein, 1931, 1933), the cosmological interpretation of the redshifts (Bronštein, 1936) and the heat death of the Universe (Bronštein-Landau, 1934). However, some developments seemed at odds with the worsening of the ideological atmosphere: in 1931 Friedmann received a posthumous award (the Lenin Prize) for scientific merits; in 1932 some Soviet astronomers went to Cambridge (Massachusetts) to attend the 4th General Assembly of the IAU; in 1933 Leningrad hosted the 1st All-Union Conference of Nuclear Physics, which was also attended by such foreign scientific authorities as Paul Dirac; in 1935 the Soviet Union became an official member of the IAU; incredibly, the staffs of Pulkovo and other observatories were able to expel the ideological directors appointed by the Party. These were the last traces of 'astronomical freedom'. In 1936 the Course in astrophysics and stellar astronomy, edited by the new director of Pulkovo, the professional astronomer Boris Gerasimovič, was the last work under Stalin admitting the possibility of interpreting the redshifts as proof of the expansion of the Universe as a whole.


2.  The main aspects and strategies of the Stalinian astronomy

At the 17th Congress of the Party (January-February 1934) Sergei Kirov and his political ideas obtained more consensus than Stalin himself. Kirov's 'mysterious' assassination at the end of 1934 was used by Stalin and, it should never be forgotten, his companions to start one of the bloodiest repressions in human history: the Great Purges (1936–1938). Scientists, philosophers, intellectuals, writers, artists, priests, politicians and ordinary people were not spared. Even when philosophical deviations were not in themselves enough to warrant arrest or execution, they could act as an aggravating factor to 'demonstrate' someone's guilt. In this atmosphere, the Stalinian astronomy was brought to maturity: the process was neither linear nor uniform, but the analysis of many coeval contributions makes it possible to form an idea of the strategies used to fight Western cosmology.

2.1  Political defamation

Assuming a beginning of space and time, Western cosmology could easily be accused of collusion with religion and capitalism: a "camouflaged theology" (Šafirkin, 1938, p. 125) which appeared as a modern version of the creatio ex nihilo and had 'certainly' been elaborated by Lemaître on instructions from Rome (Petrov, 1949, p. 123). In this sense, Marxist philosophers got used to repeating the same intellectual formulas like a mantra to demolish this or that theory.

2.2  Alternative collocation or interpretation of the astronomical empirical data

The alternative interpretation or collocation of the astronomical data was realized in multiple ways. Some restored the model of the hierarchical Universe put forward by Johann Lambert and Carl Charlier, who maintained that the Cosmos was composed of an infinite series of independent orders of higher and lower levels. Man could not observe the Universe as a whole, but only a finite part of it. Moris Eigenson defined the Soviet astronomers as "the real and rightful inheritors of the great concept of the infinite world, populated by an infinite abundance of (…) systems of heavenly bodies of various complexity" (Haley, 1983, p. 83). In this model a given arbitrary system was physically connected with an infinitely great number of systems of lower and higher levels, but cosmology could "bear upon a concrete finite cosmic system only" (Eigenson, 1940, p. 751). The astronomer Kirill Ogorodnikov had no doubts on this point:

We refuse to look at the universe "as a whole" (…) We must consider the expanding system of galaxies as a local phenomenon. Thus there is no need to introduce such abstract and physically false concepts as four-dimensional space, a finite universe, and so on. (Bronšten-McCutcheon, 1995, p. 335)




Another alternative collocation of the astronomical data was to apply a different interpretation to them, for example the 'tired light' hypothesis put forward in 1929 by the Swiss astronomer Fritz Zwicky: the extragalactic redshifts had nothing to do with the expansion of the Universe, but depended on the energy decay of photons on their way across the Universe. Vladimir Krat, future director of Pulkovo, connected the redshifts to a certain instability of light-quanta (Krat, 1936). Whatever the real nature of the redshifts was, there was general agreement on the gnoseological distinction between vselénnaia (вселенная: the global Universe) and metagaláktika (метагалактика: the metagalaxy, i.e. the observable region of the infinite Universe): every kind of astronomical generalization to the whole Universe meant 'idealism'. The astronomer Viktor Ambartzumian explained that the expansion of the metagalaxy was only an episode in the infinite evolution of the infinite Universe (Ambartzumian, 1953, p. 286). In a certain way, the distinction between the observable region of the Universe and what lay beyond the limit of human observation was not totally unknown to Western astronomers either. During a series of lectures given in 1931, Willem de Sitter affirmed that "from the physical point of view, everything that is outside our neighborhood is pure extrapolation, and we are entirely free to make this extrapolation as we please, to suit our philosophical or aesthetical predilections or prejudices" (De Sitter, 1932, p. 113).

2.3  Liquidation of the 'enemies'

'Enemies of the people' were found even in astronomy and variously accused of fascism, Trotskyism or counterrevolutionary acts. Already at the beginning of 1934, the Minister of Justice Nicolai Krylenko warned Soviet astronomers not to remain politically disengaged: "You cannot get away from life, no matter which mountain summit you climb or which Pulkovo Observatory you hide in" (McCutcheon, 1989, p. 142). Two years later the Purges began: from 1936 to 1937, 25–30 astronomers were arrested, i.e. about 15% of the professional astronomers of the time (McCutcheon, 1989). Pulkovo was almost annihilated: people like Eigenson were lucky to survive.

In Pulkovo the astronomers were perceived as too proud and independent. Their frequent travels to foreign observatories and conferences resulted in many "suspicious" contacts in the United States and Europe. Worse, the astronomers at Pulkovo had unwittingly created a ready set of charges that could be used, in the hysteria of the times, to convict themselves and many others. (McCutcheon, 1989, p. 352)

The Soviet authorities were conducting the purges, but such figures as Ter-Oganezov and the Marxist physicist Vladimir L’vov did their best to denigrate the scientists who had fallen into disgrace. Ter-Oganezov, for example, denounced those ‘enemies’ who were sabotaging the Soviet astronomical institutions:


The organs of NKVD have discovered a band of enemies of the people in our Soviet astronomical institutions. We have a new proof of the correctness of Comrade Stalin’s warnings that the enemy is trying to penetrate all pores of the Soviet organism. (Ter-Oganezov, 1937, p. 377)

Editorials against the 'saboteurs' appeared in Pravda, but also in scientifically respectable journals like the Astronomičesky žurnal itself, which in its first issue of 1937 begged the government to have no mercy on the 'traitors'. The astronomers Gerasimovič, Numerov, Innokenty Balanovsky, Dmitry Eropkin, Nikolai Ivanov, Nikolai Komendantov, Vladimir Kozlov, Maksimilian Musselius, Evgeny Perepelkin, Sergei Selivanov, V. Surovtsev and Pyotr Yašnov died in the gulag or in prison, or were executed. Arrested and released after one or more years were the astronomers Nina Boeva, Vera Gaze, Nikolai Kozyrev, I. Leman-Balanovskaia, Naum Idelson, Aleksandr Markov, Vera Moškova, Vladimir Ščeglov and B. Šigin. In the condemnation of Kozyrev, who spent 10 years in a gulag, cosmology played a certain role:

In May 1937 he was brought to trial (…) and sentenced to imprisonment. After two years he was sent to a labor camp in Noril'sk. There, a fellow inmate denounced him for his scientific views; for example, he (naturally) supported the theory of an expanding universe, which ran contrary to Soviet doctrine. Kozyrev was sentenced to ten years (…) He appealed, and the sentence was altered to death! Luckily there was no firing squad in Noril'sk, and after a second appeal the sentence reverted to one of ten years in prison. (Moore, 1992, p. 120)

Other scientists who lost their lives or went to prison were Bronštein, Nikolai Vavilov, who had courageously opposed Trofim Lysenko, Arkady Apirin, Pavel Florensky, Vsevolod Frederiks, Boris Hessen, Nikolai Kariev, Aleksandr Kostantinov, Yuri Krutkov, Arkady Reisen, Yuri Rumer, Semen Semkovsky, Semen Šubin, Lev Šubnikov, Aleksandr Vitt and Aleksandr Weissberg-Cybulski. Thanks to the intervention of the nuclear physicist Pyotr Kapitza, the physicist Vladimir Fok was released within a few days and Lev Landau after one year. The Orwellian machinery of erasing the past was at work: these people had to be treated as 'nonpersons'. In 1940 Vorontsov-Velyaminov was excluded from candidacy as a corresponding member of the Academy of Sciences because he had favorably quoted an astronomer who had fallen into disgrace (Eremeeva, 1995, p. 318).

2.4  Ideological conferences

These conferences, promoted by the highest cultural institutions and regularly attended by the most distinguished scientists of the time, aimed to establish the 'correct' line of scientific research. Two ideological conferences were particularly relevant for Stalinian astronomy.




The first one was held at the Šternberg Observatory (Moscow) in December 1938 on the initiative of the Academy of Sciences. Astronomy had just been purged of its 'enemies', and one can hardly imagine the atmosphere in which the meeting was held. Soviet astronomers had to admit that they had not been active enough on the ideological front. Western cosmology was in a deep crisis because it rejected the only 'true' diamatist conception of the Universe. The speakers were forced to admit that 'hostile agents of fascism' had managed to penetrate the leading positions in astronomical institutions and the organs of the press, making propaganda for the 'counter-revolutionary' astronomy (Aristov, 1939). The second one was held at the Leningrad Section of the All-Union Astronomical-Geodetic Society in December 1948, in the first period of the Cold War. Once again Soviet astronomers denounced 'bourgeois' cosmology as an instrument of religious obscurantism:

The reactionary "theory" of the expanding universe rules in foreign cosmology. Unfortunately this anti-scientific theory has penetrated even into our specialist press (…) It is necessary to denounce this astronomical idealism which helps religious obscurantism. (Prokofieva, 1950, pp. 19–20)

During the final debate, L'vov criticized some Soviet astronomers because they had been too kind toward 'bourgeois' cosmogony, that "cancerous tumor that corrodes modern astronomical theory and is the main ideological enemy of materialist astronomy" (Prokofieva, 1950, p. 17).

2.5  Direct political interferences

Though rarely, in some cases politicians did enter the scientific debate directly. The case of Lysenko, personally assisted by the Party, is well known. In the case of astronomy, the most important episode happened in June 1947, when a philosophical conference was held in Moscow in order to criticize a Marxist publication on the history of astronomy which had been too weakly ideological. Ždanov took advantage of the meeting to make clear that the Party endorsed the model of an infinite Universe and the distinction between metagalaxy and Universe:

Many followers of Einstein, in their failure to understand the dialectical process of knowledge, the relationship of absolute and relative truth, transpose the results of the study of the laws of motion of the finite, limited sphere of the universe to the whole infinite universe and arrive at the idea of the finite nature of the world, its limitedness in time and space. The astronomer Milne has even "calculated" that the world was created 2 billion years ago. It would probably be correct to apply to those English scientists the words of their great countryman, the philosopher Bacon, about those who turn the impotence of their science into a libel against nature.  (Ždanov, 1950, p. 110)


3.  The last years of the Stalinian astronomy: the Ždanovščina

During the Second World War there was a partial truce on both the cultural and the religious levels. After the war the astronomical debate started again. In 1946 the physicist Evgeny Lifšitz published an article which declared that on the physical level the Universe was infinite; on the mathematical level, on the contrary, the model of a spatially finite Universe helped astronomers in their calculations (Lifšitz, 1946). The separation between the theoretical and the physical levels began to gain a certain consensus among Soviet astronomers. Furthermore, in other cultural publications ideology seemed not to be as strong as before. Enough was enough: in 1946 Stalin personally appointed Ždanov to head a campaign for the final elimination of all 'survivals of capitalism' in the consciousness of the people. The Ždanovščina (1946–1948) affected science, music, economics, literature and poetry. During the philosophical conference of 1947, Ždanov explained that every 'bourgeois' attempt to reintroduce God into the natural sciences was unacceptable. The biggest scandal was the absence of a unified Marxist philosophical front against Western culture: "Philosophical works are entirely insufficient in quantity and weak in quality (…) Our leading philosophical institute (…) presents a rather unsatisfactory picture, too" (Ždanov, 1950, pp. 100–102). The choice was clear: either a change of attitude or collapse: "All this is pregnant with great dangers, much greater than you imagine. The gravest danger is the fact that some of you have already fallen into the habit of accepting these weaknesses" (Ždanov, 1950, p. 102). Ždanov died of a heart attack in August 1948, but the spirit of the Ždanovščina survived until 1956. Not surprisingly, Stalin conceded a little more autonomy to Soviet physicists, who after 1945 were working on the atomic bomb (Stalin, 1952, p. 25). The conferences on scientific subject matters (astronomy, biology, chemistry, geology etcetera) held at the beginning of the Fifties revealed the same trend as in the Forties. The necessity of preserving the Marxist-Leninist ideology in the natural sciences was also officially confirmed by the 19th Congress of the Party (1952). In the 2nd edition of the Great Soviet Encyclopedia (1950) the entry Universe was written according to the diamatist 'necessities' (Ambartzumian, 1953). During the Korean War (1950–1953) an incident between East and West occurred inside the IAU. The General Assembly held in Zurich in 1948 had decided to hold the next meeting in Leningrad in 1951. However, given the international tension created by the Korean War, the assembly was cancelled and finally moved to Rome, in a NATO country, in September 1952. The Soviet Union never accepted the official explanation and interpreted the event as sabotage by the anti-Soviet circles of the IAU. The 8th General Assembly in Rome was nonetheless attended by a small Soviet delegation composed of four astronomers, who boycotted the papal audience of 7 September and the visit to the Specola Vaticana in Castel Gandolfo. At that time, the relations between the Vatican and the Soviet
Union were practically nonexistent: communism had been excommunicated by Pope Pius XII in 1949. In 1950 the encyclical Humani generis took a position against the diamat. On 22 November 1951, before the members of the Pontifical Academy of Sciences, the Pope declared that modern cosmology had provided a new argument in favor of cosmic contingency (Pius XII, 1951). The Holy Father was careful not to confuse the physical and the metaphysical levels: the contingency of the Universe did not mean that science had demonstrated the creatio ex nihilo, since religious dogmas remained outside the competence of physics. Marxist philosophers never accepted the papal speech and always considered it further proof of the alliance between 'bourgeois' science, religion and capitalism. No innovation appeared in Soviet astronomy until 1953, and the many publications of the late Stalinian period insisted on the correctness of the diamat and on the defects of Western science.

Soviet cosmology in the Čruščëvian era (1953–1964)

Stalin died on 5 March 1953. In a few years Soviet astronomy would pass through gradual changes, especially thanks to a new generation of young astrophysicists. The spirit of the Ždanovščina was still alive, but the 20th Congress of the Party (1956) and the secret speech of Nikita Čruščëv on the crimes of the past inaugurated a partially new course. Stalin's figure was censored in the Soviet films on Lenin and the Revolution and his books were excluded from the classics of Marxism. For Soviet astronomy – which would soon enjoy important successes in the Space Race – things began to change in 1954, when Pulkovo hosted an international meeting on astronomy with the presence of some American delegates. After 1953, such extremist ideologues as L'vov and Ter-Oganezov were gradually marginalized. The diamat experienced a process of 'remolding' which gradually led Marxist philosophers to stop condemning the Big Bang theory a priori. The premises for a new kind of confrontation with Western cosmology were ready, but the Big Bang would still generally be interpreted as a phenomenon ascribable to the metagalaxy or, if referred to the Universe as a whole, as describing only a new phase in the life of the eternal Cosmos. In December 1956 the 6th All-Union Cosmogonical Conference took place in Moscow. Ambartzumian believed the time had come for self-criticism:

Cosmological problems are somewhat neglected in the USSR while a considerable number of papers on the subject are appearing abroad (…) Theoretical papers are often unconnected with new observational data (…) It is necessary to stimulate more work in extragalactic astronomy and cosmology. (Masevič, 1957, p. 306)


An analysis of the Astronomičesky žurnal from 1924 (its founding year) to 1956 in fact reveals fewer than 10 contributions properly devoted to cosmology. Ambartzumian was asking for more connection between theory (models) and practice (observations). Furthermore, he clarified that the correct interpretation of the redshifts was the Doppler-Fizeau effect: there was no room for alternative explanations. The physicist Abraham Zelmanov, who under late Stalinism had spent some months in prison, maintained that "in the exposition of a whole series of cosmological theses, which have an ideological significance, it is important not to introduce simplifications and dogmatism" (Masevič, 1957, p. 306). Philosophers were no longer the judges of cosmology, but of course their contribution to the scientific debate was not dismissed. Zelmanov urged his colleagues toward greater cooperation with foreign journals and astronomers. The philosopher Arnošt Kolman, who had spent some years in prison under Stalin, recognized that a spatially finite Universe could be compatible with the diamat. From 1957 the Astronomičesky žurnal was regularly translated into English (as Soviet astronomy), as were many other Soviet scientific journals. In August 1958 the 10th General Assembly of the IAU was held in Moscow, with more than 1,200 participants from 36 countries. Alexei Kosygin, vice president of the Council of Ministers of the USSR, praised the fact that scientific collaboration was overcoming ideological differences: the government was proud of Soviet astronomers cooperating with the rest of the world in the name of science (Sadler, 1960, p. 13). However, proof that the fundamentals of the diamat were still active is given by the personal experience of the British astronomer Fred Hoyle, who in Moscow was privately warned not to use the word 'creation' when presenting his Steady state cosmology:

Judge my astonishment on my first visit to the Soviet Union when I was told in all seriousness by Russian scientists that my ideas would have been more acceptable in Russia if a different form of words had been used. The words "origin" or "matter-forming" would be OK, but "creation" in the Soviet Union was definitely out. (Hoyle, 1989, p. 101)

During the Sixties, the young scientists Igor Novikov, David Frank-Kamenetsky and Andrei Doroškevič became notable for their research on such modern astrophysical issues as nucleosynthesis, singularities and galaxy formation. Yakov Zeldovič became a first-class international expert in particle astrophysics. At the 11th General Assembly of the IAU, held in Berkeley (California) in 1961, Ambartzumian was elected president of the institution (1961–1964), a great honor for Soviet astronomy. Aleksandr Friedmann's works, which under Stalin had almost completely sunk into oblivion, were rediscovered and published in full. Zeldovič commented: "There are many unsolved problems
in cosmology, but their solutions are to be sought on the basis of Friedmann's theory, in the framework of the general ideas he developed (…) The correct solution came from the Soviet Union and was due to A. A. Friedmann" (Zeldovič, 1964, p. 494). In 1963 Novikov and Doroškevič predicted the existence of the CMBR (Doroškevič-Novikov, 1964), but still attributed it to the metagalaxy. To conclude, the Čruščëvian era was characterized by a great change of approach to physics and astronomy. The rest of the world was following it with great interest, as the 1963 Russian-English physics dictionary attested: "The achievements of Russian physical scientists during the last decade have stimulated universal interest in the physical literature of the Soviet Union" (Emin, 1963, p. 5).

Soviet cosmology in the Brežnëvian era (1964–1982)

Under Leonid Brežnëv – but not thanks to him – Soviet cosmology undertook its final modernization. Brežnëv and the scientific institutions (e.g. the Academy of Sciences) constantly insisted on the connection between science and partiinost'. Despite the disquieting presence of the 'Brežnëv Doctrine' in the Communist Bloc, relations with the West were characterized by multiple agreements on nuclear weapons, the joint American-Soviet Apollo-Soyuz mission (1975) and new contacts with the Vatican: in 1967 four Vatican astronomers took part for the first time in an IAU assembly beyond the Iron Curtain (Prague). In 1970 a Soviet delegation led by Ambartzumian went to the Vatican (Pontifical Academy of Sciences) for a study week on the nuclei of galaxies. As in the rest of the world, in the Soviet Union too the discovery of the CMBR brought new stimulus to cosmology: the problem of galaxy formation was becoming one of the greatest dilemmas of international astronomy. Ambartzumian addressed it by proposing a process of 'gemmation' of new galaxies from galaxies with active nuclei: old galaxies would release little 'embryos' made of galactic material, which would then grow into new galaxies. In 1966 the astrophysicist Iosif Šklovsky even gave some (little) credit to the 'enemy' Lemaître for having theorized different phases of cosmic evolution (Šklovsky, 1969). Zeldovič proposed a model attributing 10 billion years to the Cosmos and assuming an initial state in which the density was infinite (Zeldovič, 1966). The contributions of some coeval Soviet astronomers do not always make clear whether they were still attributing their speculations to the metagalaxy or to the Universe. Meanwhile, philosophers like Gustav Naan maintained that some terms used in the classics of Marxism had to be correctly reinterpreted: the word Unendlichkeit (infinity), used by Engels in the Dialektik der Natur, could mean 'boundlessness' rather than 'infinity' (Naan, 1965). Furthermore, infinity could be preserved even within a spatially finite Universe: if matter was eternal (temporal infiniteness), there would
always have been an infinite number of qualitatively different entities succeeding one another and interacting from all eternity. The problem was now a 'linguistic' one: the term 'Big Bang' was still labeled an 'American extravagance' (Zeldovič, 1969, p. 41). As late as 1976, Blokhintsev admitted he preferred 'big explosion' (Blokhintsev, 1976). Only in 1978 did 'Big Bang' appear for the first time in the English translation of the Astronomičesky žurnal (Synthesis of light elements, 1978). In the same year Naan wrote the entry cosmology for the 3rd edition of the Great Soviet encyclopedia, accepting the most recent innovations of modern astrophysics (Naan, 1976). At the beginning of the Eighties some explicitly renounced the traditional distinction between metagalaxy and Universe, for example Novikov, who wrote that "there is nothing more grandiose than the global evolution of the entire Universe" (Novikov, 1983, p. 11). In 1983 the Soviet Union launched the experiment Relikt-1 on board the Prognoz 9 satellite to study the CMBR in detail. Given the low instrumental sensitivity, the experiment did not give the hoped-for results and was later surpassed by the American satellite COBE in 1992.

Soviet cosmology in the Gorbačëvian era (1985–1991)

Mikhail Gorbačëv's appeals to the use of the diamat in the natural sciences were nothing but the last rhetorical affirmations of a dying communism. Glasnost' and Perestrojka allowed people to learn the fate of many scientists who had disappeared under Stalin. One can perhaps find a little 'astronomical pacification' between the Soviet Union and the Vatican in the words of Zeldovič, who went to Rome in 1986, shook hands with John Paul II and commented: "When I was younger I thought that science and cosmology were able to explain the origins of the universe. Today I am not so sure!" (Sunyaev, 2004, p. 278).

References

Ambartzumian, V. (1953). Das Weltall [The universe]. Sowjetnaturwissenschaft, 1, 278–291.
Aristov, G. (1939). Itogi dekabr'skoi sessii gruppy astronomii Akademii Nauk SSSR [Results of the December session of the astronomical group of the Soviet Academy of Sciences]. Astronomičesky žurnal, 16 (2), 68–79.
Blokhintsev, D. (1937). The advance of theoretical physics in the Soviet Union in twenty years. Physikalische Zeitschrift der Sowjetunion, 12 (5), 542–549.
Blokhintsev, D. (1976). The hypothesis of the expanding universe. Soviet physics doklady, 21 (7), 387–388.
Bronštein, M. (1931). Sovremennoe sostoyanie relyativistskoi kosmologii [On relativistic cosmology]. Uspekhi fizičeskik nauk, Moscow, 31 (1), 124–184.
Bronštein, M. (1933). On the expanding universe. Physikalische Zeitschrift der Sowjetunion, 4 (3), 73–82.
Bronštein, M. (1936). Über den spontanen Zerfall der Photonen [On the spontaneous decay of photons]. Physikalische Zeitschrift der Sowjetunion, 10 (5), 686–688.
Bronštein, M. & Landau, D. (1934). Über den zweiten Wärmesatz [On the second law of thermodynamics]. Physikalische Zeitschrift der Sowjetunion, 5 (1), 114–119.
Bronšten, V. & McCutcheon, R. (1995). V. T. Ter-Oganezov, ideologist of Soviet astronomy. Journal for the history of astronomy, 26, 325–347.  doi: 10.1177/002182869502600403
Dialectical materialism (1975). In A. Prokhorov, Great soviet encyclopedia (3rd edition), 8, pp. 187–192. London-New York: MacMillan Educational Corporation.
Doroškevič, A. & Novikov, I. (1964). Mean density of radiation in the metagalaxy. Soviet physics doklady, 9 (2), 111–113.
Eigenson, M. (1940). Cosmological relativity. Comptes rendus de l'Académie des Sciences de l'URSS, 26, 751–753.
Emin, I. (1963). Russian-English physics dictionary. New York: John Wiley & Sons.
Engels, F. (1947). Ludwig Feuerbach. Offenbach am Main: Bollwerk-Verlag Karl Drott.
Engels, F. (1952). Dialektik der Natur [Dialectics of nature]. Berlin: Dietz Verlag.
Epstein, P. (1952). The diamat and modern science. Bulletin of the atomic scientists, 6, 23–34.
Eremeeva, A. (1995). Political repression and personality. Journal for the history of astronomy, 26 (4), 297–324.  doi: 10.1177/002182869502600402
Friedmann, A. A. (1922). Über die Krümmung des Raumes [On the curvature of space]. Zeitschrift für Physik, 10 (1), 377–386.  doi: 10.1007/BF01332580
Gamow, G. A. (1970). My world line. New York: Viking Press.
Graham, L. (1993). Science in Russia and the Soviet Union. Cambridge: Cambridge University Press.
Haley, J. E. (1983). The confrontation of dialectical materialism with modern cosmological theories in Soviet Russia. Doctoral work, Santa Barbara (California).
Hoyle, F. (1989). Frontiers in cosmology. In M. Bappu, Cosmic perspectives (pp. 97–107). Cambridge: Cambridge University Press.
Kojevnikov, A. (1996). Games of Soviet democracy. Berlin: Max-Planck-Institut für Wissenschaftsgeschichte.
Ičsanova, V. (1995). Pulkovo-St.Petersburg. Frankfurt am Main: Peter Lang.
Krat, V. (1936). Note on the expansion of the universe. Astronomische Nachrichten, 258 (6188), 346–350.
Lemaître, G. (1927). Un univers homogène de masse constante et de rayon croissant [A homogeneous universe of constant mass and increasing radius]. Annales de la Société Scientifique de Bruxelles, 47, 79–89.
Lemaître, G. (1946). L'hypothèse de l'atome primitif [The primeval atom hypothesis]. Neuchâtel: Éditions du Griffon.
Lenin (1946). Materialismo ed empiriocriticismo [Materialism and empiriocriticism]. Milan: Edizioni Universitarie.
Lenin (1969). Über Wissenschaft [On science]. Berlin: Dietz Verlag.
Lifšitz, E. (1946). On the gravitational instability of the expanding universe. Journal of physics of the USSR, 10, 116–129.
Masevič, A. (1957). A meeting of the commission for cosmogony. Soviet astronomy, 1 (2), 306–307.
McCutcheon, R. (1985). The purge of soviet astronomy. Doctoral work, Washington DC.
McCutcheon, R. (1989). Stalin's purge of Soviet astronomers. Sky & telescope, 78 (4), 352–357.
Moore, P. (1992). Fireside astronomy. Chichester: Wiley.
Naan, G. I. (1965). Über die Unendlichkeit des Weltalls [On the infinity of the universe]. In G. Kröber, Philosophische Probleme der modernen Kosmologie [Philosophical problems of modern cosmology] (pp. 89–115). Berlin: Deutscher Verlag der Wissenschaften.
Naan, G. I. (1976). Cosmology. In A. Prokhorov, Great soviet encyclopedia (3rd edition) (pp. 188–190). London-New York: MacMillan Educational Corporation.
Novikov, I. (1983). Evolution of the universe. Cambridge: Cambridge University Press.
On the planning of astronomical research in the USSR (1932). Astronomičesky žurnal, 9, 318–319.
Parenago, P. (1951). Mir zvezd [The world of stars]. Moscow: Akademy Nauk SSSR.
Partiinost' (1978). In A. Prokhorov, Great soviet encyclopedia (3rd edition), 19, pp. 295–297. London-New York: MacMillan Educational Corporation.
Petrov, V. (1940). Nekotorje voprosy cosmology [Some questions of cosmology]. Pod znamenem marksizma, 7, 113–128.
Pius XII (1951). Le prove dell'esistenza di Dio alla luce delle moderne scienze naturali [The proofs for the existence of God in the light of the modern natural sciences]. In M. Sorondo, I Papi e la scienza nell'epoca contemporanea [Popes and science in the contemporary age] (pp. 118–129). Rome: Pontifical Academy of Sciences.
Prokofieva, I. (1950). Conférence sur les questions idéologiques de l'astronomie [A conference on the ideological questions of astronomy]. La pensée, 28, 10–20.
Sadler, D. (1960). Transactions of the IAU: 10th General Assembly held at Moscow 12–20 August 1958. Cambridge: Cambridge University Press.
Šafirkin, V. (1938). O stroenii vselennoi i nekotorych reaktsionnych ideach burzhuaznoi kosmologii [On the structure of the Universe and some reactionary ideas of bourgeois cosmology]. Pod znamenem marksizma, 7, 115–136.
Sitter, W. de (1932). Kosmos. Cambridge: Harvard University Press.  doi: 10.4159/harvard.9780674331471
Šklovsky, I. (1969). New information on the age of the universe. Journal of the British Astronomical Association, 79, 381–383.
Soviet scientists on Marx (1933). Priroda, 5–6, 4–16.
Stalin (1929). Probleme des Leninismus [Problems of Leninism]. Wien: Verlag für Literatur und Politik.
Stalin (1930). Politischer Bericht des ZK der KP(B)SU [Political report of the Central Committee of the Communist Party of the Soviet Union]. Moscow: Zentralvölker-Verlag.
Stalin (1949). Anarchismus oder Sozialismus? [Anarchism or socialism?]. Berlin: Dietz Verlag.
Stalin (1952). Reden in Wählerversammlungen [Speeches to voters' assemblies]. Berlin: Dietz Verlag.
Stenico, M. (2015). La ragionevole creazione: cosmologia moderna, ideologie del XX secolo e religione [The reasonable creation: modern cosmology, 20th century ideologies and religion]. Trento: Fondazione Museo storico del Trentino.
Struve, O. (1935). Freedom of thought in astronomy. The scientific monthly, 40, 250–256.
Synthesis of light elements in a big-bang model universe (1978). Soviet astronomy, 22 (1), 1–6.
Sunyaev, R. (2004). Zeldovich: reminiscences. London: Chapman & Hall/CRC.  doi: 10.1201/9780203500163
Ter-Oganezov, V. (1930). Otkrytoe pis'mo sovetskikh astronomov Rimskomu Pape [Open letter of Soviet astronomers to the Pope of Rome]. Izvestia, 17 March, p. 2.
Ter-Oganezov, V. (1937). Za iskorenenie do kontsa vreditel'stva na astronomicheskom fronte [For the eradication of all wrecking on the astronomical front]. Mirovedenie, 26 (6), 373–377.
The appeal of the All-Union Academy of Science to all the scientists of the world (1932). Astronomičesky žurnal, 9 (3–4), 125–128.
Vucinič, A. (2001). Einstein and soviet ideology. Stanford: Stanford University Press.
Ždanov, A. (1935). Soviet literature: the richest in ideas. In A. Ždanov, Problems of soviet literature (pp. 15–24). Moscow: Cooperative Publishing Society of Foreign Workers in the USSR.
Ždanov, A. (1950). On literature, music and philosophy. London: Lawrence & Wishart.
Zeldovič, Y. (1964). The theory of the expanding universe as originated by A. A. Fridman. Soviet physics uspekhi, 6 (4), 475–494.  doi: 10.1070/PU1964v006n04ABEH003584
Zeldovič, Y. (1966). The "hot" model of the universe. Soviet physics uspekhi, 9 (4), 602–617.  doi: 10.1070/PU1967v009n04ABEH003014
Zeldovič, Y. (1969). The "hot" universe. Vestnik of the USSR Academy of Sciences, 39 (4), 38–46.
Zeldovič, Y. & Novikov, I. (1967). Cosmology. Annual review of astronomy and astrophysics, 5, 627–648.  doi: 10.1146/annurev.aa.05.090167.003211

Chapter 10

The dialectical legacy of epigenetics

Flavio D'Abramo

Free University of Berlin

In this article I identify three major historical phases of epigenetics. The first, initiated by Child, Needham and Waddington during the first half of the last century, focused on a dialectical analysis of the biological processes between organisms and their environments. The second phase started with the Bellagio conferences organised by Waddington, where general principles derived from quantum physics were used to establish a global order underpinned by scientific, objective facts beyond ethical and moral judgments. In the third phase, which started with the failure of the Human Genome Project, there is no consensus on the operative and philosophical notion of nature – i.e. the environmental context. Finally, I highlight the necessity of reunifying knowledge and morality within epigenetics.

Keywords: social values, epistemic values, science and society, epigenetics, Charles Manning Child, Conrad H. Waddington, Joseph Needham, Brian Goodwin, research policy

Inheritance redefined

Since its inception, epigenetics has been a battlefield of a scientific and political nature, where welfarist and technocratic views have developed subtle strategies. The birth of epigenetics is usually traced back to 1942, the year of publication of the article in which the biologist Conrad H. Waddington introduced and defined the term "epigenetics" (Waddington, 2012). Nevertheless, this dating obscures the first appearance of the term, conceived earlier by Charles Manning Child, an American biologist and socialist reformer who wanted to bring democratic demands into science. In 1906 Child postulated the origin of biological forms in processes of interaction of "formative substances". With the "epigenetic origin of formative substances" Child indicated a precise model based on dynamic processes, in which "the cause of the organization is to be found in relations" between chemical elements and "not in particular substances" (Child, 1906, pp. 169–170). The epigenetics conceived by Child was therefore based on the relationship between biological entities, a relationship in which the past and the present
of the organisms and their environmental conditions gave shape to the organisms themselves. The epigenetic theory developed by Child was swept away by the geneticist Thomas Hunt Morgan, who founded the chromosomal theory, which, in contrast with Child's processual approach, identified in the genes the ultimate cause of biological characteristics. In short, Morgan made one of the most important contributions to establishing, through a stabilizing order, a deterministic biogenetic theory based on the role of genes and chromosomes. Morgan opened the way to a series of experiments providing scientific models to explain the stability and fixity of human nature. Oswald Theodore Avery, who in 1944 studied the pneumococcus, discovered that the material it inherited through generations resided in deoxyribonucleic acids. At the Rockefeller Institute of New York Avery formulated the equation "DNA = hereditary material". From 1944, when Avery's work was published, biologists began to recognize DNA as the fundamental part of genes, containing the coding information transmitted across generations (Gorelick & Laubichler, 2008). After 1953, when the work of Watson and Crick on the structure of DNA was published (F. H. Crick, 1958) and the genetic was reduced to the heritable, scientists started to identify what is genetic with DNA. According to the apt description of Gorelick and Laubichler, prior to 1953 "genetic" was synonymous with "heritable", while after that year it became synonymous with "DNA" (Gorelick & Laubichler, 2008). This operation had great scientific implications, above all for current conceptions of what we consider heritable, i.e. the intergenerational modes of transmission of the characters, features and qualities of living beings, and consequently for the role of individuals in society. In fact, the equation "genetic = DNA" defines the relationship between organisms and environments, at least as it was postulated in the molecular experiments of Avery, Watson and Crick. The theories of Avery, Watson and Crick were formulated on the basis of August Weismann's concept of the genotype, according to which the biological substance responsible for species-specific characteristics is transmitted from one generation to another in almost complete isolation from the past, present and future environments. One of the deepest implications of Neo-Darwinism, whose founding fathers can be traced to Weismann and Morgan, but also to Avery, Watson and Crick, is its determinism, that is, the idea that our fate is written and determined in our genome, in our DNA. In the germplasm or DNA the grammars of individual autonomy would thus be inscribed. The only possible changes would be random, stochastic ones, sifted by natural selection, which selects those individuals able to adapt to their contexts. In Weismann's account, random variation and natural selection also apply to the germplasm. This has meant that the only heritable molecular signals are those contained in the nucleotides of DNA, and that the DNA of predecessors, being clearly separated from the environment (the environment sifts and does not determine anything in a direct or active manner), cannot in any way transmit the ancestors' experiences to their descendants. The environment considered was in fact
the cellular one and not the one outside the body. In this sense, Weismann distanced himself from Lamarckism, according to which the relationship of organisms with their environments was transmitted to the next generations through complex biological dynamics. Weismann's operation, though today considered scientifically wrong, should however be recognized in its emancipatory intent, that is, the idea that in every generation the organism starts from scratch (Meloni, 2016), getting rid of the not always positive biological "ballast" inherited from the ancestors. This was a democratic demand – the equality of all individuals was seen as guaranteed by biological mechanisms – which distorted Weismann's scientific view. Other molecular dynamics considered in epigenetics, such as the methylation of cytosine, the formation of chromatin, and the formative and regulative roles of the cell's cytoskeleton, were dismissed as dynamics devoid of heritable information, whereas today they are considered the gates through which the environment and the organism co-determine each other. The environment acts directly on the biological dynamics of individuals, modulating the gene expression of the cells, bacteria and viruses present in every organism. Social and environmental dynamics thus exert an active influence on all organisms. These factors, coming from the social and material environments that control the DNA, are considered of an epigenetic nature and explain the way the environment modulates the behaviour of genes. However, there is no consensus definition of epigenetics. According to a recent definition, epigenetic changes affect gene expression and occur without changes in the DNA sequence (Jirtle & Skinner, 2007). In another, broader definition, epigenetics shows that what is conveyed in the development of individuals is not only genes, but also the environment and other social factors (Griffiths and Stotz in Meloni, 2016).

The reappearance of epigenetics

The slow decline of Neo-Darwinism was already clear in the Sixties. The "central dogma of molecular biology" itself (F. Crick, 1970) had received, from its inception, criticism about its methods and contents. Some representatives of the scientific community publicly expressed their deepest doubts. For example, Barry Commoner published his criticism in Nature in 1968 (Commoner, 1968), showing the incorrectness of Watson and Crick's hypothesis according to which genetic information was considered a unidirectional flow running from genes toward proteins and other outer parts of the cell. Multiple discoveries now show that the information that runs between genes and other cellular and extracellular parts is a bidirectional flow. Among these findings should be included the prions or retroviruses discovered by Stanley Prusiner and the genetic recombination discovered by Barbara McClintock.


The relationship between genetics, molecular biology and Neo-Darwinism, enabled and encouraged by the agendas of important scientific institutions, by the Research and Development lines of industrial complexes and by the associated media, has meant that the term DNA is now used in everyday language as a dead metaphor, and that the related theories were acknowledged outside the scientific community and accepted by the public as true and meaningful for everyday experience. Yet genetics, as a scientific discipline, is in crisis as never before, so much so that the most common biological features of inheritance, such as the similarity between parents and children, remain unexplained (Maher, 2008). The deep split between organism and environment created in genetics has led to the failure of its stated goals. According to Collins (2010), director of the National Institutes of Health and director of the public team that decoded the human genome, the failures accumulated by genetics during the last decade are due to a scarcity of funds. In an article that appeared in Nature, Erika Check Hayden (2010) compared genetic research to the visual experience of looking at a Mandelbrot set: as the fractal is magnified, the details increase more and more in number; likewise, as one goes into detail, the complexity of biological systems increases exponentially, while the tools for a comprehensive analysis are to date unknown. The equation "more data = more understanding", which should be considered one of the most critical theoretical impasses of the natural sciences, is the result of so-called data-driven research, that is, research based on induction (Kell & Oliver, 2004) and on large data sets. The scientific disciplines that engage with living beings have been enriched with contributions coming especially from physics, mathematics and information technology, and with the organizational structures in which the related practices take place – highly organized and at the same time fragmented. What happened was a real transfer of practices from industry to the laboratory and a technology transfer from physics to biology. What the series of conferences titled "Towards a Theoretical Biology", held in Bellagio, tried to accomplish was to combine the methods of physics and cybernetics with those of epigenetics, in order to include epigenetics within molecular biology and to connect explanation, the capacity for control, and the creation of both living systems and their environments. Twentieth-century biology is increasingly understood as a set of disciplines that enable participation in the creative dynamics of nature: therefore, in some way, an analytical knowledge aimed at creating and controlling life itself. The definition of life given by Arthur Iberall served as a guide for physicists to produce explanations for the multiple operations of living systems, so that scientists could shape, build or evaluate systems resembling living systems. Life was no longer explained through a single mechanistic scheme, but rather through multiple types of mechanical operations within a model that also included psychological factors. The body was considered a biopsychic desiring machine (Iberall, 1969). This program, therefore, brought engineering, biology
and psychology closer together, so as to actively control biological and metabolic dynamics, up to the current basic hypotheses of Systems Biology. What happened in Bellagio was a handover between the motivations that had animated the Theoretical Biology Club, a group of intellectuals who embraced a Marxist methodology and philosophy of science, and the lines of research that animated the biological and medical institutions coordinated by the Rockefeller Foundation. Epigenetics, established in the first decades of the twentieth century by Charles M. Child, Joseph Needham and Conrad H. Waddington, included not so much physics as embryology, biochemistry, genetics, the history of science and a humanistic philosophical approach. The agenda of the Rockefeller Foundation seemed rather to go in a direction where the priority was a technology transfer from physics to biology (Abir-Am, 1988): a direction, thus, completely different from the one taken by Needham. Joseph Needham in fact embraced the theses of "Marxist philosophy" and "dialectical materialism" (Needham, 1936, pp. 45–46), in which the order of physics is not the order of biology: "biological order is a form of order different from those found in physics, chemistry, or crystallography" (Needham, 1936, p. 45). The program to which Needham referred followed the lines of research drawn up in 1931 in London, at the International Congress of the History of Science and Technology, where Zavadovsky emphasized the need to find the appropriate method for each phenomenon analyzed and to abandon "the violent identification of the biological and the physical" (Zavadovsky, 1931, p. 74). At the same conference of 1931 the Russian delegate also noted the necessity of giving up rigidly disciplinary research:

it is necessary to renounce both simplified identification and reduction of some sciences to others, as the supporters of the mechanistic and positivist currents in the sphere of natural science strive to do, and sharp demarcation and drawing of absolute watersheds between the physical, biological and socio-historical sciences.  (Zavadovsky, 1931, p. 80)

So in his program Needham, who moved closer to Zavadovsky's positions and methodologically diverged from the Rockefeller lines of research, placed at the center both transdisciplinarity and the specificity/autonomy of biological methods.

The new epigenetics and the gap between knowledge and morality

The two political frameworks that have polarized genetic theories were, and still are, those of dialectical materialism and neo-liberalism. The final result consisted in the creation of a biological theory disengaged from the environment, in which environmental factors are mere variables of calculation that acquire their meaning mainly through methodological and epistemic values. In this context a
new theoretical biology was formulated, able to overcome the rigidities of Neo-Darwinism and the gene-centrism on which a large part of biology rested (Waddington, 1968). Within the Bellagio conferences the metaphysical program at the base of the new theoretical biology was also redefined, understood as a redefinition of the postulates regarding the status of the objects and processes of this new discipline. Through the union of quantum physics and biology, David Bohm took the concept of order, derived from the intellectual, instrumental and constructive capacities of the human mind, as the first metaphysical element, capable of governing every type of "perception" outside space and time (Bohm, 1969, pp. 41–43). Bohm, who replaced the space-time categories with an a-temporal one, performed an operation that served as a keystone: he placed our daily phenomena outside of space and time, thus distorting their meaning and common usage. This point was criticized by Brian Goodwin (1931–2009), who attended one of the last conferences organized by Waddington under the auspices of the Rockefeller Foundation. Goodwin called for the resumption of alchemical values, namely the recovery of scientific/humanistic analysis and approaches far beyond the principle of causality. Goodwin, a mathematician and biologist, made a large contribution to the development of the systemic approach through the use of formal models, which he mostly applied to developmental biology. Goodwin's reflection on the scientific dynamics of subjectivity is therefore unique, as it comes from within science itself. A few years later, responding to David Bohm in the fourth volume of the Bellagio conference proceedings, Brian Goodwin argued that ethical choices placed outside the space-time context are not possible. In short, he pointed out that universal principles are not ethical: located outside of space and time, any choice is simply beyond ethics. To retrieve an ethical/critical kind of thinking, Goodwin remarked, it is necessary to refer instead to the context: "ethical choice becomes context dependent rather than universal" (Goodwin, 1972, p. 275). Goodwin drew on the alchemical tradition, in which alchemists were able to blend science and morality:

The aspect of alchemy that is so central to the present enquiry is the fact that this tradition attempted to fuse knowledge and meaning by combining science (scientia, knowledge), the study of natural process, with morality, man's attempt to realize his own perfectibility and self-fulfillment, itself a continuous process.  (Goodwin, 1972, p. 262)

So the criticism that Goodwin leveled at the universalization of science consisted in showing how this process has led humans away from their fulfilment and from the ability to embrace moral/ethical judgments and decisions. The alchemist, while carrying out his research on natural phenomena, changed himself; he actively learnt from the very nature he observed. In short, the scientist was part of the same nature he watched. What happens with passive observation is instead represented by a
machine which, while recording phenomena, does not change its internal state, except in a deterministic manner, similarly to what happens within a Turing machine. With this mechanical process the scientist will be able to penetrate, understand and control nature, Goodwin wrote, but without being a partaker. Knowledge and power are split off from meaning and morality:

Contemporary science has essentially chosen the course of knowledge and power, split off from meaning and morality because knowledge has become an end in itself. Scientists today do not expect to be morally transformed by their activities. They do not, in fact, participate in a relationship with the world that acknowledges the autonomy and the ultimate inviolability of natural processes, the condition for a dialogue with Nature, which has now become something to be penetrated, known, and controlled. (Goodwin, 1972, p. 263)

Of particular interest are the critiques of scientific objectivity, which Goodwin indicates as a dynamic that prevents observers – engineers or scientists – from actively participating in the natural phenomena analysed. Objectified scientific activity deprives researchers of moral and ethical meanings, a game that transforms the relationship between man and nature into one of submission and coercion:

participation in a process of mutual transformation is in fact expressly ruled out by the contemporary ideal of objective observation, preferably by a machine which cannot change its state except in response to the particular events which it is designed to record. The knowledge so obtained is regarded as neutral, without moral 'contaminants'. It can be used beneficently or malevolently, but it is always used to exercise control over the world, certainly not to transform oneself.  (Goodwin, 1972, p. 263)

We witness a disconnection between experience – which is usually lived in a unified manner, à la Whitehead – and reason. Through reason, scientific truths are instead produced that are based on alienation from experience: "[…] truth has become so refined in its meaning and significance, so intellectualized and divided from the experience, that its moral force has been severely attenuated" (Goodwin, 1972, p. 264). Goodwin then criticizes man's pretension to play God, to control nature without changing himself, as if the observer were neither part of this dialogue nor part of nature:

science as we know it today has largely opted to pursue the course of manipulation and power, drawing us inevitably into a Faustian crisis which arises from the irreconcilability of manipulation and wisdom. To manipulate wisely one must be wiser than Nature, wiser than Man, for both must be manipulated; hence one must be God. Faust found that only the Devil would play this game with him, tempt this hubris. The corruption arises with the decision to manipulate rather than to engage in a dialogue, the decision to be Master rather than partner.  (Goodwin, 1972, p. 263)


Goodwin refers to Carl Jung (Goodwin, 1972, p. 264), who recalled man's ability to maintain a dialogue with nature capable of changing his perception, a dynamic that has little to do with the accumulation of material goods or knowledge that dominates our age:

the essence of the alchemical process is quite clearly a two-way relationship between the adept and nature, both undergoing transformation as occurs in a true dialogue. The basis of this relationship was the belief that Nature has an innate tendency to seek a state of perfection, matter transforming into immortal, imperishable gold. Likewise, man has a longing for perfection. He can thus learn from Nature and at the same time assist her in her striving. (Goodwin, 1972, p. 262)

What, according to Goodwin, swept away man's "wisdom of alchemy" – his ability to combine knowledge and morality – and thus separated reason from experience was, in part, the Church: "the current misconception of alchemy is due to the fact that it was vigorously discredited as heretical by the Church throughout the Middle Ages and the Renaissance because, like all mystical cults, it believed that God should be experienced, not simply believed in by the faithful" (Goodwin, 1972, p. 262). Today one can have only faith in God and no experience of him, and so one presumes to be able to control all natural phenomena, a process that has led to our separation both from nature and from ourselves. Goodwin's criticism applies well to the current scientific enterprise. Indeed, in most of its current instances, which derive from the neoliberal framing established during the Bellagio conferences, epigenetics might be a way to know the dynamics of biological units and to relate them to their surrounding environments; yet these instances of epigenetics are, most of the time, emptied of the meaning that researchers could give them.

Reframing epigenetics

Epigenetics can find very varied applications, and it can be used to give multiple, sometimes opposed, meanings to the concept and status of the environment, and therefore to the concept and status of human nature. For example, through epigenetics we can interpret the early stages of an organism's development as biologically and psychologically constitutive, phases characterized by specific interactions with the other organisms of the community to which it belongs – for example through the material, socioeconomic context in which the individual grows up (Barker, 2007). In this case, environmental factors such as nutrition, the care received and climatic factors are constitutive of organisms as biological, psychological, and therefore cultural beings. Depending on the factors taken into account, one can confine attention to the environment of the cell, limiting the environment to the organism's internal borders. In this way a disease can be considered only through cellular dynamics, so as to exclude factors outside the organism, creating an image of the human body very similar to that of a machine. In this regard it is interesting to note that methylation, the chemical bond which today plays a pivotal role in epigenetics, was shown for the first time in 1969 to be a constitutive dynamic of the nerve cells involved in memory, through the use of a computational/theoretical model (Griffith & Mahler, 1969). Alternatively, one may consider the body as part of a social and material system, thus including other aspects that characterize our societies and ecosystems. In this case epigenetics may be useful for identifying the environmental factors that contribute to all phases of development, as happens in ecological developmental biology (Gilbert & Epel, 2009), or the socially and culturally relevant dynamics that directly affect the structure and biology of organisms, through disciplines and fields of research such as epigenetic epidemiology – see the special issue of the International Journal of Epidemiology, 2012 (volume 41, number 1).

The meaning that researchers give to their activities is set mostly by the development and use of so-called "epistemic values", that is, the methodological characteristics of scientific practices. These scientific, epistemic values have often been presented as opposed to ethical or social values (Meloni, 2016). The role, meaning and ethical values to be conferred on epigenetics are bound up with the scientific, cultural and working structures within which each scientist operates. The individual scientist can nonetheless still join both with his peers and with the people who will benefit from his work, in order to give shape to epistemological and ethical reflections and to overcome the old dichotomies of facts/values, nature/culture and epistemology/morality. The current scientific phase allows certain steps towards the reunification of these dichotomies: science is in fact acquiring more and more diverse social roles, and the European Union has raised the issue of the importance of public involvement. In this communitarian context the public is conceived not only as a subject producing information relevant to scientific work, but also as a subject capable of producing significant ethical judgments that characterize science as a social activity (European Commission, 2014). To realise this proposal, which shortens the distance between the pessimism of reason and the optimism of the will, it would be useful to trace the vast repertoire of conflicts between civil society and those scientific institutions used for purposes of control and domination of nature. A project which deepens the theoretical and operational developments that occurred in the history of epigenetics, so as to connect them with current lines of research, will be useful in identifying the elements of continuity and rupture between old and new ideologies. This may allow a greater awareness among those who participate in scientific activities: scientists, technicians and the citizenry.


Conclusion

With the assumptions of the first epigenetics, scientists sought to couple organisms with their environments, so as to trace the not always mechanical dynamics that combine, in a dialectical way, natural forms with their development. In its second phase, set out at the Bellagio conferences, epigenetics was placed in the service of the establishment of a global order of a scientific kind, able to coordinate the human population (Bastin, 1969). The latest steps of this scientific phase see epigenetics rooted in epistemic values that, once again, are detached from moral and social contents and meanings. Opening spaces of historical, philosophical, anthropological, sociological and political reflection will help us to understand and build a science capable of reflecting on itself, so that we can appreciate the most ineffable and non-marketable aspects of our culture, within which scientific practices are located.

References

Abir-Am, P. (1988). The assessment of interdisciplinary research in the 1930s: The Rockefeller foundation and physico-chemical morphology. Minerva, 26(2), 153–176. doi: 10.1007/BF01096694
Barker, D. J. (2007). The origins of the developmental origins theory. Journal of Internal Medicine, 261(5), 412–417. doi: 10.1111/j.1365-2796.2007.01809.x
Bastin, T. (1969). A general property of hierarchies. In C. H. Waddington (Ed.), Towards a Theoretical Biology. 2 Sketches (pp. 252–264). Edinburgh: Edinburgh University Press.
Bohm, D. (1969). Further remarks on order. In C. H. Waddington (Ed.), Towards a Theoretical Biology. 2 Sketches (pp. 41–60). Edinburgh: Edinburgh University Press.
Check Hayden, E. (2010). Human genome at ten: Life is complicated. Nature, 464(7289), 664–667. doi: 10.1038/464664a
Child, C. M. (1906). Some considerations regarding so-called formative substances. Biological Bulletin, 11(4), 165–181. doi: 10.2307/1535549
Collins, F. (2010). Has the revolution arrived? Nature, 464(7289), 674–675. doi: 10.1038/464674a
Commoner, B. (1968). Failure of the Watson-Crick theory as a chemical explanation of inheritance. Nature, 220(5165), 334–340. doi: 10.1038/220334a0
Crick, F. (1970). Central dogma of molecular biology. Nature, 227(5258), 561–563. doi: 10.1038/227561a0
Crick, F. H. (1958). On protein synthesis. Symposia of the Society for Experimental Biology, 12, 138–163.
European Commission. (2014). Background document: Public Consultation 'Science 2.0': Science in Transition. Directorates-General for Research and Innovation (RTD) and Communications Networks, Content and Technology (CONNECT). Retrieved from http://ec.europa.eu/research/consultations/science-2.0/background.pdf
Gilbert, S., & Epel, D. (2009). Ecological Developmental Biology: Integrating Epigenetics, Medicine, and Evolution. Sunderland, MA: Sinauer Associates.
Goodwin, B. C. (1972). Biology and meaning. In C. H. Waddington (Ed.), Towards a Theoretical Biology. 4 Essays (pp. 259–275). Edinburgh: Edinburgh University Press.
Gorelick, R., & Laubichler, M. (2008). Genetic = Heritable (Genetic ≠ DNA). Biological Theory, 3(1), 79–84. doi: 10.1162/biot.2008.3.1.79
Griffith, J. S., & Mahler, H. R. (1969). DNA ticketing theory of memory. Nature, 223(5206), 580–582. doi: 10.1038/223580a0
Iberall, A. S. (1969). New thoughts on bio control. In C. H. Waddington (Ed.), Towards a Theoretical Biology. 2 Sketches (pp. 166–178). Edinburgh: Edinburgh University Press.
Jirtle, R. L., & Skinner, M. K. (2007). Environmental epigenomics and disease susceptibility. Nature Reviews Genetics, 8(4), 253–262. doi: 10.1038/nrg2045
Kell, D. B., & Oliver, S. G. (2004). Here is the evidence, now what is the hypothesis? The complementary roles of inductive and hypothesis-driven science in the post-genomic era. BioEssays, 26(1), 99–105. doi: 10.1002/bies.10385
Maher, B. (2008). Personal genomes: The case of the missing heritability. Nature, 456(7218), 18–21. doi: 10.1038/456018a
Meloni, M. (2016). Political Biology: Science and Social Values in Human Heredity from Eugenics to Epigenetics. Basingstoke, UK: Palgrave Macmillan.
Needham, J. (1936). Order and Life. New Haven, CT: Yale University Press.
Waddington, C. H. (1968). The basic idea of biology. In C. H. Waddington (Ed.), Towards a Theoretical Biology. 1 Prolegomena (pp. 1–31). Edinburgh: Edinburgh University Press.
Waddington, C. H. (2012). The epigenotype. 1942. International Journal of Epidemiology, 41(1), 10–13. doi: 10.1093/ije/dyr184
Zavadovsky, B. (1931). The "physical" and the "biological" in the process of organic evolution. In N. I. Bukharin (Ed.), Science at the Cross Roads (pp. 69–80). London: Kniga.

Index

A
Affordance  71
Agrifood system  145, 152, 154, 156
Amateurization  71–72
Argumentation  35, 39–40, 43–44, 51, 57, 76, 85, 119, 135
Articulation  53, 58, 60, 67, 122, 165

B
Big Bang  165–166, 177, 180, 182
Black box arguments  71, 75–77, 81–82, 84–85
Blockchain  145, 156–157, 164
Brian Goodwin  185, 190

C
Charles Manning Child  185
Cognitive niches  71, 85
Collective epistemology  15, 35, 39, 41, 49–51
Communication  15, 30–32, 71, 73–77, 82–85, 90, 142, 145–149, 151–153, 155, 157–163, 194
Conrad H. Waddington  185, 189
Content analysis  64, 78, 87, 109, 163
Controversies  3–5, 7–8, 13, 50, 87, 93, 109, 139
Cosmology  15, 165–166, 168, 171–172, 174–175, 177–183

D
Democratic culture  32, 71, 85
Dialectics  35, 40, 44–45, 47–50, 166
Diamat  165–170, 175–178, 180–181
Disagreement  13, 25–26, 31, 35, 37–41, 43, 45–47, 49–51, 130, 135, 143
Donald Trump  73, 109–110, 123

E
Epigenetics  185, 187–189, 191–195
Epistemic bubble  15, 71–72, 78, 81–82, 85
Epistemic values  38, 133, 136, 185, 189, 193–194
Epistemology of ignorance  71
Ethics  4, 16, 85, 100, 125, 142–143, 145–147, 149, 151, 153, 155, 157, 159, 161–163, 190
Experts  12, 14–15, 17, 19, 20–23, 25–27, 29, 31–33, 35–45, 47, 49–51, 53–57, 59–63, 66–68, 71–72, 77–78, 81–82, 87, 111, 115, 127, 129, 135, 137, 146
Expert epistemology  15, 35, 39–41, 49–51, 71
Expert opinion  11, 14–15, 17, 19, 27, 35–38, 40, 42–45, 47, 49, 56–57, 69, 73, 75, 77, 79, 82–83, 87

F
Fact/value dichotomy  13, 17, 61
Facts  8, 12, 14, 24–25, 30, 37, 58, 61–64, 67, 69, 75, 77–78, 96, 99, 115–117, 127–128, 131–134, 136–137, 139–142, 185, 193
Food citizenship  145, 152–153, 163–164

G
Gamification  164
Genetically modified plants  90–91, 97

H
Hungary  15, 50, 87, 89–97, 99–105, 107

I
Ideology  93, 113, 119, 165, 170, 176, 183
Inductive risk  17
Interregnum  109–110, 116–117, 119, 122, 124–125
Invasive acacia  15, 87, 89–91, 93, 95, 97, 99, 101–105, 107

J
Joseph Needham  185, 189

L
Local knowledge  14, 17, 20, 23–27, 30–32, 36, 62, 66, 68, 89, 164

M
Media  15, 17, 19, 23, 56, 58, 63, 73, 75–78, 80–85, 87, 90, 93, 95–100, 103–105, 111–115, 133–134, 161, 164, 187–188

O
On-line communities  71, 77–79, 81–82

P
Philosophy of expertise  20, 31–33, 53, 55, 60, 84
Policy  11, 87–88, 90, 129, 140, 168, 185
Politics  3, 37, 55, 68, 73, 77, 83, 87–90, 99, 101, 105–106, 109–111, 114–115, 119–120, 125, 127, 140, 142–143, 164–165
Post-truth  35, 73, 123–124
Practical knowledge  14–15, 30, 76, 88, 127, 140–141, 146–147
Pragma-dialectics  35, 45, 50
Pragmatism  53–55, 61, 68–69
Public  7–8, 10–12, 14–17, 19, 32–33, 35–37, 42, 44–45, 49, 53–69, 74–77, 82, 84–85, 91, 93–94, 99–100, 102, 106, 109, 111–116, 119, 121, 123–129, 136, 143, 153, 156, 159, 165, 169, 175–176, 178, 185, 187–188, 193–194
Public opinion  7–8, 11, 14–15, 17, 19, 35–37, 42, 44–45, 49, 56–57, 69, 75, 77, 82, 99–100, 112, 121

R
Rationality  16, 35–36, 46, 48–49, 51, 53, 134–137, 143
Readability indexes  145–146, 149–150, 159
Research policy  11, 87–88, 90, 185
Rhetoric  47–49, 76, 87, 113, 163, 180

S
Science  7–17, 19–21, 25, 29, 31–33, 38, 44, 50, 53–56, 58, 61–62, 64–68, 71–77, 80, 83–85, 87–90, 93, 99, 105–107, 109–112, 114–115, 117–123, 125, 127, 129, 131, 133, 135–137, 139–143, 153, 159, 162, 165, 167–171, 173–177, 179–183, 185, 188–191, 193–195
Science and politics  55, 68, 73, 77, 83, 87–90, 99, 105–106, 109–111, 114–115, 119–120, 127, 140, 142–143, 165
Science and society  8–11, 13–16, 32, 55–56, 67, 71, 74–75, 88, 90, 93, 105, 139, 140, 142–143, 167–168, 175, 183, 185, 193
Science communication  15, 31–32, 71, 73–77, 83–85, 90, 142, 153, 159, 162, 194
Science journalism  87, 143
Smart labelling  145, 157
Social values  8–9, 12–13, 116, 122, 131, 133, 135–136, 141, 153, 169, 185, 193–195
Strategic manoeuvring  35, 46–47

T
Technical democracy  32, 53, 55, 61, 67
Tolerance  9, 15, 91–92, 94, 100, 127, 140, 142, 159
Traceability  15, 145, 152–155, 157, 162–163

V
Values  8–9, 12–13, 16, 36–38, 61, 98, 109, 116, 122–123, 127, 129–131, 133–137, 139–142, 153, 159, 163–164, 169, 185, 189–190, 193–195

The relationship between science and democracy has become a much-debated issue. In recent years, we have even seen an exponential growth in literature on the subject. No doubt, the interest has partly been justified by the concern of public opinion over the technological repercussions of scientific research. Moreover, there are scientific theories that, if they were accepted, would allegedly imply the adoption of policies that have wide social consequences, as well as a rethinking of deeply-rooted habits on the part of the citizens. These considerations alone allow us to understand the reasons for the interest in the, at times troublesome, relationships between science and public opinion which characterize democratic societies.

ISBN 978 90 272 0074 7

John Benjamins Publishing Company