THE ROUTLEDGE HANDBOOK OF APPLIED EPISTEMOLOGY
While applied epistemology was neglected for much of the twentieth century, it has seen growing interest in recent years, with key thinkers in the field helping to put it on the philosophical map. Although it is an old tradition, current technological and social developments have dramatically changed both the questions it faces and the methodology required to answer those questions. Recent developments also make it a particularly important and exciting area for research and teaching in the twenty-first century. The Routledge Handbook of Applied Epistemology is an outstanding reference source to this exciting subject and the first collection of its kind. Comprising entries by a team of international contributors, the Handbook is divided into six main parts:

• The Internet
• Politics
• Science
• Epistemic institutions
• Individual investigators
• Theory and practice in philosophy
Within these sections, the core topics and debates are presented, analyzed, and set into broader historical and disciplinary contexts. The central topics covered include: the prehistory of applied epistemology, expertise and scientific authority, epistemic aspects of political and social philosophy, epistemology and the law, and epistemology and medicine. Essential reading for students and researchers in epistemology, political philosophy, and applied ethics, the Handbook will also be very useful for those in related fields, such as law, sociology, and politics.

David Coady is Senior Lecturer in Philosophy at the University of Tasmania, Australia. He is the author of What to Believe Now: Applying Epistemology to Contemporary Issues (2012), the co-author of The Climate Change Debate: An Epistemic and Ethical Enquiry (2013), the editor of Conspiracy Theories: The Philosophical Debate (2006), and the co-editor of A Companion to Applied Philosophy (2016).

James Chase is Senior Lecturer in Philosophy at the University of Tasmania, Australia. He works on epistemology; philosophical logic, particularly as applied to epistemological issues; and the methodology of analytic philosophy. He is the co-author of Analytic vs Continental (2011) and the co-editor of Postanalytic and Metacontinental (2010).
ROUTLEDGE HANDBOOKS IN PHILOSOPHY
Routledge Handbooks in Philosophy are state-of-the-art surveys of emerging, newly refreshed, and important fields in philosophy, providing accessible yet thorough assessments of key problems, themes, thinkers, and recent developments in research. All chapters for each volume are specially commissioned, and written by leading scholars in the field. Carefully edited and organized, Routledge Handbooks in Philosophy provide indispensable reference tools for students and researchers seeking a comprehensive overview of new and exciting topics in philosophy. They are also valuable teaching resources as accompaniments to textbooks, anthologies, and research-orientated publications.

Also available:

THE ROUTLEDGE HANDBOOK OF METAETHICS
Edited by Tristram McPherson and David Plunkett

THE ROUTLEDGE HANDBOOK OF EVOLUTION AND PHILOSOPHY
Edited by Richard Joyce

THE ROUTLEDGE HANDBOOK OF COLLECTIVE INTENTIONALITY
Edited by Marija Jankovic and Kirk Ludwig

THE ROUTLEDGE HANDBOOK OF SCIENTIFIC REALISM
Edited by Juha Saatsi

THE ROUTLEDGE HANDBOOK OF PACIFISM AND NON-VIOLENCE
Edited by Andrew Fiala

THE ROUTLEDGE HANDBOOK OF CONSCIOUSNESS
Edited by Rocco J. Gennaro

THE ROUTLEDGE HANDBOOK OF PHILOSOPHY AND SCIENCE OF ADDICTION
Edited by Hanna Pickard and Serge Ahmed

THE ROUTLEDGE HANDBOOK OF MORAL EPISTEMOLOGY
Edited by Karen Jones, Mark Timmons, and Aaron Zimmerman

For more information about this series, please visit: www.routledge.com/Routledge-Handbooks-in-Philosophy/book-series/RHP
THE ROUTLEDGE HANDBOOK OF APPLIED EPISTEMOLOGY
Edited by David Coady and James Chase
First published 2019
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

and by Routledge
711 Third Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2019 selection and editorial matter, David Coady and James Chase; individual chapters, the contributors

The right of David Coady and James Chase to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record has been requested for this book

ISBN: 978-1-138-93265-4 (hbk)
ISBN: 978-1-315-67909-9 (ebk)

Typeset in Bembo
by Out of House Publishing
CONTENTS

Notes on contributors

PART I: Introduction

1 The return of applied epistemology (James Chase and David Coady)

PART II: The Internet

2 The World Wide Web (Paul Smart and Nigel Shadbolt)
3 Wikipedia (Karen Frost-Arnold)
4 Googling (Hanna Kiri Gunn and Michael P. Lynch)
5 Adversarial epistemology on the Internet (Don Fallis)

PART III: Politics

6 John Stuart Mill on free speech (Daniel Halliday and Helen McCabe)
7 Epistemic democracy (Jason Brennan)
8 Epistemic injustice and feminist epistemology (Andrea Pitts)
9 Propaganda and ideology (Randal Marlin)

PART IV: Science

10 Expertise in climate science (Stephen John)
11 Evidence-based medicine (Robyn Bluhm and Kirstin Borgerson)
12 The precautionary principle in medical research and policy: the case of sponsorship bias (Daniel Steel)
13 Psychology and conspiracy theories (David Coady)

PART V: Epistemic institutions

14 Legal burdens of proof and statistical evidence (Georgi Gardiner)
15 Banking and finance: disentangling the epistemic failings of the 2008 financial crisis (Lisa Warenski)
16 Applied epistemology of education (Ben Kotzee)

PART VI: Individual investigators

17 Disagreement (Tim Kenyon)
18 Forecasting (Steve Fuller)
19 Rumor (Axel Gelfert)
20 Gossip (Tommaso Bertolotti and Lorenzo Magnani)
21 The applied epistemology of conspiracy theories: an overview (M R. X. Dentith and Brian L. Keeley)

PART VII: Theory and practice in philosophy

22 Philosophical expertise (Bryan Frances)
23 Ethical expertise (Christopher Cowley)
24 The demise of grand narratives? Postmodernism, power-knowledge, and applied epistemology (Matthew Sharpe)

Index
CONTRIBUTORS
Tommaso Bertolotti is post-doctoral fellow in philosophy and adjunct professor of cognitive philosophy at the University of Pavia, Italy. He also collaborates on research and teaching with the French engineering school Télécom ParisTech.

Robyn Bluhm is an associate professor in the Department of Philosophy at Lyman Briggs College at Michigan State University, USA. Her research examines inter-related ethical and epistemological issues in medicine and in neuroscience.

Kirstin Borgerson is an associate professor in the Department of Philosophy at Dalhousie University, Canada. Dr. Borgerson researches and teaches in medical epistemology and medical ethics.

Jason Brennan is the Robert J. and Elizabeth Flanagan Family Professor of Strategy, Economics, Ethics, and Public Policy at Georgetown University, USA. He is the author of nine books, including When All Else Fails (Princeton University Press, 2018), In Defense of Openness, with Bas van der Vossen (Oxford University Press, 2018), and Against Democracy (Princeton University Press, 2016).

James Chase is senior lecturer in philosophy at the University of Tasmania, Australia. He works on epistemology; philosophical logic, particularly as applied to epistemological issues; and the methodology of analytic philosophy. He is the co-author of Analytic vs Continental (2011) and the co-editor of Postanalytic and Metacontinental (2010).

David Coady is senior lecturer in philosophy at the University of Tasmania, Australia. Most of his current work is on applied philosophy, especially applied epistemology. He has published on rumor, conspiracy theory, the blogosphere, expertise, and democratic theory. He has also published on the metaphysics of causation, the philosophy of law, climate change, cricket ethics, police ethics, and the ethics of horror films. He is the author of What to Believe Now: Applying Epistemology to Contemporary Issues (2012), the co-author of The Climate Change Debate: An Epistemic and Ethical Enquiry (2013), the editor of Conspiracy Theories: The Philosophical Debate (2006), and the co-editor of A Companion to Applied Philosophy (2016).
Christopher Cowley is associate professor at University College Dublin, Ireland. He works on ethical theory, medical ethics, and the philosophy of criminal law. He has edited volumes on the philosophy of autobiography and supererogation, and has written a monograph on moral responsibility.

M R. X. Dentith received their PhD in Philosophy from the University of Auckland, where they wrote their dissertation on the epistemology of conspiracy theories. Author of the first single-author book on conspiracy theories by a philosopher, The Philosophy of Conspiracy Theories (Palgrave Macmillan, 2014), they have published extensively on the epistemology of conspiracy theories. They also have a side project developing an account of how we might talk about the epistemology of secrecy, which should probably be kept from the world until such time as it is ready to be leaked to the public.

Don Fallis is professor of information and adjunct professor of philosophy at the University of Arizona, USA. His research interests include epistemology and the philosophy of information. His articles on lying and deception have appeared in the Journal of Philosophy, Philosophical Studies, Synthese, and the Australasian Journal of Philosophy. He has also discussed lying on Philosophy TV and in several volumes of the Philosophy and Popular Culture series.

Bryan Frances is a member of the chair of theoretical philosophy at the University of Tartu, Estonia. He received a €512,660 grant to head a three-year research project starting in February 2018 on "Expertise and Fundamental Controversy in Philosophy and Science" at Tartu. He works mainly in epistemology and metaphysics, although he has also published in the philosophy of mind, the philosophy of language, and the philosophy of religion.

Karen Frost-Arnold is associate professor of philosophy at Hobart and William Smith Colleges in Geneva, NY, USA. Her research focuses on the epistemology and ethics of trust, social epistemology, philosophy of the Internet, philosophy of science, and feminist philosophy.

Steve Fuller is Auguste Comte Professor of Social Epistemology in the Department of Sociology at the University of Warwick, England. Originally trained in history and philosophy of science, Fuller is best known for his foundational work in the field of "social epistemology," which is the name of a quarterly journal that he founded in 1987 as well as the first of his more than twenty books. His most recent books are Knowledge: The Philosophical Quest in History (Routledge, 2015), The Academic Caesar (Sage, 2016), and Post-Truth: Knowledge as a Power Game (Anthem, 2018).

Georgi Gardiner is a junior research fellow at St. John's College, Oxford University, England. In August 2019, she will join the Philosophy Department at the University of Tennessee. She received her PhD from Rutgers University. Before that she studied at the University of Edinburgh and the Open University.

Axel Gelfert is professor of philosophy at the Technical University of Berlin, Germany, where he investigates issues at the intersection of social epistemology and the philosophy of science and technology. He is the author of A Critical Introduction to Testimony (Bloomsbury, 2014) and How to Do Science with Models (Springer, 2016).

Hanna Kiri Gunn is a feminist social epistemologist at the University of Connecticut, USA, concerned about the possibility of meaningful democratic deliberation. Recent work has looked at the ways that the Internet supports or undermines our democratic ideals, in particular our ability to develop and exercise epistemic and communicative agency.

Daniel Halliday is senior lecturer in philosophy at the University of Melbourne, Australia. He works mainly on topics at the intersection of political philosophy and economics, with a special focus on markets, taxation, and inequality. He is the author of The Inheritance of Wealth: Justice, Equality, and the Right to Bequeath (Oxford University Press, 2018). Dan is also working on a co-authored textbook about the moral foundations of capitalism.

Stephen John is currently the Hatton Trust Senior Lecturer in the Philosophy of Public Health at the Department of History and Philosophy of Science, University of Cambridge, England. He is particularly interested in the intersection between social epistemology, political philosophy, and philosophy of science.

Brian L. Keeley is professor of philosophy at Pitzer College in Claremont, California, where he also teaches in the Science, Technology & Society and Neuroscience Programs, as well as serving as an extended graduate faculty member in philosophy at Claremont Graduate University. In addition to having edited a volume in the Cambridge University Press Contemporary Philosophy in Focus series on Paul Churchland, he has published over forty articles, book chapters, and reviews on topics including the philosophy of neuroscience, the nature of the senses, artificial life, and the unusual epistemology of contemporary conspiracy theories.

Tim Kenyon is Vice-President, Research, at Brock University in St. Catharines, Canada, and professor of philosophy at the University of Waterloo, Canada. His research topics include issues in social epistemology, critical thinking, and language pragmatics.

Ben Kotzee is senior lecturer and director of the Doctoral Program in Education at the University of Birmingham. He works on topics in the philosophy of education and in professional ethics. He is the editor of the journal Theory and Research in Education.

Michael P. Lynch is professor of philosophy and director of the Humanities Institute at the University of Connecticut, USA.

Helen McCabe's research has mainly looked at the political philosophy of John Stuart Mill, especially his connections to pre-Marxist socialism (particularly that of Robert Owen, Charles Fourier, Victor Considerant, Henri Saint-Simon, and Louis Blanc). Helen is increasingly interested in the nature of his intellectual relationship with Harriet Taylor Mill (whom he credited as his co-author), and in her independent status as a political philosopher. She holds a DPhil in politics from Oxford University.

Lorenzo Magnani is full professor of philosophy of science at the University of Pavia, Italy, where he also chairs the Computational Philosophy Laboratory. Together with Nancy Nersessian and Paul Thagard, he is among the founders of the model-based reasoning (MBR) community.

Randal Marlin has degrees in philosophy from Princeton (AB), McGill (MA), and Toronto (PhD). His interest in the philosophy of law began with studies at Oxford in 1961 and an encounter with Oxford Essays on Jurisprudence, followed by a tutorial with H. L. A. Hart. A parallel interest in existentialism and phenomenology eventually led him to study propaganda. Sabbatical leave from Carleton University in Ottawa allowed for a year-long study with Jacques Ellul at the University of Bordeaux in 1979–80, out of which came a course, Truth and Propaganda, that he continues to teach following retirement from Carleton in 2001. A former vice-president of the International Jacques Ellul Society, he is the author of Propaganda and the Ethics of Persuasion, 2nd ed. (Broadview Press, 2013) and many journalistic and other writings and civic involvements devoted to promoting truth, peace, and freedom in a world endangered by technology, technique, and flawed human nature.

Andrea Pitts is assistant professor of philosophy at the University of North Carolina, USA. Their research interests include critical philosophy of race, feminist philosophy, social epistemology, Latin American and U.S. Latinx philosophy, and critical prison studies. Their publications appear in Hypatia, Radical Philosophy Review, Inter-American Journal of Philosophy, and IJFAB: International Journal of Feminist Approaches to Bioethics.

Sir Nigel Shadbolt is principal of Jesus College, Oxford, and a professorial research fellow in the Department of Computer Science at the University of Oxford, England. He is also chairman of the Open Data Institute, which he co-founded with Sir Tim Berners-Lee.

Matthew Sharpe is associate professor at Deakin University, Australia. He has published widely, including on critical theory and post-structuralist thinkers.

Paul Smart is a senior research fellow in electronics and computer science at the University of Southampton, England. His research focuses on the cognitive significance of emerging digital technologies, especially the role of the Web in shaping human and machine intelligence.

Daniel Steel is associate professor at the University of British Columbia, Canada. He conducts research on the intersection of values and science on subjects that have implications for urgent matters of environment or public health. He is the author of Philosophy and the Precautionary Principle: Science, Evidence and Environmental Policy (Cambridge, 2015) and Across the Boundaries: Extrapolation in Biology and Social Science (Oxford, 2008), and is co-editor, with Kevin Elliott, of Current Controversies in Values and Science (Routledge, 2017). He has recently been awarded a Canadian Social Sciences and Humanities Research Grant (2017–2020) for a project titled "Distinct Concepts of Diversity and their Ethical-Epistemic Implications for Science."

Lisa Warenski, PhD, has published on a range of topics in theoretical epistemology. Her work in applied epistemology draws from her experience as a corporate loan officer and credit analyst in major money center banks in New York City.
PART I
Introduction
1 THE RETURN OF APPLIED EPISTEMOLOGY

James Chase and David Coady
An applied turn in epistemology?

The term 'applied epistemology' has been in irregular use for some time. It has occasionally been associated with pre-existing disciplines or practices: critical thinking, or the information sciences, or casuistry.1 But most often it simply invokes an analogy, direct or implicit, with applied ethics – for instance, when looking for a label for the activities standing to traditional epistemology and meta-epistemology as applied ethics stands to normative ethics and meta-ethics (Battersby 1989). And since applied ethics has been a well-known going concern for decades, the analogy does work by feeding our imagination: it highlights the many ways in which we might regard philosophical work as constituting applied epistemology. The fact that applied ethics was consciously launched by way of a critique of normative ethics and meta-ethics is also useful; here the analogy flags potential concerns with the way the analytic epistemological tradition has developed.

In the 1960s and 1970s, analytic ethics faced an internal critique led by such figures as Peter Singer and James Rachels. According to this critique, the analytic focus on normative and meta-ethical work had become lopsided, harmfully aloof from practical matters of deciding what to do, and threatening a kind of self-imposed irrelevance. Instead, ethicists had contributions to make by way of clarifying popular moral debates, applying ethical theory to important contemporary issues, and engaging with political figures, scientists, and the public on ethical matters. This applied ethical revolution was successful at establishing new ethical practices and bringing other concerns from the perceived fringe of academic ethics to the center of the stage. Journals such as Philosophy and Public Affairs appeared, collections of work on applied ethics were published, fields such as bioethics, environmental ethics, healthcare ethics, and business ethics arose or took newly determinate shape, and university curricula shifted to accommodate the new focus.

Around the same time, analytic epistemology also faced internal critiques, sounded by well-known philosophers in two exceptionally widely read and influential works: W. V. O. Quine's "Epistemology Naturalized" (originally an address at the 14th International Congress of Philosophy in 1968, and collected in the 1969 Ontological Relativity and Other Essays) and Richard Rorty's 1979 book Philosophy and the Mirror of Nature. Quine's essay appraises the foundationalist program within empiricism from Hume to Carnap, but his critique widens in scope to epistemology as a whole (conceived of as a form of first philosophy).
On one prominent interpretation of Quine's project, the traditional agenda of epistemological topics and the normativity of epistemic concepts are to be jettisoned wholesale, in favor of the new – and appropriately naturalized – project of descriptive epistemology, a chapter of psychology bordering on linguistics.2 Quine recommends the psychologist Donald Campbell's evolutionary epistemology program as an example. Rorty's book also takes on traditional epistemology, but with a wider historical backdrop, and his telling of the tale ends with a different recommendation. On Rorty's account, traditional epistemology is a defensive maneuver inaugurated by Kant and his nineteenth-century followers to keep philosophy in being as an intellectual discipline in the face of the rampant sciences. The "theory of knowledge," the very core of philosophy as it is now to be understood, is to underlie all other disciplinary endeavors, and an epistemological tradition is then read back into the early modern tradition as a whole, highlighting the well-known epistemological works of Descartes, Locke, Hume, and so forth. Like Quine, Rorty thinks this whole apparatus has, by the twentieth century, become dangerously dependent on a handful of doubtful claims about the nature of sense data and their relations to our knowledge and the world. But Quine's descriptive turn to science is also rejected. Instead, Rorty suggests, epistemology, and its baggage of scheme and content, concept and intuition, should simply come to an end, replaced by a kind of historically aware study of our conversations, reason-giving practices, and manners of discourse.

In each case the call to revolution was a mixed success. Quine's critique targets local aspects of the epistemological tradition as it presented itself to him, and so while the call for a naturalized epistemology produced or re-energized many rivals to traditional normative epistemology (such as evolutionary epistemology), its main impact has been renewal within the normative tradition itself, prompting, for instance, reliabilist analyses of the concept of justification. (In fact, Quine's later writings are themselves much more welcoming of the normative aspects of epistemology than at least some of the naturalizing projects he inspired.) Rorty's historical reconstruction has been very much contested, and whether his final position comes to a viable pragmatism or a corrosive skepticism is a subject of debate. But in any case, his critique and counterproposal can also now be seen as part of a wider anti-realist (and in part pragmatist) moment within analytic philosophy that itself prompted renewal within normative epistemology, at the hands of philosophers such as Hilary Putnam and Michael Dummett.

But in both cases, the particular details are used to launch a concern about epistemology with its own staying power, that of irrelevance. A background concern for both Quine and Rorty is that analytic epistemology has been shaped almost entirely by an interest in the theory of knowledge, its scope, structure, and limits, as those subjects have been understood since at least the early modern period. It has had rather too little to do with the practices of epistemic agents and communities, and the specific epistemic problems that might arise out of or for them.
So, while the focus of Quine and Rorty was not specifically the creation of a new sub-discipline of 'applied epistemology', the concerns about relevance made manifest in the ethical case are also present implicitly here. Yet there is no applied epistemological revolution in the period: no new journals with that focus, no curricular changes, no new sub-disciplines.

The absence of an applied turn in epistemology to match the applied turn in ethics is especially puzzling when one considers how many issues of contemporary concern to the general public are epistemic in nature. Much of this concern is driven by technological change. Over the last three decades an information revolution has transformed the ways in which we acquire knowledge and justify our beliefs. Now when we want to find something out or check whether something we believe is really true, our first port of call is almost certainly the World Wide Web (see Smart and Shadbolt, Chapter 2). Once we are there, we will almost certainly use Wikipedia (see Frost-Arnold, Chapter 3) or Google (see Gunn and Lynch, Chapter 4) or, quite possibly, both.
These technological developments have raised new epistemic issues just as surely as advances in reproductive technology fifty-odd years ago raised new ethical issues. The latter developments provided much of the impetus for the applied turn in ethics, but the former changes have so far failed to lead to a comparable turn in epistemology. Such a turn seems both inevitable and desirable.

Not all developments motivating an applied turn in epistemology (or an epistemic turn in applied philosophy) are technological, or at any rate, they are not all purely technological. Political and social developments (which are of course implicated in technological change) have always raised epistemic issues of their own, which philosophy has traditionally engaged with. John Stuart Mill's famous defense of free speech, for example, rests largely on epistemic principles (see Halliday and McCabe, Chapter 6). More recently, however, epistemology has tended to be marginalized in political philosophy, which is often thought of as a branch of ethics. Many applied philosophers have been willing to write about the ethical issues raised by recent Western military interventions, for example, by appealing to principles of just war theory, but few have had anything to say about the equally important epistemic issues raised by these wars, such as the nature of the evidence presented to the public in support of the casus belli or what we can know about the true motives of the governments prosecuting the wars. Similarly, the Global Financial Crisis of 2008 prompted many applied philosophers to participate in public discussion about business ethics and the nature of greed and self-interest, but very few spoke of the equally important epistemic issues facing financial markets (see Warenski, Chapter 15). Likewise, the ethics of social phenomena such as gossip, rumor, and propaganda have been much discussed by applied philosophers, but relatively few of them have been prepared to discuss the epistemic issues raised by these phenomena (see Bertolotti and Magnani, Chapter 20; Gelfert, Chapter 19; and Marlin, Chapter 9, respectively).

Although the applied turn in epistemology is not on the scale of the applied turn in ethics, it would not be true to say that there has been no applied turn at all. There has been a "social turn" in epistemology over the last two or three decades, and much of the work done under the flag of "social epistemology" could equally well be classified as "applied epistemology." Like Monsieur Jourdain in Molière's The Bourgeois Gentleman, who discovers that he has been speaking prose his whole life without knowing it, it will come as news to many social epistemologists that they have been doing applied philosophy without recognizing it by that name. Although applied epistemology and social epistemology overlap, they are not the same. Not all social epistemology is particularly applied, and not all applied epistemology is particularly social. In applied epistemology we are concerned with practical questions about what we should believe and how we (individually and collectively) should pursue knowledge, wisdom, and other epistemic values. Although many of these questions are (in one way or another) social questions, not all of them are, any more than all questions in applied ethics are social questions.
Applied epistemology in early modern philosophy

We have given a rough indication of what we take applied epistemology to involve; is it possible to go further? As it's developed to date, it's certainly not just the application of pre-existing epistemological work to some problem or situation; like ethics, epistemology can sustain case-based reasoning (casuistry) that fights shy of such principles. Kasper Lippert-Rasmussen (2016) argues that there are several quite distinct conceptions of 'applied philosophy', most clearly present in work in applied ethics, but all (Lippert-Rasmussen argues) extending to other fields in philosophy.
The conceptions Lippert-Rasmussen discusses are clearly applicable to epistemology, but however useful they are as marking differing conceptions of applied philosophy in general, we will take a slightly different approach. On our view, applied epistemology is best thought of as a family resemblance concept, involving the dimensions that Lippert-Rasmussen usefully outlines (as rival conceptions), but also perhaps others. As such, we suggest that applied epistemology is a matter of degree, and characterized by one or more of the following:3

(i) a concern for relevance to the ordinary affairs of everyday (non-philosophical) life;
(ii) being addressed to an audience that isn't exclusively philosophical;
(iii) being addressed to a specific issue arising in a specific context (rather than addressed to 'timeless' concerns of skepticism or the analysis of knowledge);
(iv) developing a body of theory with an eye to its action-guiding applications;
(v) bringing a body of pre-existing theory into relation with a problem outside philosophy to allow us to think differently about that problem;
(vi) being informed by (that is, conditionalizing on) empirical facts in some way;
(vii) seeking to effect social or political change through philosophical work.

Any or all of these factors mark work in epistemology that is at least to some extent applied rather than theoretical in nature. Some, such as relevance, seem especially central, but none of these dimensions is plausible as the core of a tight conceptual analysis of applied epistemology, and seeking to impose such a structure here would be unhelpfully prescriptive.

By this measure, it should not be at all controversial that a great deal of work in applied epistemology has been carried out historically. Indeed, many of the classics of epistemology in the Western tradition can fruitfully be understood as works of applied epistemology from which the application has been subtracted through historical amnesia. Philosophers in general have enquired into the nature and limits of knowledge and rational belief, not out of idle curiosity, but because these issues had a practical significance and because they wanted to contribute to debates of topical concern. Much philosophical writing in the early modern period is, unsurprisingly, written for a more general audience than that of self-avowed philosophers, and seeks to effect social or political change, or is intended to facilitate thinking about the epistemic aspects of scientific inquiry or personal conduct.

One way in which this happens is in working out the consequences of a major epistemological claim. In the empiricist tradition, for instance, the implications of that doctrine for the limits on our knowledge are often invoked practically, including by the most well-known figures. John Locke's Essay Concerning Human Understanding has ambitions of this kind – it was prompted by a 1671 discussion about revealed religion, and a major goal is to chart the middle path between religious authoritarianism (a target in Book 1) and religious enthusiasm (seen off in Book 4). That the epistemology on offer was highly relevant to questions about religious belief was immediately apparent, and as a result the Essay was the subject of much controversy. Berkeley brings his own empiricism to bear in offering a resolution of a contemporary scientific problem (Barrow's objection to the geometric theory of vision) in his Essay Towards a New Theory of Vision of 1709, and in rejecting Newtonian accounts of absolute space, time, and motion in his De Motu of 1721.
And Hume’s essay “Of Miracles” (published as Section X of his Enquiry Concerning Human Understanding) rests his well-known case on a principle of evidence drawn from his empiricist epistemology. A second notable feature of early modern epistemological work is that not all of it is third personal in nature in the way analytic epistemology, and even post-Kantian epistemology as a whole, has generally been. Instead, much writing is on questions of method or heuristic, advice which is intended to be directly relevant to first personal epistemic appraisal or epistemic activity. This is the nature of much writing in the skeptical tradition at the time (as in Montaigne, Gassendi, and perhaps Bayle), but the first personal focus is also evident in the Cartesian tradition (well-known 6
This is the nature of much writing in the skeptical tradition at the time (as in Montaigne, Gassendi, and perhaps Bayle), but the first-personal focus is also evident in the Cartesian tradition (well-known examples are Descartes's Rules for the Direction of the Mind and Discourse on Method, and Arnauld and Nicole's advice on method in the fourth part of the Port Royal Logic, itself a hybrid work on epistemology and logic), in the various fusions of moral and epistemological considerations in early modern Catholic theology (such as Bartholomew Medina's probabilism, which sets out an epistemic framework for the following of authorities in acting on matters of conscience), and in the early modern regeneration of the doctrine of fallacies as a psychological accompaniment to logic and a precursor to the study of critical thinking (for instance, advisory passages by Francis Bacon, Antoine Arnauld, and Locke himself).

These are all well-known examples for a philosophical audience, but our received ideas about what constitutes our philosophical tradition also skew our attention to the purely theoretical, and so we naturally miss a third aspect of applied epistemology in the period. Just as we are less attentive to developments in, say, psychology after the period in which it has clearly evolved from a philosophical matrix, so too do we generally fail to attend to the ways in which epistemological ideas have been put to work outside philosophy, in religion, science, politics, medicine, and mathematics, among other fields. Consider again Locke, Berkeley, and Hume. Locke's doctrine of ideas is the intellectual ground of the freethinking case in John Toland's Christianity Not Mysterious (1696), for which Toland was prosecuted in London. Berkeley's account of perception and action is taken up in David Hartley's Observations on Man (1749), arguably the first work of associationist psychology. Berkeley's case against Newtonian science of course influences the physics of Ernst Mach, and through him, Einstein. Hume's standard of testimony is used as a test case of the congruence between probability theory and "simple common sense" in Pierre Laplace's Philosophical Essay on Probabilities (1814), and thereby feeds into Laplace's analysis of jury reform in the subsequent chapter of that work.

Many early modern philosophers also explicitly applied their own work within epistemology to their work in other disciplines. For instance, the Portuguese philosopher Francisco Sanches is best known within philosophy for his Quod nihil scitur of 1576 (published in 1581), a classic of the early modern skeptical literature. Sanches argues there from general skeptical principles that ideal (Aristotelian) knowledge is impossible and that a mitigated skepticism, based on experience and judgment, is the appropriate attitude to all matters of belief. (Rather unusually for the time, Sanches does this without taking over the arguments of Sextus Empiricus.) This attitude is then on show not only in his other philosophical works (for instance, attacking Aristotle's openness to the art of divination on the basis of our incapacity to know such matters), but also in Sanches's whole parallel career as a doctor and his many practical treatises in medicine. The link between the two is in fact made clear in the reader's greeting of the Quod nihil scitur itself, where Sanches remarks that "the goal of my proposed journey [i.e., his skeptical epistemological treatise] is the art of medicine, which I profess, and the first principles of which lie entirely within the realm of philosophical contemplation" (Sanches 1988 [1581]: 171).
The turn away from pure philosophy is where we tend to drop our philosophical interest in Sanches – the Quod nihil scitur is the only work of his with any kind of readership in contemporary philosophy – but his career as a whole is that of an applied epistemologist. And Sanches is hardly alone in this particular application; the close association between philosophy and medicine is a basic assertion of the Galenic tradition, a common trope in Renaissance university curricula, and a familiar feature of Descartes's own account of his progress in his Discourse on Method.

A second (and better known) example is that of Spinoza's analysis of the Bible in his Tractatus Theologico-Politicus of 1670, a precursor to and defense of his Ethics. Spinoza's form of rationalism is both optimistic and uniform: we can have adequate knowledge of all of Nature, and so of God; however, the continuity of all aspects of nature entails an equally undivided treatment of knowledge and the methods of inquiry.
As a result, Spinoza denies that the interpretation of scripture is to be distinct in mode from the interpretation of anything else in Nature. An appropriate attitude to the Bible, then, will involve the methods of rational inquiry brought to historical texts in general, and Spinoza accordingly carries out one of the first exercises in historical criticism, undermining the claims of Biblical scripture as a source of knowledge by exploring internal inconsistencies in the Biblical texts and matters of historical context. As a consequence, Spinoza argues for a complete overturning of the traditional devotional attitudes to accounts of prophecies and miracles and presents a political/ethical case for toleration of thought, freedom of religious expression, and a secular state.
The shape of theoretical epistemology

If we were telling this tale in the manner of Rorty, about now a change of emphasis would be flagged. For, on his account in Philosophy and the Mirror of Nature, our contemporary understanding of 'philosophy' (as something distinct from science and centered on epistemology), 'epistemology' (as first philosophy), and the 'theory of knowledge' (as the foundation of all sciences) all arise after Kant, with the growth of philosophy as a specific academic discipline, autonomous and self-contained, constituting a tribunal of pure reason. And on that story, the decline of applied epistemology is naturally to be dated to the same period, as the concerns of philosophy become ever more etiolated and removed from a close relation with other disciplines.

But we disagree with Rorty's history, at least insofar as it concerns the kinds of activities we marked off earlier as characteristic of applied epistemology. The nineteenth century may have seen the professionalization of philosophy, but it also teems with projects in applied epistemology, many of them carried out by philosophers. A notable example is that of the philosophy of science as it looked in the early nineteenth century, which is, to a fair extent, work in applied epistemology: it is addressed to a non-philosophical audience, concerned with first-personal heuristic advice (in a way that becomes less common in later philosophy of science), and it applies theoretical work within epistemology to specific cases. William Whewell's Philosophy of the Inductive Sciences (first edition 1840) is remarkably fertile in just this way, with his 'discoverer's induction' involving a form of conceptual innovation ('colligation') and being constrained by considerations of coherence, prediction, and a version of inference to the best explanation ('consilience'). John Stuart Mill's System of Logic (first edition 1843), rather more famously, recommends a process of eliminative induction and runs through a range of methods to be used in carrying it out. Moreover, the controversy between these two, carried on in successive editions of their major books (along with Whewell's irritable Of Induction, with Especial Reference to Mr. J. Stuart Mill's System of Logic (1849)), to some extent revolves around the question of the worth of purely theoretical epistemological work in application. Whewell, as a scientist, historian of science, and Kantian epistemologist (of sorts), insists that Mill's empiricist methodology is threadbare in application because it cannot explain the way human knowledge has actually arisen in the natural sciences, a process that Whewell sees as involving an interplay between subjective colligation (bringing the phenomena together in a unifying conception) and the confirmation of these colligations, generating further, higher-order phenomena. Science idealizes facts, Whewell argues, just because that's how knowledge in general operates. Just like Sanches, Whewell puts this epistemological account to work in his own scientific activities and those of his acquaintances (for instance, in arriving at his conceptual coinages for his friend Michael Faraday), and also in his work on science and its history (as in his coinage of the very term 'scientist' itself).
A second, rather different, example is provided by the various fusions of the psychology of perception and epistemology explored in German-language intellectual circles (in philosophy, but also psychology, physics, anthropology, and other fields) later in the nineteenth century. Here epistemological theory, empirical work, and metaphysical worldviews are brought together in a variety of ways, as philosophers sought to make use of psychology and physiology to make progress on epistemological problems. Friedrich Lange's "physiological neo-Kantianism," for instance, draws epistemic conclusions about the senses (as involving inferential processes) from the study of the eye and other sensory modalities, and then applies these epistemic conclusions in arriving at a species of idealism and a corresponding Kantian epistemology. His materialist opponent Heinrich Czolbe presents an unusual interpretation of the empirical data designed to complement a form of naïve realism, in which colors and sounds exist as sensible qualities outside the mind and are transmitted through the nerves to the brain. Ernst Mach, by contrast, tries to sort the whole mess out by going back to epistemological first principles, deriving an empirically respectable account of knowledge, and then applying it to the sciences (for an account of all this, see Edgar 2013).

These are not at all the only examples of applied work in the period; we could multiply them easily. But to give Rorty his due, something very much like the target of his critique also takes shape within this period alongside such activities: a concern for epistemology as first philosophy and foundational to all inquiry, and a taste for the general and theoretical over the locally relevant. This is the path generally taken in epistemological work by those figures who made up (or were later identified as members of) the early analytic movement. The relationship between early analytic philosophy and applied ethics is complicated: Russell's regular activities on topics of ethical and political relevance, for instance, are generally divorced from his work on metaphysics, logic, knowledge, and the philosophy of language, in much the way that Michael Dummett later maintained separate identities as a logician/philosopher and as a campaigner on matters of immigration and racism (and appeared to conceive of the latter as not especially connected to his professional duties).4 But in the case of epistemology the interests of the early analytics, at least up to roughly the First World War, can hardly be said to be applied rather than theoretical in nature. Fregean concepts and Moorean propositions are entities of a third realm, generating a broad range of epistemological problems; Russell's incomplete draft Theory of Knowledge and related writings are motivated by the need to secure the epistemological foundations of his reworking of logic.5 Even the early logical positivists, in reshaping the philosophy of science,6 are primarily theoreticians, writing for a largely philosophical audience and with a conception of science in play far removed from local historical and social complexities, as was emphasized by the post-Kuhnian revolution away from the high positivist attitude. By the time the analytic movement becomes a self-conscious multi-generational tradition in the decades after the Second World War, we are recognizably in the territory of the critiques of Rorty and Quine: analytic epistemology in this period concerns above all the ahistorical analysis of knowledge, its scope and limits, and related matters in the foundations of knowledge (sense-data theories of perception, and so on).
This focus is very evident when one looks at the ways epistemology was taught in this period. For instance, Teaching Theory of Knowledge, an advisory booklet put out by the Council for Philosophical Studies in 1986 as a collaborative effort by many institutions and individual American epistemologists, lists fourteen topics as "central to recent and contemporary epistemology" (Clay 1986: 1): the traditional analysis of knowledge, skepticism, the Gettier problem, foundationalism, coherentism, reliabilism, explanationism, memory, perception, a priori knowledge, naturalistic epistemology, realism, rationality, and epistemic logic. On each topic, the emphasis is exclusively or overwhelmingly on theoretical concerns.
Other philosophical traditions at the time show, by contrast, a very strong interest in applied epistemology – notably the pragmatist tradition in American philosophy, which itself has recurrently influenced and grown with the analytic movement.7 Influenced by the physiologists as much as by Hegel, John Dewey develops an account of knowledge and a conception of the role of the philosopher that naturally tend to application – in educational theory, most obviously (see Chapter 16 by Kotzee), but more generally throughout our social and political life. In Democracy and Education, Dewey sets out his conception of philosophy as the "general theory of education," as something to take effect in conduct, and so as something to be coordinated with "public agitation, propaganda, legislative and administrative action" (Dewey 1916: 383). By contrast, epistemology, as it has developed, and its research agenda, are, according to Dewey, the products of a mistaken preoccupation with isolating mind from world and identifying self and mind. The charge here is precisely that of irrelevance and theoretical sterility – the virtues of philosophical clarity, rigor, and precision being wasted on reasonably unimportant or misguided matters. This whole tradition constitutes something of a ready-made critique of analytic epistemology, and it's unsurprising that both Rorty and Quine had pragmatist training and were themselves influential within the pragmatist tradition.8

More recent pragmatists develop very similar critiques; for instance, consider Philip Kitcher's call for us to reimagine contemporary philosophy along Deweyan lines, as responsive to human social situations and the need of each individual to make sense of the world and their own place within it (Kitcher 2011: 254). On Kitcher's view, there should be a kind of involution of the order of business of contemporary philosophy: Kitcher calls us to attend to the valuable work on questions of contemporary significance done by philosophers on the margins, and focus on work in the core areas of philosophy (logic, epistemology, metaphysics, philosophy of language, and philosophy of mind) only to the extent necessary to make progress on those questions (Kitcher 2011: 248).
The open future of applied epistemology

On our view, then, the 'return' of applied epistemology is a fairly local matter – a reaction to the tendency of much twentieth-century analytic epistemology to a certain inwardness in audience and topic. We've suggested a fairly broad conception of applied epistemology as a way to frame this reaction, but others are on offer. We'll close by considering the ways in which our conception of applied epistemology differs from another influential account, implicit in much social epistemology and explicit in the following definition by Larry Laudan:

Applied epistemology in general is the study of whether systems of investigation that purport to be seeking the truth are well engineered to lead to true beliefs about the world.
(Laudan 2006: 2)

We think this is insufficiently general. It is certainly insufficiently general to represent the contents of this book. There are at least two ways in which Laudan's definition seems to us to be too narrow. First, it focuses exclusively on "systems of investigation," such as science, mathematics, or (Laudan's particular concern in this passage) the Anglo-American legal system. Some contributions to this book concern systems of investigation, in something like Laudan's sense (though we prefer the term "epistemic institutions"), for example, the chapters on Internet platforms already mentioned, or John's discussion of the Intergovernmental Panel on Climate Change's pronouncements about climate change (Chapter 10), or Brennan's discussion of the allegedly truth-promoting properties of democratic institutions (Chapter 7), or Gardiner's discussion of legal burdens of proof (Chapter 14).
Other chapters, for example Dentith and Keeley's discussion of rational belief in conspiracy theories (Chapter 21) or Steve Fuller's discussion of the problems of forecasting (Chapter 18), do not fit this mold, because they are primarily about individual investigators rather than epistemic institutions.

Second, Laudan's approach is too narrow because it presupposes that the only fundamental or intrinsic epistemic value is true belief. This veritistic approach to epistemology is in part inspired by classical utilitarianism in ethics.9 Not surprisingly, it has been criticized on grounds that echo standard criticisms of classical utilitarian thought. These criticisms come in two forms: those which object that there is more than one intrinsic value in the domain in question (in this case epistemology), and those which object that there are constraints on how this value should be pursued. Whatever you think of these objections, an approach to applied epistemology that presupposed the truth of veritism, or any other controversial theoretical framework, would be overly narrow, for just the same reason that an approach to applied ethics which presupposed the truth of classical utilitarianism would be. Applied philosophy of any kind which is dogmatic in its theoretical commitments will be of little interest to those who don't share those commitments.

Veritism is not the only example of an epistemic theory based on an ethical theory. Virtue epistemology, for example, is explicitly modeled on virtue ethics. But while epistemology has been willing to turn to ethics for guidance on theoretical matters, it has been much more reluctant to follow it in applying its theorizing to issues of concern to non-philosophers. Part of the problem is that in philosophy we are accustomed to thinking that the central problem of epistemology is meeting "the skeptical challenge." Epistemology over the last hundred years has been dominated by attempts to demonstrate that, despite a plethora of arguments to the contrary, we can have knowledge and justified belief (especially of the external world). Applied epistemologists should feel free to ignore the skeptical challenge, for the same reason applied ethicists typically ignore equally paradoxical views in meta-ethics. Applied ethics begins with the assumption that there really is a distinction between morally correct and morally incorrect actions, as well as a distinction between morally virtuous and morally vicious people. Likewise, applied epistemology can assume that we have knowledge (indeed quite a lot of it), and that we can acquire more. Furthermore, it can assume that we are sometimes (indeed often) justified in believing the things we believe. It is only when we make these assumptions that we are in a position to address the really interesting and important epistemic questions that are challenging everyone, and not just academic philosophers, in the twenty-first century.
Notes

1 See Battersby (1989), Shera (1970), and Sosa (2007) for examples.
2 See, for instance, Kim (1988). That this was Quine's own conception of his famous paper is doubtful; see, for instance, Quine (1986: 664–65), Quine (1990: 229), and Putnam (1982: 19).
3 A list heavily influenced by Lippert-Rasmussen's conceptions, but with some additions.
4 See Dummett (1981: x–xi) for a well-known instance of his distinction between work for anti-racist organizations and his philosophical work ("more abstract matters of much less importance to anyone's happiness or future"); Dummett makes a comparison with Russell's situation himself.
5 Russell's later interest in education is a curious case: as Monk notes (2001: 57–58), his 1926 bestseller On Education is a work of behaviorist child psychology, betraying the enormous influence of Watson's work on him at the time, and the school opened by Bertrand and Dora Russell in 1927 was run on the lines of Margaret McMillan's play-centered nursery schools. Neither has much connection to Russell's work in analytic philosophy or his general epistemological work. On the other hand, much of Russell's popular writing in the 1920s, collected in Sceptical Essays, is arguably straightforwardly applied epistemology.
6 It's undeniable that much work in the philosophy of science instances applied epistemology; see, for instance, the well-known discussion of the role of 'impure' motives in furthering knowledge acquisition in science in Kitcher (1993), but see also Hansson (2003) and Klausen (2009) for a more general discussion of philosophy of science in relation to applied epistemology.
7 There are other examples. For instance, Jean Piaget's "genetic epistemology," largely treated as a project in developmental psychology, was always intended by him to have epistemological consequences (including normative consequences), settling various philosophical debates not only about knowledge but about, for instance, the nature of numbers. Philosophers themselves have generally been unconvinced by the way Piaget develops the connection (see Siegel 1978 for a critique).
8 Rorty locates himself in the American pragmatist tradition; see, for instance, Rorty (1982). Notwithstanding his occasional talk of 'pragmatism' and his training by C. I. Lewis, Quine's own relationship to American pragmatism is a little more remote; see Godfrey-Smith (2014) for discussion of this.
9 See Goldman (1999: 87).
References
Battersby, M. (1989). “Critical Thinking as Applied Epistemology: Relocating critical thinking in the philosophical landscape.” Informal Logic, 11(2).
Clay, M. A. (1986). Teaching Theory of Knowledge. Tallahassee, FL: Council for Philosophical Studies.
Dewey, J. (1916). Democracy and Education: An Introduction to the philosophy of education. New York: Macmillan.
Dummett, M. (1981). Frege: Philosophy of language. Cambridge, MA: Harvard University Press.
Edgar, S. (2013). “The Limits of Experience and Explanation: F. A. Lange and Ernst Mach on things in themselves.” British Journal for the History of Philosophy, 21(1): 100–21.
Godfrey-Smith, P. (2014). “Quine and Pragmatism,” in G. Harman and E. Lepore (eds.), A Companion to W. V. O. Quine. Oxford: Wiley-Blackwell.
Goldman, A. I. (1999). Knowledge in a Social World. Oxford: Oxford University Press.
Hansson, S. O. (2003). “Applying Philosophy.” Theoria, 69(1–2): 1–3.
Kim, J. (1988). “What is ‘Naturalized Epistemology?’” Philosophical Perspectives, 2: 381–405.
Kitcher, P. (1993). The Advancement of Science. Oxford: Oxford University Press.
Kitcher, P. (2011). “Philosophy Inside Out.” Metaphilosophy, 42(3): 248–60.
Klausen, S. H. (2009). “Applied Epistemology: Prospects and problems.” Res Cogitans, 6(1): 220–58.
Laudan, L. (2006). Truth, Error, and Criminal Law: An essay in legal epistemology. Cambridge: Cambridge University Press.
Lippert-Rasmussen, K. (2016). “The Nature of Applied Philosophy,” in K. Lippert-Rasmussen, K. Brownlee, and D. Coady (eds.), A Companion to Applied Philosophy. Chichester: Wiley.
Monk, R. (2001). Bertrand Russell: The ghost of madness 1921–1970. London: Vintage.
Putnam, H. (1982). “Why Reason Can’t be Naturalized.” Synthese, 52(1): 3–23.
Quine, W. V. (1986). “Reply to Morton White,” in L. Hahn (ed.), The Philosophy of W. V. Quine. La Salle, IL: Open Court Publishing.
Quine, W. V. (1990). “Comment on Lauener,” in R. Barrett and R. B. Gibson (eds.), Perspectives on Quine. Oxford: Blackwell.
Rorty, R. (1982). Consequences of Pragmatism: Essays, 1972–1980. Minneapolis, MN: University of Minnesota Press.
Sanches, F. (1988 [1581]). That Nothing Is Known (Quod Nihil Scitur). Cambridge: Cambridge University Press.
Shera, J. (1970). “Library and Knowledge,” in J. H. Shera (ed.), Sociological Foundations of Librarianship. New York: Asia Publishing House.
Siegel, H. (1978). “Piaget’s Conception of Epistemology.” Educational Theory, 28(1): 16–22.
Sosa, E. (2007). “Experimental Philosophy and Philosophical Intuition.” Philosophical Studies, 132(1): 99–107.
PART II
The Internet
2 THE WORLD WIDE WEB
Paul Smart and Nigel Shadbolt
Introduction
The World Wide Web (henceforth the “Web”) is a large-scale digital compendium of information that covers practically every sphere of human interest and endeavor. For this reason, it should come as no surprise to learn that the Web is a prominent target for epistemological analysis. To date, search engines (Heintz 2006; Miller and Record 2013; Simpson 2012), Wikipedia (Coady 2012; Fallis 2008, 2011) and the blogosphere (Coady 2012; Goldman 2008) have all been the focus of epistemological attention. Other systems, while relevant to epistemology, have attracted somewhat less scrutiny. These include microblogging platforms (e.g., Twitter), social networking sites (e.g., Facebook), citizen science projects (e.g., Galaxy Zoo), and human computation systems (e.g., Foldit). One of the aims of this chapter is to introduce the reader to these systems and highlight their relevance to applied epistemology. A second aim is to review existing epistemological analyses of the Web and, where necessary, point out problems with the philosophical narrative. A third and final objective is to highlight areas where the interests of epistemologists (both theoretical and applied) overlap with the interests of those who seek to understand and engineer the Web. One of the outcomes of this analysis is a better understanding of the ways in which contemporary epistemology can contribute to the nascent discipline of Web science (see Smart et al. 2017).
Personalized search: epistemic boon or burden
One of the major areas of epistemological enquiry into the Web concerns the epistemic impact of search engines, such as Google Search (Miller and Record 2013; Simpson 2012). A particular focus of attention relates to the effect of personalized search mechanisms, which filter search results based on a user’s prior search activity. Such mechanisms, it is claimed, can result in so-called “filter bubbles” (see Pariser 2011), which have the effect of limiting a user’s awareness of important bodies of epistemically relevant information. Epistemologists are largely in agreement regarding the negative effects of personalized search. Simpson (2012), for example, argues that filter bubbles accentuate the problem of confirmation bias and undermine users’ access to objective information. Similar views are expressed by Miller and Record (2013). They
claim that the justificatory status of an agent’s beliefs is undermined as a result of exposure to personalized search results. Concerns about the epistemic sequelae of personalized search have led epistemologists to make a number of practical suggestions as to how to avoid filter bubbles, or at least how to minimize their epistemic effects. Simpson (2012) thus suggests that users should turn off personalization or resort to search engines that do not use personalization mechanisms (he cites DuckDuckGo1 as a prime example). Simpson also suggests that there is a prima facie case for government regulation of search engine providers. Echoing the views of Introna and Nissenbaum (2000), he argues that search engines are in the business of providing an important public service and that regulation is required to ensure they operate in an objective and impartial manner. Other proposals to address the problem of personalized search center on the epistemic responsibilities of Web users. Miller and Record thus suggest that search engine users “can use existing competencies for gaining information from traditional media such as newspapers to supplement internet-filtered information and therefore at least partly satisfy the responsibility to determine whether it is biased or incomplete” (2013: 130). Finally, Knight (2014) draws attention to the efforts of computer scientists in developing “diversity-aware search” techniques. These are deemed to enable users to break out of their filter bubbles via the active inclusion of ‘diverse’ information in search results. We thus have a range of proposals concerning the practical steps that could be (and perhaps should be) taken by users to obviate the negative effects of personalized search. But before we accept such proposals, we should at least question the (largely implicit) assumption upon which all these proposals are based. Do personalized search engines really undermine the epistemic status of their users? And, if so, are we justified in condemning personalized search engines on account of their poor veritistic value? In responding to these questions, we suggest it helps to be aware of a range of issues that, to our knowledge, have not been the focus of previous epistemological attention. While these issues do not exclude the possibility that personalized search may, on occasion, harm the epistemic standing of individual Web users, they do at least provide reasons to question the epistemological consensus that has emerged in this area. The first issue to consider relates to the way in which search engines are actually used. Waller (2011), for example, discovered that almost half (i.e., 48%) of the queries entered by search engine users appeared to be directed toward the retrieval of information about a specific website. In other words, it seemed that users were relying on a search engine, at least in part, as a means of providing quick and easy access to familiar sources of information. These findings are important, for they suggest that the discovery of new information is not the sole purpose of search engines; instead, it seems that search engines may also be used to quickly access sources of information that a Web user is already aware of. When seen in this light, it is far from clear that personalized search mechanisms should always be seen to work against the epistemic interests of the individual Web user.
In fact, there is perhaps a risk that by interfering with personalization mechanisms, we will disrupt a set of well-honed techniques for quickly and efficiently accessing familiar bodies of task-relevant information. This is not to say that Waller’s (2011) findings eliminate concerns about the epistemic implications of personalized search. The use of search engines as a convenient shortcut to familiar sources of information may, from an epistemic perspective, be more or less hazardous depending on the kind of information that is being accessed, and clearly nothing about this particular way of using search engines guarantees the objectivity or impartiality of the actual information source.2 In spite of these caveats, Waller’s (2011) findings are important because they draw attention to the different ways in which Web users may exploit the functionality of personalized search engines. One question for future research is to ascertain whether all these modes of use
are equally injurious to an individual’s epistemic health and standing, and whether some modes may actually be of productive value in enhancing an individual’s epistemic functioning. A second issue to consider relates to the broader ecological setting in which search engines are used. Here we suggest that epistemological analyses can benefit from the sort of perspectives that have long been embraced by the cognitive science community, especially those that emphasize the situated and environmentally embedded nature of cognitive processing (Robbins and Aydede 2009). In particular, we suggest that it is helpful to think of Web users as embedded in multiple networks of information flow and influence, each of which presents the user with a diverse (even if filtered) stream of facts, ideas, and opinions. This broader informational ecology, we suggest, might work to mitigate the negative epistemic effects of personalized search (if indeed there are any). The sociological concept of networked individualism (Rainie and Wellman 2012) may be of potential value here. Networked individualism refers to the way in which society is changing as a result of the introduction of new media technology. In particular, it emphasizes the manner in which people connect, communicate, and exchange information following the advent of the Web and the growth of mobile communications technology. According to Rainie and Wellman (2012), society is increasingly organized along the lines of multiple, overlapping social networks, each of which is characterized by fluid and dynamic forms of membership. As a result of these shifts in social structure, individuals are likely to be exposed to multiple sources of heterogeneous information, and this may help to allay concerns about the selective exposure effects that filter bubbles are deemed to produce. A consideration of networked individualism thus reminds us that a user’s informational ecology is not necessarily exhausted by the nature of their interaction with a particular search engine. Once this broader informational ecology is taken into consideration, concerns about the epistemological impact of personalized search may start to look a little overblown. Finally, a more positive perspective on personalized search is provided by the notion of mandevillian intelligence (see Smart forthcoming-b, forthcoming-c). Mandevillian intelligence is a specific form of collective intelligence in which the cognitive shortcomings and epistemic vices of the individual agent are seen to yield cognitive benefits and epistemic virtues at the collective or social level of analysis, for example, at the level of collective doxastic agents (see Palermos 2015) or socio-epistemic systems (see Goldman 2011). According to this idea, personalized search systems may play a productive role in serving the collective cognitive good, providing a means by which individual vices (e.g., a tendency for confirmation bias) are translated into something that more closely resembles an epistemic virtue (e.g., greater cognitive coverage of a complex space of thoughts, ideas, opinions, and so on). Consider, for example, the way in which personalized search may help to focus individual attention on particular bodies of information, thereby restricting access to a larger space of ideas, opinions, and other information.
While such forms of ‘restricted access’ or ‘selective information exposure’ are unlikely to yield much in the way of an epistemic benefit for the individual agent, it is possible that by exploiting (and, indeed, accentuating!) an existing cognitive bias (e.g., confirmation bias), personalized search technologies may work to promote cognitive diversity, helping to prevent precipitant forms of cognitive convergence (see Zollman 2010) and supporting an effective division of cognitive labor (see Muldoon 2013). This possibility reveals something of a tension in how we interpret or evaluate the veritistic value of a particular technology or epistemic practice. In particular, it seems that assessments of veritistic value may vary depending on whether it is individual epistemic agents or the collective ensembles in which those individuals are situated that is the specific focus of epistemological attention. Needless to say, much more work needs to be done to evaluate these claims about the potential epistemic benefits of personalized search (as well as perhaps other forms of online
information filtering). Note, however, that in the absence of the notion of mandevillian intelligence, the epistemic consequences of personalized search might have seemed self-evident and thus unworthy of further scientific and philosophical scrutiny. This helps to highlight one of the ways in which the notion of mandevillian intelligence is relevant to applied epistemology: it helps to provide the conceptual basis for novel investigative efforts that seek to explore the epistemic consequences of, for example, technological interventions at both the individual and collective (social) levels.
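The mandevillian proposal is developed philosophically rather than computationally in the sources just cited, but a toy simulation can make the claimed inversion of individual vice into collective virtue concrete. The sketch below is an illustrative construction rather than a model drawn from those sources, and all of its parameters are arbitrary: “biased” agents caricature confirmation bias by never abandoning the hypothesis their prior favors, while “unbiased” agents herd toward the current front-runner, and the quantity tracked is how many rival hypotheses remain under active investigation.

```python
import random

# Toy model (parameters arbitrary): N agents investigate K rival
# hypotheses. "Biased" agents never abandon the hypothesis their prior
# favors; "unbiased" agents drift toward whichever hypothesis is
# currently most popular.

random.seed(42)
N, K, ROUNDS = 50, 10, 20

def hypotheses_still_studied(biased):
    positions = [random.randrange(K) for _ in range(N)]  # initial priors
    for _ in range(ROUNDS):
        leader = max(set(positions), key=positions.count)  # front-runner
        if not biased:
            # Each unbiased agent switches to the front-runner with
            # probability 0.5 per round.
            positions = [leader if random.random() < 0.5 else h
                         for h in positions]
    # Collective "cognitive coverage": distinct hypotheses under study.
    return len(set(positions))

print("biased agents:  ", hypotheses_still_studied(True))
print("unbiased agents:", hypotheses_still_studied(False))
```

On this crude accounting, it is precisely the refusal to update that keeps collective cognitive coverage high, echoing Zollman’s (2010) point about the epistemic costs of premature convergence; nothing in the toy model, of course, shows that actual personalized search has this effect.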
Web-extended knowledge
One of the ways in which epistemologists have sought to understand the epistemic effects of the Web is by drawing on externalist approaches to mind and cognition (see Clark 2008). According to the notion of active externalism, for example, the causally active physical vehicles that realize mental states and processes can sometimes extend beyond the traditional biological borders of the brain (and body) to include a variety of non-biological (i.e., extra-organismic) resources (Clark and Chalmers 1998). This idea is sometimes presented as a thesis about the explanatory kinds of interest to cognitive science (in which case it is commonly referred to as the Hypothesis of Extended Cognition or HEC), and sometimes it is presented as a thesis about mentalistic folk categories, such as states of belief (in which case it is commonly referred to as the Extended Mind Thesis or EMT). With respect to the EMT, it has been suggested that the nature of our interaction with the Web supports the emergence of Web-extended minds; that is, forms of bio-technological merger in which the Web serves as part of the realization base for some of our folk psychological mental states, most notably states of dispositional belief (Smart 2012). This possibility has been discussed in relation to criteria that are used to discriminate genuine forms of cognitive extension from those of a more ersatz variety (see Clark and Chalmers 1998). Smart (2012) thus talks about the Web in terms of the opportunities it provides for quick and easy access to online information and the way in which these opportunities speak to at least one of the criteria for cognitive extension discussed by Clark and Chalmers (1998), namely the accessibility criterion. Recently, the notion of Web-extended minds has led epistemologists to make a number of claims about the impact of the Web on our epistemic profiles. One implication of the Web-extended mind concept, Ludwig (2015) argues, is that we are able to envisage a profound transformation of our doxastic potential. In particular, Ludwig anticipates “an explosion of dispositional beliefs and knowledge that is caused by digital information resources such as Wikipedia or Google” (2015: 355). Similar views are expressed by Bjerring and Pedersen. They argue that the Web enables us to enjoy various forms of “restricted omniscience,” wherein we have more or less “complete knowledge about a particular, fairly specific subject matter” (2014: 25). We thus arrive at a claim that seems to follow quite naturally from the possibility of Web-extended minds – a claim that is nicely captured by the Web-extended knowledge hypothesis:

Cognitively potent forms of bio-technological merger between human agents and the Web serve as the basis for Web-extended knowledge, i.e., epistemically relevant doxastic states that supervene on material elements forming part of the technological and informational fabric of the World Wide Web.

Unfortunately, there are a number of problems confronting this hypothesis. One of the most pressing problems relates to the way in which the criteria for cognitive extension (e.g., those
proposed by Clark and Chalmers) work against the epistemic interests of the technologically extended agent (see Smart forthcoming-a). In order to help us understand this, consider the accessibility criterion, as discussed by Clark and Chalmers (1998). The general idea behind the accessibility criterion is that external information should be quickly and easily accessible – it should be possible for agents to draw on external information whenever it is required and easily incorporate this information into their cognitive processing routines. Accessibility thus seems to demand a degree of fluency with respect to the interactions that occur between a human agent and a bio-external resource – where the notion of fluency can be understood (at least in part) as the “subjective experience of ease or difficulty with which we are able to process information” (Oppenheimer 2008: 237). Now, the problem with claims regarding easy access and fluent interaction is that these properties seem to be in some tension with the possibility of Web-extended forms of knowledge. One of the key insights to emerge from research on fluency, for example, is that fluent processing is often associated with a “truth bias,” in which the ‘truth’ of some body of external information is judged relative to the subjective ease with which it is processed (Alter and Oppenheimer 2009). In the context of the Web, where information is of variable reliability, this particular kind of cognitive bias looks set to undermine the epistemic integrity of the Web-extended cognizer. Indeed, it seems reasonable to think that, in the interests of preserving positive epistemic standing, Web users should be somewhat circumspect about online information. At the very least, it seems important for the epistemically responsible agent to subject online information to critical evaluation and scrutiny (Heersmink 2018; Record and Miller forthcoming). But now note how this seemingly sensible demand for critical evaluation conflicts with the putative role that fluency plays in extending the epistemic reach of the Web-extended cognizer. Fluency thus seems to speak in favor of the possibility of Web-extended minds, but it seems to work against the interests of Web-extended knowers (i.e., agents whose epistemic credentials are enhanced as a result of Web-based forms of cognitive extension). We thus encounter the extended cognizer vs. extended knower problem:

The properties that work to ensure that an external resource can be treated as a candidate for cognitive incorporation are also, at least in some cases, the very same properties that work to undermine or endanger the positive epistemic standing of the technologically extended agent.
(see Smart forthcoming-a)

Somewhat surprisingly, this problem highlights a potential tension between our notions of extended cognition and extended knowledge. Contrary to the idea that Web-extended minds are the natural harbingers of Web-extended knowledge, cognitive extension may lead to a form of epistemic diminishment, undermining the extent to which extended cognizers are the proper targets of knowledge attribution.
Epistemic feelings
In addition to ideas concerning Web-extended knowledge and Web-extended knowers, there is a further way in which active externalism is relevant to epistemological analyses of the Web. This is revealed by the results of recent empirical studies investigating the effect of Web access on subjective, epistemically relevant experiences, such as the feeling of knowing (Fisher et al. 2015; Ward 2013). The feeling of knowing is one of a range of epistemic feelings that have been studied by epistemologists (Michaelian and Arango-Muñoz 2014). It refers to the
experience of being able to retrieve or access some piece of information (e.g., the answer to a specific question), typically from bio-memory. In situations where people use the Web to search for online information, however, it seems that this feeling of knowing ‘extends’ to include the informational contents of the online realm. Searching for information, Fisher et al. suggest, “leads people to conflate information that can be found online with knowledge in the head” (2015: 675). Similarly, Ward notes that as people turn to the “cloud mind of the internet, they seem to lose sight of where their own minds end and the mind of the internet begins. They become one with the cloud, believing that they themselves are spectacularly adept at thinking about, remembering, and locating information” (2013: 88). These findings are of interest, because they have long been anticipated by those working in the active externalist camp. In 2007, for example, Clark proposed that our subjective sense of what we know is informed by the kind of access we have to bio-external information:

Easy access to specific bodies of information, as and when such access is normally required, is all it takes for us to factor such knowledge in as part of the bundle of skills and abilities that we take for granted in our day to day life. And it is this bundle of taken-for-granted skills, knowledge, and abilities that … quite properly structures and informs our sense of who we are, what we know, and what we can do.
(Clark 2007: 106)

Such comments seem particularly prescient in view of the findings by Ward (2013) and Fisher et al. (2015). Indeed, from the standpoint of active externalism, it might be thought that the results of Ward (2013) and Fisher et al. (2015) are largely consistent with the idea of online information being incorporated into an individual’s body of personal beliefs and (perhaps) knowledge. Inasmuch as we accept this to be the case, then it potentially alters our views about the significance of Web-induced changes in the feeling of knowing. The aforementioned quotes from Ward (2013) and Fisher et al. (2015) both sound something of a cautionary note regarding the extent to which changes in the feeling of knowing should be seen as marking a genuine shift in an individual’s epistemic and cognitive capabilities – at the very least, such comments appeal to a distinction between what might be called ‘knowledge-in-the-head’ and ‘knowledge-on-the-Web’. Extended approaches to cognition and knowledge encourage us to question this distinction. From an active externalist perspective, it is entirely possible that changes in epistemic feelings are merely the subjective corollary of a particular form of cognitive extension, one that emerges as a result of our ever more intimate cognitive and epistemic contact with the informational contents of the online realm. It goes without saying, of course, that feelings of knowing are not sufficient for genuine knowledge attribution – we may feel we know lots of things without actually knowing anything! It is thus important to note that while the work of Ward (2013) and Fisher et al. (2015) might be seen to support claims about the Web-extended mind, this does not necessarily tell us anything about Web-based forms of extended knowledge. From the perspective of applied epistemology, it will be important, in future work, to consider the extent to which Web-based shifts in epistemic feelings provide a reliable indication of what we do (and do not) know.
It will also be important to consider the extent to which changes in subjective experience alter our tendency to engage in epistemically relevant processes and practices (e.g., those that help to ensure the modal stability of our beliefs across close possible worlds). Interestingly, research in social psychology suggests that changes in self-related perceptions of expertise contribute to a more closed-minded or dogmatic cognitive style (Ottati et al. 2015). In view of such results, it is natural to wonder whether changes in feelings of knowing (such as those accompanying the
use of Web technologies) might lead individuals to become more dogmatic and thus diminish their epistemic standing under a virtue-theoretic (especially, a virtue responsibilist) conception of knowledge (see Baehr 2012).
Social machines
Despite the fact that the Web is a relatively recent phenomenon, it plays a crucial role in an ever-expanding array of social processes. Indeed, the sudden disappearance of the Web would, in all likelihood, result in a severe disruption of society, on a par perhaps with that resulting from a coordinated nuclear strike. (This is somewhat ironic given that the Web emerged on the back of research efforts to support the continued functioning of society in the face of a nuclear attack!) For this reason, it is appropriate to think of the Web as a form of critical infrastructure for society, resembling, perhaps, the more traditional elements of national infrastructure, such as the road, rail, and electricity distribution networks. Arguably, the reason why the Web has emerged to occupy this role is because of its ever more intimate integration into practically every aspect of social life. For better or worse, the Web has now become an integral part of the structures and processes that make our contemporary society what it is – part of the integrated physical fabric that makes society materially possible. This vision of socio-technical integration lies at the heart of an important concept that has emerged in the context of the Web science literature. This is the concept of social machines (Palermos 2017; Smart and Shadbolt 2014). Social machines are systems in which human and (Web-based) machine elements are jointly involved in the mechanistic realization of phenomena that subtend the computational, cognitive, and social domains (Smart and Shadbolt 2014). From an epistemological perspective, one category of social machines is of particular interest. These are known as knowledge machines (Smart et al. 2017). A knowledge machine is a social machine that participates in some form of knowledge-relevant process, such as the acquisition, discovery, and representation of knowledge. Citizen science systems, such as Galaxy Zoo (Lintott et al. 2008), are one kind of knowledge machine that has been the focus of considerable research attention. These have grown in prominence over recent years, to the point where they play an important role in many forms of scientific practice (see Meyer and Schroeder 2015). Such characterizations are sufficient to make citizen science systems worthy of applied epistemological analysis, and this is especially so given the interest in applying epistemological theory to the understanding and analysis of scientific processes (e.g., Palermos 2015). Another important class of knowledge machines are human computation systems (Law and von Ahn 2011), which seek to incorporate human agents into some form of computational processing. One example of such a system is the online protein folding game, Foldit (Cooper et al. 2010). This system incorporates the pattern matching and spatial reasoning abilities of human participants into a hybrid computational process that aims to predict the structural properties of protein molecules. The role of the human participants in these sorts of systems should not be underestimated. In many cases, the task being performed by the larger socio-technical ensemble – the one involving both human and machine elements – is not one that could be (easily) performed in the absence of the (often large-scale) socio-technical infrastructure that social machines make available. This is something that is often explicitly recognized by those who seek to harness the epistemic potential of social machines.
In one of the papers describing the Foldit system, for example, the authors explicitly acknowledge the contributions made by more than 57,000 of its users (Cooper et al. 2010). One of the things that is revealed by a consideration of citizen science and human computation systems is the extent to which social machines draw on the complementary contributions
of both human and machine elements. Human agents are thus the locus of particular kinds of capability that subtend the epistemic, cognitive, perceptual, behavioral, social, moral, emotional, affective, and aesthetic domains; computing technologies, in contrast, are renowned for their speed of processing, their ability to engage in repetitive symbolic manipulation, their capacity for digital data storage, and so on. By bringing these diverse capabilities together in the context of a complex task, social machines are potentially poised to tackle problems that currently lie beyond the cognitive and epistemic reach of the bare biological brain (see Hendler and Berners-Lee 2010).
Network epistemology
One of the goals of the social machine research effort is to gain a better understanding of the forces and factors that influence the performance profile of social machines relative to the kinds of tasks in which they are involved. In the case of knowledge machines, for example, scientists are interested in understanding how different organizational schemes (characterized as the pattern of information flow and influence between human and technological elements) affect the quality of specific epistemic products, such as the reliability of propositional statements. It is here that we encounter a potentially productive point of contact between the scientific goals of social machine researchers and the philosophical concerns of the epistemological community. Goldman (2011), for example, identifies a specific form of social epistemology, called systems-oriented social epistemology, whose primary objective is to understand the veritistic value of different kinds of socially distributed epistemic practice and social organization. This, it should be clear, is very much in accord with the goals of social machine researchers. It is also something that is well-aligned with a body of work in network science that seeks to illuminate the ways in which the topological structure of social networks influences the dynamics of belief formation and collective cognitive processing within a community of interacting cognitive agents (Glinton et al. 2010; Kearns 2012). Such forms of network epistemology (see Zollman 2013) (or, more generically, computational social epistemology) promise to inform our understanding of the complex interactions that occur between forces and factors at a variety of levels (e.g., the cognitive, the social, and the technological), as well as the ways in which these interactions influence the epistemic properties (e.g., truth-tracking capabilities) of individual agents and socio-epistemic systems.
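The flavor of such work can be conveyed with a minimal simulation in the spirit of this literature; the sketch below is a simplified reconstruction for illustration only, not the precise model of Zollman (2013) or Glinton et al. (2010), and all of its parameters are arbitrary. Agents repeatedly choose between two actions of unknown reliability, pool trial outcomes with their network neighbors, and the question of interest is how the density of the communication network affects whether the community settles on the objectively better action.

```python
import random

# Illustrative two-armed bandit model of a community of inquirers.
# Arm 1 is objectively better (success rate 0.6 vs 0.5); agents
# myopically pick the arm that currently looks best and share trial
# outcomes with their network neighbors.

def run(neighbors, trials=500, n=10, p=(0.5, 0.6)):
    # stats[agent][arm] = [successes, pulls]; weak uniform priors.
    stats = [[[1, 2], [1, 2]] for _ in range(n)]
    for _ in range(trials):
        results = []
        for a in range(n):
            est = [s / t for s, t in stats[a]]
            arm = 0 if est[0] > est[1] else 1  # myopic choice; ties -> arm 1
            results.append((arm, random.random() < p[arm]))
        for a in range(n):  # pool evidence from neighbors (self included)
            for b in neighbors(a, n):
                arm, success = results[b]
                stats[a][arm][0] += success
                stats[a][arm][1] += 1
    # Fraction of agents whose final estimates favor the better arm.
    return sum(s[1][0] / s[1][1] > s[0][0] / s[0][1] for s in stats) / n

cycle = lambda a, n: [(a - 1) % n, a, (a + 1) % n]  # sparse topology
complete = lambda a, n: list(range(n))              # dense topology

random.seed(1)
runs = 50
print("cycle:   ", sum(run(cycle) for _ in range(runs)) / runs)
print("complete:", sum(run(complete) for _ in range(runs)) / runs)
```

Depending on the parameters, denser communication can entrench early misleading evidence across the whole community, which is one way of making precise the claim that organizational schemes matter to the reliability of a socio-epistemic system.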
An epistemically safe environment?
One issue that typically arises in debates about the epistemic impact of the Web concerns the extent to which the Web can serve as a source of reliable information. At first sight, it would seem that the open and democratic nature of the Web (i.e., the fact that pretty much anyone can participate in the creation of online content) poses a problem for claims about the reliability of online content. The problem, of course, is that by enabling every Tom, Dick, or Harry to add or edit content, we run the risk of contaminating the online environment with misleading and inaccurate information. In the face of such epistemic risks and hazards, is there any reason to think that the Web is apt to serve the epistemic interests of our doxastic systems? One response to this question involves an appeal to the sorts of social participation that are enabled by the Web. Of particular interest is the scale of Web-based social participation – the fact that many hundreds or thousands (and sometimes millions) of individuals are involved in the creation and curation of specific bodies of online information (consider, for example, the
number of people who have contributed to the Wikipedia system). Interestingly, large-scale social participation may be relevant to some of the concerns that have arisen in respect of the dubious reliability of online content. To help us see this, consider Google’s PageRank algorithm (Brin and Page 1998), which is used to support the ranking of Web search results. Part of the reason the PageRank algorithm works is because it exploits the linking behavior of human users on a global scale, and this helps to ensure that the efforts of a ‘few’ malign individuals will be swamped by the efforts of the more virtuously minded masses. Similar kinds of approach are used by a variety of Web-based systems. When it comes to human computation or citizen science systems, for example, contributions are typically solicited from multiple independent users as a means of improving the reliability of specific epistemic outputs. One thing that is worth noting here is that these kinds of reliability mechanism are, in many cases, very difficult to sabotage. When it comes to Google Search, for example, any attempt to ‘artificially’ elevate the ranking assigned to specific contributions (e.g., a user’s website) is offset by the globally distributed nature of the linking effort, coupled with the fact that links to a specific resource are themselves weighted by the ranking of the resource from which the link originates. In view of such safeguards, it is difficult for individual agents to ‘artificially’ recreate the sort of endorsement that is reflected in the results of the PageRank algorithm. At the very least, it is difficult to see how such endorsement could be manufactured in the absence of a large-scale, socially coordinated effort.
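A stripped-down implementation helps to make the weighting point vivid. The following sketch computes ranks by power iteration over a small toy web; it omits the dangling-node handling, parameter tuning, and anti-spam machinery of the production algorithm described by Brin and Page (1998), and the example graph is invented for illustration.

```python
# Simplified PageRank by power iteration (illustrative only).
# links[p] lists the pages that p links to.
links = {
    "a": ["hub"], "b": ["hub"], "c": ["hub"],  # three pages endorse "hub"
    "hub": ["endorsed"],      # the well-endorsed hub endorses one page
    "obscure": ["promoted"],  # a single unendorsed page promotes another
    "endorsed": [], "promoted": [],
}

def pagerank(links, damping=0.85, iters=50):
    n = len(links)
    rank = {p: 1 / n for p in links}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in links}
        for p, outs in links.items():
            for q in outs:
                # Endorsement is weighted by the endorser's own rank,
                # shared across its outbound links.
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page:10s}{score:.3f}")
```

In this toy graph, “endorsed” ends up ranked well above “promoted” even though each receives exactly one inbound link: what differs is the rank of the page doing the linking. It is this property that makes manufactured endorsement expensive, since high rank can only be conferred by pages that have themselves accumulated rank.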
Reliability indicators and trust
Trust is a topic that lies at the intersection of both Web science (Golbeck 2006) and contemporary epistemology. From an epistemological perspective, issues of trust are typically discussed in relation to what is dubbed testimonial knowledge (Lackey 2011), that is, the knowledge communicated by other individuals. The fact that so much of our knowledge is based on the testimony of others raises questions about the extent to which we are justified in believing what others tell us. In the context of face-to-face encounters, of course, there are a variety of cues – or, in the terminology of Craig (1990), “indicator properties” – that are apt to influence our judgments as to who is a trustworthy informant (see Sperber et al. 2010). Such cues are likely to play an important role in influencing decisions as to who we select as a source of information, as well as the extent to which we endorse the information provided by a particular source (e.g., information may be rejected if an informant shows signs of dishonesty or incompetence while communicating information). By being responsive to such cues, it seems that we are able to exercise considerable ability with respect to the epistemically virtuous selection and endorsement of sources of testimonial information, a claim that is broadly consistent with the tenets of virtue reliabilistic approaches to knowledge (Greco 2007).3 It is easy to see how the judicious exploitation of reliability-relevant cues could serve as the foundation for testimonial knowledge in face-to-face encounters. But are such strategies relevant to the processing of information originating from the online realm? Do we, in other words, encounter cues on the Web that could be used to assess the reliability of information? And are these cues, if they exist, actually used to judge the trustworthiness or credibility of particular information sources? There are a number of strands of Web science research that speak to these issues. In terms of the information that is presented on a typical website, studies have revealed that user credibility judgments tend to be influenced by relatively superficial features, such as a website’s appearance, structure, and navigability (Fogg et al. 2003; Metzger 2007; Wathen and Burkell 2002). Other
studies have explored the relevance of social cues to credibility assessments. Westerman et al. (2012), for example, investigated the relationship between perceptions of source credibility in the context of the Twitter microblogging system. Their results revealed the presence of a curvilinear relationship, with too many or too few Twitter followers having a negative impact on perceptions of expertise and trustworthiness. The problem with the sorts of cues investigated by these studies (e.g., site design features, number of Twitter followers, etc.) is that they are relatively easy to ‘fake’. Site designs are easily modified, and fake Twitter accounts (and thus non-existent followers) are relatively easy to manufacture. Ideally, what is required is a set of cues that provide something akin to an honest signal in evolutionary theory (see Pentland 2008). In other words, an important property of an online reliability indicator is that it reliably indicates the reliability of online content. A crucial question, therefore, is whether the Web provides access to these particular kinds of (‘honest’) reliability indicators. In fact, such indicators are available, and as with almost everything on the Web, they rely heavily on the fact that the Web is as much a social environment as it is a technological one. Some examples of such indicators, as identified by Taraborelli (2008), include the following:

1. implicit indicators of individual endorsement (such as indicators that a specific user selected/visited/purchased an item);
2. explicit indicators of individual endorsement (such as explicit ratings produced by specific users);
3. implicit indicators of socially aggregated endorsement (such as density of bookmarks or comments per item in social bookmarking systems);
4. explicit indicators of socially aggregated endorsement (such as average ratings extracted from a user community);
5. algorithmic endorsement indicators (such as PageRank and similar usage-independent ranking algorithms);
6. hybrid endorsement indicators (such as interestingness indicators in Flickr, which take into account both explicit user endorsement and usage-independent metrics).

All of these indicators, it should be clear, are ones that rely, to a greater or lesser extent, on the behavior of other users. They are, as such, reminiscent of work that seeks to investigate the phenomenon of social proof (Cialdini 2007). As work in this area suggests, a tendency to rely on the actions of others does not always yield positive results; sometimes it can lead to herd behavior, and in an epistemic context, there is a risk that users will erroneously equate popularity with reliability. Nevertheless, there are, it seems, a rich variety of cues that users can exploit as part of the epistemically virtuous selection and endorsement of online content, and these cues do seem to play an important role in guiding users’ actual judgments as to the credibility of online content (see Metzger et al. 2010). It thus seems that rather than being an environment that is deficient or impoverished with respect to the availability of reliability-indicating cues, the Web may, in fact, afford access to cues that are both more varied and perhaps more reliable than those encountered in face-to-face testimonial exchanges. Key issues for future research in this area concern the extent to which features of the online socio-technical environment can be used to support the construction, evaluation, and validation of epistemically relevant indicator properties.
It will also be important to assess the extent to which socially constructed indicator properties are immune to the various forms of epistemic injustice that have been discussed in the epistemological literature (see Fricker 2003).
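To give a concrete sense of how such indicators might be operationalized, the sketch below blends an explicit socially aggregated endorsement signal with a usage-independent algorithmic rank, in the spirit of Taraborelli’s hybrid indicators (item 6 in the list above). The weighting scheme and the use of a Wilson score interval to discount small rating samples are illustrative choices, not a description of any deployed system.

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score interval: an aggregated
    endorsement signal that discounts small samples."""
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

def hybrid_score(positive, total, algo_rank, w_social=0.7):
    # Blend explicit social endorsement with a usage-independent
    # algorithmic rank (both in [0, 1]); the weights are arbitrary.
    return (w_social * wilson_lower_bound(positive, total)
            + (1 - w_social) * algo_rank)

# Three ratings, all positive, on a low-ranked item vs. 90/100 positive
# ratings on a moderately ranked one:
print(hybrid_score(3, 3, algo_rank=0.2))     # ~0.37
print(hybrid_score(90, 100, algo_rank=0.5))  # ~0.73
```

Even this crude blend exhibits a property the social-proof worries make salient: a handful of enthusiastic ratings is discounted relative to a large, slightly less favorable sample, though no such aggregate can by itself rule out coordinated herding.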
Conclusion
The Web provides access to a digital compendium of information that is unprecedented in terms of its scale, scope, and accessibility. It is, in addition, a resource that plays an ever-greater role in shaping our epistemic capabilities at both an individual and collective level. The Web is, as such, a valuable form of epistemic infrastructure for our species, influencing the kinds of beliefs we form and providing a platform for us to discover, manage, and exploit epistemic resources. As a discipline whose primary focus is to understand the factors that influence our epistemic capabilities, applied epistemology establishes a natural point of contact with contemporary Web science, helping to reveal the Web’s epistemic properties and informing the search for interventions that maximize its epistemic power and potential.
Notes
1 See https://duckduckgo.com/ (accessed January 25, 2018).
2 We are grateful to an anonymous reviewer for highlighting this particular point.
3 By being the reliable receivers of testimony, for example, our cognitive abilities play an important role in explaining why it is that we believe the truth in testimonial exchanges.
References
Alter, A. L., and Oppenheimer, D. M. (2009). “Uniting the Tribes of Fluency to Form a Metacognitive Nation.” Personality and Social Psychology Review, 13: 219–35.
Baehr, J. (2012). The Inquiring Mind: On intellectual virtues and virtue epistemology. Oxford: Oxford University Press.
Bjerring, J. C., and Pedersen, N. J. L. L. (2014). “All the (Many, Many) Things We Know: Extended knowledge.” Philosophical Issues, 24: 24–38.
Brin, S., and Page, L. (1998). “The Anatomy of a Large-Scale Hypertextual Search Engine.” 7th Annual Conference of the World Wide Web. Brisbane, Australia.
Cialdini, R. B. (2007). Influence: The psychology of persuasion. New York: HarperCollins.
Clark, A. (2007). “Soft Selves and Ecological Control,” in D. Ross, D. Spurrett, H. Kincaid, and G. L. Stephens (eds.), Distributed Cognition and the Will: Individual volition and social context. Cambridge, MA: MIT Press.
Clark, A. (2008). Supersizing the Mind: Embodiment, action, and cognitive extension. New York: Oxford University Press.
Clark, A., and Chalmers, D. (1998). “The Extended Mind.” Analysis, 58: 7–19.
Coady, D. (2012). What to Believe Now: Applying epistemology to contemporary issues. Oxford: Blackwell.
Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., … Players, F. (2010). “Predicting Protein Structures with a Multiplayer Online Game.” Nature, 466: 756–60.
Craig, E. (1990). Knowledge and the State of Nature. Oxford: Oxford University Press.
Fallis, D. (2008). “Toward an Epistemology of Wikipedia.” Journal of the American Society for Information Science and Technology, 59: 1662–74.
Fallis, D. (2011). “Wikipistemology,” in A. I. Goldman and D. Whitcomb (eds.), Social Epistemology: Essential readings. New York: Oxford University Press.
Fisher, M., Goddu, M. K., and Keil, F. C. (2015). “Searching for Explanations: How the Internet inflates estimates of internal knowledge.” Journal of Experimental Psychology: General, 144: 674–87.
Fogg, B., Soohoo, C., Danielson, D. R., Marable, L., Stanford, J., and Tauber, E. R. (2003). “How Do Users Evaluate the Credibility of Web Sites? A study with over 2,500 participants,” in J. Arnowitz, A. Chalmers, and T. Swack (eds.), Conference on Designing for User Experiences. San Francisco, CA: ACM.
Fricker, M. (2003). “Epistemic Justice and a Role for Virtue in the Politics of Knowing.” Metaphilosophy, 34: 154–73.
Glinton, R., Paruchuri, P., Scerri, P., and Sycara, K. (2010). “Self-Organized Criticality of Belief Propagation in Large Heterogeneous Teams,” in M. J. Hirsch, P. M. Pardalos, and R. Murphey (eds.), Dynamics of Information Systems: Theory and Applications. Berlin: Springer.
Golbeck, J. (2006). “Trust on the World Wide Web: A survey.” Foundations and Trends in Web Science, 1: 131–97.
Goldman, A. I. (2008). “The Social Epistemology of Blogging,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy. New York: Cambridge University Press.
Goldman, A. I. (2011). “A Guide to Social Epistemology,” in A. I. Goldman and D. Whitcomb (eds.), Social Epistemology: Essential readings. New York: Oxford University Press.
Greco, J. (2007). “The Nature of Ability and the Purpose of Knowledge.” Philosophical Issues, 17: 57–69.
Heersmink, R. (2018). “A Virtue Epistemology of the Internet: Search engines, intellectual virtues, and education.” Social Epistemology, 32: 1–12.
Heintz, C. (2006). “Web Search Engines and Distributed Assessment Systems.” Pragmatics & Cognition, 14: 387–409.
Hendler, J., and Berners-Lee, T. (2010). “From the Semantic Web to Social Machines: A research challenge for AI on the World Wide Web.” Artificial Intelligence, 174: 156–61.
Introna, L., and Nissenbaum, H. (2000). “Defining the Web: The politics of search engines.” Computer, 33: 54–62.
Kearns, M. (2012). “Experiments in Social Computation.” Communications of the ACM, 55: 56–67.
Knight, S. (2014). “Finding Knowledge – What is it to ‘know’ when we search?” in R. König and M. Rasch (eds.), Society of the Query Reader: Reflections on Web Search. Amsterdam: Institute of Network Cultures.
Lackey, J. (2011). “Testimony: Acquiring knowledge from others,” in A. I. Goldman and D. Whitcomb (eds.), Social Epistemology: Essential readings. New York: Oxford University Press.
Law, E., and von Ahn, L. (2011). “Human Computation.” Synthesis Lectures on Artificial Intelligence and Machine Learning, 5: 1–121.
Lintott, C. J., Schawinski, K., Slosar, A., Land, K., Bamford, S., Thomas, D., … van den Berg, J. (2008). “Galaxy Zoo: Morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey.” Monthly Notices of the Royal Astronomical Society, 389: 1179–89.
Ludwig, D. (2015). “Extended Cognition and the Explosion of Knowledge.” Philosophical Psychology, 28: 355–68.
Metzger, M. J. (2007). “Making Sense of Credibility on the Web: Models for evaluating online information and recommendations for future research.” Journal of the American Society for Information Science and Technology, 58: 2078–91.
Metzger, M. J., Flanagin, A. J., and Medders, R. B. (2010). “Social and Heuristic Approaches to Credibility Evaluation Online.” Journal of Communication, 60: 413–39.
Meyer, E. T., and Schroeder, R. (2015). Knowledge Machines: Digital transformations of the sciences and humanities. Cambridge, MA: MIT Press.
Michaelian, K., and Arango-Muñoz, S. (2014). “Epistemic Feelings, Epistemic Emotions: Review and introduction to the focus section.” Philosophical Inquiries, 2: 97–122.
Miller, B., and Record, I. (2013). “Justified Belief in a Digital Age: On the epistemic implications of secret Internet technologies.” Episteme, 10: 117–34.
Muldoon, R. (2013). “Diversity and the Division of Cognitive Labor.” Philosophy Compass, 8: 117–25.
Oppenheimer, D. M. (2008). “The Secret Life of Fluency.” Trends in Cognitive Sciences, 12: 237–41.
Ottati, V., Price, E. D., Wilson, C., and Sumaktoyo, N. (2015). “When Self-Perceptions of Expertise Increase Closed-Minded Cognition: The earned dogmatism effect.” Journal of Experimental Social Psychology, 61: 131–38.
Palermos, S. O. (2015). “Active Externalism, Virtue Reliabilism and Scientific Knowledge.” Synthese, 192: 2955–86.
Palermos, S. O. (2017). “Social Machines: A philosophical engineering.” Phenomenology and the Cognitive Sciences, 16: 953–78.
Pariser, E. (2011). The Filter Bubble: What the Internet is hiding from you. London: Penguin.
Pentland, A. (2008). Honest Signals: How they shape our world. Cambridge, MA: MIT Press.
Rainie, L., and Wellman, B. (2012). Networked: The new social operating system. Cambridge, MA: MIT Press.
Record, I., and Miller, B. (forthcoming). “Taking iPhone Seriously: Epistemic technologies and the extended mind,” in A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (eds.), Extended Epistemology. Oxford: Oxford University Press.
Robbins, P., and Aydede, M. (eds.). (2009). The Cambridge Handbook of Situated Cognition. New York: Cambridge University Press.
Simpson, T. W. (2012). “Evaluating Google as an Epistemic Tool.” Metaphilosophy, 43: 426–45.
Smart, P. R. (2012). “The Web-Extended Mind.” Metaphilosophy, 43: 446–63.
Smart, P. R. (forthcoming-a). “Emerging Digital Technologies: Implications for extended conceptions of cognition and knowledge,” in A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (eds.), Extended Epistemology. Oxford: Oxford University Press.
Smart, P. R. (forthcoming-b). “Mandevillian Intelligence.” Synthese. https://doi.org/10.1007/s11229-017-1414-z
Smart, P. R. (forthcoming-c). “Mandevillian Intelligence: From individual vice to collective virtue,” in A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (eds.), Socially Extended Knowledge. Oxford: Oxford University Press.
Smart, P. R., Clowes, R. W., and Heersmink, R. (2017). “Minds Online: The interface between web science, cognitive science and the philosophy of mind.” Foundations and Trends in Web Science, 6: 1–232.
Smart, P. R., and Shadbolt, N. R. (2014). “Social Machines,” in M. Khosrow-Pour (ed.), Encyclopedia of Information Science and Technology. Hershey, PA: IGI Global.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., and Wilson, D. (2010). “Epistemic Vigilance.” Mind & Language, 25: 359–93.
Taraborelli, D. (2008). “How the Web Is Changing the Way We Trust,” in K. Waelbers, A. Briggle, and P. Brey (eds.), Current Issues in Computing and Philosophy. Amsterdam: IOS Press.
Waller, V. (2011). “Not Just Information: Who searches for what on the search engine Google?” Journal of the American Society for Information Science and Technology, 62: 761–75.
Ward, A. F. (2013). One with the Cloud: Why People Mistake the Internet’s Knowledge for Their Own. PhD, Harvard University.
Wathen, C. N., and Burkell, J. (2002). “Believe It or Not: Factors influencing credibility on the Web.” Journal of the American Society for Information Science and Technology, 53: 134–44.
Westerman, D., Spence, P. R., and Van Der Heide, B. (2012). “A Social Network as Information: The effect of system generated reports of connectedness on credibility on Twitter.” Computers in Human Behavior, 28: 199–206.
Zollman, K. J. S. (2010). “The Epistemic Benefit of Transient Diversity.” Erkenntnis, 72: 17–35.
Zollman, K. J. S. (2013). “Network Epistemology: Communication in epistemic communities.” Philosophy Compass, 8: 15–27.
3 WIKIPEDIA
Karen Frost-Arnold
Introduction
As a free encyclopedia produced mostly by volunteer contributors, Wikipedia is an ambitious experiment in online collaborative knowledge dissemination. It is one of the most visited sites on the Internet. It receives 374 million unique visitors every month, and regularly appears as the top Google result to factual queries (“Wikipedia:About”). There are 280 active Wikipedias in different languages (“List of Wikipedias”). The English language Wikipedia is the largest and is the focus of this chapter. English Wikipedia (hereafter ‘Wikipedia’) has over 5 million articles (“Wikipedia:About”) and over 30,000 active Wikipedians editing content (“Active Wikipedians”). As one of the most commonly used reference sites, Wikipedia certainly merits applied epistemologists’ attention. Wikipedia has lofty epistemic goals and is designed and constantly updated with epistemic considerations in mind. The Wikipedian dream is an army of volunteers drawing on reliable, independent sources to provide free access to an accurate, neutral source of encyclopedic knowledge. Anyone can edit most pages and help shape the policies and organizations that maintain Wikipedia’s community. This openness aims at adding content rapidly. Additionally, in place of slower editorial oversight or expert peer review procedures, Wikipedia relies on its contributors to check that articles meet its standards of a neutral point of view, verifiability, and no original research (“Wikipedia:Core content policies”). But does the reality live up to Wikipedians’ dreams? As a normative enterprise, applied epistemology is uniquely positioned to evaluate to what extent Wikipedia actually improves or damages the landscape of human knowledge. This chapter summarizes the Wikipedia debates within applied epistemology and argues that the social organization of the Wikipedia community shapes its epistemic merits and limitations. A useful framework for the epistemology of Wikipedia is veritistic systems-oriented social epistemology (see Fallis 2011). Systems-oriented social epistemology evaluates epistemic systems according to their epistemic outcomes for community members (Goldman 2011). A veritistic social epistemology takes true belief as the fundamental epistemic good (Goldman 1992, 1999). Thus, a veritistic systems-oriented social epistemology of Wikipedia evaluates Wikipedia’s impact on the formation and dissemination of true beliefs within its community of users. Alvin Goldman (1992) lays out five veritistic standards, summarized by Paul Thagard as follows:
1. the reliability of a practice is measured by the ratio of truths to total number of beliefs fostered by the practice;
2. the power of a practice is measured by its ability to help cognizers find true answers to the questions that interest them;
3. the fecundity of a practice is its ability to lead to large numbers of true beliefs for many practitioners;
4. the speed of a practice is how quickly it leads to true answers;
5. the efficiency of a practice is how well it limits the cost of getting true answers (Thagard 1997: 247).

Section 1 evaluates Wikipedia according to standard 1, while section 2 focuses on standards 2–5. Section 3 draws on the social epistemology of trust to argue that Wikipedia’s epistemic status depends on its complex and shifting relations of trust and distrust between community members. Finally, section 4 applies feminist epistemology to show that lack of diversity in Wikipedia’s community is a pressing epistemic threat, especially to standards 1 and 2.
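Read operationally, the first four of these standards suggest simple quantities one could compute for any practice whose outputs can be audited. The sketch below is an illustrative formalization only; it is not drawn from Goldman or Thagard, the field names and figures are invented, and standard 5 is omitted since it would require a cost model.

```python
from dataclasses import dataclass

@dataclass
class PracticeAudit:
    """Toy operationalization of Goldman's first four veritistic
    standards for an audited practice; all field values are invented."""
    true_beliefs: int              # beliefs fostered that are true
    false_beliefs: int             # beliefs fostered that are false
    questions_asked: int
    questions_answered_truly: int
    users_served: int
    mean_seconds_to_answer: float

    def reliability(self):  # standard 1: truths / total beliefs fostered
        return self.true_beliefs / (self.true_beliefs + self.false_beliefs)

    def power(self):        # standard 2: true answers per question asked
        return self.questions_answered_truly / self.questions_asked

    def fecundity(self):    # standard 3: spread of true belief (crude)
        return self.true_beliefs * self.users_served

    def speed(self):        # standard 4: true answers per unit time
        return self.questions_answered_truly / self.mean_seconds_to_answer

audit = PracticeAudit(940, 60, 1000, 870, 5000, 30.0)
print(f"reliability={audit.reliability():.2f}, power={audit.power():.2f}")
```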
1. Is Wikipedia reliable? Should we trust it?
Much epistemic analysis of Wikipedia focuses on the question of Wikipedia’s reliability: is most of the information in Wikipedia accurate (see Fallis 2011: 299)? Work on the price system, prediction markets, and the Condorcet jury theorem suggests that aggregating the viewpoints of many people can produce reliable knowledge, under certain conditions (Estlund 1994; Hayek 1945; Sunstein 2006; Surowiecki 2004). However, Don Fallis (2008) argues that this does not adequately explain Wikipedia’s reliability. Wikipedians do not simply aggregate their views (e.g., they often reach consensus through deliberation rather than voting); any current article on Wikipedia reflects the viewpoint of the most recent editor (rather than an aggregated judgment), and often articles are created by a small group instead of a large crowd (Fallis 2008: 1670). Whether or not Wikipedia illustrates the wisdom of crowds, studies show that it is relatively reliable (Fallis 2008; “Reliability of Wikipedia”). For example, a widely read Nature study found that Wikipedia’s reliability compares well with Encyclopedia Britannica’s (Giles 2005).1 Heilman et al. (2011) review several studies showing that Wikipedia’s medical information is generally reliable, and more recent studies concur (Kräenbring et al. 2014; Kupferberg and Protus 2011; Temple and Fraser 2014). Despite these encouraging results, applied epistemologists’ concerns about reliability center around three features of the encyclopedia: non-expert authorship, anonymous editing, and incentives to damage Wikipedia. Anyone can edit most pages in Wikipedia, which means that editors (i.e., those who write, edit, remove, and categorize information on Wikipedia) are often not experts on the topic. Wikipedia is not a source of new knowledge production that requires content expertise. Instead, Wikipedia aims to be an army of people collecting and disseminating information based on reliable independent sources. Experienced Wikipedians may be “informational experts” who possess procedural skill in managing information (such as finding and inserting citations in Wikipedia articles) (Hartelius 2011; Pfister 2011). Nonetheless, one might worry that Wikipedians who lack content expertise will not know which topics are notable and important, which sources are reliable, which claims have been refuted, etc. (Fallis 2011: 300). Additionally, many content experts have been alarmed at non-experts editing or deleting their work on Wikipedia (Healy 2007). When this happens, the reliability of Wikipedia can suffer, particularly if accurate claims by content experts are removed or replaced with inaccurate ones by non-experts. While Wikipedia does not grant content experts additional authority (in part, because it has no
While Wikipedia does not grant content experts additional authority (in part because it has no mechanism for verifying the expertise of editors (“Wikipedia:Expert editors”)), Wikipedians have made attempts to reach out to experts by soliciting their input in ways that do not require experts to learn how to navigate the wiki infrastructure (Lih 2015; Lih et al. 2015). Such efforts, if successful, could improve the reliability of the encyclopedia.

Wikipedia also allows editors to be anonymous. Anonymity is a second source of concern for epistemologists who believe that holding people’s offline identities accountable for errors will increase the accuracy of their contributions (Sanger 2009; Wray 2009). Wikipedians can create accounts with a pseudonym, or they can edit with only their IP address being logged. Thus, an editor can add inaccurate information without any damage to their offline identity. Since editors’ offline identities cannot be punished, some worry that editors lack an incentive to live up to expectations of accuracy. Larry Sanger, a co-founder of Wikipedia, left the project in part because of such concerns: “anonymity allows those with an anti-intellectual or just crotchety bent to attack experts without a restraint that many would no doubt feel if their real names were known” (Sanger 2009: 66). And K. Brad Wray argues that the epistemic culture of Wikipedia is inferior to that of science precisely because scientists have a reputation to protect, while anonymous Wikipedians do not (Wray 2009: 39–40).

While these arguments have merit, they ignore two points. First, there are punishments for untrustworthy behavior on Wikipedia. Although there is rarely punishment for users’ offline identities, there is a system for sanctioning users’ online identities by, for example, banning an account from the site. Thus, there is some accountability for inaccuracy.2 I discuss these sanctions in section 3. Wikipedians, though, are generally very sensitive to privacy issues and rarely publicly release the offline identity of bad actors (Owens 2013). Second, demands for increased accountability ignore the epistemic value of online anonymity. Members of vulnerable populations who have legitimate fears of reprisal and harassment might only contribute their expertise to Wikipedia under the protection of anonymity (Fallis 2008: 1668; Frost-Arnold 2014a). In terms of reliability, Wikipedia’s anonymity may allow some vulnerable Wikipedians to engage in talk-page discussions that help weed out errors, thereby improving the accuracy of the encyclopedia.

A third epistemic concern is that Wikipedia’s openness attracts editors with incentives to harm the epistemic project (Fallis 2011: 300). Trolls, vandals, public relations firms, politicians, and harassers are all potential threats to reliability, since it may be in their interest to add false content to articles. For example, editing under a conflict of interest, especially undisclosed paid editing, is frowned upon in Wikipedia (“Wikipedia:Conflict of interest”). In August 2015, administrators blocked 381 user accounts for charging money to post articles (Erhart and Barbara 2015). In this scam, someone posing as an experienced Wikipedian or administrator used records of previously rejected articles to contact the small businesses and artists who were those articles’ subjects, offering to add the content once payment had been received (“Wikipedia:Long-term abuse/Orangemoody”). This and other cases of paid editing give critics cause to worry about Wikipedia’s reliability.
However, one might be comforted by the fact that Wikipedia’s volunteer army did uncover the scam and take corrective measures (Erhart and Barbara 2015). Additionally, Wikipedia has worked with major PR firms to create a professional ethics culture in which PR firms do not violate Wikipedia policies, including those related to conflict of interest (Lih 2015; “Wikipedia:Statement on Wikipedia from participating communications firms”).

Wikipedia’s openness also attracts trolls and vandals who introduce inaccurate content for their own amusement. In the widely reported Seigenthaler incident, a prankster edited the biography of journalist John Seigenthaler to speculate that Seigenthaler had been involved in the assassinations of John and Robert Kennedy. Seigenthaler publicly attacked Wikipedia for failing to detect the error quickly and for not identifying the vandal (Seigenthaler 2005).
In response, Wikipedia instituted new measures to protect the encyclopedia against vandalism: (1) anonymous users were prevented from creating new articles, (2) a Biography of Living Persons policy was created to provide guidelines for removing questionable material in biographies, and (3) a semi-protection tool was introduced, which prevents unregistered or newly registered users from editing articles that may be targets of vandalism (Lih 2009: 191–94). While these measures cannot make Wikipedia vandal-free, they can increase its reliability.

In sum, there is some empirical support for Wikipedia’s reliability, and it may be reliable enough to meet our epistemic goals, especially when the stakes for error are not high (Fallis 2008: 8). While applied epistemologists have raised concerns about its lack of experts, its anonymity, and its openness to those with harmful goals, Wikipedia has policies in place to address these issues and continually improve its reliability.

Leaving aside whether Wikipedia is in fact reliable, a further question of epistemological importance is whether we can be justified in believing it reliable, or as P. D. Magnus asks: “Can we trust it?” (Fallis 2008, 2011; Magnus 2009). If we can justifiably trust Wikipedia, our justification will have to be different from our justification for trusting traditional testimony, since Wikipedia has no single, identifiable author (Magnus 2009; Simon 2010; Tollefsen 2009; Wray 2009).3 Judith Simon (2010) argues that trust in Wikipedia is best viewed as an instance of procedural trust – trust in the process which generates Wikipedia content. One might take a user’s knowledge about Wikipedia’s safeguards for reliability as good justification for their trust in the process. However, a problem with procedural trust in Wikipedia is that Wikipedia is dynamic – anyone can change the content at any moment. Thus, while we may have good reason to believe that Wikipedia as a system is reliable on average, “this overall trustworthiness does not help us to assess the trustworthiness of a specific claim in Wikipedia” (Simon 2010: 349).

So, are there any tools to assess the trustworthiness of specific claims? Magnus argues that Wikipedia frustrates our usual methods of evaluating the trustworthiness of online claims. For example, one might use the method of sampling by checking a claim on a website against other sources for corroboration. However, many other sites copy material from Wikipedia, so one may inadvertently use Wikipedia as a self-confirming source (Magnus 2009: 87). Relatedly, Wikipedia has been a locus of ‘citogenesis’: “the creation of ‘reliable’ sources through circular reporting” (“Wikipedia:List of citogenesis incidents”). In citogenesis, information is added to Wikipedia, which is then picked up in media reports, which are in turn used as ‘independent’ sources to support the original information in Wikipedia.

Another way Wikipedia frustrates our usual tools of verification is through multi-authored content. I might evaluate the reliability of a blog post about physics by a self-professed physicist using standards of plausibility of style (Does the author write like a physicist?) or calibration (Suppose I know something about physics. Is the author correct about the physics claims that I know? If so, then the author is likely to be accurate about physics outside my area of knowledge). But a Wikipedia article is potentially written by hundreds of people who do as little as add one claim or tidy up the spelling and style.
Thus, the fact that the article is written in the style of a physicist is no guarantee that any one claim was written by an expert in physics; and since each claim could be written by a different author, accuracy in one claim is no guarantee that other claims were written by the same knowledgeable author (Magnus 2009: 85–87). So, Wikipedia may be challenging to verify, and trust in its claims may be hard to justify.

However, Fallis (2011) and Simon (2010) argue that there may be other ways to verify Wikipedia’s content. Wikipedia grants access to every article’s editing history (showing readers whether the article has been the subject of a recent edit war – a red flag for bias), and it provides an editing history for every editor (letting readers verify whether a claim was made by an editor with a history of vandalism).4
The ‘talk’ pages allow readers to see critical discussion about the reliability of content. And dispute templates are posted in many Wikipedia articles warning readers that, for example, “The truthfulness of this article has been questioned.” Thus, readers who take the time to use these tools can put themselves in a position to gain defeaters that would undermine trust in Wikipedia content. In sum, determining whether to trust Wikipedia’s claims may be difficult using traditional methods of verification, but other methods are available.
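Magnus’s calibration worry above can be made vivid with a toy simulation. This is my illustration, not Magnus’s own model: the two reliability levels, the article sizes, and all counts are invented. It dramatizes only this: spot-checking a few claims tells you a lot about the rest of a single-author text, and much less about the rest of a many-author text.

```python
import random

random.seed(0)

def sample_article(n_claims: int, single_author: bool) -> list:
    """An article as a list of claims (True = accurate). With a single author,
    one reliability level governs every claim; with many authors, each claim
    gets its own independently drawn level."""
    if single_author:
        levels = [random.choice([0.95, 0.55])] * n_claims  # one author of unknown quality
    else:
        levels = [random.choice([0.95, 0.55]) for _ in range(n_claims)]
    return [random.random() < r for r in levels]

def rest_accuracy_given_checks_pass(single_author: bool, checked: int = 5,
                                    n_claims: int = 20,
                                    trials: int = 100_000) -> float:
    """Among articles whose first `checked` claims all verify, how accurate
    are the remaining, unchecked claims on average?"""
    rest_true = rest_total = 0
    for _ in range(trials):
        claims = sample_article(n_claims, single_author)
        if all(claims[:checked]):  # our spot-check passes
            rest_true += sum(claims[checked:])
            rest_total += n_claims - checked
    return rest_true / rest_total

print(rest_accuracy_given_checks_pass(single_author=True))   # roughly 0.93
print(rest_accuracy_given_checks_pass(single_author=False))  # roughly 0.75
```

With one author, passing the spot-check is strong evidence that the author is the reliable type, so the unchecked claims inherit that confidence; with many authors, the unchecked claims are statistically untouched by the check.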
2. A broader epistemology of Wikipedia

Reliability is not the only epistemic virtue, and applied epistemologists have analyzed Wikipedia’s success according to Goldman’s other standards of power, speed, fecundity, and efficiency. While error-avoiding epistemologists take reliability as the primary epistemic virtue, truth-seeking veritists take power to sometimes trump reliability (Coady 2012: 5–7; Fallis 2006: 182–83; Frost-Arnold 2014a: 66–68). An encyclopedia with only one entry consisting of all true claims (e.g., about Harriet Tubman) would be a maximally reliable source, but it would not be very powerful or encyclopedic (Coady 2012: 170). We want an encyclopedia to help its users know many truths, as well as avoid errors. And an encyclopedia that helps users attain more true beliefs than its competitor would have an epistemic advantage, along one dimension. On this score, Wikipedia is incredibly epistemically successful. Encyclopedia Britannica announced it was ending print production in 2012; at the time, Britannica had 65,000 articles compared to Wikipedia’s 3,890,000 – almost sixty times as many (Silverman 2012). While Wikipedia’s openness appeared to be a concern for its reliability, it is a boon to its power – thousands of volunteer non-experts can create and expand more entries than can an encyclopedia written by a smaller number of experts (Fallis 2011: 305).

Wikipedia also measures well against Goldman’s standards of speed, fecundity, and efficiency (Fallis 2011: 305). In the era of smartphones, Wikipedia provides instant, convenient answers to questions for many people. Additionally, Wikipedia’s openness is explicitly designed to allow for fast content creation (Fallis 2008: 1669; Sunstein 2006: 150). An initial attempt at an online encyclopedia by Jimmy Wales and Larry Sanger, Nupedia, had such a demanding peer review process that it produced only about two dozen articles in its first year (Lih 2009: 40–41). In comparison, Wikipedia produced 20,000 articles in its opening year (Lih 2009: 77). Many Wikipedians are open access advocates, and the project’s emphasis on free online content increases its fecundity – it allows true beliefs to be acquired by many people. Similarly, Wikipedia is efficient as a free source of knowledge for users, and its online platform and use of volunteer editors have cost-saving benefits in production.
3. Wikipedia and trust

The epistemic culture of Wikipedia is central to its epistemic successes and failings (cf. Coady 2012: 171). While most readers perceive Wikipedia as a static piece of text, Wikipedia is actually a dynamic community. Wikipedians interact on talk pages, meet in person, collaborate on WikiProjects, organize edit-a-thons, and hold conferences. This section analyzes one key feature of Wikipedia’s culture: its complex and shifting relations of trust and distrust.

Wikipedia started with a small community, and personal relations of trust between people who knew each other were central in founding the community (Lih 2009). Paul de Laat (2010) shows that as new members were recruited, Wikipedia used hopeful trust, a form of trust with interesting epistemic significance, to motivate trustworthiness in newbies.
On Victoria McGeer’s (2008) account of hopeful trust, the trustor makes herself vulnerable by putting herself in the hands of the trustee, but her hope that the trustee will not take advantage of that vulnerability encourages the trustee to live up to the trust. Thus, hopeful trust can inspire people to be more trustworthy. It does this because the trustor’s hopeful vision can be motivating – it prompts the trustee to think: “I want to be as she sees me to be” (McGeer 2008: 249). In this way, a hopeful vision can make the trustee a kind of role model to herself. Hopeful trust is epistemically interesting because one’s own trust in someone can count, under the right conditions, as evidence that they will be trustworthy.

In their relations with one another, Wikipedians often endorse a powerful form of hopeful trust (de Laat 2010). First, members of the Wikipedia community make themselves vulnerable – they put themselves in each other’s hands. Wikipedia maintains an openness that makes it vulnerable to those with harmful motives. In the face of this threat, Wikipedians could adopt a default attitude of distrust toward new members, forcing them to get permission to edit after successfully completing screening procedures. But Wikipedians instead adopt an attitude of qualified trust toward newcomers. One reason Wikipedians often give for this approach is that it inspires people to do better. As founder Jimmy Wales puts it,

[B]y having complex permission models, you make it very hard for people to spontaneously do good … There are so many hostile communities on the Internet. One of the reasons is because this philosophy of trying to make sure that no one can hurt anyone else actually eliminates all the opportunities for trust … [Wikipedia is] about leaving things open-ended, it’s about trusting people, it’s about encouraging people to do good.
(Wales 2009: xvii–xviii)

A second sign of Wikipedia’s attitude of hopeful trust is its commitment to the principles of “Assume good faith,” “Don’t bite the newbies,” and “Be bold” (“Wikipedia:Assume good faith”; “Wikipedia:Be bold”; “Wikipedia:Please do not bite the newcomers”). With these principles, Wikipedians encourage new editors to make edits without fear of breaking the encyclopedia. In other words, Wikipedians make themselves vulnerable to damage, and they do so with hope by assuming good faith (de Laat 2010: 332). Of course, Wikipedians are a diverse community, and disputes abound about whether members are actually following these practices, but the community’s consensus guidelines espouse this attitude of hopeful trust.

Third, Wikipedia’s most public ambassador, Jimmy Wales, presents a hopeful vision of the community in his public discussions. For example, in a CNN discussion with Seigenthaler, Wales said, “Generally we find most people out there on the Internet are good. … It’s one of the wonderful humanitarian discoveries in Wikipedia, that most people only want to help us and build this free nonprofit, charitable resource” (Wales 2005).5

In sum, Wikipedians make themselves vulnerable to harm, but they often do so with an attitude of hopeful trust in fellow members and newcomers. Wikipedia’s communications about community norms hold out to its members a vision of trustworthy commitment to the production of a free, reliable encyclopedia built by good-faith volunteers. If McGeer is right that hopeful trust can inspire trustworthiness, this may help explain some of Wikipedia’s epistemic success – its attitude of hopeful trust and its vision of epistemic trustworthiness motivate users to work for the epistemic good of the community.
On the other hand, Wikipedia’s attitude of hopeful trust is also qualified and balanced with a distrustful attitude and a rational choice approach to trust.6 Wikipedia started as a small community where common forms of interpersonal trust could have force. If I know some other Wikipedians, and I care what they think of me, then it is more likely that I can find their vision of me as a fellow Wikipedian motivating. But as Wikipedia grew, it was bound to attract large numbers of users who had only fleeting interactions with the community.
Therefore, it is not surprising that Wikipedia was rocked by scandals involving bad actors and consequently developed an attitude of distrust to qualify the attitude of hopeful trust (see de Laat 2010). As discussed earlier, in response to the Seigenthaler incident, Wikipedia instituted methods to improve its reliability: (1) no article creation for anonymous users, (2) a Biography of Living Persons policy, and (3) semi-protection. Moreover, software tools and autonomous bots have proliferated to help Wikipedians detect and revert vandalism (de Laat 2015). Many of these mechanisms stem from a mistrust of certain types of editors – for example, anonymous, new editors.

Additionally, as Wikipedia grew, it was no longer feasible for Jimmy Wales to act as a mediator and judge in disputes. So, in 2004, Wales recruited volunteers to set up a mediation committee and an arbitration committee (ArbCom) (Lih 2009: 180). While the mediation committee has seen less use, ArbCom is busy – it is the main sanctions arm of Wikipedia, delivering punishments such as probation, restrictions on the topics a Wikipedian can edit, and bans from the community (“Wikipedia:Editing restrictions”).

Systems of punishment are signs of a rational choice approach to trust. This approach models individuals as self-interested rational actors and maintains that people act in a trustworthy manner when there are sufficient external constraints, most notably punishment for untrustworthiness, to make it in their self-interest to act as expected. Wikipedia has guidelines for conduct and a system of punishment for those who violate the rules. The goal of such systems is to allow members to cooperate with each other without needing to assume that their fellow community members share altruistic or pro-social motivations. Following Hobbes, the rational choice approach thus takes a bleaker view of human nature than the hopeful trust approach. Internet historian Jason Scott summarizes the contrast as follows: “Wikipedia holds up the dark mirror of what humanity is, to itself” (quoted in Lih 2009: 131). Because of all the vandalism, trolls, and harassment on Wikipedia, one might be persuaded to adopt this pessimistic view. Whether Wikipedia’s system of punishment is sufficient to keep bad actors at bay is an open question, constantly debated within the community.7

It may be that Wikipedia’s success lies in its ability to blend both of these approaches to trust. It may be descriptively accurate that Wikipedia’s early epistemic successes depended on an attitude of hopeful trust and interpersonal connections, and that the pressures of growth turned the community more toward the threat of punishment. However, as I have argued above, there are still remnants of hopeful trust in the community’s openness to newcomers and its articulation of a vision of trustworthiness aimed at inspiring new members.8 For the applied epistemologist, the interesting question is whether these changing relations of trust support or undermine Wikipedia’s epistemic goals.
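The rational choice picture sketched above can be condensed into a single schematic condition. The inequality below is my gloss, not a model proposed in the Wikipedia literature, and the variables are hypothetical placeholders:

```latex
% Schematic expected-utility condition (illustrative only):
%   b_c = benefit of cooperating (reputation, continued editing privileges)
%   b_d = benefit of defecting (e.g., slipping in paid or false content)
%   p   = probability the community detects the defection
%   c   = cost of the sanction (topic restriction, block, ban)
\[
  b_c \;\geq\; b_d - p\,c
\]
```

On this picture, ArbCom-style sanctions work by raising p and c until the inequality holds even for editors with no pro-social motivation at all, whereas hopeful trust works on the benefits themselves, by changing what the editor wants.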
4. Wikipedia’s gaps and biases

Lack of diversity is an epistemic problem for Wikipedia. Estimates of the gender diversity of Wikipedians range from 9% to 16% women (Hill and Shaw 2013; Wikimedia Foundation 2011). The Wikimedia Foundation does not collect data on the race or ethnicity of editors in its surveys, and there is a lack of other data, but anecdotal evidence suggests that Wikipedia lacks multicultural diversity (Backer 2015; Murphy 2015; Reynosa 2015). Wikipedia also has a Western bias (Hern 2015). There is no consensus on the causes of Wikipedia’s diversity problem. Determining the true causes of the diversity gap requires careful empirical study and falls outside the purview of the applied epistemologist. Applied epistemology can contribute (1) an analysis of the epistemic significance of the imbalance and (2) an evaluation of the epistemic consequences of proposed solutions to the gap. I address both.

The demographics of Wikipedia editors have several epistemic implications.
First, concerns about diversity on Wikipedia are most commonly framed as concerns about gaps in coverage of topics related to women, people of color, and non-Western subjects, which is largely a concern about the power (in Goldman’s sense) of Wikipedia. For instance, Lam et al. (2011) found that topics of interest to women have shorter articles in Wikipedia than do topics of interest to men. And women’s biographies account for only 15.5% of all biographies in Wikipedia (Graells-Garrido et al. 2015). If women editors are more likely than men to create biographies of women, then attracting more women editors may increase the number of biographies of women, thereby increasing the number of truths available on Wikipedia.9 Thus, lack of diversity means that fewer truths are available to cognizers.

Second, lack of diversity undermines the reliability of Wikipedia. In discussions of Wikipedia’s reliability, it is often argued that errors are weeded out by communal scrutiny. But as feminist epistemologists have long argued, communities that lack diversity can perpetuate bias. Background assumptions shaped by individuals’ social locations can shape the claims they consider supported by evidence. Errors can be detected when putative knowledge claims are subjected to critical scrutiny by people from different backgrounds, who can often recognize the unconscious biases in the background assumptions and purported evidential support for false claims (Frost-Arnold 2014a: 71; Goldman 1999: 78; Intemann 2010; Longino 1990). One of the main projects of feminist science studies has been to provide case studies of knowledge communities that detected errors and bias when women or people of color entered scientific disciplines in greater numbers. Returning to Wikipedia, this means that with fewer women, people of color, and members of other underrepresented groups making edits or taking part in the discussions on article talk pages, fewer errors are detected and the reliability of Wikipedia suffers.

Third, lack of diversity facilitates the proliferation of misleading truths. Women’s biographies on Wikipedia contain more marriage-related events than do biographies of men (Graells-Garrido et al. 2015). There is an epistemic problem here, but what is it? Consider the statement in a particular woman’s biography that she was married to so-and-so. Assuming the spouse was correctly identified, this statement might not seem epistemically problematic, because it is true. Perhaps the problem is a failure of power, because Wikipedia is missing some truths (namely, the marital status of many men), but this fails to fully capture the problem. It is not just that some truths are missing; it is that the truths which do appear, in conjunction with the absence of others, can be misleading and harmful. A reader of Wikipedia biographies is exposed to misleading evidence – based on what they read, they have reason to believe the following false claim (among others): ‘A woman’s spouse is more relevant to her accomplishments than a man’s spouse is.’10 This is epistemically problematic, as a false claim, but it is also ethically harmful, as it contributes to the devaluation of women’s accomplishments and promotes the stereotype that women’s identities are more tied to their families than to their professional accomplishments. While women are also vulnerable to unconscious bias and can also perpetuate stereotypes, it is likely that women have more at stake in correcting such misleading information in Wikipedia. Therefore, more diversity in editorship seems likely to diminish Wikipedia’s problem with misleading truths.
The Wikimedia Foundation set the goal of increasing women Wikipedians to 25% by 2015 (Cohen 2011). Although this initiative failed, many other projects aim to improve Wikipedia’s gender and multicultural diversity (“AfroCROWD:About”; “Wikipedia:WikiProject Countering systemic bias”; “Wikipedia:WikiProject Women in Red”). Applied epistemology can be useful in assessing the epistemic merits of these proposed solutions. To illustrate, I show how one solution falls afoul of “the problem of speaking for others” (Alcoff 1991).

One of the suspected causes of Wikipedia’s diversity problem is its reliance on the notability guideline and the verifiability policy.
The notability guideline ensures that the truths in Wikipedia are non-trivial and interesting,11 and the verifiability policy obviously makes Wikipedia more verifiable and trust in it more rationally justifiable. While epistemically helpful in these ways, notability and verifiability can perpetuate bias within a community lacking diversity (Backer 2015; Stephenson-Goodknight 2015). Assessments of an article’s notability and a source’s legitimacy are made in light of background assumptions, which can be biased by one’s social location. To a white, Western editor, an article on an artwork of cultural importance to an Asian subpopulation may not seem notable. The nutshell description of the notability guideline reads,

Wikipedia articles cover notable topics—those that have gained sufficiently significant attention by the world at large and over a period of time, and are not outside the scope of Wikipedia. We consider evidence from reliable independent sources to gauge this attention.
(“Wikipedia:Notability”)

Following this guideline requires assessing whether a topic has gained “sufficiently significant attention by the world at large,” which is a judgment made in light of assumptions about what makes attention significant, who or what counts as a good indicator of the world’s attention, etc. Similarly, one must make judgments about whether there exist reliable independent sources to document this attention, and this will be done in light of background assumptions about which sources are reliable. The problem is that these background assumptions can be biased.

Alice Backer, founder of Afro Free Culture Crowdsourcing Wikimedia (AfroCROWD), an initiative to increase the participation of people of African descent in Wikimedia (“AfroCROWD:About”), gives the example of the proposed article on “Garifuna in Peril,” a film about the Garifuna, an Afro-Honduran community (Backer 2015). The creator of the article, Garifuna blogger Teofilo Colon, tried multiple times to add the article to Wikipedia, but each time it was rejected by a user as non-notable and lacking independent verifiable sources to support notability, despite the fact that Colon continued to add dozens of sources, including well-established Central American newspapers. Now, we cannot know with certainty why the Wikipedian who rejected the article found it to be non-notable and poorly sourced, but it is not hard to imagine that the typical white, Western Wikipedian is ignorant about Garifuna culture, unaware of the films that have meaning in Garifuna communities, and uninformed about Central American news sources. Our social location shapes our background knowledge, which biases our assessment of what is notable and which sources are reliable. Hence, not surprisingly, the contributions of women, people of color, and other marginalized communities are often rejected.

Now, one proposed solution to this problem is for an experienced editor, who is a trusted Wikipedian, to post the new article on behalf of less experienced editors who belong to minority groups. Thus, when Wikipedians look at the new article and see that it was created by the trusted Wikipedian, they will be less likely to recommend it for deletion as non-notable. This solution was repeatedly proposed at WikiConference USA 2015, and examples were offered of instances in which this strategy had effectively added new articles that had previously been deleted when submitted by people of color.
Notice that this solution leverages Wikipedia’s relations of trust – it recognizes that a large number of previous edits and a known history of valuable contributions to the community make one a trustworthy editor, and that trust in the editor behind an article can override concerns about the notability or verifiability of the content.

However, applied epistemology reveals a problem with this solution. It runs afoul of what Linda Alcoff calls “the problem of speaking for others” (Alcoff 1991) and of other problems with advocating on behalf of marginalized others (Code 2006; Frost-Arnold 2014b; Ortega 2006).
Advocacy can be epistemically beneficial; it can allow the claims of marginalized speakers to be heard when they might otherwise be ignored or rejected. However, advocacy also perpetuates a system in which the speech of marginalized people is not heard or accepted in their own voice. When people of color need to give their Wikipedia articles to established white Wikipedians to prevent those articles from being deleted, we still have a system in which people of color are not trusted editors. Additionally, when an advocate speaks “in place of” a marginalized subject, the advocate buttresses their own social status and epistemic credibility on the back of the epistemic labor of the marginalized. While the intentions of the advocate may be noble, this still perpetuates unjust privilege. White, established Wikipedians will ultimately receive the credit for the addition of the new articles in their editing history, and Wikipedians of color will still find that their content is rejected when added under their own identities. This solution does not push established Wikipedians to address their own biases or learn to trust new community members who belong to more diverse populations. While no strategies are without problems, other solutions may be more epistemically fruitful. Collaborations between applied epistemologists and Wikipedians could be a source of new ideas for addressing the diversity gap.

In conclusion, Wikipedia is fertile ground for applied epistemology. As a dynamic and transparent community with explicitly epistemic goals, Wikipedia provides opportunities to study the production and dissemination of knowledge through online collaboration. While debates persist about the reliability and verifiability of Wikipedia’s content, Wikipedia is constantly adjusting to protect its epistemic standing. Wikipedia’s shifting relations of trust and distrust raise important normative questions about which modes of social organization are best suited for epistemic communities at various stages of development. And, while the epistemic damage done by a lack of diversity is well-traveled terrain for the applied epistemology of science, Wikipedia’s diversity gaps both display similar problems and raise new challenges for a digital age.
Acknowledgments

I thank Teofilo Colon, Michael Hunter, Alla Ivanchikova, Rosie Stephenson-Goodknight, K. Brad Wray, and an anonymous reviewer for helpful comments.
Notes

1 Encyclopedia Britannica critiqued the Nature study (Encyclopedia Britannica 2006). See Nature (2006) for Nature’s rebuttal.
2 Sanger recognizes that there is some accountability but thinks accountability that does not target one’s offline identity is insufficient to prevent some of the poor behavior on Wikipedia.
3 In fact, some question whether Wikipedia provides testimony at all (Tollefsen 2009; Wray 2009).
4 Editing history is a mark of trustworthiness in Wikipedia. Wikipedians often introduce themselves by listing their number of edits, and many post summaries of their editing history on their user pages.
5 For quotes from other Wikipedians espousing Wales’s hopeful vision, see de Laat (2010).
6 De Laat (2010) analyzes Wikipedia’s balance between trust and distrust in terms of the discretion offered to editors as a result of hopeful trust, tempered with a decrease in discretion due to increasing rules and governance tools.
7 Tollefsen (2009) argues that Wikipedia’s accountability mechanisms make it compatible with the assurance theory of testimony, according to which hearers are entitled to accept testimony because the speaker offers their assurance of the truth of the testimony and accepts responsibility for it. However, de Laat (2010) responds that accountability should not be confused with offering assurances, which anonymous editing precludes.
8 For a useful discussion of the design challenges of balancing openness to newcomers with vigilance against vandals, see Halfaker et al. (2014).
9 Of course, more women editors adding more articles on women will only add more truths to Wikipedia if women are just as reliable as men, which there is no reason to doubt. Also, it is hard to obtain data about whether higher percentages of women Wikipedians will lead to more truths being added to the encyclopedia (rather than the removal of false claims or a change in presentation of existing claims). That said, the goal of many projects aimed at decreasing the diversity gap is to increase the number of women writing articles (“Wikipedia:WikiProject Countering systemic bias”).
10 Additionally, readers are misled about which topics (such as women’s marital status) are important. I thank an anonymous reviewer for this point.
11 See Fallis (2006) and Goldman (1999: 94–96) on the epistemic value of interesting truths.
References

“Active Wikipedians.” (n.d.). In Wikimedia. Retrieved December 16, 2015, from https://stats.wikimedia.org/EN/TablesWikipediansEditsGt5.htm.
“AfroCROWD:About.” (n.d.). In AfroCROWD. Retrieved December 14, 2015, from www.afrocrowd.org/?q=content/about.
Alcoff, L. (1991). “The Problem of Speaking for Others.” Cultural Critique, 20: 5–32.
Backer, A. (2015). “AfroCROWD—Bridging the Multicultural Gap.” WikiConference USA, October 10, 2015, Washington, DC. Retrieved December 14, 2015, from www.youtube.com/watch?v=WkHbg9V5wnI.
Coady, D. (2012). What to Believe Now: Applying epistemology to contemporary issues. Malden, MA: Wiley-Blackwell.
Code, L. (2006). Ecological Thinking: The politics of epistemic location. New York: Oxford University Press.
Cohen, N. (2011). “Define Gender Gap? Look up Wikipedia’s Contributor List.” The New York Times, January 30, 2011. Retrieved December 14, 2015, from www.nytimes.com/2011/01/31/business/media/31link.html?_r=0.
de Laat, P. B. (2010). “How Can Contributors to Open-Source Communities Be Trusted?” Ethics and Information Technology, 12(4): 327–41.
de Laat, P. B. (2015). “The Use of Software Tools and Autonomous Bots against Vandalism: Eroding Wikipedia’s moral order?” Ethics and Information Technology, 17(3): 175–88.
Encyclopedia Britannica, Inc. (2006). “Fatally Flawed: Refuting the recent study on encyclopedia accuracy by the journal Nature.” Retrieved December 13, 2015, from https://corporate.britannica.com/britannica_nature_response.pdf.
Erhart, E., and Barbara, J. (2015). “Hundreds of ‘Black Hat’ English Wikipedia Accounts Blocked following Investigation.” Wikimedia blog. Retrieved December 8, 2015, from http://blog.wikimedia.org/2015/08/31/wikipedia-accounts-blocked-paid-advocacy/.
Estlund, D. M. (1994). “Opinion Leaders, Independence, and Condorcet’s Jury Theorem.” Theory and Decision, 36(2): 131–62.
Fallis, D. (2006). “Epistemic Value Theory and Social Epistemology.” Episteme, 2(3): 177–88.
Fallis, D. (2008). “Toward an Epistemology of Wikipedia.” Journal of the American Society for Information Science and Technology, 59(10): 1662–74.
Fallis, D. (2011). “Wikipistemology,” in A. I. Goldman and D. Whitcomb (eds.), Social Epistemology: Essential readings. New York: Oxford University Press.
Frost-Arnold, K. (2014a). “Trustworthiness and Truth: The epistemic pitfalls of Internet accountability.” Episteme, 11(1): 63–81.
Frost-Arnold, K. (2014b). “Imposters, Tricksters, and Trustworthiness as an Epistemic Virtue.” Hypatia, 29(2): 790–807.
Giles, J. (2005). “Internet Encyclopedias Go Head to Head.” Nature, 438: 900–901.
Goldman, A. (1992). Liaisons: Philosophy meets the cognitive and social sciences. Cambridge, MA: The MIT Press.
Goldman, A. (1999). Knowledge in a Social World. New York: Oxford University Press.
Goldman, A. (2011). “The Social Epistemology of Blogging,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy. New York: Cambridge University Press.
Graells-Garrido, E., Lalmas, M., and Menczer, F. (2015). “First Women, Second Sex: Gender bias in Wikipedia.” arXiv:1502.02341 [cs.SI].
Halfaker, A., Geiger, R. S., and Terveen, L. G. (2014). “Snuggle: Designing for efficient socialization and ideological critique.” Proceedings CHI 2014. ACM Press: 311–20.
Hartelius, E. J. (2011). The Rhetoric of Expertise. New York: Lexington Books.
Hayek, F. (1945). “The Use of Knowledge in Society.” American Economic Review, 35(4): 519–30.
Healy, K. (2007). “Wikipedia Follies.” Crooked Timber. Retrieved December 13, 2015, from http://crookedtimber.org/2007/02/04/wikipedia/.
Heilman, J. M., Kemmann, E., Bonert, M., Chatterjee, A., Ragar, B., Beards, G. M., … Laurent, M. R. (2011). “Wikipedia: A key tool for global public health promotion.” Journal of Medical Internet Research, 13(1): e14.
Hern, A. (2015). “Wikipedia’s View of the World is Written by the West.” The Guardian, September 15, 2015. Retrieved December 13, 2015, from www.theguardian.com/technology/2015/sep/15/wikipedia-view-of-the-world-is-still-written-by-the-west.
Hill, B. M., and Shaw, A. (2013). “The Wikipedia Gender Gap Revisited: Characterizing survey response bias with propensity score estimation.” PLoS ONE, 8(6): e65782.
Intemann, K. (2010). “25 Years of Feminist Empiricism and Standpoint Theory: Where are we now?” Hypatia, 25(4): 778–96.
Kräenbring, J., Penza, T. M., Gutmann, J., Muehlich, S., Zolk, O., Wojnowski, L., … Sarikas, A. (2014). “Accuracy and Completeness of Drug Information in Wikipedia: A comparison with standard textbooks of pharmacology.” PLoS ONE, 9(9): e106930.
Kupferberg, N., and Protus, B. (2011). “Accuracy and Completeness of Drug Information in Wikipedia: An assessment.” Journal of the Medical Library Association, 99(4): 310–13.
Lam, S. T. K., Uduwage, A., Dong, Z., Sen, S., Musicant, D. R., Terveen, L., and Riedl, J. (2011). “WP: Clubhouse? An exploration of Wikipedia’s gender imbalance.” Proceedings of the 7th International Symposium on Wikis and Open Collaboration. ACM Press: 1–10.
Lih, A. (2009). The Wikipedia Revolution: How a bunch of nobodies created the world’s greatest encyclopedia. London: Aurum Press.
Lih, A. (2015). “What Wikipedia Must Do.” WikiConference USA, October 9, 2015, Washington, DC. Retrieved December 14, 2015, from www.youtube.com/watch?v=Gj6U22uJzGM.
Lih, A., McGrady, R., Ramjohn, I., and Ross, S. (2015). “Thinking (and Contributing) Outside the Editing Box: Alternative ways to engage subject-matter experts.” WikiConference USA, October 10, 2015, Washington, DC.
“List of Wikipedias.” (n.d.). In Wikimedia Meta-Wiki. Retrieved December 16, 2015, from https://meta.wikimedia.org/wiki/List_of_Wikipedias.
Longino, H. (1990). Science as Social Knowledge. Princeton, NJ: Princeton University Press.
Magnus, P. D. (2009). “On Trusting Wikipedia.” Episteme, 6(1): 74–90.
McGeer, V. (2008). “Trust, Hope and Empowerment.” Australasian Journal of Philosophy, 86(2): 1–18.
Murphy, C. (2015). “Can ‘Black Wikipedia’ Take Off like ‘Black Twitter’?” Colorlines. Retrieved December 15, 2015, from www.colorlines.com/articles/can-black-wikipedia-take-black-twitter.
Nature. (2006). “Nature’s Responses to Encyclopedia Britannica.” Nature, 438: 900–901. Retrieved December 13, 2015, from www.nature.com/nature/britannica/.
Ortega, M. (2006). “Being Lovingly, Knowingly Ignorant: White feminism and women of color.” Hypatia, 21(3): 56–74.
Owens, S. (2013). “The Battle to Destroy Wikipedia’s Biggest Sockpuppet Army.” The Daily Dot. Retrieved December 8, 2015, from www.dailydot.com/lifestyle/wikipedia-sockpuppet-investigation-largest-network-history-wiki-pr/.
Pfister, D. (2011). “Networked Expertise in the Era of Many-to-Many Communication: On Wikipedia and invention.” Social Epistemology, 25(3): 217–31.
“Reliability of Wikipedia.” (n.d.). In Wikipedia. Retrieved December 4, 2015, from https://en.wikipedia.org/wiki/Reliability_of_Wikipedia.
Reynosa, P. (2015). “Why Don’t More Latinos Contribute to Wikipedia?” El Tecolote. Retrieved December 15, 2015, from http://eltecolote.org/content/en/commentary/why-dont-more-latinos-contribute-to-wikipedia/.
Sanger, L. M. (2009). “The Fate of Expertise after Wikipedia.” Episteme, 6(1): 52–73.
Seigenthaler, J. (2005). “A False Wikipedia ‘Biography’.” USA Today, November 29, 2005. Retrieved December 13, 2015, from http://usatoday30.usatoday.com/news/opinion/editorials/2005-11-29-wikipedia-edit_x.htm.
Silverman, M. (2012). “Encyclopedia Britannica vs. Wikipedia.” Mashable. Retrieved December 17, 2015, from http://mashable.com/2012/03/16/encyclopedia-britannica-wikipedia-infographic/#Olumu22uvkq6.
Simon, J. (2010). “The Entanglement of Trust and Knowledge on the Web.” Ethics and Information Technology, 12(4): 343–55.
Stephenson-Goodknight, R. (2015). “Women … It Takes a Village.” WikiConference USA, October 10, 2015, Washington, DC. Retrieved December 14, 2015, from www.youtube.com/watch?v=WkHbg9V5wnI.
Sunstein, C. (2006). Infotopia. New York: Oxford University Press.
Surowiecki, J. (2004). The Wisdom of Crowds. New York: Doubleday.
Temple, N. J., and Fraser, J. (2014). “How Accurate Are Wikipedia Articles in Health, Nutrition, and Medicine?” Canadian Journal of Information and Library Science, 38(1): 37–52.
Thagard, P. (1997). “Collaborative Knowledge.” Noûs, 31(2): 242–61.
Tollefsen, D. P. (2009). “Wikipedia and the Epistemology of Testimony.” Episteme, 6(1): 8–24.
Wales, J. (2005). “Wales Interview Transcript.” Retrieved December 10, 2015, from https://en.wikipedia.org/wiki/User:One/Wales_interview_transcript.
Wales, J. (2009). “Foreword,” in The Wikipedia Revolution: How a bunch of nobodies created the world’s greatest encyclopedia. London: Aurum Press.
Wikimedia Foundation. (2011). Wikipedia Editors Study. Retrieved December 15, 2015, from https://wikimediafoundation.org/w/index.php?title=File%3AEditor_Survey_Report_-_April_2011.pdf&page=1.
“Wikipedia:About.” (n.d.). In Wikipedia. Retrieved December 16, 2015, from https://en.wikipedia.org/wiki/Wikipedia:About.
“Wikipedia:Assume good faith.” (n.d.). In Wikipedia. Retrieved December 10, 2015, from https://en.wikipedia.org/wiki/Wikipedia:Assume_good_faith.
“Wikipedia:Be bold.” (n.d.). In Wikipedia. Retrieved December 10, 2015, from https://en.wikipedia.org/wiki/Wikipedia:Be_bold.
“Wikipedia:Conflict of interest.” (n.d.). In Wikipedia. Retrieved December 8, 2015, from https://en.wikipedia.org/w/index.php?title=Wikipedia:Conflict_of_interest&oldid=694255381.
“Wikipedia:Core content policies.” (n.d.). In Wikipedia. Retrieved December 16, 2015, from https://en.wikipedia.org/wiki/Wikipedia:Core_content_policies.
“Wikipedia:Editing restrictions.” (n.d.). In Wikipedia. Retrieved December 10, 2015, from https://en.wikipedia.org/wiki/Wikipedia:Editing_restrictions.
“Wikipedia:Expert editors.” (n.d.). In Wikipedia. Retrieved December 4, 2015, from https://en.wikipedia.org/wiki/Wikipedia:Expert_editors.
“Wikipedia:List of citogenesis incidents.” (n.d.). In Wikipedia. Retrieved October 16, 2015, from https://en.m.wikipedia.org/wiki/Wikipedia:List_of_citogenesis_incidents.
“Wikipedia:Long-term abuse/Orangemoody.” (n.d.). In Wikipedia. Retrieved December 8, 2015, from https://en.wikipedia.org/w/index.php?title=Wikipedia:Long-term_abuse/Orangemoody&oldid=688451455.
“Wikipedia:Notability.” (n.d.). In Wikipedia. Retrieved December 2, 2015, from https://en.wikipedia.org/wiki/Wikipedia:Notability.
“Wikipedia:Please do not bite the newcomers.” (n.d.). In Wikipedia. Retrieved December 10, 2015, from https://en.wikipedia.org/wiki/Wikipedia:Please_do_not_bite_the_newcomers.
“Wikipedia:Statement on Wikipedia from participating communications firms.” (n.d.). In Wikipedia. Retrieved December 8, 2015, from https://en.wikipedia.org/wiki/Wikipedia:Statement_on_Wikipedia_from_participating_communications_firms.
“Wikipedia:WikiProject Countering systemic bias.” (n.d.). In Wikipedia. Retrieved October 16, 2015, from https://en.m.wikipedia.org/wiki/Wikipedia:WikiProject_Countering_systemic_bias.
“Wikipedia:WikiProject Women in Red.” (n.d.). In Wikipedia. Retrieved October 16, 2015, from https://en.m.wikipedia.org/wiki/Wikipedia:WikiProject_Women/Women_in_Red.
Wray, K. B. (2009). “The Epistemic Cultures of Science and Wikipedia: A comparison.” Episteme, 6(1): 38–51.
4

GOOGLING

Hanna Kiri Gunn and Michael P. Lynch
Introduction

In a recent New Yorker cartoon, a man is fixing a sink. His partner, standing nearby, skeptically asks, “Do you really know what you are doing, or do you only google-know?” This cartoon perfectly captures the mixed relationship we have with googling, or knowing via digital interface, particularly via search engines. On the one hand, googling is now the dominant source of socially useful knowledge. The use of search engines for this purpose is almost completely integrated into many of our lives. On the other, the point the cartoon is making resonates with nearly all of us: users often recognize that there are risks and trade-offs associated with gaining certain kinds of information via online search. These facts about googling make it particularly interesting to the applied epistemologist. Our practices involving search engines not only have a distinctive character; that character also puts some traditional epistemic questions in a new light. This chapter will examine two of those questions. The first concerns the extent to which googling raises problems similar to familiar quandaries surrounding testimonial knowledge. The second – and more radical – concerns whether googling is a type of distributed or extended knowledge.
Googling as a mode of inquiry

Finding information via a search engine is a ubiquitous feature of modern life. Google, as a company, is now so dominant that the near-universal term for such activity is “googling.” For most people, googling is just how you acquire socially useful knowledge. In this chapter, we’ll use “googling” to cover a particular method or mode of inquiry. By a “mode of inquiry” we mean a process whose point is to answer a particular question or questions. Such questions needn’t be, and often aren’t, explicitly stated, but are implicit in the inquiry itself. Importantly, following common usage, we take “googling” to name the process of finding information online. As such, we take it to include not just the act of using a search engine, but the whole process of acquiring information over the Internet, including the search, the sites reached by that search, and the answer to the question(s) which is/are the target of the inquiry. Thus, to say you “googled it” is to say that you acquired the relevant answer online via a search.
The sheer ubiquity of googling hints at one of its more epistemically interesting features. Googling is deeply integrated into our epistemic life in at least three respects. First, for many questions, googling is our primary or “go-to” way of finding answers. A quick way of illustrating this fact is simply to reflect on how you might go about finding answers to any of the following questions, assuming you have normal, working Internet access:

• What are the possible causes for your car not starting?
• Where is the closest auto-parts store?
• How late is that store open?
• What opinions do people have about local auto mechanics?
Whether or not these particular questions matter to you, it is clear to most Internet users that the first thing one would do, if one wanted to know the answer to such questions, is to engage in an Internet search. Indeed, for most people, searching online happens without much forethought. It is just the obvious, immediate first step in answering almost any question about the social world; for those questions, it has a kind of priority for many people.

Modes of inquiry have causal priority over one another when you can’t engage in one mode without engaging in another. Thus, one can’t read without using vision. Modes have epistemic priority over one another when you are justified in using the second only if you are justified in using the first. But modes can also enjoy what we can call a priority of use relative to certain classes of questions. Priority of use is a kind of statistical priority. A mode has priority of use over another, relative to a given class of questions, just when it is, other things being equal, the first mode or method used in order to answer questions of that class. Consider, for example, perception with regard to questions about our immediate physical environment. If we wish to know whether it is safe to cross the road, we first try to look and see; failing that, we might ask someone nearby. If we want to know whether the phone is ringing, we first try listening for it, etc. Perception often has causal, epistemic, and use priority over other modes of knowing. But, as the above experiment indicates, googling has priority of use for many ordinary questions. We can use other modes of inquiry to answer these sorts of questions, but we typically try googling first.

Second, googling is cognitively seamless – that is, we look through the interface, or don’t treat it as cognitively salient. This is itself a consequence of googling’s ubiquity and priority of use, together with the sheer speed of Internet searches. In this way too, googling is like certain other modes of inquiry, like reading street signs or even perceiving the world around you. It is a process that happens so easily that we “see” the information we are seeking without always noticing how we are going about it.

Third, and perhaps most strikingly, most people treat googling with default, prima facie trust. How many times has someone you know announced some point of fact, only for everyone else in the room to race to their phones to verify or falsify it? We routinely use googling to trump other forms of inquiry, even to question experts. Of course, almost no one will admit to trusting everything on the Internet; we all know that googling can lead us astray. But that doesn’t stop us from using it routinely, nor from regarding it as essentially reliable on a broad range of topics. Here again, googling is akin to perception. We know that perception can mislead us, but we nonetheless treat it with prima facie trust. So too with how we acquire information online: we used to say that seeing is believing; now googling is believing.

The above three points suggest that googling, like perception, is a mode of inquiry that is deeply integrated into our epistemic life.
Of course, googling is unlike perception in numerous ways, two of which will form the basis for the rest of this chapter’s discussion. First, and perhaps most obviously, when googling we are not interacting with physical objects but consulting a source for sources that are distributed across a virtual space. And second, googling, unlike perception, is radically dependent on other people and processes that are not our “own.” Consulting Google (the actual search engine) is like asking someone about whom to ask for an answer. And any act of googling (finding information online) by its very nature is dependent upon the beliefs and actions of other people. In that way, googling is more like testimony.

Taken together, the above points indicate what we believe is distinctive about googling from an epistemological point of view. It is a mode of inquiry that is at once closely integrated (like perception) and yet distributed and reliant on others (like testimony). It is this combination of features that makes googling of particular interest to the social epistemologist.

We should note at the outset that there is one further feature of googling that makes it particularly distinctive: it is a preference-dependent mode of inquiry. From a certain level of abstraction, Facebook, Google, and most of our apps, search engines, and social platforms all work in the same basic way, different algorithms aside. They attempt to track people’s preferences by way of tracking their likes, their clicks, their searches, or their friends. That data is then analyzed and used to predict what a given person’s current and future preferences will be. But it is also used to predict what sort of information you – and, crucially, those similar to you – will find interesting, what posts you will like, and what links you will most click. The results of this preference aggregation are then served up on the websites you visit. That’s how the magic happens. It is why those shoes you were thinking of buying are being advertised on your Facebook feed. And that’s also what makes googling both one of the most efficient modes of inquiry humans have ever produced and one of the riskiest, from the epistemic point of view.1,2 In the next section, we discuss how those risks bear on the question of whether it is rational for us to trust what we learn by googling.
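Before moving on, the preference-tracking loop just described can be caricatured in a few lines of Python. This is a deliberately crude sketch, not how any real search engine works, and the click log is invented; it shows only the core move of ranking by aggregated past behavior rather than by accuracy.

```python
from collections import defaultdict

# Invented click log: (user, query, result clicked).
CLICKS = [
    ("ann", "running shoes", "shoestore.example"),
    ("cat", "running shoes", "shoestore.example"),
    ("bob", "running shoes", "marathon-blog.example"),
]

def build_scores(clicks):
    """Aggregate past clicks into per-query popularity scores."""
    scores = defaultdict(lambda: defaultdict(int))
    for _user, query, result in clicks:
        scores[query][result] += 1
    return scores

def rank(query, candidates, scores):
    """Order candidate results by how often past users clicked them."""
    return sorted(candidates, key=lambda r: scores[query][r], reverse=True)

scores = build_scores(CLICKS)
print(rank("running shoes",
           ["marathon-blog.example", "shoestore.example"], scores))
# ['shoestore.example', 'marathon-blog.example']: popularity, not accuracy,
# drives the ordering, which is the source of both the efficiency and the risk.
```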
In Googling we trust

Trust in the testimony of others plays an important role in our epistemic lives, since the amount of knowledge we need to rely on in daily life far exceeds what we can acquire personally. With the Internet serving as a major source of testimonial knowledge (seemingly for everything from low-risk queries like “how to fold a fitted sheet” to high-risk queries like “can I feed my newborn peanuts?”), it is important to explore the epistemic vulnerabilities googling presents. Trusting, in general, is an activity that makes us vulnerable to other people, and trusting in googling, as many of us do on a daily basis, brings many of the same problems. That being said, there are several features of googling that should make us wonder if it poses unique epistemic vulnerabilities that testimony from other sources does not. For one, the Internet makes it very easy for many sources to remain anonymous, and so removes the risks to reputation that face-to-face testimony may bring (and the security that those risks can provide for the inquirer). Another, related problem is the credibility of the sources that Google directs us toward, and in particular our ability to verify the credibility of those sources. In an ideal world, we would be able to rely on genuine experts for the information that we need. But googling doesn’t necessarily take us to experts for answers to our questions. When we google, we are directed to forums, Internet encyclopedias, social media, news websites, online stores, blogs, and videos, to name a few possible sources googling may point us to for answers. We can immediately see two potentially unique problems with googling: we regularly don’t know who we are talking to, and we typically don’t have access to information about their trustworthiness.
In the discussion that follows, we explore a number of similarities between googling and verbal testimony, and we point out where we think there may be unique issues raised by taking googling as a mode of inquiry. We consider in more depth the nature of the sources of testimony that googling directs us toward, and how these various sources present us with different epistemic problems. We also consider the credibility markers available to us when we google, and how these may differ from those treated in existing discussions of recognizing experts.
Googling and prima facie trust

Placing trust in others’ testimony is unavoidable. But it isn’t always warranted. So that raises a natural theoretical question: under what circumstances is such trusting rational? We mentioned earlier that googling is arguably treated by us with a default, prima facie trust – much like the testimony of others. Put more carefully, absent obvious defeaters for what someone says, we tend to take someone’s testimony that p as prima facie evidence for the truth of p. And, we contend, we typically do the same thing when we google answers to our questions. If so, then we can extend the natural question above and ask: are we rational in putting prima facie trust in googled results?

There are two dominant positions in the existing literature on testimony that bear on this question: reductionism and anti-reductionism. According to reductionists, the epistemic merit of testimony reduces to the epistemic merit of sources such as perception, memory, and inductive inference. Our trust in testimony is rational when it is based on these other sources. According to one such view, therefore, we are rational in trusting testimony by virtue of our past experiences of the general reliability of testimony – reliability that we’ve presumably tracked via other evidential sources (“global reductionism”) (van Cleve 2006; Hume 2010 [1740]: T 1.3.4). Alternatively, it may be because we are justified in accepting the testimony of particular individuals on particular topics due to beliefs about their reliability on those topics (“local reductionism”) (Fricker 1994). Note, however, that in contrast to the common-sense understanding of trust as prima facie trust sketched above, reductionist positions require positive evidence for trustworthiness; that is, trust is earned, not assumed.

Anti-reductionists, by contrast, argue that testimony itself is a basic form of evidence that provides universal, prima facie justification for accepting the testimony of speakers. What enables us to trust testimony – understood typically as factual assertions – is an epistemic principle. One such principle draws a connection between our self-trust in our own opinions and faculties and the similar faculties and beliefs of others (Reid 1975; Foley 2001; Zagzebski 2015). We don’t typically require independent justification of the reliability of our faculties in order to form beliefs; we by and large trust them as reliable (putting aside cases where we have medical or other reasons to distrust them). So, when faced with others whose beliefs have a similar etiology and who also have similar (reliable) faculties, we should, in order to avoid inconsistency, treat their testimony (the outcomes of their exercise of these faculties and opinions) with a similar level of default trust. Other anti-reductionists argue that we can afford prima facie trust to the testimony of others because, they argue, there is a special kind of epistemic warrant that comes with something being presented as true (Burge 1993).3

To extend either reductionist or anti-reductionist positions to cases of googling will depend on the degree to which googling and verbal testimony are similar. And in order to evaluate that, we need to look more closely both at the nature of the sources we find when we google (i.e., our candidate testifiers) and at the testimonial contexts we come across.
Testifiers and trolls

With these positions in mind, we can return to the question with which we began – namely, are we rational in giving prima facie trust to googled results? What we find is that differences between traditional examples of testimony and googling do matter. In most discussions of testimony, philosophers explore cases of (a) one-on-one verbal testimonial exchanges, where (b) "testimony" is typically understood as the speech act of asserting. Both points are relevant when considering googling because neither is guaranteed (or, indeed, the norm) for many sources we find, and then form beliefs from, on the Internet. Two common sorts of cases we find in the existing testimony literature involve novices looking for expert testimony in areas they know little about (e.g., anthropogenic climate change), and individuals needing to ask a stranger for advice (e.g., a tourist asking for directions). While we may be engaged in the same sorts of inquiry on the Internet (i.e., for experts' opinions or for directions), we are often not interacting with one other individual.4 In many cases we may be consulting community forums or community-compiled encyclopedias (e.g., Wikipedia or Stack Exchange). In still more cases, we are not using a human source at all. Instead, search results are delivered automatically by Google. Alternatively, answers to natural language questions are provided by computational knowledge engines like Wolfram|Alpha. We take it that these features have significance for thinking about reductionist and anti-reductionist positions that would aim to show how we are justified in trusting our Google searches.5 On a reductionist reading, we might say that we are warranted in trusting our Google results because of the general or local reliability of googling in the past. This would seem to square well with one feature of preference ranking in search engine results – the most popular results are the ones that our search engines direct us toward. That being said, and given the varied types of sources we come across when we google, a local reductionist position may be more appropriate. It seems plausible that one should not treat community-maintained wikis in the same way as one treats a medical blog or a post on social media. In order to afford prima facie trust, then, it might be epistemically preferable to localize either to a particular type of source (e.g., wikis, scientific blogs, news websites) or to a particular testifier (e.g., Wikipedia, Phil Plait's astronomy blog, or The Guardian). Whether we find reductionist positions plausible when applied to googling will hinge in part on whether we think that they are too demanding – that is, whether they presume too high an evidential standard for determining reliability. How much experience do we need to have with googling to give us rational trust, either locally or globally? If you think that we can, at least sometimes, be rational in trusting some online searches even when we lack broad personal experience, you might find the reductionist account too conservative. On the other hand, perhaps we should demand that trust in googled results, even at the local level, meet a high evidential standard like this before it is afforded. The nature (i.e., individuals or collectives/groups) and variety of sources that we come across when we google is one reason we might desire these high evidential demands.
Reductionist accounts of testimony are open to the critique that they presume too great an evidential standard for trust: global reductionism would seem to require a great deal of leg-work in establishing the reliability of past testimony, given how many claims we are exposed to via testimony relative to how few we could ever check; similarly, the narrower evidential demand of the local reductionist is more demanding than it first appears. Likewise, the anti-reductionist position seems to face some serious challenges in the case of googling. Appealing to general wisdom for a moment, we might wonder how we should square the common refrain of "You can't trust what you read on the Internet" with the
anti-reductionist view about googling that would give the verdict that we can because we found it googling. A self-trust view like that mentioned above does not appear to port over to the case of googling very easily. Recall that testimony has a prima facie level of trust on a view like Foley's because of a presumption of trust in our own opinions and faculties. If we know enough about the author of the results we find when we google, then we may be warranted in trusting those results. The issue is that we often do not know much about the authors of such results (e.g., forum contributions or blogs), or the author may not be an individual (e.g., a community wiki). More problematically for self-trust views, the author may not be a person at all (e.g., Google's automated responses to many questions, including currency conversions and mathematical questions). For similar reasons, appealing to a Burge- or Gricean-style anti-reductionist position for googling is not straightforward either. The general idea of these positions is that we are warranted in trusting because of norms that apply to verbal testimonial settings, namely ones that are tied to truthfulness and speech acts of asserting. Unlike verbal testimony, where we typically take testimony to be a voiced assertion, googling arguably directs us toward a much larger range of speech acts and speakers than just individuals offering up explicit opinions. We might find an answer to our query in a picture, a forum, or a video, or by reading news websites, reading a blog, or from an automated system. In some such cases, the information we draw on is not being presented as true, and thus it doesn't seem we would be entitled in the way that Burge, for example, argues. But even in cases where the results of googling can be taken as claims that would be verbally asserted by someone, it is not clear that the same conversational norms that guide these anti-reductionist accounts necessarily apply on the Internet. Consider the phenomenon of Internet "trolls." Trolls are individuals who partake in forums or comment sections (or anywhere individuals can post messages to others) with the intention of angering, fooling, or otherwise making insincere contributions to the discussion. Trolling contributions vary in their obviousness, but in many cases they are intentionally deceptive in order to get the desired rise out of other participants in the discussion. Consequently, it can be difficult to detect trolls. If it is highly likely that trolls are contributing to the source our googling directs us toward, then we would be wise not to treat the information in the source with a default level of trust. In response to this concern, we might take trolls to be merely a new instantiation of liars or deceivers (or even bald-faced liars), in which case we can look to philosophers of language for their existing analyses (Bok 1978; Carson 2010; Saul 2012). This would make Internet trolls not a new problem, but still one that poses an epistemic threat to us when we google. In sum, there is some intuitive appeal to a reductionist account of prima facie trust in our googling results, even with the potentially high evidential demand that these accounts bring. In cases of verbal testimony, there are factors that can bias us against (or in favor of) certain potential testifiers (Mills 2007; Fricker 2007; Medina 2013).
Consequently, our ability as individuals to perform the relevant tasks for assessing the reliability of potential testifiers can be biased in unreliable ways. These are serious issues for social epistemologists concerned about testimonial knowledge and trust. Likewise, the Internet brings with it a range of biasing factors that can thwart our ability to place trust appropriately.
Selecting our Google sources

A number of different factors determine the search results that we see when we look for information through a search engine, but these often include our personal search history and the
searches of others – that is, facts about the preference-dependence that we introduced earlier. It is plausible that the ranking of search results brings with it a degree of reliability: information that is accurate, true, or otherwise reliable will be more likely to be shared or cross-linked and thus will rise in the rankings. There are other factors, though, that can push a result higher in the rankings: "viral" information, memes, or simply being the product of a celebrity or "influencer" (of film, music, television, YouTube, etc.). In these cases, we have popularity-because-popular, rather than popularity-because-reliable, as in the former cases. These facts select for certain testifiers over others, and in ways that may not be tied to truth or reliability.6 As Miller and Record (2013) argue, these various methods of personalization – despite often being touted as features – may undermine our ability to justify our beliefs. Miller and Record take three properties of googling as central to their analysis: (i) our ability to discern between different kinds of sources and why they have been so aggregated, (ii) the transparency (or lack thereof) of how results are generated, and (iii) the representativeness of the information available to the user. They argue that when these personalization technologies are secretive or non-transparent, we are likely to form beliefs that are less justified. As a result, they argue that users can and should do what they can to better inform themselves about googling and the ways our sources are selected. Of course, we may not be googling for information via a search engine, but instead by posting on forums or through social media, and these sources run the risk of being undermined by other biasing factors. To take social media as an example: our friend circles mean that we are likely to find testifiers for opinions and beliefs that support what we already believe, given that we tend to form friendships with others who have worldviews similar to our own. This can bias the testimony that we find, but it can also lead to a biasing of the primary sources that we are advised to look at when we ask on social media. For example, those on one side of the alternative medicine debate are likely to send recommendations and links to outside sources that support their own beliefs. So, the advice we get about what to look at to answer our queries on social media (and potentially forums) can be highly selective in ways that might bias us in one direction.7 Thus, googling poses a selection problem for testifiers because of the way in which searches are biased, and another problem regarding the way that we gather information from those we already trust. Are these new problems with googling? We would say, yes and no. Obviously, prior to being able to google, the results of our inquiry could not be subject to the preference-dependence biasing effects that search engines cause. That said, other social factors likely bias the popularity of the information we hear about through other traditional media forms, and these are also not necessarily due to any epistemically virtuous qualities on the part of the media in question. The social media discussion is also not all that new. Friend circles (or family, as the case may be) are naturally selective, and the biasing effects from social media are probably best understood as extensions of pre-existing facts about non-social-media friendships and associations.
In both cases, however, the near ever-present ability to google, the massive body of information that makes up the Internet, and the speed of results present us with a medium that has the potential to greatly exaggerate these problems.
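To make the popularity-because-popular worry concrete, here is a minimal sketch of a toy ranking feedback loop. Everything in it is invented for illustration – the scoring rule, the weights, and the pages – and it is not a description of how any actual search engine ranks results.

```python
# Toy illustration of popularity-because-popular ranking drift.
# All weights and data are invented; no real search engine works this way.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float  # topical match to the query (held fixed here)
    clicks: int = 0   # accumulated clicks feed back into the score

def score(page: Page, popularity_weight: float = 0.5) -> float:
    # Rank by relevance plus a bonus that grows with past clicks.
    return page.relevance + popularity_weight * page.clicks

pages = [
    Page("reliable-site.example", relevance=0.9),
    Page("viral-meme.example", relevance=0.4, clicks=2),  # small head start
]

# Simulate users who overwhelmingly click the top-ranked result
# (the "trust bias" mentioned in note 6).
for _ in range(20):
    top = max(pages, key=score)
    top.clicks += 1  # the top result gets the click, whatever it is

for p in sorted(pages, key=score, reverse=True):
    print(f"{p.url}: score={score(p):.1f}, clicks={p.clicks}")
```

After a small initial burst of attention (a celebrity share, say), the less relevant page stays on top indefinitely: the loop rewards popularity itself, not reliability.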
Credibility markers

The risk of false information, liars, or bullshit (à la Frankfurt 2005) is always present when dealing with testimony. As we have just discussed, it is not necessarily that the Internet and
googling create new epistemic problems for us, but rather that they run the risk of exaggerating existing threats or exposing us to them more often. This is because of the easy availability of googling and the huge amount of information on the Internet available to support nearly any position one favors. For instance, it is just as easy to find complicated explanations of why the moon landings were a hoax as it is to find complicated explanations of how they could not have been a hoax. The consequence is that assessing the credibility of our sources when googling becomes especially important. The relevant existing problem in social epistemology concerns when we – as novices – are rationally justified in trusting experts. By definition, as novices we are not in a position to judge the truth of what a purported expert tells us. Plato's Charmides provides a particularly early discussion of this problem. Sources on both sides of the moon landing debate (if we can call it such) will claim expertise, and there may even be internal debates on each side about the details and evidence that best support their side's position. Insofar as there are debates between two experts (regardless of their stance on the moon landing), we as novices can be faced not only with novice/expert situations but with novice/2-expert situations (Goldman 2001). In the former, we find ourselves trying to evaluate the credibility of one purported expert. In the latter, we are faced with trying to adjudicate between conflicting experts on the topic at hand. One solution to this problem is to use institutional markers of expertise. These are such things as earned degrees, support from other experts on the same side of the debate in question (best when this support is from the majority), performance in presenting their testimony, evidence pertaining to the purported expert's past credibility, or evidence of questionable biasing factors (e.g., funding from groups or individuals with a vested interest in the results). Let us limit ourselves to cases where we feel confident saying that the Google results are presented as true and where we are not experts on the issue at hand.8 With the cases left over, we will still be able to find ourselves in many novice/expert situations and novice/2-expert (or more) situations when we google. The question now is whether we can appeal to any institutional markers of expertise that might assist us in such a situation, or whether there are non-institutional but reliable markers of expertise available to us when we google. We take it that a key consideration here, once again, is the nature of the source one has found on the Internet. The trustworthiness of someone who writes their own blog will need to be assessed differently from that of someone who has answered a question on a forum post. Facts about the website itself can often aid us in assessing credibility, but this requires that we have sufficient knowledge about different types of websites, the role of advertising, and the nature of the testifiers. We discuss three examples here to demonstrate what types of credibility markers we can have when we google: publicly accessible feedback for individuals, advertising, and community-enforced standards (for a related discussion, see Mößner and Kitcher 2016: 10).
One source of questionable help in these cases is biographical information provided by the authors themselves, which in some cases may be easily verified (e.g., by checking their alleged institutional or other affiliations). On many sites, for example, there are internal systems that report on users in ways that can indicate their credibility. On many forums, participants are ranked with titles that indicate their expertise or role (e.g., "moderator," "administrator"), and they receive publicly accessible feedback either for themselves as posters of information (e.g., systems that award visual tokens for the number of contributions made) or for the particular piece of information posted (e.g., "upvoting"9). Examples of such forums include Stack Exchange (user titles, visual tokens (bronze/silver/gold medals), and upvoting), Reddit (visual tokens (karma), upvoting), and the Ubuntu Forums (titles, visual tokens (coffee beans)). Making effective use of these credibility markers, though, requires that the user who has been
directed to the website through googling knows both what credibility markers are in place and what such markers, like visual tokens or upvotes, indicate on that website. In some cases, it is also useful to have an understanding of the risk of trolling behaviors on particular forums (e.g., Reddit), and how these can influence the upvoting of results. A second potential credibility marker is the role of advertising on the websites that we find when googling. A healthy skepticism toward advertising is generally warranted in our epistemic practices, as companies typically have monetary gain in mind rather than our epistemic interests. On the Internet such general skepticism is similarly warranted, particularly when the articles we are directed toward are "sponsored content." Sponsored content, in contrast to traditional editorial content, consists of paid-for articles that are used as advertisements or promotions but are presented as though they are actual editorial content (usually with "sponsored content" written somewhere on the page). This is a way for advertisers to take advantage of the credibility we typically afford to editorial articles and direct it toward their product. We take it that the general rules we apply in being skeptical of advertised claims should apply when looking at sponsored content, but epistemic problems arise from our ignorance of what sponsored content is and from our limited ability to spot that a source is in fact sponsored content. There is a growing body of literature looking at our ability to detect sponsored content and our general ability to assess the credibility of Internet sources (Abdulla et al. 2007). The third credibility marker is less a set of markers or signals than the commitment of some Internet sources to a set of epistemic norms upheld by the source itself. One major source of information that googling directs us toward is Wikipedia, and understanding how Wikipedia functions is an important part of understanding its credibility as a source. Entries on this Internet encyclopedia are created, edited, and managed by a community of users. Importantly, the majority of these users are novices and not experts on the topics they contribute to (or at least, they are not required to have qualifications to contribute or to be a community moderator). The community has rules for contributing to entries (the "five pillars") that are enforced by the community. Consequently, Wikipedia is "self-consciously" not a normative epistemic source (i.e., one that tells us what we ought to believe about a topic) but rather a descriptive one (i.e., one that describes what the present consensus or majority view is on a topic).10 All entries are required to meet the Wikipedia standard of verifiability ("Wikipedia:Verifiability"), and a main task of community users who edit and maintain entries is to check up on edits made by others (a task facilitated in part by discussion boards for each entry). Given that those who edit Wikipedia pages are typically novices, this seems the rational move – publish what is already accepted in the literature.11 Entries are required to come supplied with citations and other credibility markers (e.g., "citation needed," or comments at the top of the entry indicating that the page needs work to meet the community standards).
What this does mean, though, is that experts may not be able to update entries with our best information until sufficient time has passed for new information to be widely accepted by the relevant community of experts. This can mean that when our googling directs us to Wikipedia, our information can be out of date or false (until such time as it is accepted by the majority of the relevant expert community).12 All of these measures for assessing the credibility of sources are taxing – they require a reasonably comprehensive knowledge both of the different platforms one comes across when one googles and of the particular sources that one finds (i.e., knowing which credibility markers to look out for). While googling provides us with easy and immediate access to answers to any question we could think to ask, rationally evaluating the quality of the answers we find is a hard ask – even for those who have grown up in the Internet age.13
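To see how much platform-specific knowledge even a crude use of such markers presupposes, consider the following sketch. It is a toy illustration only – the marker names, weights, and troll-risk discount are all invented, and no actual forum computes credibility this way.

```python
# Hypothetical credibility heuristic for a forum post found via googling.
# Every name and number below is invented for illustration; real platforms
# (Stack Exchange, Reddit, etc.) expose different signals with different
# meanings, which is exactly what makes this hard in practice.

def credibility_score(post: dict, platform_norms: dict) -> float:
    score = 0.0
    # Upvotes only mean something relative to what voting tracks on this
    # site (truth? humor? agreement?) and how much traffic it gets.
    score += platform_norms["upvote_weight"] * post.get("upvotes", 0)
    # Titles like "moderator" are assigned by the community, not by
    # any external certification of expertise.
    if post.get("author_title") in platform_norms["trusted_titles"]:
        score += 2.0
    # Visual tokens (medals, karma, coffee beans) vary by platform.
    score += platform_norms["token_weight"] * post.get("author_tokens", 0)
    # Discount sites known for trolling that skews the votes.
    score *= 1.0 - platform_norms["troll_risk"]
    return score

forum_norms = {"upvote_weight": 0.1, "token_weight": 0.05,
               "trusted_titles": {"moderator"}, "troll_risk": 0.1}
post = {"upvotes": 40, "author_title": "moderator", "author_tokens": 12}
print(credibility_score(post, forum_norms))  # 5.94
```

The point of the sketch is deflationary: every number in it is a judgment call that presupposes familiarity with the particular platform, which is precisely the burden described above.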
What this suggests is a distinction between comfort and competence when googling. While many of us are perfectly comfortable asking our preferred search engine for the answers to our problems, it is apparent that many of us are far less competent at evaluating the credibility of what we find. The ease with which we google and readily accept answers, despite this gap between comfort and competence, raises interesting questions about our relationship with the information we come across when we google.
Googling as extended knowledge

Imagine "neuromedia," technology that allows the capabilities of your smartphone to be encoded on technology directly linked to your neural network: a brain-chip, if you will (Lynch 2014). Suppose everyone in a particular community had access to this technology. They can query Google and its riches "internally"; they can comment on one another's blog posts using "internal" commands. In short, they can share knowledge – they can review one another's testimony – in a purely internal fashion. This would have, to put it lightly, an explosive effect on each individual's "body of knowledge." That's because whatever I "post" mentally would then be almost instantly accessible by you mentally (in a way that would be, we might imagine, similar to accessing memory). We'd share a body of knowledge by virtue of being part of a network. But neuromedia also raises a further, more radical possibility: individual neuromedians' knowledge would be "extended," in that they would be sharing not just the content of what is known but the actual process or act of knowing itself. While neuromedia makes this point vivid, current technology arguably already extends our knowledge. In this last section, we'll briefly examine this possibility. Traditionally, humans have known about the world via processes such as vision, hearing, memory, and so on. These modes of inquiry are internal; they are in the head, so to speak. But if you had neuromedia, the division between ways of forming beliefs that are internal and ways that are not would no longer be clear. The process by which you access posts on a webpage would be as internal as access to your own memory. So, plausibly, if you come to know, or even justifiably believe, something based on information you've downloaded via neuromedia, that's not just a matter of what is happening in your own head. It will depend on whether the source you are downloading from is reliable – and that source will include the neural networks and cognitive processes of other people. In short, were we to have neuromedia, the difference between relying on yourself for knowledge and relying on others for knowledge would be a difference that makes less of a difference. Andy Clark and David Chalmers's (1998) "extended mind" hypothesis suggests that, in fact, many of our intentional mental states are already extended past the boundaries of our skin. When we remember what we are looking for in a store by consulting a shopping list on our phone, they argue, our mental state of remembering to buy bread is spread out; part of that state is neural, and part of it is digital. The phone's Notes app is part of the remembering. If Clark and Chalmers are right, then neuromedia doesn't extend the mind any more than it already is extended. We already share minds when I consult your memory and you consult mine. The extended mind hypothesis is undoubtedly interesting, and it may just be true. But we don't actually have to go so far to think that, when employing googling as a mode of inquiry, our knowledge is extended. That is because even if we don't literally share mental states, it is possible, according to other philosophers, that we share the processes that ground or justify what our individual minds believe and think. Sandy Goldberg (2010) has argued, for example, that when I come to believe something based on information you've given me, whether or not I'm justified in that belief doesn't
depend just on what is going on in my brain. Part of what justifies my belief is whether you, the teacher, are a reliable source. What justifies my receptive beliefs on the relevant topic – what grounds them – is the reliability of a process that includes the teacher's expertise. So, whether I know something, according to Goldberg, depends as much on what is going on with the teacher as it does with the student. Goldberg's hypothesis seems plausible when applied to googling if you accept two conditions. The first condition is that when we form beliefs via googling – by relying on TripAdvisor or Google Maps, for example – we form beliefs by a process that is essentially socially embedded: a process the elements of which include not just chips and bits but aspects of other people's minds, social norms, and our own cognition and visual cortex. The second condition is that a broadly reliabilist view of knowledge is correct. According to this position, you know that, for example, Denver is in Colorado just when that belief is the result of a reliable belief-forming process. Importantly, you don't have to know that the process in question is reliable – it just has to be reliable. If that is how knowledge, or at least knowledge via googling, works, then if these processes are themselves extended, the grounds for belief are extended, and arguably our Google knowledge is as well.
Conclusion

Googling is a mode of inquiry that is not going away; it is, for many people, the most common way to form beliefs about the social world. The epistemological questions it raises are old questions, but these questions are given a new and pressing form. In a similar fashion, applied epistemologists (and ethicists) are beginning to look in more detail at issues of privacy and surveillance on the Internet, topics which dovetail with the phenomenon of preference-dependence discussed here. As googling continues to occupy such a central role in our lives, new questions about epistemic responsibility and epistemic agency on the Internet are also rising to prominence. We expect that applied epistemologists (including ourselves) will continue to explore these topics as our lives become ever more integrated with the digital world.14
Notes

1 Preference-tracking can be manipulated while, at the same time, itself manipulating (Lynch 2016). That's important because search prompts and the ranking of results affect what people know as much as what turned up in the encyclopedia did (back when there were such things). Here's a quick example: type "is climate change…" into Google and you are likely to have Google's autocomplete helpfully finish your thought with "…a hoax?" (That answer often comes up before "…a fact?"). Hit that, and you might get, for example, "Top Ten Reasons Climate Change is a Hoax" from a nice little outfit calling itself "globalclimatescam." (One can learn so much on the Internet!)
2 See Miller and Record (forthcoming) for a discussion of the epistemic risks posed by search engine autocomplete features, and a proposal for the epistemic responsibilities of information providers.
3 Similar arguments are made by appeal to a pragmatic principle like Grice's "Cooperative Principle" (1989); the general idea behind these views is that the testimonial setting itself entitles the listener to believe what is said.
4 We leave aside here examples of trusting online stores about their own products. For one, the credibility of businesses touting their own wares is not new with the Internet and so doesn't present new issues. For another, concerns about the credibility of reviews are captured in the other cases we discuss here and in the next section on credibility markers.
5 As stated above, philosophers typically take assertions to be the speech act in testimony cases that leads to testimonial knowledge. While we may want to say that our non-human sources and community-compiled encyclopedias, for example, are presenting information to us as true, it's not immediately clear that they (qua non-human or non-individual sources) are performing assertions. In addition,
in many cases where the source may be both human and individual, there are a great many types of media on the Internet we may be directed toward from our googling, and these need not involve any assertions.
6 Indeed, research into the clicking behavior of individuals using search engines supports the conclusion that individuals are likely to click the top search result even when the next result may be (quite plainly) the more relevant option. One hypothesis offered is that individuals have a "trust bias" for top results because of an assumption that search engines present the most relevant results first (Joachims et al. 2005).
7 One medium we do not discuss here in detail is the Internet blog. Given the sheer number of blogs created by a vast range of individuals (and thus representing a potentially equally large number of perspectives and opinions), blogs are an important area for epistemic work on googling. For a discussion of the interplay between, and an epistemic comparison of, blogs and professional (particularly political) journalism, see Coady (2012: Chap. 6).
8 We put aside cases where people base their beliefs on non-assertions, and cases where the searcher has enough knowledge to adjudicate between competing purported experts.
9 Upvoting is a way for users to indicate support for the information in a post or contribution to an Internet forum. We use "support" here because upvoting is used across a wide variety of websites and may be an indication of truth, reliability, aesthetic value, etc. Typically, upvoting systems will rank the contributions to the forum so that those with the highest votes are at the top.
10 As the website itself states, "We strive for articles that document and explain major points of view, giving due weight with respect to their prominence in an impartial tone. We avoid advocacy and we characterize information and issues rather than debate them. In some areas there may be just one well-recognized point of view; in others, we describe multiple points of view, presenting each accurately and in context, rather than as 'the truth' or 'the best view.' All articles must strive for verifiable accuracy, citing reliable, authoritative sources, especially when the topic is controversial or is on living persons. Editors' personal experiences, interpretations, or opinions do not belong" ("Wikipedia:Five pillars").
11 For a defense of the position that novices ought to believe in accordance with the larger group of experts, see Coady (2012: Chap. 2).
12 For an interesting case of an expert finding themselves unable to edit a Wikipedia page on their own area of expertise, see Messer-Kruse (2012).
13 See Hargittai et al. (2010) and Wineburg and McGrew (2016) for two examples of studies specifically addressing young people's ability to conduct reliable credibility judgments on the Internet (and what inhibits them). Studies of this sort also raise important questions about epistemic responsibility on the Internet and whose job it is to make us safe when we google.
14 Thanks to Casey Rebecca Johnson, Nate Sheff, Teresa Allen, and audiences at the Social Epistemology Working Group (SEW) for comments and discussion.
References

Abdulla, R. A., Aiken, K. D., Anderson, C. C., Boush, D. M., Casey, D., Eysenbach, G., … Rainey, J. G. (2007). "Bibliography on Web | Internet Credibility." Retrieved February 9, 2014, from www.semanticscholar.org/paper/Bibliography-on-Web-Internet-Credibility-Flanagin-Metzger/042d8e5813f3d70ffbc7b0d9e99e4b4b99842372.
Bok, S. (1978). Lying: Moral choice in public and private life. New York: Pantheon Books.
Burge, T. (1993). "Content Preservation." The Philosophical Review, 102(4): 457–88.
Carson, T. L. (2010). Lying and Deception: Theory and practice. Oxford: Oxford University Press.
Clark, A., and Chalmers, D. (1998). "The Extended Mind." Analysis, 58: 7–19.
Cleve, J. van (2006). "Reid on the Credit of Human Testimony," in J. Lackey and E. Sosa (eds.), The Epistemology of Testimony. Oxford: Clarendon Press.
Coady, D. (2012). What to Believe Now: Applying epistemology to contemporary issues. Chichester: Wiley-Blackwell.
Foley, R. (2001). Intellectual Trust in Oneself and Others. Cambridge: Cambridge University Press.
Frankfurt, H. G. (2005). On Bullshit. Princeton, NJ: Princeton University Press.
Fricker, E. (1994). "Against Gullibility," in B. K. Matilal and A. Chakrabarti (eds.), Knowing from Words: Western and Indian philosophical analysis of understanding and testimony. Dordrecht: Springer.
Fricker, M. (2007). Epistemic Injustice: Power and the ethics of knowing. Oxford: Oxford University Press.
Goldberg, S. (2010). Relying on Others: An essay in epistemology. Oxford: Oxford University Press.
Goldman, A. I. (2001). "Experts: Which ones should you trust?" Philosophy and Phenomenological Research, 63: 85–110.
Grice, H. P. (1989). Studies in the Way of Words. Cambridge, MA: Harvard University Press.
Hargittai, E., Fullerton, L., Menchen-Trevino, E., and Yates Thomas, K. (2010). "Trust Online: Young adults' evaluation of web content." International Journal of Communication, 4: 468–94.
Hume, D., Norton, D. F., and Norton, M. J. (2010). A Treatise of Human Nature. Oxford: Oxford University Press.
Joachims, T., Granka, L., Pan, B., Hembrooke, H., and Gay, G. (2005). "Accurately Interpreting Clickthrough Data as Implicit Feedback." Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM Press: 154–61.
Lynch, M. P. (2014). "Neuromedia, Extended Knowledge and Understanding." Philosophical Issues, 24: 299–313.
Lynch, M. P. (2016). The Internet of Us: Knowing more and understanding less in the age of big data. New York: Liveright Publishing Corporation.
Medina, J. (2013). The Epistemology of Resistance: Gender and racial oppression, epistemic injustice, and resistant imaginations. Oxford: Oxford University Press.
Messer-Kruse, T. (2012). "The 'Undue Weight' of Truth on Wikipedia." The Chronicle of Higher Education. Retrieved February 8, 2017, from www.chronicle.com/article/The-Undue-Weight-of-Truth-on/130704/.
Miller, B., and Record, I. (2013). "Justified Belief in a Digital Age: On the epistemic implications of secret Internet technologies." Episteme, 10(2): 117–34.
Miller, B., and Record, I. (forthcoming). "Responsible Epistemic Technologies: A social-epistemological analysis of autocompleted web search." New Media and Society.
Mills, C. (2007). "White Ignorance," in S. Sullivan and N. Tuana (eds.), Race and Epistemologies of Ignorance. Albany, NY: State University of New York Press.
Mößner, N., and Kitcher, P. (2016). "Knowledge, Democracy, and the Internet." Minerva, 55(1): 1–24.
Reid, T., Beanblossom, R. E., and Lehrer, K. (1975). Inquiry and Essays. Indianapolis, IN: Bobbs-Merrill.
Saul, J. M. (2012). Lying, Misleading, and What Is Said: An exploration in philosophy of language and in ethics. Oxford: Oxford University Press.
"Wikipedia:Five pillars." (n.d.). In Wikipedia. Retrieved February 8, 2017, from https://en.wikipedia.org/wiki/Wikipedia:Five_pillars.
"Wikipedia:Verifiability." (n.d.). In Wikipedia. Retrieved February 8, 2017, from https://en.wikipedia.org/wiki/Wikipedia:Verifiability.
Wineburg, S., and McGrew, S. (2016). "Why Students Can't Google Their Way to the Truth." Education Week, 36(11): 22–28.
Zagzebski, L. T. (2015). Epistemic Authority: A theory of trust, authority, and autonomy in belief. New York: Oxford University Press.
5
ADVERSARIAL EPISTEMOLOGY ON THE INTERNET
Don Fallis
Over the past twenty years, the Internet has become an increasingly important source of information for almost everyone. But can we trust this information? Since anyone at all can send an email or put up a website, we have to wonder whether these people really know what they are talking about. However, the Internet poses an even more insidious threat to our ability to acquire knowledge. Many people online intentionally try to mislead us. For instance, fraudulent websites have been created in order to manipulate stock prices. Also, Wikipedia entries have been altered in order to manipulate public perceptions. Is it possible to acquire knowledge from the Internet when there are so many epistemic adversaries out there? In this chapter, I suggest how work by René Descartes and David Hume points the way to one possible solution to the problem of epistemic adversaries on the Internet. In his discussion of malicious demons, Descartes emphasizes the importance of identifying the source of the information that we receive. In his discussion of reports of miracles, Hume emphasizes the importance of finding evidence that is difficult to fake. By combining their insights in what I will call adversarial epistemology, and by appealing to work in modern cryptography, I argue that it is possible for us to assure ourselves that we are dealing with a trustworthy source and, thus, to acquire knowledge from the Internet.
Epistemic adversaries on the Internet

It is a sad truth that we face all sorts of adversaries in life who intentionally try to thwart our goals. For instance, competitors in sports, in business, and in politics attempt to beat us out for the victory. Criminals and terrorists even try to take our money or take our lives. Sometimes these adversaries use force to get their way. But frequently, as when a running back fakes to the left and runs to the right in order to fool a potential tackler, adversaries simply use guile. That is, they interfere with what we know about the world so that we will make choices that redound to their benefit, choices that we would not make if we had all the facts.1 Such epistemic adversaries show up in all sorts of contexts. Politicians lie to us on television, scammers lie to us over the phone, and all sorts of people lie to our faces. In this chapter, I focus on one specific area where we frequently face adversarial threats to our knowledge: the Internet. The Internet has greatly expanded the amount of information that we have access to and has significantly reduced the time that it takes us to access information (see Thagard 2001). Thus, we
increasingly use the Internet to learn about all sorts of important topics, such as health information (see Fallis and Frické 2002: 73) and financial information (see Fowler et al. 2001). But there are definitely adversaries on the Internet trying to interfere with our knowledge acquisition. Fraudulent websites, phishing scams, and online hoaxes are commonplace (see Schneier 2000: 23–28; Hancock 2007). Moreover, the Internet has many features that facilitate deceptive activities. For instance, it is easier to make emails and websites look authentic as compared with physical documents (see Schneier 2000: 74–77). Also, emails and websites provide people with fewer potential clues to the deceptive intent of the authors as compared with face-to-face communication. As Ralph Keyes (2004: 198) puts it, "with email we needn't worry about so much as a quiver in our voice or a tremor in our pinkie when telling a lie. Email is a first rate deception-enabler." As a result, it can be difficult to determine the answers to the following sorts of questions:
• Does my organization's IT department really need me to send them my password in order to carry out a new software update as this email claims (see Schneier 2000: 267–68)?
• Is this tech startup going to be purchased by a multinational corporation as this business news website claims (see Schneier 2000: 74; Fowler et al. 2001)?
• Is this email a legitimate request from a Nigerian prince for assistance in transferring funds out of the country (see Hancock 2007: 290)?
• Was this famous journalist involved in the Kennedy assassinations as his Wikipedia page maintains (see Fallis 2008: 1665)?
• Is my nephew really stranded in Romania as this email claims?
• Is this a valid credit card transaction (see Schneier 2000: 78–79)?
• Is the person that I am chatting with online an attractive young woman who sympathizes with the Syrian rebels as "she" claims (see Gallagher 2015)?
• Did Brad Pitt shock liberals by endorsing Donald Trump in the United States Presidential Election as this news website claims (see Ohlheiser 2016)?
In many cases, such as with fake Nigerian princes and fake IT departments, it is pretty easy to identify epistemic adversaries. Since so many potential victims can be reached so easily on the Internet, a successful adversary's online presence does not have to be all that convincing (see Schneier 2000: 18). She can make money even if only a few of the people who receive the email or visit the website are taken in. But epistemic adversaries on the Internet are often much more subtle. Since a huge amount of personal information about each of us is available on the Internet, it is fairly easy for an epistemic adversary to target people that she is likely to convince (see Schneier 2000: 18–19). Alternatively, as in spear phishing, she can tailor her attack to the specific person that she needs to fool. We could avoid being misled by epistemic adversaries online by believing nothing that we read on the Internet. But we probably do not want to take such extreme measures. As William James (1979 [1896]: 31–32) famously put it, "a rule of thinking which would absolutely prevent me from acknowledging certain kinds of truth if those kinds of truth were really there, would be an irrational rule." There is a lot of valuable information on the Internet that we would miss out on. Also, what if my nephew really is stranded in Romania?
Many information scientists (e.g., Alexander and Tate 1999) have published checklists of features that are supposed to be indicators of accuracy (such as whether a website is up to date or whether the author is listed). However, in empirical studies (e.g., Fallis and Frické 2002; Kunst et al. 2002; Khazaal et al. 2012), few of the proposed indicators have been found to be correlated
with websites actually containing accurate information, and those that are correlated are not very highly correlated.2 Moreover, the proposed indicators are not specifically targeted at identifying epistemic adversaries. This is a serious problem since the clues that suggest that someone is lying are unlikely to be the same clues that suggest that a person just does not know what she/he is talking about.3 In order to acquire knowledge from the Internet despite the fact that there are epistemic adversaries out there, we really need a theory about how to identify epistemic adversaries. Such a theory falls within the domain of epistemology, the branch of philosophy that studies what knowledge is and how people can acquire knowledge. In particular, we need to do some work in what might be called adversarial epistemology.
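To make the notion of a weakly correlated indicator concrete, here is a minimal sketch of how such studies quantify the association between a binary checklist feature and accuracy. The 0/1 data are fabricated for illustration, not taken from Fallis and Frické (2002) or any other study.

```python
# Sketch of measuring how well a checklist indicator tracks accuracy.
# The toy vectors below are fabricated; they stand in for coded study data.
import numpy as np

# 1 = website displays the indicator (e.g., author listed), 0 = it doesn't
indicator = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
# 1 = website's claims checked out as accurate, 0 = inaccurate
accurate = np.array([1, 0, 0, 1, 1, 0, 0, 1, 1, 1])

# For two binary variables, the Pearson correlation is the phi coefficient.
phi = np.corrcoef(indicator, accurate)[0, 1]
print(f"phi = {phi:.2f}")  # 0.17 for this toy data
```

A phi value near zero, as in this toy case, is the kind of weak association the empirical studies report: the indicator turns up on accurate and inaccurate sites alike.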
The Cartesian lesson in adversarial epistemology

In recent years, several epistemologists (e.g., Goldman 1999: 165–73; Thagard 2001; Keen 2007; Brennan and Pettit 2008; Fallis 2008; Pettit 2008; Magnus 2009; Simon 2010; Coady 2012: 138–71; Simpson 2012; Frost-Arnold 2014b) have looked at knowledge acquisition in the context of the Internet. In particular, they have investigated the reliability of the Internet as a source of information. However, this work in Internet epistemology has focused primarily on worries about the competence rather than about the sincerity of Internet sources. For instance, should we trust what we read in Wikipedia, the "free online encyclopedia that anyone can edit" (emphasis added), in the way that we trust traditional encyclopedias? Also, given that bloggers often lack the training and the resources of traditional journalists, should we trust what we read in the blogosphere to the same degree that we trust traditional news media? The issue of epistemic adversaries on the Internet is typically only mentioned in passing, if at all.4 Indeed, epistemologists in general rarely worry about the possibility of epistemic adversaries. Whether we try to acquire knowledge through reason or through perception, it is certainly possible to end up with false beliefs instead of knowledge. But epistemologists typically focus on cases where it is accidental, rather than intentional, that we are misled.5 For instance, humans make errors in reasoning (see Feldman 2003: 157–66). Also, we can be fooled by optical illusions (simply because nature is indifferent to our epistemic outcomes). Nevertheless, there are a few notable examples of adversarial epistemology in the history of philosophy.6 In his Meditations on First Philosophy, Descartes (1996 [1641]) wanted to see if there was anything that he could know for certain. He concluded that, unfortunately, most of his beliefs were open to doubt. For instance, Descartes (1996 [1641]: 13) realized that he might only be dreaming "that I am here in my dressing-gown, sitting by the fire – when in fact I am lying undressed in bed!" But in addition, it might be that "some malicious demon of the utmost power and cunning has employed all his energies in order to deceive me" (Descartes 1996 [1641]: 15). It is conceivable that "the sky, the air, the earth, colours, shapes, sounds and all external things are merely the delusions of dreams which he has devised to ensnare my judgment." At first glance, the problem of the malicious demon might not seem particularly relevant to our problem of epistemic adversaries online. What Descartes envisions is completely crazy. Indeed, it makes the idea that you might be a brain in a vat hooked up to a powerful computer providing you with simulated experiences (e.g., as depicted in The Matrix) seem quite reasonable. No one (not even Descartes) thinks that these sorts of science fiction scenarios are really the case. By contrast, we are concerned here with much more mundane adversaries. The idea that this email that I just received came from someone who is only pretending to be a Nigerian prince (or my nephew) is certainly a live possibility.
But that being said, Descartes is addressing exactly the right sort of problem. Real-life adversaries do precisely what the malicious demon does. They make the world appear to be a way that it is not. And they often go to great lengths to do this. Thus, much as Descartes worried, it is not clear that we can know that the world really is as it appears to be. So, how does Descartes address the problem of the malicious demon? Descartes famously identified one fact (viz., that he himself exists) that not even the demon could deceive him about. (He cannot doubt his own existence because he must exist in order to do all of this doubting.) But in order to secure his knowledge of any other facts, Descartes had to argue that there is an all-powerful being who is benevolent and is concerned that we have good epistemic outcomes. As Descartes (1996 [1641]: 37–38) put it, "since God does not wish to deceive me, he surely did not give me the kind of faculty which would ever enable me to go wrong while using it correctly." At first glance, this strategy might not seem to be particularly helpful for our purposes. First, it is not clear that we should be convinced by Descartes's arguments for the existence of a benevolent (rather than a deceitful) God. Many objections to them have been raised (see Descartes 1996 [1641]: 81–83, 95–99). Second, even if his arguments are correct, they only rule out the possibility of an all-powerful malicious demon. They do not rule out the possibility of more mundane adversaries on the Internet. Even so, Descartes is asking exactly the right question. In order to acquire knowledge despite the fact that there may be epistemic adversaries out there, we have to determine who we are dealing with. In particular, are we dealing with someone who is trying to inform us about the world, or are we dealing with an adversary who is trying to mislead us? Call this the Cartesian lesson in adversarial epistemology.7 Admittedly, there are several potential concerns about following Descartes's advice. First, identifying the source of a piece of information on the Internet is notoriously difficult (see Brennan and Pettit 2008: 191; Pettit 2008: 170–74). In addition to misleading us about other aspects of the world, epistemic adversaries often mislead us about who they are (see Hancock 2007: 291–93). Thus, as Janet Alexander and Marsha Tate (1999: 11) point out, "if an author's name is given on a page, it should not be automatically assumed that this person is the actual author. In addition, it is often difficult to verify who, if anyone, has ultimate responsibility for publishing the material." Or as the New Yorker cartoon famously puts it, "on the Internet, nobody knows you're a dog." However, as I discuss below, it is possible for people to establish their identity online. And people who are not epistemic adversaries often want to have their identity known when they disseminate information. For instance, if my nephew really is stranded in Romania, he will definitely want me to know that his email came from him. A second potential concern is that identifying the source of a piece of information does not by itself tell us whether or not that source is an adversary. For instance, even if I am sure of the identity of a politician who is giving a speech on television, it still may be unclear to me whether he is lying. We often have independent reasons to believe that what a particular individual says is true, however.
For instance, she may have a good track record of providing accurate information (see Fallis 2004: 469–70; Magnus 2009: 79–80). Thus, if we also have good reason to believe that a particular individual is the source of a particular piece of information, we have good reason to believe that the information is true.8 For instance, since I know that my nephew is an honest person, it is fairly safe for me to conclude that he really is stranded in Romania if I can assure myself that this email really came from him.
A third potential concern is that being able to identify the source of a piece of information can have epistemic costs as well as epistemic benefits. Some accurate information will only be disseminated if it can be done anonymously (see Brennan and Pettit 2008: 185; Coady 2012: 154–55; Frost-Arnold 2014b). For instance, it might be too dangerous for a whistleblower to come forward if her true identity might be discovered. But following Descartes’s advice need not involve any epistemic costs in terms of lost communications. First, not all accurate information that someone might want to disseminate is so controversial that it will only be disseminated anonymously. As noted above, people who are not epistemic adversaries are often happy to have their identity known when they disseminate information. Second, when some accurate information is sufficiently controversial, there is no reason why it should not be disseminated anonymously. The cryptographic techniques for establishing identity online that I discuss below can be used on a voluntary basis. Having the ability to establish your identity online does not preclude refraining from using this ability in some circumstances. Indeed, people who do wish to communicate anonymously will probably want to use other cryptographic techniques in order to ensure that their communications are not connected to their identity (see, e.g., Chaum 1988).9
The Humean lesson in adversarial epistemology

In his work on miracles, Hume (1977 [1748]: 77) also did some adversarial epistemology. According to Hume, "when any one tells me, that he saw a dead man restored to life, I immediately consider with myself, whether it be more probable, that this person should either deceive or be deceived, or that the fact, which he relates, should really have happened." With this, Hume certainly leads us to ask the right sort of question for our purposes here. For instance, if an email says that the IT department needs me to send them my password, I should "consider with myself, whether it be more probable, that this person should either deceive or be deceived" or that the IT department really needs me to send them my password. Unfortunately, just as with Descartes's solution to the problem of the malicious demon, it is not immediately clear how Hume's suggestions for dealing with reports of miracles can help us with real-life epistemic adversaries. Hume quite reasonably recommends that we ignore (i.e., do not believe) such extremely implausible claims because deception is more likely than a violation of well-established laws of nature. But this advice does not help with the claims that we are concerned with here, which are not completely implausible. For instance, that the source of this email is a Nigerian prince who needs to get his money out of the country might be unlikely, but it would not violate any physical laws. Hume (1977 [1748]: 75), however, also gave some suggestions that can be applied to evaluating testimony in general and not just reports of miracles. He proposed that "we entertain a suspicion concerning any matter of fact, when the witnesses contradict each other; when they are but few, or of a doubtful character; when they have an interest in what they affirm; when they deliver their testimony with hesitation, or on the contrary, with too violent asseverations." These suggestions are very similar to standard lie detection techniques. Unlike the checklists, they are specifically designed for detecting epistemic adversaries.10 Since epistemic adversaries have a motivation to appear to be sincere, Hume wanted to find clues that it would be difficult for an adversary to cover up. For instance, an adversary would typically have to go to great lengths to ensure that no one else contradicted her false claim. In order to acquire knowledge despite the fact that there may be epistemic adversaries out there, we need to find things that are robust indicators of insincerity.11 Call this the Humean lesson in adversarial epistemology.
At first glance, though, there appear to be several shortcomings with Hume's suggestions for our purposes. First, these indicators of insincerity can only help us to determine who we should not trust online. For instance, if other websites contradict what a particular website says, that gives us a reason not to trust it. But we are not just interested in identifying epistemic adversaries so that we can avoid being misled by them. We also want to be able to determine that we are not dealing with an epistemic adversary so that we can acquire knowledge despite the fact that epistemic adversaries are out there. And it is not clear that the indicators that Hume suggests help us to determine who we should trust online. The fact that some feature is an indicator of insincerity does not mean that the lack of that feature is an indicator of sincerity (see Fallis and Frické 2002: 78). For instance, even if people who speak with hesitation are more likely to be insincere than sincere, it does not mean that people who speak without hesitation are more likely to be sincere than insincere. Even so, there are indicators of sincerity in the vicinity. In his work on social epistemology, Alvin Goldman (1999: 108–9) made some suggestions for assessing a "speaker's honesty" that are very similar to Hume's. He claims that "in face-to-face communication, the speaker's style of delivery can powerfully influence credibility: her tone of voice, her facial aspect, and her body language. Another element is the speaker's prior pattern of (verifiable) truth-telling. By telling truths in the past, a speaker establishes a reputation for honesty that can promote credibility on each new occasion. Another strategy is to inform hearers of how to authenticate one's report if they question it. Other witnesses of the reported event might be identified who could be consulted for confirmation." So, for instance, whereas the fact that other websites contradict what a particular website says gives us a reason not to trust it, the fact that other websites corroborate what the website says arguably gives us a reason to trust it. A second potential shortcoming is that most of these indicators are limited to face-to-face communication. Evaluating a "speaker's style of delivery," as both Hume and Goldman suggest that we do, requires close physical proximity (or at least a high-quality synchronous video connection). However, most online communications are asynchronous and at a distance. Online epistemic adversaries might be anywhere in the world (see Schneier 2000: 20). Even so, there are techniques for detecting insincerity that utilize just text (see Newman et al. 2003) and images (see Farid 2009). For instance, empirical studies have found that "compared to truth-tellers, liars used more words, were more expressive, non-immediate and informal and made more typographical errors" (Hancock 2007: 296). Such indicators could potentially be used to detect, or rule out, epistemic adversaries on the Internet. A third potential shortcoming is that it is not clear how difficult it is for an adversary to fake these indicators. Just as with the checklists, standard lie detection techniques are notoriously unreliable (see Hancock 2007: 296). Some "face-to-face" indicators are fairly robust. For instance, the psychologist Paul Ekman (2001: 132–33) claims that there are certain "micro-expressions" which appear briefly on a person's face even if she is trying to conceal her deceptive intent.
These indicators tend to be difficult to cover up because people do not have voluntary control of the muscles that lead to micro-expressions. However, it is not clear that the same can be said for indicators that might be used to detect, or rule out, epistemic adversaries on the Internet. As the biologist Jim Perry (1986: 177) points out, “when an evaluation technique has been developed and tested, it will become familiar to the evaluatees as well as the evaluators.” So, for instance, if an adversary reads in the research literature that liars are somewhat less likely to use first-person pronouns, she can easily take steps to use first-person pronouns more often in her writing. Thus, even if they start out correlated with insincerity, textual indicators will probably not remain correlated with insincerity for very long. 59
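To see how mechanical, and hence how easily gamed, such textual cues are, consider a toy version of a single indicator from this literature: the rate of first-person singular pronouns in a message. This is only an illustrative sketch; actual classifiers such as that of Newman et al. (2003) combine dozens of weighted linguistic features.

```python
import re

# One textual cue from the deception literature: the proportion of
# first-person singular pronouns in a message. An adversary who knows
# the cue can raise the number with trivial rewording.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON for w in words) / len(words)

candid = "I missed the bus, so I walked, and my feet still hurt."
evasive = "The bus was missed. Walking happened. Feet hurt."
print(first_person_rate(candid))   # relatively high
print(first_person_rate(evasive))  # relatively low, but trivially raised
```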
Even corroboration can potentially be faked online. If an adversary can create one fraudulent website (or email) that is convincing, it is probably feasible to create several other fraudulent websites (or emails) that appear to provide corroboration for it. For instance, in the “catfishing” attack against the Syrian rebels, hackers working for the Syrian government created fake pro-opposition websites to corroborate their claims (see Gallagher 2015). Also, as Peter Ludlow (2013) points out, private intelligence agencies have developed systems for the government “that allowed one user to control multiple online identities (‘sock puppets’) for commenting in social media spaces, thus giving the appearance of grass roots support.”

Even so, we can potentially address this problem by combining the Humean lesson with the Cartesian lesson. While robust indicators of sincerity for online communications are hard to come by, we may have better luck finding robust indicators of identity. In the remainder of this chapter, I argue that such indicators are available and I show how they can allow us to know that what an email or a website says is true.
Trustworthy sources
There is no guarantee that a website that looks like it was created by a certain person was really created by that person. But there are some sources on the Internet, such as the New York Times and WebMD, whose identity can be established with a high degree of confidence. The reason is that these institutions have the resources and motivation to police their online identities. When someone makes unauthorized use of their online identity, they are likely to find out about it and they can do something about it.

Anyone’s online identity can be taken over by an adversary. For instance, hackers have posted false stories on the New York Times website and on the Associated Press’s Twitter account (see Fiore and Francois 2002; Shapiro 2013). Also, hackers have created fake websites that “impersonate” the websites of reputable sources of information, such as Bloomberg News (see Fowler et al. 2001).12 However, since these institutions very much want to protect their brand, they institute security that makes it difficult for an adversary to fake their identity. Moreover, if such an adversary nevertheless succeeds in doing so, she is unlikely to succeed for very long as these institutions are highly motivated to correct the situation quickly. A short while ago, Taylor Swift’s Twitter account was hacked by a group known as the Lizard Squad (see Kastrenakes 2015). Within a few hours, however, Taylor was back in control of her Twitter account. If Taylor Swift can police her online identity in this way, then presumably an organization like the New York Times also has the resources to do so. Thus, the fact that a website looks like it is the New York Times website, or the WebMD website, is a fairly robust indicator that it really is.

As noted above, establishing the identity of a source is not sufficient by itself to establish the accuracy of what it says. But if we also have reason to trust a source, we can know that what it says is true. Since many institutions, such as the New York Times and WebMD, have a motivation to get their facts straight (as well as to police their online identity), we probably do have reason to trust them.13

However, it would be unfortunate if the New York Times and similar large institutions were our only real sources of knowledge on the Internet. For instance, we may be interested in topics that do not have sufficiently broad appeal to be reported on in the New York Times. Also, we may want to have access to multiple points of view from diverse sources (see Hume 1977 [1748]: 75; Coady 2012: 151). Thus, we would often like to get information from private individuals, such as my nephew, who are not as able to police their online identity. Can we ever trust online information that seems to be coming from such individuals?
For instance, if my nephew really were stranded in Romania in need of money, could I come to know this on the basis of an email message?

In fact, we often get online information that we can trust from private individuals. For instance, suppose that my wife sends me an email saying that she will bring KFC home for dinner. Admittedly, it would not be very difficult for an adversary to impersonate my wife. As the security expert Bruce Schneier (2000: 200) reminds us, “you do know that the ‘From’ field in your mail header can easily be forged?” But even so, I can be reasonably sure that the email is from my wife and that she actually is bringing KFC. No one has any serious motivation to mislead me into believing that my wife is bringing home KFC when she is not. But we would not want to be limited to getting online information on fairly trivial matters where there is no motivation to deceive. What happens when the stakes are higher? In other words, what happens when an adversary does have a motivation to impersonate someone?
Establishing identity online with secret information
Even when there is enough incentive to impersonate someone, it is still possible for private individuals to establish their identity online. We actually do this on a regular basis. For instance, you can prove to your bank’s website that you are who you say that you are. And you can do this despite the fact that criminals certainly have an incentive to get unauthorized access to your bank account. You typically establish your identity to the bank by entering a password and possibly by answering some security questions (such as “What is the name of your junior high school?” or “What is your mother’s maiden name?”). The basic idea here is that you produce some secret information that only you know. Thus, your bank’s website is able to conclude that you are who you say that you are. It is unlikely that an epistemic adversary would have this knowledge and, thus, unlikely that she could convincingly pretend to be you.

Unfortunately, there are several issues with this strategy as it stands. First, it is difficult to come up with secret information that an adversary cannot guess (and if an adversary can guess what the secret information is, then she can pretend to be you). Passwords, for instance, do not provide much security from motivated adversaries (see Schneier 2000: 136–37). In fact, they are in a sense necessarily insecure: If passwords are going to be easy enough for people to remember, they are going to be fairly easy for adversaries to guess. And security questions are not much better. A lot of personal information about each of us is available online (see Schneier 2000: 33). So, it is often possible for an adversary to learn the answers to our security questions just by searching the Internet.

Second, in order to establish your identity, you have to transmit the secret information to the bank. As a result, even if she cannot guess the secret information, an adversary might learn the secret information by eavesdropping on your communications with the bank. And it is particularly easy for an adversary to intercept messages that are sent over the Internet (see Schneier 2000: 178–79).

Third, instead of passively eavesdropping, an adversary might take active steps to steal the secret information. For instance, she might break into your house and/or your computer. Alternatively, she might pretend to be your bank in order to trick you into simply revealing the secret information (see Schneier 2000: 114–15). And regardless of how an adversary learns the secret information, she can use it to pretend to be you.

Finally, in order for the secret information to convince the bank that you are who you say you are, the bank must already know the secret information about you. In other words, you have to have previously shared the secret information with the bank and only with the bank.
With many people, such as our friends and family, we are able to share secret information at a secure location or via a secure communication channel. But we would like to be able to learn things online from all sorts of people whom we have never met and with whom we have not had the opportunity to share secrets.
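Fallis describes this shared-secret protocol abstractly. For readers who want a concrete picture, here is a minimal sketch in Python of how a site might enroll and later check such a secret; all names are illustrative, and note that the sketch leaves the guessability and transmission problems just discussed entirely unsolved.

```python
import hashlib
import hmac
import os

# Shared-secret authentication in miniature: the bank stores only a salted
# hash of the password and later checks a login attempt against it.

def enroll(password: str) -> tuple[bytes, bytes]:
    """Run once, over a secure channel, when the secret is first shared."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(attempt: str, salt: bytes, digest: bytes) -> bool:
    """Run on each later visit: does the visitor know the shared secret?"""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("123456", salt, digest))                        # False: wrong guess
```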
Establishing identity online with cryptography
Cryptography has not been discussed much in the Internet epistemology literature. However, my contention is that it can address all of the aforementioned issues with using secret information to establish identity online. Cryptography is traditionally associated with the sending of secret messages. But it can also be used to establish the identity of the source of a message (see Schneier 2000: 96–98). In fact, this is how establishing your identity to the bank actually works. But before I can show this, I need to explain some of the basics of cryptography.

Cryptography is intended to deal with the problem of trying to communicate over insecure communication channels. For instance, suppose that Alice wants to send a message to Bob and to be sure that only Bob can read it. But she is worried that someone, Eve, might get ahold of the message while it is in transit. For instance, if Alice speaks to Bob, Eve might overhear. Also, if Alice sends an email to Bob, Eve might intercept it. What can Alice do about this? Cryptography solves this problem by using sophisticated mathematics to convert the message into something that looks like gibberish, something that only the intended recipient can decipher. As a result, even if someone intercepts the encrypted message (or ciphertext), she will not be able to tell what the original message was.

Modern cryptographic algorithms use a long string of random digits, known as a cryptographic key, to encrypt and decrypt messages. Most of these algorithms have been vetted over many years by experts looking for some way to break the encryption without knowing the key (see Schneier 2000: 118). Indeed, for many algorithms, it has been proven that there is no way to read the original message without knowing the key.14 Thus, using cryptography is like locking a message in a box that can only be opened by someone who has the key. So, if Alice wants to send a secret message to Bob, it is as if she locks it in a box using a key that only she and Bob have copies of.

So, how might cryptography help you to establish your identity online? One possibility would be to send your password or the answers to your security questions to the bank in an encrypted form. In that case, an adversary will not be able to learn this secret information by eavesdropping on your communications with the bank. But this strategy does not address the other issues that I have discussed. For instance, this strategy does not make it any harder for an adversary to guess your password.

It turns out, though, that you do not need to actually send any secret information in order to establish your identity. Simply sending an encrypted message is enough to convince someone that a message came from you since it shows that you know the cryptographic key. For instance, Alice might write a message that says, “This is Alice. I am stranded in Romania and I need money.” Then she locks it in the box using the key that only Alice and Bob share and has the box delivered to Bob. Since Alice and Bob are the only people who have copies of the key, when Bob receives the box and unlocks the message, he can be sure that it came from Alice. Only Alice could have put this message in the locked box.
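The locked-box picture corresponds to symmetric authenticated encryption. Here is a minimal sketch of the Alice-and-Bob exchange, assuming Python's third-party cryptography package (Fallis names no particular tool; the package choice is mine):

```python
from cryptography.fernet import Fernet, InvalidToken

# The shared key: generated once and exchanged over some secure channel.
# It plays the role of the box key that only Alice and Bob hold.
key = Fernet.generate_key()

# Alice "locks" her message in the box. Fernet also authenticates the
# ciphertext, so a tampered or forged box simply fails to open.
token = Fernet(key).encrypt(b"This is Alice. I am stranded in Romania and I need money.")

# Bob, holding the same key, unlocks it. Because only he and Alice hold the
# key, successful decryption is evidence that the message came from Alice.
print(Fernet(key).decrypt(token))

# Eve, without the key, gets nothing: decryption raises InvalidToken.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("wrong key: the box will not open")
```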
This procedure fully addresses the first two issues with using secret information to establish identity online. First, unlike passwords and answers to security questions, a cryptographic key – a long string of random digits – is extremely difficult for an adversary to guess. Second, since the cryptographic key itself is not sent over an insecure channel, this procedure is invulnerable to eavesdroppers. An adversary cannot learn the cryptographic key just by getting ahold of an encrypted message.

In addition, this procedure partially addresses the third issue. Since the cryptographic key itself is not sent to the person that you want to convince of your identity, there is no risk of revealing the cryptographic key to an imposter. Admittedly, though, this procedure does not help with the problem of outright theft of secret information. But this problem can presumably be dealt with in the traditional ways that we ensure the security of ourselves and our property (intellectual and otherwise). And just as we can have reason to trust someone to tell the truth, we can have reason to trust her to keep her cryptographic key secure. Moreover, this task is much easier for a private individual than is policing one’s online identity (e.g., making sure that no one hacks into your Twitter account or creates a fake website for you).

However, as it stands, this procedure does not address the final issue with using secret information to establish identity online. In order for someone to convince me of her identity in this way, the two of us must have already securely shared the cryptographic key. But as suggested above, this requirement severely limits the number of people that we can identify and learn things from online. Fortunately, though, even more sophisticated cryptography is available that allows us to solve what is known as the “key distribution problem” (see Schneier 2000: 89–90). Indeed, it turns out that we can be convinced of someone’s identity online even if we have never shared any secret information with her.

So far, I have been discussing symmetric cryptography. This is where there is one key that both encrypts and decrypts messages. In order to use asymmetric or public key cryptography, Alice creates two distinct keys. When one key is used to encrypt a message, the other key can be used to decrypt it (see Schneier 2000: 94–96). Alice makes one of the two keys public (say, by posting it on her website) and she keeps the other key private. Then if Bob or anyone else wants to send her a secret message, they encrypt it with her public key. Since the value of the private key cannot be deduced from the value of the public key, Alice is the only person who knows what her private key is. Thus, she is the only person who can decrypt the message from Bob and read it. Alice and Bob never have to get together to share any secret information.

This procedure ensures that Bob’s message will remain secret, that it cannot be read by eavesdroppers. But unlike with symmetric cryptography, Bob cannot establish his identity to Alice just by sending an encrypted message. Alice cannot be sure who the message came from since everyone has access to her public key. However, public key cryptography can be used to convince someone that a message came from you. Doing so simply requires reversing the aforementioned procedure for sending secret messages. Alice can digitally sign a message by encrypting it with her private key (see Schneier 2000: 96–98). When Bob is able to decrypt it with her public key, he can be sure that the message came from Alice since she is the only person who knows what her private key is.15 In other words, Alice has succeeded in establishing her identity to Bob online.
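Fallis follows the classic textbook picture on which signing is “encrypting with the private key”; modern libraries expose the same idea as a dedicated sign/verify pair. A minimal sketch of Alice’s digital signature, again assuming the Python cryptography package:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Alice's key pair: the private key never leaves her machine; the public
# key can be posted anywhere (her website, a key server, an email footer).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"This is Alice. I am stranded in Romania and I need money."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Only the holder of the private key can produce this signature...
signature = private_key.sign(message, pss, hashes.SHA256())

# ...but anyone holding the public key can check it. verify() raises
# InvalidSignature if the message was altered or signed with another key.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature checks out: signed with Alice's private key")
except InvalidSignature:
    print("forged or altered")
```

On this sketch, as in Fallis’s account, everything turns on the public key genuinely being Alice’s.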
So, if Bob also has reason to think that Alice is trustworthy, then he has reason to think that the information that he has just received is accurate. Thus, this procedure addresses the remaining issue with using secret information to establish identity online.16

Unfortunately, though, checking a digital signature is not always enough by itself for us to be confident of who a message is coming from. We check a digital signature by trying to decrypt it with the putative source’s public key. But this just tells us that the message was encrypted with the associated private key.17 If Alice’s website claims that K is her public key, how do we know that K is really Alice’s public key? Maybe an adversary just created a fake website made to look like it belongs to Alice and put his own public key on it.
If we want to use digital signatures to establish identity online, they typically have to be supplemented with something else. A digital certificate is an electronic document signed by a trusted entity known as a certificate authority. (Symantec and Comodo are examples of certificate authorities.) This electronic document says that a particular public key belongs to a particular person.18 Basically, on the say-so of the certificate authority we can be confident that a particular public key really does belong to Alice.

At first glance, it may not look like digital certificates represent much progress toward establishing the identity of the source of an email or a website. In order to assure ourselves that a particular public key belongs to a particular identity, we have to assure ourselves that a different public key belongs to a particular certificate authority. In addition, we have to assure ourselves that this certificate authority is a reliable source of information about people’s identities. In other words, it looks like we have traded one epistemological problem for two new epistemological problems. However, the idea behind digital certificates is that the certificate authority is a “trusted third party” (see Schneier 2000: 226–27). That is, while you may not know the Nigerian prince or what his public key is, you do know this certificate authority and what its public key is. Moreover, you trust this certificate authority to reliably check the identities of other people.

Unfortunately, certificate authorities have not always lived up to the ideal of being a “trusted third party” (see Leavitt 2011: 17–18). They have not always been sufficiently diligent about maintaining the security of their computer systems. This has allowed hackers to break in and issue fake certificates. Also, certificate authorities have not always been careful enough when checking people’s identities. As a result, on several occasions, adversaries have been able to get ahold of digital certificates that falsely claim that a particular public key belongs to a particular person.

Nevertheless, the idea of using digital certificates to establish identity online can potentially work. Basically, it just requires that certificate authorities come to have the same sort of properties that the New York Times has. As noted above, we can trust what we read on the New York Times website because the New York Times has two important properties. First, the New York Times is able to police its online identity and it works hard to do so. Second, the New York Times is motivated to get its facts straight and it works hard to do so. In a similar vein, we can trust what a digital certificate says if the certificate authority that signed it has two analogous properties. First, the certificate authority would need to work hard to police its online identity. That is, we need to be sure that a digital certificate really has been issued by the certificate authority itself. Second, the certificate authority would need to work hard to get its facts straight. That is, we need to be sure that the certificate authority will only tie a public key to a real-life identity when it is sure that that public key really belongs to that person.
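This trusted-third-party machinery is exercised every time a browser opens an HTTPS connection: the operating system ships with the public keys of certificate authorities it trusts, and the TLS handshake checks the server’s certificate chain against them. A minimal sketch using Python’s standard library (the hostname is just an example):

```python
import socket
import ssl

# The default context loads the certificate-authority public keys that your
# operating system already trusts: Fallis's "trusted third parties."
context = ssl.create_default_context()

hostname = "www.nytimes.com"  # illustrative; any HTTPS site would do
with socket.create_connection((hostname, 443)) as raw:
    # The TLS handshake checks that the server's certificate chains up to a
    # trusted authority and that it names this hostname. A forged or
    # self-signed certificate raises ssl.SSLCertVerificationError instead.
    with context.wrap_socket(raw, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("issued to:", cert["subject"])
        print("issued by:", cert["issuer"])
        print("expires:", cert["notAfter"])
```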
Conclusion
The Internet is full of epistemic adversaries. As a result, it is not immediately clear how we can know that what an email or a website says is true. Although there may be other strategies for addressing this problem, work by Descartes and Hume in adversarial epistemology suggests one possible solution – namely, we can often eliminate the worry that we are dealing with an epistemic adversary online by finding robust indicators of exactly who we are dealing with. Moreover, as I have argued, modern cryptography can provide us with such indicators.19

It is important to note, though, that identifying the source of a piece of information is not always sufficient to secure knowledge on the Internet.
In many cases, we might not trust a particular source even if we are quite sure who it is. But often (as with the New York Times and my nephew), if we can identify who we are dealing with, we can trust what she says to be accurate because we already know her to be trustworthy. Moreover, once we are able to determine who we are dealing with, we are in a much better position to determine whether or not she is trustworthy. For instance, since we can tell that different messages came from the same source, we can check a source’s past track record. Also, we can assure ourselves that corroborating sources are not sock puppets.20
Notes
1 Strictly speaking, an epistemic adversary might use force rather than guile to interfere with what we know about the world. For instance, a repressive government might be quite open about the fact that it is censoring books or blocking access to the Internet (see Richtel 2011). However, the focus of this chapter will be on those epistemic adversaries that do engage in deceit.
2 For a recent literature review, see Fahy et al. (2014). Khazaal et al. (2012) did find that a high score on the DISCERN instrument (www.discern.org.uk/discern_instrument.php) is correlated with accuracy. However, this instrument only works for the evaluation of a specific type of website (health). Also, it requires checking for a large number of website features (16 to be precise).
3 P. D. Magnus (2009: 79–83) discusses several general criteria for evaluating “claims from single-author websites, personal blogs, and Internet forum posts.” These include considering “authority, plausible style, plausible content, calibration, and sampling.” But as with the checklists themselves, it is not clear how effective these considerations are unless we already know that we are not dealing with a highly motivated epistemic adversary (see the section on Descartes above).
4 For instance, cases of people intentionally removing accurate information from, and intentionally adding inaccurate information to, Wikipedia have been discussed very briefly (see Keen 2007: 18; Fallis 2008: 1665; Simon 2010: 349).
5 Liars sometimes do show up as characters in the epistemological literature. For instance, Nogot fakes having a Ford (see Feldman 2003: 26). But deception is typically not essential to these thought experiments.
6 More recently, epistemologists of testimony (e.g., Faulkner 2007; Lackey 2008: 53–56) have had to take account of the possibility of liars. Also, Karen Frost-Arnold (2014a) has offered an epistemic evaluation of “imposters” and “tricksters.”
7 Descartes wanted certainty. But even if we only seek strong, but not indefeasible, justification for believing what an email or a website says (which is probably the best we can hope for), the Cartesian lesson is still important. The importance of determining identity has also been emphasized in recent research on online deception (see Hancock 2007: 290).
8 Moreover, if a source knows that you know who she is, that by itself can give you further reason to trust her (see, e.g., Brennan and Pettit 2008: 177). Many sources have “such credit and reputation in the eyes of mankind, as to have a great deal to lose in case of their being detected in any falsehood” (Hume 1977 [1748]: 78).
9 In addition, even if she does not want her online communications connected to her offline identity, someone writing under a pseudonym may want to use the cryptographic techniques that I discuss below in order to ensure that her communications are connected to her pseudonymous online identity.
10 Some of these indicators do work for identifying incompetence as well as insincerity.
11 Indicators that are not robust may initially be correlated with insincerity, but they are easy for an adversary to cover up. Indicators (aka signals) that are difficult to fake have been studied extensively by economists (e.g., Spence 1973) and biologists (e.g., Zahavi 1975). For a discussion of how these theories apply to verifying the accuracy of information on the Internet, see Fallis (2004: 474–76).
12 In other cases, in order to disseminate “fake news” or fake investment advice, epistemic adversaries have created websites that are intended to look like a reputable source, but not like any particular reputable source, such as the New York Times or Bloomberg News (see Fowler et al. 2001; Ohlheiser 2016). The problem here is not (or not only) that we cannot tell who the source of this information is. The main problem is that we do not have good reason to believe that this source is trustworthy (see the section on Descartes above).
13 The online institution that has been most discussed in the Internet epistemology literature is Wikipedia (see, e.g., Fallis 2008; Magnus 2009; Simon 2010: 348–52; Coady 2012: 169–71). And despite Magnus’s (2009: 84) suggestion that claims made in a Wikipedia entry are “more like ‘claims made in New York’ than ‘claims made in the New York Times,’” Wikipedia probably does share these two critical properties with the New York Times and WebMD. First, we have reason to think that Wikipedia is fairly reliable as compared with other sources of encyclopedic information (see Fallis 2008: 1666–67). Second, we can be pretty sure that we are reading an entry from the real Wikipedia without any sophisticated identification procedures. Admittedly, we do not know the identities of the individual people who contributed to a Wikipedia entry. But as with encyclopedia entries in general, the identities of individual authors are not particularly relevant to assessing the quality of the information (see Fallis 2008: 1667; Simon 2010: 348; Coady 2012: 171). Of course, Wikipedia itself does have to worry about the identities of contributors in order to ensure that people are not editing their own entries (see Fallis 2008: 1668; Simon 2010: 349).
14 These proofs often make a few mathematical assumptions that have not themselves been proven. But mathematicians and cryptographers would be very surprised if these assumptions turned out to be false.
15 Of course, anyone else who gets ahold of the message will also be able to determine that it came from Alice. If the message is private or controversial, Alice may want to make sure that only Bob will know that the message came from her. In that case, she will have to digitally sign it with her private key and then encrypt the result with Bob’s public key.
16 This procedure still addresses the first three issues as well. In fact, since only one person (rather than two) ever needs to know the private key, public key cryptography does even better than symmetric cryptography with respect to the security issue. Remember Ben Franklin’s saying that “three may keep a secret, if two of them are dead.”
17 Admittedly, in some exceptional cases, this could be enough to ensure that we are not dealing with an epistemic adversary. Even if we do not know for sure what her offline identity is, we might know that whoever controls the associated private key has a reputation for accuracy. In a similar vein, we do not have to verify someone’s offline identity in order to decide whether or not to accept a Bitcoin transaction. We just need to verify that sufficient funds are associated with the public key that was used to digitally sign the transaction.
18 Digital certificates are most commonly used by big companies, such as Amazon or Google. But digital certificates can also potentially be used by private individuals. For instance, my nephew could conceivably go to a certificate authority and get a digital certificate that shows that a particular public key really does belong to him.
19 Modern cryptography does not guarantee that someone is who she says that she is. But unlike Descartes, we are not seeking certain knowledge about who sent a message.
20 For extremely helpful feedback, I would like to thank James Chase, David Coady, Andrew Dillon, Tony Doyle, Melanie Fagan, Bob Goldberg, Kay Mathiesen, Sille Obelitz Søe, Jacob Stegenga, Dustin Stokes, Dan Zelinski, the fellows of the Tanner Humanities Center, an anonymous referee, and audiences at the University of Copenhagen and at the University of Utah.
References
Alexander, J. E., and Tate, M. A. (1999). Web Wisdom. Mahwah, NJ: Lawrence Erlbaum Associates.
Brennan, G., and Pettit, P. (2008). “Esteem, Identifiability, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy. Cambridge: Cambridge University Press.
Chaum, D. (1988). “The Dining Cryptographers Problem: Unconditional sender and recipient untraceability.” Journal of Cryptology, 1: 65–75.
Coady, D. (2012). What to Believe Now. Malden, MA: Wiley-Blackwell.
Descartes, R. (1996 [1641]). Meditations on First Philosophy. Trans. J. Cottingham. Cambridge: Cambridge University Press.
Ekman, P. (2001). Telling Lies. New York: W. W. Norton.
Fahy, E., Hardikar, R., Fox, A., and Mackay, S. (2014). “Quality of Patient Health Information on the Internet: Reviewing a complex and evolving landscape.” Australasian Medical Journal, 7: 24–28.
Fallis, D. (2004). “On Verifying the Accuracy of Information: Philosophical perspectives.” Library Trends, 52: 463–87.
Fallis, D. (2008). “Toward an Epistemology of Wikipedia.” Journal of the American Society for Information Science and Technology, 59: 1662–74.
Fallis, D., and Frické, M. (2002). “Indicators of Accuracy of Consumer Health Information on the Internet: A study of indicators relating to information for managing fever in children in the home.” Journal of the American Medical Informatics Association, 9: 73–79.
Farid, H. (2009). “Digital Doctoring: Can we trust photographs?” in B. Harrington (ed.), Deception. Stanford, CA: Stanford University Press.
Faulkner, P. (2007). “What is Wrong with Lying?” Philosophy and Phenomenological Research, 75: 535–57.
Feldman, R. (2003). Epistemology. Upper Saddle River, NJ: Prentice-Hall.
Fiore, F., and Francois, J. (2002). “Unwitting Collaborators, Part 12: Disinformation changing web site contents.” Informit. Retrieved from www.informit.com/articles/article.aspx?p=29255.
Fowler, B., Franklin, C., and Hyde, R. (2001). “Internet Securities Fraud: Old trick, new medium.” Duke Law and Technology Review. Retrieved from http://scholarship.law.duke.edu/dltr/vol1/iss1/6/.
Frost-Arnold, K. (2014a). “Imposters, Tricksters, and Trustworthiness as an Epistemic Virtue.” Hypatia, 29: 790–807.
Frost-Arnold, K. (2014b). “Trustworthiness and Truth: The epistemic pitfalls of Internet accountability.” Episteme, 11: 63–81.
Gallagher, S. (2015). “Syrian Rebels Lured into Malware Honeypot Sites through ‘Sexy’ Online Chats.” Ars Technica. Retrieved from http://arstechnica.com/information-technology/2015/02/syrian-rebels-lured-into-malware-honeypot-sites-through-sexy-online-chats/.
Goldman, A. I. (1999). Knowledge in a Social World. New York: Oxford University Press.
Hancock, J. T. (2007). “Digital Deception: When, where and how people lie online,” in A. N. Joinson, K. Y. A. McKenna, T. Postmes, and U.-D. Reips (eds.), Oxford Handbook of Internet Psychology. Oxford: Oxford University Press, pp. 289–301.
Hume, D. (1977 [1748]). An Enquiry Concerning Human Understanding. Indianapolis, IN: Hackett.
James, W. (1979 [1896]). The Will to Believe and Other Essays in Popular Philosophy. Cambridge, MA: Harvard University Press.
Kastrenakes, J. (2015). “Someone Just Hijacked Taylor Swift’s Twitter and Instagram Accounts.” The Verge. Retrieved from www.theverge.com/2015/1/27/7921965/taylor-swifts-twitter-account-hacked.
Keen, A. (2007). Cult of the Amateur. New York: Doubleday.
Keyes, R. (2004). The Post-Truth Era. New York: St. Martin’s Press.
Khazaal, Y., Chatton, A., Zullino, D., and Khan, R. (2012). “HON Label and DISCERN as Content Quality Indicators of Health-Related Websites.” Psychiatric Quarterly, 83: 15–27.
Kunst, H., Groot, D., Latthe, P. M., Latthe, M., and Khan, K. S. (2002). “Accuracy of Information on Apparently Credible Websites: Survey of five common health topics.” British Medical Journal, 324: 581–82.
Lackey, J. (2008). Learning from Words. Oxford: Oxford University Press.
Leavitt, N. (2011). “Internet Security under Attack: The undermining of digital certificates.” Computer, 44(12): 17–20.
Ludlow, P. (2013). “The Real War on Reality.” The New York Times. Retrieved from http://opinionator.blogs.nytimes.com/2013/06/14/the-real-war-on-reality/.
Magnus, P. D. (2009). “On Trusting Wikipedia.” Episteme, 6: 74–90.
Newman, M. L., Pennebaker, J. W., Berry, D. S., and Richards, J. M. (2003). “Lying Words: Predicting deception from linguistic styles.” Personality and Social Psychology Bulletin, 29: 665–75.
Ohlheiser, A. (2016). “What Was Fake on the Internet This Election: Voting online, Benghazi emails.” The Washington Post. Retrieved from www.washingtonpost.com/news/the-intersect/wp/2016/10/26/what-was-fake-on-the-internet-this-election-voting-online-benghazi-emails/.
Perry, J. A. (1986). “Titular Colonicity: Further developments.” Journal of the American Society for Information Science, 37: 177–78.
Pettit, P. (2008). “Trust, Reliance, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy. Cambridge: Cambridge University Press.
Richtel, M. (2011). “Egypt Cuts Off Most Internet and Cell Service.” The New York Times. Retrieved from www.nytimes.com/2011/01/29/technology/internet/29cutoff.html.
Schneier, B. (2000). Secrets and Lies. New York: John Wiley & Sons.
Shapiro, R. (2013). “AP Twitter Account Hacked.” The Huffington Post. Retrieved from www.huffingtonpost.com/2013/04/23/ap-twitter-hacked_n_3140277.html.
Simon, J. (2010). “The Entanglement of Trust and Knowledge on the Web.” Ethics and Information Technology, 12: 343–55.
Simpson, T. W. (2012). “Evaluating Google as an Epistemic Tool.” Metaphilosophy, 43: 426–45.
Spence, A. M. (1973). “Job Market Signaling.” Quarterly Journal of Economics, 87: 355–74.
Thagard, P. (2001). “Internet Epistemology: Contributions of new information technologies to scientific research,” in K. Crowley, C. D. Schunn, and T. Okada (eds.), Designing for Science. Mahwah, NJ: Erlbaum.
Zahavi, A. (1975). “Mate Selection: A selection for a handicap.” Journal of Theoretical Biology, 53: 205–14.
PART III
Politics
6
JOHN STUART MILL ON FREE SPEECH
Daniel Halliday and Helen McCabe
1. Introduction
John Stuart Mill’s work on freedom of expression has had enormous influence on philosophical research and has long been a pillar of the undergraduate curriculum. But though the attention paid to Mill on this topic is large, its focus (at least when Mill is taught to students) tends to be restricted to the text of On Liberty. Our aim in this chapter is to provide a sense of how Mill’s wider body of writings contributes to the strength and ongoing relevance of his defense of free speech. While On Liberty will always be the central text, Mill’s other writings deserve not to be overlooked. They provide much that both clarifies and supports the agenda in On Liberty, particularly with regard to the epistemic elements of Mill’s project. Important here is Mill’s preoccupation with the role of freedom, and freedom of expression in particular, as facilitating the growth and diffusion of knowledge within a community, and in turn facilitating greater human development. In short, Mill’s defense of free speech is very much about how knowledge spreads and why it is valuable. We hope that this chapter will be useful to students wishing to grapple with Mill’s arguments but who have not had an opportunity to read beyond On Liberty, and to those professional scholars not already entrenched in Mill’s thinking.

We begin by noting that Mill’s approach to applied philosophy shifted between an emphasis on breadth and depth. Compared to Mill’s longer works, On Liberty very much prioritizes breadth. Mill’s overall goal is to understand “the nature and limits of the power which can be legitimately exercised by society over the individual” (1977a: 217). The book is famous for, among other things, Mill’s promotion of the value of individuality, his worries about the “despotism of custom” (1977a: 272), and his skepticism about the assumption of infallibility among legislators. These contribute to Mill’s defense of his well-known harm principle. Clearly such topics continue to have significance across a wide range of rather different contexts. Nevertheless, the style of argument in On Liberty is rather schematic: The book’s main arguments are expressed with the help of generalities and abstractions, with cases and examples described only quite fleetingly. This is no scholarly oversight – schematic arguments are simply necessary if one wants to fit an ambitious project into a short book. In light of these facts, it’s fair to say that On Liberty is applied philosophy constrained by the fact that it is, first and foremost, a piece of popular philosophy: It needed to be short and readable, and this is reflected in the sort of text that it is.1
None of this is to degrade the status given to On Liberty, only to suggest that Mill’s style of argument depended on how much space he had at his disposal. In several of his other works, far more sustained attention is paid to specific aspects of the socioeconomic and political status quo. These arguments lack the schematic minimalism of On Liberty: They seek to diagnose more specific injustices or shortcomings, based on fairly extensive descriptions of the relevant cases, while recommending tailored reforms. Overall, attention to Mill’s other works leads to three ways in which his project can be better appraised. In particular, we will find in Mill’s other texts:

(1) strengthening of the arguments found in On Liberty;
(2) some novel arguments that cannot be found in On Liberty;
(3) observations that prompt some further questions about the importance of free speech that are not encouraged by the text of On Liberty.

As a guide, here’s how this entry is structured: Section 2 provides a brief recap of the ground covered in On Liberty. Section 3 draws on various other writings to give a sense of how On Liberty’s defense is augmented, particularly with reference to Mill’s emphasis on the concept of the free exchange of ideas (as we propose to call it) as opposed to mere expression. Mill saw freedom of speech as valuable in epistemic, discursive terms. Much of this will involve attention to Mill’s complementary views on education. Section 4 provides an assessment of this defense in light of prevailing criticisms aimed mainly at On Liberty, and new criticisms that emerge in light of the position we reconstruct. These new criticisms are inspired by some contemporary examples whose significance we think Mill would have appreciated. Section 5 concludes.

We would like to make one brief qualification before continuing. Like many influential figures in the history of political philosophy, Mill is subject to a certain amount of controversy as to which contemporary school of thought gets to ‘claim’ him. Thus, there is disagreement over whether Mill was a classical liberal, and thus saw economic liberty as having importance on a par with other personal freedoms, or a progressive who might thereby have assigned secondary or defeasible status to economic freedoms, or something else. Such questions have substance, and we accept that the claims we defend here may implicitly take a position on such debates.2 But we want to remain largely neutral as to which (if any) position in contemporary philosophy gets to confer prototype status on Mill’s project. As co-authors, we do not necessarily agree with each other when it comes to such questions. The fact that this entry has still been written is evidence that there is more to studying Mill than the rather tribal question of whose side he would pick if faced with twenty-first century disagreements. Much of this has to do with working out where his views have twenty-first century relevance, regardless of how they are best labeled.
2. On Liberty’s schematic agenda
On Liberty’s defense of free speech centers on how it aids humanity’s search for the truth and, hence, greater realization of human potential. This commences with Chapter 1’s negative account of the dangers of censorship. This is followed by Chapter 2’s positive account of how free expression aids the identification of true opinions while protecting accepted truths against becoming “dead dogmas.” Chapter 3 focuses on the value of individuality and “the permanent interests of man as a progressive being” as the basis for a justification of freedom of action regulated only by the harm principle (1977a: 224). Chapter 4 draws some general conclusions, and Chapter 5 deals (briefly) with some “Applications,” though its claims are still couched in quite general language. Overall, free expression allows everyone in society to develop authentic opinions which really do affect and guide their actions as “living truths” (1977a: 243).
As we have said, Mill uses few examples in On Liberty. Best known may be his ‘corn-dealer’ case in Chapter 3 (1977a: 260), the man crossing the dangerous bridge (1977a: 294), and a variety of possible harmful actions in Chapter 5 (e.g., trade; government activity; drunkenness; giving advice; and gambling) (1977a: 292–310). Each of these cases is a realistic hypothetical meant to represent a broad category of actual cases. Mill himself speaks of his examples in Chapter 5 as a “few observations … on questions of detail,” which “are designed to illustrate the principles, rather than to follow them out to their consequences” (1977a: 292). In this sense Chapter 5’s “Applications” remain quite abstract and generalized, as applications go.

One important idea, made clearer in other writings, is implicit in On Liberty. This has to do with the way in which freedom of expression becomes easier to defend when construed in a relatively narrow way.3 On a broad conception, often presupposed by everyday appeals to ‘free speech’, freedom of expression covers anything at all that might be called an ‘expression’. A narrower conception is restricted to the expression of claims that may be true or false, and which therefore might feature in some sort of constructive exchange or disagreement in ways that other expressions typically cannot. As Richard Vernon has pointed out (1996: 622–23), Mill’s language in his famous defense errs strongly toward the expression of linguistic content aimed at discursive participation. Mill does not use the word ‘expression’ much at all, and Chapter 2’s title invokes the liberty “of discussion.” It is the successful exchange of ideas that allows free speech to fulfill an epistemic function from which it gains much of its value. In a similar way, Mill’s remarks about the importance of rigor and a culture of mutual criticism suggest that free speech is (most) valuable when it works in ways that stimulate genuine constructive exchanges. The point about rigor is a good one – we are all irritated by people who take criticism as an affront and simply assert that they are ‘entitled to their opinion’, as if mere criticism were full-blown censorship. It is here, too, that Mill explains that it is “not on the impassioned partisan,” but “on the calmer and more disinterested bystander, that this collision of opinions works its salutary effect” (1977a: 257). But although the text strongly suggests the importance of free expression construed in this relatively narrow way, On Liberty contains no extended account of the mechanisms by which free expression actually brings about a discursive society, and exactly where in society the improvements might be most felt or most urgently needed.

The sort of free expression worth defending is that which lends itself to the free exchange of ideas. The philosophical distinctiveness of this position is worth making explicit. Broadly speaking, defense of free expression on epistemic grounds contrasts with arguments for free speech that, though liberal or even Millian in some important respects, rely on premises that are not explicitly epistemic, having to do with the value of autonomy, tolerance, or some related interest in being able to express one’s position.4 It is also worth emphasizing that the free exchange of ideas can encompass different sorts of content. It includes explicitly propositional content, of which even the false (or almost entirely false) variety might “supply the remainder of the truth” or serve to preserve the truth from becoming “dogma” (1977a: 258). It also includes apparently non-propositional content, as in music and pictorial form, which is sometimes subject to censorship but can be part of an ‘idea’ about the pursuit of a certain conception of the good life.

Granted, there may prove to be some forms of expression that can make little contribution of any sort. We are not suggesting that Mill would conclude that such content is for that reason generally fair game for censors. Apart from anything else, it would be very hard to design legislation that could separate the two types of expression with any sort of reliability, and here one should keep in mind Mill’s counsel against the assumption of infallibility.
And even if the most hostile forms of expression contain no truth, they might help stimulate a culture in which decent people take on the responsibility for calling out those who cause offence, rather than relying on the state to do it for them.5 We will come back to this problem in section 4.

On Liberty’s defense of free speech, then, has a negative and a positive aspect. The negative aspect has to do with the ills of censorship, which should be used sparingly (guided, perhaps, by the harm principle). The positive aspect emphasizes the role played by the free expression of ideas (in the ‘narrow’ sense) in facilitating the growth of knowledge in a community, “quite the chief ingredient of individual and social progress,” and the vital part played by free expression (in the broader sense) in developing individuality, “one of the principal ingredients of human happiness” (1977a: 261). But in On Liberty, the negative defense is more emphatic, and it’s easy to gain the impression that it’s the more important. While there are elements of the positive defense in On Liberty, it is expanded upon in Mill’s other works (whereas the negative defense is not). Accordingly, it is through examining these other texts that the epistemic elements in Mill’s project become more appreciable.
3. Expression versus exchange: Mill’s defense of the discursive value of free speech
With the possible exception of Utilitarianism, which is not much concerned with free speech, Mill’s other works besides On Liberty are either narrower in scope or greater in length. Several of them include substantive discussion of free speech and closely related topics. One such related topic is the problem of freedom in general and when it may be permissibly restricted by some interference by government. In short, Mill was a liberal but not a libertarian.6

Mill was always cautious of too much power being given to the government, in part because there was always a risk of this power being abused, particularly of it being wielded in the interests of established minorities; and in part because he thought ‘active’ citizens (who were much better than ‘passive’ ones) were more likely to be created by a culture of self-help than reliance on government. Mill’s worries about the perils of a passive citizenry are perhaps most evident in the rejection of benevolent despotism that is laid out in Considerations on Representative Government (1977b: 399–412), as well as in various other writings (1977a: 262–63; 1965: 938). But this is not to say that Mill was opposed to all state interference. The final chapter of the Principles of Political Economy is an extended account of why the importance of individual freedom does not ultimately favor an entirely ‘hands-off’ approach by government. It is fair to say that, for Mill, though non-interference was the default position, departure from this default could occur quite readily and for a wide range of reasons. Thus, though Mill speaks of “laissez-faire [as] the general rule” (1965: 944–47), he immediately follows this with twenty-five pages of “large exceptions to laissez-faire” (1965: 947–71). Indeed, he attacks the traditional case for laissez-faire as nonsensical, and inconsistent with the ideals it purports to protect and preserve (1965: 800, 936–44).7

Mill also had a nuanced view on the nature of government interference. He believed it could come by way of restrictive legislation, but also by state provision of goods and services (1965: 937, 947–50, 954–71). In other words, the state ‘interferes’ when it prohibits some sort of activity or exchange, but also when it provides opportunities for such activities or exchange. Mill also drew a parallel distinction between ‘authoritative’ interferences, which cannot be avoided or opted out of, and ‘non-authoritative’ interferences, particularly regarding the provision of goods, where participation is optional (1965: 936–37). A good example of both comes from his views on education: Parents ought to be legally obligated to educate their children – an unavoidable ‘authoritative’ interference by government (legitimized by the harm principle).
But though the state ought to provide schools for them to send their children to, such interferences count as non-authoritative: The state ought neither to hold a monopoly on the provision of schools, nor on the accreditation/licensing of educators, and thus no one is forced to use state-provided education services (1965: 948–50).

All of this is rather more elaborate than the bald claim at the start of On Liberty that freedom should be constrained only when there is a risk of harm to others. Free speech, like other freedoms, should be understood with reference to this rather sophisticated way of understanding what freedom is and how its exercise might be regulated.8 Mill’s views on freedom and interference support the view that, in general, state interference is often a matter of helping along some sort of voluntary activity by individual citizens. Interference might be needed as an enabling condition for an eventual (and more desirable) state of affairs in which citizens are doing things for themselves.9 This needs to be kept in mind when reconstructing Mill’s positive defense of free speech, particularly regarding its emphasis on how the free exchange of ideas can help humans advance as “progressive beings.” Drawing further on what Mill says in the Principles, we will now focus on two more specific ways in which free speech was supposed to improve society:

1. the role played by free expression and individuality in the development of effective formal education;
2. the role played by free expression, particularly of the press, in improving the intellectual capacities and political power of the working poor.

Both of these themes have a clear epistemic significance: Each concerns the growth and use of knowledge within a population, and the role that free speech can play in facilitating that growth. There is also a degree of political significance attached to the second theme.

Mill evidently believed that education could help advance the economy.10 He claimed that “a thing not yet so well understood and recognized, is the economical value of the general diffusion of intelligence among the people” (1965: 107). He also believed that education equipped people with greater ability to gain pleasure (and better kinds of pleasure) outside of work hours. The uneducated worker had a strong tendency to concentrate on alcohol consumption (1965: 108–9) or procreation (1965: 159). In Mill’s day, these were considered two important causes of the abject poverty endured by large swathes of the population. Though it might do so very indirectly, a right to free expression was an important aid (according to Mill) in fighting this. In Mill’s day, many of the Victorian social elite believed the poor to be incorrigible and that poverty would never be erased. Mill’s view shows a more intelligent optimism.11

Mill thought it vital that everyone receive at the very least a basic schooling, or formal education. In On Liberty, Mill rejected a state monopoly on provision of education or licensing of teachers; he was also anxious about a state-prescribed curriculum, something he regarded as a “mere contrivance for molding people to be exactly like one another” (1977a: 302). Mill believed that formal education must internalize freedom of expression – both in the sense of curriculum planning and the character of the experience gained by students (1977a: 302–4). His aversion to monopoly was an affirmation of competition – a view expressed more generally in Principles (1965: 794).
The provision of education should have no great barriers to entry, so as to ensure a wide variety of experiments in curriculum design and the style of its delivery.12 This is not to say, however, that the state can’t perform well when it is required to join in with the competition. Mill supported the provision of state-funded schools and even thought they might be exemplars of best practice for other education providers (1965: 947–50; 1977a: 302–3). Favoring competition does not mean favoring privatization, nor allowing consumer preferences to be the final arbiter of which products win or lose.
Indeed, Principles makes clear that elementary education may be one of those goods not best left to the market, due to the incompetency of consumers when it comes to identifying the best product (1965: 947–50).

All of these claims are up for debate. Questions about the public and private side of educational provision, for example, remain contentious. Such controversies aside, Mill made his contribution largely by emphasizing a basically liberal view about the point of education. What was more important to Mill than who provided the education was how students learned. When done properly, education can facilitate the free development of individuality, a good in itself (or as an integral part of utility) (1977a: 260–75), rather than a mere instrument for securing greater economic output.13 It bears emphasizing that individuality is something people ultimately must cultivate for themselves. The state might help by providing an environment conducive to this. Dale Miller, however, represents Mill’s view well when stressing that “individuality could never be distributed by society.”14

Mill believed an education could only really equip its beneficiary with the ability to make a contribution if it had stimulated the capacity for independent thought. He scorned education that promoted rote learning, and thereby ‘dead dogma,’ homogeneity, and mediocrity. Some emphatic remarks occur in his Autobiography:

Most boys or youths who have had much knowledge drilled into them, have their mental capacities not strengthened, but overlaid by it. They are crammed with mere facts, and with the opinions or phrases of other people, and these are accepted as a substitute for the power to form opinions of their own. And thus, the sons of eminent fathers, who have spared no pains in their education, so often grow up mere parroters of what they have learnt, incapable of using their minds except in the furrows traced for them. (Mill 1981: 33–35)

We suspect that readers who have taught in universities will appreciate these sentiments. We hope that students who’ve suffered under such conditions can take Mill’s words as an encouragement to pursue higher education partly for the sake of the emancipatory experience it can provide after a childhood spent undergoing much learning by rote. Certainly, Mill makes a sound point about the problems that result from schooling designed to get students over the next hurdle at the expense of preparing them for the territory waiting on the other side. Private education is often priced on the basis that some school or tuition service is especially good at delivering good exam results or test scores. The tendency of markets to select such services may represent one of Mill’s counterexamples to the principle of laissez-faire, namely, that where a consumer is not the best judge of what the market supplies (1965: 947–50). The point is that while parents may prioritize their child’s academic success, and purchase services accordingly, this may result in educational outcomes that are far from optimal.15

Such sentiments are revisited in Principles. Mill says, with subtler tone but similar substance, that “if there is a first principle in intellectual education, it is this – that the discipline which does good to the mind is that in which the mind is active, not that in which it is passive” (1965: 281), again echoing claims made in On Liberty (1977a: 262–63).

Let us now examine the second theme – the relation between free speech and the advancement of the laboring poor.
Principles retains On Liberty's schematic affirmation of individuality's normative value (1965: 209). But there is a new emphasis on the increasing independence of the poor. Mill writes of how they are "coming out of their leading-strings" and becoming dissatisfied with paternalistic or arbitrary rule (1965: 763). This gives some content to the schematic emphasis, in On Liberty, on independence and personal sovereignty, and to Mill's strong anti-paternalism.
Not all education is of the formal sort that occurs within schools. Mill associated informal education with the more free-wheeling production of pamphlets, newspapers, and other vehicles for the dispersal of ideas and information. In Mill's day, formal education of any advanced sort was a luxury reserved for those from families who could afford it. Though Mill did call for policies that would realize "an effective national education for the children of the laboring class" (1965: 374), a realistic view at the time was that much of the population would not get educated far beyond basic literacy and numeracy. Mill thus spoke of those "who have been taught to read, but have received little other intellectual education" (1965: 861). Free expression of ideas was vital for the dissemination of information to those who had no other access to these sorts of educational goods. Here are three representative passages from the Principles:

There is a spontaneous education going on in the minds of the multitude, which may be greatly accelerated and improved by artificial aids. The instruction obtained from newspapers and political tracts may not be the most solid kind of instruction, but it is an immense improvement upon none at all.
Mill 1965: 763

Ideas of equality are daily spreading more widely among the poorer classes, and can no longer be checked by anything short of the entire suppression of printed discussion and even of freedom of speech.
Mill 1965: 767

Newspapers are the source of nearly all the general information which they [the uneducated poor] possess, and of nearly all their acquaintance with the ideas and topics current among mankind; and an interest is more easily excited in newspapers, than in books or other more recondite sources of instructions.
Mill 1965: 861

The basic point is that any society without freedom of the press denies the underprivileged a means of learning about the realities of their social context, and of communicating with each other in ways that may help them mobilize so as to change it. An economy without freedom of expression is not just one that will suffer from a less productive workforce, but also one that will retain oppressive hierarchies akin to feudalism, particularly through lack of political participation among the masses.

Such optimism about the intellectual emancipation of the working poor gives greater content to On Liberty's various claims about humans as "progressive beings," in large part because this optimism is articulated with reference to the empirical status quo and how it might develop. Indeed, Mill probably held the view that human development is only possible within communities not riven by an arbitrarily hierarchical internal structure.16 In addition, the argument of Principles suggests that a further evil of censorship is its role in promoting the oppression of the poor by keeping them ignorant and divided. Again, this goes beyond On Liberty's more abstract worries about fallibility and dogma, which do not make any reference to social hierarchy and inequality. But it does more than merely strengthen the schematic argument in On Liberty: The Mill of Principles was also making what might now be identified as an egalitarian argument, one that defends free speech on grounds that it may help make society a fairer place. (Though, we should note, the emphasis is on pursuing equality through removing unjustly hierarchical differences in status, rather than simply relying on the redistribution of wealth and property.)
In short, freedom of expression yields many social benefits when and because it plays an integral role in the emergence and survival of a discursive society. It necessitates at least universal mandatory basic education, and it also encourages, and makes possible, informal education throughout our lives. Mill's examples concern the contemporary benefits of a free press in educating the working poor, but this does not mean it would be permissible to silence the press once everyone had a basic education. What's significant about writings other than On Liberty is how revealing they are as to the depth of Mill's thinking on why freedom of speech could be defended along such lines.
4. A contemporary assessment of Mill's Applications

We will not offer an historical assessment of whether Mill was right about the trends of his day. Suffice it to say that the lot of the working poor had begun to improve by the end of Mill's life and continued to do so thereafter. Much of this had to do with improvements in education, as well as increased political participation and representation of the working classes, at least in Britain. How much of this can be attributed to the rise of individual freedoms and how much to other factors is another question. We are most interested in examining how Mill's ideas might fare in the contemporary climate. We will do this, first, by seeing how the arguments from Principles and elsewhere help Mill handle some influential criticisms aimed at the schematic argument of On Liberty.

His positive account faces two problems relating to its reliance on ideas about the power of free exchange in maintaining an epistemically well-functioning community. These are a 'naiveté' objection, that Mill does not properly anticipate the 'invasion' of communities by extremism; and an objection about the problem of fragmentation within the community. Both problems draw force from recent real-world trends. A third problem is more external. It grants that Mill was right about the case for protecting the free expression of claims whose content helps provide inputs into useful debates, but it asks what we are to make of other 'expressions' whose discursive credentials are weaker, and where there might appear to be good reasons to censor.

It has been said that Mill's position is unduly optimistic as to how things will actually play out when people are free to say what they like. Free expression has led, it seems, to some dreadful political outcomes that might have been prevented had powers of censorship been available beforehand. Roger Crisp (1997: 194–95) objects that "Mill's faith in human rationality is excessive. He underrates the human capacity to believe and act on the patently absurd" and cites the example of Nazi propaganda in support of this claim.

We think that the force of this objection may weaken in light of the sort of defense given in the Principles. That is, Crisp's criticism may presuppose that Mill regarded rationality as already in situ, ready to be set free by the absence of censorship. If the criticism is aimed only at the argument made in On Liberty, whose text may encourage this interpretation, we might agree with Crisp. But what's made clearer in the Principles is that rationality might increase gradually, over time, as conditions of free exchange improve the quality of debate and stimulate the mental powers of citizens. A properly discursive society might be protected against 'invasion' in two ways. Firstly, it is possible that 'irrational' political views gain currency partly because of enduring conformism and intolerance of individuality and difference.17 Secondly, the proponents of extremist views would, in a Millian discursive society, need to air these views in a context in which they can be properly, and easily, criticized. Citizens of this discursive society would thus not only be able to hear both sides of a debate, but also hear extremist views tackled in debate. They would also have the requisite capacities of judgment which might make them less susceptible to such views (when based on misinformation and untruth) in the first place, or certainly more able to choose between
opposing views. This, too, might involve easily accessible objective information regarding the topic under debate. As Mill says in his Inaugural Address to the University of St. Andrews:

Unless an elementary knowledge of scientific truths is diffused among the public, they never know what is certain and what is not, or who are entitled to speak with authority and who are not: and they either have no faith at all in the testimony of science, or are the ready dupes of charlatans and imposters … we all require the ability to judge between the conflicting opinions that are offered to us as vital truths.
Mill 1984: 233

By "scientific truths," Mill meant "laws of the world." This plausibly includes not just the natural sciences but also (at least) the social sciences. Mill's claim could be directed, today, at climate change deniers, Islamophobes, and anyone else who takes advantage of an audience uninformed about the subject matter and unable to distinguish argument from rhetoric. The rise of the Nazis may not be a counterexample to Mill's defense, but rather an illustration of what can happen to a society that has not been able to develop in line with Mill's idea of humanity as a progressive species.

In venturing this defense, we are not wholly confident as to its total success against Crisp's sort of criticism. We do find it plausible that the discursive strength of a society evolves over time, and Mill's views on the laboring classes suggest that he saw things this way. It is also plausible that demagoguery struggles to take hold when its audience is well informed about relevant facts. In addition, totalitarian regimes routinely seek to purge society of disinterested intellectuals who might publicly question what the regime is doing. However, the worry may remain that the quality of debate evolves sufficiently slowly that 'immunity' to rabble-rousing politics takes some time to secure and may never be fully obtained. After all, scientific knowledge may diffuse relatively slowly compared to the spread of fascist ideology. Generally, human evolution (genetic or cultural) tends to lag environmental change and cannot react easily to sudden shocks.

This may prove a significant concession for a Millian outlook to make, because environmental change has apparently accelerated greatly where human communication (and hence expression) is concerned. It is hard to deny the assessment recently offered by the historian Timothy Garton-Ash (2017: 137–38), who claims that "all forms of expression can potentially reach farther and last longer than they did even 20 years ago, let alone in the era of John Stuart Mill." This summary reflects a variety of plausible generalizations: Cartoons that get published in a French magazine may lead to people killing each other in Pakistan. A misguided post or non-consensual photograph posted on social media may be literally impossible to purge from the Internet. The fact that someone can say something online without revealing their identity or location probably accounts for the prevalence of very nasty and threatening content. All of this is cause for concern and suggests that Mill's optimism must now be qualified. But Garton-Ash, who documents these examples alongside a great many others, still identifies as a Millian. He concludes that the answer to such trends cannot be a resort to censorship. State enforcement of legislation cannot hope to cope with such an internationally diffuse and intangible web of communication.
Our best hope, he says, is to rely on humans finding ways to become more self-regulating in their communications. Given that the solution is sought in individual freedom and the potential for humans to adapt, and not in the powers of state coercion, this is still very much a Millian view. And Mill's optimism does not fly wholly in the face of the evidence. Whatever the hazards of twenty-first century communication, there are also benefits. Growth in access to online content is making it difficult for authoritarian states to prevent free expression from circulating among their people. Again, the value is largely epistemic: State propaganda is
being undermined as people 'wise up', and social media has played a role in facilitating civil disobedience and forms of protest. More generally, the removal of media censorship is seen by some social scientists as an important element of a "virtuous circle" of economic and political institutions, whose mutual reinforcement maintains prosperity.18 And again, censorship is not the only sort of state intervention that might be used to solve problems associated with free speech. This goes for the twenty-first century as well as the nineteenth.

The danger Crisp points to comes from public pronouncements, particularly those liable to reach previously unpersuaded audiences and encourage them to adopt fascist views, above all if they are allowed to go unchallenged. But as Mill's 'corn-dealer' example shows, his defense of free speech takes into account the changing dangers of context, and his underlying principle of exchange also means his defense of free speech is not necessarily applicable to, for instance, demagogic speeches, the spread of misinformation through one-sided advertising, and the like. Mill's arguments might suggest that the letters of fascists ought to be allowed to appear in newspapers, but they would support neither a fascist rally through a predominantly Jewish or black neighborhood nor the prohibition of non-fascist media outlets. This may be best thought of in terms of restrictions on where and when certain content can be expressed, rather than as any restriction on content per se. Similarly, Mill's position allows for serious consideration of the potential harm of legitimizing extremist views if they are invited to participate in debates funded by public-service broadcasters, though he would want that harm to be weighed against the potential value of such ideas being debated and challenged in a public arena and thus either being roundly defeated, or re-affirming other people's anti-fascism as a "living truth." What is more, even if Mill did support such public engagement with extremism, it would have to be engagement. That is, he provides us with good grounds for 'coercing' demagogues into answering their opponents properly, rather than being able to simply choose who interviews them and what they are allowed to ask about, or making speeches which no one can challenge, let alone forcibly removing from the arena those who might potentially protest. Mill would probably be appalled at the ease with which politicians (extremist or otherwise) are able to avoid answering difficult questions in televised interviews or 'debates'.19

In a similar vein, David Brink suggests that Mill's arguments support what he calls "deliberation-enhancing forms of censorship" (2014: 166) such as campaign-finance restrictions (2014: 169–70). Of course, such examples of 'censorship' only really qualify as such if one adopts the view of free speech currently maintained by the U.S. Supreme Court, to which Mill's view is much superior.20 On Mill's account, such regulation would form part of the maintenance of a system of genuine free exchange, as opposed to one in which the wealthiest can buy up all the air time.
Other sorts of regulation might be justified on Millian grounds: publicly funded 'fact-checking' organizations (with the requisite funding for publishing and disseminating their results); better dissemination of impartial information; and the cultivation of a more demanding public attitude regarding what people are willing to believe, and the grounds on which they are willing to vote for legislators and for changes in statutes or public policy. There is no principled reason why such measures cannot be part of what enables the culture of communication to become increasingly 'self-regulating' over time. It bears emphasizing that these are delicate points. To the extent that private wealth has come to exert disproportionate influence over political campaigning, there are genuine worries about restrictions on speech handing similarly disproportionate influence to the state. Much depends on the possibility of setting up institutions that can remain independent when politicians might wish to bend them to some chosen agenda.
The emergence of a genuinely discursive society is bound to be a slow process, and probably it will not occur wholly spontaneously. It thus requires the design and implementation of appropriate institutions and systems of exchange. It might also be true that even more censorship than Mill appears to desire could be justifiable on the route to such a society, though we should note both Mill's argument that such almost-paternalist censorship helps create passive citizens who would themselves undermine the sustainability of such a system of exchange, and the previously mentioned types of "censorship" that Mill's arguments do support. Certainly, On Liberty does not provide us with a detailed scheme for creating or implementing such a society, only a set of arguments as to why it would be a good thing, and a few examples of what actions might, or might not, be restricted in such a society; but it does provide us with principles upon which to ground such a framework.

The problem of fragmentation is somewhat different. Mill wrote in the nineteenth century, when, as he noted, newspapers were a relatively new thing. In the twenty-first century, newspapers are not exactly the egalitarian forum for disseminating ideas that Mill saw as the savior of the working classes. The general worry here is that the effective exchange of ideas can be stymied when different perspectives are not brought into the same arena of debate. 'Echo chambers'21 arise because the sheer bulk of free expression leads people only to listen to or read opinions which reaffirm their own view of the world – a view which may not be grounded in reality, understanding, or reflection. What is more, this prevents people from understanding their opponents' position, not only making it harder to enter into meaningful conversation, persuasion, and debate, but leading to disparagement, disdain, and distrust, even where the opinions of experts are concerned. This in turn may lead us to be more willing to consider forcing the right opinion (as we see it) on those who disagree with us, undermining the very foundations of democracy, liberal rights and, in the end, these 'forced' opinions themselves. If such fragmentation takes hold within a political community, the problem is that there is little or no exchange of ideas going on, just reiteration of the same ideas within circles in which they are accepted: an endless preaching to the converted members of one's own fragment. To be fair, Mill did appreciate this sort of possibility, as suggested by some remarks from the Inaugural Address:

Look at a youth who has never been out of his family circle; he never dreams of any other opinions or ways of thinking than those he has been bred up in; or, if he has heard of any such, attributes them to some moral defect, or inferiority of nature or education.
Mill 1984: 226

The observation is astute but is not accompanied by any suggestion as to what a solution might be. Mill did not anticipate how the massive proliferation of different media outlets would make fragmentation extremely difficult to prevent. Censorship per se would not be the answer. The idea of 'forcing' adults to listen to alternative views sits very uneasily with Mill's anti-paternalism.
Indeed, one of the more venerable insights of the liberal tradition is that while coercion might make a person change their behavior, it is much less equipped to make them change (much less improve) their mind.22 Suffice it to say that any solution to these problems requires more than laissez-faire but less than censorship, though it is not clear precisely what. One solution might involve breaking up the concentration of ownership and control. This is not, of course, to obstruct the content of views, but rather to oppose the disproportionate power with which some parties can broadcast them. In this way, the solution to fragmentation might
overlap with attempts to suppress domination. Public-service broadcasting is another potential aid, though not a complete solution (for, again, the problem of echo chambers is not that other views are not being expressed, but that individuals refuse to listen to them). Another idea would be to implement more of Mill's ideas about education and how it ought to prepare children for adult life and responsibilities as citizens in a discursive society. This has interesting implications for just how hands-off the state could be, on Mill's view, regarding how and what children were taught outside of state schools. Perhaps Mill's thought always was that parents could pick and choose (much of) the content and context of their child's education but could not allow the child's "permanent interests as a progressive being" to be retarded or damaged; and it certainly seems plausible to think we have an interest in a discursive society, and in being able to participate in it properly. Overall, then, with regard to his 'positive' defense of free speech, Mill was perhaps optimistic. But his view has the resources to defend interventions of the sort needed to make his positive defense not only more plausible, but more practicable.

Let us turn to the criticism leveled against Mill's 'negative' arguments against censorship in On Liberty, one which was raised by Mill's contemporary (James Fitzjames Stephen) but which also has contemporary relevance. This is the idea that when governments censor opinions, it is not because they are sure they are false (as Mill seems to assume) but – at least in some instances – because they know them to be true, and also know them to be harmful. Consider, for example, publications of fairly technical information that instructs readers about the construction of homemade weapons. Many jurisdictions continue to ban or heavily restrict such publications, on the grounds that the information can be put to dangerous use. Another sort of case is represented by things like violent pornography and video games. Here, the case for censorship is founded on a somewhat different concern about what sort of attitude might be conveyed by such 'expressions'.

These cases are not really threatening to Mill's project. Again, we should see an important difference between free expression per se, and the free exchange of ideas. Mill is assumed to argue against censoring such expression, and then the debate rages over whether or not he is right to do so in any given example. But we are not sure this is the right reading of Mill in these cases: recipes for chemical weapons, violent video games, and pornography are typically not contributions (apart from, perhaps, in some extremely rare cases) to public discourse or an exchange of ideas. The Anarchist Cookbook may have truth-apt content, but such content does not serve as stimulus for any important debate. One of the arguments against pornography is that it precludes debate over what is a 'good' or 'normal' sexual relationship. Mill makes plain that one of the most important reasons we should allow for free speech is to permit the free exchange of ideas (and hence the improvement of opinions): but he also says that both speech and actions are to be constrained by the harm principle, and both elements are, we think, of importance in this case. Proponents of (or apologists for) freedom to acquire such material often couch their defense in terms of free expression.
Such claims are often left unpacked, but one thought might be that experimenting with chemicals and publishing one's findings, or consuming video games and pornography, are "experiments in living." This suggestion strikes us as less plausible when applied to forms depicting violence (and it might also be problematic if it is solely about consumption and not 'active participation', which Mill wants to promote). In particular, it is hard to see how such material, produced merely to be consumed, is articulating the sort of content that might feature in a discursive exchange with its audience. To put the point crudely, one may object to depictions of sex and violence, but it is not clear that this involves disagreeing with (rather than disapproving of) any content thereby expressed. More crudely still,
it's not immediately clear what anyone ever learned from violent pornography or video games, at least nothing comparable to the valuable lesson that might be learned by working through the flawed reasoning or misleading rhetoric of a fascist pamphlet. At the same time, there is a lingering sense that depictions of sex and violence often express some evaluative or normative content. On the other hand, the free expression of content that many find offensive may help a small, widely dispersed set of individuals share a lifestyle that they have in common.23 The fact that others find this alien or repulsive may not count for anything, at least in the absence of proper evidence that the content in question leads to some kind of harm, for example that it genuinely reinforces the oppression of some group.

Our remarks here are just meant to be suggestive: This is, admittedly, a complex issue, partly because good evidence for the allegedly grievous effects of such speech is hard to come by.24 But some have suggested that censorship may be compatible with Mill's overall position, particularly if Mill's views link expression to discursive improvement in ways similar to what we've suggested here, which involves considerations of 'harm' to others, as well as of the 'free development of [one's own] individuality'.25 But as Joshua Cohen has pointed out, it might generally be better to seek ways of combatting the oppressive background conditions that some published content reinforces, rather than just restricting the content itself (1993: 249). While there is wide debate on what constitutes 'harm' on Mill's view, one basic thought is that harm occurs whenever there is some negative impact on our interests as a progressive being, or violation of any rights plausibly founded on those interests. Speech that violates, undermines, or damages these interests can be legitimately censored on Mill's account – and this might include publishing on the Internet the recipe for poisonous gases (security being one of our interests). This rationale is distinct from the claim that the free exchange of ideas ought to be protected (or even promoted), and thus there are (at least) two ways in which Mill might have the materials to mount a defense against such objections to his negative arguments against censorship.
5. Conclusions

Mill's defense of freedom of expression in On Liberty is rightly famous, and has generated a good deal of debate, from his own day to ours. Our understanding of it is deepened when we see it in the context of his other works. Our understanding is also deepened when we understand Mill as supporting not a simple 'free expression' of opinions, nor a 'marketplace of ideas' regulated by a laissez-faire, hands-off kind of tolerance, but the 'free exchange' of ideas. This in itself requires more interference or action by government – both non-authoritative and authoritative – than is usually supposed. These include not only mandating that every child receive a sufficient education, but also providing schools and paying for the education of at least some. It may also have implications for campaign finance, public-service broadcasting, media ownership, and censorship of extremist or offensive views, as well as throwing some light on why 'echo chambers' are problematical and, perhaps, even dangerous.

We have emphasized what we describe as the 'epistemic' elements of Mill's defense of free speech. Broadly speaking, these seek to highlight the ways in which free expression allows knowledge to both grow and disperse, and they include Mill's claims about how knowledge and education are important elements of human development. We do not claim that this exhausts any plausible, or indeed any Millian, case for free speech. But we think that the epistemic character of Mill's defense is what is most readily appreciated by looking beyond the text of On Liberty. As we have said, On Liberty will remain the central text, and there will always be ways of defending free speech that do not have such epistemic preoccupations. Our goal here has been mainly to show how much mileage Mill saw in such an epistemic strategy.
Acknowledgments

The authors thank John Thrasher for very helpful comments on an earlier draft. Daniel Halliday's work on this chapter was supported by a research grant from the Center for Ethics and Education (grant #G40371/553300). He gratefully acknowledges the Center's support.
Notes

1 We follow Daniel Jacobson (2000: fn 26) in treating On Liberty as a popular work, in ways that might help explain otherwise puzzling aspects of the text, such as occasional lack of analytic rigor. Similar remarks could be made about the text of Utilitarianism, which first appeared as a series of magazine articles, and is another very short text. We also note David Brink's suggestion (2014: ix, 285) that On Liberty can be distinguished from Mill's "more applied" works.
2 For interpretations of Mill as a classical liberal, which emphasize his intellectual debts to the likes of Adam Smith and David Ricardo, see Gaus (2017) and Riley (1998). For an interpretation that distinguishes Mill from his classical liberal predecessors, see Freeman (2001), Baum (2007), Claeys (1987, 2013), Sarvasy (1984, 1985), and McCabe (2015). For accounts that are less explicit about identifying a box in which Mill ought to be put, see Hollander (1985), Ten (1998), Kurer (1992), Stafford (1998), Miller (2010: Chap. 8), and Rosen (2013: 210–11). There are also contemporary authors who are skeptical about Mill's defense of free speech on either interpretation (see, for example, van Mill 2017: Chap. 2).
3 The remarks in this paragraph are indebted to a similar distinction that David Brink (2014: 152–56) draws between Mill's "truth-tracking" and "deliberative" rationales, and which is worked out in more detail than anything we say here.
4 An influential version of what we call the non-epistemic (though not necessarily anti-Millian) defense is Scanlon (1972). A contemporary defense of free speech that identifies as Millian in a more explicitly epistemic sense, and which distinguishes itself from such an approach, is Braddon-Mitchell and West (2004). Something of a hybrid is Cohen (1993), in which the case for free speech rests on a variety of interests, including but not limited to our "deliberative" interest in inhabiting a culture that has ways of "finding out which ways of life are supported by the strongest reasons" (1993: 228). This draws on what Cohen describes as "the Millian thesis that favorable deliberative conditions require a diversity of messages" (1993: 247), but Cohen's case also draws on interests that are not ultimately deliberative in his Millian sense.
5 This view is sometimes given qualified defense by commentators who identify as broadly Millian when it comes to free speech. Timothy Garton-Ash believes that part of the response to hate speech is to become more thick-skinned and to deal with hate speech ourselves rather than through legislation: "An essential and defining feature of a mature, liberal, democracy is that, so far as humanly possible, it replaces external constraint with self-restraint" (2017: 229). The point is delicate and Garton-Ash is at pains not to exaggerate it, but it has some substance. And its Millian credentials are real – see the remarks below on state protection as a source of passivity among the citizenry, and in On Liberty, where Mill says that "[i]t is obvious … that law and authority have no business with restraining" "the employment of vituperative language" either by the powerful or the powerless (though the injustice of using it is worse on the side of the powerful).
Rather, individual opinion "ought in every instance, to determine its verdict by the circumstances of the individual case: condemning everyone, on whichever side of the argument he places himself, in whose modes of advocacy either want of candour, or malignity, or bigotry, or intolerance … manifest themselves … This is the real morality of public discussion" (1977a: 259).
6 For a useful disambiguation of these terms, see Freeman (2001).
7 A similar perspective is found in the Chapters on Socialism, particularly the closing remarks on why the idea of private property is "not fixed but variable" (1967: 749–53).
8 Here we follow Jill Gordon (1997), who has more to say on this point.
9 This permits a somewhat charitable interpretation of On Liberty's claim that "despotism is a legitimate mode of government in dealing with barbarians" (1977a: 224). While it is easy to see this remark as betraying a racist or imperialist attitude, it is worth attending to the more expansive remarks in Representative Government. Here, Mill makes the point that societies first need to evolve a respect for rule of law before democratic freedoms can flourish. This is now a respectable position about social evolution (see Fukuyama 2011: 14–19). On Mill's view of social progress, and what he means by 'savage' and 'barbarian' stages, see also Habibi (2017: 521).
10 We are setting aside lots of detail about Mill's overall position on the value of education in a liberal society. For good general appraisals, see Donner (2007) and Gutmann (1982).
11 See especially the discussion of the Malthusian outlook (Mill 1965: 154–59).
12 Mill did not think that good curriculum design could draw on no generalizations at all, only that experimentation is required to help identify superior curricula and to help them adapt over time. This position was defended in his Inaugural Address to the University of St. Andrews, from which we quote below.
13 Eventually, the free development of individuality might lead people away from the pursuit of profit (for instance, in alternative 'hippy' lifestyles) and, potentially more dangerously for the sustainability of capitalism, might well lead to workers demanding fair sharing of profits or even revolutionary change. Mill had nuanced views about the case for socialism (see Dale Miller 2003, 2010: Chap. 8; Jonathan Riley 1996; and Helen McCabe 2015).
14 Miller (2010: 134).
15 For a more contemporary discussion of this aspect of markets in education, see Halliday (2016).
16 Though we think this is the thought that guides Mill in the chapter on the futurity of the laboring classes in Principles, Mill develops this view more fully in On the Subjection of Women. This may be the text in which Mill's condemnation of social hierarchy is most emphatic, but little is made in Subjection of any connection to freedom of expression. Here we have been helped by the very comprehensive discussion and exegesis of Subjection in Morales (1996).
17 Ryan Muldoon (2015) argues that Mill's defense of individuality against the "despotism of custom" turns out to be well supported by contemporary work on norms and social inertia. This goes some way toward vindicating Mill's convictions about the role of custom in maintaining stasis and mediocrity. For more general discussions of Mill on individuality, see Anderson (1991) and Miller (2010: Chap. 7).
18 We borrow this term from Acemoglu and Robinson (2012: esp. Chap. 11). As these authors explain, countries with repressive governments and high levels of poverty tend to lack virtuous circles.
19 Mill himself did not shy away from openly answering difficult questions put to him by the crowd when standing for election in Westminster.
20 We are suppressing a thorny issue here that depends on careful attention to the political significance of corporate speech. For more discussion, see Mahoney (2013).
21 We borrow this term from Cass Sunstein (2001: 74, 205), who raises precisely the sort of fragmentation worry that we describe here.
22 Here we have in mind the views put forward by John Locke in his Letter Concerning Toleration, particularly the remark that "It is only light and evidence that can work a change in men's opinions; which light can in no manner proceed from corporal sufferings, or any other outward penalties" (2003: 395). Mill makes little reference to Locke in his political writings. But there is a clear continuity of thought regarding the way in which mental improvement requires some degree of free thinking. Indeed, Mill was also concerned about the 'outward penalties' of shame, loss of 'face', and offended amour propre, which might prevent the 'light' of evidence from working on men's minds if rational arguments were presented to them in an antagonistic, patronizing, or flat-footedly corrective way (see McCabe 2014).
23 Joseph Raz has claimed that free expression can "reassure those whose ways of life are being portrayed that they are not alone" (1991: 311). Raz does not mention Mill anywhere in this defense but speaks of the "validating function" of free expression. We take his point to be Millian insofar as it acknowledges the role of free expression in allowing certain minority lifestyles to flourish. The point is also epistemic insofar as validation of one's way of life can be aided by knowing that like-minded people exist.
24 These cases are often associated with harms that obtain in some structural fashion: any single piece of pornography, for instance, might not do any specific harm to any specific woman (presuming, of course, that anyone participating in making it has freely consented and was not harmed in the process). But this does not undermine the hypothesis that pornography in general (or, at least, the kind of pornography most usually produced, which objectifies women, treating them as mere objects for the sexual gratification of men, and, even when not specifically portraying the violence, humiliation, or degradation of women in order to engender male sexual pleasure, still considers only the male perspective, treating the male orgasm as the supreme purpose of sexual activity) helps perpetuate a patriarchal culture which harms women, not only through obvious repercussions of sexual violence, rape, and objectification, but in more insidious ways regarding men's treatment, assessment, and view of women.
25 According to Vernon (1996), Mill's position might allow him to support the censorship of violent pornography, which may have little discursive input and, at least in some forms, reinforce gender hierarchies that Mill saw as unjust.
References

Works by John Stuart Mill

Mill, J. S. (1965 [1848–1871]). Principles of Political Economy, with Some of Their Applications to Social Philosophy, Collected Works II and III. Toronto: University of Toronto Press.
Mill, J. S. (1967 [1879]). Chapters on Socialism, Collected Works V. Toronto: University of Toronto Press.
Mill, J. S. (1977a [1859]). On Liberty, Collected Works XVIII. Toronto: University of Toronto Press.
Mill, J. S. (1977b [1865]). Considerations on Representative Government, Collected Works XIX. Toronto: University of Toronto Press.
Mill, J. S. (1981 [1873]). Autobiography, Collected Works I. Toronto: University of Toronto Press.
Mill, J. S. (1984 [1868]). Inaugural Address Delivered to the University of St Andrews, Collected Works XXI. Toronto: University of Toronto Press.
Other works

Acemoglu, D., and Robinson, J. (2012). Why Nations Fail: The origins of power, prosperity, and poverty. New York: Random House.
Anderson, E. (1991). "John Stuart Mill and Experiments in Living." Ethics, 102(1): 4–26.
Baum, B. (2007). "J. S. Mill and Liberal Socialism," in N. Urbinati and A. Zakaras (eds.), J. S. Mill's Political Thought: A bicentennial reassessment. New York: Cambridge University Press.
Braddon-Mitchell, D., and West, C. (2004). "What is Free Speech?" Journal of Political Philosophy, 12(4): 437–60.
Brink, D. (2014). Mill's Progressive Principles. New York: Oxford University Press.
Claeys, G. (1987). "Justice, Independence, and Industrial Democracy: The development of John Stuart Mill's views on socialism." Journal of Politics, 49(1): 122–47.
Claeys, G. (2013). Mill and Paternalism. Cambridge: Cambridge University Press.
Cohen, J. (1993). "Freedom of Expression." Philosophy & Public Affairs, 22(3): 207–63.
Crisp, R. (1997). Mill on Utilitarianism. London: Routledge.
Donner, W. (2007). "John Stuart Mill on Education and Democracy," in N. Urbinati and A. Zakaras (eds.), J. S. Mill's Political Thought: A bicentennial reassessment. New York: Cambridge University Press.
Freeman, S. (2001). "Illiberal Libertarians: Why libertarianism is not a liberal view." Philosophy & Public Affairs, 30(2): 105–51.
Fukuyama, F. (2011). The Origins of Political Order: From prehuman times to the French Revolution. London: Profile Books.
Garton-Ash, T. (2017). Free Speech: Ten principles for a connected world. London: Atlantic Books.
Gaus, G. (2017). "Mill's Normative Economics," in C. Macleod and D. Miller (eds.), A Companion to Mill. Malden, MA: Wiley-Blackwell.
Gordon, J. (1997). "John Stuart Mill and the 'marketplace of ideas'." Social Theory and Practice, 23(2): 235–49.
Gutmann, A. (1982). "What Is the Point of Going to School?" in A. Sen and B. Williams (eds.), Utilitarianism and Beyond. New York: Cambridge University Press.
Habibi, D. (2017). "Mill on Colonialism," in C. Macleod and D. Miller (eds.), A Companion to Mill. Malden, MA: Wiley-Blackwell.
Halliday, D. (2016). "Private Education, Positional Goods, and the Arms Race Problem." Politics, Philosophy & Economics, 15(2): 150–69.
Hollander, S. (1985). Economics of John Stuart Mill: Vol. 1, Theory and method; Vol. 2, Political economy. Oxford: Blackwell.
Hollander, S. (2015). John Stuart Mill: Political economist. Hackensack, NJ: World Scientific Publishing.
Jacobson, D. (2000). "Mill on Liberty, Speech, and the Free Society." Philosophy & Public Affairs, 29(3): 276–309.
Kurer, O. (1992). "J. S. Mill and Utopian Socialism." The Economic Record, 68(202): 222–32.
Locke, J. (2003). Political Writings. Indianapolis, IN: Hackett.
Mahoney, J. (2013). "Democratic Equality and Corporate Political Speech." Public Affairs Quarterly, 27(2): 137–56.
McCabe, H. (2014). "John Stuart Mill's Philosophy of Persuasion." Informal Logic, 34(1): 38–61.
McCabe, H. (2015). "John Stuart Mill's Analysis of Capitalism and the Road to Socialism," in C. Harrison (ed.), A New Social Question: Capitalism, socialism and utopia. Cambridge: Cambridge Scholars Publishing.
Miller, D. (2003). "Mill's Socialism." Politics, Philosophy & Economics, 2(2): 213–38.
Miller, D. (2010). J. S. Mill: Moral, social and political thought. Malden, MA: Polity Press.
Morales, M. (1996). Perfect Equality: John Stuart Mill on well-constituted communities. New York: Rowman & Littlefield.
Muldoon, R. (2015). "Expanding the Justificatory Framework of Mill's Experiments in Living." Utilitas, 27(2): 179–94.
Raz, J. (1991). "Free Expression and Personal Identification." Oxford Journal of Legal Studies, 11(3): 303–24.
Riley, J. (1996). "J. S. Mill's Liberal Utilitarian Assessment of Capitalism versus Socialism." Utilitas, 8(1): 39–71.
Riley, J. (1998). "Mill's Political Economy: Ricardian science and liberal utilitarian art," in J. Skorupski (ed.), The Cambridge Companion to Mill. New York: Cambridge University Press.
Rosen, F. (2013). Mill. Oxford: Oxford University Press.
Ryan, A. (1970). The Philosophy of John Stuart Mill. Basingstoke: Macmillan.
Sarvasy, W. (1984). "J. S. Mill's Theory of Democracy for a Period of Transition between Capitalism and Socialism." Polity, 16(4): 567–87.
Sarvasy, W. (1985). "A Reconsideration of the Development and Structure of John Stuart Mill's Socialism." Western Political Quarterly, 38(2): 312–33.
Scanlon, T. M. (1972). "A Theory of Freedom of Expression." Philosophy & Public Affairs, 1(2): 206–26.
Stafford, W. (1998). "How Can a Paradigmatic Liberal Call Himself a Socialist? The case of John Stuart Mill." Journal of Political Ideologies, 3(3): 325–45.
Sunstein, C. (2001). Republic.com. Princeton, NJ: Princeton University Press.
Ten, C. L. (1998). "Democracy, Socialism and the Working Classes," in J. Skorupski (ed.), The Cambridge Companion to Mill. New York: Cambridge University Press.
Van Mill, D. (2017). Free Speech and the State: An unprincipled approach. Basingstoke: Palgrave Macmillan.
Vernon, R. (1996). "John Stuart Mill and Pornography: Beyond the harm principle." Ethics, 106: 621–32.
7
EPISTEMIC DEMOCRACY
Jason Brennan
What, if anything, justifies democracy? One basic question in political philosophy is who should hold power. Just as there are competing answers to that question, so there are competing views about which criteria we should use to answer the question. Proceduralism holds that some ways of distributing political power are intrinsically just, or, alternatively, that some ways are intrinsically unjust.1 In contrast, instrumentalist or epistemic theories hold that (1) there are procedure-independent right answers to at least some political questions, and (2) what justifies a distribution of power or decision-making method is, at least in part, that this distribution or method tends to select the right answer (see Estlund 2007; Brennan 2016a).

To hold an "epistemic conception of democracy" is to hold that democracies tend to, in one way or another, track some normative standards about what governments ought to do, or that democracies tend to produce certain good/just/right outcomes (e.g., Cohen 2006; List and Goodin 2001; Estlund 2007). Advocates of epistemic democracy hold that there are some procedure-independent standards for assessing the quality of political decisions, and that at least part of what justifies democracy is that it tends to produce higher quality decisions than other forms of government. To illustrate, a liberal might hold that democracy is good at least in part because democracies tend to recognize and protect rights. Or an egalitarian might hold that democracy is good because democracies tend to have more equal distributions of income. Epistemic democrats believe that for at least some normative questions, democracies tend to track the right answer; they deny that democratic decisions are good simply because they are made democratically.

There are at least two major conceptions of epistemic democracy. The first view – which we might call the "wisdom of the crowd" view – holds that in certain conditions, the democratic electorate as a whole makes wise decisions at the voting booth, or, more narrowly, that deliberative bodies composed of randomly selected voters tend to make good decisions. As Melissa Schwartzberg (herself somewhat skeptical of epistemic democracy) summarizes the wisdom of the crowd view, "[e]pistemic democracy defends the capacity of 'the many' to make correct decisions and seeks to justify democracy by reference to this ability" (2015: 187). The wisdom of the crowd view holds that two heads are better than one, and many heads are better than fewer.
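A formal touchstone often invoked for this first view – and the starting point of the List and Goodin paper cited above – is Condorcet's jury theorem. In its simplest form: suppose an odd number $n$ of voters face a binary choice with exactly one correct option, each voter independently has the same probability $p > 1/2$ of voting correctly, and the group decides by majority rule. The probability that the majority picks the correct option is then

$$P_n = \sum_{k=(n+1)/2}^{n} \binom{n}{k}\, p^{k} (1-p)^{n-k},$$

which exceeds $p$, increases with $n$, and approaches 1 as $n$ grows. (By the same token, if $p < 1/2$, the majority becomes almost certainly wrong as $n$ grows – a possibility worth bearing in mind given the evidence of systematic voter error discussed below.)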
The second view – which we might dub the "wisdom of the process" view – holds that democratic governments as a whole tend to implement relatively good policies and laws. A person holding this view claims that certain features of modern democratic governments – such as the separation of powers, checks and balances, the party system, contestatory elections, technocratic bureaucracies, special interest politics, or a combination of such factors – tend to result in good outcomes. Some epistemic democrats advocate both views – they believe that both the crowd and the process are wise – while others might advocate the latter but not the former. For instance, an epistemic democrat might hold that the electorate is not wise, but then maintain that democratic checks and balances tend to produce good outcomes, in part because party systems, entrenched bureaucracies, and other factors allow democratic governments to "get away" with passing rules and laws that the electorate opposes (see Achen and Bartels 2016; Shapiro 2003; Brennan 2016a).

A classic illustration of the wisdom of the crowd view is the jelly bean experiment. A professor asks students to guess how many jelly beans are in a jar. Individual student guesses are distributed all over, but the class's mean guess is very close to correct. The larger the number of guesses, the more accurate the mean guess becomes. Only a small minority of individuals are more accurate than the class's mean guess (Surowiecki 2005: 5). In this case, the crowd as a whole is wiser than nearly all of the individuals within it.

As an example of the wisdom of the process view, consider how markets work according to standard neoclassical economics. Market prices emerge from individual actions and preferences. In general, if not always, market prices quickly and efficiently coordinate the activities of billions of people, even though individual agents in the market know hardly anything about the economy, and even though no individual or group of experts could themselves plan a large-scale economy. More specifically, no single human being has the knowledge or ability to make a number 2 pencil from scratch (including growing the tree, making the saw that cuts it down and the truck that takes it to the sawmill, making the paint, etc.), and yet the market produces pencils cheaply and efficiently (Read 1958). As individuals, people are too dumb to make pencils by themselves, but in a market economy, as a collective, they are excellent at it. In standard neoclassical economics, we cannot say that all consumers and producers as a collective whole know how to run the economy efficiently, yet the market process works. The process is smart though the individuals participating within it, as individuals or as a collective, are not.

In this essay, I will argue that the most defensible version of epistemic democracy is the wisdom of the process view. The evidence for the wisdom of the process view is stronger than the evidence for the wisdom of the crowd view. In fact, there is good evidence that in democratic decision-making, the "crowd" is not wise, but systematically in error. Nevertheless, though the crowd is not wise, the democratic political process as a whole has a tendency to produce good outcomes. To put this thesis in a more exciting and radical way: perhaps the reason democracy tends to work so well is that, in a sense, it does not work.
Democracies tend to produce good outcomes because they respond to citizens' interests, but downstream, post-election decision-makers retain significant freedom to implement policies that the crowd opposes.
Pure proceduralist skepticism about epistemic democracy

By definition, epistemic democrats are committed to the view that there are procedure-independent right, correct, or true answers to at least some political questions. They hold that for many political decisions, if not all, there is some truth of the matter about what ought to be done. (An epistemic democrat might agree that there is no independent truth of the
matter about what color flag a country ought to have, but there is an independent truth about which rights it ought to protect.) There are better and worse decisions, as judged by some true, procedure-independent standard. The reason to advocate democracy over other forms of government is that it tends to track the truth.

Note that while epistemic democracy is an instrumentalist view of government, as it holds that part of what justifies democracy is how well it performs, epistemic democrats need not be thoroughgoing consequentialists. In general, consequentialist moral theories hold that what makes an action right is that the action can be expected to lead to the best overall consequences. Some epistemic democrats are consequentialists (e.g., Arneson 2003, 2004), but an epistemic democrat could instead subscribe to a deontological view. For instance, an epistemic democrat could hold that democracies are superior to other forms of government because they are the most likely to recognize, respect, and protect people's natural rights. In that case, the epistemic democrat advocates democracy because it best tracks a deontological theory of justice.

Some democratic theorists are skeptical of epistemic democracy because they are skeptical of the claim that there are procedure-independent standards by which to judge democratic outcomes. Many democratic theorists are pure proceduralists (e.g., Arendt 1967: 114). Pure proceduralism is the thesis (a) that there are no independent moral standards for evaluating the outcomes of decision-making institutions, and (b) that some political decision-making procedures are either intrinsically just or unjust. For instance, Jürgen Habermas says, "The notion of a higher law belongs to the premodern world. There are no standards that loom over the political process, policing its decisions, not even the standard of reason itself" (1996: 106). Habermas believes there are moral standards governing how democracies ought to make decisions but denies there are any standards for judging what democracies decide. As another illustration, consider this quotation from political theorist Iñigo González-Ricoy:

In a democratic society no process-independent moral criteria can be referred to in order to settle what counts as a harmful, unjust or morally unjustified exercise of the right to vote, for voting is a device that is only called for precisely when citizens disagree on what counts as harmful, unjust and morally unjustified.
González-Ricoy 2012: 50

González-Ricoy claims that because people disagree about what counts as harmful or unjust, it is illegitimate to refer to any independent standards of justice to judge what democracies do.

Pure proceduralists are not moral nihilists – they do not claim there is no moral truth at all. Rather, they believe that there are truths about what decision-making procedures we ought to use, but no truths about what we ought to decide. This puts them in an odd position: why would principles of justice or political legitimacy be limited only to how we decide rather than what we decide? Some political theorists answer this question by saying that since people disagree about what government ought to do, justice requires that we use a fair procedure for resolving our disagreements. However, as David Estlund points out, a concern for fairness does not get us to democracy. We could instead "flip a coin," that is, we could randomly select some citizens' preferred policies or candidates over others (Estlund 2007).
Further, pure proceduralism has some deeply implausible implications. For instance, suppose we disagreed about whether we should do something horrific, such as nuke a neighboring state for fun, or allow parents to kill their children for fun. Suppose, after following an idealized deliberative procedure, the majority of citizens (or, if you like, every citizen) agreed that we should nuke the neighbor or allow filicide. A pure proceduralist would have to say that in such a case, it would, as a result, become permissible to nuke the neighbor or to commit filicide. But that
seems absurd – it seems absurd to claim that evils could become permissible by democratic fiat. If so, then pure proceduralism must be false. There must be at least some independent standards by which to judge democratic decisions, and the main question is what those standards are.

Another way of putting this is that pure proceduralists are committed to skepticism about almost every theory and view within political philosophy. What personal and economic rights do people have? Pure proceduralists have to say: whatever rights a democracy decides. What are the standards for social justice? Whatever a democracy decides. What is the correct theory of criminal justice? Whatever a democracy decides. Is capitalism or socialism more just? Whatever a democracy decides. What democracies do is just simply because democracies do it.

Epistemic democrats can and do accept that people are fallible and that there is reasonable disagreement about right and wrong. An epistemic democrat need not believe that she herself knows what the independent standards or right answers to political questions are. Instead, epistemic democrats try to appeal to features of democratic decision-making that, they believe, make it likely that democracies will better tend to track the truth or produce better outcomes than other forms of government, whatever that truth may be.
Political ignorance, misinformation, and irrationality

Before asking what amount of wisdom the crowd or the democratic process has, we might ask how much wisdom the individuals within the crowd have. The answer, for most of them, is not much. Beginning in the 1950s, researchers at the University of Michigan and elsewhere began examining how much political knowledge average and typical voters had.2 The results were disheartening. As Philip Converse summarizes, “The two simplest truths I know about the distribution of political information in modern electorates are that the mean is low and the variance is high” (1990: 372). Ilya Somin says, “The sheer depth of most individual voters’ ignorance is shocking to many observers not familiar with the research” (2013: 17). In his extensive review of the empirical literature on voter knowledge, Somin concludes that at least 35% of voters are “know-nothings” (2013: 17–37, emphasis mine). I emphasize “voters” because not everyone votes, and people who choose not to vote tend to know less than people who choose to vote.

Political scientist Larry Bartels says, “The political ignorance of the American voter is one of the best-documented features of contemporary politics” (1996: 194). For instance, during election years, most citizens cannot identify any congressional candidates in their district (see Hardin 2009: 60). Citizens generally don’t know which party controls Congress (Somin 2013: 17–21). Most Americans do not know even roughly how much is spent on social security or how much of the federal budget it takes up (Somin 2013: 29). During the 2000 U.S. presidential election, while slightly more than half of all Americans knew Gore was more liberal than Bush, they did not understand what the term “liberal” meant. In fact, 57% of them knew Gore favored a higher level of spending than Bush did, but significantly fewer than half knew that Gore was more supportive of abortion rights, more supportive of welfare-state programs, favored a higher degree of aid to black people, or was more supportive of environmental regulation (Somin 2013: 31). Only 37% knew that federal spending on the poor had increased or that crime had decreased in the 1990s (Somin 2013: 32). On many of these questions, Americans did worse than a coin flip – they were systematically mistaken. Similar results hold for other election years (see, e.g., Althaus 2003: 11).

Voters do poorly on tests of basic political knowledge. But such tests overestimate what voters know for at least two reasons. First, most surveys give voters a multiple-choice test, so lucky guesses get marked as correct answers. Second, most surveys, such as the American National Election
Studies or the Pew Research Center polls, only test easily verifiable basic knowledge (such as who the president is or which party controls Congress), rather than advanced social scientific knowledge, such as voters’ understanding of economics or sociology. Not surprisingly, attempts to measure such knowledge also produce disheartening results. Americans have systematically different beliefs from economists about how the economy functions, and systematically different beliefs from political scientists regarding how government functions (Caplan 2007; Caplan et al. 2013). These differences in belief are not explained by demographic differences between academic social scientists and the lay public.

Every four years, the American National Election Studies ask voters who they are, what they know, and what policies they prefer. With this massive amount of data, it is possible to determine, through statistical analysis, how demographic factors affect policy preferences (holding knowledge constant), and how knowledge affects policy preferences (holding demographics constant). It is thus possible to estimate what policies the voting public would support if it had full knowledge, and also what it would support if it were completely ignorant or systematically in error. In one such study, Scott Althaus finds that, holding demographics constant, a poorly informed voting public has systematically different preferences from a well-informed voting public (see Althaus 2003: 129; Caplan 2007).3 Worse, the actual median voter’s policy preferences are closer to those of the badly informed voter than to those of a well-informed voter. Other studies using similar methods find similar results (Caplan 2007; Caplan et al. 2013; Gilens 2012). (A toy version of this estimation strategy is sketched below.)

Voters do not just appear to be ignorant and misinformed, but also irrational. The overwhelming consensus in political psychology, based on a huge and diverse range of studies, is that most citizens process political information in deeply biased, partisan-motivated ways, rather than in dispassionate, rational ways. Political psychologists Milton Lodge and Charles Taber summarize the body of extant work: “The evidence is reliable [and] strong … in showing that people find it very difficult to escape the pull of their prior attitudes and beliefs, which guide the processing of new information in predictable and sometimes insidious ways” (2013: 169). Here Lodge and Taber mean both (a) that voters suffer in particular from confirmation/disconfirmation bias, that is, that they ignore and evade evidence contrary to their previous views, while overrating evidence that validates their existing views, and (b) that random emotions greatly influence how voters process information. (For instance, if you happen to be sad or happy right before you encounter a new piece of evidence, that changes whether and how you take it in.) Political psychologists Leonie Huddy, David Sears, and Jack Levy say, “Political decision-making is often beset with biases that privilege habitual thought and consistency over careful consideration of new information” (2013: 11). Voters suffer from a wide range of biases, including confirmation bias, disconfirmation bias, motivated reasoning, intergroup bias, availability bias, and prior attitude effects.
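To make the logic of these “enlightened preferences” estimates concrete, here is a minimal sketch of the underlying statistical idea, run on simulated data. Everything in it is invented for illustration: the two demographic variables, the knowledge effect, and all coefficients are my assumptions, not results from Althaus, Caplan, Gilens, or the ANES.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Simulated survey respondents: two demographic traits and a knowledge score.
income = rng.normal(0, 1, n)        # standardized income
age = rng.normal(0, 1, n)           # standardized age
knowledge = rng.uniform(0, 1, n)    # 0 = knows nothing, 1 = fully informed

# Made-up "ground truth": support for some policy depends on demographics
# AND on knowledge (here, better-informed respondents support it less).
logit = 0.6 + 0.5 * income - 0.3 * age - 1.5 * knowledge
supports = rng.random(n) < 1 / (1 + np.exp(-logit))

# Model support as a function of demographics and knowledge together.
X = np.column_stack([income, age, knowledge])
model = LogisticRegression().fit(X, supports)

# "Enlightened preferences": hold everyone's demographics fixed, set
# everyone's knowledge to the maximum, and re-predict aggregate support.
X_informed = X.copy()
X_informed[:, 2] = 1.0

print("actual aggregate support:   ", round(supports.mean(), 3))
print("simulated informed support: ", round(model.predict_proba(X_informed)[:, 1].mean(), 3))
```

In this toy model the “fully informed” electorate supports the policy noticeably less than the actual electorate, but only because that effect was built into the simulated data; the real studies ask whether such gaps emerge from actual survey responses.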
The dominant explanation for why voters are so ignorant or misinformed is the rational ignorance theory. This view starts by noting that the probability that an individual vote will break a tie in an election is vanishingly small (see Brennan and Lomasky 2003: 56–57, 119; Gelman et al. 2012). Individual voters seem to be aware that their chances of making a difference are low. Acquiring political information is costly, taking time and effort. When the expected costs of acquiring information of a particular sort exceed the expected benefits of possessing that sort of information, people will usually not bother to acquire the information. Thus, voters are rationally ignorant – most do not invest in political information because it does not pay. Furthermore, voters may be “rationally irrational”: they do not expend the effort to correct their cognitive biases when reasoning about politics, because the costs of doing so exceed the expected benefits (Caplan 2007; Brennan 2016a). Since individual votes make no difference, individual voters are
incentivized to remain ignorant, or to indulge irrational thought processes or mistaken political ideologies.

Now, some democrats dispute whether voters are quite as ignorant as the studies say. They agree that voters lack the information the studies test. However, they claim, voters have other forms of information which are relevant for politics. As one public debate partner of mine put it, colorfully: “Knowing the price of a bus ticket is as important as knowing the unemployment rate.” But while voters surely know a great deal about their personal experiences, it is unclear how this helps them, as individuals or in aggregate, when it comes to choosing parties or policies.
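At bottom, the rational ignorance account is a simple expected-value comparison. As a rough worked illustration (the numbers are mine, chosen only to convey orders of magnitude, not figures from the works cited above): let p be the probability that one’s vote breaks a tie, B the value to the voter of the better outcome winning, and C the cost of becoming informed. Then

EU(becoming informed) = p × B − C

If p = 1/10,000,000 and B = $10,000, then p × B = $0.001, a tenth of a cent. Any information cost above that, even a few minutes of reading, makes remaining ignorant the individually “rational” course, which is why the theory predicts widespread ignorance even among voters who genuinely care about outcomes.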
A priori arguments for the “wisdom of the crowd” view

The wisdom of the crowd view holds that wisdom or knowledge is an emergent property of the group. Though individuals within the group have inaccurate beliefs, having them vote together might cause the group as a whole to make accurate or good decisions. There are a few cases – such as guessing the number of jelly beans or the weight of farm animals – where large crowds’ mean guesses tend to be accurate. There are other cases where the mean guesses are far off. Defenders of the wisdom of the crowd view need a general theory to explain when the crowd is wise and when it is not. To show that competence is an emergent feature of democratic decision-making, political theorists frequently cite three mathematical theorems: (1) the Miracle of Aggregation Theorem, (2) Condorcet’s Jury Theorem, and (3) the Hong-Page Theorem.

The Miracle of Aggregation Theorem holds that a large democracy with a minority of well-informed voters can perform as well as a small democracy made up entirely of well-informed voters (Converse 1990: 381–82). The argument for this theorem is simple. Suppose a voter is entirely ignorant. She is asked to choose between two candidates, A and B. Being ignorant, she has no reason to pick either candidate, and so will just choose at random. If there are a large number of such ignorant voters, their votes will be randomly distributed and will cancel each other out. The small minority of informed voters will have systematic preferences for one candidate, and so that candidate will end up with a majority and win.

In the abstract, this theorem seems plausible, but it might not describe what happens in real-life democracies. The theorem assumes voters are so ignorant that they vote randomly. But empirical work reveals that the real-life voters we call “ignorant” do not vote randomly. For example, they are strongly biased to select the incumbent over the challenger (Somin 1998: 431).4 They are biased to “follow the leader,” that is, to settle on the same candidate that other ignorant voters have picked, rather than to vote at random (Jakee and Sun 2006). As we discussed above, their preferences do not track the preferences of a simulated fully informed electorate.

Condorcet’s Jury Theorem holds that if voters are independent, and if the average voter is sufficiently well-motivated and is more likely than not to be correct, then, as a democracy becomes larger and larger, the probability that the voting public will get the right answer approaches 1 (de Condorcet 1976: 48–49). Thus, even if individual voters are on average just slightly more likely than chance to make the right choice, Condorcet’s Jury Theorem says that an electorate of only 10,000 voters is close to certain to make the correct choice.

Whether Condorcet’s Jury Theorem tells us anything at all about real-life democracy depends on whether democratic voting meets a number of conditions. For instance, voters have to be sufficiently independent of one another – they cannot just be copying each other’s votes.5 But empirical work provides strong evidence that voters are conformists and do in fact follow each
other (Jakee and Sun 2006). However, as I will discuss below, the main worry is that voters in fact make systematic mistakes.

The Hong-Page Theorem holds that under the right conditions, cognitive diversity among the participants in a collective decision-making process contributes more to that process producing right outcomes than increasing the participants’ individual reliability or ability does (Hong and Page 2001). These conditions are as follows:

(1) the participants must have genuinely diverse models of the world;
(2) the participants must have sufficiently complex models of the world;
(3) the participants must agree on what the problem is and what would count as a solution;
(4) the participants must all be trying to solve the problem together;
(5) the participants must be willing to learn from others and take advantage of other participants’ knowledge.6

The conditions for the Hong-Page Theorem are highly demanding. It requires that participants have complex models of the world, but, as Scott Page writes: “For democracy to work, people need good predictive models. And often, the problems may be too difficult or too complex for that to be the case” (2007: 345). It may be that most voters have overly simple models of the world. The Hong-Page Theorem also assumes individual decision-makers have identified a problem and are each trying to solve that problem. It assumes individual decision-makers agree on what the problem is, that they are each dedicated to solving that problem, and that they are willing to listen to each other in an open-minded and rational way. It assumes that when the problem has been solved, the participants will agree that it has in fact been solved. But it is far from obvious that actual democratic deliberation is anything like that. In real-world deliberation, deliberators frequently fail to listen to one another and instead dig in their heels. In a comprehensive survey of the empirical research on democratic deliberation, political scientist Tali Mendelberg says that the “empirical evidence for the benefits that deliberative theorists expect” is “thin or non-existent” (2002: 154), adding that “in most deliberations about public matters,” group discussion tends to “amplify” intellectual biases rather than “neutralize” them (2002: 176). In the real world, voters do not agree on just what the “problem” is that politics is trying to solve, nor do they always agree that problems have been solved. For example, Americans in the early 1990s wanted crime rates to go down. In the next twenty years, crime rates went down dramatically, but few Americans knew this (see Somin 2013: 18–21). In fact, Americans mistakenly think gun crimes are up.7

All this aside, there is a common worry about these three theorems. Each of the three theorems claims that if certain conditions obtain, then a democratic voting body as a whole will make reliable decisions, even though individuals within that body are not particularly reliable. The essential problem with all three theorems, though, is that empirical work seems to show that the voting public makes systematic errors. “Ignorant” voters do not have random preferences. For instance, in economics, the median voter advocates significant protectionism, while economists both Left and Right advocate free trade (Caplan 2007). Numerous studies find that the mean, median, and modal voters have different policy preferences than they would have were they better informed (see Althaus 2003; Gilens 2012; Achen and Bartels 2016; Brennan 2016a).
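A small Monte Carlo sketch shows both aggregation mechanisms at work and how systematic error undoes them. The electorate sizes, competence levels, and bias below are illustrative choices of mine, not estimates from any of the studies cited:

```python
import random

random.seed(1)

def majority_correct(n_voters, p, trials=200):
    """Estimate the probability that a majority of n_voters independent
    voters, each correct with probability p, picks the better option."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p for _ in range(n_voters))
        wins += correct > n_voters / 2
    return wins / trials

# Condorcet's Jury Theorem: slightly-better-than-chance voters, growing electorate.
for n in (101, 1001, 10001):
    print(n, majority_correct(n, 0.51), majority_correct(n, 0.49))
# With p = 0.51 the majority converges toward always being right;
# with p = 0.49 (systematic error) it converges toward always being wrong.

def miracle(n_voters, f_informed, p_uninformed, trials=200):
    """Miracle of aggregation: a fraction f_informed always votes correctly;
    the rest are correct with probability p_uninformed."""
    wins = 0
    for _ in range(trials):
        correct = sum(
            random.random() < (1.0 if random.random() < f_informed else p_uninformed)
            for _ in range(n_voters)
        )
        wins += correct > n_voters / 2
    return wins / trials

print(miracle(10001, 0.05, 0.50))  # uninformed vote randomly: informed 5% decide (~1.0)
print(miracle(10001, 0.05, 0.45))  # uninformed share a mild bias: the bias decides (~0.0)
```

The last two lines make the point pressed in this section: the “miracle” requires the uninformed to err randomly. Give them even a mild shared bias and the informed minority is swamped.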
It is hard to overstate the importance of critiques based on systematic error. If citizens are systematically mistaken, then by definition their errors are not randomly distributed, and the miracle of aggregation does not occur. And if they are systematically mistaken, then Condorcet’s
Jury Theorem implies not that democracies are all but certain to pick the right answer, but that they are all but certain to pick the wrong answer. If they are systematically mistaken, then citizens lack the cognitive diversity the Hong-Page Theorem needs to generate the right results.
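For readers who want to see the diversity-trumps-ability mechanism under conditions where it does hold, here is a toy reconstruction of the search model behind the theorem. It is my own simplification, not Hong and Page’s code, and it builds in exactly the demanding assumptions just discussed: one shared problem, perfect turn-taking cooperation, and genuinely different heuristics.

```python
import random

random.seed(2)

N = 200        # points on a circular "solution space", each with a payoff
K = 3          # each agent's heuristic: 3 distinct step sizes
POOL = 40      # candidate agents per trial
TRIALS = 20

def climb(value, start, heuristic):
    """An agent hill-climbs: take any step in its heuristic that improves
    the payoff, and stop when no step in its repertoire improves it."""
    pos = start
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            if value[(pos + step) % N] > value[pos]:
                pos = (pos + step) % N
                improved = True
    return pos

def team_score(value, start, team):
    """Agents work in relay, each climbing from the best point found so far."""
    pos = start
    improved = True
    while improved:
        improved = False
        for h in team:
            nxt = climb(value, pos, h)
            if value[nxt] > value[pos]:
                pos, improved = nxt, True
    return value[pos]

def solo_ability(value, heuristic):
    """Average payoff an agent reaches alone, over all starting points."""
    return sum(value[climb(value, s, heuristic)] for s in range(N)) / N

best_scores, random_scores = [], []
for _ in range(TRIALS):
    value = [random.uniform(0, 100) for _ in range(N)]
    pool = [tuple(random.sample(range(1, 13), K)) for _ in range(POOL)]
    ranked = sorted(pool, key=lambda h: solo_ability(value, h), reverse=True)
    start = random.randrange(N)
    best_scores.append(team_score(value, start, ranked[:10]))   # 10 ablest agents
    random_scores.append(team_score(value, start, random.sample(pool, 10)))

print("team of the ablest   :", sum(best_scores) / TRIALS)
print("random (diverse) team:", sum(random_scores) / TRIALS)
```

In typical runs the randomly drawn team matches or beats the team of the ablest, because the ablest agents tend to share step sizes and so get stuck at the same local optima, though the outcome depends on the parameters. Notice, too, how much the code must stipulate (a common payoff function, willing relay cooperation) that, as argued above, real electorates conspicuously lack.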
The wisdom of the process

A priori “proofs” of the wisdom of the crowd view are problematic for many reasons. Real-life democracies, such as Canada, France, Sweden, Germany, the United States, and so on, do not quite seem to fit the models, in part because multiple empirical studies find that real-life voting publics make systematic mistakes on both basic political knowledge and advanced social scientific knowledge. Nevertheless, this by itself does not show that epistemic democracy is mistaken. After all, epistemic democrats in general take a comparative approach. They need not claim that democracies always or even usually get the right answer. Instead, they need only argue that democracies tend to do better than the feasible or extant alternatives, such as one-party systems, dictatorships, absolute monarchies, oligarchies, and the like.

As an illustration, suppose democracies tend to make correct decisions only 1/10th of the time, but the second-best alternative regime-type tended to make correct decisions only 1/11th of the time. Suppose also that when democracies make wrong decisions, they tend to be only 1 to 2 standard deviations in error, while the next best system tends to be 2 to 3 standard deviations in error. In such a case, democracy as a whole would be an unreliable and, indeed, an incompetent political decision-making method. Nevertheless, epistemic considerations would still favor democracy over any other system. Thus, an epistemic democrat might say, “democracy is the worst form of government except for all those that have been tried from time to time…” (Winston Churchill, quoted in Langworth 2008: 574).

An epistemic democrat could thus admit that voters tend to be ignorant, misinformed, and irrational, but then argue, quite plausibly, that democracies nevertheless tend to do a better job than the alternatives at respecting and promoting a wide range of citizens’ interests. For instance, empirical work shows that there are strong positive correlations between (a) how democratic a country is and (b) how strongly a country respects non-political civil and personal liberties, and also between (c) how strongly a country respects economic liberties and (d) the wealth and income enjoyed by both the average citizen of a country and the citizen at the 10th percentile of income.8 Though these strong correlations exist, correlation is, of course, not sufficient for causation. Perhaps democratic politics tends to lead to liberal results and to prosperity. Perhaps liberal and prosperous countries tend to become more democratic. Perhaps some third factor tends to produce both. Perhaps democracy and liberalism tend to be mutually reinforcing.

Leading institutional economists Daron Acemoglu and James Robinson argue that democracies do not just happen to be liberal and prosperous; rather, democratic politics tends to lead to liberalism and prosperity. They claim political and economic regimes fall into two broad types: extractive and inclusive (Acemoglu and Robinson 2012). In extractive regimes, power is concentrated in the hands of the few, and these few in turn use their power to expropriate wealth and dominate the majority. In inclusive regimes, power is widely dispersed. Institutions and laws in inclusive regimes tend to be designed to benefit the majority of citizens. Inclusive regimes tend to enjoy the rule of law, respect for private property, and reduced economic rent-seeking. In turn, this leads to greater economic prosperity and respect for civil rights.
While Acemoglu and Robinson might be right, we should be careful not to presume that the reason democracies are so liberal is that most democratic citizens strongly advocate civil or economic liberalism. As Scott Althaus (2003: 11), Bryan Caplan (2007: 51), and Martin Gilens
(2012: 106–11) each conclude (using different surveys and data sets), the modal and median citizens in the United States are much less supportive of economic or civil liberties than more elite, educated, and higher-income citizens. Many modern democracies appear to be significantly more liberal than we might expect, given what the modal and median voter tends to support.

For many years, and perhaps still, the dominant model in political science of how politicians respond to voter preferences was the Median Voter Theorem.9 According to the Median Voter Theorem, politicians in any given district will tend to advocate and attempt to implement the policy preferences of the median voter from that district. The basic argument is that politicians on either extreme, left or right, can usually win more votes by moving toward the political center, where the center is defined by the preferences of that district’s median voter. Some evidence for this model can be seen in U.S. presidential elections. After the primaries, the Democratic and Republican nominees tend to move toward the center; they advocate more moderate policies than they did during the primaries.

One recent challenge to the Median Voter Theorem may help to explain why democracies perform better than we might expect, in light of widespread voter ignorance and misinformation. Recently, Martin Gilens measured how responsive different presidents have been to different groups of voters. Gilens finds that when voters at the 90th, 50th, and 10th percentiles of income disagree about policy, presidents are about six times more responsive to the policy preferences of the rich than to those of the poor (2012: 80). If Gilens is right, epistemic democrats should take heart. Democracy performs well in part because it does not automatically give the median voter what he or she wants. Study after study finds that political knowledge is strongly and positively correlated with income; voters at the 90th percentile of income will generally be much better informed than voters at the 50th or especially the 10th percentile (see, e.g., Delli Carpini and Keeter 1996). Furthermore, voters tend to vote and form their policy preferences sociotropically (for what they perceive to be the national interest).10 (Thus, siding with what higher-income voters want does not necessarily mean advancing their interests over others’.) Gilens finds that, for example, high-income and high-information Democratic voters have systematically different policy preferences from low-income, low-information Democrats. High-income Democrats tend to have high degrees of political knowledge, while poor Democrats tend to be ignorant or misinformed. Poor Democrats approved more strongly of invading Iraq in 2003. They were more strongly in favor of the Patriot Act, of invasions of civil liberty, of torture, of protectionism, and of restricting abortion rights and access to birth control. They were less tolerant of homosexuals and more opposed to gay rights (Gilens 2012: 106–11).

In general, democratic voters elect politicians with certain ideological or policy bents. In doing so, they make it more likely that laws, regulations, and policies fitting those bents will be implemented. But it is not as though whatever policies most voters happen to support during an election are immediately implemented. Instead, there is a wide range of political bodies and administrative procedures that mediate between what the majority of the moment appears to want during the election and what laws and rules actually get passed.
Many empirically minded democratic theorists argue that these intermediary bodies and processes help explain why democracies make good decisions. Consider that:

• Large government bureaucracies have significant independence. They do not simply follow presidential or congressional orders, but instead set their own agendas and act without, or even in defiance of, oversight from elected officials.
• The judiciary similarly has significant power to change or to veto democratic legislation, or to create laws itself.
• The political process – with checks and balances, frequent elections, and so on – tends to prevent political instability (see, e.g., Oppenheimer and Edwards 2012: 119–222).
• While voters are badly informed, politicians are much better informed, and many are reasonably well-motivated. Politicians and political parties make deals with one another and compromise, or they hold fast and prevent the other side from unilaterally imposing its will. As a result, political outcomes tend to be relatively moderate and conservative, in the sense that changes from the status quo come gradually.
• Political parties have significant power to shape the political agenda and make decisions independently of voters’ desires, opinions, and wishes. Since most voters are ignorant, they are unlikely to know what the parties have done and are thus unlikely to punish them for imposing laws the voters wouldn’t like if only they knew about them.11

Each of these mediating factors reduces the power of the majority of the moment during the election, and instead places greater power in the hands of more informed citizens. In that sense, they are epistemic checks within a democratic system. But, at the same time, these bodies and processes are not dictatorial. They can be checked by the electorate, and an unusually dissatisfied electorate can vote to create change.

To summarize, the arguments and positions we have reviewed in this section do not purport to show that, somehow, when we get lots of relatively ignorant people together, they will end up voting in a way that closely approximates the truth. Rather, the arguments are that, as a whole, the various inclusive systems of checks and balances in modern democracies tend to perform well, compared to the alternatives. Most governing is done by politicians and bureaucrats who to a significant degree act against citizens’ policy preferences. Yet politicians and bureaucrats are disciplined, to a significant degree, to serve the public interest rather than simply their own personal interests, because inclusive voting allows citizens to punish bad behavior and reward good behavior far more than citizens can in extant non-democratic systems.
The epistocratic challenge

Political scientists generally defend democracy on epistemic grounds. They’ve produced an impressive body of work explaining why democratic procedures tend to produce good outcomes (even if the “wisdom of the crowd” view is mistaken). I have reviewed only a small sample of that work here. However, while there is strong evidence that democracies tend to outperform other extant forms of government, there might be new forms of government that outperform democracy. In particular, perhaps some form of epistocracy might be preferable to democracy, at least on epistemic grounds. A political regime is said to be epistocratic to the extent that political power is, by law, distributed according to competence, skill, and the good faith to act upon that skill.12 There are many possible forms of epistocracy, including:

1. Restricted Suffrage: Citizens may acquire the legal right to vote and run for office only if they are deemed (through some sort of process) competent and/or sufficiently well-informed (Brennan 2011).
2. Plural Voting: As in a democracy, every citizen has a vote. However, some citizens, those who are deemed (through some legal process) to be more competent or better informed, have additional votes (Mill 1975; Mulligan forthcoming).
3. The Enfranchisement Lottery: Electoral cycles proceed as normal, except that by default no citizen has any right to vote. Immediately before the election, thousands of citizens are
selected, via a random lottery, to become pre-voters. These pre-voters may then earn the right to vote, but only if they participate in certain competence-building exercises, such as deliberative forums with their fellow citizens (López-Guerra 2014).
4. Epistocratic Veto: All laws must be passed through democratic procedures via a democratic body. However, an epistocratic body with restricted membership retains the right to veto rules passed by the democratic body (Brennan 2016a).
5. Weighted Voting/Government by Simulated Oracle: Every citizen may vote, but each must take a quiz concerning basic political knowledge at the same time. Their votes are weighted based on their objective political knowledge, perhaps while statistically controlling for the influence of race, income, sex, and/or other demographic factors (Brennan 2016a). (A simplified sketch of this option appears at the end of this section.)

Epistocracy should not be conflated with technocracy. For many, the term “technocracy” connotes a government with extensive power over its citizenry, which imbues bands of experts with significant power, and which attempts to engage in various forms of social engineering. An epistocrat need not accept technocracy so defined; a fortiori, some democrats do accept technocracy so defined (see, e.g., Thaler and Sunstein 2008). Nor should epistocracy be described as the “rule of the few” (pace Landemore 2012). Suppose an epistocrat favored excluding the bottom 50% of voters. In the United States, that would still be the rule of 100 million voters – the rule of the many, if not quite everybody.

Epistocrats might favor using the same checks and balances within current democratic systems. Epistocracies might have parliaments, contested elections, free political speech open to all, contestatory and deliberative forums, and so on.13 These epistocracies might retain many of the institutions, decision-making methods, procedures, and rules that we find in the best-functioning versions of democracy. The major difference between epistocracy and democracy is that people do not, by default, have an equal right to vote or run for office.

There are a number of arguments for epistocracy, and not surprisingly, in general, epistocrats defend epistocracy on epistemic grounds. Epistocrats make note of the widespread voter ignorance and misinformation we discussed above, and then argue that distributing basic political power in unequal ways could lead to better outcomes. In general, though, their arguments are somewhat speculative, because at present we have few or no viable examples of epistocracy.14
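To make option 5 concrete, here is a deliberately simplified sketch of a knowledge-weighted tally. The two-candidate scenario and the quiz-score ranges are hypothetical, and the statistical demographic corrections Brennan describes are omitted entirely:

```python
import random

random.seed(3)

def weighted_winner(ballots):
    """ballots: (choice, quiz_score) pairs; each ballot counts in
    proportion to the voter's score on a basic political-knowledge quiz.
    (Brennan's proposal would also adjust weights for demographic
    factors; that correction is omitted here for brevity.)"""
    totals = {}
    for choice, score in ballots:
        totals[choice] = totals.get(choice, 0.0) + score
    return max(totals, key=totals.get)

# Hypothetical electorate: 600 voters prefer A with low average quiz scores;
# 400 voters prefer B with high average quiz scores.
ballots = ([("A", random.uniform(0.1, 0.5)) for _ in range(600)]
           + [("B", random.uniform(0.6, 1.0)) for _ in range(400)])

unweighted = max("AB", key=lambda c: sum(1 for ch, _ in ballots if ch == c))
print("one person, one vote:", unweighted)                # A wins on raw counts
print("knowledge-weighted:  ", weighted_winner(ballots))  # B wins on weighted counts
```

The sketch shows only the mechanical point: weighting by measured knowledge can reverse a raw-count result. Whether any real quiz tracks the relevant competence, and whether the demographic corrections can be made fair, are precisely what epistocrats and their critics dispute.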
Conclusion

In general, the best places to live are democracies, not one-party states, absolute monarchies, or oligarchies. Epistemic democrats of all stripes think this is not merely a coincidence, and that part of what justifies democracy is that it tends to produce good and just outcomes. In this chapter, we have examined two forms of epistemic democracy, the wisdom of the crowd view and the wisdom of the process view. We saw that the main arguments for the wisdom of the crowd view seem weak, but there are stronger arguments for the wisdom of the process view. At present, though, the main challenge for epistemic democrats is to explain why some form of epistocracy does not produce an even wiser process, or, if it does, why this is not sufficient grounds to prefer epistocracy to democracy.
Notes

1 For instance, Thomas Christiano (1996, 2008) thinks democracy is intrinsically just because it expresses that all citizens have equally valuable lives.
2 Of course, one issue is just what counts as political knowledge. Even “ignorant” voters presumably have some knowledge about their own lives and local conditions, which in principle could be important for making good political decisions.
3 Both Althaus and Caplan correct for the influence of demographic factors.
4 For an empirical confirmation of these claims, see Bartels (1996) and Alvarez (1997).
5 So, for instance, David Estlund (2007: 136–58) doesn’t dispute the mathematics of Condorcet’s Jury Theorem (CJT) but denies that it tells us anything about real-life democracies. Similarly, even though it would be ideologically convenient for me if the CJT applied to actual democracies (because I think democracies are largely incompetent and because I think I can prove that average and median levels of competence among voters is