
The Epistemology of Groups

The Epistemology of Groups

JENNIFER LACKEY

Great Clarendon Street, Oxford, OX2 6DP, United Kingdom Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries © Jennifer Lackey 2021 The moral rights of the author have been asserted First Edition published in 2021 Impression: 1 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above You must not circulate this work in any other form and you must impose this same condition on any acquirer Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America British Library Cataloguing in Publication Data Data available Library of Congress Control Number: 2020941998 ISBN 978–0–19–965660–8 ebook ISBN 978–0–19–263790–1 DOI: 10.1093/oso/9780199656608.001.0001 Printed and bound in Great Britain by Clays Ltd, Elcograf S.p.A. Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.

To Baron, Isabella, and Catherine

Contents

Acknowledgments
Introduction
0.1 On the Very Existence of Group Beliefs
0.2 The Nature of Groups
0.3 Chapter Overviews
0.4 The Bigger Picture
1. Group Belief: Lessons from Lies and Bullshit
1.1 Summative and Non-Summative Views of Group Belief
1.2 Group Lies and Group Bullshit
1.3 Judgment Fragility
1.4 Base Fragility
1.5 The Group Agent Account
1.6 Conclusion
2. What Is Justified Group Belief?
2.1 Divergence Arguments
2.2 The Paradigmatic Inflationary Non-Summativist View: The Joint Acceptance Account
2.3 Problems for the Joint Acceptance Account
2.4 Revisiting Divergence Arguments
2.5 Deflationary Summativism, the Group Justification Paradox, and the Defeater Problem
2.6 The Collective Evidence Problem
2.7 The Group Normative Obligations Problem
2.8 A Condorcet-Inspired Account of Justified Group Belief
2.9 The Group Epistemic Agent Account
2.10 Central Objection to the Group Epistemic Agent Account
2.11 Conclusion
3. Group Knowledge
3.1 Social Knowledge
3.2 Social Knowledge and Action
3.3 Social Knowledge and Defeaters
3.4 Knowing, Being in a Position to Know, and Should Have Known
3.5 Collective Knowledge
3.6 Conclusion
4. Group Assertion
4.1 Two Kinds of Group Assertion
4.2 Having the Authority to Be a Spokesperson
4.3 The Autonomy of Spokespersons
4.4 Coordinated and Authority-Based Group Assertion
4.5 Two Other Accounts
4.6 Group Assertion Is Not Reducible to Individual Assertion
4.7 Conclusion
5. Group Lies
5.1 Individual Lies
5.2 Counterexamples to the Traditional View of Lying
5.3 Non-Deception Accounts of Lying
5.4 Back to Deception
5.5 Summativism and Sufficiency
5.6 Summativism and Necessity
5.7 The Joint Acceptance Account of Group Lies
5.8 Group Lies
5.9 Conclusion
References
Index

Acknowledgments I have been thinking about issues related to the epistemology of groups for a number of years and so there are many people to thank for playing a role in seeing this project through to the end. For giving me enormously helpful feedback on one or more of the chapters in this book, I am grateful to Anne Baril, Jared Bates, Michael Bratman, Jessica Brown, Tom Carson, Fabrizio Cariani, J. Adam Carter, David Christensen, Michael DePaul, Josh Dever, Don Fallis, Sandy Goldberg, Alvin Goldman, John Greco, Allan Hazlett, Marija Jankovic, Nick Leonard, Kirk Ludwig, Eliot Michaelson, Federico Penelas, Jim Pryor, Florencia Rimoldi, John Searle, Andreas Stokke, and Deb Tollefsen. Thanks also go to audience members at the University of Warsaw, the Indiana Philosophical Association meeting at Hanover College, Western Michigan University, the Workshop on the Epistemology of Groups at Northwestern University, the University of Buenos Aires, the University of Warwick, a Social Epistemology Workshop in Helsinki, Finland, the GAP.9 Conference in Osnabrück, Germany, an Invited Symposium at the Eastern Division of the APA in Washington, D.C., the Epistemic Dependence on People and Instruments conference in Madrid, Spain, the 3rd Colombian Conference in Logic, Epistemology, and Philosophy of Science in Bogotá, Colombia, the University of Toronto, Mississauga, the University of St. Andrews, the Southwest Epistemology Workshop at the University of New Mexico, the International Workshop on Lying and Deception at Johannes Gutenberg University in Mainz, Germany, the Collective Intentionality IX Conference at Indiana University, New York University, the University of Connecticut, the University of Massachusetts, Amherst, the University of Georgia, Radboud University, the XVII Congress of the Inter-American Philosophical Society in Salvador, Brazil, the Social Epistemology Workshop in St. Andrews, Scotland, the Midwest Epistemology Workshop at Notre Dame, the University of Texas at Austin, the University of Nebraska—Lincoln, and students in my graduate seminars at Northwestern University. I am also grateful to my daughter, Isabella Reed, for her tireless and meticulous work on the index for this book. Just when I thought I might not cross the finish line, Isabella stepped in with her ever generous and thoughtful spirit to provide much-needed assistance with this project. My greatest debt of all is to my husband, Baron Reed—my most brilliant reader, my fiercest champion, and my most constructive critic. Every sentence in this book has been carefully read by him, often more than once, and every argument has benefited from his incisiveness, powerful mind, and ability to invariably see the best version of what I am saying. I have learned more from Baron than from any other human in my life (except my mother, who literally taught me how to walk and talk!). In keeping with the theme of this book, I often feel that Baron and I make up a distinctive philosophical group with a shared “mind of our own.” I am dedicating this book to Baron and to our daughters, Isabella and Catherine, for enriching my life beyond words: to Baron, for his unwavering support, witty humor, unparalleled love, and the most powerful intellectual connection; to Isabella, for an irrepressible generosity of spirit, for wisdom and courage far beyond her years, and for a deep companionship that continues to
surprise me; and to Catherine, for her utterly unique and captivating way of looking at the world, for her seemingly endless creativity, and for her adventurous yet completely steady heart.

Introduction In 2005, Volkswagen of America learned that its diesel vehicles could not meet emissions standards in the United States. Rather than lower the actual emissions levels, the auto manufacturer inserted software that reported substantially lower emissions levels during testing than were possible when the vehicles were on the road. Before a team from West Virginia University uncovered this deception, these “defeat devices” were installed in 11 million diesel cars sold worldwide between 2008 and 2015. The result was that the fraudulent test results met emission requirements, but the vehicles “spewed as much as 40 times more pollution from tailpipes than allowed by the U.S. Environmental Protection Agency.”1 Since then, scientists at MIT have found that the excess emissions will cause 60 premature deaths across the United States and 1200 in Europe, with Germany, Poland, France, and the Czech Republic being hit the hardest.2 Michael Horn, who was the CEO of Volkswagen Group of America at the time, testified before the House Energy and Commerce Committee’s oversight and investigations panel in October of 2015 about this emissions-test cheating scandal. In response to challenges from lawmakers, Horn said, “This was a couple of software engineers who put this in for whatever reason. To my understanding, this was not a corporate decision. This was something individuals did.”3 Horn went on to explain that three Volkswagen employees had been suspended as a result of the software that led to the fraudulent test results. In response to Horn’s testimony, Rep. Chris Collins (R-N.Y.), who is himself an engineer, said, “I cannot accept VW’s portrayal of this as something by a couple of rogue software engineers. Suspending three folks—it goes way, way higher than that.”4 Collins continued: “Either your entire organization is incompetent when it comes to trying to come up with intellectual property, and I don’t believe that for a second, or they are complicit at the highest levels in a massive cover-up that continues today.”5 Volkswagen’s response to this scandal lies at one end of the spectrum regarding collective responsibility: the blame is entirely the result of a few individual employees, with none attaching to the corporation itself. At the other end of the spectrum lies a case like this: on March 6, 1984, the United States Department of Defense charged that between 1978 and 1981, the company National Semiconductor had sold them 26 million computer chips that had not been properly tested and then had falsified their records to conceal the fraud.6 These potentially defective chips had been used in airplane guidance systems, nuclear weapons systems, guided missiles, rocket launchers, and other sensitive military equipment. Highlighting the gravity of the situation, a government official noted that if one of these computer chips malfunctioned, “You could have a missile that would end up in Cleveland instead of the intended target.” Officials at National Semiconductor admitted to both the omission of required tests and the falsification of relevant documentation and agreed to pay $1.75 million in penalties for defrauding the government. However, the company refused to provide the names of any of the
individuals who had participated in the decision to omit the tests and falsify the documents, or any who had been involved in carrying out these tasks. The legal counsel for the Department of Defense objected, arguing that “a corporation acts only through its employees and officers” and thus the government would have no assurance that National Semiconductor would not engage in the fraud again. In response, the CEO of National Semiconductor said, “We totally disagree with the Defense Department’s proposal. We have repeatedly stated that we accept responsibility as a company and we steadfastly continue to stand by that statement.” A spokesperson for National Semiconductor later reiterated this position: “We will see [that our individual people] are not harmed. We feel it’s a company responsibility, [and this is] a matter of ethics.” National Semiconductor prevailed: no individual employee was ever held criminally or civilly liable for the crime. Only the company qua company was penalized. In contrast to Volkswagen’s approach, then, National Semiconductor took complete responsibility for defrauding the government at the level of the company and denied that any individual employees were deserving of blame. These two different ways of characterizing collective responsibility are closely connected to a central debate in the literature on the epistemology of groups. On the one hand, deflationary theorists hold that group phenomena, such as group beliefs, can be understood entirely in terms of individual members and their states. According to this approach, the states of collectives are not interestingly different than those of individual knowers, and thus collective epistemology turns out to be largely or completely reducible to individual epistemology. Inflationary theorists, on the other hand, hold that group phenomena are importantly over and above, or otherwise distinct from, individual members and their states. In this way, groups are often said to crucially have “minds of their own.”7 Settling some of the issues in this debate lies at the heart of making sense of attributions of collective responsibility. If National Semiconductor, for instance, is to be treated as an entity over and above its members in bearing responsibility for defrauding the government, then it is essential to determine whether the company believed or knew that required tests were being omitted and relevant documentation falsified and whether it lied to the government about its fraudulent behavior. In the absence of plausible readings of these collective states, it simply wouldn’t make sense to say that National Semiconductor as a corporation bears full responsibility for its actions. A central aim of this book is to make progress in understanding these crucial notions in collective epistemology—group belief, justified group belief, group knowledge, group assertion, and group lies—so as to shed light on whether it is groups, their individual members, or both who ought to be held responsible for collective actions.

0.1 On the Very Existence of Group Beliefs One fundamental type of objection I have frequently heard to a project on the epistemology of groups goes like this: groups are not the proper bearers of epistemic states. Being a knower, or justifiedly believing a proposition, requires belief. Belief is a mental state, and groups don’t have mental states in any robust sense. To the extent that we attribute states of belief or knowledge to groups, such talk is loose or metaphorical. Anthony Quinton, for instance, writes: We do, of course, speak freely of the mental properties and acts of a group in the way we do of individual people. Groups are said to have beliefs, emotions, and attitudes and to take decisions and make promises. But these ways of
speaking are plainly metaphorical. To ascribe mental predicates to a group is always an indirect way of ascribing such predicates to its members. With such mental states as beliefs and attitudes, the ascriptions are of what I have called a summative kind. To say that the industrial working class is determined to resist anti-trade union laws is to say that all or most industrial workers are so minded. (Quinton 1975/1976, p. 17)

There are two different views of group belief suggested in this passage. On the one hand, there is the eliminativist view, according to which it is literally false that groups believe things and hence group belief attributions are simply metaphorical. On the other hand, there is the deflationary view mentioned above, according to which it is literally true that groups believe things, but such claims are made true entirely by individual members of the groups believing things. On either reading, we might say that a book on the epistemology of groups is misguided. Groups are not epistemic agents in their own right because they don’t have proper beliefs and, thus, they don’t have justified beliefs or knowledge. Talk of group beliefs is either metaphorical or fully reducible to the beliefs of individuals. Since any reader of this book who is sympathetic with this line of thought will most likely see no point in moving forward, let me offer a very brief argument right at the outset for rejecting it. I take as a starting position that groups lie. I will say more about this throughout the book, especially in Chapters 1 and 5, but this does not seem particularly contentious. Google “Facebook lies” and a litany of articles comes up. Corporations have been forced to pay literally billions of dollars for lying. After a 2009 headline in Business Insider that reads “Pfizer to Pay $2.3 Billion in Biggest Fine Ever for Deceitful Drug Marketing,” the first line says, “There is not always truth in advertising, but when you really lie, you really pay—especially if you happen to be an enormous drug company.”8 And on a more personal level, imagine that your employer promised to give you research funds while recruiting you, but you then learned after they never materialized that the university said this to you while knowing full well that they did not have the resources to follow through. It seems quite natural for you to say, not at all metaphorically, “My university lied to me.” With this in mind, here is an argument: 1. Groups lie. 2. Group lies cannot be understood without groups having genuine beliefs. 3. Therefore, groups have genuine beliefs. I will discuss group lies in far more detail in Chapters 1 and 5. But very briefly, a lie—whether offered by an individual or a group—just is an assertion that one does not believe oneself that is made with the intention to be deceptive. Indeed, even within debates about the details over how to understand lying, there is consensus that it crucially involves the absence of belief on the part of the liar. Premise (1) is thus widely supported by our social practices, including our notions of moral and criminal responsibility, and (2) follows from every major account of the nature of lying. Finally, all I mean by “genuine” is that such beliefs are neither loose talk nor entirely reducible to the beliefs of individuals. Rather, there is a robust sense in which groups are believers in their own right. Of course, I don’t expect this argument to be fully satisfying at this point. But what I hope it does is get the skeptical reader on board with thinking that a project on the epistemology of
groups is worth exploring.

0.2 The Nature of Groups Another initial question that might be raised about a project on the epistemology of groups is what kind of collective entities will be at issue. Groups obviously come in a variety of forms. At one end of the spectrum, there are highly structured groups with policies, procedures, and robust forms of interaction among the members, such as corporations, universities, juries, and boards; at the other end, there are collections of individuals with no formal structure or interaction among group members, such as left-handed Northwestern students and red-haired New Yorkers. And in between these two ends are groups with varying degrees of structure and interaction, such as governments, scientific communities, Americans, and women. Accounts of group phenomena are often directly shaped by which groups are taken to be paradigmatic. For instance, those who are drawn to restrictive views typically focus on highly structured groups with regular interaction among the members. A particularly clear example of this can be seen in the work of Frederick F. Schmitt who, following Margaret Gilbert, argues that “a set of individuals forms a group just in case the members of the set each openly expresses his or her willingness to act jointly with the other members of the set” (Schmitt 1994, p. 260). This conception of a group requires a high level of conscious interaction among the members, and thus rules out as groups all but very formalized collections. It is, therefore, not surprising that Schmitt frequently uses a jury as a classic example of a group. Moreover, this starting point directly impacts his view of other collective phenomena, such as justified group belief, where he relies crucially on the notion of joint acceptance. Clearly, there is a sense in which many groups, such as the Democratic Party or even Northwestern University, are not even properly positioned for joint acceptance, as they are large, dispersed, and made up of members with varying levels of authority.9 In contrast, those who are interested in more permissive accounts of collective phenomena typically focus on large, unstructured groups, such as those that have information distributed throughout their members. Alexander Bird, for instance, takes the scientific community to be a paradigmatic group in his argument on behalf of a phenomenon that he calls “social knowing,” where group states do not even supervene on the mental states of the individual members.10 Similarly, Søren Harnow Klausen recently argued that: We should allow that the factors which, together with truth (or, in the case of knowing how, some sort of adequacy to the task in question), are necessary and jointly sufficient for group knowledge, can be distributed among the members of the group. A well-known example of genuinely distributed cognition has been provided by Hutchins (1995), who describes how a navy vessel crew is able to navigate successfully through the concerted efforts of many individuals, each of whom carries out a very specialized task and does not necessarily have any knowledge of the contributions of others, nor of the more general tasks or the ways in which the different contributions are merged. I suggest that this kind of example, rather than that of a jury or a board of directors facing a specific decision, should serve as a paradigm of collective knowledge. (Klausen 2015, p. 823)

Here, Klausen begins with a paradigm of a group that not only fails to engage in any sort of collective deliberation, as a jury does, but is also such that the members are wholly unaware both of the actions of the others and of the collective goals. If this is the starting point, then any account that requires joint awareness or activity of any sort will immediately be ruled out.

Attempts have been made in the literature to distinguish these different kinds of groups in ways that have theoretical or practical significance. One common distinction that is drawn here is between established groups and non-established groups, where it is standard to rely on examples of each kind rather than definitions or criteria. Margaret Gilbert, for instance, writes: There are ascriptions of cognitive states to two or more people who are understood to constitute an established group of a specific kind such as a union, court, discussion group, family, and so on. There are also ascriptions of cognitive states to two or more people without any presumption that they constitute an already established group. (Gilbert 2004, p. 96)11

While there are certainly important differences between, say, a court and a collection of left-handed Northwestern students, it is doubtful that “being established” is the most theoretically or practically relevant feature to focus on. Suppose, for instance, that unbeknownst to the left-handed students at Northwestern, I fill out all of the necessary paperwork for them to have official status as a club on campus. Left-handed Northwestern students are now recognized by the university and thus constitute an established group. Surely, however, this official status by itself does not change the group in any deeply significant way. So, the distinction between being established and non-established doesn’t seem to mark a particularly meaningful difference between these two kinds of groups. Christian List and Philip Pettit (2011) have argued that while a corporation is a group, left-handed Northwestern students are a mere collection, and this difference is grounded in whether the collection in question can survive changes of membership. They write: Collections of individuals come in many forms. Some change identity with any change of membership. An example is the collection of people in a given room or subway carriage. Other collections have an identity that can survive changes of membership. Examples are the collections of people constituting a nation, a university or a purposive organization. We call the former “mere collections”, the latter “groups”. Our focus here is on groups. (List and Pettit 2011, p. 44)

Once again, however, it is not clear that an important distinction has been highlighted. On the one hand, a company may not survive the firing of its CEO, a cult may not survive the death of its leader, and a political group may not survive its dictator being overthrown, but there is nonetheless a clear difference between these groups and left-handed Northwestern students. On the other hand, a collection of individuals trying to save a beached whale over the course of 36 hours12 may survive a number of changes in membership,13 yet there may be significant asymmetries between this type of group and a medical association or academic department. So, it seems as though there can be paradigmatic groups unable to survive changes in membership, and classic instances of mere collections that are able to do so. Thus, the ability to survive changes in membership also fails to track a theoretically substantive difference among collective entities. One distinction that has clear significance along a number of dimensions—such as epistemic, moral, legal, and practical—is the group’s ability, or lack thereof, to engage in collective deliberation or reasoning. Such reasoning involves, at a minimum, a sensitivity to evidence, the capacity to engage in belief revision, and being the proper subject of normative evaluation. For instance, school boards often meet to discuss issues relevant to their goals, bouncing ideas off of one another, considering and responding to objections as a group, weighing various pieces of evidence, and revising their collective beliefs as necessary. In contrast, in the absence of unusual circumstances, left-handed Northwestern students simply do not engage in these sorts of activities for several reasons. First, collections of individuals such as left-handed Northwestern
students often simply don’t conceive of themselves as a group and so wouldn’t consider engaging in any sort of reasoning with other members. Second, these sorts of collections are sometimes widely dispersed across even larger groups of individuals, rendering it practically impossible to collectively reason, especially when there aren’t any formal meetings or events at which they can do so. Finally, these groups of individuals are frequently united by features, such as left-handedness, that have very little general interest or value. For ease of expression, let us call those groups capable of engaging in collective reasoning deliberative groups and those that are not non-deliberative groups. There are two points about non-deliberative groups that should be emphasized here. First, that this sort of group is incapable of engaging in collective reasoning is simply a contingent feature that arises in the circumstances in which it is found, one that can certainly change from one minute to the next. For instance, Northwestern administrators may request that all left-handed students gather in an auditorium to discuss whether the campus is suitably sensitive to their needs. Prior to this gathering, the group was incapable of engaging in collective reasoning and was therefore non-deliberative. But as soon as they find themselves in an auditorium identified as sharing a certain salient property, they might then reason together as a group and thus become deliberative. For instance, each student may now share with the others evidence about how many classrooms do not have desks for left-handed students, how many times each student has been inconvenienced because of his or her left-handedness, and so on, and they may decide as a group what their position is regarding whether the campus is appropriately set up for their needs. So, being non-deliberative is not necessarily an enduring property of a group.14 Second, despite the fact that non-deliberative groups cannot engage in collective reasoning, there is a clear sense in which they can nonetheless have group beliefs, albeit ones that are different from those had by their deliberative cousins. For instance, suppose that I am in charge of safety at Northwestern and I wish to determine whether left-handed students regard the campus as properly designed for their needs. To this end, I send out a survey for all left-handed students to fill out and, after receiving the results, I aggregate their judgments via a supermajority procedure. I then report on this basis that left-handed Northwestern students believe that the campus is not suitably sensitive to their particular needs. It is not uncommon to think that there is nothing strained or mistaken about this belief attribution—as a group, left-handed Northwestern students do hold this belief, just not in the same way that a group capable of collective reasoning might do so. While deliberative groups depend for their existence on features such as an appropriate structure, a set of constitutive rules, accepted social integration, and so on, non-deliberative groups can simply be brought into existence through internal or external interest. For instance, someone who is either herself left-handed or is simply interested in those who are may be inspired to survey Northwestern students with this feature, aggregate their responses, and reveal their belief as a non-deliberative group. The same can be said for other collections of individuals, such as red-haired New Yorkers.
This interest brings the group into existence, and the surveying and aggregating reveals their group belief. Unlike the features discussed earlier—such as whether a group is established or can survive changes in membership—the ability to engage in collective reasoning has clear significance. For instance, a collection that is capable of weighing evidence and revising its beliefs as a group can be evaluated as rational and irrational in a way that one that is not so capable cannot. Sure, after conducting the survey of left-handed Northwestern students, the administration may have evidence that these students have beliefs that collectively conflict with one another, and they may
assess them on this basis. But this is an assessment of the beliefs of a collection of individuals rather than of a collective entity. Corresponding to the evaluation of a group being appropriately judged irrational is the responsibility that such an entity might collectively bear for this shortcoming. If a school board is irrational in its belief that the honors program at a local high school ought to be discontinued, then it can be held responsible as a group for this epistemic shortcoming. Relatedly, the school board can then act as a collective agent by deciding to discontinue the honors program and, accordingly, can be held morally and legally responsible for some of the effects that this move has on the community. For instance, the school board can be appropriately deemed self-serving or callous if it is discovered that the decision was made simply for political benefits. Or the school board can be sued if parents regard the cessation of the honors program as failing to provide their children with the educational opportunities that are required by the state. Thus, there is obvious epistemological, metaphysical, moral, legal, and practical value that comes with the ability to engage in collective reasoning. In addition to deliberative and non-deliberative groups, there are what we may call mere collections, which are sets of individuals that share a common feature though one for which there is no interest, either internal or external. This is currently the status of left-handed Northwestern students since neither the members of this collection nor those outside of it have any interest in left-handed students qua being left-handed. As we have seen, this can certainly change. Left-handed students may organize themselves into a deliberative group by meeting weekly to discuss their plight, formalizing rules for making decisions among themselves, and electing a board to officially represent their interests on campus. Alternatively, a right-handed member of the community worried about issues of fairness may convert left-handed students into a non-deliberative group through her interest in them. There is, however, a feature that is even more general—one that may even cut across the deliberative/non-deliberative distinction15—that captures the kinds of groups that will be the focus of this book; namely, being subject to normative evaluation. In particular, I will be interested in those groups that are properly subject to normative assessment, such as praise and blame, along both epistemic and moral dimensions, and the corresponding attributions of responsibility, accountability, and so on. Put succinctly, if we can properly hold a group, G, responsible for φ-ing, then this is sufficient for regarding G as a group in the sense relevant for this project.
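To make the supermajority procedure mentioned above concrete, here is a minimal sketch, in Python, of how a survey of individual answers might be aggregated into a single group-level belief attribution. The survey data, the two-thirds threshold, and the function name supermajority_belief are hypothetical illustrations rather than anything specified in the text.

```python
# Toy sketch (not from the text): aggregating individual survey answers into a
# group-level belief attribution via a hypothetical two-thirds supermajority rule.

from fractions import Fraction

def supermajority_belief(responses, threshold=Fraction(2, 3)):
    """Return True if a supermajority answers True, False if a supermajority
    answers False, and None if neither side clears the threshold."""
    if not responses:
        return None  # no members surveyed, so nothing to aggregate
    yes = sum(1 for r in responses if r)
    no = len(responses) - yes
    if Fraction(yes, len(responses)) >= threshold:
        return True   # attribute belief in the proposition to the group
    if Fraction(no, len(responses)) >= threshold:
        return False  # attribute belief in the negation to the group
    return None       # no supermajority either way: withhold attribution

# Hypothetical survey: does each left-handed student regard the campus as
# suitably sensitive to their needs?
survey = [False, False, True, False, False, False, True, False, False]

verdict = supermajority_belief(survey)
if verdict is False:
    print("Attribute: the group believes the campus is NOT suitably sensitive to its needs.")
elif verdict is True:
    print("Attribute: the group believes the campus IS suitably sensitive to its needs.")
else:
    print("No supermajority: withhold any group belief attribution.")
```

On this made-up survey, seven of nine respondents answer "no," which clears the two-thirds threshold, so the belief that the campus is not suitably sensitive to their needs is attributed to the group as a whole; this is the sense in which a non-deliberative group can be said to hold a belief.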

0.3 Chapter Overviews

This book is divided into five chapters. In Chapter 1, I take up the question: how should we understand the sense in which groups have beliefs? In stark contrast to the quote from Anthony Quinton earlier, the received view in collective epistemology is that group belief must be understood in non-summative or inflationary terms. Such views are motivated by cases that purport to show that a group can be said to properly believe that p, despite the fact that none of its individual members believes that p. If this is true, then group belief cannot be understood, even in part, in terms of the beliefs of individual group members. This is the negative claim of the non-summative view. The positive claim is that group belief should be characterized in terms of something that the members do, and this is typically identified as joint acceptance. Very roughly, group belief is the result of the members jointly agreeing to accept a given proposition as the group’s, even if no member believes it herself.

In this chapter, I challenge this orthodoxy by raising an entirely new objection to this general approach to understanding group belief. I show that joint acceptance accounts crucially lack the resources to be able to explain how groups can lie and bullshit, and, more generally, I argue that group belief cannot be determined by states or processes that are under the direct voluntary control of the members. I also show that such non-summative views countenance as group beliefs states that are riddled with incoherence among the bases of the beliefs of the group members. This leaves group belief without an appropriate mind-to-world direction of fit and renders it unsuitable for proper epistemic evaluation and collective deliberation. In addition, I show that the original cases used to motivate non-summative views can be fully explained without the need to posit group belief. I then go on to develop and defend a new view, which I call the Group Agent Account: group belief is determined in part by relations among the bases of the beliefs of members, where these relations arise only at the collective level, and are crucial especially insofar as the group is able to function as an agent. At the same time, group belief is partly constituted by the individual beliefs of members. In this way, the resulting view is neither strictly summative nor non-summative. In Chapter 2, I turn to the question of justified group belief, which has received surprisingly little attention in the literature. Mirroring the debate regarding group belief, there are those who favor an inflationary approach, where the justificatory status of group belief involves only actions or features that take place at the group level, such as the joint acceptance of reasons. On the other hand, there are those who endorse a deflationary approach, where justified group belief is understood as nothing more than the aggregation of the justified beliefs of the group’s members. In this chapter, I raise new objections to both of these approaches. Against inflationary views, I show that they face what I call the Illegitimate Manipulation of Evidence Problem, according to which accounts that allow the justification of group beliefs to be achieved through wholly voluntary means also permit the evidence available to the group to be illegitimately manipulated, thereby severing the connection between group epistemic justification and truth-conduciveness. Against deflationary views, I argue that they lead to the Group Justification Paradox in which a group ends up counting as justifiedly believing both that p and that not-p. Finally, I develop and defend a positive view of justified group belief that parallels my account of group belief in Chapter 1 in critical respects, which I call the Group Epistemic Agent Account: groups are understood as epistemic agents in their own right, ones that have evidential and normative constraints that arise only at the group level, such as a sensitivity to the relations among the evidence possessed by group members and the epistemic obligations that arise via membership in the group. These constraints bear significantly on whether groups have justified belief. At the same time, however, group justifiedness on the Group Epistemic Agent Account is still largely a matter of member justifiedness, where the latter is understood as involving both beliefs and their bases.
The result is a view that neither inflates nor deflates group epistemology, but instead recognizes that a group’s justified beliefs are constrained by, but are not ultimately reducible to, members’ justified beliefs. In Chapter 3, I take up two quite influential kinds of purported group knowledge that are inflationary and non-summative in nature, and that pose direct challenges to the account of justified group belief developed in Chapter 2. The first, developed and defended in most detail by Alexander Bird, is often referred to as “social knowledge.” A paradigmatic instance of social knowing is taken to be the so-called knowledge possessed by the scientific community, where no
single individual knows a given proposition, but the information plays a particular functional role in the community. The second is “collective knowledge,” which occupies an important place in United States law. According to the “collective knowledge doctrine,” knowledge may be imputed to a group by aggregating bits of information had by its individual members. If these are correct, then my view of justified group belief is false, particularly the requirement that some of the members of a group have the relevant justified beliefs themselves. However, I show in this chapter that both social knowledge and collective knowledge sever the crucial connection between knowledge and action, and open the door to serious abuses, not only epistemically, but morally and legally as well. Bits of information that are merely accessible to group members, or individual instances of knowledge that are aggregated with no communication, do not amount to group knowledge in any robust sense. I conclude, then, that neither social knowledge nor collective knowledge is genuinely group knowledge, and thus neither poses a challenge to my Group Epistemic Agent Account. I turn, in Chapter 4, to understanding what it means for a group to assert a proposition. It is especially crucial to get a grip on this notion for being in a position to hold collectives, such as corporations, morally and legally responsible for what they say. I begin by distinguishing between two kinds of group assertion—coordinated and authority-based—and I argue that authority-based group assertion is the core notion. I then show that a deflationary view of group assertion, according to which a group’s asserting is understood in terms of individual assertions, is misguided. This is the case because a group can clearly assert a proposition even when no individual does. I then develop a positive inflationary view of group assertion according to which it is the group itself that is the asserter, even though this standardly occurs through a spokesperson(s) or other proxy agent(s) having the authority to speak on behalf of the group. This is supported by the fact that paradigmatic features of assertion apply only at the level of the group. A central virtue of my account is that it provides the framework for distinguishing when responsibility for an assertion lies at the collective level and when it should be shouldered by an individual simply speaking for herself. In the final chapter of this book, I take up group lies. Despite the prevalence of group lies and their often far-reaching effects, such as those seen in both the Volkswagen and National Semiconductor cases, there has never before been a philosophical treatment of group lies.16 This chapter begins the process of filling this surprising gap in the literature by focusing on the question of what a group lie is. After providing an account of how to understand individual lies, I consider, first, whether group lies can be understood in terms of the lies of the group’s members and, second, whether group lies can be characterized in terms of joint agreement by the group’s members to lie. After showing both views to be misguided, I offer my own account of group lying, according to which it crucially involves the group offering a statement. In particular, because what a group says can come apart from what its individual members say, I argue that a group might lie when no individual member lies, and a group might fail to lie even though every individual member does. 
A central virtue of my account is that it captures the often subtle and complex relationship that can exist between most groups and their spokespersons. In this way, my view provides the basis for understanding how groups are responsible for their lies, as well as for determining when it is appropriate to trace this responsibility to the individual members of the group and the spokespersons who represent them.

0.4 The Bigger Picture

Let’s return to the two cases we discussed at the start of this Introduction. Straightforward deflationary views of collective phenomena have the resources for holding only individual members of groups responsible for their actions. After all, according to such accounts, only individuals believe, know, assert, lie and so on, and so there quite literally are no states or actions of collective entities to bear attributions of praise and blame. Such a framework accords very well with the way the Volkswagen of America CEO tried handling the emissions-test cheating scandal. There were a few rogue software engineers on this reading, with no responsibility attaching to the corporation itself, and so the suspension of these employees fully handled the matter. But as Rep. Chris Collins (R-N.Y.) and other lawmakers made clear, this response is not only deeply unsatisfying, it is also wildly implausible. Deception of this magnitude undoubtedly involves some degree of culpability at the level of the corporation itself. In particular, installing a device explicitly designed to produce fraudulent test results in 11 million diesel cars sold worldwide cannot occur without complicity or, at the very least, negligence at the highest levels of leadership and oversight. Thus, purely deflationary views of collective epistemic phenomena lack the resources both for providing the correct diagnosis in a case such as this and for holding the relevant parties accountable. Purely inflationary views of collective phenomena, on the other hand, have the resources for holding only groups responsible for their actions, allowing individual members to go entirely or largely blameless. For, on such views, activity that takes place at the collective level determines whether groups believe, know, assert, lie and so on, and so there quite literally are no individual states or actions of this kind that are part of the analysis to shoulder praise and blame. This framework very nicely captures the way that National Semiconductor approached its omission of required tests and the falsification of relevant documentation. The company took full responsibility for defrauding the government and agreed to pay $1.75 million in penalties, but vehemently denied that any blame belonged to individual employees. As the legal counsel for the Department of Defense made clear, however, corporations act only through their employees and officers, and so there is no way that this level of illegal activity occurred without knowledge and involvement on the part of individuals. Hence, strictly inflationary views of collective epistemic phenomena lack the resources both for providing a diagnosis that includes culpable activity on the part of individuals and for holding all of the relevant parties accountable. To be sure, group members might be held accountable for jointly accepting the proposition in question on an inflationary view. But there are at least two ways in which this is inadequate. First, this would distribute responsibility equally, as there is no interesting sense in which some might have jointly accepted more than others. In actual cases, however, it is clear that members often have significantly different roles in group actions and, accordingly, bear different degrees of praise and blame. Second, it is important to have a framework that can account for the responsibility members shoulder in collective action that goes beyond joint acceptance.
Groups often have complicated structures, where those at the highest levels of leadership may avoid joint acceptance altogether but might nonetheless be involved in collective action through more nuanced and harmful ways. The views that I develop and defend in this book avoid all of these pitfalls. Because I argue that group belief and group justified belief involve both the beliefs and epistemic statuses of individual members, and also relations and normative requirements that hold only at the level of the collective, my views are neither purely inflationary nor deflationary. Rather, I provide a framework for distributing responsibility across groups and their individual members. I regard this as a central virtue of this book. Neither Volkswagen nor National Semiconductor got things
right in the attributions of responsibility, and this led to significant pushback and dissatisfaction among those impacted by their responses. We need a framework for fully understanding accountability in such cases, and my views provide this. Moreover, while my views of group assertion and group lies are robustly inflationary, they also have the resources for the proper distribution of responsibility between collectives and their individual members. For example, spokespersons have the authority to speak on behalf of groups, and when they assert or lie in their official capacity as a representative of the group, it is the group itself that asserts or lies. Similar considerations apply to collective action more broadly that is performed through a proxy agent. It is then, quite straightforward, how responsibility attaches to groups on my view. But this relationship between groups and spokespersons provides the resources for also holding individual members accountable. Spokespersons are often chosen by members of the group who bear responsibility for not only ensuring that what is conveyed on its behalf is accurate and well-supported, but also for keeping the spokespersons properly informed. When things go awry along any of these dimensions, there are often particular individuals clearly deserving of blame. In addition, spokespersons are frequently (though crucially not always) members of the groups they represent. The Chair of my Department, for instance, is both a member of our group and our spokesperson when it comes to conversations with the Dean about hiring decisions. We can certainly imagine situations in which my Department is on the hook for a group decision we made but where the Chair, as our spokesperson, deserves greater, or a different sort of, blame. Perhaps she withheld crucial information in our decision-making or inaccurately conveyed our position in conversation with the Dean. On my view, the Department would shoulder the responsibility of its assertion conveyed through the spokesperson, but the Chair could still be uniquely blamed for her role in the process. One final issue that is worth addressing at the outset is this: I argue in the first three chapters of this book on behalf of views of collective phenomena, such as group belief and justified group belief, that include as an epistemic anchor, so to speak, the states of individual members. So, for instance, my accounts of group belief and of justified group belief both require that at least some of the individual members of the group instantiate the states in question. Yet in the last two chapters, I defend views of both group assertion and group lies that are robustly inflationary. In other words, I show that groups can assert and lie when no member of the groups is even aware of the proposition in question. What explains this asymmetry in a way that is not ad hoc? Robustly inflationary views should be adopted only where a group is capable of granting authority to another agent or agent-like entity to do something on its behalf. So, for instance, a group can grant authority to a lawyer to speak on its behalf, to lie on its behalf, to bullshit on its behalf, and to act on its behalf. In all of these cases, then, it will be possible for the group’s actions to be constituted by the actions of another, even when the group itself is entirely ignorant of the matter. Thus, accounts of all of these phenomena will be robustly inflationary in nature. 
In contrast, a group cannot grant authority to another to believe on its behalf, to desire on its behalf, to justifiedly believe on its behalf, or to know on its behalf. Accordingly, accounts of all of these phenomena will require that at least some of the individual members of the group instantiate the states in question.

The Epistemology of Groups. Jennifer Lackey, Oxford University Press (2021). © Jennifer Lackey. DOI: 10.1093/oso/9780199656608.003.0001

1 https://www.chicagotribune.com/news/sns-bc-us--volkswagen-emissions-scheme-20150921-story.html, accessed August 8, 2019.
2 http://news.mit.edu/2017/volkswagen-emissions-premature-deaths-europe-0303, accessed August 8, 2019.
3 https://www.latimes.com/business/autos/la-fi-hy-vw-hearing-20151009-story.html, accessed August 7, 2019.
4 https://www.latimes.com/business/autos/la-fi-hy-vw-hearing-20151009-story.html, accessed August 7, 2019.
5 https://www.washingtonpost.com/news/the-switch/wp/2015/10/07/volkswagens-pulling-the-plug-on-its-2016-americandiesel-cars/, accessed August 7, 2019.
6 This case, including all of the quotations, is discussed in Velasquez (2003).
7 See Pettit (2003).
8 https://www.businessinsider.com/pfizer-to-pay-23-billion-in-biggest-fine-every-for-deceitful-advertising-2009-9, accessed August 7, 2019.
9 Others distinguish between collections of individuals and group agents, with very strong requirements to qualify as the latter. For instance, according to Jesper Kallestrup, “A collective is a group agent only if (i) its individual members intend that the collective act and form attitudes together, i.e. each of these individuals must intend that they together enact the joint performance and come to a group attitude. Moreover, (ii) each must intend to do their part, and (iii) intend to do so because of their belief that others intend to do their bit.…A different but related set of constraints concerns the office of the collective as fixed by its charter. A collective is a group agent only if its (founding) members jointly set up common goals and agree on how to proceed in order to meet them. Both the ends and the means, which are carried out for the purpose of achieving them, are captured by the group’s charter, which is sometimes formally enshrined in a system of laws, other times its existence is evidenced by the practice of the group and its members. When these two sets of constraints are met, a collection of individuals unites in forming a rational agent in its own right” (Kallestrup 2016). It should be clear that many collectives that act together will fail these conditions, such as a group of strangers who work together to save a drowning swimmer. In at least some sense, this collection surely might properly be regarded as a group agent, despite failing Kallestrup’s demanding conditions.
10 See Bird (2010).
11 See also Lahroodi (2007) and Bird (2010).
12 This is a modified example that List and Pettit (2011) use to illustrate a mere collection.
13 For instance, a reward that is given in recognition of the efforts of those saving the beached whale would include all of the people who participated in the rescue at any point. This group would, then, be the subject of praise and other kinds of normative assessment in ways that mere collections of individuals are not.
14 Of course, being deliberative can also go out of existence.
15 I should note that while I am interested in normative evaluation of both deliberative and non-deliberative groups, as a general rule the normative evaluation of deliberative groups tends to be richer and more significant to our broader understanding of the epistemic, moral, and legal landscape.
16 Lackey (2018b) is the only exception.

1
Group Belief
Lessons from Lies and Bullshit

Groups and other sorts of collective entities are frequently said to believe things.1 Sarah Huckabee Sanders, for instance, was asked by reporters at White House press conferences whether the Trump Administration “believes in climate change”2 or “believes that slavery is wrong.”3 Similarly, it is said on the website of the ACLU of Illinois that the organization “firmly believes that rights should not be limited based on a person’s sexual orientation or gender identity.”4 And, according to the Presidential Commission on the BP oil spill in the Gulf of Mexico, both BP and Halliburton believed that there were flaws with the cement used for the well safety device before the Deepwater Horizon explosion.5 These are just a few examples, but there are countless others. Moreover, the importance of understanding these claims is clear, both theoretically and practically. If we do not grasp what it is for a group to hold a belief, then we cannot make sense of our widespread attributions to collective entities6 of actions that they perform, or should have performed, and of the corresponding responsibility that they bear. If BP, for instance, believed that there were problems with a well safety device before the oil spill in the Gulf of Mexico, then the company should have taken actions to repair it and is therefore clearly culpable for the massive environmental damage that ensued. Broadly speaking, there are two approaches to understanding the nature of group belief. On the one hand, there is the summative view, according to which group belief is understood as nothing more than the “summation” of the beliefs of the group’s members. On the other hand, there is the non-summative view, where groups are regarded as entities with “minds of their own” and group belief is conceived of as involving actions that take place at the collective level,7 such as the joint acceptance of a proposition. Despite the initial plausibility of the summative approach, it is now received wisdom in collective epistemology that group belief must be understood in non-summative terms. In this chapter, however, I challenge this orthodoxy by raising new, and what I regard as decisive, objections to this approach to group belief. I then go on to develop and defend a new view, which I call the Group Agent Account: group belief is determined in part by relations among the bases of the beliefs of members, where these relations arise only at the level of the collective, and are crucial especially insofar as the group is able to function as an agent. At the same time, group belief is also largely a matter of the individual beliefs of members. In this way, the resulting view is neither strictly summative nor non-summative.

1.1 Summative and Non-Summative Views of Group Belief

Let’s begin with the traditional summative account, according to which a group’s believing that p can be understood in terms of the individual members of the group believing that p. A conservative version of the summative account (CSA) can be formulated in the following way:

CSA: A group G believes that p if and only if all or most of the members of G believe that p.

The CSA correctly subsumes some classic instances of group belief. For instance, it is plausible to characterize the Northwestern community’s belief that its institution is in Illinois in terms of all—or at least most—of its members holding this belief. However, other common examples of group belief do not seem to fare as well on such a view. Suppose, for instance, that the President of the university issues a statement saying that Northwestern believes that the quarter system is not beneficial to students’ academic success. Suppose further, however, that this belief is held by only a small, yet powerful constituency of the Northwestern community, such as the administration, or an appointed committee that oversees this matter. It may still be appropriate to attribute the belief that the quarter system is not beneficial to students’ academic success to the Northwestern community, despite the fact that neither all nor most of its members holds this belief. Because of cases such as this, summative accounts are typically formulated more liberally (LSA) as follows:

LSA: A group G believes that p if and only if some of the members of G believe that p.

On this version of the summative view, it may be appropriate to attribute belief that p to G even if only one of its members believes that p. For instance, perhaps it is sufficient that the President of the university alone believes that the quarter system is not beneficial to students’ academic success to properly attribute this belief to Northwestern. This may also be true when a CEO holds a belief within a company, a leader holds a belief within her cult, and a dictator holds a belief within her nation. Even with this modification, however, the summative account of group belief is said to suffer from a debilitating objection. In particular, it is argued that a group can be properly said to believe that p, even when not a single one of its members believes that p. A classic example of this sort of case is where a group decides to let a view “stand” as that of the group’s, despite the fact that none of its members actually holds the view in question. For instance, consider the following: PHILOSOPHY DEPARTMENT: The

philosophy department at a leading university is deliberating about the final candidate to whom it will extend admission to its graduate program. After hours of discussion, all of the members jointly agree that Jane Smith is the most qualified candidate from the pool of applicants. However, not a single member of the department actually believes this; instead, they all think that Jane Smith is the candidate who is most likely to be approved by the administration. Here, it is argued that the philosophy department believes that Jane Smith is the most qualified candidate for admission, even though none of the members holds this belief. This attribution is

supported by the group’s actions: the group asserts that Jane Smith is the most qualified candidate, it defends this position, it heavily recruits her to join the department, and so on. This is taken to show that individual belief that p on the part of even one of the group members is not necessary for the group’s believing that p.8 There are different ways in which the sort of scenario found in PHILOSOPHY DEPARTMENT can come about. A standard route is through compromise. If, say, half of the members of a group believe that candidate x is the best, and the other half believe candidate y is, they might compromise and put forward candidate z as their top candidate. Or suppose that one member of a company believes that the appropriate minimum age for employment is 18 and another believes it is 16. The group might adopt the position that it is 17. Another common way for a case such as PHILOSOPHY DEPARTMENT to arise is through the following of externally imposed rules. For instance, a jury might come to the conclusion that a defendant is innocent because it was instructed to exclude all hearsay evidence, but each individual juror might nonetheless believe that he is guilty. Similarly, an evaluating panel might deliver the verdict that a submitted study is unpublishable because it does not rise to the exceedingly high standards of the journal in question, but each member of the group might personally believe that it is publishable. A further way for the scenario in PHILOSOPHY DEPARTMENT to arise is through pragmatic considerations. This is one way to understand the case above, where the members of the philosophy department put forward Jane Smith as the top candidate because they believe she is the most likely to be approved by the administration. Or suppose that a group of political leaders puts forward views, not because any of the individuals believe them, but because collectively they regard these positions as increasing their chances of being voted back into office. Opponents of the summative account of group belief also hold that a group can be properly said to not believe that p, even when every single one of its members believes that p. Hence, it is argued that individual belief that p on the part of all of the members of a group is not sufficient for the group’s believing that p. Consider the following: PHILOSOPHY DEPARTMENT2:

The same philosophy department that is deliberating about the final candidate to whom it will extend admission to its graduate program is also such that every single one of its members believes that the best red pepper hummus in Chicago can be found at Whole Foods. Despite the unanimity of individual belief in such a case, it is argued that it is not correct to say that the philosophy department believes that the best red pepper hummus in Chicago can be found at Whole Foods. This is because assessment of red pepper hummus is entirely irrelevant to the goals and purposes of the group.9 These problems have motivated the now widely accepted non-summative account of group belief, according to which a group’s believing that p is irreducible to some or all of its members believing that p. Such a view holds that in some very important sense, the group itself believes that p, where this is understood as over and above, or otherwise distinct from, any individual member believing that p. There are two central versions of non-summativism. The first and perhaps most widely accepted is what we may call the joint acceptance account (hereafter, JAA), a prominent expression of which is offered by Margaret Gilbert in the following passage: JAA: A group G believes that p if and only if the members of G jointly accept that p.

The members of G jointly accept that p if and only if it is common knowledge in G that the members of G individually have intentionally and openly expressed their willingness jointly to accept that p with the other members of G. (Gilbert 1989, p. 306)10 A key aspect of such an account is that joint acceptance does not require belief on the part of a single member of the group in question. She writes: It should be understood that: (1) Joint acceptance of a proposition p by a group whose members are X, Y, and Z, does not entail that there is some subset of the set comprising X, Y, and Z such that all the members of that subset individually believe that p. (2) One who participates in joint acceptance of p thereby accepts an obligation to do what he can to bring it about that any joint endeavors…among the members of G be conducted on the assumption that p is true. He is entitled to expect others’ support in bringing this about. (3) One does not have to accept an obligation to believe or to try to believe that p. However, (4) if one does believe something that is inconsistent with p, one is required at least not to express that belief baldly. (Gilbert 1989, pp. 306–7) Thus, according to Gilbert’s non-summative view, so long as a group jointly accepts that p in the way described above, such a group is said to believe that p.11 On a joint acceptance account of a group’s believing that p, then, it is neither necessary nor sufficient that some of its individual members believe that p. It is not necessary because joint acceptance by the group members does not require individual belief on their part, and it is not sufficient because individual belief by the group members does not involve their joint acceptance of the proposition in question.12 According to this account, then, the philosophy department in the first case above believes that Jane Smith is the most qualified candidate for admission, even though none of the members hold this belief, precisely because they jointly agree to let this position stand as the group’s. Moreover, the philosophy department in the second case does not believe that the best red pepper hummus in Chicago can be found at Whole Foods, even though every single member of the group possesses this belief, because the members never jointly accepted such a claim. Thus, such an account delivers the correct intuitive result in both instances.13 There is, however, an immediate problem facing the version of the joint acceptance account proposed by Gilbert: groups are often large, with committees or boards that are appointed to make decisions on behalf of the group as a whole. For instance, consider the following: MEDICAL ASSOCIATION: The

Board of Directors of the American Academy of Pediatrics convenes and decides that its official position is that there are significant health benefits to circumcision, which it proceeds to publish in all of its relevant materials.14 Despite this, all of the doctors who are members of the American Academy of Pediatrics recognize that the evidence is inconclusive, and so have some lingering doubts that prevent them from individually holding this belief. As it stands, the JAA cannot countenance group belief in MEDICAL ASSOCIATION since the members of the American Academy of Pediatrics fail to jointly accept that there are significant health benefits to circumcision. In particular, only a very small percentage of the group’s members— namely, the Board of Directors—satisfies the requisite joint acceptance condition. This structure

of a collective entity is quite commonplace: groups are often vast, rendering it practically difficult if not impossible to have each member engage in any sort of joint activity. Thus, a smaller, more manageable body is either elected or appointed to represent and make decisions for the larger group. Given that MEDICAL ASSOCIATION is certainly in the spirit of precisely those sorts of cases that the joint acceptance view of group belief was designed to accommodate, the JAA requires modification. Raimo Tuomela proposes a different version of the joint acceptance account that is formulated to avoid exactly the worries found with the JAA. Specifically, he offers the following:

JAA2: G believes that p in the social and normative circumstances C if and only if in C there are operative members A1,…, Am of G in respective positions P1,…, Pm such that:
(1’) the agents A1,…, Am, when they are performing their social tasks in their positions P1,…, Pm and due to exercising the relevant authority system of G, (intensionally) jointly accept that p, and because of this exercise of authority system, they ought to continue to accept and positionally believe it;
(2’) there is a mutual belief among the operative members A1,…, Am to the effect that (1’);
(3’) because of (1’), the (full-fledged and adequately informed) non-operative members of G tend tacitly to accept—or at least ought to accept—p, as members of G; and
(4’) there is a mutual belief in G to the effect that (3’). (Tuomela 1992, pp. 295–6)

Tuomela’s account of group belief is quite similar to Gilbert’s, though crucially he requires only that “operative members” engage in the joint acceptance of the proposition in question. Operative members, according to Tuomela, are those who are responsible for the group belief having the content that it does which, in turn, is determined by the rules and regulations of the group in question. For instance, in the case of a corporation or a large company, the board of directors may be the operative members while the employees who work on the assembly line or in the housekeeping department may be non-operative members. Given this amendment, the JAA2, unlike the JAA, delivers the verdict that the American Academy of Pediatrics believes that there are significant health benefits to circumcision in MEDICAL ASSOCIATION since the Board of Directors is obviously comprised of operative members in the relevant sense.15 In what follows, then, I will take the JAA2 as the paradigmatic joint acceptance account of group belief.16 The second version of non-summativism commonly accepted in the literature is what we might call the premise-based aggregation account (hereafter, PBAA), a central proponent of which is Philip Pettit. Like other non-summativists, Pettit grounds his view in the argument that a group can be properly said to believe that p, even when not a single one of its members believes that p. Unlike other views, however, he locates his project within a judgment aggregation framework. “Aggregation procedures are mechanisms a multimember group can use to combine (‘aggregate’) the individual beliefs or judgments held by the group members into collective beliefs or judgments endorsed by the group as a whole” (List 2005, p. 25).17 For instance, a dictatorial procedure, “whereby the collective judgments are always those of some antecedently fixed group member (the ‘dictator’)” (List 2005, p. 28) understands the belief of a group in terms of the beliefs of a single member—the dictator.
A majority procedure, “whereby a group judges a given proposition to be true whenever a majority of group members judges it to be true,”

understands the belief of a group in terms of the beliefs of a majority of its individual members (List 2005, p. 27). These are simply two examples of judgment aggregation procedures; as we will soon see, there are certainly others. With this in mind, Pettit asks us to consider the following case: FACTORY: The

employees of a factory are deciding whether to forgo a pay-raise in order to spend the saved money on implementing a set of workplace safety measures. The employees are supposed to make their decision on the basis of considering three separable issues: “first, how serious the danger is; second, how effective the safety measures that a pay-sacrifice would buy is likely to be; and third, whether the pay-sacrifice is bearable for members individually. If an employee thinks that the danger is sufficiently serious, the safety measure sufficiently effective, and the pay-sacrifice sufficiently bearable, he or she will vote for the sacrifice; otherwise he or she will vote against” (Pettit 2003, p. 171). Imagine now that the factory’s three employees vote in the following way:
[Voting table: each of the three employees answers “Yes” to two of the three questions and “No” to the third (a different question in each case), and so votes “No” on the pay-sacrifice; each premise column thus shows a two-to-one majority of “Yes” votes, while the conclusion column is unanimously “No”.]
In FACTORY, all three members of the group believe that the pay sacrifice should not be made since each individual votes “No” in the conclusion column. However, the group itself might decide to arrive at their collective belief via a premise-based aggregation procedure, whereby the group’s belief is determined by the majority of votes found in the premise columns. According to Pettit, if the group belief is determined by how the members vote on the premises, then the group conclusion is to accept the pay sacrifice since there are more “Yes”s than “No”s in each of the premise columns. In such a case, “the group will form a judgment on the question of the paysacrifice that is directly in conflict with the unanimous vote of its members. It will form a judgment that is in the starkest possible discontinuity with the corresponding judgments of its members” (Pettit 2003, p. 183). This divergence between the belief of a group and the beliefs of its members has come to be known as the doctrinal paradox or the discursive dilemma and it has motivated Pettit to conclude that groups are intentional subjects that are distinct from, and exist “over and beyond,” their individual members. He writes: “These discontinuities between collective judgments and intentions, on the one hand, and the judgments and intentions of members, on the other, make vivid the sense in which a social integrate is an intentional subject that is distinct from its members” (Pettit 2003, p. 184). According to Pettit, then, a group’s believing that p can be understood in terms of combining the majority of individual beliefs that p held by a group’s members via a premise-based aggregation procedure, so long as the collection of individuals is in fact a group. Moreover, given the considerations raised in the context of discussing the joint acceptance account, we can add that the members in question should be operative ones. Thus, a more precise formulation of the view is: PBAA: A group G believes that p if and only if the majority of the operative members’ votes in

the premise columns are that p.18 I now want to turn to two phenomena that have never before been discussed in the collective epistemology literature; namely, group lies and group bullshit.
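To make the contrast between these procedures concrete, here is a minimal sketch (mine, not Pettit’s or List’s, written in Python purely for illustration) of a conclusion-based majority rule and a premise-based rule applied to a FACTORY-style profile. The particular assignment of which employee rejects which premise is an assumption made for the sake of the example; the case fixes the overall pattern, not the assignment.

```python
# Illustrative sketch (not from the text): conclusion-based vs premise-based
# aggregation on a FACTORY-style profile. Each employee endorses the conclusion
# (accept the pay-sacrifice) only if they endorse all three premises.

votes = {
    # (serious danger?, effective measure?, bearable sacrifice?)
    "Employee A": (True, True, False),
    "Employee B": (True, False, True),
    "Employee C": (False, True, True),
}

def individual_conclusion(premises):
    # An individual accepts the conclusion only if they accept every premise.
    return all(premises)

def majority(judgments):
    # Simple majority over a list of True/False judgments.
    return sum(judgments) > len(judgments) / 2

def conclusion_based(profile):
    # The group conclusion is the majority of the members' own conclusions.
    return majority([individual_conclusion(p) for p in profile.values()])

def premise_based(profile):
    # The group first settles each premise by majority vote, then draws the
    # conclusion from the collectively endorsed premises.
    n = len(next(iter(profile.values())))
    group_premises = [majority([member[i] for member in profile.values()])
                      for i in range(n)]
    return individual_conclusion(group_premises)

print(conclusion_based(votes))  # False: every employee rejects the conclusion
print(premise_based(votes))     # True: each premise enjoys a 2-1 majority
```

On this profile the conclusion-based rule simply returns the members’ unanimous individual verdict, while the premise-based rule returns the opposite group verdict; this is the divergence that the doctrinal paradox trades on.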

1.2 Group Lies and Group Bullshit

To begin, I take the following to be a paradigmatic group lie: TOBACCO COMPANY: Philip

Morris, one of the largest tobacco companies in the world, is aware of the massive amounts of scientific evidence revealing not only the addictiveness of smoking, but also the links it has with lung cancer and heart disease. While the members of the board of directors of the company believe this conclusion, they all jointly agree that, because of what is at stake financially, the official position of Philip Morris is that smoking is neither highly addictive nor detrimental to one’s health, which is then published in all of their advertising materials. Since it is not my purpose in this chapter to provide an account of lying, I will simply use the one that I will argue for in Chapter 5:19 LIE: A lies to B if and only if (1) A states that p to B, (2) A believes that p is false, and (3) A intends to be deceptive20 to B with respect to whether p in stating that p. Hence, in order for a group, G, to lie, I will assume that all three of these conditions need to be satisfied: G must state that p where G believes that p is false, and G must have the deliberate intention to be deceptive.21 While there are many questions that can be asked about the nature of group belief, group intention, and so on, that both (1) and (2) are satisfied by Philip Morris in TOBACCO COMPANY is at least plausible. This is prima facie supported by the fact that a cursory internet search of “tobacco company and lies” brings up a litany of articles and websites defending precisely the verdict that tobacco companies have lied to the public. For instance, in a 2005 article in the Los Angeles Times, it was reported that “Jurors in Los Angeles County Superior Court found that Philip Morris had concealed information about the risks and addictiveness of smoking, with deliberate intent to defraud smokers such as Fredric Reller of Marina del Rey, who died in September 2003 at the age of 64.…In Fredric Reller’s videotaped deposition, ‘he admitted that he was ashamed and embarrassed that he had believed Philip Morris’s lies and deceit that there was no valid scientific proof that their cigarettes caused lung cancer.’”22 I take it, then, that the scenario described in TOBACCO COMPANY, which resembles the actual case of Philip Morris in some crucial respects, is precisely the sort of scenario in which we feel comfortable attributing a lie to a group. Given that it is clear that there are group lies, I now want to propose the following desideratum for any plausible account of group belief: Group Lie Desideratum: An adequate account of group belief should have the resources for

distinguishing between, on the one hand, a group’s asserting its belief that p and, on the other hand, paradigmatic instances of a group’s lying regarding that p. I do not think that the Group Lie Desideratum needs much argument: if an account of group belief cannot discriminate between paradigmatic instances of group belief and clear instances where group belief is absent, the account is fundamentally misguided. But the problem here is not just that a view’s inability to satisfy the Group Lie Desideratum reveals its deep failure to capture the nature of group belief. There are also important moral and legal reasons for wanting to hold groups, such as corporations, businesses, and governments, responsible both for their lies and for the consequences that follow from them. In a case such as TOBACCO COMPANY, for instance, it is not just an intellectual curiosity whether an account of group belief gets the verdict right—it also matters so that we can properly hold Philip Morris morally and legally responsible for its lies about the health risks involved in smoking and the deaths that resulted from them. With this in mind, I will now show that the two dominant non-summative accounts of group belief in the literature—the JAA2 and the PBAA—are incapable of satisfying the Group Lie Desideratum. Let’s begin with the joint acceptance account. The first point to notice is that, according to the JAA2, Philip Morris believes that smoking is neither highly addictive nor detrimental to one’s health in TOBACCO COMPANY. The operative members of the company—namely, the board of directors—not only jointly accept this proposition, but also do so through the exercise of their authority and with mutual belief. Moreover, given that the power possessed by the board of directors is part of the very structure of the company, the non-operative members of the group tacitly accept this proposition and do so with awareness. Thus, conditions (1’) through (4’) are satisfied by Philip Morris, thereby resulting in the group believing that smoking is neither highly addictive nor detrimental to one’s health. Given that the scenario described in TOBACCO COMPANY is a paradigm of a group lying, yet the JAA2 regards it as a perfectly ordinary instance of a group reporting its belief, the joint acceptance account of group belief is not only incorrect, but fundamentally so. There is a sense in which this conclusion should not come as a surprise: the situation described in TOBACCO COMPANY is nearly identical in structure to that found in MEDICAL ASSOCIATION, with the exception that the motivation for the joint acceptance in the former case is financial gain while that in the latter case is the overall health of the nation. Since nonsummative accounts of group belief do not place any conditions on the motivation for the joint acceptance in question, this difference is simply silent on whether the state in question is a group lie or the reporting of a group belief. Otherwise put, TOBACCO COMPANY describes a paradigmatic group lie while MEDICAL ASSOCIATION describes a paradigmatic group belief for the proponent of the joint acceptance account, yet such states are indistinguishable on such a view.23 Once we see this, it becomes clear that there are other phenomena in the neighborhood of group lies that the joint acceptance account also fails to distinguish from group belief. 
For instance, while Harry Frankfurt’s notion of bullshit has, as far as I know, never been discussed in connection with collective entities, it nonetheless seems clear that groups can bullshit just as individuals can. For instance, consider the following: OIL COMPANY: After

the oil spill in the Gulf of Mexico, BP began spraying dispersants in the clean-up process that have been widely criticized by environmental groups for their level of toxicity. In response to this outcry, the executive management team of BP convened and its

members jointly accepted that the dispersants being used are safe and pose no threat to the environment, a view that was then made public through all of the major media outlets. It turns out that BP’s executive management team arrived at this view with an utter disregard for the truth—it simply served their purpose of financial and reputational preservation. The scenario in OIL COMPANY is a classic instance of what we may call group bullshit, which Frankfurt describes in the individual case as follows: It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it. When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose. (Frankfurt 2005, pp. 55–6) Whereas the group in TOBACCO COMPANY believes that smoking is highly addictive and detrimental to one’s health, but then asserts that this is not the case with the deliberate intention to deceive, the group in OIL COMPANY simply fails to believe that the dispersants they are using are safe and pose no threat to the environment, but then asserts that this is the case solely to serve their purposes. In both the former case of a group lie and the latter instance of group bullshit, however, the relevant group belief seems to be absent. Yet, as was the case in TOBACCO COMPANY, the joint acceptance account regards the group state in OIL COMPANY as a straightforward instance of group belief. Once again, this is clear not only because the operative members of BP— namely, the executive management team—jointly accept that the dispersants they are using in the gulf oil spill are safe and pose no threat to the environment, but also because the other conditions of the JAA2 are plausibly satisfied as well. So, the joint acceptance account delivers the verdict that there is the group belief that p, both in paradigmatic cases where the group believes that notp and in paradigmatic cases where the group simply fails to believe that p. This motivates a second desideratum of an adequate account of group belief, which we can characterize as follows: Group Bullshit Desideratum: An adequate account of group belief should have the resources for distinguishing between, on the one hand, a group’s asserting its belief that p and, on the other hand, paradigmatic instances of a group’s bullshitting that p. Since both a lie and bullshit undeniably involve the absence of belief, the satisfaction of both the Group Lie Desideratum and the Group Bullshit Desideratum are non-negotiable for a tenable account of group belief. To my mind, the fact that the joint acceptance account is incapable of meeting these desiderata is a decisive objection to this conception of group belief. It is worth pointing out that these particular objections—involving group lies and group bullshit—have not been previously raised to a joint acceptance account of group belief. The

standard problem raised against such a view is that paradigmatic instances of group belief function differently in important ways than beliefs in individuals do. For instance, it has been argued by K. Brad Wray, A.W.M. Meijers, and Raul Hakli that group belief is far more directly voluntary than it is in the individual case.24 Consider, again, MEDICAL ASSOCIATION, which involves the board of directors of the American Academy of Pediatrics jointly agreeing that there are significant health benefits to circumcision. When group belief is determined by the official position arrived at by a decision-making body in this way, the members can simply decide that the group believes that p, whereas individuals do not seem capable of just deciding to believe that p in this way.25 Similarly, it has been argued that group belief in this sense is far less governed by evidence than it is when an individual’s doxastic states are at issue. For instance, K. Brad Wray claims that groups, unlike individual agents, can choose to believe based on their goals and Christopher McMahon contends that groups often defend as true positions that they adopt for purely instrumental reasons.26 In MEDICAL ASSOCIATION, the goal of the AAP may be to produce the best health for the greatest number of children, and so the board of directors may choose to downplay their personal doubts in an effort to further this broader aim. This way of belief formation, it is argued, is unavailable in the individual case, where doxastic attitudes are far more directly sensitive to evidence. Of course, responses to these objections can and have been offered on behalf of the joint acceptance account. For instance, while group belief on this view may be more voluntary than individual belief, this may simply be a matter of degree rather than of kind. For individuals surely have voluntary control over methods of belief acquisition, sensitivity to evidence, and so on, all of which directly affect which beliefs are formed. Moreover, it has been questioned whether we should expect features of individual phenomena, such as belief, to always be possessed by their collective counterparts.27 Perhaps because entities at the individual and collective levels are so different, it shouldn’t be surprising for them to have radically different properties. Finally, individuals often do have beliefs for purely instrumental reasons. A politician, for instance, may believe that he is doing what is best for the country because it is expedient, better for his image, and so on. My central point here, however, is not to evaluate these objections in any sort of detail but, rather, to point out that the inability of the joint acceptance account to satisfy the Group Lie and the Group Bullshit Desiderata has not been noticed in the literature on group belief. Moreover, while responses have been offered to the classic objections to this view, I will now show that there are not plausible responses to be offered to this inability. First, a proponent of the joint acceptance account of group belief may attempt to resist my conclusion by arguing as follows: if the central difference between TOBACCO COMPANY and OIL COMPANY, on the one hand, and MEDICAL ASSOCIATION, on the other hand, lies with the motivation for the joint acceptance in question, why can’t a condition simply be added to the JAA2 requiring a certain kind of motivation needed for group belief? What might such a condition look like? 
It cannot simply require that the joint acceptance not be motivated by the intention to deceive, for such an intention is lacking in instances of group bullshit and yet there is still the absence of group belief. It also cannot require that the joint acceptance not be motivated by an utter disregard for the truth, for, as Frankfurt says above, the liar is respectful of the truth—it is just that this respect is used to conceal the truth from the liar’s audience. It may be better, then, to add a positive condition: perhaps the joint acceptance needs to be motivated by a sensitivity to the truth, or to the available evidence, or to some other

epistemically proper feature. This proposal, however, is doomed to failure for at least two reasons. First, the motivation for the joint acceptance in MEDICAL ASSOCIATION is the overall health of the nation, not a sensitivity to an epistemically significant property. Given that the proponent of the joint acceptance view regards MEDICAL ASSOCIATION as a classic case of group belief, it would hardly help the view to add a condition to group belief that the American Academy of Pediatrics would fail to satisfy. Second, wishful thinking can certainly produce belief, both at the individual and at the group level, but clearly a positive epistemic requirement would not be satisfied here. In particular, belief that results from wishful thinking is not sensitive to an epistemically proper feature, despite the fact that it is undeniably a belief. A second strategy that the proponent of the joint acceptance account might take for resisting my objection is to flesh out a way of allowing for group lies and group bullshit within the framework of the view. Here is how it might go for the former: suppose that when a given group deliberates about the question whether p, the members jointly accept that p, but then also jointly agree to spread it about that not-p with the deliberate intention to deceive the public. Thus, their joint acceptance that p amounts to group belief on the JAA2. This, combined with their agreement to convey that not-p with the intention to deceive, results in both conditions of the traditional conception of lying being satisfied. It is, then, possible to distinguish between group belief and group lies on the joint acceptance account. Although this scenario as described is certainly possible, so, too, is it possible for the members of a group to move directly to jointly agreeing that not-p and then spreading this about to the public with the deliberate intention to deceive, as is done in TOBACCO COMPANY. It may, of course, be obvious that all of the individuals in the group believe that p, but it clearly doesn’t follow from this that the group also believes that p, given the non-summative nature of the JAA2. Thus, TOBACCO COMPANY still represents a paradigmatic instance of group lying that is not explainable by the joint acceptance account. The situation is even worse in the case of group bullshit, where there do not seem to be any resources within the joint acceptance account for distinguishing it from group belief. For if a group jointly accepts one thing but then agrees to report another, this simply collapses into a group lie. If the group instead jointly accepts a proposition with an utter disregard for the truth, this simply turns out to be a classic case of group belief for the proponent of the JAA2. There is simply no room in between to account for group bullshit.28

Let us now turn to the premise-based aggregation account of group belief and evaluate how this non-summativist view fares with respect to the Group Lie Desideratum and the Group Bullshit Desideratum. To begin, consider the following case: JUDGMENT AGGREGATION TOBACCO COMPANY (or JA-TOBACCO COMPANY): The board members of Philip Morris are discussing whether cigarette smoking is safe to the health of smokers. The board members are supposed to make their decision on the basis of considering three separable issues: first, whether the available evidence supports the conclusion that smoking is not connected to lung cancer; second, whether there is reason to think that smoking does not cause emphysema; and third, whether there is data supporting that there is not a link between smoking and heart disease. If a board member thinks that the evidence supports that smoking is not connected to cancer, does not cause emphysema, and is not linked to heart disease, he or she will vote that smoking is safe to the health of smokers; otherwise he or she will vote that it is not. The board members vote in the following way:
[Voting table: a majority of the board members votes “Yes” in each of the three premise columns, while every board member votes “No” in the conclusion column, that is, votes that smoking is not safe to the health of smokers.]

After the voting, the board members decide that, because of what is at stake financially, Philip Morris will publish in all of their advertising materials that smoking is safe to the health of smokers. Following Pettit, one way of determining the group’s belief in JA-TOBACCO COMPANY is via a premise-based aggregation procedure. On this account, there are more “Yes”s than “No”s in each of the premise columns, so the group believes that cigarette smoking is safe to the health of the smokers. Indeed, Pettit’s very solution to the conflict between the individual beliefs and the group belief in the original case is to conclude that while the group believes that the company should forgo a pay-raise in order to spend the saved money on implementing a set of workplace safety measures, no single individual employee believes this. Similarly, then, the conclusion in JA-TOBACCO COMPANY should be that while the group believes that cigarette smoking is safe to the health of smokers, no single individual board member of Philip Morris believes this. When the company then reports in their advertising materials that smoking is safe, they are simply reporting the belief of the group. But doesn’t this leave us with the same problem afflicting the joint acceptance account: namely, that the situation in JA-TOBACCO COMPANY intuitively appears to be a paradigmatic group lie, and yet the view at issue countenances it as a standard instance of reporting a group belief? To make this case even stronger, we can imagine that the individual votes of “Yes” in the premise columns are motivated at least in part by economic considerations, though not ones incompatible with belief. For instance, perhaps the board members were inclined to look for conclusive or definitive evidence linking smoking with lung cancer, emphysema, or heart disease before voting “No” in one of the premise columns. Were the economic advantages of selling cigarettes not present, we can imagine that their standards for believing negatively would have been lower. Given this, the very fact that there are more “Yes”s than “No”s in each of the premise columns in JA-TOBACCO COMPANY is in large part the result of the financial gain promised by minimizing the health risks of smoking. So, while each board member individually believes that smoking is detrimental to the health of smokers, the collective view of Philip Morris is that it is safe and this group belief is explainable by the company’s desire for economic benefits. When the company then publishes this view in all of its advertising materials—again for financial gain —this appears to be a classic example of a group lie, and yet the PBAA regards it as a straightforward instance of reporting a group belief. If this is still doubted, imagine the board members sitting in the conference room at Philip Morris looking at the table showing the results of their votes. Each knows that he or she individually believes that smoking is not safe to the health of smokers and yet each also agrees that Philip Morris should publish in their advertising materials that smoking is safe. Moreover, assume that it is also clear to each board member that the decision to publish the view that smoking is safe to the health of smokers is made so as to avoid the risk of massive financial loss. That this latter choice is sufficient for satisfying the “intention to deceive” component of lying

should be clear if it is noted that the board could have decided to instead report that the data regarding smoking and various diseases is mixed or inconclusive. Thus, I take it as undeniable that in JA-TOBACCO COMPANY, we find a paradigmatic group lie, despite the PBAA’s verdict that a group belief is present. Note that we would hardly regard it as sufficient to combat accusations of Philip Morris’s moral and legal responsibility for the smoking-related deaths of people for the board members to respond that they were using a premise-based aggregation procedure in arriving at their collective view and hence actually believed that smoking is safe. And while attributions of moral and legal responsibility may not always track beliefs, they at least do so frequently, and thus they provide even further reason for concluding that the group is lying in JA-TOBACCO COMPANY. Of course, it may be immediately asked why we wouldn’t simply aggregate the judgments of the board members in JA-TOBACCO COMPANY via a different procedure. For the problem is generated in the first place by relying on a premise-based aggregation procedure. But in addition to the dictatorial and majority procedures mentioned above, there are, among others, a supermajority procedure, whereby a group believes a given proposition to be true whenever a supermajority of group members believes it to be true; a unanimity procedure, “whereby the group makes a judgment on a proposition if and only if the group members unanimously endorse that judgment” (List 2005, p. 30); and a conclusion-based procedure, whereby the group’s belief is determined by the majority of votes found in the conclusion columns. Clearly, if a conclusion-based aggregation procedure is used in JA-TOBACCO COMPANY, then the result is that each board member and the group as a whole believe that smoking is not safe to one’s health. There are, however, at least two problems with this move. First, there is nothing in the judgment aggregation view that rules out using the premise-based aggregation procedure or dictates the use of a conclusion-based rule in a case such as JA-TOBACCO COMPANY. Thus, it can simply be stipulated in the case that the group’s view will be determined by its votes on the premises, perhaps because the board members agreed upon this strategy from the outset, or because they decided to aggregate this way after seeing the results of the voting, or because it is written into Philip Morris’s bylaws that group beliefs will be grounded in the members’ beliefs about the premises, or because this is the most promising way of achieving the rationality of the group. Second, recall the dialectic of the chapter: I am arguing that non-summative accounts of group belief lack the resources for accounting for group lies and group bullshit. Out of the relevant aggregation procedures, the only one that supports non-summativism is the premise-based rule. For each of the others understands the belief of the group in terms of the beliefs of some individual or set of members, for example, the dictator, the majority of the members, the supermajority, and so on. If Pettit were to respond to my challenge that non-summativism cannot accommodate group lies by proposing the use of a procedure that supports summativism, this would hardly save the account. Let us now turn to the PBAA’s ability, or lack thereof, to adequately explain group bullshit. There are two scenarios to consider here, each with a different outcome.
On the one hand, if the proponent of the PBAA countenances group belief in cases where individual members of a collective entity vote positively on an issue despite not personally holding the belief, then problematic instances of group bullshit immediately arise. For we can simply imagine JA-TOBACCO COMPANY exactly as it is described, except that each of the board members votes with an utter disregard for the truth of the claims. When the judgments are then aggregated via a premise-based procedure and it is reported to the public that smoking is safe solely for financial gain, the PBAA regards this as a straightforward case of a group asserting its belief when the more

plausible verdict is that Philip Morris is simply bullshitting the public. On the other hand, if a proponent of the PBAA countenances group belief only in cases where individual members of a collective entity vote on an issue because they personally hold the belief in question, then we can again understand JA-TOBACCO COMPANY exactly as it is described, except that the decisions to use a premise-based aggregation procedure and to report the result that smoking is safe are made without any regard for the truth. This appears to be a classic example of group bullshit, and yet the PBAA regards it as a straightforward instance of the report of a group belief. I should emphasize that my claim, as was the case with respect to the joint acceptance account, is not that there aren’t any conceivable scenarios in which the PBAA could plausibly explain an instance of a group lie or of group bullshit. Here is one: suppose that in JA-TOBACCO COMPANY, every member of the board at Philip Morris votes “No” in each of the above premise and conclusion columns. On every way of aggregating the group’s judgments, then, Philip Morris believes that smoking is not safe. Now suppose further that despite knowing that they individually and collectively believe that smoking is not safe, the board nonetheless decides to report to the public that it is safe, either with the intention to deceive them or with an utter disregard for the truth. This is a case where the PBAA can plausibly explain a group lie and group bullshit, respectively. However, there are two reasons this does not affect the arguments in this chapter. First, my arguments show that there are paradigmatic group lies and bullshit that the PBAA countenances as straightforward instances of reporting group beliefs. This is certainly compatible with there being some other cases of group lies and bullshit that such an account can adequately accommodate. Second, my arguments are targeting non-summative accounts of group belief. The only judgment aggregation procedure that supports a non-summative account of group belief is a premise-based one in a scenario such as JA-TOBACCO COMPANY. And, as I said above, it does not respond to my objection that non-summativism cannot capture group lies to propose cases in which a summative aggregation procedure can.
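It may help to see in the same schematic terms why only the premise-based rule can do non-summative work here. The sketch below (again mine, with a three-member board assumed purely for illustration) runs majority, supermajority, and unanimity rules over the conclusion column of a JA-TOBACCO COMPANY-style profile: each of those rules simply inherits the members’ own verdict that smoking is not safe, and only the premise-based route yields a group view that no member holds.

```python
# Illustrative sketch (not from the text): how the aggregation rules mentioned
# above behave on a doctrinal-paradox profile of the JA-TOBACCO COMPANY sort.
# Premises: "not linked to lung cancer?", "does not cause emphysema?",
# "not linked to heart disease?"; conclusion: "smoking is safe."

from fractions import Fraction

profile = [
    (True, True, False),   # board member 1 (three members assumed for illustration)
    (True, False, True),   # board member 2
    (False, True, True),   # board member 3
]

def member_conclusion(premises):
    # A member votes "safe" only if they accept all three premises.
    return all(premises)

def share(judgments):
    return Fraction(sum(judgments), len(judgments))

def majority_rule(judgments):
    return share(judgments) > Fraction(1, 2)

def supermajority_rule(judgments, threshold=Fraction(2, 3)):
    return share(judgments) >= threshold

def unanimity_rule(judgments):
    return all(judgments)

conclusions = [member_conclusion(p) for p in profile]
group_premises = [majority_rule([m[i] for m in profile]) for i in range(3)]

print(majority_rule(conclusions))        # False: no member votes "safe"
print(supermajority_rule(conclusions))   # False
print(unanimity_rule(conclusions))       # False
print(member_conclusion(group_premises)) # True: the premise-based verdict
```

Each of the summative rules tracks what some set of members actually believes; the premise-based rule is the only one on this list that can deliver a “group belief” in the absence of any corresponding individual belief, which is precisely why the objection targets it.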

1.3 Judgment Fragility

While I regard the failure to satisfy the Group Lie and the Group Bullshit Desiderata as decisive objections to the joint acceptance and premise-based aggregation accounts of group belief, there are two further considerations against these views that should be discussed. In this section, I’ll focus on the phenomenon of judgment fragility. Let’s begin with the JAA2. Notice, first, that group members may jointly accept that p, not with an utter disregard for the truth—as is the case with bullshit—but with little regard for the truth. This can happen especially clearly when a view is adopted by a group for pragmatic reasons. For instance, consider the following: HISTORY DEPARTMENT: The

History Department at a leading university is deliberating about the final candidate to whom it will extend admission to its graduate program. After hours of discussion, there is still widespread disagreement over whether Mary Jones or Thomas Brown is the most qualified applicant remaining in the pool. With three minutes left to the meeting and the Chair announcing that they will need to convene again tomorrow if a decision cannot be reached, one member proposes a different applicant from their shortlist for admission, Robert Lee. Despite the fact that not a single member of the department actually believes that Lee is the most

qualified candidate for the last spot, they all jointly accept this proposition so as to end the department meeting on time and to avoid having to devote another day to such matters. The History Department then proceeds to report to the Graduate School that its position is that Robert Lee is the most qualified applicant for the last spot of admission. In HISTORY DEPARTMENT, the members of the History Department spent hours discussing the applicant pool for their graduate program and compiled a short list of candidates through attention to the candidates’ qualifications, so there is clearly not a complete disregard for the truth in their overall deliberative process. Nevertheless, their jointly accepting and then reporting that Robert Lee is the most qualified candidate for the last spot is entirely motivated by their practical desires to end the meeting on time and to avoid devoting another day to this issue. This results in the group state being riddled with what we might call judgment fragility. Let’s say that a group’s judgment is fragile in this sense if the following holds: were the members of the group to deliberate about the same body of evidence at T1 and T2 with no relevant difference in the information that emerges via the deliberation, it is very likely that the group’s judgments would diverge between T1 and T2. Group belief, I suggest, is, ceteris paribus,29 incompatible with judgment fragility of this sort. This is because belief, whether it is at the individual or the group level, is a relatively settled state. We wouldn’t say, for instance, that you believe that I’m trustworthy if you change your mind about this question every few minutes without any corresponding change in the relevant evidence. Similarly, we shouldn’t say that the History Department believes that Robert Lee is the best candidate for admission if every time we send the group into a room to deliberate about this question, without any difference in evidence, a different answer emerges. Thus, despite the fact that the group’s state in HISTORY DEPARTMENT counts as a straightforward instance of group belief on the JAA2, its judgment fragility renders this the wrong verdict.

A similar case can be used to illustrate a problem with the PBAA. To see this, consider the following: JUDGMENT AGGREGATION HISTORY DEPARTMENT (or JA-HISTORY DEPARTMENT): A three-member History Department at a leading university is deliberating about the final candidate to whom it will extend admission to its graduate program. The members are supposed to make their decision on the basis of considering three separable issues: first, whether the applicant’s writing sample is the most impressive; second, whether the student’s letters are the strongest; and third, whether the person’s previous coursework is the best. If a member thinks that the student’s writing sample is the most impressive, the letters are the strongest, and the previous coursework is the best, he or she will vote that the candidate is the best one; otherwise he or she will vote that the candidate is not. Regarding one of the candidates on the short list, Robert Lee, the votes are as follows:
[Voting table: each of the three members votes “Yes” on two of the three premises and “No” on the remaining one, and so votes “No” in the conclusion column; each premise column nonetheless carries a two-to-one majority of “Yes” votes.]

After the voting, the Chair announces that they will need to convene again tomorrow if a decision cannot be reached and so the members decide to use a premise-based aggregation procedure to arrive at the History Department’s view entirely because they do not wish to meet again. This results in the group’s believing that Robert Lee is the best candidate for admission, which is then reported to the administration. As was the case with HISTORY DEPARTMENT, the members of the History Department in this case deliberate about the applicants for their graduate program with attention paid to the candidates’ qualifications, so there is clearly not a complete disregard for the truth. But they also choose to use a premise-based aggregation procedure entirely to avoid an additional departmental meeting, which renders the resulting state subject to judgment fragility. In particular, were the members of the group to deliberate about the same body of evidence at a different time with no relevant change in the information that emerges via the deliberation, it is very likely that the History Department would have a different view. This is because the members’ decision to aggregate their beliefs via a premise-based procedure was guided solely by a contingent practical constraint that there is no reason to suppose will emerge in another context. For this reason, the History Department again does not seem to believe that Robert Lee is the best candidate for admission, despite the PBAA’s verdict that it does. I want to take a step back to discuss a feature that the phenomenon of judgment fragility shares with the problems raised by group lies and group bullshit, and to suggest a deeper diagnosis of what has gone wrong with non-summative accounts of group beliefs. Notice, first, that both joint acceptance and the selection of aggregation procedures are acts that are under the direct voluntary control of the members of a group. In particular, it is this voluntary control that enables the members of Philip Morris to simply decide to jointly accept that smoking is safe, that allows the members of the History Department to choose to accept that Robert Lee is the best candidate for admission, and that permits these same members to embrace a premise-based aggregation procedure to achieve a specific outcome. Because of this, members can also be guided by a range of factors that are utterly disconnected from the way the world is—from the economic concerns of Philip Morris to the fleeting whims and desires of departmental colleagues to end a meeting on time. Herein lies the problem: beliefs have a mind-to-world direction of fit. For instance, it has been argued that beliefs aim at the truth and thus aim to fit the world, or that beliefs are satisfied or proper when they fit the world. Regardless of the details, however, they are importantly different from desires, which have a world-to-mind direction of fit. Desires aim for the world to be a certain way and are satisfied when the world fits them. As Mark Platts says, “beliefs should be changed to fit with the world, not vice versa” while “the world, crudely, should be changed to fit with our desires, not vice versa” (Platts 1979, p. 257).30 But group belief, when understood according to non-summative accounts, can crucially lack this mind-to-world fit. Philip Morris in TOBACCO COMPANY is not aiming to conform its state to the world, or even to be responsive to the way the world is, when its members jointly accept that smoking is safe.
Quite the contrary; Philip Morris’s state is responsive to the way it wants the world to be and thus has more in common with a desire than a belief. In particular, the company wants it to be the case that smoking is safe and jointly accepts that it is so in an effort to bring about the consequences that would follow were the world in fact this way. In this sense, Philip Morris’s state has more of a world-to-mind direction of fit.31 Given that a mind-to-world fit is one of the identifying features of belief, and the nonsummativist is unable to secure this fit for group belief, it is clear that we need to look elsewhere

to understand this phenomenon.

1.4 Base Fragility

Let us turn to the final problem for non-summativism, what I call base fragility, and begin with how it afflicts the JAA2. Consider, for instance, the following case: ENGLISH DEPARTMENT: The

English Department at a leading university is deliberating about the final candidate to whom they will extend admission to their graduate program. All of the members jointly accept that the best candidate for admission is Sarah Peters, but half of them agree to this because they believe that she is a highly qualified applicant and half of them agree to this because they believe that she is a highly unqualified applicant. The latter half of the department is made up of a contingency of disgruntled employees who wish to sabotage their own department and regard “the best candidate for admission” as the applicant who will most likely pull the program’s rankings down. Once again, the joint acceptance account regards this as a straightforward instance of group belief. But notice: because the members of the English Department jointly accept the proposition that Sarah Peters is the best candidate for admission for different and indeed competing reasons, the resulting state has a base fragility to it that is not present in standard cases of belief. Let us say that a group’s state is base fragile if the bases of a significant subset of its members’ beliefs conflict with the bases of another significant subset of its members’ beliefs. The English Department’s view about Sarah Peters is clearly base fragile in this sense. This can be seen by noticing that any future evidence that the English Department acquires regarding Sarah Peters’s qualifications, whether it is for or against them, will count against the group belief. For instance, evidence on behalf of Peters’s qualifications will sway the disgruntled employees away from continuing to regard her as the best candidate for admission and evidence that undermines her qualifications will persuade the other half of the department that she is no longer the best candidate for admission. This example can, of course, be even further complicated so that onequarter of the group’s members believe that p for reason q, one-quarter believes that p for reason r, and so on. The more heterogeneous the grounding for the joint acceptance is among the group members, the more base fragile the resulting state is. Base fragility of this sort is, I claim, incompatible with group belief for at least two reasons. First, group beliefs have to be the sorts of things that are properly subject to epistemic evaluation, and states that are base fragile are not. In particular, group beliefs have to be evaluable as rational or irrational, justified or unjustified, undefeated or defeated, and so on. When a group’s belief is held in the face of such base fragility, however, no single coherent evaluation can be given. If, for instance, the English Department gets further evidence against Sarah Peters’s qualifications, does this render its belief that she is the best candidate irrational, unjustified, or defeated? No single answer can be given here. When viewed in light of one set of bases, the evidence counts against the belief, but when viewed in light of another set, it counts in favor of it. This deep lack of unity reveals that the state that is purported to be a single one belonging to a group is in fact a collection of individual beliefs. This is not to say, of course, that all of the members of a group need to hold a belief for the

same reasons. Indeed, one of the epistemic virtues of group belief is that a group’s members might all hold a belief for different, mutually supporting reasons. This can render the resulting state better off epistemically than any of the individual states taken alone. The point here is that the group’s belief cannot be base fragile, where this means that the bases of the individually-held beliefs are wildly conflicting. Second, group beliefs have to be the sorts of things that can coherently figure into collective deliberation about future actions of the group, and states that are base fragile cannot. For instance, if the English Department is deliberating about how to act, they should arrive at conclusions typical of a group that believes that Sarah Peters is the best candidate for admission —such as nominating her for a university fellowship or writing her an outstanding letter of support. But this is not in fact how things will turn out, as half of the members will be deliberating in ways that are typical of a group that believes that Sarah Peters is the worst candidate for admission. Thus, there will be widespread disagreement among the members about future actions related to this proposition, ultimately leading either to inertia, incoherence, or a change in the bases of some of the members. It is not difficult to see that a problem involving base fragility arises with respect to the PBAA, too. We can simply leave JA-HISTORY DEPARTMENT as it is, except we can imagine that the department members’ votes on the premises are riddled with base fragility. For instance, department member A might vote that Robert Lee’s writing sample is the best for reason q—say, that it is the most historical—while member C votes that it is the best for reason ~q—that it is not the most historical. This might happen if, for instance, A and C have different interpretations of the role that the historical elements play in the writing sample—perhaps A regards such elements as the central focus of the paper, while C regards them as merely incidental aspects supporting a non-historical claim. Similarly, department member B might vote that Lee’s letters are the best because of reason r—that they emphasize how professional he is—while member C votes that they are the best for reason ~r—that they focus on how he is not professional. C might be looking for a pure lover of the discipline rather than a highly professionalized candidate, and perhaps B and C have competing visions of what professional activity involves. Finally, department member A might vote that Lee’s course work is the best because of reason s—that his courses reveal the most breadth—and member B might vote that his course work is the best because of reason ~s—that his courses reveal the least breadth. Perhaps B values depth over breadth, and A and B disagree over whether Lee’s course work has breadth because only A regards interdisciplinary work as relevant to his evaluation. Thus, the votes are as follows:
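The structure can be laid out schematically. The following sketch is purely illustrative: it treats any vote not mentioned above as a “no” vote, and it uses q, r, and s for the reasons just described, simply to show how the premise-based procedure aggregates the members’ votes.

```python
# Illustrative only: member -> premise -> (vote, reason); votes the text does
# not mention are assumed to be "no", and reasons follow the q/r/s labels above.
PREMISES = ["writing sample is best", "letters are best", "course work is best"]

votes = {
    "A": {"writing sample is best": (True, "q"),
          "letters are best": (False, None),
          "course work is best": (True, "s")},
    "B": {"writing sample is best": (False, None),
          "letters are best": (True, "r"),
          "course work is best": (True, "~s")},
    "C": {"writing sample is best": (True, "~q"),
          "letters are best": (True, "~r"),
          "course work is best": (False, None)},
}

def premise_based_outcome(votes):
    """Majority vote on each premise; the conclusion is endorsed just in case
    every premise commands a majority."""
    premise_results = {}
    for premise in PREMISES:
        yes_votes = sum(1 for member in votes if votes[member][premise][0])
        premise_results[premise] = yes_votes > len(votes) / 2
    return premise_results, all(premise_results.values())

def conflicting_bases(votes):
    """Pairs of reasons that directly contradict one another, modelled crudely
    as a reason appearing alongside its explicit negation ("~")."""
    reasons = {votes[m][p][1] for m in votes for p in PREMISES if votes[m][p][1]}
    return {(r, "~" + r) for r in reasons if "~" + r in reasons}

results, lee_is_best = premise_based_outcome(votes)
print(results)       # every premise passes
print(lee_is_best)   # True: the group "believes" Lee is the best candidate
print(conflicting_bases(votes))  # the bases conflict pairwise: q/~q, r/~r, s/~s
```

Each premise passes by a two-to-one majority, though a different majority in each case, so the procedure delivers the verdict that Lee is the best candidate even though the members’ bases for the premises directly conflict.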

As should be clear, the PBAA regards this as a clear instance in which the History Department believes that Robert Lee is the best candidate for admission. But as was the case in ENGLISH DEPARTMENT, the resulting state is base fragile in a way that renders it unfit for proper epistemic evaluation. If, for instance, the History Department gets further evidence about Lee’s writing sample, it is unclear whether it would render its belief that Lee is the best candidate irrational, unjustified, or defeated. When viewed in light of one set of bases, the evidence might count

against the belief, but when viewed in light of another set, it might count in favor of it. As we saw above, this sort of base fragility is incompatible with group belief. Thus, the phenomenon of base fragility provides a further reason to reject non-summativism about group belief.

1.5 The Group Agent Account

We have seen that a central problem facing the two classic non-summative accounts of group belief—the JAA2 and the PBAA—is their inability to satisfy the Group Lie and the Group Bullshit Desiderata. I take this to be a decisive reason to reject such accounts. But notice: this argument also goes some distance toward resurrecting a broadly summative approach to group belief. Here is why: summativism was traditionally regarded as the intuitive approach to understanding the phenomenon of group belief. What undermined this view were precisely cases such as PHILOSOPHY DEPARTMENT and PHILOSOPHY DEPARTMENT2, which purported to show that individual belief that p on the part of a group’s members is neither necessary nor sufficient for the group believing that p. If the scenario described in PHILOSOPHY DEPARTMENT is indistinguishable from paradigmatic group lies and instances of bullshit, however, then surely we should no longer grant that the philosophy department, in jointly accepting that Jane Smith is the most qualified candidate for admission to their graduate program, clearly believes this proposition. So, one of the central reasons for rejecting summativism in the first place no longer holds.

We have also seen, however, that issues about fragility, particularly at the level of the bases, rule out understanding group belief entirely in summative terms. For even if every member of a group believes that p, they might do so for wildly conflicting reasons, which renders the resulting state unfit for epistemic evaluation and for future deliberation in relation to group action. This, then, prevents the state from being a group belief. Indeed, considerations about judgment and base fragility show that, in a deeply important sense, group belief is crucially connected to our understanding of groups as agents in their own right. When individuals make up a group, there are relations that arise among their beliefs that can only be properly assessed at the level of the collective. Whether these relations are together coherent or incoherent, for instance, is critical in assessing whether a belief state is appropriate for figuring in the group’s actions. And the nature of these relations at the collective level directly impacts whether a group’s action is rational or justified in light of its belief states. I thus propose the following account of group belief, which avoids all of the problems afflicting rival views:

Group Agent Account: A group, G, believes that p if and only if: (1) there is a significant percentage of G’s operative members who believe that p, and (2) are such that adding together the bases of their beliefs that p yields a belief set that is not substantively incoherent.32

There are several features to note about the Group Agent Account. First, the addition of condition (1), which necessitates belief on the part of a significant percentage of operative members, enables my view to satisfy the Group Lie and the Group Bullshit Desiderata. This can be seen by noticing that such a condition fails to be satisfied in TOBACCO COMPANY since not a single member

of the board of directors of Philip Morris believes that smoking is neither highly addictive nor detrimental to one’s health. It also fails to be fulfilled in OIL COMPANY, as not a single member of the executive management team of BP believes that the dispersants they are using are safe. Thus, the Group Agent Account is able to accommodate the verdict that Philip Morris is lying in the former case and BP is bullshitting in the latter, thereby having the resources for distinguishing between a group’s asserting its belief, on the one hand, and its lying or bullshitting on the other.

Second, the Group Agent Account avoids the problem posed by judgment fragility since group belief is not determined by factors, such as joint acceptance or choices about which aggregation procedure to use, that are under the direct voluntary control of the group’s members. Indeed, it is precisely because of this level of voluntarism that judgment fragility arises. In HISTORY DEPARTMENT, for instance, the members simply choose to accept Robert Lee as the best candidate for admission, and it is this choice—grounded entirely in pragmatic factors—that renders the group’s judgment fragile. Were they to deliberate about the same issue with the same evidence, though without the worry about the meeting ending in five minutes, they very likely would have arrived at a different conclusion. Similarly, the choice about which aggregation procedure to use in JA-HISTORY DEPARTMENT is taken up directly, simply because the group wishes to arrive at the desired outcome. This leaves group belief on both views subject to the fleeting whims and temporary desires of its members. In contrast, condition (1) of the Group Agent Account ties group belief intimately to individual beliefs, so that the level of voluntary control at the former level is no greater than it is at the latter level. This has the consequence that group belief is no more riddled with judgment fragility than individual belief is.

Third, and related, the Group Agent Account gets the direction of fit right for group belief. In particular, since individual beliefs have a mind-to-world direction of fit, and since individual beliefs provide the building blocks for group belief on my view, group belief also has a mind-to-world direction of fit.

Fourth, because condition (2) of the Group Agent Account requires that the bases of the individual beliefs of the operative members not be incoherent, it avoids the problem posed by base fragility. In particular, in cases such as ENGLISH DEPARTMENT, the wildly conflicting reasons that the members have for accepting that Sarah Peters is the best candidate for admission lead to the failure of (2), and thus the resulting state fails to qualify as a group belief. How should we characterize the requirement that the reasons not be incoherent? There are various ways of understanding this. One option is to understand coherence in terms of evidential support and incoherence in terms of a lack thereof.33 Alternatively, coherence can be understood in terms of accuracy-dominance avoidance, in the sense that, for a coherent set of beliefs, there is no rival belief set that is never worse and sometimes better than it with respect to overall accuracy, and incoherence could then be fleshed out accordingly.34 Here, I take no stand on how precisely this concept should be understood.
It suffices for my purposes that there is, intuitively, the presence of incoherence among the members’ bases in cases such as ENGLISH DEPARTMENT, and that there are available accounts that can explain this. Fifth, even though the Group Agent Account is not a simple summativist one, it nonetheless faces the relevance objection posed by cases like PHILOSOPHY DEPARTMENT2. Recall that the objection here is that every member of a group may believe that p, but believing that p may be entirely irrelevant to the purpose and goals of the group. So, for instance, every member of the philosophy department may in fact believe that the best red pepper hummus in Chicago is at Whole Foods, but this may be completely disconnected from the focus and objectives of the collective entity. The problem is that, on my view, the philosophy department ends up believing

that the best red pepper hummus in Chicago is at Whole Foods—so long as the bases of the individual beliefs aren’t incoherent—even though this is said to be the wrong intuitive verdict. By way of response, notice that there is a difference between a group having a belief, on the one hand, and a group having a relevant or important belief, on the other. There is nothing peculiar in itself in saying that the philosophy department believes that the best red pepper hummus in Chicago can be found at Whole Foods. It is just that such a belief is typically of so little interest to us that we wouldn’t overtly make this attribution to the group. But this is true in the individual case as well, and yet we wouldn’t withhold belief here. For instance, you most likely believe that oranges don’t grow on kangaroos and that the Earth is more than 20 years old, but only under highly unusual circumstances would I explicitly attribute these beliefs to you. Why? Because such beliefs are of very little interest to me. It doesn’t follow from this, however, that you don’t hold such beliefs. According to the Group Agent Account, the same is true in the group case. In fact, it is worth pointing out that the non-summativist who appeals to this sort of relevance objection is committed to a very counterintuitive conception of belief, group or otherwise. For instance, suppose that every member of PETA believes that Citizen Kane is the greatest film of all time. On this view, PETA fails to believe this at T1 since it is irrelevant to its goals, but then believes this at T2, when its President announces that PETA will now be extensively evaluating the depiction of animals in films. Nothing has changed about the psychology of any of the group’s members or the propositions they accept, yet they now have a belief simply because of an announcement from the group’s President. This conclusion strikes me as a further reason to doubt the force of this objection from relevance. Sixth, it is worth returning briefly to the cases with which this chapter began, for it may be thought that they still pose a problem for the Group Agent Account. For instance, if the intuitive description of PHILOSOPHY DEPARTMENT is that the group believes that Jane Smith is the most qualified candidate for admission to its graduate program despite not a single of its members believing this, then the mere fact that it is structurally identical to a group lie and group bullshit does not undermine the intuitiveness of this description. In other words, regardless of its similarity to cases where group belief is clearly absent, PHILOSOPHY DEPARTMENT describes a scenario where group belief seems to be present. By way of response, let me say that it is not at all clear to me that the intuitive response here is that the group holds the beliefs in question. In fact, there are many other plausible ways to describe this case that do not involve belief at all. For instance, we can say that the philosophy department’s official position is that Jane Smith is the most qualified candidate for admission to its graduate program, or that the philosophy department has decided to accept this,35 or that this is its public view, and so on. All of these characterizations make clear without invoking the notion of group belief that the group bears a relationship to the proposition in question that none of the individuals may share, and it does so without being committed to anything such as a group mind that is over and above the minds of any individual members. 
I would say something similar in the case of a jury, where they come to a conclusion about the guilt or innocence of a defendant because of the rules they are instructed to follow, despite the fact that not a single juror in fact believes it. In such a case, I would say that the group’s verdict is that, say, the defendant is innocent, despite the fact that not a single member believes this to be true. So, the intuitiveness of the group’s having a belief that no individual member does in PHILOSOPHY DEPARTMENT strikes me as merely apparent. Further support for this conclusion derives from considering an individual analogue of this

case. Suppose, for instance, that only a single member of the philosophy department reports to the administration that Jane Smith is the most qualified candidate for admission to its graduate program, despite the fact that she does not believe that this is the case. How would we describe this situation? The standard view is that the member of the philosophy department accepts, but does not believe, the proposition in question. But then why wouldn’t we say this in cases that are identical in all respects except that a group is substituted for an individual? Why would the mere fact that a collective entity is involved transform the psychological state of acceptance into belief? Since there does not appear to be a compelling answer to this question, PHILOSOPHY DEPARTMENT does not motivate the rejection of the Group Agent Account. Finally, let me offer a few general words about group belief, and the very important lessons we have learned about it from reflecting on group lies (and related phenomena, such as group bullshit). My general view will be met with resistance from opponents on two radically different sides.36 On the one side, there will be those who hold that there simply are no group beliefs, and any talk to the contrary is metaphorical. According to this position, treating group belief as a phenomenon worthy of philosophical treatment in its own right is deeply mistaken. On the other side, there will be those who maintain that groups have “minds of their own,”37 and that their mental states are over and above, or distinct from, any mental states of their individual members. Group belief, on this view, does not even partially depend on individual belief, so the two phenomena are importantly different. What I hope to have shown in this chapter is that paying close attention to group lies reveals that both of these sides are wrong. If we take as our starting point what I regard as an undeniable fact—namely, that groups lie—then it becomes clear both that groups genuinely have beliefs, and that they need to be anchored by individual beliefs. To see this, notice first that when we talk about groups lying, this is not simply metaphorical. When Fredric Reller said in his videotaped deposition that he believed Philip Morris’s lies that there is no valid scientific evidence that cigarettes cause lung cancer,38 he wasn’t speaking loosely—he was attributing a full-blown lie to Philip Morris, just as he would to you or me. Groups can lie, and when they do, the consequences can be catastrophic. But notice: in order to understand what it is for a group to lie, we need to have a robust conception of what it is for a group to have a belief. For, on every plausible conception of lying, even those that are in deep disagreement, a necessary condition is that the liar either believes that what is said is false, or fails to believe that it is true.39 Given this, the very notion of group belief is at the heart of understanding group lies. Since I take it as undeniable that groups lie, I also take it to be clear that they have beliefs, too.40 As we have seen in this chapter, however, the notion of group belief that is needed when theorizing about group lies cannot be understood in terms of the non-summative proposals in the offing, as they deliver the wrong results in cases of paradigmatic lies. This is why group belief needs to be anchored by individual beliefs. 
Otherwise, it turns out that features such as economic motivations can wholly determine whether a group holds a belief and, thereby, whether it has told a lie. If, for instance, it is in Philip Morris’s financial interests to believe that smoking is safe, then all that needs to be done to deny culpability for deceiving smokers on most non-summative accounts is to get the operative members of the group in a room and have them agree that smoking is safe. Voilà: Philip Morris now believes that smoking is safe, and thus there is no lying when this is reported to the public, even in a court of law. If this conclusion strikes you as deeply wrong, as it does me, then you should consider taking on board the view of group belief defended in this chapter.
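Before concluding, it may help to make the shape of the Group Agent Account concrete. The sketch below is only an illustration: the 50 percent threshold and the simple contradiction test stand in for “significant percentage” and “substantive incoherence,” notions the account itself deliberately leaves open.

```python
from dataclasses import dataclass

@dataclass
class OperativeMember:
    name: str
    believes_p: bool
    bases: frozenset  # the reasons for which the member believes p (empty if she does not)

def substantively_incoherent(pooled_bases):
    # Placeholder test: a pooled base set counts as incoherent if it contains
    # some reason together with its explicit negation ("~r"). The real notion
    # could instead be spelled out via evidential support or accuracy-dominance
    # avoidance; nothing here settles that question.
    return any(("~" + base) in pooled_bases for base in pooled_bases)

def group_believes_p(members, threshold=0.5):
    # Condition (1): a significant percentage of operative members believe p.
    believers = [m for m in members if m.believes_p]
    condition_1 = len(believers) / len(members) > threshold
    # Condition (2): adding together the bases of those beliefs does not yield
    # a substantively incoherent set.
    pooled = frozenset().union(*(m.bases for m in believers)) if believers else frozenset()
    condition_2 = not substantively_incoherent(pooled)
    return condition_1 and condition_2

# ENGLISH DEPARTMENT-style case: everyone believes p, but for directly conflicting reasons.
english_dept = [
    OperativeMember("m1", True, frozenset({"Peters is highly qualified"})),
    OperativeMember("m2", True, frozenset({"~Peters is highly qualified"})),
]
print(group_believes_p(english_dept))  # False: condition (2) fails

# A group whose members believe p for different but mutually compatible reasons.
unified_dept = [
    OperativeMember("m1", True, frozenset({"excellent writing sample"})),
    OperativeMember("m2", True, frozenset({"strong letters of recommendation"})),
]
print(group_believes_p(unified_dept))  # True: both conditions are met
```

On this toy rendering, ENGLISH DEPARTMENT fails condition (2) even though every member believes the proposition, while a group whose members believe it for different but mutually supporting reasons satisfies both conditions.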

1.6 Conclusion

We have seen that non-summative accounts fail to satisfy the Group Lie and the Group Bullshit Desiderata, and thus that group belief cannot be determined by states or processes that are under the direct voluntary control of the members. We have also seen that non-summative accounts incorrectly countenance as group beliefs states that are riddled with judgment and base fragility. This leaves group belief without a mind-to-world direction of fit and renders it unsuitable for proper epistemic evaluation and collective deliberation. Thus, the current orthodoxy in epistemology according to which non-summativism is the only game in town is deeply mistaken.

In place of non-summativism, I defended the Group Agent Account. On my view, group belief is largely a matter of the beliefs of individual members, yet it is also importantly constrained by relations that arise only at the level of the group, especially as it is an agent. The result is a view that not only renders group belief incompatible with judgment and base fragility, but also satisfies the Group Lie and Group Bullshit Desiderata, thereby providing the resources for holding groups responsible for their lies and bullshit.

1 A discussion of what distinguishes a group from a mere collection of individuals lies beyond the scope of this chapter, though I discuss this issue briefly in the Introduction. See also Gilbert (1989 and 2004), Bird (2010), List and Pettit (2011), and Ritchie (2013). I won’t be directly engaging with those who are more skeptical about the existence of group beliefs. For arguments of this sort, see Rupert (2005 and 2011) and Huebner (2014). If what I argue in this chapter is correct, however, then I will have shown that there are compelling reasons to countenance the existence of group beliefs. 2 https://www.washingtonpost.com/news/energy-environment/wp/2017/11/03/trump-administration-releases-report-findsno-convincing-alternative-explanation-for-climate-change/?utm:term=.a8be9a994df0, accessed January 15, 2017. 3 https://townhall.com/tipsheet/laurettabrown/2017/10/31/april-ryan-asks-white-house-if-trump-administration-believesslavery-is-wrong-n2403031, accessed January 15, 2017. 4 https://www.aclu-il.org/en/issues, accessed January 15, 2017. 5 https://www.theguardian.com/environment/2010/oct/29/bp-oil-spill-bp, accessed January 15, 2017. In response to this, “Halliburton said it did not believe that the foam cement design used on the well caused the incident. ‘Halliburton believes that significant differences between its internal cement tests and the commission’s test results may be due to differences in the cement materials tested,’ the company said in a statement issued in response to the letter. ‘The commission tested off-the-shelf cement and additives, whereas Halliburton tested the unique blend of cement and additives that existed on the rig at the time Halliburton’s tests were conducted.’ The company added: ‘Halliburton believes that had BP conducted a cement bond log test, or had BP and others properly interpreted a negative pressure test, these tests would have revealed any problems with Halliburton’s cement.’” 6 I will use “group” and “collective,” and “groups” and “collective entities,” interchangeably. 7 This phrase is from Pettit (2003). 8 See Gilbert (1989), Schmitt (1994), Tollefsen (2007 and 2009), and Bird (2010) for this sort of argument. Fricker (2010) provides cases of this sort involving group virtues, but also suggests that similar considerations apply in the case of group beliefs (see p. 241). 9 This sort of argument can be found in Gilbert (1989) and Schmitt (1994). Schmitt, borrowing from Gilbert (1989), puts this point as follows: “Two groups may have the same membership, yet differ in their beliefs. The membership of the Library Committee may be identical with that of the Food Committee. Yet the two committees might have very different purposes and accordingly make judgments about quite different issues based on very different kinds of evidence. Every member of the Library Committee might believe that there are a million volumes in the library, and so might the Library Committee itself. Yet the Food Committee holds no such belief. Thus, the summative condition is too weak” (1994, p. 261). This objection to the sufficiency dimension of summativism focuses on the relevance of goals and purposes of the group, but there is another sort of counterexample. Gilbert (1987), for instance, argues that each member of G might believe that p, but G itself would not be said to believe that p if each member is unwilling or unable to communicate his or her belief that p. She writes:

Suppose an anthropologist were to write “The Zuni Tribe believes that the north is the region of force and destruction.” Now suppose that the writer went on to give his grounds for this statement as follows: Each member of the Zuni tribe believes that the north is the region of force and destruction, but each one is afraid to tell anyone else that he believes this; he is afraid that the others will mock him, believing that they certainly will not believe it. What conclusions can be drawn from this? It surely suggests at least that when we ascribe a belief to a group we are not simply saying that most members of the group have the belief in question. That is, it is surely not logically sufficient for a group belief that p that most members of the group believe that p. (Gilbert 1987, p. 187) Gilbert may wish to distinguish this sort of case from an implicit belief of a group. For instance, J. Angelo Corlett writes: “…a decision-making group might possess a belief without formally recognizing or accepting it. And it may do so because a certain belief may be implied by one of the beliefs it accepts. For example, it would seem that each group of the requisite sort believes implicitly that it is a group. Otherwise, the group might not qualify as a conglomerate” (Corlett 2007, p. 236). 10 Gilbert (1987, 1993, 1994, 2002, and 2004), Schmitt (1994), Tuomela (1992), and Tollefsen (2007 and 2009) also hold a joint acceptance view of group belief. I will discuss Tuomela’s particular account in some detail later in this chapter. 11 Elsewhere, Gilbert writes, “what is both logically necessary and logically sufficient for the truth of the ascription of group belief…is…that all or most members of the group have expressed willingness to let a certain view ‘stand’ as the view of the group” (Gilbert 1989, p. 289). 12 For our purposes, it is sufficient to note that there is a difference between belief and acceptance—that accepting that p is not the same as believing that p, and vice versa. For detailed discussions about this difference, see, for instance, van Fraassen (1980), Stalnaker (1984), Cohen (1989 and 1992), Wray (2001), and Hakli (2007 and 2011). While there is certainly not consensus among these authors about what precisely this distinction amounts to, below are four differences that have been cited between acceptances and beliefs: 1. One can accept propositions that one does not believe, whereas one cannot believe what one does not accept. 2. Acceptance often results from a consideration of one’s goals, and this results from adopting a policy to pursue a particular goal. 3. Belief results in a feeling, in particular, a feeling that something is true. 4. Acceptance can be voluntary, whereas belief cannot. (Wray 2001, p. 325) 13 It should be noted that, at best, the joint acceptance account captures the beliefs held only by what I called in the Introduction deliberative groups. These groups are distinguished from non-deliberative groups by their ability to engage in collective reasoning, where this includes deliberation, revision, and a sensitivity to evidence, all understood collectively. What is central for our purposes here is that members of non-deliberative groups are not capable of engaging in joint acceptance in the way required by the view. 
For instance, if I survey left-handed Northwestern students and aggregate their beliefs via a majority aggregation rule, it may be perfectly appropriate to say via this method that Northwestern students believe that there are not enough desks to suit their needs. But given that the students themselves do not even identify as a group, they will be unable to collectively deliberate about their needs or to jointly accept this proposition. Thus, to the extent that a unified account of the beliefs properly attributed to both deliberative and non-deliberative groups is desirable, the joint acceptance account fails. 14 This is how “hierarchical groups” in the sense characterized in Goldman (2004) function. 15 While the JAA2 does much better than the JAA when board- or committee-governed groups are at issue, they both have problems countenancing certain beliefs of groups that are dictatorially based. Consider, for example, the following: CATHOLIC CHURCH: The Pope in his official capacity solemnly declares that the use of assisted reproductive technologies is immoral according to the Catholic Church. While the Pope himself believes this, this declaration was made without the Pope discussing the issue with any other member of the Church, including the cardinals and bishops with whom he works most closely. In this sort of case, it may be quite natural to say that the Catholic Church believes that it is immoral to use assisted reproductive technologies. However, while the proponent of the JAA2 may argue that dictatorially governed groups have only one operative member—the dictator—it does not seem correct to say that there is any joint acceptance occurring. There may be acceptance, to be sure, since the Pope may both believe and accept that assisted reproductive technologies are immoral. But the very notion of something being joint presupposes that there is more than one person involved. Given this, the JAA2 seems incapable of accounting for at least many of the beliefs held by dictatorially-governed groups. While I regard this as a genuine problem for any joint acceptance account, my central aim at this particular point is to isolate a paradigmatic non-summative account of group belief that is best able to capture the relevant data. Given that the JAA2 is better able to explain the beliefs found in cases such as MEDICAL ASSOCIATION than the JAA is, it will be the central target in the discussion that follows. 16 See also Tuomela (1993 and 1995). 17 For more on the theory of judgment aggregation, see List and Pettit (2002 and 2004), Dietrich (2005), List (2005), and

Pauly and van Hees (2006). 18 I should emphasize that while there are different aggregation procedures, and thus different ways to understand group belief in judgment aggregation terms, the premise-based procedure is the only one of these that results in a non-summativist conception of group belief. This is why I formulated the PBAA as capturing both necessary and sufficient conditions for group belief—that is, when group belief is understood non-summatively on a judgment aggregation model, a group believes that p when and only the majority of the operative members’ votes in the premise columns are that p. This is compatible with there being other, summative conceptions of group belief on a judgment aggregation model, such as those that result from a dictatorial or majority procedure. I will discuss this point in a bit more detail later in this chapter. 19 See also Lackey (2013). I should note that the only condition that will be crucial to the arguments in this chapter is (2), and even among competing views of what it is to lie, a condition of this sort is accepted. 20 For arguments against the inclusion of a condition involving deception in an account of lying, see Sorensen (2007 and 2010), Fallis (2009), and Carson (2010). For a response, see Lackey (2013) and Chapter 5. 21 I will develop this account in far more detail in Chapter 5, but this will suffice for present purposes. 22 https://www.latimes.com/archives/la-xpm-2005-mar-05-fi-smoke5-story.html, accessed July 29, 2020. 23 For ease of expression, I shall often compare group lies and group beliefs. However, strictly speaking, this should be read as comparing acts of lying and acts of reporting a belief. 24 See Wray (2001 and 2003), Meijers (2002), and Hakli (2007 and 2011). 25 For a classic defense of the view that individuals lack direct voluntary control over their beliefs, see Alston (1988). 26 See Wray (2003) and McMahon (2003), respectively. 27 See Gilbert and Pilchman (2014). 28 For another argument against the joint acceptance account of belief, one that draws on some of my arguments involving defeaters in Chapter 3, see Carter (2015). 29 This clause is intended to capture highly unusual cases where it might be argued that belief can come and go without a change in the evidence, such as through direct brain intervention. I’m grateful to Nathan Lauffer for a comment that led to the addition of this clause. 30 Platts himself does not endorse this view. 31 I mentioned earlier that a standard objection to non-summativist accounts is that group belief ends up being far more directly voluntary than it is in the individual case. While my argument here also partially relies on the voluntariness of both joint acceptance and the selection of aggregation procedures, my central concern is that the structure of belief ends up having the wrong direction of fit on a non-summativist model. 32 One lesson that is often drawn from the Preface Paradox is that there are some kinds of incoherence that are not irrational. The addition of “substantively” is intended to permit a group to have a belief even when there is this sort of incoherence. 33 See Kolodny (2007). 34 See Briggs, Cariani, Easwaran, and Fitelson (2014). 35 This response has been developed in detail by Wray (2001), Meijers (2002), and Hakli (2007). 36 It might be thought that this distinction simply maps the difference between summativism and non-summativism, but this would not be quite right. 
For instance, while a summativist might understand group belief in terms of individual beliefs, this need not be understood as a form of group belief eliminativism. I am here interested in contrasting those who think we shouldn’t even be theorizing about group beliefs, since talk of this phenomenon is simply metaphorical. 37 See, for instance, Pettit (2003). 38 See the earlier discussion of this in section 1.2 of this chapter. 39 See, for instance, the references in note 20. 40 This is compatible with granting that many phenomena that are called “group beliefs” in fact are not. As I said earlier, I would regard many of these states as the group’s official position, acceptance, verdict, and so on.

2 What Is Justified Group Belief?

As we saw in Chapter 1, groups are often said to believe things. Some of these beliefs amount to knowledge while others do not, with epistemic justification being one of the central features distinguishing these two categories. But how should we understand a group’s justifiedly believing that p?1

The importance of this question is clear, both theoretically and practically. If we do not understand the justification of group beliefs, then we cannot make sense of our widespread epistemic attributions to collective entities—of evidence that they have, or should have, and of propositions that they know, or should have known. Moreover, the justificatory status of such beliefs matters a great deal to whether groups are morally and legally responsible for certain actions and, accordingly, the extent to which they ought to be held accountable. For instance, if the Bush Administration justifiedly believed that Iraq did not have weapons of mass destruction, then not only did the Administration lie to the public in saying that it did, but it is also fully culpable for the hundreds of thousands of lives needlessly lost in the Iraq war.

Despite this, the topic of group justification has received surprisingly little attention in the literature, with the few who have addressed it falling into one of two camps. On the one hand, there are those who favor an inflationary approach, where groups are treated as entities that can float freely from the epistemic status of their members’ beliefs. For these theorists, the justificatory status of group belief involves only actions or features that take place at the group level, such as the joint acceptance of reasons. On the other hand, there are those who endorse a deflationary approach, where justified group belief is understood as nothing more than the aggregation of the justified beliefs of the group’s members.

In this chapter, I raise new objections to both of these approaches. If I am right, we need to look in an altogether different place for an adequate account of justified group belief. From these objections emerges the skeleton of the positive view that I go on to develop and defend, which parallels my account of group belief in the previous chapter in critical respects and which I call the Group Epistemic Agent Account: groups are epistemic agents in their own right, with justified beliefs that respond to both evidence and normative requirements that arise only at the group level, but which are nonetheless importantly constrained by the epistemic status of the beliefs of their individual members.

2.1 Divergence Arguments

Before turning to particular approaches to group justification, two points of clarification about the topic of this chapter are in order. First, the analysandum of all of the views under

consideration is doxastic, rather than propositional, justification. Thus, the question to be answered is when a group has a justified belief—that is, justifiedly believes that p—rather than has justification for believing a proposition without necessarily believing it—that is, is justified in believing that p. Second, the views at issue are exclusively concerned with epistemic justification, which is the kind of justification that is integral to converting true belief into knowledge.2 Practical and moral justification will not figure into the discussion. With these points in mind, let us begin with the inflationary approach to the justification of group beliefs. The primary support for this view comes from divergence arguments, which purport to show that there can be a divergence between the justificatory status of a group’s beliefs and the status of the beliefs of the group’s members. In particular, it is claimed that a group can justifiedly believe that p, even though not a single one of its members justifiedly believes that p. There are two central kinds of cases that purport to establish these conclusions. Let’s call them different evidence cases and different epistemic risk settings cases. An instance of the first kind can be seen in the following: DIFFERENT EVIDENCE: A

jury is deliberating about whether the defendant in a murder trial is innocent or guilty. Each member of the jury is privy to evidence that the defendant was seen fleeing the scene of the crime with blood spatter on his clothes, but it is grounded in hearsay that, though reliable, was ruled as inadmissible by the judge. Given only the admissible evidence, the jury as a group justifiedly believes that the defendant is innocent, but not a single juror justifiedly believes this proposition because it is defeated for each of them as individuals by the relevant reliable hearsay evidence. Cases of this sort are prevalent in the collective epistemology literature, but Frederick F. Schmitt provides the most developed and detailed version. According to Schmitt (1994), different evidence cases successfully function as divergence arguments only when they involve chartered groups, where “[a] chartered group is one founded to perform a particular action or actions of a certain kind,” and “has no life apart from its office” (1994, pp. 272–3). In other words, chartered groups must function only in their offices or risk ceasing to exist. The U.S. Congress, the Sierra Club, and juries are all groups of this sort. Moreover, given the particular charter of a group, it may be governed by special epistemic standards, such as the exclusion of hearsay in a court of law. Because of this, …a nonlegal group may fail to be justified in a belief because a member possesses countervailing hearsay. A court, on the other hand, would not lose its justification merely because a member possesses countervailing hearsay. And this is because in its legal capacity, the court rightly excludes hearsay, and its legal capacity is the only capacity in which it operates. (Schmitt 1994, p. 274)

Since the jury in DIFFERENT EVIDENCE is a chartered group, Schmitt argues that its charter prohibits it from considering the hearsay evidence about the defendant fleeing the scene of the crime with blood spatter on his clothes. Without this crucial testimony, the jury justifiedly believes that the defendant is innocent of the murder in question. But since the jurors qua individuals are not governed by these special standards of available reasons, they each have a defeater provided by the hearsay evidence for believing in the defendant’s innocence. Thus, the jury justifiedly believes that the defendant is innocent despite the fact that not a single individual member justifiedly holds this belief.

An instance of the second kind of case is as follows: DIFFERENT RISK SETTINGS: A

philosophy department has been given permission to hire an assistant professor and appoints a sub-committee of three persons for this task. After considering the application of Fred Jones, the individual members and the committee have different epistemic risk settings with regard to accepting the proposition that Jones is a qualified candidate. “These risk settings determine how much evidence is necessary for acceptance. So, while both the individuals and the group as a whole consider precisely the same evidence and they assign the same weight to the evidence, the group reaches its threshold for acceptance while no individual member has reached her threshold for acceptance. And, since there is no epistemically preferred threshold, both the group and the members are equally epistemically rational.” (Mathiesen 2011, p. 41) This case, due to Kay Mathiesen, relies on differences in tolerance for epistemic risk that agents may have. For instance, it has been argued that if one agent is an epistemic risk-taker, while another is epistemically cautious, it is possible for them to have access to the same evidence and yet the former rationally believes that p while the latter rationally suspends belief regarding the question whether p.3 Moreover, these differences in epistemic risk settings can be determined by pragmatic factors. Mathiesen writes: A practical agent has certain goals or interests. In the case of the hiring committee, its practical goals have been determined by the charge from the departmental committee to determine a set of “qualified” candidates for the position. The practical goals of the members may be quite different from those of the group. For instance, the individuals may “personally” prefer to be very skeptical that anyone is truly qualified. But, given that as a group they need to present a set of names to the department, such skepticism would be out of place in group reasoning. (Mathiesen 2011, p. 40)
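The structure of this case can be rendered schematically. In the sketch below, the numbers are invented solely for illustration: a single shared assessment of the evidence is compared against different acceptance thresholds, one for each member and a more permissive one for the committee.

```python
# Invented numbers: 0.8 stands in for the shared assessment of how strongly the
# dossier supports "Jones is a qualified candidate"; the thresholds model the
# differing epistemic risk settings described in the case.
support_for_jones = 0.8

individual_thresholds = {"member 1": 0.9, "member 2": 0.85, "member 3": 0.95}
committee_threshold = 0.75  # the group, given its practical charge, tolerates more epistemic risk

def accepts(threshold, support=support_for_jones):
    """An agent accepts the proposition once the support reaches her threshold."""
    return support >= threshold

print({member: accepts(t) for member, t in individual_thresholds.items()})
# {'member 1': False, 'member 2': False, 'member 3': False}
print(accepts(committee_threshold))  # True: the committee, but no individual member, accepts
```

Nothing epistemic distinguishes the assessments of the evidence themselves; the divergence is driven entirely by the differing risk settings.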

Because of the practical interests of the hiring committee in DIFFERENT RISK SETTINGS, the group is more of an epistemic risk-taker than any of the individual members. According to Mathiesen, this has the result that even though the amount of evidence available to both the group and the members is sufficient for justified belief, only the former’s risk settings permit belief. Thus, the hiring committee justifiedly believes that Jones is a qualified candidate, despite the fact that not a single individual member justifiedly holds this belief. Divergence arguments involving different evidence and different risk settings are said to support two conclusions—a negative and a positive one. The former is: Non-Summativism: A group, G, justifiedly believing that p cannot be understood only in terms of some or all of G’s members justifiedly believing that p. The positive conclusion is: Inflationism: A group, G, justifiedly believing that p is understood in terms of the group itself justifiedly believing that p, where this is over and above, or otherwise distinct from, the individual members of G justifiedly believing that p. According to inflationary non-summativism, then, a group justifiedly believing that p is irreducible to all or some of its members justifiedly believing that p; instead, the group itself is

the epistemic subject of such justified belief.4 In what follows, we will take a closer look at the paradigmatic version of inflationary non-summativism: the joint acceptance account.

2.2 The Paradigmatic Inflationary Non-Summativist View: The Joint Acceptance Account The most widely accepted inflationary view of group justification is what we may call the joint acceptance account (hereafter, the JAA5). One version of the JAA is developed and defended by Schmitt in his (1994), where he argues: JAA-S: A group G justifiedly believes that p if and only if G has good reason to believe that p, and believes that p for this reason6 where G has a reason r to believe that p if and only if all members of G would properly express openly a willingness to accept r jointly as the group’s reason to believe that p. (Schmitt 1994, p. 265)7 On this view, then, whether a reason counts as possessed by a group is determined entirely via its joint acceptance by the group’s members, and the epistemic goodness or badness of this reason can then, in turn, be fleshed out in terms of traditional justification-conferring features, such as being produced by a reliable process, being grounded in adequate evidence, and so on. Schmitt’s preferred explanation of the epistemic goodness of a group’s reason is reliabilist, and though there are interesting questions about the reliability of group belief, the aspect of his view that is a substantive contribution to collective epistemology is his joint acceptance account of group reasons. According to Schmitt, while joint acceptance determines group reasons, “[t]he reference to what members would properly do is needed because the reasons possessed by the group include those that are available within and to the group, not merely those the members actually jointly accept as reasons” (Schmitt 1994, p. 266, original emphasis). For instance, suppose that the members of the Humane Society of the United States do not explicitly jointly accept that the moral wrongness of animal cruelty gives them a reason to believe that dog fighting should be opposed; nevertheless, this reason might be available to the group via all of their other commitments. Moreover, though Schmitt does not mention these virtues of his view, the inclusion of what members would do is also necessary to account for cases where r seems to be possessed by a group, despite the fact that not all members of the group actually jointly accept r. This can happen when, for instance, a group member is out of town or ill and is therefore not present when the relevant deliberation takes place, or when the group is so large that actual joint acceptance is practically impossible. In such cases, so long as all of the group members would jointly accept r, it counts as a reason that the group has.8 Another version of the JAA is defended by Raul Hakli. According to Hakli: JAA-H: A group G justifiedly believes9 that p collectively “if and only if the group can successfully defend p against reasonable challenges by providing reasons or evidence that are collectively acceptable to the group and that support p according to the epistemic principles

collectively accepted in the epistemic community of the group. The epistemic community determines what counts as a successful defence and as a reasonable challenge.” (Hakli 2011, p. 150) Just as Schmitt provides a joint acceptance account of group reasons, and then defends a reliabilist account of what makes these reasons epistemically good ones, Hakli endorses a joint acceptance account of group reasons and then develops a dialectical view of what makes these reasons epistemically good ones. In particular, a group has only “reasons or evidence that are collectively acceptable to the group.” What makes these reasons or evidence good is if they can be used to successfully defend p against reasonable challenges, which, in turn, is determined by what the epistemic community collectively accepts. For proponents of divergence arguments, the central virtue of the JAA is its ability to account for how groups can justifiedly hold beliefs that no single member justifiedly believes, thereby supporting inflationism about group justification. For instance, in DIFFERENT EVIDENCE, that the jury justifiedly believes that the defendant is innocent without a single individual member justifiedly holding this belief can presumably be explained according to both versions of the JAA: the members of the jury would jointly express openly a willingness to accept the admissible evidence as their reason to believe that the defendant is innocent. Since we can assume that this admissible evidence is both reliably produced10 and capable of providing a successful defense against reasonable challenges, the jury justifiedly believes this proposition. However, the individual members not only don’t believe that the defendant is innocent, they also don’t have justification for believing it, as the hearsay evidence provides them with a defeater. Similar considerations apply in DIFFERENT RISK SETTINGS: the members of the hiring committee would jointly express openly a willingness to accept the available evidence as their reason to believe that Jones is a qualified candidate. Since we can again assume that this evidence is reliably produced and capable of providing a successful defense against reasonable challenges, the hiring committee justifiedly believes this proposition. But because the members qua individuals are more epistemically cautious, they do not believe Jones is a qualified candidate, and thus they don’t believe this justifiedly either. This results in the hiring committee justifiedly believing that Jones is a qualified candidate, despite the fact that not a single individual member justifiedly holds this belief. Thus, the JAA handles classic divergence arguments with ease.

2.3 Problems for the Joint Acceptance Account

Objections may be raised to both the reliabilist and dialectical components of the JAA-S and JAA-H, respectively. Regarding the former, for instance, it is not at all clear that reliability will appropriately track the inflationist’s intuitions about epistemic justification. If the jury in the above case forms a belief that the defendant is innocent purely on the basis of admissible evidence, is this produced by a reliable process? Clearly, no. Forming beliefs by ignoring relevant evidence is a paradigm of an unreliable process, so how will the joint acceptance theorist who is also a reliabilist achieve the desired verdict in DIFFERENT EVIDENCE? With respect to the latter, there are well-known objections to dialectical accounts of individual epistemic justification that can be raised here, too. For instance, how is such a view going to handle really persuasive speakers who nonetheless offer bad epistemic reasons, or highly gullible epistemic

communities who very readily accept poor defenses as successful ones? But what I want to do here is challenge the core tenet of the JAA—namely, the grounding of group reasons in joint acceptance. This will cut across every existing inflationary, nonsummative account of group justification in the literature. To begin, it will be helpful to clarify two features of the JAA. First, in order for a group to be said to possess a reason, it cannot be required that all members of the group be such that they would express openly11 a willingness to jointly accept r as the group’s reason to believe that p. This would make group justification too hard to come by. For instance, suppose that a philosophy department is deliberating about whether to offer a position in their graduate program to a highly qualified female applicant. All of the members of the department would properly express a willingness to jointly accept that the excellence of this woman’s writing sample is a reason to believe that she should be admitted, except for one, whose sexism would invariably prevent such agreement. The philosophy department here clearly has a reason to believe that she should be admitted to the graduate program, regardless of how we understand the nature of reasons. In particular, that a sexist member of the department would never accept the excellence of this woman’s writing sample as a reason to admit her affects neither whether the group believes that her writing sample is excellent nor whether the writing sample is in fact excellent. Given this, regardless of whether one thinks that reasons are psychological states, factive states, or both, a single biased member of a group steadfastly refusing to accept a given reason under any circumstances is not sufficient for the group to lack that reason. Thus, the JAA needs to be understood as requiring that only some of the members are such that they would engage in the relevant joint acceptance. Of course, it is not enough that this is true of just any members of the group in question. Groups have members with vastly different roles, only some of whom have the authority or power to determine certain outcomes for the group as a whole. As we saw in the previous chapter, those who have the relevant decision-making authority are often called operative members.12 For instance, the custodians of a law firm might have the authority to determine whether the hallway traffic gives the firm a reason to install hardwood floors rather than carpeting, but not whether the details of a case provide a reason to file a motion to dismiss on behalf of a client. The JAA should, then, be understood as requiring not only that some members are such that they would properly express a willingness to jointly accept r as the group’s reason to believe that p, but also that these members are operative ones.13 With these points in mind, I now want to turn to what I take to be a decisive objection to all versions of the joint acceptance account, one that shows that the JAA makes group justification far too easy to come by. Consider the following: IGNORING EVIDENCE: Philip

Morris is one of the largest tobacco companies in the world, and each of its operative members is individually aware of the massive amounts of scientific evidence revealing not only the addictiveness of smoking, but also the links it has with lung cancer and heart disease. Moreover, each individual member believes that the dangers of smoking give the company a reason to believe that warning labels should be placed on cigarette boxes. However, because of what is at stake financially and legally, none of these members would properly express a willingness to accept that the dangers of smoking give Philip Morris a reason to believe that it should put warning labels on cigarette boxes. Does Philip Morris have a reason to believe that it should put warning labels on cigarette boxes?

Clearly, yes. Every member of this group is aware of the scientific evidence showing the dangers of smoking and, accordingly, believes that warning labels should be put on cigarette boxes. The mere fact that the company is illegitimately ignoring relevant evidence through dogmatically and steadfastly refusing to jointly accept facts that are not to its liking should not result in its not having this reason, too. This conclusion is supported by the fact that we would surely hold Philip Morris responsible for the ill effects caused by smoking precisely because we take it to have a good reason to warn people about the dangers of cigarettes. Yet, according to the JAA, Philip Morris does not have a reason to put warning labels on cigarette boxes. Indeed, were the company to do so, it would be acting without a reason. Consider, now, another case: FABRICATING EVIDENCE: Philip

Morris is one of the largest tobacco companies in the world, and each of its operative members is individually aware of the massive amounts of scientific evidence revealing not only the addictiveness of smoking, but also the links it has with lung cancer and heart disease. Entirely because of what is at stake financially and legally, however, each of these members decides to jointly accept that all of the scientists working on the relationship between smoking and health problems are liars. Given this, they also jointly accept that the duplicity of the scientists gives Philip Morris a reason to believe that the results of the studies showing a connection between smoking and lung cancer and heart disease are unreliable. It is obvious that Philip Morris does not have a good reason to believe that the results of studies showing a connection between smoking and health problems are unreliable, but I think it is also clear that it doesn’t even have a bad reason. To see this, notice that the members completely fabricate, for purely financial and legal motives, that the scientists working on these issues are liars, and thereby jointly accept that this provides the company with a reason to reject the studies as unreliable. But surely this is not sufficient for a group to have a reason. One way to see this is that reasons, even bad ones, are often taken to provide excuses for actions grounded in them. Suppose that my student says that the reason she didn’t cite the sources on which she relied is that she believed it wasn’t necessary to acknowledge material assigned for our class. This reason might not justify her plagiarism, but it does provide an excuse for her failure to cite the relevant sources. I may, for instance, be able to understand her behavior, explain why such an excellent student ended up engaging in academic dishonesty, and ultimately hold her less responsible for the act than if she had knowingly failed to provide the necessary citations. In FABRICATING EVIDENCE, however, there is no sense whatsoever in which the members’ joint acceptance of a made-up claim provides Philip Morris with an excuse for regarding the scientific studies as unreliable. Indeed, rather than lessening responsibility, as a bad reason might do, the company seems more guilty of the actions grounded in the acceptance of the unreliability of the scientific evidence, since the fabrication of evidence was knowingly and willfully done. According to the JAA, however, Philip Morris has a reason to believe that the results of the studies showing a connection between smoking and lung cancer and heart disease are unreliable. It is just a small step from here to show that the JAA also leads to problematic results regarding the epistemic justification of group beliefs. Consider IGNORING EVIDENCE: given that all of the evidence showing that smoking is dangerous is not available to the group because of the members’ refusal to jointly accept it, none of it is part of the justificatory basis of the group’s belief. It is, then, not at all difficult to imagine scenarios in which the remaining evidence leaves the group justifiedly believing that smoking does not pose any health hazards. For instance, the
group might have access to some studies that, though reliably conducted, had a very limited sample of subjects, none of whom happened to develop lung cancer or heart disease despite years of smoking. In this case, Philip Morris’s “belief” that smoking is not unhealthy would be reliably formed, capable of successful defense, and, given the total evidence available, well-grounded, thereby being epistemically justified. But this result is absurd. This is the sense in which the JAA makes the justification of group beliefs far too easy to come by. Any group can manipulate the available evidence through what it chooses to accept or reject, and thereby wind up with beliefs that count as epistemically justified even when they clearly are not. The upshot of these considerations is that joint acceptance cannot ground the justification of group beliefs. On the one hand, IGNORING EVIDENCE shows that the relevant kind of joint acceptance by the members of a given group is not necessary for a group to possess a reason, since Philip Morris has a reason to believe that warning labels should be placed on cigarette boxes even in the absence of joint acceptance of that claim by its members. On the other hand, FABRICATING EVIDENCE reveals that the relevant kind of joint acceptance by the members of a given group is not sufficient for a group to possess a reason, since Philip Morris does not have a reason to believe that the results of the studies showing a connection between smoking and health problems are unreliable, despite there being joint acceptance of that claim by its members. What both of these cases make clear is that group justification cannot be wholly determined by factors over which the members of the group have direct voluntary control. For it is this voluntary control that enables the members of Philip Morris to simply decide to not jointly accept what they should, and to jointly accept what they should not. Because of this, it is possible for joint acceptance to be guided by factors that are utterly disconnected from the truth, such as the economic and legal goals of a company. Thus, any account of group justification that relies entirely on joint acceptance succumbs to what I shall call the Illegitimate Manipulation of Evidence Problem (IMEP): IMEP: If the justification of group beliefs can be achieved through wholly voluntary means, then the evidence available to the group can be illegitimately manipulated, thereby severing the connection between group epistemic justification and truth-conduciveness. Given that the JAA clearly faces the IMEP, we need to look elsewhere for an account of the justification of group beliefs. Of course, the proponent of the JAA might substantially revise the view so that there are epistemic constraints on both a group’s having a reason and the reason being a good one. For instance, perhaps a group has a reason, r, to believe that p if and only if its operative members would properly express a willingness to jointly accept r as the group’s reason to believe that p, where “properly” is understood in distinctively epistemic terms. On this view, the joint acceptance in question would have to be determined, not by the will of the operative members, but by the evidence available to them. Otherwise, there would be no way to ensure that Philip Morris has the relevant reason in IGNORING EVIDENCE, but lacks it in FABRICATING EVIDENCE. The problem with this approach, however, is that it ceases to be an inflationary nonsummative account of group justification. 
To see this, consider how this revised version of the JAA would handle IGNORING EVIDENCE: despite the fact that the operative members would not jointly accept that the dangers of smoking give Philip Morris a reason to believe that it should put warning labels on cigarette boxes, the group nonetheless has this reason because of the
evidence available to its members. But in what way is this a joint acceptance account when all of the work is done by the available evidence and joint acceptance is utterly irrelevant to whether the group has a reason? This point can be put in the form of a dilemma: either group reasons are determined by joint acceptance or they are not. If they are, the view succumbs to the Illegitimate Manipulation of Evidence Problem. If they are not, the view is not a joint acceptance account. Either way, inflationary non-summativism is left wanting.

2.4 Revisiting Divergence Arguments Divergence arguments provide the central grounding for an inflationary approach to group justification. We have seen that the paradigmatic version of such an approach—the joint acceptance account—has serious problems that motivate its rejection. But where does that leave us vis-à-vis divergence arguments? My goal in this section of the chapter is to argue that the two main divergence arguments independently fail. This should just about close the door to inflationism about group justification. Let’s begin with DIFFERENT EVIDENCE. Recall that the standard interpretation of this sort of case is that, while the jury as a group justifiedly believes that the defendant is innocent, none of the jurors justifiedly believe this proposition because their justification is defeated by the relevant reliable hearsay evidence. But why should we think that the notion of justification is epistemic in both evaluations? The reason that there might be the inclination to say that the jury justifiedly believes that the defendant is innocent is because hearsay evidence is deemed inadmissible by the court. However, being inadmissible is clearly not the same as being unreliable or otherwise non-truth-conducive. Consider, for instance, that hearsay evidence is generally inadmissible because a witness needs to be “brought to testify in court on the stand, where he may be probed and cross-examined as to the grounds of his assertion and of his qualifications to make it” (Wigmore 1904, p. 437). The problem with hearsay evidence mentioned here is not that it is more likely to be unreliable or lacking in evidential value, but rather that the opposing side is denied the possibility of confronting the source of the information. This is a practical or procedural concern, but not necessarily an epistemic one. This is made clear by the fact that we can imagine a piece of hearsay evidence that has been produced by a far more reliable process and is better grounded in evidence than a piece of firsthand evidence. Nevertheless, the former would be inadmissible in a court of law, while the latter would not be. Given this, the mere fact that something is ruled inadmissible does not necessarily reveal anything about its epistemic status. Applying these considerations to DIFFERENT EVIDENCE, the reliable hearsay evidence that the defendant was seen fleeing the scene of the crime with blood spatter on his clothes is highly epistemically relevant to the jury’s beliefs, even if the rules of the court prohibit it from being factored into their verdict. This shows that while both the jury and the individual jurors justifiedly believe that the defendant is guilty in an epistemic sense, the jury is legally justified in believing that the defendant is innocent. This is because, as was mentioned above, the law’s exclusion of hearsay evidence can be radically disconnected from truth-conduciveness, which is precisely what we find in DIFFERENT EVIDENCE. Thus, this case fails to establish what divergence arguments purport to show: namely, that the epistemic justification of a group’s beliefs can
diverge from the epistemic justification of the beliefs of its individual members.14 Now Schmitt may respond to this argument by reminding us that a jury is a chartered group and must therefore function according to its charter. As he says, “the court rightly excludes hearsay, and its legal capacity is the only capacity in which it operates” (Schmitt 1994, p. 274). Given this, insofar as the jury considers the hearsay evidence in question and forms a belief in the defendant’s guilt on this basis, it has ceased to be a jury. Thus, it is not the case that the jury justifiedly believes that the defendant is guilty.15 But this response will not do. Surely, juries can make mistakes or break the rules and still remain a jury. This happens with groups all the time. A basketball team might break the rules by its players repeatedly double-dribbling the ball, and yet it still remains a basketball team playing basketball. It is just a bad basketball team playing a very poor game of basketball. Similarly, a jury might consider hearsay evidence when forming its belief about a defendant’s innocence or guilt and nonetheless remain a jury engaged in deliberation. It is just a jury that has broken the rules. Moreover, unlike the basketball team, so long as its verdict is grounded only in admissible evidence, it is not even clear that the jury is overall a bad one. Of course, if a group breaks enough of the rules, or the right kind of rules, such as those that are constitutive, it might cease to be the group in question. If the players carry the ball across the court and never dribble it or attempt to make a basket, then perhaps they no longer make up a basketball team. The central point I wish to emphasize here, however, is that the mere fact that a chartered group breaks a rule of its charter does not lead to the group no longer existing. Given this, combined with the fact that the jury in DIFFERENT EVIDENCE is considering hearsay evidence only in the formation of its belief and not in issuing its verdict, there is no reason to conclude that it is not the jury that believes that the defendant is guilty. A further objection to Schmitt’s strategy for defending his reading of DIFFERENT EVIDENCE is that linking epistemic justification with the charter of a group succumbs to a version of the Illegitimate Manipulation of Evidence Problem. Schmitt considers the example of a court or jury whose charter excludes hearsay evidence as inadmissible, but there are no constraints on the charters of groups. Given this, what prevents a group from being formed whose primary charter is, for instance, to exclude any evidence that conflicts with its belief that aliens have visited Roswell, New Mexico? In such a case, the group could end up justifiedly believing that aliens have visited Roswell simply because it is illegitimately restricting the available evidence. Clearly, this is unacceptable. Let’s now turn to DIFFERENT RISK SETTINGS. Recall that Mathiesen’s interpretation of this case is that while the hiring committee justifiedly believes that Jones is a qualified candidate, not a single member justifiedly holds this belief because none believe this proposition. This is due to the fact that the members of the group are more epistemically cautious than the group is as a whole. I want to challenge the claim that the diverging doxastic states are both justified, but I want to do this by questioning the role of epistemic risk settings. According to Mathiesen, such risk settings can be determined by pragmatic factors.
This is crucial to DIFFERENT RISK SETTINGS, as the reason the hiring committee is less epistemically cautious is because it has been given a charge by the department to present a set of qualified candidates for the job. Given this, skepticism would be “out of place” in the reasoning of the group. But this opens the door to a version of the Illegitimate Manipulation of Evidence Problem: if epistemic risk settings can be determined by practical interests, and such settings can justify different doxastic states, what prevents groups from manipulating their risk settings precisely to suit their unwarranted practical purposes? For instance, given the financial interests of Philip Morris, it certainly makes sense
from a practical point of view for it to be extraordinarily cautious when it comes to accepting the testimony of scientists about the health hazards of smoking. Given this, we can end up with Philip Morris being epistemically justified in withholding belief about the dangers of smoking because of its extraordinarily high standards for evidence, even when belief is clearly called for. This shows that it is highly questionable whether risk settings can do the work that Mathiesen needs them to. Thus, we have compelling reasons to reject both the best examples of the inflationary approach and the divergence arguments meant to support them. This goes a long way toward closing the door on inflationary group epistemology. In particular, the joint acceptance account is not only the dominant version of inflationary non-summativism, but divergence arguments grounded in cases such as DIFFERENT EVIDENCE and DIFFERENT RISK SETTINGS are the primary defense offered for such an approach. If this account and these arguments fail, then so does the central case for inflationary non-summativism.

2.5 Deflationary Summativism, the Group Justification Paradox, and the Defeater Problem Given the serious problems facing an inflationary approach to understanding the justification of group beliefs that were developed in the previous chapter, a natural response is to move toward a deflationary one. The most widely accepted deflationary view is summativism, according to which the justification of a group’s belief is understood simply in terms of the justification of the individual members’ beliefs. More precisely, there are two aspects to such a view, corresponding to those found with respect to inflationary non-summativism. The negative thesis is: Deflationism: A group, G, justifiedly believing that p does not involve the group itself justifiedly believing that p, where this is over and above, or otherwise distinct from, the individual members of G justifiedly believing that p. The positive thesis is: Summativism: A group, G, justifiedly believing that p is understood only in terms of some or all of G’s members justifiedly believing that p. Deflationary summativism draws inspiration from a judgment aggregation framework.16 As may be recalled from Chapter 1, “Aggregation procedures are mechanisms a multimember group can use to combine (‘aggregate’) the individual beliefs or judgments held by the group members into collective beliefs or judgments endorsed by the group as a whole” (List 2005, p. 25).17 For instance, a dictatorial procedure, “whereby the collective judgments are always those of some antecedently fixed group member (the ‘dictator’)” (List 2005, p. 28), understands the judgment of the group in terms of the judgment of a single member—the dictator. A majority procedure, “whereby a group judges a given proposition to be true whenever a majority of group members judges it to be true,” understands the judgment of the group in terms of the judgments of a majority of its individual members (List 2005, p. 27). A supermajority procedure, whereby a group judges a given proposition to be true whenever a supermajority of group members judges
it to be true, understands the judgment of the group in terms of the judgments of a supermajority of its individual members. And a unanimity procedure, “whereby the group makes a judgment on a proposition if and only if the group members unanimously endorse that judgment,” (List 2005, p. 30) understands the judgment of the group in terms of the unanimous agreement of all of its members. Though there are obvious differences between these views, they all characterize the judgment of a group in terms of the judgments of the individual members. This framework for aggregating member judgments into collective ones can easily be extended to justified beliefs. Indeed, Alvin Goldman does just this18 and, in so doing, provides the most detailed deflationary summativist view to date.19 One of the first questions to address in developing such an aggregative view of justified belief is how to understand the relationship between group belief and group justifiedness. While Goldman doesn’t explicitly endorse a particular account of group belief, he follows List and Pettit (2011) in working within a framework in which group beliefs are the result of a function that takes profiles of individual members’ beliefs as inputs and yields collective beliefs as outputs. Goldman calls such a mapping a belief aggregation function, or BAF. Examples of BAFs mirror those for judgments discussed above; for instance, according to the majoritarian rule, a group believes that p if and only if a majority of its members believe that p, and so on. What determines whether a BAF is an appropriate belief-forming rule for a group? As we saw in Chapter 1, there are various possible answers to this question. On one view, groups are able to select their own BAFs as they see fit. On another, BAFs are determined entirely by the socio-psychological forces that are operative within the group’s structure without any choice or input from the group itself. Goldman doesn’t commit himself to a particular approach here. What he does commit himself to, however, is that whatever account of belief is endorsed, it does not bear a necessary connection to a theory of group justifiedness. For instance, consider the following example of what Goldman calls a justification aggregation function, or JAF: JAF-1: If at least sixty percent of G’s members justifiedly believe that p, then G too is justified in believing that p. (Goldman 2014, p. 17) JAF-1 is a supermajoritarian rule for group justifiedness, according to which a group justifiedly believes that p if and only if a supermajority of its members justifiedly believe that p. But, according to Goldman, BAFs and JAFs can and often do diverge for a given group. For instance, while a majoritarian BAF might be used for arriving at a group’s belief, it is nonetheless perfectly acceptable for a supermajoritarian JAF to be relied upon for determining this same group’s justifiedness. He writes: Notice that JAF-1 mirrors the [supermajoritarian belief aggregation function previously discussed]. There is no necessary connection, however, in the sense that a JAF must always ‘sanction,’ or approve of, whatever BAF a given group selects. On the contrary, a given BAF may be one that a suitable JAF would classify as unsuitable for generating justified group beliefs. (Goldman 2014, p. 17)
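Purely by way of illustration, these aggregation rules can be written down as simple functions from a profile of member attitudes to a group attitude. The sketch below is mine rather than List's or Goldman's, and the profile representation and function names are invented for expository purposes; the same rules, applied to a profile of justified member beliefs, are how a JAF such as JAF-1 would operate.

# A profile maps each member to whether she holds the relevant attitude toward p
# (belief, or justified belief, depending on whether a BAF or a JAF is at issue).
profile = {"m1": True, "m2": True, "m3": False, "m4": True, "m5": False}

def dictatorial(profile, dictator):
    # The group's attitude is simply the designated member's attitude.
    return profile[dictator]

def majoritarian(profile):
    # The group holds the attitude iff more than half of its members do.
    return sum(profile.values()) > len(profile) / 2

def supermajoritarian(profile, threshold=0.6):
    # The group holds the attitude iff at least the given fraction of members do.
    # With threshold=0.6, applied to justified member beliefs, this mirrors JAF-1.
    return sum(profile.values()) >= threshold * len(profile)

def unanimity(profile):
    # The group holds the attitude iff every member does.
    return all(profile.values())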

According to Goldman, then, an account of group belief need not constrain a theory of group justifiedness, nor need the latter constrain the former. There are, however, reasons to doubt this claim. To see this, consider the following: DISCONNECT: It
is part of the bylaws of the Vegetarian Club at Northwestern University (VCNU)
that the elected President of the club determines the beliefs for the entire group. Given this dictatorial BAF, combined with the President’s true belief that vegan burgers are healthier than hamburgers, it is the VCNU’s belief that vegan burgers are healthier than hamburgers. At the same time, a supermajoritarian JAF is used for arriving at the VCNU’s justifiedness in holding this belief. Since 60 of the 100 members justifiedly believe that vegan burgers are healthier than hamburgers, the VCNU justifiedly believes this. However, the President herself holds this belief purely because of wishful thinking and is thus not among the 60 members who believe this justifiedly. In DISCONNECT, a dictatorial BAF is combined with a supermajoritarian JAF which, according to Goldman, seems to be a legitimate pairing. But two results follow from this, both of which are problematic. First, the process or basis responsible for the formation of the group’s belief is entirely disconnected from the justifying features of this very belief. In particular, while the President solely determines the VCNU’s belief, 60 different members of the group affect whether this particular belief is justified. This is quite an odd result. It would be on a par in the individual case with saying that while reason is responsible for your believing that p, its justifiedness is determined entirely by testimony. Second, while the belief itself is formed through an epistemically baseless and unreliable process—wishful thinking—it nonetheless ends up being justifiedly held. This is tantamount to saying that the origin of a belief can be wholly without epistemic significance to its justifiedness. Such a result should be regarded as epistemically unacceptable, especially by a process reliabilist such as Goldman. What is even more objectionable here, however, is that the door is left wide open for Gettier cases to abound.20 For if group belief and group justification can be determined by entirely different aggregation functions, then not only can the process responsible for the formation of the group’s belief be disconnected from its justifying features, the truth of the belief can also be disconnected from its being justified. For instance, in DISCONNECT, it is simply a matter of luck that the President of the VCNU ends up with a true belief that vegan burgers are healthier than hamburgers, since forming beliefs purely on the basis of wishful thinking is surely not likely to result in mostly true beliefs. Given this, it is similarly a matter of luck that the VCNU ends up holding this true belief. But then there is no connection between the truth of the group’s belief and its justifiedness, since the 60 members who believe that vegan burgers are healthier than hamburgers for good reasons have nothing at all to do with the formation of the group’s true belief in the first place. This description of the situation perfectly parallels the classic diagnosis of Gettier cases, and reveals the extent to which this move of separating group belief and group justifiedness leaves group knowledge vulnerable to being Gettiered. Of course, an obvious solution to this problem is to require of any JAF that it aggregates the justified beliefs of at least the very members of the group who are responsible for the formation of the groups’ belief. But this is just to deny the original claim that accounts of group belief and group justifiedness can float freely of one another. 
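To make vivid how DISCONNECT pulls these two functions apart, here is a toy rendering of the case in the same style; the member labels and numbers simply follow the example, and nothing here is meant to capture Goldman's own formal treatment.

# DISCONNECT as a toy model: the President (m1) fixes the group's belief via a
# dictatorial BAF, while a 60 percent supermajoritarian JAF fixes its justifiedness.
members = ["m%d" % i for i in range(1, 101)]
believes = {m: False for m in members}
justifiedly_believes = {m: False for m in members}

believes["m1"] = True                     # the President, purely via wishful thinking
for m in members[1:61]:                   # m2 through m61: the 60 justified believers
    believes[m] = True
    justifiedly_believes[m] = True

group_believes = believes["m1"]                                              # dictatorial BAF
group_justified = sum(justifiedly_believes.values()) >= 0.6 * len(members)   # JAF-1-style rule

print(group_believes, group_justified)    # True True: a "justified" group belief whose
                                          # formation traces only to wishful thinking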
Thus, BAFs and JAFs must work in synch with one another, lest cases like DISCONNECT proliferate.21 With this in mind, let’s turn to deflationary summativism. According to Goldman, there are two different conceptions of group justification within an aggregative framework—what he calls horizontal and vertical justifiedness. The best way to understand these notions is to consider the following case: DIFFERENT BASES: G
is a group whose members consist of 100 guards at the British Museum,
M1–M100. Each of the first 20 guards, M1–M20, justifiedly believes that guard Albert is planning an inside theft of a famous painting (= A). By deduction from A, each of them infers the (existential) proposition that there is a guard who is planning such a theft (= T). The remaining 80 guards do not believe and are not justified in believing A. Each of the second 20 guards, M21– M40, justifiedly believes that Bernard is planning an inside theft (= B) and deductively infers T from B. The other 80 members do not believe B and are not justified in believing B. Each of a third group of 20 members, M41–M60, justifiedly believes that guard Cecil is planning an inside theft (= C) and deductively infers T from C. The 80 others do not believe and are not justified in believing C. Thus, 60 members of G (justifiedly) believe T by deduction from some premise he/she justifiedly believes. (Goldman 2014, p. 16) Most of the leading aggregation procedures—for example, supermajoritarian and majoritarian— have the result that G believes T. But does G justifiedly believe this proposition? Goldman writes: …G’s belief in T may be considered from two perspectives: the horizontal perspective and the vertical perspective. The horizontal perspective addresses the question of the J-status of G’s belief in T solely in terms of other beliefs of G, i.e., group-level beliefs.…G’s belief in T is unjustified in terms of horizontal J-dependence. This is because, although G believes T, G does not infer T from any justified group-level belief of its own. The situation is different, however, when we consider G’s belief in T by reference to vertical J-dependence. Consider all of the members’ beliefs in T and the proportion of them that are justified….given…[the] vertical criterion of J-dependence, G’s belief in T is justified (because 60% of G’s members justifiedly believe T). (Goldman 2014, p. 18) On Goldman’s view, then, G justifiedly believes that someone is planning an inside theft at the museum when vertical justifiedness is considered, that is, when the justificational status of the group’s belief is determined, not by the group’s beliefs, but by all of the members’ relevant beliefs and the proportion of them that are justified. In particular, because the individual members have different bases for their beliefs that someone is planning a theft, there is no grouplevel basis from which the group’s belief to this effect can be justifiedly inferred. Hence, there is no horizontal justification. But if the group’s belief that someone is planning a theft is viewed independently of a group-level basis, then the proportion of the members’ relevant beliefs that are justified render it vertically justified. Indeed, it is precisely this vertical perspective that Goldman adopts when offering his positive account of the justification of group beliefs. Moreover, Goldman claims that it is preferable to think of justifiedness as a matter of degree, and thus to regard it as a gradable notion instead of a categorical one. Rather than sketch a fullblown theory of justificational gradability for collective entities, however, he offers a few sample principles so as to give a sense of the results such a theory will deliver. 
Assuming that members’ doxastic attitudes have categorical justificational status, a central principle is the following: (GJ) If a group belief that p is aggregated based on a profile of member attitudes toward that p, then (ceteris paribus) the greater the proportion of members who justifiedly believe that p and the smaller the proportion of members who justifiedly reject that p, the greater the group’s level, or grade, of justifiedness in believing that p. (Goldman 2014, p. 28)
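(GJ) is stated only comparatively, so any particular formula goes beyond what Goldman commits himself to; but, as a rough illustration of the intended orderings, one might score a group's grade of justifiedness by the balance of justified believers over justified rejecters:

def gj_grade(justified_believers, justified_rejecters, group_size):
    # An illustrative stand-in for (GJ): the grade rises as more members justifiedly
    # believe p and falls as more justifiedly reject p. Only the resulting ordering,
    # not the particular numbers, is meant to matter.
    return (justified_believers - justified_rejecters) / group_size

print(gj_grade(60, 0, 100))    # 0.6 for a group in which 60 of 100 members justifiedly believe p
print(gj_grade(100, 0, 100))   # 1.0 for a group in which all 100 do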

On Goldman’s view, the justificational statuses of members’ doxastic attitudes depend on the processes by which they severally arrived at their respective attitudes and, as (GJ) makes clear, the justificational status of the group belief depends on the justificational statuses of the members’ attitudes. Put succinctly, group justifiedness increases with a greater percentage of individual member justifiedness.22 (GJ) is not only intuitively plausible, it also is easily supported by applying an aggregative framework to group justifiedness. Despite this, I will argue in what follows that (GJ) should be rejected, as it leads to what I call the Group Justification Paradox and the Defeater Problem. To begin, let us compare DIFFERENT BASES with the following version of the case: CONFLICTING BASES: G
is a group whose members consist of 100 guards at the British Museum, M1–M100, each of whom justifiedly believes that an inside theft of a famous painting is being planned by only one of a total of five possible guards—Albert, Bernard, Cecil, David, and Edmund. Each of the first 20 guards, M1–M20, justifiedly believes that only guard Albert is planning the inside theft (= A). By deduction from A, each of them infers the (existential) proposition that there is a guard who is planning such a theft (= T). The remaining 80 guards do not believe and are not justified in believing A. Each of the second 20 guards, M21–M40, justifiedly believes that only Bernard is planning the inside theft (= B) and deductively infers T from B. The other 80 guards do not believe and are not justified in believing B. Each of a third group of 20 members, M41–M60, justifiedly believes that only guard Cecil is planning the inside theft (= C) and deductively infers T from C. The 80 others do not believe and are not justified in believing C. Each of a fourth group of 20 members, M61–M80, justifiedly believes that only guard David is planning the inside theft (=D) and deductively infers T from D. The remaining 80 guards do not believe and are not justified in believing D. The final group of 20 members, M81– M100, justifiedly believes that only guard Edmund is planning the inside theft (= E) and deductively infers T from E. The 80 others do not believe and are not justified in believing E. Thus, 100 members of G justifiedly believe T by deduction from some premise he/she justifiedly believes.23 In the original DIFFERENT BASES, 60 out of 100 members of G justifiedly believe that there is a guard who is planning an inside theft of a famous painting at the museum and, thus, the group itself justifiedly believes this proposition. In CONFLICTING BASES, 100 out of 100 members of G justifiedly believe that there is a guard who is planning an inside theft of a famous painting at the museum and, thus, the group again justifiedly believes this proposition. According to (GJ), then, the group’s level of justifiedness is greater in CONFLICTING BASES than it is in DIFFERENT BASES since the proportion of members who justifiedly hold the relevant proposition in the former is greater than in the latter. But let us take a closer look at CONFLICTING BASES. Each of the first 20 guards, M1–M20, justifiedly believes that only guard Albert is planning the inside theft. Given this, combined with the fact that all of the guards are aware that Albert, Bernard, Cecil, David, and Edmund are the only possible thieves, each of these 20 guards also justifiedly believes that Bernard, Cecil, David, and Edmund are not planning the theft. Each of the second 20 guards, M21–M40, justifiedly believes that only Bernard is planning the inside theft and, given their other background beliefs, also justifiedly believes that Albert, Cecil, David, and Edmund are not planning the theft. Similar
considerations apply with respect to the other three subgroups: each believes that one, and only one, guard is planning the theft, and believes that the other four possible guards are not so planning.24 We are now in a position to see the Group Justification Paradox unfold: for each of the five possible candidates for the theft in question, 80 out of 100 guards justifiedly believe that he is not planning it. According to nearly every judgment aggregation function, it follows from this that the group, G, also justifiedly believes that each of the five possible candidates is not planning the theft. Since the group justifiedly recognizes that Albert, Bernard, Cecil, David, and Edmund are the only possible candidates for planning the theft, the group justifiedly believing that none of them is planning the theft amounts to the group justifiedly believing that no one is planning the theft. But, according to the vertical perspective, G also justifiedly believes that someone is planning an inside theft of a famous painting at the British Museum since 100 members justifiedly believe this. (GJ) thus leads to what we might call the Group Justification Paradox: G ends up justifiedly believing both that no one is planning the theft and that someone is planning the theft.25 Now, it might be noticed that this paradox relies on accepting that conjunction is closed for justified group belief. In particular, the group justifiedly believes that each of the five guards in question is not planning the theft. Thus, G believes that it is not Albert, not Bernard, not Cecil, and so on. Let us represent the group’s justified beliefs here as follows:

~A
~B
~C
~D
~E

In addition, the group justifiedly believes that someone is planning the theft and that Albert, Bernard, Cecil, David, and Edmund are the only five possible candidates. Thus, G justifiedly believes:

(A v B v C v D v E)

The contradiction found in the Group Justification Paradox is then generated by the closure of conjunction, that is, by moving from the first set of justified group beliefs above to the following:

(~A & ~B & ~C & ~D & ~E)

This can be rewritten as the group justifiedly believing:

~(A v B v C v D v E)

Thus, the result is that G justifiedly believes both (A v B v C v D v E) and ~(A v B v C v D v E), that is, both that someone is planning the theft and that no one is planning the theft. Hence, a contradiction. Given this, a proponent of (GJ) might respond to the Group Justification Paradox
by denying that conjunction is closed for justified group belief, thereby avoiding the contradiction in question. But notice that even if the closure of conjunction is rejected, G is still left with an obviously inconsistent set of beliefs, even if not an outright contradictory one. Specifically, since all 100 guards justifiedly believe that someone is planning the theft, the group justifiedly believes this, too. Moreover, all 100 guards justifiedly believe that the thief must be either Albert, Bernard, Cecil, David, or Edmund and thus that one of the propositions—A, B, C, D, or E—must be true. Given the vertical picture of justification, it follows that the group also believes one of these propositions is true. Thus, G justifiedly believes:

(A v B v C v D v E)

With respect to each of these five propositions, however, 80 out of 100 guards justifiedly believe it is false. On nearly every judgment aggregation function, this means that the group itself justifiedly believes that A, B, C, D, and E are false. Hence, G also justifiedly believes:

~A
~B
~C
~D
~E

If the vertical picture is correct, then, G justifiedly believes an obviously inconsistent set of propositions, which is enough for the Group Justification Paradox to undermine (GJ). It should be further noted that unlike with some other paradoxes, this is not an inconsistent set of beliefs that it is nevertheless reasonable to have. For example, the Preface Paradox envisions an author apologizing for the errors that are contained in her book. In so doing, the author has ensured that there is at least one error in the book because she is now committed to an inconsistent set of claims: each of the individual claims made in the book, plus the claim that at least one of them is false. Nevertheless, the author’s apology in the preface is epistemically reasonable, given the excellent grounds we all have for our own fallibility.26 But notice: the author in the Preface Paradox does not have any particular reason to revise one claim rather than another. Looking at the grounds for holding any particular belief, the author would weigh the evidence in its favor against the very small chance that it is incorrect. Given this, were the author to think through the matter, she wouldn’t change any of her beliefs. This is not the case with the Group Justification Paradox, though. For each of the claims that one of the five guards is planning the inside theft, 80 members of the group do have evidence against it that they would present. Were the group to collectively deliberate, it would have a great deal of work to do before it could reach a stable position. Moreover, with respect to the Preface Paradox, we can suppose that the beliefs are largely independent, in the sense that they can be accepted or rejected without this having any implication for the other claims in the book. But this is not the case with the Group Justification Paradox: accepting one sub-group’s claims necessarily means rejecting the claims of the other sub-groups. For this reason, they can’t all be comfortably encompassed in a single point of view. Finally, the author in the Preface Paradox can act consistently by accepting each claim individually while also not, for instance, betting on all of the claims being true. The
group in the Group Justification Paradox, however, cannot act consistently. In discussion with the police, for instance, the group will be both advising that one of the five guards is planning the theft and then ruling out each of them as the suspect. Thus, while the inconsistency in some paradoxes might be rationally tolerable, the inconsistency in the Group Justification Paradox is not. That (GJ) succumbs to the Group Justification Paradox is therefore sufficient for calling this view into question. However, reflecting on CONFLICTING BASES also enables us to see that (GJ) is false. For despite the fact that the proportion of members in CONFLICTING BASES who justifiedly believe that someone is planning the theft in question is greater than the proportion of members in DIFFERENT BASES who justifiedly believe this, the group’s level of justifiedness is lower in the former than it is in the latter. This is because the group in CONFLICTING BASES has a defeater for believing that someone is planning an inside theft of a famous painting at the museum, but not in DIFFERENT BASES. Let’s call this the Defeater Problem for (GJ). To see this, we should first take a brief detour through the nature of defeaters. There are two central kinds of defeaters that are typically taken to be incompatible with justification. First, there are what we might call psychological defeaters, which can be either rebutting or undercutting. A psychological defeater is a doubt or belief that is had by S, and indicates that S’s belief that p is either false (i.e., rebutting) or unreliably formed or sustained (i.e., undercutting). Defeaters in this sense function by virtue of being had by S, regardless of their truth-value or epistemic status.27 Second, there are what we might call normative defeaters, which can also be either rebutting or undercutting. A normative defeater is a doubt or belief that S ought to have, and indicates that S’s belief that p is either false (i.e., rebutting) or unreliably formed or sustained (i.e., undercutting). Defeaters in this sense function by virtue of being doubts or beliefs that S should have (whether or not S does have them), given the presence of certain available evidence.28 The underlying thought here is that certain kinds of doubts and beliefs—either that a subject has or should have—contribute epistemically unacceptable irrationality to doxastic systems and, accordingly, justification can be defeated or undermined by them. Moreover, a defeater may itself be either defeated or undefeated. Suppose, for instance, that Harold believes that there is a bobcat in his backyard because he saw it there this morning, but Rosemary tells him, and he thereby comes to believe, that the animal is instead a lynx. In such a case, the justification that Harold had for believing that there is a bobcat in his backyard has been defeated by the rebutting belief that he acquires on the basis of Rosemary’s testimony. But since psychological defeaters can themselves be beliefs, they, too, are candidates for defeat. For instance, suppose that Harold consults a North American wildlife book and discovers that the white tip of the animal’s tail confirms that it was indeed a bobcat, thereby providing him with a defeater-defeater for his original belief that there is a bobcat in his backyard. And, as should be suspected, defeater-defeaters can also be defeated by further doubts and beliefs, which, in turn, can be defeated by further doubts and beliefs, and so on.
Similar considerations involving evidence, rather than doubts and beliefs, apply in the case of normative defeaters. When one has a defeater for one’s belief that p that is not itself defeated, one has what is called an undefeated defeater for one’s belief that p. It is the presence of undefeated defeaters, not merely of defeaters, that is incompatible with justification and, thus, knowledge. With these points in mind, let us return to the scenario in CONFLICTING BASES. It should be fairly clear that G has a rebutting psychological defeater for believing that someone is planning an inside theft of a famous painting at the museum that is not itself defeated. In particular, G’s belief that no one is planning such a theft indicates that G’s belief that someone is planning such
a theft is false, and hence the target belief’s justification has been defeated.29 This means that G’s belief that someone is planning a theft is unjustified, despite the fact that every member of G justifiedly believes this. In contrast, there is no reason to regard the group’s belief as defeated in DIFFERENT BASES in the same way. For even if we add to the original case that each of the 100 guards justifiedly believes that the inside theft is being planned by only one of a total of five possible guards—Albert, Bernard, Cecil, David, and Edmund—the most we get is that 40 percent believe that it is not Albert, 40 percent believe that it is not Bernard, 40 percent believe that it is not Cecil, 60 percent believe that it is not David, and 60 percent believe that it is not Edmund. This in no way provides the group with the belief that no one is planning an inside theft at the museum. Given this, G’s level of justifiedness is greater in DIFFERENT BASES than it is in CONFLICTING BASES, despite the fact that the proportion of members in the former who justifiedly believe that someone is planning the theft in question is lower than the proportion of members in the latter who justifiedly believe this. The Defeater Problem thus shows that (GJ) is false.30
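The structure of the problem can also be displayed in miniature. The following sketch, with invented labels that track the case, aggregates the guards' justified beliefs in CONFLICTING BASES proposition by proposition under a simple majority rule; it is an illustration of the argument above rather than anyone's proposed formal account.

# CONFLICTING BASES: five subgroups of 20 guards, each pinning the planned theft
# on exactly one of the suspects A, B, C, D, E (and so believing the other four innocent).
subgroup_sizes = {"A": 20, "B": 20, "C": 20, "D": 20, "E": 20}
total = sum(subgroup_sizes.values())

group_beliefs = {}
for suspect in subgroup_sizes:
    # 80 of the 100 guards justifiedly believe this suspect is not the planner.
    exculpators = total - subgroup_sizes[suspect]
    group_beliefs["not-" + suspect] = exculpators > total / 2
group_beliefs["someone is planning the theft"] = total > total / 2   # all 100 believe T

print(group_beliefs)
# Every "not-X" comes out True alongside "someone is planning the theft": aggregated
# proposition by proposition, the group holds the inconsistent set at the heart of
# the Group Justification Paradox.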

2.6 The Collective Evidence Problem We have seen, then, that (GJ) succumbs to both the Group Justification Paradox and the Defeater Problem. So where does this leave us? At the very least, the two problems afflicting (GJ) show that group justifiedness cannot be determined by aggregating individual member justifiedness independently of the relevant bases. Indeed, it is precisely because (GJ) permits such independence that the problems stemming from CONFLICTING BASES arise. For if group justifiedness were a matter of aggregating members’ doxastic states plus their bases, then the group in CONFLICTING BASES wouldn’t justifiedly believe that someone is planning an inside theft of a famous painting at the British Museum since there is no single justified belief + base combination that is had by at least a majority of the group’s members. But if the group doesn’t hold the justified belief that someone is planning a theft, then there is no paradox and there is no belief to be defeated. The problem with this strategy, however, is that it also has the result that the group in DIFFERENT BASES doesn’t justifiedly believe that someone is planning a theft at the British Museum, which was the very case used to motivate vertical justification. So, if the goal is to avoid the Group Justification Paradox and the Defeater Problem while also retaining the notion of vertical justifiedness for groups, we need to look elsewhere. To this end, let’s consider the weakest conclusion that can be drawn from these two problems. Consider this: the central feature that distinguishes DIFFERENT BASES from CONFLICTING BASES is that, as their names suggest, the bases of the individual members’ beliefs conflict in the latter but not necessarily in the former. For as the original case is described, 20 percent of the group believes that Albert is planning a theft of the museum, 20 percent believes that Bernard is planning a theft, and 20 percent believes that Cecil is planning a theft. By deduction, each of these groups infers the (existential) proposition that there is a guard who is planning such a theft. But since it is not built into the case that they believe that only one guard is planning such a theft, it is open for them to believe that more than one guard is. Thus, all 60 out of 100 guards might be correct in their respective beliefs because it is possible that Albert, Bernard, and Cecil are together planning a theft. DIFFERENT BASES, then, does not necessarily involve bases that conflict. So, the weakest conclusion that can be drawn from the Group Justification Paradox and the
Defeater Problem is that group justifiedness cannot aggregate individual member justifiedness when the latter involves conflicting bases. Perhaps, then, the spirit of (GJ) can be saved by revising it as follows: (GJ1) If a group belief that p is aggregated based on a profile of member attitudes toward that p and the individual members’ bases for believing that p are non-conflicting, then (ceteris paribus) the greater the proportion of members who justifiedly believe that p and the smaller the proportion of members who justifiedly reject that p, the greater the group’s level, or grade, of justifiedness in believing that p. (GJ1) preserves group justifiedness in DIFFERENT BASES—at least when it is read in the way specified above—but rules it out in CONFLICTING BASES, which is exactly what the proponent of vertical justifiedness needs. But now consider the following: NON-CONFLICTING BASES: G
is a group whose members consist of 100 guards at the British Museum, M1–M100, each of whom justifiedly believes that a man was responsible for an inside theft of a famous painting. Each of the first 20 guards, M1–M20, justifiedly believes that the thief exited a men’s bathroom right before the theft (= B). From B, each of them infers the proposition that it was a man who committed the theft (= M). The remaining 80 guards do not believe and are not justified in believing B. Each of the second 20 guards, M21–M40, justifiedly believes that the thief has a goatee (= G) and infers M from G. The other 80 members do not believe and are not justified in believing G. Each of a third group of 20 members, M41–M60, justifiedly believes that the thief was greeted as “sir” (= S) while walking into the museum and infers M from S. The 80 others do not believe and are not justified in believing S. Each of a fourth group of 20 members, M61–M80, justifiedly believes that the thief was talking in a baritone voice (=V) and infers M from V. The remaining 80 members do not believe and are not justified in believing V. The final group of 20 members, M81–M100, justifiedly believes that the thief’s name is William (= W) and infers M from W. The other 80 members do not believe and are not justified in believing W. Thus, 100 members of G justifiedly believe M by inference from some premise he/she justifiedly believes. At the same time, however, each subgroup of 20 guards also has counterevidence for the basis of the justified beliefs of a different subgroup. M1–M20 justifiedly believe that not-G since they have evidence that the thief’s goatee is fake. M21–M40 justifiedly believe that not-B since they have evidence that the bathroom that the thief was seen exiting is in fact a family bathroom. M41–M60 justifiedly believe that not-W since they have evidence that the thief has been using a pseudonym. M61–M80 justifiedly believes that not-S since they have evidence that it was actually the thief’s companion who was greeted as “sir” upon entering the museum. Finally, M81–M100 justifiedly believe that not-V since they have evidence that the baritone voice heard at the scene of the crime wasn’t the thief’s but was instead a recording. As the name suggests, the bases of the members’ beliefs in this case are non-conflicting and, indeed, they are mutually supporting. In particular, when B, G, S, V, and W are taken together, they provide powerful support—far more than any piece of evidence taken in isolation—for
concluding that the person responsible for an inside theft of a famous painting at the British Museum is a man. Thus, NON-CONFLICTING BASES clearly satisfies the relevant part of the antecedent of (GJ1) and results in the group justifiedly holding this belief about the thief. But notice the second part of the case: in addition to the bases of the members’ beliefs being non-conflicting, each subgroup of 20 guards has counterevidence for the basis of the justified beliefs of a different subgroup. So, for instance, while the basis for guards M1–M20 justifiedly believing that it was a man who committed the theft is that the thief exited a men’s bathroom, they also justifiedly believe that the thief’s goatee is fake, which is counterevidence for the basis of the belief for guards M21–M40. Similar considerations apply to all of the subgroups, resulting in there being no basis that does not have 20 percent of the group justifiedly believing its negation. To my mind, this results in the group of guards in NON-CONFLICTING BASES not justifiedly believing that a man was responsible for an inside theft of a famous painting at the British Museum. This is because there is not a single basis of the members’ beliefs that is free of direct and compelling counterevidence. Otherwise put, there is no basis that would survive full disclosure: were all 100 members to fully disclose all of their evidence and counterevidence, there would be no remaining reason to believe that the thief is a man. Despite this, since 100 members of G justifiedly believe M, (GJ1) grants a very high level of justifiedness to G in holding this belief. It also is not difficult to imagine a variant of NON-CONFLICTING BASES—let’s call it VARIANT— that is exactly as the original case is described, except for two modifications: (i) only 60 of the 100 guards justifiedly believe that a man was responsible for an inside theft of a famous painting at the British Museum, and (ii) none of the 100 members possess counterevidence for any of the relevant bases. According to (GJ1), G has a lower level of justifiedness in the belief that the thief is a man in VARIANT than in NON-CONFLICTING BASES because there is a significantly greater proportion of members who justifiedly hold this belief in the latter than there is in the former. But this seems wrong. In particular, even though there are fewer members who justifiedly hold the relevant belief in VARIANT, there is far greater epistemic support for the group’s belief in this case than there is in NON-CONFLICTING BASES when the collective entity is viewed as a whole. Let us call this general objection to (GJ1) the Collective Evidence Problem. It might be thought that there is an easy solution to this problem. For why can’t the counterevidence in NON-CONFLICTING BASES be understood in terms of the group possessing an undefeated defeater for believing that the thief is a man? Accordingly, why can’t (GJ1) simply be modified to rule this out as follows: (GJ2) If a group belief that p is aggregated based on a profile of member attitudes toward that p and the individual members’ bases for believing that p are non-conflicting, then (ceteris paribus) the greater the proportion of members who justifiedly believe that p and the smaller the proportion of members who justifiedly reject that p, the greater the group’s level, or grade, of justifiedness in believing that p, so long as the group does not possess an undefeated defeater for believing that p. 
If (GJ2) is combined with the claim that G has an undefeated defeater for believing that the thief is a man, then perhaps the spirit of vertical justification can be retained while having the resources for denying justified belief to G in NON-CONFLICTING BASES. There is, however, a central problem with this strategy; namely, that it is not at all clear how
to understand the group having an undefeated defeater in NON-CONFLICTING BASES within the vertical justification framework. To see this, recall that the paradigmatic instance of vertical group justifiedness is when members’ justifiedness is aggregated without taking into account the corresponding bases. Thus, a group can end up highly justified in believing that p, so long as a significant percentage of members justifiedly believe that p, regardless of what it is that justifies the members’ beliefs. The corresponding view of vertical group defeat, then, would permit the group to have a defeater for believing that p, so long as a significant percentage of the members have a defeater for believing that p, regardless of what it is that does the defeating work in question. So, for instance, on this view, a group might have a defeater for believing that p even when the members have quite different counterevidence regarding whether p, just as a group might justifiedly believe that p even when members have quite different reasons for believing that p. But notice: this is not at all what is found in NON-CONFLICTING BASES. In this case, not a single member of the group has a defeater for believing that the thief is a man since not a single member has counterevidence for his particular basis for this belief. If group justification and, correspondingly, group defeat is understood in the atomistic aggregative way at the heart of vertical justification—where doxastic states of members are aggregated independently of their relations to other doxastic states—then there seems to be no sense in which the group has a defeater for believing that the thief is a man in NON-CONFLICTING BASES. Of course, the proponent of (GJ2) might simply introduce an altogether different account of group defeat, one that is able to accommodate G’s belief being defeated in NON-CONFLICTING BASES. But it should be clear that whatever account of group defeat is offered such that the group’s belief that the thief is a man ends up defeated, it will be working with a wildly different conception of group justifiedness than that found in the vertical conception. For as discussed above, there is no sense in which the group’s belief is vertically defeated in NON-CONFLICTING BASES. Thus, this strategy will not save the aggregative version of deflationary summativism from the Collective Evidence Problem.
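The "full disclosure" test appealed to above can likewise be made concrete. In the sketch below, the labels for bases and counterevidence are simply lifted from NON-CONFLICTING BASES; the check itself is only a toy rendering of pooling the members' evidence, not a proposed analysis of group defeat.

# Each subgroup's basis for believing that the thief is a man (M), paired with the
# counterevidence against that basis held by some other subgroup.
bases = {
    "B": "the thief exited a men's bathroom",
    "G": "the thief has a goatee",
    "S": "the thief was greeted as 'sir'",
    "V": "the thief spoke in a baritone voice",
    "W": "the thief's name is William",
}
counterevidence = {
    "B": "the bathroom is a family bathroom",      # held by M21-M40
    "G": "the goatee is fake",                     # held by M1-M20
    "S": "it was the companion who was greeted",   # held by M61-M80
    "V": "the baritone voice was a recording",     # held by M81-M100
    "W": "the name is a pseudonym",                # held by M41-M60
}

# After full disclosure, a basis still supports M only if no member has counterevidence against it.
surviving_bases = [b for b in bases if b not in counterevidence]
print(surviving_bases)   # [] -- no basis survives, so the pooled evidence no longer supports M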

2.7 The Group Normative Obligations Problem

We have seen that modifications to (GJ) fail to solve the problems afflicting the aggregative account of group justifiedness. There is one further objection facing this view that I would like to develop, which can be seen by considering the following: GROUP NORMATIVE OBLIGATIONS: G
is a group whose members consist of 3 nurses employed at a nursing home, N1–N3, each of whom justifiedly believes that patient O’Brien is not at risk of dying. N1 is aware that she forgot to give O’Brien his first medication, but she also justifiedly believes that this act of negligence alone is not sufficient to put him in danger of death. N2 is aware that she forgot to give O’Brien his second medication, but she also justifiedly believes that this act of negligence alone is not sufficient to put him at risk of serious health problems. And N3 is aware that she forgot to give O’Brien his third medication, but she also justifiedly believes that this act of negligence alone is not sufficient to put his life in jeopardy. At the same time, however, N1–N3 all justifiedly believe that O’Brien missing all three of his medications would put him at serious risk of dying.

Moreover, it is an explicit requirement of the collective unit comprising the positions held by N1–N3 that they always communicate with one another about the patients for whom they mutually care. Despite this, N1–N3 do not share their respective acts of negligence with one another, and so each justifiedly believes that the other nurses successfully gave O’Brien his medicine. Thus, through failing to fulfill their responsibilities qua group members, N1–N3 lack crucial evidence that they should have had and that would reveal the epistemic deficiency of their beliefs that O’Brien is not at risk of dying. Given the evidence available to N1–N3 individually, each justifiedly believes that O’Brien is not at risk of dying. Moreover, as individual epistemic agents, N1–N3 are not neglecting any epistemic duties. In particular, subtract their membership in G and they have no obligation, epistemic or otherwise, to discuss O’Brien’s care with one another. Indeed, this can even be built directly into the case—perhaps because of concerns about the unreliability of gossip, nurses are prohibited from discussing patients’ care, unless they are members of a unit. As a group, however, matters are quite different. Given the normative obligations that N1–N3 are bound by as members of their nursing unit, they have an epistemic duty to consult with one another about O’Brien’s care. Were they to do what epistemically they ought to, they would all be aware of the fact that he failed to receive all three of his medications, and thus they would each justifiedly believe that O’Brien is at risk of dying. But failing to possess evidence through neglecting one’s epistemic duties does not get one off the hook for being responsible for such evidence, either as an individual or as a group. If one ought to be aware of evidence, this is enough for preventing epistemic justification, regardless of whether one in fact possesses it. Thus, while every member of the nursing unit justifiedly believes that O’Brien is not at risk of dying, the group itself does not justifiedly believe this. Let us call this objection to the aggregative account the Group Normative Obligations Problem. This point can be put in terms of defeaters. Recall that a normative defeater is a belief that a subject ought to have, and that indicates that the target belief is either false or unreliably formed or sustained. Normative defeaters function by virtue of being beliefs that a subject should have, whether or not she does have them. Given this, we might say that even though N1–N3 do not, qua individuals, possess normative defeaters for believing that O’Brien is not at risk of dying, they do have such defeaters qua group members. This is because, absent their membership in the group, there is no basis for saying that they epistemically ought to believe that O’Brien failed to receive all three of his medications. However, given their group membership, there is such a basis for saying this, and such a belief clearly indicates that their belief that O’Brien is not at risk of dying is false. It is worth pointing out that though the normative obligations in question arise by virtue of the nurses’ membership in the nursing unit, they are nonetheless epistemic rather than merely professional or prudential. To see this, notice that one’s professional roles often give rise to distinctively epistemic duties. If one is a medical doctor, for instance, one might be professionally obligated to consult test results before determining a diagnosis for one’s patients.
This duty arises out of one’s role as a physician, but it is surely epistemic. This is made clear by the fact that it concerns evidence highly relevant to the belief in question—in this case, the diagnosis—that bears on whether one has the corresponding knowledge. Similar considerations apply in GROUP NORMATIVE OBLIGATIONS: the duty to consult with other members of the nursing unit regarding the mutual care of O’Brien arises out of the nurses’ professional roles, but the
obligation is epistemic in nature. In particular, it concerns evidence that the nurses know is highly relevant to their beliefs about O'Brien's health and that directly affects whether they know he is not at risk of dying. Of course, not all distinctively epistemic duties that arise out of professional roles are such that their flouting prevents the possession of knowledge. For instance, the nursing unit might be required by the administration to form all of their beliefs about patients via only logical deduction. While this requirement might ensure a very high degree of epistemic support, it is far more stringent than what is needed for their beliefs to amount to knowledge. At the very least, when the epistemic duty in question concerns evidence without which the belief in question will be irrationally held, its flouting will prevent the possession of the corresponding knowledge. Clearly, this is the case in GROUP NORMATIVE OBLIGATIONS, where the nurses' continued belief that O'Brien is not at risk of dying is irrationally held when they all know that their co-workers have evidence that bears directly on this very matter.

Moreover, as was the case with NON-CONFLICTING BASES, it is not difficult to imagine a variant of GROUP NORMATIVE OBLIGATIONS—let's call it VARIANT2—that is just as the original case is described, except for two modifications: (i) only 2 of the 3 nurses justifiedly believe that O'Brien is not at risk of dying, and (ii) there is no requirement of the collective unit comprising the positions held by N1–N3 that they always communicate with one another about the patients that they mutually care for. According to (GJ), G has a lower level of justifiedness in the belief that O'Brien is not at risk of dying in VARIANT2 than in GROUP NORMATIVE OBLIGATIONS because there is a greater proportion of members who justifiedly hold this belief in the latter than there is in the former. But this is wrong. For even though there are fewer members who justifiedly hold the relevant belief in VARIANT2, none of them are neglecting any epistemic duties, as they are in GROUP NORMATIVE OBLIGATIONS. This means that the group has a greater degree of justifiedness in the belief that O'Brien is not at risk of dying in VARIANT2 than in GROUP NORMATIVE OBLIGATIONS, despite the fact that there is a greater proportion of members who justifiedly hold this belief in the latter. The aggregative account of group justifiedness is, again, shown to be false.
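Since the objection turns entirely on (GJ)'s proportional structure, a minimal sketch may help to make the comparison explicit. The function below is my own illustrative shorthand for the aggregative idea that group justifiedness scales with the proportion of members who justifiedly believe that p; nothing beyond that idea is taken from Goldman's formulation.

```python
# Illustrative sketch (mine) of the purely proportional reading of (GJ):
# group justifiedness is just the fraction of members who justifiedly believe
# that p, with no sensitivity to bases or to group-level epistemic duties.

def gj_degree(justified_believers: int, total_members: int) -> float:
    """Degree of group justifiedness on the aggregative reading of (GJ)."""
    return justified_believers / total_members

# GROUP NORMATIVE OBLIGATIONS: all 3 nurses justifiedly believe that p,
# though each flouts a group-level duty to share her evidence.
original = gj_degree(3, 3)    # 1.0

# VARIANT2: only 2 of the 3 justifiedly believe that p, but no duties are flouted.
variant2 = gj_degree(2, 3)    # roughly 0.67

# (GJ) ranks GROUP NORMATIVE OBLIGATIONS as better justified than VARIANT2,
# which is the reverse of the verdict argued for in the text.
print(original > variant2)    # True
```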

2.8 A Condorcet-Inspired Account of Justified Group Belief

Before turning to my positive view, there is one more approach to characterizing justified group belief that I would like to consider, one that is inspired by the Condorcet jury theorem and developed in the work of Christian List.31 Though List presents his account specifically as one of group knowledge, it can be adapted for our purposes here. In particular, on this view, (1) the theory of judgment aggregation is followed in holding that a group's belief simpliciter is a function of the members' individual beliefs, but (2) being justified is understood as a collective property that a group's belief may or may not have, depending on whether it satisfies certain "truth-tracking" or "reliability" conditions at the group level. For instance, it has been shown that each committee member's individual beliefs might be only very slightly better than random at "tracking the truth," thereby meeting only the very minimal Condorcetian "greater than 1/2" probability-of-correctness condition, but, nevertheless, a sufficiently large committee of independent individuals could be very close to perfect in its collective reliability.32 On a truth-tracking or reliabilist conception of justification, then, the group belief that p might be regarded as justified, even though each of the underlying individual beliefs falls short of the threshold
required for counting as such. Unlike Goldman's deflationary summativist view, then, justified group belief on this Condorcet-inspired picture is not merely an aggregate of justified individual beliefs. Rather, a justified group belief may be an aggregate of individual beliefs, all of which are unjustified, yet where the group belief meets the relevant truth-tracking or reliability conditions at the collective level. And unlike inflationary non-summativism, justified group belief is constructed out of the epistemic features of the individual beliefs of the group's members. It is just that the level of justification at the collective level outstrips what is possessed by any single belief. This presents a view of justified group belief that does not fit neatly into either the inflationary non-summativist or the deflationary summativist camp, and also seems to tell against requiring justified belief at the level of individual members. For if this view is correct, then individually justified beliefs—of any sort or quantity—are not necessary for justified group belief.33

By way of response, it is undeniable that this Condorcet-inspired approach shows that there can be epistemic value at the level of the group's belief that is not present at the level of any of the individual beliefs. But the question that I want to focus on is whether this value is plausibly regarded as epistemic justification; and here I want to suggest that, for several reasons, the answer should be a negative one. To begin, this view counts as justified group beliefs that have an inappropriate epistemic grounding, both at the individual and at the collective level. Let's begin with the former: notice that in order for there to be justified group belief on this picture, the central requirements are (i) that the group have a large enough number of independent members who (ii) satisfy the "greater than 1/2" probability-of-correctness condition. But then a group could go about achieving epistemic justifiedness simply by extending membership to more and more people, with no regard whatsoever to the grounds of the individual beliefs beyond independence. Suppose, for instance, that we are talking about a scientific research group: it wouldn't matter whether the members believe that p on the basis of experimental results or some biased testimony, via methodical research or wishful thinking. So long as (i) and (ii) are met, group justifiedness can be, too. Indeed, rather than checking the CVs of potential members and evaluating the epistemic quality of the relevant individual beliefs, a research group could achieve justified group belief by simply surveying the beliefs of individuals and recruiting a large enough number of members who satisfy (i) and (ii). There's no need for any members to go to the expense of actually conducting experiments.

This leads to a related, though slightly different, concern: the grounds of the members' individual beliefs surely matter to the justifiedness of the group, yet this Condorcet-inspired view cannot account for this. A group with members all of whom performed excellent scientific experiments would clearly be better justified in its resulting scientific belief that p than one where they all held the same belief because of independent idiosyncratic websites.
The former group would, for instance, be quite likely to have a great deal of p-related beliefs that are justified, to be able to draw appropriate inferences regarding that p, to have the capacity to explain why that p is the case, and so on, while the latter group would not. Even if none of these features is necessary for group justifiedness, they surely have the capacity to affect the level or grade of epistemic justification present. But if the truth-tracking or reliability at the collective level of the two groups is equivalent, then the Condorcet-inspired view counts them as equally justified in their respective beliefs. In fact, if the group’s belief grounded in idiosyncratic websites is slightly more reliable than the one based on excellent scientific experiments, then it would have a greater level of justifiedness, despite the fact that its basis is wildly inappropriate
from a scientific point of view. So far I have focused on the bases of the individual members, but there are also similar concerns that could be raised at the level of the group and its structure. A group that luckily stumbles into reliability at the collective level through having a large enough group of members who believe that p and satisfy (i) and (ii) could end up more justified than one that sets up a structure where, for instance, evidence is vigilantly gathered, shared among members, and checked multiple times over. Or a group that isolates members and forces them to obtain information from epistemically questionable sources could end up more justified than one that engages in collective deliberation and forms beliefs on the basis of pooled evidence that has been scrutinized. In this way, the Condorcet-inspired approach lacks the resources for allowing group justifiedness to be affected by the way in which truth-tracking or reliability is achieved at the collective level.

There are also considerations involving group action that tell against this approach. At a minimum, there is a close connection between a group believing that p and its being epistemically permissible for such a group to act as if p.34 Without making any commitments here about this connection being one of either necessity or sufficiency, we can surely say that if one justifiedly believes that p, then it is generally the case that it is epistemically permissible for one to act as if p. It is also the case that groups cannot offer assertions, engage in negotiations, sign contracts, break the law, or perform any of the other sorts of actions typically attributed to groups without there being action on the part of some of their members. Of course, this does not mean that for every group, G, and act, a, G performs a only if at least one member of G performs a. It may be that one member performs action b, and another performs action c, and still another performs action d, which, when taken together, involves G performing a. But it is the case that for every group, G, and act, a, G performs a only if at least one member of G performs some act or other that causally contributes to a. Moreover, for many groups, particular members are granted the authority to serve as proxy agents, where this means that the actions performed by such individuals count as actions of the group's.35 For instance, a spokesperson might be given the authority to testify on behalf of a corporation, a CEO the authority to purchase property for a company, and an administrator the authority to deny a colleague tenure for the college. Notice, however, that in each of these cases, justified belief at the level of the individuals serving as proxy agents is crucial for rendering the group actions in question epistemically permissible. If, for instance, the administrator believes that the colleague should be denied tenure merely because, say, a department member with a grudge told him so, then the denial of tenure is clearly epistemically improper. In particular, the administrator is not properly epistemically positioned to deny the colleague tenure, given such a poor basis. This is even clearer if we assume that there is no member of the college who has a basis that is epistemically better than the administrator's.
But if justified group belief is entirely severed from the justified beliefs of members, as it is on the Condorcet-inspired approach, then such a denial of tenure could turn out to be entirely permissible.36 For all of these reasons, I regard the Condorcet-inspired considerations as pointing to features of groups that are undoubtedly epistemically valuable, but that don't track justified group belief.
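To make the collective-reliability claim behind this approach concrete, here is a minimal sketch, with illustrative numbers of my own choosing rather than anything from List's presentation, of the Condorcet-style calculation: the probability that a simple majority of n independent members, each correct with probability just over 1/2, returns the correct verdict.

```python
from math import comb

def majority_reliability(n: int, p: float) -> float:
    """Probability that a strict majority of n independent members, each
    individually correct with probability p, is collectively correct (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

# Each member is only barely better than chance, yet collective reliability
# climbs steadily toward 1 as the (independent) membership grows.
# (For much larger n, a numerically stable tail computation such as
# scipy.stats.binom.sf would be needed.)
for n in (11, 101, 1001):
    print(n, round(majority_reliability(n, 0.51), 3))
```

Nothing in this calculation looks at why any member holds her belief, which is precisely the point pressed above.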

2.9 The Group Epistemic Agent Account

We have seen that the paradigmatic versions of both inflationary non-summativism and deflationary summativism suffer from debilitating objections. Let us take a step back and see what can be learned from the problems afflicting the joint acceptance and aggregative accounts of group justification. The joint acceptance account treats groups as epistemic entities that can float freely of the evidential profiles of their individual members. For instance, as IGNORING EVIDENCE reveals, even if every member of Philip Morris possesses massive amounts of scientific evidence revealing the links between smoking and lung cancer, such a view permits groups to choose not to accept this evidence and thereby end up justifiedly believing that smoking does not pose a health hazard. But groups cannot pick and choose what evidence is available to them—they are constrained by the evidence possessed by their individual members. This is the central lesson of the Illegitimate Manipulation of Evidence Problem.

In contrast, the aggregative account altogether avoids concerns associated with this problem by securing a close dependence of group justifiedness on member justifiedness. But this is done at the cost of failing to appreciate the distinctive epistemic issues that arise at the group level. For the proponent of the aggregative account, group justifiedness is a simple "justified belief in/justified belief out" matter. Yet we have seen that this model ignores the complexity of justified belief at the group level, particularly the evidential relations that exist between members' beliefs and bases and the epistemic obligations that arise via membership in the group.

Otherwise put, suppose we think of the relation between member justifiedness and group justifiedness as a function. On the aggregative account, this is a very simple function; the inputs to the function are the justified beliefs of individual members, the function merely aggregates them, and the output is the justified belief of the group. We have seen, however, that there are at least three ways in which this model is importantly wrong. The first case we discussed—CONFLICTING BASES—shows that the aggregative view is incorrect with respect to the inputs to the function that links individual and group justification. In addition to the justified beliefs of individuals, we also need to take into account the bases for these individual beliefs. The second case—NON-CONFLICTING BASES—shows that this modification is still not enough to capture a plausible account of group justification. In NON-CONFLICTING BASES, the force of the objection rests on the addition of pieces of evidence that are not part of the bases for the aggregated individual beliefs. The additional pieces of evidence are relevant, not to the beliefs of the individuals who have them, but to the beliefs of other individual members of the group. One response to this case is to say that the range of inputs to the function linking individual and group justification must be expanded yet again: the function should take into account all of the individuals' beliefs and their bases. But this is not a plausible account of group justification. In particular, it does not come close to modeling the way in which groups try to ascertain and make use of justification in determining what they ought to believe. Moreover, the third case—GROUP NORMATIVE OBLIGATIONS—shows that even expanding the range of inputs in this way is not enough to secure an adequate account of group justification.
The upshot of the third case is that the function itself, and not merely the range of inputs to the function, needs to be modified in a significant way. In particular, what is left out is the impact of normative expectations and obligations that apply to the individuals who constitute the group—and that very often will apply to those individuals in virtue of their membership in the group. This is exactly what we find in GROUP NORMATIVE OBLIGATIONS, where N1–N3 acquire epistemic obligations precisely because of their membership in the nursing unit.
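One way to see the force of this "function" talk is to write the two pictures down schematically. The sketch below is my own illustrative shorthand, not anything proposed in the text: the first function is the simple aggregative model, and the comments and the second signature indicate how, on the argument just given, both the inputs and the function itself would have to be enriched.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MemberState:
    justified_belief_that_p: bool
    basis: frozenset = frozenset()           # the evidence the member's belief rests on
    other_evidence: frozenset = frozenset()  # evidence bearing on other members' bases

# The aggregative picture: "justified belief in / justified belief out."
def aggregative_group_justified(members: list, threshold: float = 0.5) -> bool:
    justified = sum(m.justified_belief_that_p for m in members)
    return justified / len(members) > threshold

# CONFLICTING BASES: the inputs must also include each member's basis.
# NON-CONFLICTING BASES: they must include all of the members' evidence, since
# one member's evidence can undercut another member's basis.
# GROUP NORMATIVE OBLIGATIONS: the function itself must change, because it has
# to register duties (e.g., to share evidence) that exist only at the group level.
def group_justified(members: list, group_duties_discharged: bool) -> bool:
    """Schematic placeholder for the richer function the chapter argues is needed."""
    raise NotImplementedError("see the Group Epistemic Agent Account in Section 2.9")
```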

Moreover, we are now in a position to fully appreciate the ways in which Goldman's claim that aggregation transmits justifiedness is inaccurate.37 In particular, aggregation does not simply transmit justifiedness from individual members to groups since there are a number of ways in which the output can end up being far less justified than the aggregation of the inputs. As we have seen, this can happen when the members' bases conflict, when their collective evidence adds up to zero, or when normative obligations that exist at the group level fail to be satisfied by the relevant members. Thus, the justified beliefs of groups should be treated neither as states that can float freely of the evidence possessed by their individual members, nor as nothing more than the aggregation of the justified beliefs of their members. Instead, I propose that groups be understood as epistemic agents in their own right, though ones whose justified beliefs are constrained by the epistemic statuses and normative obligations of their individual members. Let us call this the Group Epistemic Agent Account of justified group belief, according to which:

A group, G, justifiedly believes that p if and only if:

(1) A significant percentage of the operative members of G (a) justifiedly believe that p,38 and (b) are such that adding together the bases of their justified beliefs that p yields a belief set that is coherent.

(2) Full disclosure of the evidence relevant39 to the proposition that p, accompanied by rational deliberation about that evidence among the members of G in accordance with their individual and group epistemic normative requirements, would not result in further evidence that, when added to the bases of G's members' beliefs that p, yields a total belief set that fails to make sufficiently probable that p.40

Let us begin with condition (1a), where there are three central features that should be clarified. First, according to the Group Epistemic Agent Account, a group's justified belief involves the justified beliefs of operative members of the group. Recall that a condition of this sort was initially motivated by a case involving the custodians at a law firm to show that the joint acceptance at the heart of inflationary non-summative views must take place between the right members. Similar considerations apply here. Suppose that all of the members of the housekeeping staff at Philip Morris have limited evidence about the links between smoking and lung cancer such that they justifiedly believe that smoking does not pose health risks. Even if they turn out to constitute a majority of the employees at Philip Morris, the epistemic status of the beliefs of the CEO and board members is what matters to whether the group justifiedly believes this. This is because they are the members who have the relevant decision-making authority in the domain in question.

Second, a significant percentage of operative members needs to justifiedly believe that p in order for the group to justifiedly believe it. What amounts to a significant percentage of operative members varies from group to group—it might be as small as a single dictatorial member, or as large as all of the members. But simply one or two members justifiedly believing a proposition in a fully democratic group of 50 is clearly not sufficient for the group to justifiedly believe this.41

Finally, notice that the group's belief that p will inherit a strong, positive epistemic status from the members' justified beliefs on which it is based.
These member beliefs will, in turn, be justified by whatever features are required at the individual level, such as that they are produced by reliable processes, or grounded in adequate evidence, or track the truth. Although aggregation
of these individually justified beliefs is not sufficient for justified group belief, it does lay the foundation for group justification.

Let us now turn to condition (1b). Unlike the deflationary summativist approach, the Group Epistemic Agent Account requires that adding the bases of the justified beliefs that p of the same operative members at issue in (1a) yields a belief set that is coherent. This is what was learned from CONFLICTING BASES: member justifiedness does not transmit smoothly to group justifiedness, since the bases of the members' beliefs might be wildly conflicting. When this happens, such a view faces the Group Justification Paradox and the Defeater Problem. Otherwise put, it is precisely because vertical justifiedness aggregates member beliefs with no regard for their bases that these problems arise.

The second feature of (1b) that should be noted is that the belief set resulting from adding together the bases of G's operative members' beliefs that p needs to be coherent.42 Notice that consistency is not required here. This is because various philosophers have made a compelling case for the claim that an individual can be rational despite having an inconsistent set of beliefs. As was noted earlier, the Preface Paradox describes just this sort of situation, where it is rational for the author of a book to believe both the individual claims made in the book and the claim that at least one of them is false. So, consistency cannot be a requirement of individual rational belief. But groups can easily find themselves in situations of this kind, so similar reasoning leads to the conclusion that consistency is too strong a requirement of group rational belief, too.43

How, then, should we characterize the requirement that the belief set resulting from aggregation be coherent? As noted in Chapter 1, there are various ways of understanding this. One option is to understand coherence in terms of evidential support.44 Alternatively, coherence can be understood in terms of accuracy-dominance avoidance, in the sense that, for a coherent set of beliefs, there is no rival belief set that is never worse and sometimes better than it with respect to overall accuracy.45 Again, I take no stand on how precisely this concept should be understood, as it is sufficient for my purposes that there is an intuitive failure of coherence in CONFLICTING BASES, and that there are accounts that can capture this.

Third, notice that the bases that are being added together to yield a coherent set are those of G's operative members' justified beliefs that p. If operative members believe that not-p, or believe that p for dogmatic or irrational reasons, then their beliefs and bases are simply irrelevant to the justificatory status of G's belief that p.

Finally, with respect to condition (1) as a whole, it should be emphasized that its satisfaction is compatible with the bases of some operative members' justified beliefs failing to cohere with the bases of others. All that is needed is that a significant percentage of the operative members of the group satisfy both (1a) and (1b).
Thus, the Group Epistemic Agent Account permits justified group belief in cases like the following: 99 out of 100 operative members of a group have bases for their justified beliefs that p that cohere with one another, but the basis of the final operative member's justified belief that p fails to cohere with those of the others.46 The lack of coherence added by this lone basis does not prevent the group from justifiedly believing that p, given the coherence to be found among the bases for the justified belief that p of a significant percentage of operative members.

It should be clear that the inclusion of (1a) in the Group Epistemic Agent Account altogether avoids the Illegitimate Manipulation of Evidence Problem afflicting inflationary non-summativist views. One of the features that make the joint acceptance account susceptible to this objection is that the evidence available to the group can end up being a matter of choice. In particular, because groups can often exercise control over what is jointly accepted, they can
manipulate what evidence is, and is not, available to them as a group. But if the justification of group beliefs is necessarily a matter of the justification of the beliefs of individual members, and the evidence that is available to individual subjects is not a matter of choice, then there is no worry that epistemic justification for group beliefs can be achieved through the illegitimate manipulation of evidence. If, for instance, a significant percentage of the operative members of Philip Morris justifiedly believe that smoking causes health problems, then no amount of joint acceptance, or lack thereof, can make it the case that Philip Morris itself does not justifiedly believe this, too. At the same time, it should also be apparent that including (1b) in the Group Epistemic Agent Account denies justified group belief in CONFLICTING BASES and thereby avoids both the Group Justification Paradox and the Defeater Problem. In particular, adding together the bases of the British Museum guards’ beliefs that p yields an incoherent set of beliefs. This results in G failing to justifiedly believe that someone is planning an inside theft of a famous painting at the British Museum, despite the fact that all 100 members of G justifiedly believe this. Since it was the attribution of justified belief to G here that generated the problems stemming from CONFLICTING BASES in the first place, they simply don’t arise for the Group Epistemic Agent Account. While (1) addresses the Illegitimate Manipulation of Evidence Problem, the Group Justification Paradox, and the Defeater Problem, it is silent when it comes to the Collective Evidence Problem and the Group Normative Obligations Problem. This leads us to condition (2) of the Group Epistemic Agent Account, where there are six features that should be emphasized. First, notice that this is a negative condition: it requires only that full disclosure and rational deliberation be such that they would not generate further evidence that, when added to the bases of G’s members’ beliefs that p, would yield a total belief set that fails to make probable that p. This is because condition (1) proceeds by aggregating justified beliefs of operative members. The bases of these individual beliefs make probable that p. Thus, if the belief set of these bases is coherent, the set itself should still make probable that p. What (2) does is to ensure that disclosure and deliberation do not add evidence such that the resulting expanded set of beliefs that the group ends up with would no longer make probable that p. There are various ways to understand probability—for example, as grounded in frequencies or propensities, or as an a priori relation—but the details are not important here.47 Second, condition (2) is a subjunctive conditional. It does not require that the group members in fact fully disclose all of the relevant evidence and engage in rational deliberation. This would be far too strong. Instead, it specifies what needs to be true of the evidence, were the group members to do this. Third, it is the evidence that actually exists that is relevant to the disclosure and deliberation condition—not the evidence that would exist if the group were to engage in these activities in the counterfactual situation. 
This captures the sense in which the group’s justification depends on the evidence they actually have, along with how they ought to think about it.48 Fourth, the rational deliberation at issue in (2) must be in accordance with the epistemic normative requirements governing both the individual members and the group as a whole. While being a member of a group does not absolve one of one’s duties—epistemic or otherwise—it can bring with it new ones. For instance, as we saw in GROUP NORMATIVE OBLIGATIONS, being a member of the nursing unit requires of N1–N3 that they always communicate with one another about the patients that they mutually care for, even though this is not expected of them as non-members. According to (2), then, if any epistemic norms, individual or group, have been violated, then this would be made clear in the process of disclosure and deliberation, and hence the group would
not have justified group belief. Notice, however, that condition (2) does not require that the members of the group in question actually follow the epistemic normative requirements governing both the individual members and the group as a whole. That this would be too stringent can be seen by considering the following: suppose that only N1 in GROUP NORMATIVE OBLIGATIONS is aware that she forgot to give O'Brien his first medication and that she also justifiedly believes that this act of negligence alone is not sufficient to put him in danger of death. Even though N1 is flouting a group epistemic norm when she fails to share this information with the other members of the nursing unit, this by itself is not sufficient for the group to fail to justifiedly believe that O'Brien is not at risk of dying. Condition (2) delivers exactly the right verdict here: full disclosure and rational deliberation among the members of the nursing unit would reveal both that N1 failed to fulfill this requirement and that this does not put O'Brien in danger. Thus, this process would not result in further evidence that, when added to the bases of G's members' beliefs that p, yields a total belief set that fails to make probable that O'Brien is not at risk of dying.

This bears on a concern that one might have about (2): what if there is a dogmatic member who steadfastly clings to misleading evidence, but does not actually share it with the other members of the group? Should the fact that the belief set in question would fail to make probable that p were she to share it result in the group actually failing to justifiedly believe that p? By way of response, notice that the mere possession of counterevidence by a member does not, by itself, result in the group failing condition (2) of the Group Epistemic Agent Account; instead, it has to be such that it would survive full disclosure of all of the relevant evidence and rational deliberation by the members of G, where the latter is done in accordance with the governing epistemic normative requirements. Counterevidence that is possessed entirely because of dogmatism or some other epistemic vice would presumably be rejected or dismissed via this process. Moreover, in those cases where a member has counterevidence that is not in fact shared but is such that it would survive this sort of process of disclosure and deliberation, condition (2) seems to provide the correct verdict. For instance, suppose that an operative member of a scientific research group has evidence that undermines the conclusion that a drug is effective in treating a particular kind of cancer, but does not disclose it to the other members. Were she to share it, however, all of the other members of the research group would accept it, thereby yielding a belief set that fails to make probable that the drug is effective in treating cancer. Here, the unshared counterevidence possessed by the operative member prevents the group from justifiedly believing that the drug is effective in treating cancer. But this also seems to be the right result, one supported by the fact that we would surely hold the group responsible for the harmful consequences of treating cancer patients with the drug were we to learn that one of the operative members had compelling evidence challenging its effectiveness.

A related issue that arises here is this: suppose there is a skeptic in a given group who doesn't disclose her skeptical worries to the other members.
Isn’t it the case that for just about any proposition and any belief set, adding a skeptical argument to it would result in a belief set that fails to make probable that p? Thus, doesn’t it turn out on the Group Epistemic Agent Account that the mere presence of a skeptic in a group results in the group itself failing to justifiedly hold any beliefs?49 By way of response, it should be emphasized that the skeptical arguments would have to be such that they would survive full disclosure of all of the relevant evidence and rational deliberation by the members of the group in accordance with all of the individual and group epistemic norms. This means that whether the addition of the skeptical doubts would in fact result in a belief set that fails to make probable a given proposition depends on the evidential
force of the skeptical arguments themselves. But many, if not most, people—philosophers and otherwise—continue to regard themselves as having knowledge even in the face of skepticism. So, there is no reason to think that the same wouldn't be the case with respect to most groups. On the other hand, if skepticism is correct, then none of us—individuals or groups—have justified belief. But this would be in virtue of the fact that skepticism is correct and not in virtue of the presence of a skeptical group member.

Fifth, all of the group members, not just the operative ones, are relevant to the counterfactual disclosure of evidence and rational deliberation in (2). This is because both operative and non-operative members can possess relevant counterevidence for believing that p, or can flout epistemic norms, in ways that bear significantly on the justificatory status of a group's belief. For instance, suppose that the nursing unit comprising N1–N3 is a smaller part of a larger group—the nursing home staff—that includes the custodians for the building. Even if the ten custodians at the nursing home in GROUP NORMATIVE OBLIGATIONS are not operative members of the group and thus do not contribute positively to justified group belief when O'Brien's health is concerned, their having seen N1–N3 all fail to give the patient his medications might still be relevant counterevidence to whether the group justifiedly believes that O'Brien is not at risk of dying. Moreover, their failing to communicate this information to the head of the nursing unit consisting of N1–N3 might also be in violation of general epistemic norms embraced by the nursing home.50

Finally, it should be emphasized that condition (2) does not entail (1b). To see this, suppose that the bases of the operative members' justified beliefs that p are incoherent from the start, but full disclosure and rational deliberation would not turn up new evidence that, when added to these bases, yields a total belief set that fails to make probable that p. In such a case, the group's belief that p will satisfy condition (2), and, let us suppose, (1a) as well, but surely the initial incoherent bases render the belief unjustified. This verdict of unjustified group belief is precisely what condition (1b) ensures. Moreover, condition (1b) ensures that the basis for group justification is widely distributed among the group's operative members. Even in cases where the operative members who justifiedly believe that p do not all share the same base for that belief, there is still a significant percentage of them who justifiedly believe that p with bases that could in principle be shared. In that sense, they exhibit the cohesiveness that is characteristic of genuine group phenomena.

With these points in mind, let's see how (2) handles the Collective Evidence Problem. Recall that this objection is generated by NON-CONFLICTING BASES, where the group belief in question is that a man was responsible for an inside theft of a famous painting at the museum. In this case, none of the bases of the British Museum guards' individual beliefs conflict, but each subgroup of 20 guards has counterevidence for the basis of the justified beliefs of a different subgroup. When viewed as a collective whole, then, the evidential basis of the group is zero. This is because there is not a single basis of the members' beliefs that is free of counterevidence.
But notice: if the 100 guards were to engage in full disclosure and rational deliberation, all of the counterevidence for the bases would emerge. For instance, each of the first 20 guards, M1–M20, would disclose that they have evidence that the thief’s goatee is fake, and thus all of the members would then realize that this undermines the basis of belief for M21–M40. Were all five subgroups to do this, it is clear that there would be no surviving evidence for believing that a man was responsible for the inside theft. Thus, full disclosure and rational deliberation among the guards in NON-CONFLICTING BASES would clearly produce further evidence that, when added to the bases of the members’
relevant beliefs, yields a total belief set that fails to make probable the proposition in question. According to the Group Epistemic Agent Account, then, the group of British Museum guards does not justifiedly believe that a man was responsible for the inside theft—despite the fact that all 100 guards justifiedly believe this—which is precisely the desired verdict. Let us now see how (2) deals with the Group Normative Obligations Problem. Recall that in GROUP NORMATIVE OBLIGATIONS, each of N1–N3 justifiedly believes that O’Brien is not at risk of dying, but also fails to fulfill her group epistemic duty of sharing with the other nurses that she forgot to give him his medication. In such a case, were the nurses to fully disclose and rationally deliberate about all of their relevant information, and were this evidence to be added to the original bases, the resulting belief set would clearly fail to make probable that O’Brien is not at risk of dying. For such a set would include both the belief that N1–N3 all forgot to give O’Brien his medication and the belief that his missing all three of his medications would put him at serious risk of dying. Once again, then, the Group Epistemic Agent Account delivers the correct verdict: the nursing unit’s belief fails condition (2) and thus the group does not justifiedly believe that O’Brien is not at risk of dying, despite the fact that all 3 nurses justifiedly believe this as individuals.51
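Purely as an illustration of the mechanics just described, here is a toy sketch of the disclosure test in condition (2) applied to GROUP NORMATIVE OBLIGATIONS. The labels are invented, and no pretense is made of modeling evidential support in general: the idea is simply to pool what each nurse privately has, since that is what full disclosure would bring to light, and to check whether the pooled set still leaves the target belief undefeated.

```python
# A toy rendering (illustrative assumptions only) of condition (2)'s
# disclosure test as applied to GROUP NORMATIVE OBLIGATIONS.
private_evidence = {
    "N1": {"O'Brien missed his first medication"},
    "N2": {"O'Brien missed his second medication"},
    "N3": {"O'Brien missed his third medication"},
}
shared_background = {"missing all three medications puts O'Brien at serious risk of dying"}

# What full disclosure and rational deliberation would make available to the unit:
pooled = shared_background.union(*private_evidence.values())

# The pooled set contains all three omissions plus the background belief, so it
# fails to make probable the target proposition that O'Brien is not at risk of dying.
all_three_omissions = {
    "O'Brien missed his first medication",
    "O'Brien missed his second medication",
    "O'Brien missed his third medication",
}
defeated = all_three_omissions <= pooled and shared_background <= pooled
print("Group justifiedly believes that p:", not defeated)  # prints False
```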

2.10 Central Objection to the Group Epistemic Agent Account

I would like to consider and respond here to a central objection that might be raised to the Group Epistemic Agent Account; namely, that it succumbs to a version of the IMEP. In particular, if group justification depends on the justification of the beliefs of the group members, then isn't it possible for the epistemic status of group beliefs to be determined by the deliberate manipulation of the group membership? For instance, suppose that a group aims to arrive at a particular justified group belief through the acceptance and removal of certain members. To this end, suppose that Philip Morris hires only new employees who actually justifiedly believe that the scientific evidence regarding the ill effects of smoking is unreliable, and fires all those who believe such evidence to be reliable. In such a case, isn't it possible for Philip Morris to end up justifiedly believing that the scientific evidence regarding the ill effects of smoking is unreliable precisely because they illegitimately manipulated the justified beliefs of the group members?52 And isn't this just a slightly different version of the IMEP?

By way of response, the first point to notice is that, unlike the version of the IMEP afflicting the JAA, every account of group justification succumbs to this one. For instance, at the inflationary end of the spectrum, Philip Morris might hire only new employees who will jointly accept that the scientific evidence regarding the ill effects of smoking is unreliable, and fire all those who will not, thereby resulting in Philip Morris justifiedly believing that smoking is not dangerous according to the JAA. At the deflationary end of the spectrum, Philip Morris might hire only new employees who actually justifiedly believe that the scientific evidence regarding the ill effects of smoking is unreliable, and fire all those who believe such evidence to be reliable, thereby resulting in Philip Morris justifiedly believing that smoking is not dangerous according to deflationary summativism. What this reveals is that it is simply part of the nature of collective justification that a change in a group's membership can change the justificatory status of the group's beliefs.

The second point I should like to make is that there is an important asymmetry between the
version of the IMEP afflicting the JAA and the one purportedly facing the Group Epistemic Agent Account; namely, that the illegitimate manipulation of evidence easily results in the group's belief being justified in the former, but not in the latter. With respect to views where the very reasons available to the group are determined via joint acceptance, the group can illegitimately restrict the evidence available to them and thereby end up with justified beliefs that are intuitively unjustified. But when there is a summative constraint on group justification, this also brings with it a broadly summativist conception of defeaters. Thus, if a group aims to arrive at a particular justified group belief through the acceptance and removal of certain members, then at least some members of the group are aware of this. And if at least some members of the group are privy to the fact that the evidence is being manipulated in this way, then they have a defeater for accepting the proposition in question, thereby preventing the group from justifiedly holding the target belief.

But what, it might be asked, if a person outside of the group aims to arrive at a particular justified group belief through the acceptance and removal of certain members? If this person is not a member, then the evidence she has will not count as a defeater for the group. So, couldn't the group end up with a justified belief through the manipulation of evidence by this outside person? However, the mere fact that evidence can be illegitimately manipulated, with the end result being a justified belief, is not at all surprising. This happens at both the individual and at the group level. For instance, if I deliberately undergo a memory-removal procedure, then obviously the justification of my memorial beliefs will be seriously altered. More realistically, I might deliberately choose to watch only BBC news because my friend works there, my students might hide their consulted sources because they don't want me to know that they plagiarized, or I might always consult brunettes when asking for directions out of habit. In such cases, the evidence available to me is being restricted—either by me or by others—but this does not necessarily prevent me from having justified beliefs in the relevant domains. If, say, the BBC news or brunettes are reliable sources of information, then their testimony may be able to provide the epistemic grounding needed for me to have knowledge. Or, if I am wholly ignorant of what sources my students relied upon, then they cannot provide evidence that either grounds or defeats my belief that the work is original.

This is true with respect to groups, too. The members of a company may decide to rely only on CNN because their CEO owns stock in it, or they may never look in the drawers of co-workers because it is against the company's rules to do so, or they may always depend on the testimony of their own lawyers when obtaining legal counsel out of habit. Again, this restriction of the available evidence does not, by itself, prevent the group in question from having justified beliefs, since the sources might still be reliable or provide adequate evidence. Applying this to the case at issue, if a person outside of a group aims to arrive at a particular justified group belief through the acceptance and removal of certain members, then this seems no different from an outside person deliberately restricting the evidence available to a group.
And if, as we have seen above, the latter can result in a justified belief, I see no reason why the former cannot. Otherwise put, in all of the cases where the attribution of justified belief is epistemically unproblematic, the manipulation of evidence at issue is what we might call indirect—it involves restricting one’s access to evidence. This is importantly different from manipulating evidence in a direct way, which involves ignoring evidence of which one is already aware or fabricating evidence that doesn’t otherwise exist. A paradigmatic example of the former is choosing to not read a newspaper; a paradigmatic example of the latter is reading the newspaper and then
deliberately choosing to ignore what one just learned. Thus, the version of the IMEP afflicting the JAA concerns the direct manipulation of evidence, while the one at issue with respect to the Group Epistemic Agent Account involves only the indirect manipulation of evidence.

2.11 Conclusion

We have seen that the Group Epistemic Agent Account handles all of the problems afflicting the two dominant approaches in the literature to understanding the justification of group beliefs. While inflationary non-summativists focus entirely on what a group does—that is, whether its members engage in joint acceptance or not—deflationary summativists focus exclusively on what a group has—that is, whether its members' beliefs are individually justified. The Group Epistemic Agent Account, in contrast, incorporates both components. In particular, groups are understood as epistemic agents on this view, ones that have evidential and normative constraints that arise only at the group level, such as a sensitivity to the relations among the evidence possessed by group members and the epistemic obligations that arise via membership in the group. These constraints bear significantly on whether groups have justified belief. At the same time, however, group justifiedness on the Group Epistemic Agent Account is still largely a matter of member justifiedness, where the latter is understood as involving both beliefs and their bases. The result is a view that neither inflates nor deflates group epistemology, but instead recognizes that a group's justified beliefs are constrained by, but are not ultimately reducible to, members' justified beliefs.

1 I will frequently speak simply of a “group justifiedly believing” a proposition, “group justification,” or “group justifiedness.” All of these locutions should be understood as involving group epistemic justification. 2 Of course, this is not to say that knowledge is nothing more than justified true belief (see, for instance, Gettier (1963)), but that epistemic justification is key to distinguishing between mere true belief and knowledge. 3 See Levi (1962), Fallis (2006), Riggs (2008), and Mathiesen (2011). 4 It might be asked whether it is the group’s belief, or its justification, that is over and above, or otherwise distinct from, the individual members of the group justifiedly believing that p. To the extent that justification and belief can be considered entirely separately when doxastic justification is in question, the issues in this chapter will concern justification. But most theorists who are inflationary about justification are also inflationary about belief, that is, they argue that both epistemic justification and belief are fundamentally a matter of joint acceptance. See, for instance, Schmitt (1994). 5 This is obviously to be distinguished from the joint acceptance account of group belief discussed in Chapter 1. 6 I added “and believes that p for this reason” to Schmitt’s account; otherwise, group beliefs and group reasons will be entirely disconnected from one another. 7 Because Schmitt talks about groups having reasons, I will adopt this locution in the discussion that follows. But where relevant, this should be understood as groups not only having these reasons, but basing the beliefs in question on these reasons. 8 Beyond this, Schmitt says that he will not offer an account of what proper acceptance is. The key point for my purposes, however, is that “proper” is not an epistemic notion. As Schmitt says, “proper joint acceptance of a reason is not the same as the reason’s being good. Joint acceptance of r as a reason may be proper even if the reason is bad” (1994, p. 266). Instead, “proper joint acceptance” will often be determined by the structure or procedural requirements of the group in question. 9 It should be noted that Hakli provides an account of the justification of group acceptances rather than of group beliefs. None of the arguments in this chapter, however, turn on this distinction. 10 I actually challenge this assumption below.

11 For the sake of ease of expression, I will often drop the “openly.” 12 See, for instance, Tuomela (2004). 13 Thus, one of the conditions of Raimo Tuomela’s inflationary account of group justification is that “There is a special social justificatory dimension in that at least the operative group members…must share a justifying joint reason for…p” (Tuomela 2004, 113). 14 In response to this move, Kallestrup (2016) writes: “the key here is that the relevant standards that govern different juries are epistemic in the sense that they fix the types and strengths of evidence which can be brought to bear when juries reach a decision (or form a belief). So, while a jury decision is strictly a legal act, its justification is an epistemic property of that group. For such justification is a matter of the jury basing their decision on permissible and strong enough evidence, which in turn is constrained by those standards. And because the standards may differ from jury to jury, so will the epistemic properties of arriving at justified decisions.” This response misses the point that the legal standard excluding hearsay is not always truthconducive, and so the justification in question is legal, not epistemic in nature. The mere fact that the legal standards govern evidence is clearly not sufficient for rendering beliefs that meet these standards epistemically justified. A corporation could adopt standards of evidence that rule out considering scientific studies that challenge the safety of their products. Surely, the beliefs that follow these standards would not thereby be epistemically justified. 15 I am grateful to Mark Thomson for raising this point. 16 While one can appeal to the resources of the judgment aggregation framework to support a deflationary summativist view of justified group belief (as Alvin Goldman does below), it is important to note that not all aggregation procedures support this approach. For instance, the premise-based aggregation procedure discussed in Chapter 1 could be reframed in terms of justified belief, rather than merely belief, and result in a case where the group holds a justified belief that no member of the group does. 17 For more on the theory of judgment aggregation, see List and Pettit (2002 and 2004), Dietrich (2005), List (2005), Pauly and van Hees (2006), and Cariani (2011). 18 See Goldman (2014). 19 For views of collective knowledge that are summative in nature in one way or another, see Corlett (1996 and 2007), Mokyr (2002), and Tuomela (2011). For instance, Mokyr (2002) argues that, under the right circumstances, “society ‘knows’ something if at least one member does” (p. 4), thereby espousing a sufficiency claim. In contrast, Tuomela focuses on necessity, arguing that “a group cannot know unless its members or at least some of them know the item in question” (Tuomela 2011, p. 85). At the same time, however, Tuomela argues that “when [a group, g] believes that p, the members of g, collectively considered, will be assumed to believe (accept) that p when functioning as group members and thus be collectively committed to p. Their private beliefs related to P (here covering p and –p) can be different from those they adopt as members of g” (Tuomela 2011, p. 86). Thus, for Tuomela, a group cannot know that p without some of its members knowing that p, but a group can know that p despite the fact that none of its members privately believe that p. 20 For the original Gettier cases, see Gettier (1963). 
21 Goldman’s process reliabilism might have the resources for responding to this problem, but only by virtue of a de facto connection between BAFs and JAFs. Thus, my point still holds that these two aggregation functions cannot work independently of one another. 22 An immediate problem with (GJ) is that the degree (or level or grade) to which any individual member of the group justifiedly believes that p does not play any role in determining the degree (or level or grade) of the group’s justifiedness in believing that p; only the proportion of members who justifiedly believes that p is relevant. I am grateful to an anonymous reviewer for raising this point. 23 This case is similar to those involving base fragility discussed in the Chapter 1. 24 For further discussion of some of the issues surrounding groups with conflicting bases, see Cariani (2013). 25 It should be noted that the Group Justification Paradox is analogous to the general result in judgment aggregation theory that no supermajority rule short of unanimity will always secure a deductively closed and consistent set of collective attitudes. (I am grateful to an anonymous referee for this point.) 26 For the original Preface Paradox, see Makinson (1965). See also the Lottery Paradox in Kyburg (1961). See Klein (1985), Foley (1987), Christensen (2004), and Makinson (2012) for arguments that consistency is not a requirement of individual rational belief. 27 For various views of what I call psychological defeaters see, for example, BonJour (1980 and 1985), Nozick (1981), Goldman (1986), Pollock (1986), Plantinga (1993), Bergmann (1997), Reed (2006), and Lackey (2008). 28 For discussions involving what I call normative defeaters, approached in a number of different ways, see BonJour (1980 and 1985), Goldman (1986), Fricker (1987 and 1994), Chisholm (1989), Burge (1993 and 1997), McDowell (1994), Audi (1997 and 1998), Williams (1999), BonJour and Sosa (2003), Hawthorne (2004), Reed (2006), and Lackey (2008). What all of these discussions have in common is simply the idea that evidence can defeat knowledge (justification) even when the subject does not form any corresponding doubts or beliefs from the evidence in question. 29 It might be thought that the Defeater Problem undercuts the Group Justification Paradox. For if we can take one of the
beliefs to be defeated, perhaps there isn’t a paradox after all. By way of response, notice that every paradox with contradictory beliefs can be seen as involving rebutting defeaters, but this doesn’t make them any less paradoxical. 30 It is worth noting that, though Goldman includes a “ceteris paribus” clause in (GJ), these problems cannot be subsumed under it. For (GJ) just is a reflection of the notion of vertical justification: according to this principle, group justifiedness increases with a greater percentage of individual member justifiedness, and this just amounts to permitting group justifiedness to be determined independently of a group-level basis for that justification. But recall that what motivated the conception of vertical justification in the first place was DIFFERENT BASES. Given that CONFLICTING BASES is an extension of this paradigmatic instance of vertical justification, there is no plausible sense in which it can be relegated to the ceteris paribus clause. 31 See List (2005). 32 For an extended discussion of this, see List (2005). 33 I am grateful to an anonymous referee for a presentation of this alternative account of justified group belief. 34 See, for instance, Fantl and McGrath (2002 and 2009), Stanley (2005), Williamson (2005), and Hawthorne and Stanley (2008). 35 For a detailed discussion of proxy agency, see Ludwig (2014) and Lackey (2018a and 2018b). See also Chapters 4 and 5 of this book. 36 I develop this line of argument in far more detail, and in response to a broader range of theories than just the Condorcetinspired one discussed here, in Lackey (2014b) and in Chapter 3 of this book. 37 See Goldman (2014, p. 20). 38 Silva (2019) challenges this condition through a case in which every member of a jury arrives at a belief through improper reasoning, but the group itself arrives at the same conclusion through proper reasoning. In such a case, Silva claims that “the group holds a doxastically justified belief because it properly responds to its evidence, while no member of the group properly responds to its evidence. So no member of the group is doxastically justified” (Silva 2019, p. 9). However, there is no reason to posit justified group belief in such a case when it can easily be explained in terms of other intuitively plausible descriptions, such as a justified verdict, or a justified official position, or justified collective acceptance, and so on. 39 I am assuming that the evidence relevant to the proposition that p will subsume beliefs that bear on it, including those that might arise via premise-based aggregation in “doctrinal paradox” cases. But if this is doubted, the disclosure of relevant beliefs can be built directly into condition (2), so that it reads as follows: (2*) Full disclosure of the beliefs and evidence relevant to the proposition that p, accompanied by rational deliberation among the members of G in accordance with their individual and group epistemic normative requirements, would not result in further evidence that, when added to the bases of G’s members’ beliefs that p, yields a total belief set that fails to make probable that p. Moreover, notice that this condition focuses on the full disclosure of the evidence that is relevant to the proposition that p. It should be noted that relevant does not mean here in principle relevant but relevant in the circumstances at issue. This should make clear that condition (2) is not unrealistically strong. 
40 To help grasp condition (2) at an intuitive level, it can be understood as being in the same broad spirit as requirements in the law that appeal to what “a reasonable person would do.” 41 As with other threshold notions in epistemology, such as “sufficient” justification for knowledge, there is room for disagreement over where on the scale the threshold is located for “significant” percentage and how it comes to be there. Some will argue that contextualism is helpful here; others will take the threshold to be fixed by practical interests or by implicit social coordination. As that debate is tangential to the thread of my argument here, I will set it aside. 42 It should be emphasized that the beliefs in this set will not all be group beliefs. For instance, some of the beliefs in the set might not be shared by enough of the individual members to count as the group’s beliefs. 43 However, List (2014b) introduced the concept of “consistency of degree k,” which is weaker than full consistency by ruling out only “blatant” inconsistencies in an agent’s beliefs while permitting less blatant ones. Condition (1) might be understood as requiring this weaker notion of consistency. 44 See Kolodny (2007). 45 See Briggs, Cariani, Easwaran, and Fitelson (2014). 46 I am grateful to Sharon Ryan for a comment that led to this point. 47 See, for instance, Goldman (1979 and 1986), Kyburg (2001), and Fumerton (2004). 48 More precisely, let’s distinguish the following: e is the evidence the group members have in the actual world, e* is e plus what full disclosure and rational deliberation on e would yield, and e** is e* plus what awareness of what they are doing as they engage in disclosure and rational deliberation would yield. The last of these, e**, goes beyond what their evidence is and how they should think about it, and thus is irrelevant to the group’s actual justification. This is relevant to a worry that one might have that a group couldn’t justifiedly believe that it is not deliberating now, because were they to disclose and deliberate about the relevant evidence, the process would bring about evidence that they were deliberating. (I am grateful to John Hawthorne for this objection.) But, as should be clear, what is at issue in this objection is e**, and not e*, which is what condition (2) picks out.
Otherwise put, awareness of what one is doing in deliberating is typically yielded in deliberating, but it follows from the deliberation itself, not from the evidence that is the content of the deliberation. Here is another instance of this pattern: suppose the existence of a particular group is tenuous, in the sense that it is likely to be formally dissolved at any moment. Suppose, further, that the group has evidence on Monday that it exists then, though it hasn’t yet formed the belief that it exists on Monday. The evidence in question is silent as to whether the group might exist after Monday. So, deliberation on the import of the evidence will be silent as to whether the group exists on Tuesday. Nevertheless, if the group were to deliberate on Tuesday, the act of deliberation would provide new evidence that the group exists on Tuesday, but this wouldn’t be revealed through the content of that deliberation. The condition in question is specifying the content of what the deliberation would lead to; it does not require that an act of deliberation ever occur. 49 I am grateful to Nick Leonard for this objection. 50 It might be noticed that there is an asymmetry in the Group Epistemic Agent Account: while only operative members can contribute positively to the group’s justified belief, any member can bear negatively on the group’s epistemic status. This shouldn’t be surprising, however, since this asymmetry is mirrored at the individual level. For instance, just about every externalist about epistemic justification accepts that negative reasons can defeat knowledge, even if positive reasons are not necessary for knowledge. Similarly, justification for believing is harder to come by than justification for doubting—I may, for example, doubt that a car is reliable on the basis of a used car salesman’s raising questions about the car’s condition, but I wouldn’t believe that another car is reliable on the basis of this same man’s testimony that it is. Still further, consider Harman’s (1973) well-known newspaper case: Jill reads in a newspaper that the President has been assassinated and, though the story is true, it has been suppressed by the government. As a result, all of the other newspapers and television stations are reporting that the President is fine and that the assassin actually killed his bodyguard. Harman argues that Jill does not know that the President was assassinated in such a case. But the point that is of interest here is that, if this conclusion is correct, then evidence that one does not possess can undermine one’s knowledge. But the same is never said about acquiring knowledge. 51 Silva (2019) argues for replacing my view with what he calls Evidentialist Responsibilism for Groups (ERG), according to which: A group, G, justifiedly believes that P on the basis of evidence E iff: (1) E is a sufficient reason to believe P, and the total evidence possessed by enough of the operative members of G does not include further evidence, E*, such that E and E* together are not a sufficient reason to believe P, and (2) G is epistemically responsible in believing P on the basis of E. Group Responsibilist Condition: A group, G, is epistemically responsible in believing P on the basis of E iff (a) enough of the operative members of G satisfy their G-relevant epistemic duties, and (b) G properly bases its belief on E. Space constraints prevent me from discussing Silva’s very interesting view in detail, but let me just note a couple of concerns with his account. 
First, according to the ERG, some operative members of a group can have defeaters while the group itself remains justified on this view. This is because the total evidence possessed by "enough" of the operative members cannot include further evidence, E*, that functions as a defeater. Presumably, then, there can be "some" operative members who have such counterevidence. But why would justified group belief be compatible with any operative members having evidence that is in direct conflict with the evidence that other operative members possess? Second, the ERG requires that the relevant operative members of G satisfy their G-relevant epistemic duties. But what if the G-relevant epistemic duties deviate in important ways from general epistemic duties? For instance, as we've noted, juries exclude hearsay evidence even when this exclusion not only fails to have a general epistemic advantage, but often has specific epistemic disadvantages, by, for instance, leading us farther away from the truth. Other groups might have similarly epistemically disadvantageous group duties. A group, for instance, might exclude the relevance of scientific testimony from non-approved sources, where the criteria for approval are explicitly bound up with financial motivations. Thus, the group satisfies its G-relevant epistemic duties, but such duties are objectively deeply epistemically problematic. Why would we say that the group is justified? 52 I am grateful to Stew Cohen, Juan Comesaña, and Brian Miller for this objection.

3 Group Knowledge There are two quite influential views of group knowledge that are inflationary and non-summative in nature, and that pose challenges to the account of justified group belief developed in Chapter 2. The first is often referred to as "social knowledge," and it is developed and defended in most detail by Alexander Bird. A paradigmatic instance of social knowing is taken to be the so-called knowledge possessed by the scientific community, where no single individual knows a given proposition, but the information plays a particular functional role in the community. The second is "collective knowledge," which occupies an important place in United States law. According to the "collective knowledge doctrine," knowledge may be imputed to a group by aggregating bits of information had by its individual members. If these accounts of social knowledge and collective knowledge are correct, then my view of justified group belief is false, particularly the requirement that some of the operative members of a group have the relevant justified beliefs themselves. So, in this chapter, I will take a close look at these two inflationary conceptions of group knowledge. I will argue that both accounts fly in the face of fundamental features of knowledge, and thus should be rejected as accounts of group knowledge.

3.1 Social Knowledge One way that a group is said to know that p without a single one of its members knowing that p is if information is distributed across a collective entity in a "compartmentalized" way. An oft-cited instance of this is Edwin Hutchins's example of the crew of a large ship safely navigating its way to port.1 Each crew member is responsible for tracking and recording the location of a different landmark, which is then entered into a system that determines the ship's position and course. In such a case, the ship's behavior as it safely travels into the port is clearly well-informed and deliberate, leading to the conclusion that there is collective knowledge present. More precisely, it is said that the crew as a whole knows, for instance, that they are traveling north at 12 miles per hour, or that the ship itself knows this, even though no single crew member does. This is taken to show that knowledge is socially extended in an important sense.2 Recently, an even more radical conception of socially extended knowledge has been defended by Alexander Bird, according to which knowledge can be possessed, not only by structured groups with a unified goal—like the crew of a ship—but also by large and unstructured collective entities with diverse aims. He calls this phenomenon "social knowing" or "social knowledge," paradigmatic instances of which are "North Korea knows how to build an atomic bomb," or "The growth of scientific knowledge has been exponential since the scientific revolution." In such cases, a collective entity is said to know that p despite the fact that not a
single individual member is even aware of that p, thereby showing that social knowledge does not supervene on the mental states of individuals. In the first part of this chapter, I will argue that knowledge is not socially extended in this radical sense.3 My argument is twofold: first, I show that endorsing “social knowing” or “social knowledge” leads to serious epistemological problems and, second, I suggest that the work done by ascribing social knowledge to collective entities can instead be done by describing such entities as being in a position to know. To begin, it is undeniable that we often attribute knowledge to large, unstructured social groups. We talk about the U.S. knowing that Osama bin Laden was killed, the scientific community knowing that climate change is having a serious impact on wildlife, and liberals knowing that Fox News is biased. These are all instances of what Bird calls organic groups. Unlike established or structured groups, which derive their cohesion from joint acceptance or external rules, organic groups are held together by organic solidarity, which “involves bonds that arise out of difference, primarily the inter-dependence brought about by the division of labour. The key feature of the division of labour is that individuals and organizations depend on others who have different skills and capacities” (Bird 2010, p. 37). For instance, the division of labor in science is made clear through the way in which different subfields and specialties bind scientists together. Bird offers the following to illustrate this: …a palaeobiologist is investigating the relationship between certain extinct animals. A significant part of the relevant evidence concerns the age of the rocks in which the fossils of the animals were found, and thus depends on the work of geologists. The geologists, in dating the rocks, depend in large measure on techniques that concern the radioactivity of rock samples, and thereby depend on theories and equipment developed by physicists. (Bird 2010, p. 38)

The sort of interdependence found between scientists is a paradigm of the kind of division of labor that is central to organic groups, and such groups are, according to Bird, the bearers of social knowledge. Unlike every other conception of group knowledge, however, social knowledge is said to not supervene on the mental states of individuals. Notice the force of this claim: it is not saying that a group can know that p without a single individual member of the group knowing that p. As we saw earlier with respect to belief and justification, this is a widely accepted thesis in the collective epistemology literature, and allows for weaker requirements involving the members of groups, such as joint acceptance. But social knowledge permits an organic group to know that p without individual belief that p, joint acceptance that p, commitment that p; indeed, without a single individual person—group member or not—even being aware of that p. This conclusion is motivated by cases like the following: ACCESSIBLE INFORMATION: Dr.
N. is working in mainstream science, but in a field that currently attracts only a little interest. He makes a discovery, writes it up and sends his paper to the Journal of X-ology, which publishes the paper after the normal peer-review process. A few years later, at time t, Dr. N. has died. All the referees of the paper for the journal and its editor have also died or forgotten all about the paper. The same is true of the small handful of people who read the paper when it appeared. A few years later yet, Professor O. is engaged in research that needs to draw on results in Dr. N.'s field. She carries out a search in the indexes and comes across Dr. N.'s discovery in the Journal of X-ology. She cites Dr. N.'s work in her own widely read research and, because of its importance to the new field, Dr. N.'s paper is now read and cited by many more scientists. (Bird 2010, p. 32)

Let T1 be the time at which Dr. N.'s scientific discovery, d, is first published, T2 be the time at which Dr. N. and all of the readers of the paper are dead, and T3 be the time at which Professor O. engages with Dr. N.'s work and makes it such that it is widely read and cited. According to Bird, there is a collective subject—namely, the scientific community or "wider science"—that knows that d throughout this entire process. He writes: Was Dr. N.'s discovery part of scientific knowledge? I argue that it was so throughout the period in question. There is no doubt that it was at the end and also at the beginning. By publishing in a well-known, indexed journal, Dr. N. added to the corpus of scientific knowledge in the way that many hundreds of scientists do each month. Now consider the intermediate time t. As regards its status as a contribution to scientific knowledge, it seems irrelevant that Dr. N. and others who had read the original paper had died or forgotten about it. What is relevant is that the discovery was in the public domain, available, through the normal channels, to anyone, such as Professor O., who needed it. (Bird 2010, p. 32)

According to Bird, then, the scientific community knows that d at T2 despite the fact that not a single individual is even aware of that d at this time. This is taken to support the negative claim that social knowledge does not supervene on the mental states of individuals. It is also taken to establish the positive thesis that social knowledge is fundamentally a matter of the accessibility of the information in question to the members of the collective entity who need it. But surely accessibility cannot be the full story for social knowledge. This is where Bird’s thesis about the “functional role of knowing” enters. Taking individual cognitive faculties as the model, Bird argues that only social structures with the following properties are candidates for being social knowers: (i) They have characteristic outputs that are propositional in nature (propositionality); (ii) They have characteristic mechanisms whose function is to ensure or promote the chances that the outputs in (i) are true (truth-filtering); (iii) The outputs in (i) are the inputs for (a) social actions or for (b) social cognitive structures (including the very same structure [the structure that produces the output]) (function of outputs) (Bird 2010, pp. 42–3) Scientific communities clearly satisfy (i)–(iii). The characteristic output of the scientific community is the journal article, which is clearly propositional. There are mechanisms within science, such as the peer-review process, to promote the chances that the outputs in journal articles are true. And the research results published in scientific journals are the direct inputs for social action, such as the use of new drugs for diseases or the application of novel technologies in businesses. Combining these considerations, the following conception of social knowledge emerges: SK: Social Knowledge: A social structure, S, socially knows that p if and only if (1) that p is true,4 (2) S satisfies (i)–(iii), and (3) the information that p is accessible5 to the members of S who need it.6 SK is able to capture not only the knowledge purportedly had by the scientific community in cases such as ACCESSIBLE INFORMATION, but also our ordinary attributions of knowledge to collective entities like the U.S.
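Since SK is simply a conjunction of three conditions, its structure can be displayed schematically. The sketch below is an illustrative toy model of my own (the boolean encoding of conditions (i)–(iii) and the variable names are simplifications, not Bird's formulation); it makes vivid how SK yields its verdict in ACCESSIBLE INFORMATION at T2: the three conditions hold even though no individual has actually accessed the information.

```python
# Toy model of SK (Social Knowledge), simplified for illustration only.
# A social structure socially knows that p iff: (1) p is true, (2) the
# structure has properties (i)-(iii), and (3) the information that p is
# accessible to the members who need it.

def socially_knows(p_is_true, satisfies_i_to_iii, accessible_to_members_who_need_it):
    return p_is_true and satisfies_i_to_iii and accessible_to_members_who_need_it

# ACCESSIBLE INFORMATION at T2: Dr. N.'s discovery d is true, the scientific
# community has propositional outputs, truth-filtering mechanisms, and
# action-guiding outputs, and the paper remains indexed and retrievable.
p_is_true = True
satisfies_i_to_iii = True            # journal articles, peer review, uptake in action
accessible = True                    # still indexed in the Journal of X-ology
accessed_by_any_individual = False   # at T2 no living person is aware of d

print(socially_knows(p_is_true, satisfies_i_to_iii, accessible))  # True
print(accessed_by_any_individual)                                 # False
# SK attributes knowledge to the community at T2 even though the information
# has been accessed by no one -- precisely the feature challenged below.
```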

Despite these virtues, I will argue in what follows that social knowledge is not knowledge after all, as accepting SK leads to unacceptable epistemological consequences.

3.2 Social Knowledge and Action The first point to notice is that no matter how inflationary collective entities and their states are deemed, groups cannot float entirely freely from their individual members. Indeed, even those who argue that groups “have minds of their own”7 recognize that there is undoubtedly an intimate connection between groups and their individual members, as is made clear in the following passage from List and Pettit: The things a group agent does are clearly determined by the things its members do; they cannot emerge independently. In particular, no group agent can form intentional attitudes without these being determined, in one way or other, by certain contributions of its members, and no group agent can act without one or more of its members acting. (List and Pettit 2011, p. 64).

Such a connection between groups and their members is particularly vivid when it comes to action. A group cannot offer assertions, engage in negotiations, sign contracts, break the law, or perform any of the other sorts of actions typically attributed to groups without there being action on the part of some of its members. Of course, this does not mean that for every group, G, and act, a, G performs a only if at least one member of G performs a. It may be that one member performs action b, and another performs action c, and still another performs action d, which, when taken together, involves G performing a. For instance, one nursing home worker might forget to turn a patient over in order to prevent bedsores, another might forget to wash this same patient’s bedsores, and a third might forget to give the patient her antibiotics. Each of these actions might be necessary, though not sufficient, for the patient’s death, and thus the collective entity might be convicted of negligent homicide, even though not a single one of the workers is herself guilty of this action. Nevertheless, even though the group committed an act of negligent homicide without any individual committing this very same act, the group could not have performed the act without the members performing relevant acts, such as failing to turn the patient over and wash her. This supports the following weaker relationship between groups and their members when actions are concerned: GMAP: Group/Member Action Principle: For every group, G, and act, a, G performs a only if at least one member of G performs some act or other that causally contributes to a.8 What the GMAP makes clear is that while group action cannot occur independently of its members, it can go beyond what any of them do individually. The second point to note is the intimate connection that exists between knowledge and action. A widely accepted view of this connection is that a necessary condition on knowing that p is that it is epistemically appropriate to use the proposition that p in practical rationality. For instance, Jeremy Fantl and Matthew McGrath argue that “S [knows] that p only if S is rational to act as if p” (Fantl and McGrath 2002, p. 78).9 According to Fantl and McGrath, then, rationally acting as if p is a necessary condition on knowing that p. Some argue for an even stronger relationship between knowledge and practical rationality. John Hawthorne and Jason Stanley
write, “Where one’s choice is p-dependent, it is appropriate to treat the proposition that p as a reason for acting iff you know that p” (Hawthorne and Stanley 2008, p. 578).10 Similarly, Timothy Williamson maintains that the “epistemic standard of appropriateness” for practical reasoning can be stated as follows: “One knows q iff q is an appropriate premise for one’s practical reasoning” (Williamson 2005, p. 231).11 Since the sufficiency of S’s knowing that p for S to rationally act as if p is logically equivalent to the necessity of S’s rationally acting as if p for S’s knowing that p, both Hawthorne and Stanley and Williamson are in agreement with Fantl and McGrath, but they also think that knowing that p is necessary for it being epistemically appropriate to use the proposition that p in practical reasoning. A similar view is found in Stanley (2005), where he writes: “To say that an action is only based on a belief is to criticize that action for not living up to an expected norm; to say that an action is based on knowledge is to declare that the action has met the expected norm” (Stanley 2005, p. 10). Here, Stanley suggests that knowledge is sufficient not only for being properly epistemically positioned to rely on p in practical reasoning, but also for being so positioned to act on p. While the subtle differences between these accounts are interesting, I will focus here on their similarities. In particular, they all endorse at least the following: KAP: Knowledge/Action Principle: S knows that p only if S is epistemically rational to act as if p or, equivalently, S is epistemically rational to act as if p if S knows that p.12 Now, it should be noted that the KAP holds that knowledge is sufficient for rendering action epistemically appropriate, but there may be other senses of propriety in which knowledge fails to be so sufficient. For instance, I may clearly know that my drunk colleague is making a fool of himself at a departmental party, but it may nonetheless be inappropriate for me to act as if this is the case by confronting him about his behavior. It may be imprudent because it would strain our friendship; or it may be impolite because it would be utterly embarrassing to him; or it may simply be pointless because he won’t remember my actions the next day anyway. Thus, my confronting my friend may be inappropriate in all of these ways, while nonetheless being epistemically proper and thus in keeping with the KAP. The KAP is defended on a number of different grounds that apply at both the individual and the group level. First, it has intuitive appeal. If I decide to leave for the airport an hour later than was expected, my knowing that the relevant flight was delayed seems sufficient to render such a conclusion epistemically permissible. If my choice is questioned, appealing to my knowledge adequately meets the challenge, while offering anything less—such as my suspecting that the flight is delayed, or being justified in believing that it is—does not. Similarly, if BP uses dispersants to clean up the oil spill in the Gulf of Mexico, the company knowing them to be both effective and safe seems adequate to render the action epistemically permissible. If BP’s action is challenged, appealing to knowledge sufficiently meets the challenge, while offering anything less—such as suspecting that the dispersants are effective and safe—does not. Moreover, the KAP has significant theoretical power. 
For instance, it is often noted that while it is epistemically inappropriate to rely on the proposition that one’s lottery ticket will lose in one’s practical reasoning and relevant actions if one has merely probabilistic evidence for this conclusion, it is epistemically permissible to so rely on this proposition once one has learned the results of the lottery.13 The KAP, combined with the thesis that one possesses knowledge that one’s lottery ticket will lose in the latter, but not the former, case can easily explain this data. Similarly, it is frequently observed that when low standards are in play within a contextualist
framework, such as in DeRose's case of self-attributing knowledge that the bank is open when it is not especially important that he deposit his paycheck, it seems epistemically appropriate to rely on the proposition in question in practical reasoning and action.14 In contrast, when high standards are in play, such as when DeRose has just written a very large and important check for which he needs to ensure that adequate funds will be available in his bank account, it seems inappropriate to so rely on this proposition.15 Once again, the KAP, combined with the view that one can self-attribute knowledge in the former, but not the latter, case can account for this data with ease. Finally, the KAP is said to explain what makes knowledge distinctively important or valuable. According to Fantl and McGrath, "If you know that p, then p is warranted enough to justify you in ϕ-ing, for any ϕ" (Fantl and McGrath 2009, p. 66). Thus, for any instance of practical reasoning or action, ϕ, one's knowing that p is sufficient for epistemically justifying one in ϕ-ing. Such a principle, Fantl and McGrath argue, "secures the distinctive importance of knowledge" (Fantl and McGrath 2009, p. 182).16 In a similar spirit, Hawthorne claims that "… the importance of the concept of knowledge consists, in large part, in such [a] connection…as [that between knowledge and action]; in turn, it seems likely that any view that severs such [a connection] will be highly disruptive to our intuitive sense of the epistemic landscape" (Hawthorne 2004, p. 31). And Stanley maintains that rejecting the connection between knowledge and action "devalues the role of knowledge in our ordinary conceptual scheme" (Stanley 2005, p. 10). With the GMAP and the KAP in mind, let us return to the paradigmatic instance of social knowledge: the scientific community's knowing that d at T2, despite the fact that no individual is aware of that d at this time. According to the KAP, if G knows that d, then it is epistemically rational for G to act as if d. Let us make this vivid by supposing that d is the discovery of an enzyme that plays a role in the development of cancer cells. Given the GMAP, a group can act only through its individual members, so G's actions would be through individuals, not a single one of whom needs to even be aware that d. For the sake of clarity, let's add to our envisaged scenario that Dr. P., who is known worldwide as the leading researcher focusing on cutting-edge cancer treatments, is speaking at a conference about the current state of affairs in the scientific community, and questions about d arise. In this context, we can thus imagine that Dr. P. is acting as a spokesperson of sorts for the scientific community.17 When SK is combined with the GMAP and the KAP in this way, the result is that it is epistemically rational for G, through the actions of Dr. P., to assert that d in lectures and published work, to approve cancer drugs that depend on d, to conduct further experiments for cancer treatment that rely on d, to apply for grants that take d for granted, and so on. But does this seem right? If Dr. P., and indeed every other living member of the scientific community, has no evidence whatsoever that the enzyme in question in fact plays a role in the development of cancer cells, then in what sense would it be epistemically rational for the scientific community to assert that d or approve cancer drugs that depend on d?
If the community did so happen to perform these actions, it seems that they wouldn't be related to the knowledge in question, but, instead, would be entirely a matter of luck. Indeed, the complete absence of evidence upon which any of these actions would be based makes them not only epistemically irrational and impermissible, but also reckless and irresponsible. Imagine, for instance, that Dr. P. is pressed about how precisely the enzyme plays a role in the development of cancer cells, or whether there are any risks involved in treating cancer through targeting this enzyme, or how robust the evidence is supporting this discovery, or whether there are some cancers rather than others that are linked to the enzyme. He,
on behalf of the scientific community, would be utterly silent on all of these questions. This shows that it is epistemically impermissible for the scientific community to act on the discovery of an enzyme that plays a role in the development of cancer cells in any of the ways that are typical of knowledge. The upshot of these considerations, then, is that if the GMAP and the KAP are true, SK is false: a group cannot know that p at a time when no individual is aware that p at that time. Now it should be noted that this argument is not at odds with Bird’s characterization of a social structure, since scientific communities generally satisfy condition (iii). In particular, the published findings in scientific journals are often the inputs of social action. My claim is thus not that scientific communities never engage in social action but, rather, that when there is an instance of mere social knowing—such that the output in question is accessible, but not accessed, by any individual—it is not epistemically rational for the community to act on it. A natural response for Bird to make here is the following: a group can rationally act on its social knowledge that p in the absence of any individual awareness that p when its action is the result of a goal-directed system that is appropriately responsive to the input that p. Consider, for instance, the following: DISTRIBUTED INFORMATION: The
47 members of the UN Population Commission are collecting data for a report that they will issue, Charting the Progress of Populations. Each member works independently and enters the information that he or she collects into a computer that, in turn, processes the data and provides various results as outputs. One such result is that the birth rate of Latinos is on the rise in the U.S., a conclusion of which not a single member of the UN Population Commission, or anyone else, is aware.18 This conclusion is then automatically sent to the New York Times, which writes, "According to the UN Population Commission, the birth rate of Latinos in the U.S. is on the rise." Here Bird might argue that the UN Population Commission knows that the birth rate of Latinos in the U.S. is on the rise and that it is rational for it to act on this knowledge by, for instance, asserting that this is the case to the New York Times, despite the fact that there is no individual awareness of this fact at the time of the assertion. In particular, the members of the Commission are simply cogs in a very sophisticated system that takes the information from the members as inputs and produces actions on behalf of the group as outputs. And this can happen whether there is individual awareness or not. By way of response, there are two points that I would like to make. First, it is not clear that the output described in DISTRIBUTED INFORMATION is rightly regarded as the UN Population Commission's action. Indeed, it is not clear that it is an action at all, let alone the UN Population Commission's. Is the supermarket door acting when it responds to the sensors being activated by a shopper? Is the elevator acting when it takes me to the sixth floor because I pressed the corresponding button? Presumably not. But then what is the salient difference between these cases and the information about the birth rate of Latinos being sent to the New York Times? To answer this question, let us take a step back and look at the connection between belief and action. One classic role of belief is that, together with desire, it rationalizes action.19 Thus, if you want to offer an explanation of my approaching the coffee pot in the kitchen, you can cite my desire for coffee and my belief that it can be found in the pot in the kitchen. But now consider the relationship between the UN Population Commission's purported belief that the birth rate of
Latinos is on the rise and its actions.20 Imagine, for instance, that the UN Population Commission has convened a meeting with all 47 of its members to vote on a question that is directly tied to whether this is the case, say, whether additional funding should be provided for programs involving this ethnic group. If there is not a single member of the Commission who is aware of the results of their research, then surely the group would vote against the extra funds for Latino programs in the U.S. But then how can this be reconciled with the UN Population Commission purportedly believing, and indeed knowing, that the birth rate of Latinos is on the rise? Moreover, if the Commission did end up voting in favor of the extra funding, it certainly would not be because its belief is rationalizing its action in the relevant sense. In particular, there would be absolutely no connection between the output of the computer and the voting of the members in such a case. Given this, there is compelling evidence that the UN Population Commission does not believe, and hence does not know, that the birth rate of Latinos is on the rise in DISTRIBUTED INFORMATION.21 Thus, this strategy is not going to save the connection between social knowledge and action. Of course, the proponent of SK can simply deny the KAP, either in general or specifically with respect to groups. This move, however, comes at a steep cost. For notice that it is not merely that social knowledge will be disconnected from action in some rare or contrived cases; rather, it will be disconnected from action in all cases. Whenever there is an instance of mere social knowledge, and a group is said to know that p without this state supervening on the mental states of any individuals, the KAP would be violated. However, even if one doubts the truth of the KAP, it is nearly universally accepted that knowledge bears some close relationship with action.22 Yet social knowledge severs even the weakest such connection since it is entirely divorced from action. This puts serious pressure on what the value or significance of social knowing is. For if it cannot figure into explaining or rationalizing group actions, then why should we care about it? Moreover, those who deny the KAP typically replace it with a weaker epistemic norm. But such a move isn't available to the proponent of SK since the same sort of problem arises. For instance, consider a Justified Belief/Action Principle: S is epistemically rational to act as if p if S justifiedly believes that p. Now if G knows that d, then, assuming justification is necessary for social knowledge, it is epistemically rational for G to act as if d. Since, according to the GMAP, a group can act only through its individual members, G's actions would be through individuals, not a single one of whom is even aware that d. So, if G did act as if d, the action would be a matter entirely of luck. Thus, if the GMAP and a Justified Belief/Action Principle are true, then the falsity of SK follows again. What this shows is that the reason motivating the proponent of SK to reject the KAP leads to the rejection of even significantly weaker epistemic norms. This is obviously an unwelcome result, as surely some norm connecting an epistemic state with action is correct.

3.3 Social Knowledge and Defeaters Even if the proponent of SK were to bite the bullet about divorcing social knowledge from group action and group responsibility, there is a further problem facing this view. To see this, consider the following addition to the original Case of Dr. N.:

ADDITION: Suppose
that because of their ignorance of Dr. N.’s published paper, many members of the scientific community come to believe that not-d at T2. Indeed, suppose that at scientific conferences and workshops, there is often explicit collective agreement among the participants that not-d. Because of this, the members of the scientific community act on not-d by, for instance, asserting that not-d in lectures and published work, approving cancer drugs that depend on not-d, conducting further experiments for cancer treatment that rely on not-d, applying for grants that take not-d for granted, and so on. The first point to notice here is that on every available account of group belief, the scientific community believes that not-d at T2.23 In particular, whether group belief is understood in terms of joint acceptance of the proposition by the members of the collective, in terms of belief by the individual members of the group, or in functional terms, the scientific community counts as believing that not-d as the scenario is described. The second point that is here relevant is that, like individuals, groups can have defeaters. It may be recalled from Chapter 2 that there are two central kinds of defeaters that are typically taken to be incompatible with justification and, therewith, knowledge at the individual level. First, there are psychological defeaters, which can be either rebutting or undercutting. A psychological defeater is a doubt or belief that is had by S, and indicates that S’s belief that p is either false (i.e., rebutting) or unreliably formed or sustained (i.e., undercutting). Second, there are normative defeaters, which can also be either rebutting or undercutting. A normative defeater is a doubt or belief that S ought to have, and indicates that S’s belief that p is either false (i.e., rebutting) or unreliably formed or sustained (i.e., undercutting). Recall, further, that a defeater may itself be either defeated or undefeated, and that when one has a defeater for one’s belief that p that is not itself defeated, one has an undefeated defeater. Applying these considerations to groups, it is fairly straightforward to understand how collective entities could have at least some psychological defeaters: to the extent that we understand what it means for a group to believe that p, we also understand what it means for a group to believe that p but also to believe that q, where q indicates that G’s belief that p is either false or unreliably formed or sustained. Indeed, in ADDITION we find precisely this sort of scenario: the scientific community believes that d because it purportedly knows that d, but it also believes that not-d. The belief that not-d, then, is a classic instance of a rebutting psychological defeater of the group’s belief that p that is not itself defeated. Now, there are two different conclusions that might be drawn regarding this case, neither of which is attractive for the proponent of SK. On the one hand, it might be argued that the scientific community has a rebutting psychological defeater at T2, and thus the social knowledge in question has been defeated. There are at least two problems with this response. First, in the absence of an argument, it seems epistemically arbitrary to maintain that the mental states of individual members of the group can contribute negatively to social knowing, but not positively. 
For notice that according to the SK, social knowing that p is determined entirely by the satisfaction of conditions (1)–(3), which require that an appropriate social structure truly believe that p, where that p is available to the members of the collective entity who need it. There is, then, nothing about the mental states of either the individual members or of the scientific community itself that contributes positively to the knowledge that d at T2. So then why would their mental states be relevant negatively? While I regard this first problem as sufficient for ruling out the conclusion that the scientific community has a rebutting psychological defeater at T2, it is worth mentioning a second, though
less serious, worry: if the scenario described in ADDITION includes a defeater, then there will be less social knowledge than might have been thought. To see this, suppose, for instance, that there is a discovery published in an obscure journal that definitively establishes that dinosaur extinction is the result of extremely large-scale volcanic activity. According to the SK, this can count as an instance of social knowledge even if the author of the published paper and all those who were aware of it die. At this later time, however, it seems plausible and indeed probable that a significant number of the members of the scientific community might nonetheless believe that such extinction is the result of a meteor impact. Thus, what might have initially seemed to be an instance of social knowledge turns out to instead be defeated. What is significant for our purposes here is that this scenario does not seem anomalous or unrealistic. In many cases of purportedly social knowledge, it is likely that there will be a significant contingent of relevant individuals who hold conflicting beliefs that function as either rebutting or undercutting psychological defeaters. Moreover, among those who do not disbelieve or have doubts about the matter in question, there almost certainly will be many who should disbelieve or have doubts about it, given the other beliefs that they hold. This threatens to shrink the instances of social knowledge to a relatively small number, thereby calling into question the significance of such a phenomenon to our epistemic lives. Given these problems with granting that the scientific community has a rebutting psychological defeater at T2, it might be argued, on the other hand, that the social knowledge in question has not been defeated at this time. In particular, it might be claimed that in a situation such as that found in ADDITION, a group can know that p even though the group believes that not-p. There are two different ways this might be done. First, the proponent of SK might support this verdict by saying that it is possible for a group to both believe that p and believe that not-p while not having a rebutting psychological defeater. The problem with this strategy, however, is that it exempts groups from the norms governing rationality and thus removes them from the realm of the rational altogether. For believing both that p and that not-p is the height of irrationality, and rebutting defeaters are precisely what render this combination of states epistemically impermissible. But if groups are not subject to standard norms of rationality, then there seem to be excellent grounds for ruling them out as knowers. Otherwise put, if a subject is not deemed irrational for believing both that p and that not-p, then this is excellent evidence that the subject is not a knower at all but, rather, is a mere receptacle of information. The second strategy for supporting the verdict that the social knowledge in question has not been defeated at T2 is to say that it is possible for a group to both know that p and believe that not-p because belief is not a necessary condition on knowledge. In this way, the conclusion that the scientific community believes both that p and that not-p would be avoided since there would be no group belief that p despite there being social knowledge that p. But then the proponent of SK would have to explain why granting the existence of this kind of social knowledge is more compelling than the view that belief is a necessary condition on knowledge.
It seems doubtful that this could be done, not only because of the prima facie implausibility of SK, but also because there is an excellent reason why belief is nearly universally taken to be necessary for knowledge; namely, there needs to be a connection between the knower and the proposition known to distinguish knowing that p from merely being in a position to know that p. Moreover, notice that the proponent of this strategy cannot even argue for something weaker, such as group acceptance that p, to replace group belief that p since social knowledge can be altogether free of the mental states of the individual members.24 There is, then, even further reason to reject that SK is providing an account of a collective entity knowing that p.25
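The basic defeater machinery at work in this section can also be put in miniature. The sketch below is a toy rendering of my own (the representation of belief sets and the defeater check are deliberate simplifications for illustration): in ADDITION at T2, the community's doxastic profile contains not-d, which functions as an undefeated rebutting defeater for the purported knowledge that d.

```python
# Toy rendering of a rebutting psychological defeater at the group level.
# A rebutting defeater for a belief that p is a belief indicating that p is
# false; if it is not itself defeated, justification for p -- and with it
# knowledge that p -- is lost.

def negation(p):
    return p[4:] if p.startswith("not-") else "not-" + p

def has_undefeated_rebutting_defeater(belief_set, p):
    return negation(p) in belief_set

# ADDITION at T2: the scientific community collectively accepts not-d,
# while SK still counts the community as knowing that d.
community_beliefs_at_T2 = {"not-d"}

print(has_undefeated_rebutting_defeater(community_beliefs_at_T2, "d"))  # True
# If defeaters apply to groups as they do to individuals, the purported social
# knowledge that d is defeated at T2; if they do not, groups are exempted from
# the norms of rationality -- the dilemma pressed above.
```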

3.4 Knowing, Being in a Position to Know, and Should Have Known In addition to the specific problems discussed thus far with respect to social knowledge, there is the further question why we should attribute knowledge to collective entities in such cases in the first place. With respect to individuals, there is a clear difference between knowing that p and being in a position to know that p, which itself is grounded, at least in part, in the difference between information that has been accessed and information that is merely accessible. For instance, if I have an unopened letter on my desk that contains a confession from my friend to a murder, we wouldn’t say that I know that my friend committed the crime prior to my opening it and reading its contents.26 Instead, we would say that I am in a position to know this. Indeed, it would not only be bizarre for me to assert that my friend committed the murder or to act on this by reporting her to the police if the information is merely accessible, but not accessed; it would also be reckless and irresponsible. When the police ask why I’m reporting my friend, for instance, I would have absolutely nothing to offer by way of support. But why would the situation be any different when groups are concerned? If the discovery of an enzyme that plays a role in the development of cancer cells is published in a journal article that is accessible to, but not accessed by, any living scientist, why wouldn’t we provide the same verdict as we did in the individual case: the scientific community is in a position to know this discovery, but it doesn’t know it? Not only is this the more intuitive description, it also accords well with the disconnect that was earlier discussed between social knowledge and action. In particular, there is such a disconnect precisely because this phenomenon involves a group being in a position to know, rather than knowing, and there isn’t an intimate connection between the former and action as there is with the latter. Moreover, the mere fact that social knowledge isn’t knowledge after all doesn’t mean that we can never hold groups responsible for the information in question. Let’s consider the individual case, first. Suppose that not only is the confession from my friend sitting unopened on my desk, but also that my friend explicitly told me to read it before heading over to the police station for questioning. Here it might be appropriate to hold me responsible for the content of the letter since I should have known that my friend committed the crime.27 Similar considerations apply with respect to groups. Perhaps it is not only that the discovery of an enzyme that plays a role in the development of cancer cells is published in a journal article that is accessible to any living scientist, but also that practicing oncologists have an epistemic responsibility to have read the article. As in the individual case, it might be appropriate to hold the oncology community responsible for this discovery since it should have known about it. Thus, not only does social knowledge lead to unacceptable epistemological consequences, but the role that it would play can be better filled by other states, such as being in a position to know and should have known. We can conclude, then, that knowledge is not socially extended in the radical sense found in SK, and thus does not pose a problem for the view of justified group belief developed in Chapter 2.

3.5 Collective Knowledge The second view of group knowledge, which is not only highly influential but also at odds with my own account of justified group belief, is found with the application of the “collective
knowledge doctrine" in U.S. law. In the case of the United States v. Bank of New England, for instance, the Bank of New England was charged and convicted of thirty-one violations of the Currency and Foreign Transaction Reporting Act. The details of the case are as follows: from May 1983 through June 1984, James McDonough visited the Prudential branch of the Bank of New England on thirty-one separate occasions to withdraw money from a corporate account. On one such occasion, McDonough presented the bank teller with two checks made payable to cash in the amounts of $8500 and $5000. According to the Reporting Act, a Currency Transaction Report (CTR) must be filed whenever a cash withdrawal is made that exceeds $10,000 and it is a violation to willfully fail to file such a report. The teller on duty was unaware of the Reporting Act, while the teller's supervisor was aware of the Act, but did not know that the customer's two deposits had to be aggregated for purposes of the reporting requirement. The bank's project coordinator, who was working in the bank's main office, knew that the law required aggregation, but had no knowledge that the transaction in question occurred (Ragozino 1995, p. 433). None of the three employees of the bank individually committed a criminal violation of the Act, then, because none, individually, willfully failed to file a CTR. However, according to the "collective knowledge doctrine," which resulted from this case, the knowledge of the individual employees can be added or aggregated and then properly attributed to the bank itself. Indeed, according to the trial court's instructions to the jury considering the case: "if Employee A knows one facet of the currency reporting requirement, B knows another facet of it, and C a third facet of it, the bank knows them all" (Hagemann and Grinstein 1997, p. 214). Given this, the knowledge of the three individual employees—that is, the teller's knowledge that two deposits exceeding $10,000 had been made, the head teller's knowledge of the reporting requirement, and the coordinator's knowledge that multiple deposits must be aggregated—can be combined and then imputed to the bank. Hence, it was concluded that the Bank of New England satisfied the knowledge requirement needed for establishing mens rea, thereby leading to a guilty verdict with respect to the violations of the Currency and Foreign Transaction Reporting Act. In order to assess this notion of collective knowledge in greater detail, let's consider a structurally similar case, which will allow us to determine whether the doctrine in question is generally true. Note that if it is not, this is reason to think there is an alternative explanation for why we are inclined to hold the Bank of New England accountable for its violation of the reporting obligation. Suppose that three police officers all work for the same unit of the Chicago Police Department (CPD). Officer A knows (1): that a seven-year-old child, Jimmy Smith, has been reported missing from the Rogers Park neighborhood of the city since this morning. He knows this because it was communicated to him by his superior. Officer B knows (2): that Jimmy Smith was wearing a Frida Kahlo t-shirt this morning because he lives next door to the Smith family, and he remembers commenting on how he loves Frida Kahlo's work when he saw Jimmy walk out of the house.
And Officer C knows (3): that a seven-year-old wearing a Frida Kahlo t-shirt is walking with an adult man in a park in Edgewater, the neighborhood just south of Rogers Park; he knows this because he saw the child while he was patrolling the park. Officer A knows only (1), but not (2) or (3); Officer B knows only (2), but not (1) or (3); and Officer C knows only (3), but not (1) or (2). According to the collective knowledge doctrine, the knowledge of the individual police officers can be properly attributed to the Chicago Police Department as a group.28 Thus, the CPD knows (1), (2), and (3), even though no single police officer knows all three. But let's examine this conclusion a bit more closely. As should be clear, if anyone—individual or group—knows all three of these facts, then the knowing agent should approach the seven-year-old in the park to determine whether he is Jimmy Smith. This is especially true if the knowledge
in question is had by a police officer or unit charged with finding the missing child. Not only is this supported by the Knowledge/Action Principle discussed at length in sections 3.2–3.4, it is also intuitive. If a police unit knows that a child has just gone missing, and it also knows that a child fitting a very specific description that is known to be true of the missing boy is now walking through a park in a neighborhood close to where he was last seen, it undoubtedly should check to see if the child is Jimmy Smith.29 Imagine, for instance, that the child is in fact Jimmy Smith, and that he is walking with his abductor in the park, and his parents learn that a police officer who is part of a unit that purportedly knew (1), (2), and (3) failed to intervene. They would, quite rightly, be filled with outrage and despair but, perhaps even most of all, confusion. How, they might ask, could you know (1), (2), and (3) and yet allow our son to walk past you in the hands of his abductor? Notice, though, that none of this is true of the scenario as described above. It is not at all puzzling why the police unit, and Officer C in particular, did not approach the young boy in the park. The unit could quite easily explain to the parents that while each officer had bits of relevant knowledge, they did not communicate effectively, and so Officer C had no idea that a child was even missing from Rogers Park when he saw the boy in Edgewater. Indeed, it would be deeply problematic for him to approach what is from his perspective an ordinary child walking in the park with an adult man, with no evidence that there is a problem. If Officer C did stop Jimmy Smith, what would he be able to say to his abductor upon being asked why he was approaching them? Nothing at all relevant, it seems. And if he did succeed in preventing Jimmy Smith from being abducted, this would not be in any way the result of something creditable to him or the police unit. It would be pure luck. But action that is guided by knowledge is not the result of luck in this way, and so there is very good reason to deny that the police unit in fact has knowledge, as the collective knowledge doctrine says. This problem is similar to the one discussed in section 3.4 regarding social knowing. Collective knowledge, as it is understood in U.S. law, divorces knowledge from the intimate connection it has with action. As we saw, knowledge is said to be both necessary and sufficient for epistemically permissible action. If, for instance, one knows that p, then one is properly epistemically positioned to act as if p. But collective knowledge that p does not make one properly epistemically positioned to act as if p. In the absence of Officers A, B, and C communicating their individual pieces of knowledge with one another, any action taken by the police unit as a collective or as individual officers to question Jimmy Smith or his abductor would be unwarranted. Indeed, it would be no different than an unrelated group of officers stopping to question a random man and child walking in a park. Not only does this raise epistemic concerns, as I have been arguing here, but there are also ethical and legal problems with people who are going about their lives being stopped by law enforcement for no reason whatsoever. Since the connection that knowledge has with action is one of its distinctive features, and is even said to be what makes knowledge valuable, we have reason for concluding that collective knowledge is not knowledge after all. 
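The aggregation at issue can be made vivid with a simple sketch. What follows is a toy model of my own (the data structures and fact descriptions are illustrative simplifications, not anything drawn from the case law): the doctrine imputes to the unit the union of what its members individually know, and that imputed set licenses an intervention that no member's own knowledge would support.

```python
# Toy model of the collective knowledge doctrine as aggregation.
# Each officer's knowledge is modeled as a set of facts; the doctrine
# imputes the union of those sets to the unit as a whole.

fact1 = "a child, Jimmy Smith, is missing from Rogers Park"
fact2 = "Jimmy Smith was wearing a Frida Kahlo t-shirt this morning"
fact3 = "a child in a Frida Kahlo t-shirt is walking with a man in Edgewater"

officer_knowledge = {"A": {fact1}, "B": {fact2}, "C": {fact3}}

# What the doctrine imputes to the unit: the union of the members' knowledge.
unit_knowledge = set().union(*officer_knowledge.values())

# Approaching the child would be epistemically warranted only for an agent
# whose knowledge includes all three facts.
needed_to_act = {fact1, fact2, fact3}

print(needed_to_act <= unit_knowledge)                              # True
print(any(needed_to_act <= k for k in officer_knowledge.values()))  # False
# The imputed "collective knowledge" outstrips what any individual officer
# could act on, which is the disconnect from action argued for above.
```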
The proponent of the collective knowledge doctrine may respond by pointing to United States v. Whitfield (2011), a case that concerns how far the collective knowledge doctrine extends in U.S. law. Camden police officers in three marked police vehicles were patrolling an area of the city known for violence and drug activity involving crack cocaine. One police officer observed a hand-to-hand exchange involving the defendant, while two different officers apprehended the defendant without having communicated with the first. The defendant challenged the legality of his
seizure because the officer who apprehended him did not witness the hand-to-hand exchange. However, citing the collective knowledge doctrine, the Third Circuit of the United States Court of Appeals upheld the seizure on the grounds that “the knowledge of one law enforcement officer is imputed to the officer who actually conducted the seizure, search, or arrest.” In particular, the Court reasoned that it is unnecessary for one officer, who is working with his fellow officers as a “unified and tight-knit team,” to communicate to the other officers all of the information relevant to the seizure in question: It would make little sense to decline to apply the collective knowledge doctrine in a fast-paced, dynamic situation such as we have before us, in which the officers worked together as a unified and tight-knit team; indeed, it would be impractical to expect an officer in such a situation to communicate to the other officers every fact that could be pertinent in a subsequent reasonable suspicion analysis. Applying the collective knowledge doctrine here, there is little question that there was reasonable suspicion to seize Whitfield.

Thus, the Third Circuit found that collective knowledge of the group of officers as a whole justified the seizure of the defendant in question. The important dimension of the Third Circuit’s reasoning here is that it is the fact that the officers worked together as a “unified and tight-knit team” that purportedly justifies the application of the collective knowledge doctrine. But in what sense did the officers work as a team? In case law, there are two approaches to the collective knowledge doctrine that should be distinguished in answering this question. According to the vertical approach, one law enforcement officer who possesses probable cause may instruct another officer to act without communicating the requisite knowledge in question. For instance, in United States v. Hensley (1985), an informant told a police officer in St. Bernard, Ohio that Hensley had driven the getaway car from an armed robbery in St. Bernard six days earlier. The officer put out a “wanted flyer” to nearby police departments, which described Hensley and the crime for which he was being sought, and asked that he be picked up and held if seen. The police department in Covington, Kentucky read the flyer to its officers and, on this basis, the Covington police pulled Hensley over upon spotting him. The United States Supreme Court, relying on an earlier case, Whiteley v. Warden (1971), argued:

[L]anguage in Whiteley suggests that, had the sheriff who issued the radio bulletin possessed probable cause for arrest, then the [arresting] police could have properly arrested the defendant even though they were unaware of the specific facts that established probable cause. Thus Whiteley supports the proposition that, when evidence is uncovered during a search incident to an arrest in reliance merely on a flyer or bulletin, its admissibility turns on whether the officers who issued the flyer possessed probable cause to make the arrest. It does not turn on whether those relying on the flyer were themselves aware of the specific facts which led their colleagues to seek their assistance. In an era when criminal suspects are increasingly mobile and increasingly likely to flee across jurisdictional boundaries, this rule is a matter of common sense: it minimizes the volume of information concerning suspects that must be transmitted to other jurisdictions and enables police in one jurisdiction to act promptly in reliance on information from another jurisdiction.

Thus, the Supreme Court reasons here that when an officer who issues a wanted flyer possesses the requisite knowledge for probable cause, and instructs other officers to act on this, a corresponding arrest is proper even when the arresting officer lacks the knowledge in question. Further insight into the reasoning underlying this can be seen by looking to Commonwealth of Pennsylvania v. Yong, where the Pennsylvania Supreme Court argues that: “Read jointly, Whiteley and Hensley instruct that the collective knowledge doctrine serves an agency function. When a police officer instructs or requests another officer to make an arrest, the arresting officer stands in the shoes of the instructing officer and shares in his or her knowledge.”

This vertical approach to the collective knowledge doctrine is not at all at odds with the account of justified group belief outlined in the previous chapter. There are at least two ways of understanding this. First, the police departments working together to apprehend a suspect might be construed as forming a unified group for this purpose, where a significant percentage of the operative members possess the requisite knowledge for probable cause. In particular, the officers who put out the flyer can be seen as the operative members of the group who have the relevant justified belief, and so the mere fact that the arresting officer lacks it does not pose a problem. Moreover, notice that there is no sense whatsoever in which the arrest of the suspects in these sorts of cases is random or lucky. Instead, there is a very close connection between the knowledge and action—it just occurs through the group member(s) who has it asking another to act on it.

A second, and related, understanding of the vertical approach to the collective knowledge doctrine is where the arresting officer (or officers) is understood as a proxy agent30 for the knowing officer (or officers). On this reading, the arresting officer’s actions constitute the actions of the knowing officer, and so there is no gap between knowledge and action. Since the arresting officer’s actions just are the knowing officer’s actions, the knowledge in question is guiding the detainment and arrest. This is made possible at least in part through the proxy agent knowingly acting with delegated authority as a proxy. The arresting officer’s knowledge that probable cause is possessed is part of what allows his action to be guided by that knowledge, even in the absence of his own individual beliefs as to what that probable cause is. Once again, the account of justified group belief defended in the previous chapter supports this reading. The knowing officer is an operative member who individually possesses the relevant justified belief that serves as the foundation for the group’s knowledge, and the arresting officer is functioning as a proxy agent for him.

On both of these readings of the vertical approach to the collective knowledge doctrine, the sense in which a group of officers is working as a “unified and tight-knit team” is clear. There are direct channels of communication between individuals or subgroups that provide excellent evidence to the arresting officers that other members of the group have knowledge of probable cause, even if detailed information about this is not conveyed. This is fairly standard when working as collectives. Members of groups often follow norms that allow one another to trust that each is doing his or her share of the work. This enables streamlined communication that still secures an appropriate connection between epistemic states and action without requiring the conveying of detailed information that would make group work far less efficient and effective. Importantly, these norms also allow the proper apportioning of blame when individual members do not meet their prescribed obligations. For example, if the superior officer has misread the bulletin from a neighboring jurisdiction, the blame for making an unwarranted arrest would fall on his shoulders rather than on those of the arresting officer.

There is, however, a second, horizontal approach to the collective knowledge doctrine in U.S. law that is “broader” and at work in United States v. Whitfield.
On such an approach, “the probable cause assessment is not focused on a single officer’s knowledge; rather, probable cause is assessed by aggregating the knowledge of two or more law enforcement officials working together” (Commonwealth of Pennsylvania v. Yong). In Commonwealth of Pennsylvania v. Yong (2015), the majority opinion states that courts that rely on the horizontal approach “have ignored the original aim of the rule” by “eliminating the requirement that officers actually communicate with each other.” Moreover, the majority reasoned that “an expansive interpretation of the collective knowledge doctrine does not comport with the fundamental requirement that warrantless arrests be supported by probable cause.” What we see here is that the horizontal
approach to the collective knowledge doctrine is far from uncontroversial in U.S. courts. Indeed, the mere aggregation of the knowledge of individual members of a group is often explicitly rejected as sufficient for the knowledge required for probable cause. This is the case, not only because the courts have raised doubts about groups possessing collective knowledge in the absence of any relevant communication whatsoever, but also because of the potential dangers of such attributions. In Commonwealth of Pennsylvania v. Yong, concerns very similar to those raised in earlier sections of this chapter are articulated with respect to the horizontal approach: “under any approach that permits aggregation of unspoken information or justifies actions taken absent direction from a person with the necessary level of suspicion, there remain serious concerns for protecting citizens from unconstitutional intrusions.” The majority continues by quoting from United States v. Massenburg (2011, emphasis added):

No case from the Supreme Court…has ever expanded the collective knowledge doctrine beyond the context of information or instructions communicated (“vertically”) to acting officers. Some of our sister courts have authorized “horizontal” aggregation of uncommunicated information. See United States v. Ramirez, 473 F.3d 1026, 1032-33 (9th Cir. 2007) (collecting cases)…. The rationale behind the Supreme Court’s collective-knowledge doctrine is, as the Court noted in Hensley, a “matter of common sense: [the rule] minimizes the volume of information concerning suspects that must be transmitted to other jurisdictions [or officers] and enables police…to act promptly in reliance on information from another jurisdiction [or officer].” Hensley, 469 U.S. at 231. Thus, law enforcement efficiency and responsiveness would be increased[.]… The Government’s proposed aggregation rule serves no such ends. Because it jettisons the present requirement of communication between an instructing and an acting officer, officers would have no way of knowing before a search or seizure whether the aggregation rule would make it legal, or even how likely that is. The officer deciding whether or not to perform a given search [or seizure] will simply know that she lacks cause; in ordinary circumstances, she will have no way of estimating the likelihood that her fellow officers hold enough uncommunicated information to justify the search. And as an officer will never know ex ante when the aggregation rule might apply, the rule does not allow for useful shortcuts when an officer knows an action to be legal, as Hensley did. Perhaps an officer who knows she lacks cause for a search will be more likely to roll the dice and conduct a search anyway, in the hopes that uncommunicated information existed. But as this would create an incentive for officers to conduct searches and seizures they believe are likely illegal, it would be directly contrary to the purposes of longstanding Fourth Amendment jurisprudence. (Massenburg, 654 F.3d at 494.)

Note especially the points emphasized in this lengthy quote. The Fourth Circuit is unambiguously stating that acting on “collective knowledge” understood merely horizontally—where individual bits of knowledge are simply aggregated and attributed to the group in the absence of any relevant communication—severs the connection between knowledge and action. The best-case scenario from an epistemic point of view is that the arresting officer will be acting on an estimate of how likely it is that there is probable cause rather than on knowledge of probable cause. But estimates of this sort clearly do not amount to knowledge in any reasonable sense. The worst-case scenario is that the arresting officer will simply gamble with respect to the lives of others and conduct a search with the hope that uncommunicated information exists for probable cause. Again, this seems to be a far cry from collective knowledge. Building on this view from the Fourth Circuit, the Pennsylvania Supreme Court says:

In light of these concerns, we cannot acquiesce to the Commonwealth’s request to broadly interpret the collective knowledge doctrine and adopt an unrestricted horizontal application.…we will not endorse an approach that has the potential of encouraging police without the requisite level of suspicion to infringe on a person’s freedom of movement in the hopes that his or her fellow officers possess such level of suspicion. See Massenburg, 654 F.3d at 494.

As should be clear, I entirely agree with the Pennsylvania Supreme Court here; indeed, I think
the Court is offering an argument very similar to the one against social knowing from the previous sections. The Court is unequivocally saying that the mere aggregation of individual bits of knowledge—without any relevant communication—just isn’t collective knowledge. Because knowledge warrants the corresponding action, calling this knowledge illegitimately sanctions searches and seizures that infringe on people’s rights. This is not only epistemically problematic, it is also morally and legally wrong. Otherwise put, knowledge has a very close connection with action; it is, for instance, typically sufficient for epistemically permissible action. But searching someone on the basis of horizontally understood collective knowledge is impermissible. So, we have reason to conclude that collective knowledge, so understood, isn’t knowledge after all. Moreover, as I argued in the previous sections of this chapter, the mere fact that a group of police officers does not have the knowledge in question does not eliminate attributions of responsibility. Perhaps the group is such that they should have communicated with one another the information necessary for probable cause and thus should have possessed the relevant knowledge. We can certainly criticize both the individual officers and the collective as a whole for failing to do what they ought to have done. Or perhaps the officers are in a position to know the relevant facts, and so they are subject to disapproval for failing to do further epistemic work. However the details are fleshed out, the main point is that the absence of an attribution of knowledge does not in any way eliminate the attribution of responsibility.

3.6 Conclusion
We have seen, then, that two of the more serious challenges to the account of justified group belief defended in Chapter 2 can be avoided. Both social knowledge and collective knowledge sever the crucial connection between knowledge and action, and open the door to serious abuses, not only epistemically, but morally and legally as well. Bits of information that are merely accessible to group members, or individual instances of knowledge that are aggregated with no communication, do not amount to group knowledge in any robust sense. Let’s now turn in the remaining chapters to some of the things that groups can do, particularly as they relate to their states of believing and justifiedly believing.

1 See Hutchins (1995). 2 Indeed, some go further and argue that Hutchins’s example, “rather than that of a jury or a board of directors facing a special decision, should serve as a paradigm of collective knowledge” (Klausen 2015, p. 823). 3 While I will here focus on Bird’s radical view, in large part because it is developed in such fine detail, my arguments apply to weaker inflationary non-summativist views of group knowledge as well. 4 Bird writes, “My view is unashamedly veritistic, and is an extension of traditional analytic epistemology and its conviction that knowledge entails truth” (Bird 2010, p. 24). 5 There are questions that may be raised about the notion of accessibility operative here, but they lie beyond the scope of this chapter. 6 Despite the fact that Bird argues throughout his (2010) that accessibility is what is key for social knowing (e.g., “What is important [for social knowing] is that the information should be accessible to those who need it” (2010, p. 48), he weakens this requirement near the end of his paper. He writes: “…the fundamental point is functional integration—the knowledge plays a social role.…[I]t is not the accessibility of the knowledge that is essential to its being social knowledge; rather it is the capacity of
the knowledge to play a social role…in virtue of the structure and organization of the group; accessibility is the principal means by which that is achieved” (2010, p. 48). This is an even weaker account than what is offered in SK. Since the arguments that follow will all show that SK is too weak, I will ignore this slightly modified version. 7 See Pettit (2003). 8 I should make explicit that I will discuss various forms of “proxy agency” in the next two chapters where an agent has authority to perform an action on behalf of a group. Such an agent may or may not be a member of the group itself, but I do not regard these cases as violating GMAP. In all cases of proxy agency, there is some action made by a member of the group that causally contributes to the group’s action, whether this is the granting of authority to the proxy agent, the acceptance of inherited authority, the absence of objections (which is an omission that causally contributes to the group’s action), and so on. 9 The explicit formulation of this quotation refers to justified belief, rather than to knowledge. But immediately following this passage, Fantl and McGrath write, “…it might seem that we are imposing an unduly severe restriction on justification and therefore on knowledge” (Fantl and McGrath 2002, p. 79). It is clear, then, that Fantl and McGrath intend for this condition to apply to both justification and knowledge. See also Fantl and McGrath (2009). 10 Hawthorne and Stanley restrict their conditions to “p-dependent choices” since p may simply be irrelevant to a given action. 11 A contextualist version of this principle is: “A first-person present-tense ascription of ‘know’ with respect to a proposition is true in a context iff that proposition is an appropriate premise for practical reasoning in that context” (Williamson 2005, p. 227). 12 I should note that I argue in Lackey (2007) against a principle similar to KAP, but the weaker condition I defend would support my arguments here against SK just as well. 13 See, for instance, Hawthorne (2004). 14 See DeRose (2002). 15 See, again, DeRose (2002). 16 More precisely, they write: “We have not of course considered all possible knowledge-free accounts of knowledge-level justification. But our discussion gives us reason to think that, at least given fallibilism, there is no such account. If, indeed, there isn’t, it looks like KJ secures the distinctive importance of knowledge” (Fantl and McGrath 2009, p. 182). 17 I will have a lot more to say about the role of spokespersons in groups in later chapters. 18 This is a significantly modified version of a case found in Tollefsen (2007). 19 For the classic discussion of this view, see Davidson (2001). 20 I am assuming that if the UN Population Commission knows that the birth rate of Latinos in the U.S. is on the rise, it believes this, too. 21 For additional arguments supporting this conclusion, approached from the perspective of holding corporations morally responsible, see Velasquez (2003). 22 Indeed, I have argued against the KAP in my (2010) because I think it is too strong, but I nonetheless accept that there is a very tight connection between knowledge and action. 23 For different accounts of group belief, see Chapter 1. 
24 Kallestrup (2016) endorses Bird’s arguments and is aware of the objections raised in this chapter, but responds that his “discussion is premised on the possibility of satisfactory answers.” As should be clear, Kallestrup’s response here is unsatisfactory, as he is acknowledging that there are objections to his view but makes no attempt to address them. 25 Bryce Huebner argues against countenancing as knowledge phenomena like Bird’s notion of social knowing on the following grounds: “If there is no way for the justification of a claim to be located, checked or reproduced, and if no one is really accountable for having made it, it is hard to see why this should count as knowledge at all” (Huebner 2014, p. 214). See also Kukla (2012). By way of response, Deborah Tollefsen argues that “what seems to be motivating [Huebner’s] skepticism is an epistemic internalism that requires for knowledge the ability to give a justification or to have access to reasons. No one person seems to be able to provide a justification or has access to reasons, and so no one, including the group, can be held responsible. But why not take an externalist approach and think of group knowledge as the result of a reliable process? There are various ways to preserve the notion of epistemic responsibility within an externalist theory of knowledge. It may be that large-scale scientific collaboration is, in its current form, not using reliable processes, but this is an empirical question—one that a science of distributed cognition might one day answer” (Tollefsen 2014). (See Klausen (2015) for an endorsement of Tollefsen’s response.) As should be clear, my arguments in this chapter appeal to general features of knowledge, such as its close connection with epistemically permissible action and its incompatibility with defeaters, and so even if Tollefsen’s response works against Huebner’s (and Kukla’s) view, focusing on epistemic externalism will not help here. 26 I am assuming, of course, that I have no other relevant evidence about my friend committing the crime. 27 While I understand “should have known” in terms of normative defeat, see Goldberg (2017) for another view. 28 At the very least, the knowledge can be imputed to the unit of the Chicago Police Department that the three officers in
question belong to, but nothing in what follows turns on this distinction. 29 I take it that not many young children in Illinois wear Frida Kahlo t-shirts. 30 As already mentioned, I will discuss proxy agency in far more detail in the final two chapters.

4 Group Assertion
Groups make assertions all the time. It is nearly a daily occurrence to hear a university announcing a new initiative, a police department denying a charge of brutality, or a company reporting information about its financial value. Yet despite the frequency with which we take it at face value that groups do offer assertions, there is a shocking paucity of philosophical work on this topic. This chapter aims to at least begin the process of filling this gap in the literature.

As we have seen in the previous chapters, there are, broadly speaking, two different approaches to understanding collective phenomena in general. On the one hand, there is a deflationary approach, according to which explaining such phenomena does not require new theoretical resources; rather, we can simply rely on our grasp of the same phenomena at the individual level. This is because the states and acts of groups just are the states and acts of individual members “summed up.” For instance, a deflationary view of group assertion holds that a group asserts a proposition just in case some of the members of the group assert the proposition. Which members are here relevant, and under what circumstances their assertions count as the group’s, need to be fleshed out, but the core idea is clear: group assertion is reducible to the assertions of individual members. Thus, to the extent that we understand individual assertion, we have the central resources for explaining group assertion.

According to an inflationary approach, on the other hand, collective phenomena cannot involve the mere summing up of the same phenomena at the individual level. This is because there can be states and acts of groups where there is no corresponding state or act of a group member. For instance, an inflationary view of group assertion holds that a group can assert a proposition even when not a single member of the group does. In a very important sense, then, collective phenomena are over and above any phenomena at the individual level.

In this chapter, I will develop and defend an inflationary approach to what I will argue is the core kind of group assertion: authority-based. I will show that groups can offer assertions even when no members of the group do, and thus that it is the group itself that is doing the asserting.1 This conclusion is further supported, I will argue, by the fact that it is the group, rather than any individual, that is subject to the norm or norms governing assertion. Finally, I will explain why assertion, unlike the collective phenomena discussed in the previous three chapters, demands to be understood in straightforwardly inflationary terms.

4.1 Two Kinds of Group Assertion
Let’s begin by highlighting some key dimensions of group assertion. Very roughly, there are two ways in which a group might assert that p: first, a group may assert that p through all of its
members reasonably intending to convey that p together in virtue of coordinated individual acts. Let us call this coordinated group assertion. An instance of this sort of group assertion is where the members of a tour group stranded on a desert island work together to form the words “We Need Help” in the sand. All of the members coordinate individual acts of communication that together convey the view of the group as a whole. Another example of coordinated group assertion is where all of the members of a research team collectively draft an article together, such as through Google Docs. If such members work collaboratively to literally compose, say, a single sentence—much like the members of the tour group put together the message in the sand —then this is the assertion of the research team. This should be distinguished from a case where each member of a group writes different parts that together make up a single article, which is how some co-authored or collaborative work is done. Whereas the former is an instance of coordinated group assertion, the latter is merely a collection of individual assertions. While coordinated group assertion is surely important, the far more common kind of group assertion, and the one that I think has not been fully appreciated, is that offered through an authorized spokesperson(s). I am understanding the notion of a spokesperson(s) as subsuming any set of individuals that is distinct from a group as a whole and that speaks on a group’s behalf with the proper authority. A spokesperson might be a member of a group, such as when the chair of a philosophy department has the authority to speak on behalf of the department when hiring decisions are at issue. Alternatively, a spokesperson might not be a member at all, such as when a lawyer is hired to speak on a philosophy department’s behalf where pending litigation is concerned. The point that I wish to emphasize here, however, is that the standard way in which a group asserts is through an authorized spokesperson(s).2 Whenever a group asserts through an authorized set of individuals that is smaller than the group as a whole, let us call this an authority-based group assertion. This kind of group assertion will be the central focus of this chapter. How should we understand a spokesperson’s having the requisite authority to speak on a group’s behalf? This question has been largely absent from work on collective phenomena, but a notable exception is found in Kirk Ludwig’s (2014). Since his is the only extended discussion of this issue in the literature, it is worth considering in some detail.

4.2 Having the Authority to Be a Spokesperson
Ludwig’s central aim is to provide an account of what he calls proxy agency, where “one person or subgroup’s doing something counts as or constitutes or is recognized as (tantamount to) another person or group’s doing something” (Ludwig 2014, p. 76). According to Ludwig, being a spokesperson for a group is a paradigmatic instance of being a proxy agent and is also what John Searle calls a status function.3 “The two core ideas in the concept of a status function are that some object or thing or person has a certain social function, a function in certain social transactions, and that it has that function in virtue of its having acquired a certain status among a relevant group of people by way of their attitude toward it” (Ludwig 2014, p. 87). A standard example used to clarify the concept of a status function is money: twenty dollar bills are simply pieces of paper unless the relevant group members, such as buyers and consumers, agree to their having a particular social status. Similarly, Ludwig argues that individuals making assertions are spokespersons only when their status as such is granted by the relevant members of the
community, which here includes all members of the group and audience in question. In particular, “a status function is a property an object has in virtue of people so regarding it…that enables it to play a certain role in a social transaction. This has to include all who participate in the social transaction. The announcing group’s authorization of an individual plays its role only in the context of an action plan that specifies its function relative to a collective action by a larger group” (Ludwig 2014, p. 89). Let us call Ludwig’s view here the status function model of being a spokesperson. One of the upshots of this approach is that the actions of proxy agents do not themselves constitute the actions of groups. This is because “in one way or another, group action through proxy agency calls upon every member of the group to contribute” (Ludwig 2014, p. 100). Thus, when a spokesperson asserts, this is the culmination of the activities of all of the group members, such as their granting authority to the spokesperson to speak on their behalf or their agreeing to the institutional arrangements that provide such authority. There are, however, at least two central problems with this status function conception of spokespersons. The first is that a group’s asserting does not depend on audience recognition. More precisely, groups can assert through a spokesperson not only when the audience members fail to regard the speaker as playing the role of spokesperson for the group, but also when they reject both her status and her corresponding assertion. Ludwig explicitly denies this possibility. He writes, “…if the group identifies a possible mechanism for group announcement, but doesn’t communicate it to the audience, or if the audience doesn’t find it acceptable, and so refuses to pay attention, then the group fails to achieve its aim. This would be analogous to someone declaring a certain object was to be the royal seal without getting others to go along with it” (Ludwig 2014, p. 93). But suppose, for instance, that the police chief of a nearly universally sexist community has the sole power to designate a spokesperson to represent the department with regard to a highly publicized murder investigation, and for the first time in history, he appoints a woman to this role. Let’s call her Jane. Suppose further that the community finds the appointment of a woman utterly unacceptable and so refuses to listen to, or accept, anything that Jane asserts. Has Jane asserted on behalf of the police department? To my mind, the answer is clearly yes. What is needed for a spokesperson to assert on behalf of a group is that she has the authority to do so, regardless of whether this is recognized by the audience members.4 If, for instance, it is written into the police department’s policies and procedures that the chief has the sole power to appoint the spokesperson, and he so appoints Jane, then she has the authority to speak for the department even if the sexist community rejects that she is playing this role and ignores everything that she says. This verdict is paralleled in the individual case: those who are ignored do not fail to be asserters or fail to assert; instead, they are the victims of testimonial injustice5 or their assertion fails to achieve its desired aim of uptake.6 If, for instance, a woman asserts that she does not want to have sex with her partner, she is asserting this, even if her assertion is ignored or refused. 
Indeed, even if her partner is such that he does not accept her status as an asserter of refusals of sex in general, she is still an asserter. This is because she has the authority to refuse unwanted sexual advances even when her authority is ignored or rebuffed. In this way, I disagree with Ludwig’s claim that a spokesperson asserting on behalf of a group without audience acceptance “would be analogous to someone declaring a certain object was to be the royal seal without getting others to go along with it.” It would instead be analogous to an individual being the victim of testimonial injustice.7 Consider, also, some consequences of requiring audience recognition or acceptance in order
for a spokesperson to assert on behalf of a group. Women would systematically be denied the ability to serve as a spokesperson in sexist communities, Black people would be unable to do so in racist societies, and so on. Moreover, suppose that Jane asserts on behalf of the police department at T1 when the community is sexist, but then years later at T2 the community has changed and is now accepting of her role as the group’s spokesperson. Her statement would go from failing to be the group’s assertion at T1 to being the group’s at T2 and thus it would be incapable of functioning as evidence of the group’s view at T1 but not at T2. These conclusions all seem problematic, as they conflate one’s asserting with one’s asserting being properly appreciated. The second problem with Ludwig’s status function view is that whether a spokesperson asserts on behalf of a group does not require that the members of the group accept or recognize the authority of the spokesperson; the spokesperson simply has to have the authority. Suppose that all of the fellow police officers of the sexist department above refuse to accept the policy that permits a woman to be the authorized spokesperson for the department. Thus, even though Jane is appointed as the spokesperson for the department, her statement would not be the group’s assertion on Ludwig’s view because all members of the group need to accept the institutional arrangements that provide such authority. But if the policies and procedures of the police department do not require agreement or consensus in order for authority to be given to Jane to serve as the spokesperson, then the members being disgruntled or unhappy doesn’t make it the case that she is not asserting on behalf of the group. By way of response to this sort of worry, Ludwig argues that simply by virtue of agreeing to be a member of a group, one thereby accepts the policies and procedures of the group. He writes: “since meeting the membership condition requires endorsing the division of roles and responsibilities (that is partly what defines the role of membership), anyone who joins such a group explicitly endorses its arrangements,8 in accepting membership, and in that act then contributes constitutively to the authorization of its various roles” (Ludwig 2014, p. 97). But there is a dilemma facing this response: either group membership does not require the acceptance of the policies and procedures of the group or the notion of acceptance operative here is vacuous. To see this, consider, first, a sabotaging member of a group: suppose that a police officer in the sexist police department becomes convinced of the moral wrongness of the sexism of his group and (i) rejects all of the sexist policies and procedures of the department, and (ii) actively works to undermine them. Surely this officer is still a member of the police department in question, yet it is not at all clear how he accepts the group’s policies and procedures. Acceptance is typically understood as being such that it would manifest itself in one’s actions. So, for example, one might be said to accept, even if one does not believe, that smoking is safe if one would assert that smoking is safe, act as if smoking is safe, defend the safety of smoking, and so on.9 This is because acceptance often results from a consideration of one’s goals, such as the financial aim of making smoking appealing. But the sabotaging member’s actions support his rejecting rather than accepting the institutional arrangements of the police department. 
Of course, Ludwig might say that the very joining of the police department by the sabotaging member brings with it an acceptance of its policies and procedures, even if this wouldn’t be manifested in any of his actions. But the sense of acceptance that must be operative here is so thin that it is vacuous. To make this even clearer, consider a sabotaging joiner of a group: suppose that a new police recruit joins the department precisely because he rejects all of the sexist policies and procedures of the department and wishes to actively work to undermine them. To my mind, sabotaging joiner is clearly a member of the police department—he has the same
authority and benefits as all of the other members of the department, receives a paycheck from the department, and so on. There is, however, absolutely no sense in which he accepts the policies and procedures of the police department. Indeed, he joins the group with the sole aim of undermining the institutional structure because of its sexist nature. To say that all of this is still compatible with the sabotaging joiner accepting this institutional structure is to render the notion of acceptance here vacuous. Thus, I reject the status function model of spokespersons—spokespersons are not like money or royal seals, which require agreement or recognition by the members of the social transactions in order to be what they are. Instead, my view might be called pluralist: there are many mechanisms for securing the relevant kind of authority needed for being a spokesperson. One of the more common ways is where there is agreement, and authority is acquired through members of a group explicitly or implicitly granting it to a spokesperson. For instance, a philosophy department might vote to elect the chair as its spokesperson on matters related to job searches, or the members might grant this authority when they accept employment at an institution where this is part of the chair’s duties. Or members of a group might sign a legal contract that grants authority to a lawyer to speak on their behalf on matters related to the litigation in question. But the granting of authority by the members of a group is not the only way in which it might be acquired. Another way is through tradition or inheritance, such as when a member of a monarchy has the authority to speak on behalf of his or her nation on, say, matters of national security. Even if members of the nation explicitly reject the monarch’s authority and actively seek to distance themselves from the expressed views, the authority might exist nonetheless. Moreover, unlike heads of state who are voted into office, citizens of a monarchy might have no say in who is speaking on their behalf, and if they acquired their citizenship through birthright, there might be no sense in which they ever accepted the relevant institutional structure.10 Still another way in which such authority might be acquired is through non-objection. Suppose that a collection of protesters informally gathers outside the dean’s office at a university to object to the recent firing of a tenured faculty member. When the media shows up on the first day, suppose that one of the protesters—call her Mary—states, “We object to the faculty member’s employment being terminated without due process.” On this first day, Mary’s statement is an instance of an individual offering her own view of what a collective entity believes. In other words, the assertion is Mary’s, not the group’s. But suppose that the protesters continue to meet and no one objects to Mary reporting their views to the media. At some point, Mary acquires the authority to speak on behalf of the group through the absence of objections from the members, thus rendering her statements those of the protesters.11 It may also be worth leaving open the possibility that having the authority to be a spokesperson can be moral or fundamental. Just as I have the authority to refuse sexual advances, regardless of whether this authority has ever been recognized or appreciated, perhaps parents have the authority to assert on behalf of their very young children, even if they live in a society where this has always been denied. 
These are simply some examples of how authority can be acquired, but there are certainly others, such as through seizure or coercion. The central point to note here, though, is that the having of authority to be a spokesperson need not be granted or accepted by either the members of the group or the audience in question, and it is the having of authority to speak on a group’s behalf that in large part determines whether the assertion in question is an individual’s or a group’s.12

In addition to being pluralist, the conception of authority operative here is de facto or descriptive rather than normative, and thus the authority in question need not be morally or politically legitimate. Consider a case where the authority in question is acquired in some sense illegitimately: suppose, for instance, that in the case of the protesters discussed above, the members do not object to one of them speaking on their behalf because they are oppressed or bullied by him. Or suppose that some revolutionaries seize authority from a political figure to speak on behalf of a subset of the citizenry. Is the relevant group asserting in these sorts of cases? The short answer to this question is yes. The mere fact that authority is acquired in, say, a morally illegitimate way does not mean that the person in question doesn’t have it. As I said above, the authority at issue here is de facto or descriptive authority, not normative. A group of rebels might seize authority from the president of a country so as to oppress the members of an ethnic minority. Even if this seizure of authority is morally illegitimate, the rebels might still come to have the authority to speak on behalf of the country. The same is true in the individual case: a highly aggressive business executive might become the president of a corporation through immoral dealings, but this doesn’t prevent him from having the authority to serve as the corporation’s spokesperson. Or suppose that a woman feels so dominated by her husband that she never objects when he speaks on her behalf. Through this systematic non-objection, the husband might come to have the authority to be his wife’s spokesperson on a range of issues, even if the process whereby this is achieved is morally illegitimate. Of course, there are limits to this, which make clear the difference between authority and power. Some psychological trauma may be so severe that the absence of objection is due to the inability to object, and so it might not be possible to acquire authority through non-objection in such cases, despite having power. But the central point that I want to emphasize here is that illegitimately acquired authority can be authority nonetheless. There are, however, a couple of objections about this notion of authority that should be considered. First, suppose that a king has been taken to have the authority to speak on behalf of the citizenry without anyone realizing that in fact the laws of the monarchy grant this authority to the queen. Who has been the spokesperson for the nation, the king or the queen? On my view, this would simply be described as a conflict of authority. The king has authority to speak on behalf of the citizens through non-objection, and the queen has authority to speak on their behalf through the law. And such a conflict would have to be resolved through, for example, negotiation, in order to determine who has the final authority. But this is unique neither to my view nor to group assertion. Suppose that unbeknownst to us, my husband and I each hire a different lawyer to represent me in a suit. The first lawyer says on my behalf that I want to settle while the second one says on my behalf that I don’t. Which one is my actual spokesperson? Again, there is a conflict of authority that needs to be resolved here before it can be determined what my assertion is. Second, suppose that a king has the legal authority to speak on behalf of his citizens, but there is widespread discontent in his nation about the existence of the monarchy. 
No one acknowledges his authority and no one takes him to be speaking for the nation. Is he still asserting on their behalf?13 If the king is still regarded as the king, and with this role comes, say, the legal authority to speak on behalf of the citizens, then, yes, the king continues to assert on behalf of his nation despite their discontent. It is, however, possible that the widespread discontent among the citizens brings about social changes that do undermine the king’s having this authority. Perhaps
there is so much opposition that it becomes an open question whether the nation still has a king, or whether one of the king’s roles is to be the spokesperson for the people. In these cases, it would be indeterminate whether the king is asserting for the nation. If there is radical social change and the monarchy is dismantled or the king is stripped of much of his authority, then he would no longer be asserting on behalf of the nation. It might also be the case that the citizens forge new groups—such as a revolutionary or opposition party. While the king might still reign over the nation and thereby speak on its behalf, there might be different spokespersons for these opposition groups. We have seen, then, that having the authority to be a spokesperson can be grounded in a multitude of features, where agreement or recognition by the members of the social transactions is merely one such option. I now want to turn to another aspect of being a spokesperson that is worth highlighting.

4.3 The Autonomy of Spokespersons
In addition to it being the case that the standard way in which a group asserts is through an authorized spokesperson(s), another central point that I wish to emphasize is that most spokespersons have a certain degree of autonomy or independence. A spokesperson, at the very least, is not merely a parrot or a mouthpiece with a script, repeating verbatim what she has been told by the members of the group. But even more strongly, a spokesperson often asserts on behalf of a group without consulting the group or its members regarding the specific content of the proffered statement. This is at least in part because spokespersons are frequently required to speak for their clients “on the spot,” to respond to new questions and concerns by extrapolating from the information that they already have. Moreover, spokespersons sometimes have expertise that goes beyond what the represented group and its members have. A lawyer, for instance, need not consult with her client each time she speaks on its behalf since at least some of what she states concerns legal matters over which her client might be wholly ignorant. Combining the central features of authority-based assertion thus far highlighted—namely, that the standard way in which a group asserts is through an authorized spokesperson(s), and that most spokespersons have a certain degree of autonomy—results in the possibility that a group can assert a proposition about which it and its individual members are wholly unaware. Here is an example:

AUTONOMOUS SPOKESPERSON: Philip
Morris hires spokesperson S—who is not a member of the group—to represent the company’s views to the public.14 Philip Morris explicitly tells S that the company’s official view is that smoking is safe, no matter what. At a recent press conference, S, in her role as the official spokesperson for Philip Morris, is asked whether smoking causes disease X. No member of Philip Morris has ever heard of disease X, nor do they have any beliefs about its safety, but S responds on Philip Morris’s behalf that smoking does not cause disease X. In AUTONOMOUS SPOKESPERSON, Philip Morris asserts that smoking does not cause disease X while no member of the company has ever even heard of disease X. This is because S has the authority to autonomously speak on behalf of Philip Morris where the safety of smoking is concerned, even when this goes beyond matters that S has explicitly discussed with Philip Morris’s members. Any adequate account of group assertion, then, needs to accommodate this distinctive
feature of the way that groups assert.15

4.4 Coordinated and Authority-Based Group Assertion
With these considerations in mind, I propose the following accounts of coordinated group assertion (CGA) and authority-based group assertion (ABGA), respectively:

CGA: A group G asserts that p in the coordinated way if and only if the members of G coordinate individual acts, a1,…an, so that they all reasonably intend to convey that p together in virtue of these acts.

ABGA: A group G asserts that p in the authority-based way if and only if that p belongs to a domain d, and a spokesperson(s) S (i) reasonably intends to convey the information that p in virtue of the communicable content of an individual act (or individual acts) of communication,16 (ii) has the authority to convey the information in d, and (iii) acts in this way in virtue of S’s authority as a representative of G.17

According to the CGA, coordinated group assertion simply involves individual acts—such as placing rocks in the sand or words in a document—that are coordinated, and so there is not much to add to what has been said about individual acts. I will, therefore, spend the remainder of the chapter focusing on authority-based assertion. And here there are a number of features to note.

First, condition (i) of the ABGA is modeled on the account of individual testimony that I have developed elsewhere.18 In particular, the focus is on acts of communication so as to allow for assertions that do not involve statements, such as nods, pointing, and other gestures. Moreover, to avoid countenancing as assertions acts of communication where the intention is to convey the information that p in virtue of features about the assertion—such as my intending to convey the information that I have a soprano voice by asserting this in a soprano voice19—it is required that the speakers reasonably intend to convey the information that p at least in part in virtue of the act’s communicable content. Still further, the intention in question needs to be a reasonable one. A group does not assert that its name is Philip Morris—even if it intends to convey this information—through winking at the public. This is because, in the absence of prior agreement that a certain sequence of winks will be understood as conveying Philip Morris’s name, this intention is not a reasonable one.

Second, according to the ABGA, a group can assert that p even when not a single member of the group either intends to convey the information that p or asserts that p, thereby permitting groups to have autonomous spokespersons who assert on their behalf “on the spot.” At the same time, the ABGA does not allow such spokespersons to assert on a group’s behalf on any topic whatsoever. Both of these results follow from condition (ii), which requires that a spokesperson(s) have the authority to convey some or all of the propositions in a domain of which that p is a member. So, for instance, a spokesperson might have the authority to speak on Philip Morris’s behalf with respect to matters that concern the safety of smoking, but not about questions concerning the company’s finances. This enables my view to deliver the correct verdict that Philip Morris is asserting that smoking does not cause disease X in AUTONOMOUS SPOKESPERSON.

Moreover, notice that condition (iii) of the ABGA requires that S assert on G’s behalf in virtue of S’s authority as a representative of G. For instance, suppose that Philip Morris’s spokesperson tells his wife while on vacation that the company disregarded valid scientific evidence about the dangers of smoking. In such a case, he might be personally asserting to his wife about this fact, but he is not doing so on behalf of Philip Morris. This is because even if he has the authority to convey this information on behalf of Philip Morris, he is not doing so in virtue of this authority; instead, he is doing so in virtue of his role as a husband to his spouse. Condition (iii) thus rules out such individual assertions from counting as a group assertion, even if one of the members in fact has the authority to speak on behalf of the group. In order to better understand both conditions (ii) and (iii) of the ABGA, I would like to draw an important distinction between what we might call a rogue spokesperson and a bad spokesperson. On the one hand, a rogue spokesperson is one who asserts that p on behalf of G either without having the authority to do so or without doing so in virtue of this authority. There are at least two different ways in which a spokesperson can be rogue. First, S might assert that p on behalf of G, where that p is not part of the domain in which S has authority to represent G. For instance, Philip Morris’s spokesperson might assert that the company’s favorite movie is Citizen Kane or that the company does not support gay marriage, despite having the authority only to speak on behalf of the company when the safety of smoking is at issue. Here, the content of the statement in question lies outside of the scope of S’s authority in speaking on behalf of G and thus condition (ii) of the ABGA fails to be satisfied. Second, S might assert that p on behalf of G, where S’s asserting that p does not aim to reflect the view G intends for S to assert on its behalf. For instance, Philip Morris might have a bumbling spokesperson who aims to be a whistleblower and expose the company’s deceptive practices, but because of her bumbling ways, ends up inadvertently asserting precisely what G wishes.20 In such a case, even though the spokesperson might in fact assert that smoking is safe, and even though this might accurately represent what Philip Morris wishes S to report on its behalf, S is speaking for herself as a whistleblower when she makes this assertion, not for the company. Given this, while S might have the authority to speak on behalf of Philip Morris when the safety of smoking is concerned, S does not assert that smoking is safe in virtue of her authority as a representative of G, thereby failing to satisfy condition (iii) of the ABGA. Thus, when a rogue spokesperson, S, asserts that p on behalf of G, the assertion in question is S’s, not G’s, either because S does not have the authority to assert that p on behalf of G or because she fails to do so in virtue of her authority as a representative of G. On the other hand, a bad spokesperson is one who asserts that p on behalf of G and has the authority to do so, but nonetheless fails through incompetence or negligence to say what G intends for S to assert on its behalf. One way this might happen is if the spokesperson is simply very bad at drawing the relevant inferences that follow from G’s other beliefs. 
For instance, when S is asked whether smoking causes disease X, S might answer affirmatively because S fails to realize that Philip Morris intends for S to respond negatively to this question on its behalf, even though this is the obvious inference from all of the company’s other views on the matter. Another way a spokesperson might be bad is through failing to pay close enough attention to the details of G’s views. For instance, when S is asked whether Philip Morris agrees with scientists that smoking causes emphysema, S might answer affirmatively because S failed to listen carefully to the discussions at the company’s board meetings. In the former case, S is an incompetent spokesperson and in the latter case, S is a negligent spokesperson, but in both cases
S is a bad spokesperson who is asserting on behalf of Philip Morris. This is because S not only has the authority to assert that p on behalf of G, but S also does so in virtue of her authority as a representative of G—she just does so badly. Thus, when a bad spokesperson, S, asserts that p on behalf of G, the assertion in question is G’s, not S’s. The difference between a rogue and a bad spokesperson might be made more vivid by considering the likely consequences of their respective statements. While a rogue’s assertion might be disavowed or otherwise denied by the group in question and the spokesperson might be fired, a bad spokesperson might be forced to retract the assertion on behalf of the group and be reprimanded or trained. A rough analogy on the individual side might be the difference between an unfortunate statement offered while under the control of hypnosis versus one made while drunk: in the former case, one didn’t assert anything at all, and thus can completely disavow it, while in the latter case, one did offer an assertion and thus needs to retract it, and perhaps apologize, the next morning. One might worry here that my view has the unattractive consequence that a group asserts that p even when every member of the group protests that the spokesperson in question made a serious mistake in asserting that p on their behalf. While this is indeed true of my view when the spokesperson is merely bad, rather than rogue, I regard this as the correct result. Consider a spokesperson for an individual: if I hire a sloppy or mediocre attorney to defend me in a lawsuit, I might end up asserting through the attorney that, for instance, I’ll accept a settlement offer, despite this not being what I ultimately wanted. The same is true of action more broadly—if I grant authority to a financial advisor or a stockbroker to make financial transactions on my behalf, I might end up selling one of my stocks despite my vehement opposition to this after the fact. This is why we should choose our spokespersons, and our representatives more broadly, very wisely. Notice that on this view, a rogue spokesperson and a bad spokesperson might offer assertions with the very same content in identical circumstances, yet one might be S’s assertion while the other is G’s. S1 might assert that smoking causes emphysema because she aims to be a whistleblower while S2 might assert that smoking causes emphysema because she fails to draw obvious inferences from Philip Morris’s other views on the matter. When S1 and S2 both offer their assertions on behalf of the company in response to the same question at a single press conference, S1’s assertion is her own while S2’s is Philip Morris’s. Finally, it should be noted that it is precisely conditions such as (ii) and (iii) that distinguish an individual asserting about the beliefs of a group from a group asserting. Suppose, for instance, that a member of Philip Morris, who has no authority to speak on its behalf, asserts that the group’s view is that smoking is safe. Even if this member has access to what the group’s view is and purports to be speaking on its behalf, this is not group assertion; instead, it is an individual asserting about the group’s view. According to the ABGA, this is because the member is not a spokesperson that has the authority to convey information about the safety of smoking on behalf of Philip Morris. 
Of course, as mentioned earlier, in some cases, a member might try to offer a group assertion and, to the extent that she succeeds, she might in part create her own authority as the group’s spokesperson. But until this happens, she is speaking for herself, not the group.

4.5 Two Other Accounts

There are two other views of group assertion in the literature.21 The first is Miranda Fricker's variant of a joint acceptance account, according to which we should "…construe a group testifier as constituted, at least in part, by way of a joint commitment to trustworthiness as to whether p (or whatever range of p-like questions might delineate the body's expertise, formal remit, or informal range of responsibility)" (Fricker 2012, pp. 271–2, original emphasis). Fricker here takes the joint commitment to trustworthiness to be constitutive of a group being a testifier, and thus it seems to follow that no group could offer assertions in the absence of such a commitment. But this is a puzzling requirement, for it seems to confuse being a testifier simpliciter with being an epistemically good testifier. Surely an individual can testify about all sorts of matters without any commitment at all to trustworthiness; liars and those engaged in other forms of deception do precisely this. What we would say about them is that they are not epistemically good testifiers, but that they are testifiers nonetheless. The same is true of groups. Groups whose members do not jointly commit to trustworthiness—such as certain deceptive corporations and governments—are testifiers, even though they are not epistemically reliable ones. Indeed, if we adopt Fricker's account of group testimony, not only would we be hard-pressed to account for the lies of groups that generally eschew trustworthiness, we would also thereby have difficulty holding them responsible for such deception.

Fricker appeals to Edward Craig's distinction in his (1990) between being a testifier or informant and being a source of information and argues that she is offering an account of only the former. Footprints in the sand, for instance, might be a source of information, as are photographs, but neither is a testifier. Persons can function this way, too—I might infer that you are nervous from the hesitancy with which you deliver your testimony, even without your asserting that you are nervous. But even granting such a distinction, surely not every epistemically bad testifier turns out to be a mere source of information. When Philip Morris says that smoking is safe, the corporation is a group testifier if anything is, despite the fact that it is clearly an epistemically bad one here. So, my central criticism is unaffected by Craig's distinction.

Later in her paper, Fricker goes on to say, "At any rate, my main claim can be the weaker one: that any group partly constituted by way of a joint commitment to trustworthiness (regarding some relevant range of questions) is pre-eminently suited to enter into the second-personal relations of trust that characterize testimony" (Fricker 2012, p. 272). Even this weaker notion, however, is problematic, since, again, second-personal relations of trust clearly do not characterize testimony simpliciter—at best, they characterize epistemically good testimony.

The second account of group testimony, or assertion, in the literature is offered by Deborah Tollefsen, who argues as follows:

A group G testifies that p by making an act of communication a if and only if:
1. (In part) in virtue of a's communicable content G reasonably intends to convey the information that p.
2. The information that p is conveyed by either (i) a spokesperson S or (ii) a written document.
3. If (i), G does not object to S's uttering p on its behalf and if G intends for any specific individual(s) to utter p, it intends for S to utter p and S believes that he or she knows this.
4. If (i), S utters p for the reasons in 3.
5. If (ii), G does not object to the way in which p is conveyed in writing.
6. G conveys the information that p in the right social and normative context.
7. In conveying the information that p in the right social and normative context, G is taken to have given its assurance that p is true. (Tollefsen 2009, pp. 12–13)

There are issues to be raised with every condition of this account. Let's begin with (1), which Tollefsen adapts from the account of individual testimony found in Lackey (2006 and 2008). The problem with applying it here is that it is not clear that it can accommodate the autonomy of spokespersons and, therewith, the kind of group testimony found in AUTONOMOUS SPOKESPERSON. In particular, while Philip Morris testifies that smoking does not cause disease X, not a single member of the group reasonably intends to convey this information in virtue of S's making the act of communication a. So, unless a group can intend to do something that no individual member intends to do, (1) is a problem for group testimony.22

Regarding condition (2), there are three problems. First, a group's conveying the information that p through either a spokesperson or a written document is an unnecessary disjunction, as a spokesperson can clearly communicate on behalf of a group in both verbal and written form. So (i) subsumes (ii). Second, there can be more than one spokesperson who conveys the information that p. A subgroup of individuals, for instance, might be called upon to communicate a company's view, and thus all of these members would function as relevant spokespersons. (i) should, therefore, be modified accordingly. Finally, (2) lacks the resources for accommodating instances of coordinated group testimony.

Turning to condition (3), worries arise regarding both parts. First, it necessitates that G does not object to S's uttering p on its behalf. As I emphasized in the text, however, spokespersons often have some autonomy with respect to speaking on their clients' behalf, and so they do not present their statements to the group for prior approval before they are offered. (3), then, cannot require for every instance of group testimony given via a spokesperson that the group does not object to S's uttering that p on its behalf prior to the utterance. But nor can it be necessary that the group does not object to the spokesperson's uttering that p on its behalf during or after the utterance. If it did, it would make whether a group in fact testified depend on something that could possibly come years after the statement was offered, since a group could object to a spokesperson's testimony long after it was offered. Even more importantly, this would permit groups to deny having testified when clearly they did. Suppose, for instance, that in AUTONOMOUS SPOKESPERSON, Philip Morris attempts to avoid legal and moral responsibility for smoking-related health problems by denying having testified to its safety simply because (3) wasn't satisfied. Not only does it seem that the company testified despite the failure of this condition, but groups also shouldn't be able to get off the normative hook so easily. Since a spokesperson can clearly convey information on behalf of a group in writing, similar problems apply to condition (5).

The second part of (3) requires that, if G intends for any specific individual(s) to utter p, it intends for S to utter p and S believes that he or she knows this. But consider this: suppose that Philip Morris has two official spokespersons, Maria for Mondays, Wednesdays, and Fridays and Terrence for Tuesdays and Thursdays.
Suppose, further, that the group, knowing that it is Monday, intends for Maria to state on its behalf that smoking is safe, but it turns out that she called in sick and was replaced at the last minute by Terrence. When he reports to the public on behalf of Philip Morris that smoking is safe, this is no less the company’s testimony than if Maria had done so, despite the fact that the group doesn’t intend for Terrence to do so. There is also a problem with (4), which requires that if the information that p is conveyed via S, S utters that p for the reasons in (3). Recall that the reasons in (3) are that the group doesn’t
object to S’s uttering that p on its behalf and intends for S to utter that p. Once again, however, this condition is too strong and ignores the different roles that are often given to spokespersons. For instance, a spokesperson might be hired specifically to piece together the information gathered at a group’s meeting into a legally sound view and then report it to the public. In such a case, the spokesperson utters that p, not because the group intends for S to do so, but because the group intends for S to use its autonomy and legal expertise to present the best version of the group’s view from a legal point of view. It is, however, still the group’s testimony. Condition (6), which requires that G convey the information that p in the right social and normative context, is included to rule out as group testimony statements such as those offered by Philip Morris’s spokesperson to his wife while on vacation. But not only is a clear, substantive characterization of which social and normative contexts are “right” difficult to come by, we have seen that the same result can be achieved by requiring that the spokesperson in question have the authority to speak on behalf of the group being represented. Finally, there are problems with condition (7), which requires that G be taken to have given its assurance that p is true. I have elsewhere argued23 extensively against what is known as the assurance view of individual testimony, and my objections apply straightforwardly to the group case. So, I will briefly mention only the following: such a condition makes the act of testifying depend on the recipient’s reception of it, but this has counterintuitive results. Suppose, for instance, that a corporation is called to testify against a partner company and the jurors are skeptical of the spokesperson’s trustworthiness because of the conflict of interest. In such a case, the jurors might not take the company to have given its assurance that p is true, but surely it has testified.24 There are, therefore, significant problems facing both of the existing accounts of group assertion, none of which apply to the view defended in this chapter.

4.6 Group Assertion Is Not Reducible to Individual Assertion

If what I have argued is correct, a spokesperson asserting on behalf of a group in the right sort of way can be constitutive of group assertion, and thus this phenomenon must be understood in inflationary terms since a group may assert that p even when no member of the group asserts that p. In AUTONOMOUS SPOKESPERSON, for example, Philip Morris asserts that smoking does not cause disease X despite the fact that no member of Philip Morris asserts this or is even aware that there is such a disease as X. Given this, the only one who could be doing the asserting here is the group itself.

But one might wonder how substantive this conclusion is. For even though group assertion is not reducible to the assertion(s) of individual members of the group, isn't it still reducible to individual assertion(s)? In particular, isn't the group's assertion in AUTONOMOUS SPOKESPERSON reducible to the spokesperson's assertion? If so, the mere fact that the spokesperson is not a member of the group doesn't seem to reveal anything deeply important about the nature of group assertion. Indeed, the extent to which group assertion demands an inflationary treatment seems to be a minor quibble regarding whether the reductive base needs to be made up of group members or not. The heart of the view, however, seems clearly deflationary.

This understanding of the view of group assertion that I've defended in this chapter is, I think, deeply mistaken. In a nutshell, my response to this worry is this: when spokespersons are
speaking on behalf of groups that they represent, they are not themselves asserting anything at all, a conclusion that is clearly supported by noticing that what they say does not have any of the paradigmatic features of assertion.

Let us begin with what is arguably the most decisive consideration here: assertion is governed by an epistemic norm, but what spokespersons say is not. For instance, it has been widely argued that knowledge is the norm of assertion—that one should assert that p if and only if one knows that p.25 While such a view is not immune to objections, most of the critics simply replace it with a weaker epistemic norm, such as justified belief, or reasonable to believe, and so on.26 But now notice: there is no sense whatsoever in which spokespersons are governed by such norms. Consider a chair serving as the spokesperson for her department in a conversation with the administration about future hiring plans. Under no circumstances should she assert that p to the administration only if she knows, or justifiedly believes, or has reason to believe that p. All of these norms focus on belief, either directly or indirectly, and whether the chair believes something is entirely irrelevant to the norms she should follow as a spokesperson. More precisely, the dominant norm governing a spokesperson is to assert what best reflects the view of the group she is representing. Because a spokesperson can be doing everything that she ought to even if reporting on behalf of a group a proposition that she personally has absolutely no basis for, and indeed evidence against, believing, there is simply no epistemic norm of assertion governing spokespersons. Since someone is clearly asserting something in cases such as AUTONOMOUS SPOKESPERSON, and this asserter is subject to the norm(s) governing assertion, the natural conclusion to draw is that it is the group itself. Thus, we have a group asserting that p when no individual at all is asserting that p.27 Such a view is nowhere in the ballpark of a deflationary view.

This conclusion is further supported by considering other features of assertion. In addition to being governed by an epistemic norm, Sanford Goldberg highlights the following in his recent book on assertion:28

1. Conveyed Self-Representation: "Many writers describe assertions as involving the speaker's representing herself as knowing, or at least having evidence for, what she has asserted." (Goldberg 2015, p. 7)
2. Sincerity: "Another feature of assertion which, though not unrelated to assertion's epistemic significance, nevertheless deserves to be called out separately, has to do with assertion's relation to belief. Simply put, when they are performed sincerely, assertions express or manifest one's beliefs." (Goldberg 2015, p. 8)
3. Entitlements and responsibilities: "Suppose that you believe something on the basis of Jones' say-so, and then are queried regarding the grounds of your belief…we might say that Jones' assertion authorized or entitled you to do so; and when you do 'pass the buck' to her in this way, Jones then has the responsibility to address the challenge herself. It would thus appear that in asserting that p, the speaker authorizes the hearer to defer any legitimate challenge to the truth of the claim to her, and generates the responsibility for taking up that challenge." (Goldberg 2015, p. 8)

We see, again, that while these features are true of the group's assertion, they are not true of the spokesperson's report.
Beginning with 1, when a spokesperson reports that p on behalf of a group, there is no sense in which she represents herself as knowing, or having evidence for
believing, that p. Instead, it is the party she is speaking for that is being represented as having the appropriate epistemic relationship to that p. This is related to 2, since even if one wishes to reject that there is an epistemic norm governing assertion, it surely is true that when assertions are performed sincerely, they generally express or manifest the asserter's beliefs. Again, however, this is not at all the case with respect to the spokesperson's reports, where sincerity would be wildly out of place. Indeed, a spokesperson who aimed to be sincere, and report what she herself believed, would be subject to significant criticism and censure by the party she is representing. Finally, if it is appropriate to "pass the buck" to anyone in cases of authority-based assertion, surely it should be passed to the group rather than to the spokesperson. The group is the one that espouses the view in question, and the group is the one that bears the responsibility for the assertion—not the group's messenger. Indeed, it may even be the case that the spokesperson knows very little about why the group holds the view that it does, and would rightly need to check with the party she is representing before responding to any objections. It should thus be clear that spokespersons are not asserting anything in cases of authority-based group assertion; they are simply the means by which groups offer assertions.

One issue that I would like to address is this: I argued in the earlier chapters of this book on behalf of views of collective phenomena, such as group belief and justified group belief, that are more deflationary29 than is my account here of group assertion.30 Yet in this chapter I am defending an account of group assertion that is robustly inflationary. Is there an explanation of this asymmetry that is not ad hoc? Yes, and here it is: when, and only when, it is possible to grant authority to another agent or agent-like entity to do something on one's behalf does it follow that inflationism is true. So, for instance, I can give authority to my lawyer to speak on my behalf, to lie on my behalf, to bullshit on my behalf, and to act on my behalf. In all of these cases, then, it will be possible for my actions to be constituted by the actions of another, even when I myself am entirely ignorant of the matter. Thus, accounts of all of these phenomena will be inflationary in nature. In contrast, I cannot grant authority to another to believe on my behalf, or to desire on my behalf, or to justifiedly believe on my behalf, or to know on my behalf. To be sure, I can defer to others in such cases. When asked, "Where do you want to have dinner?" I can respond by saying, "Wherever my daughter wants to go." What this means is that I'm giving authority to my daughter to decide where we're going to eat our next meal. I'm deferring to her desires, but nothing she does is constitutive of my mental states.

It may now be asked, however, whether there is tension in my overall view. For, on the one hand, I am saying that states such as group belief or group knowledge require member belief or member knowledge, and yet, on the other hand, I am also saying that group assertion does not require that even a single member of the group in question be aware of the proposition asserted. Thus, a group can assert that p in the complete absence of belief or knowledge that p.
But then isn’t there a conflict between my appealing to, say, an epistemic norm of assertion—which has some connection to belief—to motivate the extent to which I’m an inflationary theorist, while also denying that group belief is necessary for group assertion? By way of response, let me first emphasize that I am here providing an account of group assertion, not epistemically permissible assertion. Given this, many of my examples, such as AUTONOMOUS SPOKESPERSON, are cases of a group offering an assertion, though not necessarily in an epistemically appropriate fashion. For instance, while it is true that Philip Morris asserts that smoking does not cause disease X, it is clearly not epistemically proper to do so, as there is no basis at all for believing such a claim.

Nevertheless, given my account of authority-based group assertion, surely it is plausible to think that there will be some group assertions on my view that are epistemically permissible despite the fact that the group itself fails to possess the knowledge in question. But I don’t regard this as problematic, as I have elsewhere argued extensively that assertion at the individual level can be epistemically proper in the absence of knowledge, and even in the absence of belief. This leads to my embracing what I call a Reasonable to Believe Norm of assertion.31 So it would simply follow that group assertion is like individual assertion in requiring neither knowledge nor belief; indeed, the considerations in this paper can be viewed as providing even further arguments against views such as the Knowledge Norm of assertion. Of course, given the combination of theses I hold, it would have to be shown either (i) that groups properly assert that p when, and only when, it is reasonable for groups to believe that p, or (ii) that the norms governing assertion differ at the individual and at the group levels. I favor option (i), but arguing in favor of it lies beyond the scope of this chapter. The point that I wish to emphasize here is that my inflationary view of group assertion is compatible with my appealing to general features of assertion to support such a view. It is also worth noting that my view of the nature of group assertion has significant consequences for the epistemology of group testimony. In particular, if, as I have argued, the statement in AUTONOMOUS SPOKESPERSON is an instance of a group asserting, then a widely accepted view in the epistemology of individual testimony—the transmission view32—cannot be true of group testimony. According to the transmission view, knowledge is transmitted via testimony and thus if H knows that p on the basis of S’s testimony that p, then S must know that p. But now consider a modified version of AUTONOMOUS SPOKESPERSON: suppose that it is true that smoking does not cause disease X and the public comes to learn this on the basis of Philip Morris’s spokesperson stating that this is so. In such a case, the public knows that smoking does not cause disease X on the basis of Philip Morris’s testimony, but there is no sense whatsoever in which Philip Morris knows that smoking does not cause disease X, as Philip Morris doesn’t even have the concept of disease X. So even if the transmission view is true of individual testimony, it cannot apply at the level of groups.

4.7 Conclusion

In this chapter, I've provided the framework for an account of group assertion. On my view, there are two kinds of group assertion, coordinated and authority-based, with authority-based group assertion being the core notion. I've argued against a deflationary view, according to which a group's asserting is understood in terms of individual assertions, by showing that a group can assert a proposition even when no individual does. I've also argued on behalf of an inflationary view, according to which it is the group itself that asserts, a conclusion supported by the fact that paradigmatic features of assertion apply only at the level of the group. A central virtue of my account is that it appreciates the important relationship that exists between most groups and their spokespersons, as well as the consequences that follow from this relationship. My view, thus, provides the framework for distinguishing when responsibility for an assertion lies at the collective level, and when it should be shouldered by an individual simply speaking for herself.

The Epistemology of Groups. Jennifer Lackey, Oxford University Press (2021). © Jennifer Lackey. DOI: 10.1093/oso/9780199656608.003.0005

1 To avoid confusion, I should note that I argued in Lackey (2014a) on behalf of a deflationary account of group testimony. But while my topic here is on what we might call the metaphysics of group assertion or group testimony (where I here use “assertion” and “testimony” interchangeably)—i.e., what is it for a group to assert or testify—the account in Lackey (2014a) takes up the epistemology of group assertion or group testimony—i.e., how do we acquire justified belief or knowledge via group assertion. Thus, my view is inflationary in a metaphysical sense, but deflationary epistemologically. 2 It is also possible for a group to be structured so that every member is authorized as a spokesperson for the group. I’m grateful to an anonymous reviewer that led to the inclusion of this point. 3 See Searle (1995). 4 This view is supported by our general practices involving spokespersons. For instance, in a recent CNN article addressing whether President Trump called for an end to Robert Mueller’s investigation into collusion with Russia, it is reported that Trump’s attorney, John Dowd, said that “he was speaking on his own behalf, although he had earlier told the Daily Beast, which first reported the statement, that he was speaking on behalf of the President. Dowd’s comment wasn’t authorized by the President, a person close to…Trump told CNN” (https://www.cnn.com/2018/03/17/politics/john-dowd-mueller-russiainvestigation/index.html, accessed March 18, 2018). 5 See Fricker (2007). 6 Some read Austin (1962) as requiring uptake in order for illocutionary speech acts to be successful. See, for instance, Langton (2009). For objections to the uptake requirement, see Antony (2011). Fricker (2012) applies this reading of Austin to testimony, writing “Without my uptake, whatever you may succeed in doing with your words, it won’t be quite testifying” (Fricker 2012, p. 254). Even if this is a correct reading of Austin, there are at least three worries with applying it to testimony or assertion. (For our purposes here, we can treat testimony and assertion interchangeably.) First, there is not a single view in the literature of what it is to testify that supports the uptake requirement. (See, for instance, Coady (1992), Fricker (1995), Audi (1997), Graham (1997), Elgin (2002), and Lackey (2008).) Second, this view has the consequence that one does not testify in a private diary that is never read, in a courtroom when one is not believed, and so on. Third, if one takes someone to be lying and thus there is no uptake, then there is, on this view, no assertion. If asserting is a necessary condition on lying, then we get the result that the known liar cannot lie. For all of these reasons, uptake should not be taken to be necessary for testifying or asserting. 7 Just to be clear, the parallel is as follows: just as individuals can assert in the absence of audience recognition, so, too, can groups assert via spokespersons without such recognition. 8 Explicit endorsement of a group’s policies and procedures is a very strong requirement for group membership. Ludwig argues that this requirement is true only of “genuine organizations,” where “members choose to join and, hence, agree to the conditions of membership, which includes an endorsement of the institutional arrangements” (Ludwig 2014, p. 97). As I will argue later, however, I think that members can join groups without such an endorsement. 
9 For detailed discussions about this difference between acceptance and belief, see, for instance, van Fraassen (1980), Stalnaker (1984), Cohen (1989 and 1992), Wray (2001), and Hakli (2007 and 2011). 10 Ludwig argues that citizenship is a hybrid status, where “operative members” are those “who have accepted membership” and thus when we say that a hybrid institutional group has done something qua institution, this “entails that (and only that) its operative members have all contributed, whether or not it has non-operative members as well” (Ludwig 2014, p. 99). But why would those who obtained citizenship through birthright not be operative? Doesn’t this subgroup make up the bulk of most nations? Moreover, since it is highly questionable whether accepting membership is necessary for group membership, it would be best to not build this into one’s account of group agency. 11 Ludwig (2014) might deny that this is a case of a spokesperson asserting on behalf of a group since he claims that only genuine organizations can authorize proxy agents. But this isn’t plausible. Unstructured, informal groups can evolve to have spokespersons without any clear act of “joining” or of agreeing to the conditions of membership. 12 I will say what else is needed to distinguish individual from group assertion in what follows. 13 I am grateful to Michael Bratman for this question. 14 One might ask the following: if Philip Morris hires an outside spokesperson, S, to represent the company’s view, does this thereby make S a member of the group in question? The answer here is clearly no. If the Supreme Court hires an outside clerk to assist with legal research, this does not thereby make the clerk a member of the Supreme Court. If Northwestern University hires Bulley and Andrews Construction Firm to renovate one of the academic buildings, this does not make the construction workers members of Northwestern. Bringing a suit against the firm, for instance, is not to thereby bring suit against Northwestern. 15 While individuals might also grant authority to another to speak on their behalf, such as when a lawyer represents an individual client, group assertion is distinctive in that this is the standard way in which groups assert. 16 I should note that in Lackey (2006 and 2008), condition (i) is presented as being both necessary and sufficient for an
individual to testify (or assert). However, to distinguish what a spokesperson does in testifying or asserting on behalf of someone else, rather than on behalf of herself, my account of individual testimony (assertion) should explicitly specify this. Thus, it should read: S testifies (asserts) that p by making an act of communication a if and only if S reasonably intends to convey on behalf of herself the information that p (in part) in virtue of a's communicable content. I am grateful to Marija Jankovic for a question that led to the inclusion of this note. 17 One might wonder whether there is a third kind of group assertion, what we might call distributed group assertion. Suppose, for instance, that there are three members of a committee, each of whom uploads information to an automated system. M1 submits that p, M2 submits that q, and M3 submits that r. The system then aggregates the information and issues a public report that the committee's view is that s, even though no member of the group is aware of this aggregated result. Is this group assertion? Strictly speaking, the answer is no, as there is simply no one who intends to convey the information that s. When we learn that s from the automated output, we're learning from the system, not from the group. This is supported by the fact that if it were the group's assertion, then the committee could learn from its own assertion. For instance, when the output that s is issued and the committee learns this by reading the report, the committee itself could come to learn that s from its own assertion. Given this, distributed group assertion is assertion in only an extended sense. 18 See Lackey (2006 and 2008). 19 This is a slightly modified example from Audi (1997). 20 I am grateful to Anne Baril for this example. 21 Both of these views are presented as accounts of group testimony, but they can be understood as accounts of group assertion for our purposes. I will thus use "testimony" and "assertion" interchangeably here. 22 I should note that I am not saying that a group cannot intend to do something that no individual member of the group intends to do. But if one's account of group testimony is going to rely on a thesis this substantive, then it should be defended. 23 See Lackey (2008). 24 I should note that Tollefsen's account of group testimony is adapted from Justin Hughes's account of group speech acts, according to which: For a group, G, speaker, S, and utterance, x, G utters x if and only if: 1. There exists a group, G, this group has an illocutionary intention, and x conveys that illocutionary intention. 2. S believes that he or she knows the illocutionary intention of G and that X conveys this illocutionary intention. 3. G does not object to S uttering x on its behalf and if G intends for any specific individual(s) to utter x, it intends for S to utter x. S believes that he or she knows this. 4. 2 and 3 are the reasons S utters x. (Hughes 1984, p. 388) My arguments here apply, mutatis mutandis, to Hughes's account. 25 See Unger (1975), Williamson (1996 and 2000), Adler (2002), DeRose (2002), Reynolds (2002), Hawthorne (2004), and Fricker (2006). Cohen (2004) says that he is "not unsympathetic" to the view. 26 See, for instance, Douven (2006), Lackey (2007), and MacKinnon (2013). 27 Just as an attorney might bring a lawsuit on behalf of her client, without being a party to the suit herself, so, too, a spokesperson might assert on behalf of another without thereby asserting herself. 28 Goldberg (2015). 
29 I should note that I have never defended a view that is entirely deflationary. Rather I have argued for views that have as a condition that some of the individual members of the group instantiate the phenomenon in question. 30 See Lackey (2016) and Chapters 1, 2, and 3 of this book. 31 See Lackey (2007 and 2008). 32 Proponents of different versions of the transmission view include Welbourne (1979, 1981, 1986, and 1994), Hardwig (1985 and 1991), Ross (1986), Burge (1993 and 1997), Plantinga (1993), McDowell (1994), Williamson (1996 and 2000), Audi (1997, 1998, and 2006), Owens (2000 and 2006), Reynolds (2002), Faulkner (2006), and Schmitt (2006). For objections to this view, see Lackey (2008).

5 Group Lies

We often talk about groups lying. For instance, a Reuters headline regarding a lawsuit brought against BP by the U.S. government claims, "BP lied about [the] size of U.S. Gulf oil spill, lawyers tell trial."1 In particular, the plaintiffs argue that immediately after the 2010 spill, internal company e-mails reveal that BP publicly reported that only 5,000 barrels of oil were leaking into the ocean per day with the deliberate intention to be deceptive, even though the company believed that this report was false and, in fact, knew that up to 100,000 barrels per day could have been leaking. This case is not unusual: a cursory review of recent news pulls up stories about the lies of Facebook, Google, the Trump Administration, and various drug companies.

Moreover, there are often enormously significant consequences that follow from group lies, for both the liars and those to whom they lied. If BP lied about how many barrels of oil were leaking into the ocean, this could be the difference between its being fined $17.6 billion and its being fined $4.5 billion. If the Trump Administration lied about Russian interference in the 2016 election, then this not only warrants the impeachment of Donald Trump, but also calls into question the legitimacy of our democratic processes. If a pharmaceutical company lies about the potentially harmful side effects of a highly lucrative drug to treat cancer, then this could result in its bearing responsibility for the health problems and death of countless patients.

Despite the prevalence of group lies and their often far-reaching effects, there has never before been a philosophical treatment of group lies.2 This chapter begins the process of filling this surprising gap in the literature by focusing on the question of what a group lie is. After providing an account of how to understand individual lies, I will consider, first, whether group lies can be understood in terms of the lies of the group's members and, second, whether group lies can be characterized in terms of joint agreement by the group's members to lie. After showing both views to be misguided, I offer my own account of group lying, according to which it crucially involves the group offering a statement. In particular, because what a group says can come apart from what its individual members say, I argue that a group might lie when no individual member lies, and a group might fail to lie even though every individual member does. Thus, my view provides a framework for not only understanding what a group lie is, but also for holding groups responsible for their broader linguistic behavior.

5.1 Individual Lies

A natural place to turn in trying to understand the nature of a group lie is with accounts of what is involved in an individual lying. The traditional view of lying, with roots dating back at least to
the work of Augustine in De mendacio, holds that this phenomenon involves two central components: stating what one does not believe oneself and doing so with the intention to deceive. More precisely:

LIE-T: A lies to B if and only if (1) A states that p to B, (2) A believes that p is false, and (3) A intends to deceive B by stating that p.3

LIE-T remained the generally accepted view of the nature of lying until somewhat recently, with condition (3) coming under repeated attack. The form of this challenge has been to produce clear instances of lying where there is no intention on the part of the speaker to deceive the hearer, thereby showing that (3) is not a necessary condition for lying. To this end, there are three central kinds of lies that are used as counterexamples: (i) bald-faced lies, (ii) knowledge-lies, and (iii) coercion-lies.4 Each of (i) through (iii) is taken to show quite decisively that the traditional conception of lying has been radically misguided. In particular, it is concluded not only that lying does not require the intention to deceive, but also that deception is not at all a part of what it is to lie.5 Thus, LIE-T has been replaced with a variety of competing accounts, none of which even makes mention of deception.

If correct, this radical shift in our conception of lying has significant implications beyond the obvious ones involved in understanding the nature of this phenomenon. For one natural criticism that we might have of the liar is that she is engaged in intentional deception, where such deceit carries the weight of the prima facie moral wrongness of such acts. Divorcing lying from deception, however, also divorces it from this explanation of its prima facie moral wrongness.

My own view is that the tides have turned too quickly in the literature on lying. For while it is indeed true that (i)–(iii) are lies and that there is no intention on the part of the speaker to deceive the hearer in such cases, this does not warrant severing the connection between lying and deception altogether. Thus, I replace LIE-T with the following:

LIE-L: A lies to B if and only if (1) A states that p to B,6 (2) A believes that p is false, and (3) A intends to be deceptive to B in stating that p.

I will show in what follows not only that LIE-L can capture all three of the kinds of lies that LIE-T cannot (i.e., (i)–(iii)), but also that non-deception accounts of lying wrongly count as lies classic cases of what I have elsewhere called selfless assertions. This reveals that, contrary to the currently widespread approach in philosophy, lying is indeed tied to deception as a matter of necessity.

5.2 Counterexamples to the Traditional View of Lying

Let's begin with the first kind of counterexample to LIE-T: bald-faced lies. A bald-faced lie is an undisguised lie,7 one where a speaker states that p where she believes that p is false and it is common knowledge that what is being stated does not reflect what the speaker actually believes. For instance, suppose that a student is caught flagrantly cheating on an exam for the fourth time this term, all of the conclusive evidence for which is passed on to the Dean of Academic Affairs. Both the student and the Dean know that he cheated on the exam, and they each know that the
other knows this, but the student is also aware of the fact that the Dean punishes students for academic dishonesty only when there is a confession. Given this, when the student is called to the Dean’s office, he states, “I did not cheat on the exam.”8 This is a classic bald-faced lie: the speaker states a proposition that he believes is false and both the speaker and the hearer know that this is the case and know that the other knows this. There is, then, no intention on the part of the speaker to deceive the hearer. In particular, in stating that he did not cheat on the exam, the student does not intend to bring about any false beliefs in the Dean, either about his cheating on the exam or about his beliefs regarding this event. Indeed, he may even wish for the Dean to believe that he did cheat on the exam, just to relish in the Dean’s spinelessness. Nonetheless, the student is clearly lying. This shows that condition (3) of LIE-T is false.9 The second kind of counterexample to LIE-T involves what Sorensen (2010) calls “knowledge-lies.” “An assertion that p is a knowledge-lie exactly if intended to prevent the addressee from knowing that p is untrue but is not intended to deceive the addressee into believing [that] p” (Sorensen 2010, p. 610). For instance: In Spartacus (Universal Pictures, 1960), the victorious Roman general, Marcus Licinius Crassus, asks the recaptured slaves to identify Spartacus in exchange for leniency. Spartacus…rises to spare his comrades crucifixion. However, the slave on his right, Antoninus, springs to his feet and declares, “I am Spartacus!” Then the slave on Spartacus’ left also stands and declares “I am Spartacus!”, then another slave, and another until the whole army of slaves is on their feet shouting, “I am Spartacus!” (Sorensen 2010, p. 608)

Each slave in this case is offering a knowledge-lie; however, with the exception of Antoninus, none intends to deceive Crassus into believing that he is actually Spartacus. For once the second slave claims this identity, it is clear that he is instead aiming to prevent Crassus from learning who Spartacus is. Given that each slave seems to be lying, condition (3) of LIE-T is again shown to be false. The third kind of counterexample to LIE-T involves what we might call coercion-lies. A coercion-lie occurs when a speaker believes that p is false, states that p, and does so, not with the intention to deceive, but because she is coerced or frightened into doing so. For instance, suppose that an innocent bystander witnesses the murder of a gang member by someone from a rival gang, but is threatened with death if she testifies against the murderer. Because of this, the bystander states on the stand at trial, “I did not witness the defendant murder the victim in question.”10 Here the intention of the bystander is not to deceive the court into believing that she did not witness the murder; instead, her aim is to avoid retaliation from the defendant’s fellow gang members. Indeed, she may even desperately wish for the court to believe that she did witness the crime. That the court ends up being deceived by her statement is simply an unintended consequence of the action needed to achieve the aim of self-preservation. Despite this, the bystander clearly lies on the stand, evidenced at least in part by the fact that she could be found guilty of perjury. The intention to deceive is again shown not to be necessary for lying.

5.3 Non-Deception Accounts of Lying

The combination of these three types of counterexamples provides a formidable challenge to the
traditional view of lying and has prompted a flurry of alternative views. The three most prominent ones, offered by Fallis (2009), Carson (2010), and Sorensen (2007), respectively, are:

LIE-F: A lies to B if and only if (1) A states that p to B, (2) A believes that p is false, and (3) A believes that she makes this statement in a context where the following norm of conversation is in effect: Do not make statements that you believe to be false. (Fallis 2009, p. 34)

LIE-C: A lies to B if and only if (1) A states that p to B, (2) A believes that p is false or probably false (or, alternatively, A does not believe that p is true), and (3) A intends to warrant the truth of that p to B. (Carson 2010, p. 37)11

LIE-S: A lies to B if and only if (1) A asserts that p to B, and (2) A does not believe that p. (Sorensen 2007, p. 256)

LIE-F, LIE-C, and LIE-S are virtually identical in the first two conditions, with the latter accounts simply allowing classic cases of bullshit, where the speaker does not believe that p is false but also does not believe that p is true, to count as lies.12 They also all share the common feature of completely divorcing lies from deception. But merely stating what one believes to be false is not sufficient for lying, since speakers frequently say what they believe is false when being ironic, joking, or reciting lines in a play and yet are not lying when they do so. For this reason, each proposal adds a further component to capture only those statements that are genuine lies. Fallis does so through requiring that the speaker believes that she is offering her statement in a context where the following norm of conversation is in effect: Do not make statements that you believe to be false. Since such a conversational norm is not believed to be in effect by a speaker who is being ironic, humorous, or acting, LIE-F successfully rules them out as instances of lying. So, too, do LIE-C and LIE-S, the former because Carson understands intending to warrant the truth of a proposition as being a promise or guarantee that what one says is true13 and the latter because speakers typically do not offer flat-out assertions in cases of irony, jokes, and acting.14

Given that all three accounts divorce lying from the intention to deceive, they can also capture bald-faced lies, knowledge-lies, and coercion-lies, unlike LIE-T. When denying cheating to the Dean, the student certainly does not believe that the context is an ironic or humorous one, so he believes that the conversational norm is in effect: Do not make statements that you believe to be false; he intends to warrant the truth of what he says since he wishes to go on the record as denying having cheated on the exam; and he offers a flat-out assertion. The same is true of both the slaves claiming to be Spartacus and the innocent bystander denying having witnessed the murder in question. Neither the slaves nor the bystander thinks that there is anything about the contexts in which their statements are being offered that would prevent this usual conversational norm from being in effect; they all are inviting their hearers to trust them, even if the invitations are empty; and their statements are flat-out assertions.

5.4 Back to Deception

Despite the virtues of these non-deception views of lying, I will now argue that they are misguided, as the divorce between lies and deception is an unhappy one.

The first point to notice is that there is a range of ways of being deceptive. Perhaps the most obvious is the one that is the focus of proponents of non-deception accounts of lying, where the aim is to bring about false beliefs in the victim of the deceit. But another, less explicit, form of deception is where the aim is to conceal information. According to the Oxford English Dictionary, deceit is "the action or practice of deceiving someone by concealing or misrepresenting the truth." And Carson, despite endorsing a non-deception account of lying, claims that "[t]o conceal information is to do things to hide information from someone—to prevent someone from discovering it. Often, concealing information constitutes deception or attempted deception" (Carson 2010, p. 57).15 Given this, I propose the following distinction, which will suffice for our purposes even if it does not fully capture all of the ways of being deceptive:

Deceit: A deceives B with respect to whether p if and only if A aims to bring about a false belief in B regarding whether p.

Deception: A is deceptive to B with respect to whether p if A aims to conceal information from B regarding whether p.

Concealing information regarding whether p can be understood broadly here, so that it subsumes, among other phenomena, concealing evidence regarding whether p. Moreover, notice that concealing information is importantly different from withholding information. To withhold information is to fail to provide it, rather than to hide or keep it secret. If I am trying to find a home for my challenging puppy, I withhold information about her not being housebroken if you don't ask me anything about it and I do not mention it. But if I frantically discard all of the training pads lying throughout my house before you come over, then I am concealing the information that she is not trained.16 Finally, notice that concealing information is sufficient, though not necessary, for being deceptive; thus, it is merely one instance of a more general phenomenon. Obviously, another way of being deceptive is to be deceitful, where one's aim is to bring about a false belief in one's hearer.

With this distinction in mind, let us return to the three counterexamples to LIE-T. In the case of the student's bald-faced lie to the Dean, while he does not intend to deceive the Dean into falsely believing that he did not cheat, he does intend to conceal crucial evidence from the Dean that is needed for punishment from the university—namely, an admission of wrongdoing. Without the evidence that would be provided by a confession, the Dean is powerless to take action against the student, and so concealment of the student's knowledge of what actually happened is the central aim of his statement.17 According to our distinction above, then, the student does not intend to deceive the Dean, but he does intend to be deceptive to him. In the case of the slaves' knowledge-lies to Crassus, while it is clear that there is no intention to deceive Crassus into believing that they are all Spartacus, there is the intention to conceal the true identity of Spartacus. Each slave, besides Antoninus, aims to conceal the information that that person—that is, Spartacus—really is Spartacus and, in so doing, intends to be deceptive without deceiving.
Finally, in the case of the bystander’s coercion-lie to the court, while she does not intend for the court to believe that she did not witness the defendant murder the victim in question, she does aim to conceal the eyewitness testimony that can be used for a conviction. Otherwise put, the bystander is not aiming to prevent the court from convicting the defendant, but she is aiming to
prevent the court from convicting the defendant on the basis of her testimony. Again, there is deception without the intention to deceive.

It is worth pointing out that one can be deceptive in the relevant sense, even if the information that one is aiming to conceal is common knowledge. In other words, ignorance of that which is being concealed is not necessary in order to be the victim of deception. To see this, notice that deception requires that A aims to conceal information from B, and A can certainly aim to do this even if A is ultimately, perhaps even inevitably, unsuccessful in achieving this. I can aim to win a marathon even if I know that I will ultimately fail to achieve this goal. In this sense, even if "conceal" is a success term—that is, if A conceals x from B, A succeeds in hiding x from B—"aiming to conceal" is surely not—that is, A can aim to conceal x from B even if A fails to succeed in hiding x from B. It is, therefore, not available to the proponent of a non-deception view of lying to reject my analyses of the above cases by arguing that, because there is common knowledge of that which is being lied about,18 there is no concealment of information and, accordingly, no deception. Thus, none of the counterexamples facing LIE-T succeeds in showing that the broader notion of deception is not necessary for lying.19

Moreover, that there is such a necessary relationship between lying and deception can be supported by considering a case of what I have elsewhere called selfless assertion.20 There are three central components to this phenomenon: first, a subject, for purely non-epistemic reasons, does not believe that p; second, despite this lack of belief, the subject is aware that p is very well supported by all of the available evidence; and, third, because of this, the subject asserts that p without believing that p. Here is an instance of selfless assertion:

CREATIONIST TEACHER: Stella
is a devoutly Christian fourth-grade teacher, and her religious beliefs are grounded in a personal relationship with God that she takes herself to have had since she was a very young child. This relationship grounds her belief in the truth of creationism and, accordingly, a belief in the falsity of evolutionary theory. Despite this, Stella fully recognizes that there is an overwhelming amount of scientific evidence against both of these beliefs. Indeed, she readily admits that she is not basing her own commitment to creationism on evidence at all but, rather, on the personal faith that she has in an all-powerful creator. Because of this, Stella thinks that her religious beliefs are irrelevant to her duties as a teacher; accordingly, she regards her obligation as a teacher to include presenting material that is best supported by the available evidence, which clearly includes the truth of evolutionary theory. As a result, while presenting her biology lesson today, Stella asserts to her students, “Modern day Homo sapiens evolved from Homo erectus,” though she herself does not believe this proposition.21 Despite the fact that Stella’s statement that Homo sapiens evolved from Homo erectus satisfies all three non-deception accounts of lying above, Stella does not seem to be lying to her students. Why not? My answer to this question is that it is precisely because Stella does not intend to be deceptive to her students.22 To see this, the first point to notice is that Stella’s statement that Homo sapiens evolved from Homo erectus clearly satisfies the conditions put forth in LIE-F, LIE-C, and LIE-S. She offers this statement to her students, where she herself believes that it is false. Moreover, since she clearly does not regard the context of her classroom as an ironic, humorous, or theatrical one, she does so while believing that the following norm of conversation is in effect: Do not make statements that you believe to be false. The reason that she violates this norm is that she believes
it is overridden or defeated by the duty to state what the scientific evidence best supports when teaching her biology lesson.23 Stella also intends to warrant the truth of the proposition that Homo sapiens evolved from Homo erectus since she is promising her students that what she says is true, just as she does when she states what she herself believes.24 And, finally, there is nothing about her statement or the context that prevents her statement from qualifying as an assertion. The second point to notice is that Stella does not in any way aim to be deceptive to her students in stating that Homo sapiens evolved from Homo erectus. For though she does not herself believe this, she regards her own personal beliefs regarding religion—particularly those that are grounded in her relationship with God—as irrelevant to the information she conveys during her biology lesson. Reporting to her students what her religious beliefs are about the origin of humans would, for Stella, be comparable to sharing with them what her favorite aspect of evolutionary theory is. Both are irrelevant to her biology lesson. Given this, when Stella states to her students a proposition that she believes is false, her aim is not to bring about a false belief in her students or to conceal her own beliefs on the matter. In fact, we can imagine that she would willingly share her own views about evolutionary theory with her students, were they to ask her. Instead, Stella’s aim is to convey to her students the theories that are best supported by the current scientific evidence, which include evolutionary theory but not creationism. The final point to make is that Stella is not lying to her students. Beyond the intuitiveness of this conclusion, it can be further supported by considering a slightly modified version of CREATIONIST TEACHER. Suppose that everything about the case remains the same, except that Stella states to her students that Homo sapiens evolved from Homo erectus, not because she regards her religious beliefs on the matter as irrelevant to her biology lesson, but because she will get fired from her teaching job if she reveals such beliefs to her students. In such a case, the aim of Stella reporting what she herself does not believe is to conceal her own religiously grounded beliefs on the topic, and thus she intends to be deceptive to her students. Corresponding to this, Stella’s statement also seems to be a lie.25 We have seen, then, that LIE-F, LIE-C, and LIE-S all count as lies assertions that are clearly not, thereby showing that such non-deception accounts of lying fail to provide sufficient conditions for lying. Should we then simply add an intention-to-be-deceptive requirement to these accounts? No, since the alternative requirements found in these views (i.e., conditions (3) in LIE-F and LIE-C) also fail to provide necessary conditions for lying.26 In particular, such accounts fail to count as lies assertions that clearly are. To see this, consider first the case below: DECEPTIVE ANTHROPOLOGIST: Shawn
is an anthropologist who visits a highly isolated tribal community living in the Amazon rainforest. He does not have any beliefs at all about the norms governing the conversations in their interactions. Nevertheless, he wishes to gain their trust quickly, and so he states to them, “My grandmother was an anthropologist who lived with members of your tribe decades ago, and so I feel as though I already know you.” Not only does Shawn believe that this is false, he also states this with the intention to deceive the tribe members into believing that he has a personal connection with their ancestors. Shawn does not satisfy condition (3) of LIE-F since he fails to believe that the following conversational norm is in effect: Do not make statements that you believe to be false. But surely this lack of belief does not prevent him from lying. For not only does Shawn state what he believes to be false, he does so with the explicit aim to deceive the tribe members into believing this falsehood. That Shawn’s statement is a lie is further supported by noting that a committee
investigating a supposed ethics violation involving his research methods would hardly think the matter resolved when he says, “Look, I didn’t lie to the tribe members since I had no idea what conversational norms were operative in their community.” Consider, now, the following case: SABOTAGING FRIEND: Fran
wants to sabotage the relationship between Sam and Betty, but does not want to be held responsible for their break-up. So, she tells Sam, “Betty is cheating on you, but don’t take my word for it.” Fran not only believes that it is false that Betty is being unfaithful, she also offers her assertion with the deliberate intention to deceive Sam. Fran does not satisfy condition (3) of LIE-C since she does not intend to warrant the truth of the proposition that Betty is cheating on Sam. Specifically, because she does not want to shoulder the responsibility of Betty and Sam breaking up, she makes it explicit that she is not promising or guaranteeing that what she says is true. Despite this, Fran is clearly lying to Sam, a verdict supported by the similarity the situation bears to the paradigmatic case of Iago lying to Othello about Desdemona’s fidelity.27 Given this, I propose the following account of lying: LIE-L: A lies to B if and only if (1) A states that p to B, (2) A believes that p is false, and (3) A intends to be deceptive to B in stating that p. LIE-L avoids all of the problems afflicting rival views. It delivers the correct result in the three kinds of counterexamples to LIE-T: since the aim of the speaker is to be deceptive in bald-faced lies, knowledge-lies, and coercion-lies, my view counts all three as lies. It also provides the right verdict in CREATIONIST TEACHER and its modified version: Stella does not lie to her students in the original scenario because her aim is to report what current scientific evidence supports regarding evolutionary theory, but she does lie in the modified version since her intention is to conceal her own religiously grounded beliefs about creationism in order to avoid termination. Still further, it provides the correct verdict in the counterexamples to conditions (3) of LIE-F and LIE-C: since there is the intention to be deceptive in Shawn’s statement to the tribal community in DECEPTIVE ANTHROPOLOGIST and in Fran’s statement to Sam in SABOTAGING FRIEND, LIE-L correctly regards both as lies. Finally, my account distinguishes lying from irony, joking, and acting since the intention to be deceptive is present in the former statements, but not in the latter. While there may be reason, then, to sever the connection between lying and the intention to deceive, lying nonetheless remains fundamentally tied to the intention to be deceptive.
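For readers who want the shape of the view at a glance, LIE-L can be summarized as a single biconditional. The rendering below is only an informal gloss on conditions (1)–(3); the predicate labels States, BelFalse, and IntDeceive are introduced here purely for convenience and are not part of the account itself:

$$\mathrm{Lie}(A, B, p) \;\leftrightarrow\; \mathrm{States}(A, B, p) \,\wedge\, \mathrm{BelFalse}_{A}(p) \,\wedge\, \mathrm{IntDeceive}(A, B, p)$$

where States(A, B, p) abbreviates condition (1) (A states that p to B), BelFalse_A(p) abbreviates condition (2) (A believes that p is false), and IntDeceive(A, B, p) abbreviates condition (3) (A intends to be deceptive to B in stating that p).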

5.5 Summativism and Sufficiency

Given the account of individual lying found in LIE-L, let’s now turn to understanding group lies. To frame our discussion, it will be helpful to return to the paradigmatic group lie presented in Chapter 1: TOBACCO COMPANY: Philip
Morris, one of the largest tobacco companies in the world, is aware of the massive amounts of scientific evidence revealing not only the addictiveness of smoking, but
also the links it has with lung cancer and heart disease. While the members of the board of directors of the company believe this conclusion, they all jointly agree that, because of what is at stake financially, the official position of Philip Morris is that smoking is neither highly addictive nor detrimental to one’s health, which is then published in all of their advertising materials. Arguably, no group is more infamous for its lies than Philip Morris, and thus the scenario described in TOBACCO COMPANY, which is fairly close to non-fiction, is precisely the sort of case that any account of group lies must have the resources to accommodate. As with other group phenomena, perhaps group lying can be understood in terms of what we might call simple summativism. Such a view can be expressed as follows: SS: A group, G, lies to B in stating that p if and only if all of the members of G lie to B in stating that p. The account of individual lying found in LIE-L can then be used to understand what it is for the members of G to lie to B. In this way, a group lie is simply constructed out of the individual lies of its members. There are, however, two immediate problems with SS that require minor modification. First, requiring that all of the members of G lie to B in order for the group to lie to B is too stringent. Surely, Philip Morris doesn’t fail to lie to the public simply because a few of its employees are on vacation or home ill and thus do not satisfy the conditions found in LIE. So perhaps SS should be revised so that only some of the members of G need to lie to B in order for the group to lie to B. But this is still not enough to render SS plausible. In particular, what if the individual members of the group who lie about whether p to B are all utterly irrelevant to the domain in question? For instance, suppose that while all of the members of the board of directors of Philip Morris do not lie to B about the harmful effects of smoking, the custodians do. Is this enough for Philip Morris to lie to B about this matter? Clearly not. What this shows is that the individual lies in SS need to be made, not just by any members, but by the right ones. Most groups have members with vastly different roles, only some of whom have the authority or power to determine certain outcomes for the group as a whole. As we saw in earlier chapters, those who have the relevant decision-making authority are often called operative members.28 Thus, while the custodians in this case might at least in part determine whether Philip Morris lies about the cleanliness of the company’s facilities, they are irrelevant to whether the group lies about the harmful effects of smoking; this is because they are operative members regarding the former, but not the latter, question. SS should, then, be modified in the following way: SS*: A group, G, lies to B in stating that p if and only if most of the operative members of G lie to B in stating that p. SS* has clear virtues. It delivers the correct verdict that Philip Morris lies to the public in TOBACCO COMPANY with ease: given that all of the members of the board of directors individually lie in stating to the public that smoking does not cause lung cancer, the company lies about this as a group, too. Moreover, it explains group lying while utilizing only resources from the account of individual lying, which is not only simple but also avoids positing phenomena, such as group belief, that require further explication.
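Put schematically, the move from SS to SS* swaps a universal quantifier over all of G’s members for a majority condition on the operative members. The rendering below is only an illustrative gloss, in which “most” is read as “more than half” purely for definiteness and Op(G, p) labels the set of members of G who are operative with respect to the question whether p:

$$\mathrm{SS}{:}\quad \mathrm{GroupLie}(G, B, p) \;\leftrightarrow\; \forall m \in G,\ \mathrm{Lie}(m, B, p)$$

$$\mathrm{SS}^{*}{:}\quad \mathrm{GroupLie}(G, B, p) \;\leftrightarrow\; \left|\{\, m \in \mathrm{Op}(G, p) : \mathrm{Lie}(m, B, p) \,\}\right| > \tfrac{1}{2}\left|\mathrm{Op}(G, p)\right|$$

where Lie(m, B, p) is individual lying in the sense of LIE-L.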

Despite these virtues, SS* faces two objections that, to my mind, show decisively that we need to look in an altogether different place to understand group lies. The first is that all of the operative members of G might state that p to B, believe that p is false, and intend to be deceptive to B with respect to whether p in stating that p, but G still might not lie regarding this question. Consider the following: PERSONAL LIES: Philip
Morris has three operative members regarding the question whether smoking causes lung cancer, M1–M3. All three members lie to B in stating that there is not a causal connection between the two, but they do so entirely in the context of their personal relationships with B. B is M1’s wife, and M1 lies to her because he does not want her to worry about their son’s smoking habit when there is nothing that she can do to prevent it. M2 is B’s best friend, and M2 lies to her so as to not cause marital problems by contradicting M1’s testimony. And M3 is B’s son, and M3 lies to her so as to avoid his mother’s nagging to quit smoking. In PERSONAL LIES, though all three operative members of Philip Morris satisfy conditions (1)–(3) of LIE and thereby lie to B regarding the question whether smoking causes lung cancer, the tobacco company itself does not lie to B. This is because each member lies to B, not in his or her role as an employee of Philip Morris, but entirely because of the personal relationship shared with B. M1 lies to B as her husband, M2 lies to B qua her best friend, and M3 lies to B in his role as her son. Otherwise put, M1–M3 would have behaved exactly the same way toward B, even if none of them worked at the tobacco company.29 This shows that the context in which an individual member lies can affect whether the group itself lies. As we saw in Chapter 4, similar considerations apply with respect to assertion or testimony more broadly. If the CEO of Philip Morris testifies that smoking does not pose health risks, whether it is the company’s statement depends on where and to whom it was offered. It is one thing to state it at a board meeting to coworkers and quite another to say this to one’s spouse while on vacation. PERSONAL LIES thus reveals the following result about group lies: a group can fail to lie to someone with respect to the question whether p, despite the fact that every single one of its operative members lies to this person regarding this question. SS*, then, does not specify sufficient conditions for a group lie.

5.6 Summativism and Necessity

A second objection to the version of simple summativism found in SS* is that none of the operative members of G might state that p to B, believe that p is false, and intend to be deceptive to B with respect to whether p in stating that p, but G might still lie to B regarding this question. Consider the following: MANIPULATED SPOKESPERSON: Philip
Morris hires spokesperson S—who is not a member of the group and is known to be naïve—to be the voice of the company’s views. S’s job is to attend the meetings of the board of directors at Philip Morris and, on the basis of the information therein presented, to draw the relevant conclusions and convey them to the public on the company’s behalf. At a recent meeting, the board of directors strategically presented only the very small
body of scientific evidence that indicates that there is not a connection between smoking and lung cancer. This was done both with the knowledge that there is overwhelming evidence showing that this is in fact false and with the deliberate intention that S would then draw the conclusion that there is not such a connection and state this to the public. Nevertheless, no operative member of Philip Morris actually ever states, to one another in the boardroom or to S, that there is not a connection between smoking and lung cancer. Moreover, as the company expected, S herself knows very little about the scientific evidence in question and so actually believes that there is not such a connection. At a recent press conference, S, in her role as the official spokesperson for Philip Morris, stated that there is not a connection between smoking and lung cancer. Let p be the proposition that there is not a connection between smoking and lung cancer. In MANIPULATED SPOKESPERSON, while all of the operative members of Philip Morris believe that p is false, none of them actually states that p, either to S or to the public. So, while they all satisfy condition (2) of LIE, none of them satisfies (1) and (3) of LIE. Moreover, while S states that p to the public, she believes that p is true and does not intend to be deceptive to the public with respect to whether p in stating that p. Instead, S’s intention is to convey the view of Philip Morris in stating that p. S, then, satisfies (1) of LIE, but not (2) or (3). Thus, neither the operative members of the company nor the spokesperson satisfies the three conditions found in LIE. Nevertheless, Philip Morris lies to the public about the connection between smoking and lung cancer in MANIPULATED SPOKESPERSON. To see this, notice that Philip Morris knowingly brings it about that S—who is the company’s official spokesperson—says something false to the public on its behalf, and does so with the explicit intention to be deceptive. In this context, the mere fact that the words do not literally come out of the mouths of the operative members of Philip Morris is irrelevant to whether the group lies to the public. This is because, as we saw in Chapter 4, the assertion, though offered by the spokesperson, is nevertheless the group’s. This, in turn, is due to the special relationship that exists between a spokesperson and the party she represents: the assertion she offers qua spokesperson is that of the represented party, not her own. Thus, she can unknowingly lie on a group’s behalf while believing herself in the truth of that which she is reporting. This point can be made clearer by considering the distinction between lying and misleading. Jennifer Saul provides the following case to illustrate the difference between these phenomena: suppose that a politician, Tony, believes that there are no weapons of mass destruction in Iraq but wants to convince his audience otherwise. Now compare the two utterances that he might offer in response to the question, “Are there weapons of mass destruction in Iraq?”: (a) There are weapons of mass destruction in Iraq. (b) Saddam Hussein is a very dangerous man. If we suppose that in both cases Tony’s intention is to bring it about that his audience believes that Iraq has weapons of mass destruction, then while he has lied in (a), he has merely misled in (b). A central difference between the two is that Tony’s statement is true in only (b), despite the fact that he deliberately conveyed something false in both (Saul 2012, p. 4). 
Given this, one might ask why it isn’t the case that S’s statement is merely an instance of misleading, not lying to, the public. By way of response to this question, notice that the operative
members of Philip Morris do merely mislead S. By virtue of cherry-picking only the handful of studies that suggest that there is not such a connection, they offer true statements to S with the deliberate intention to convey something false. But their misleading S results in the company itself lying to the public. This is because all parties involved recognize that S’s statements are on behalf of Philip Morris—that her reports are the company’s assertion. In this way, so long as S is functioning properly in her capacity as the spokesperson for Philip Morris, the company itself asserted that there is not a connection between smoking and lung cancer by virtue of S stating this. Thus, while the operative members’ manipulation of S avoids their lying to her, it does not get Philip Morris off the hook of lying to the public. This can be further supported by considering how easy and convenient it would be for groups, such as corporations, to avoid lying if what Philip Morris did in MANIPULATED SPOKESPERSON failed to count as a lie. Groups could hire naïve spokespeople, mislead them with cherry-picked information, and have false statements thereby conveyed, all while avoiding any responsibility for lying. Surely this is an unwelcome result. MANIPULATED SPOKESPERSON thus reveals another result about group lies: a group can lie to someone with respect to the question whether p, despite the fact that not a single one of its operative members lies regarding this question. SS*, therefore, does not specify necessary conditions for a group lie, either.

5.7 The Joint Acceptance Account of Group Lies

As we have seen, a common move that is made when summative accounts of group phenomena fail is to embrace joint acceptance theories. Recall that, according to these theories, collective phenomena, such as group belief and group justification, must be understood in terms of the joint acceptance of the operative members of the group. On such a view, it is neither necessary nor sufficient for a group state that its individual members instantiate that state. Such an approach, then, might lend itself very nicely to accounting for the cases that are problematic for SS*. Specifically, perhaps group lies can be understood in terms of joint acceptance or agreement in the following way: JAA: A group, G, lies to B in stating that p if and only if most of the operative members of G jointly agree to lie (in the sense found in LIE) to B in stating that p.30 The JAA delivers the correct result in PERSONAL LIES with ease: even though M1–M3 each lies to B in stating that there is not a causal connection between smoking and lung cancer, they do not jointly agree to lie to B, and thus Philip Morris doesn’t lie to B either. This view also seems to provide the right verdict in MANIPULATED SPOKESPERSON, at least on a certain reading of the case. If, for instance, the board of directors jointly agreed to strategically present only the very small body of scientific evidence that indicates that there is not a connection between smoking and lung cancer, and this was done with the intention of being deceptive toward the spokesperson, then it might be argued that this amounts to the operative members agreeing to lie to the public regarding this question. Thus, Philip Morris would end up lying to the public, too. But it is not difficult to see that the JAA fails as a general account of group lies. On the one hand, the operative members of a group jointly agreeing to lie isn’t sufficient for a group’s lying.

Suppose, for instance, that all of the operative members of Philip Morris conceal from one another their personal beliefs regarding the safety of smoking and yet, because of peer pressure, they all nonetheless agree to lie to the public that smoking is safe. If it turns out that all of the operative members in fact believe that smoking is safe, and there is a collective commitment to the proposition that smoking is safe, then the group hasn’t lied to the public in saying that it is. This is evidenced by the fact that if the public learned all of the details of the case, they might regard Philip Morris as ignorant and misinformed for believing that smoking is safe, and they might even regard it as deceitful—given the members’ agreement to lie—but they wouldn’t say that the company lied. This is because in every sense, Philip Morris believes that smoking is safe when it states that it is, despite the joint agreement to lie about this. In this way, just as agreeing to be happy does not in fact make it the case that one is happy, agreeing to lie does not in fact make it the case that one lies. On the other hand, the operative members of a group jointly agreeing to lie isn’t necessary for a group’s lying. For instance, suppose that everything in MANIPULATED SPOKESPERSON is exactly the same, except that the board of directors strategically presented only the very small body of scientific evidence that indicates that there is not a connection between smoking and lung cancer without jointly agreeing to do so. Philip Morris seems to be engaged in a lie no less than in the original scenario. Yet, according to the JAA, these are instances of two radically different phenomena, one involving a group lie while the other doesn’t. Moreover, we can imagine a slight variation to the paradigmatic group lie—TOBACCO COMPANY—presented earlier in this chapter: suppose that the members of the board of directors of Philip Morris decide to never jointly agree to lie to the public precisely to avoid being responsible for lying. Thus, the company is aware of the massive amounts of scientific evidence revealing the causal connection between smoking and lung cancer and yet they deny such a connection to the public with the intention to be deceptive. According to the JAA, Philip Morris doesn’t lie in such a case because of the lack of joint acceptance to do so. But, surely, this is the wrong result. It is worth noting that these objections are not ones that can be avoided through simple modification. Joint acceptance or agreement necessarily requires intentional activity on the part of the members of the group at issue. Indeed, as we have seen in earlier chapters, it is this very feature that enables such accounts of group phenomena to avoid the problems facing summative views. Regardless of what the individual members believe, for example, the group’s doxastic states depend crucially on what the group chooses to do. Similarly, even if each member of a group strategically reports what she herself does not believe with the explicit intention to be deceptive, the group itself does not lie unless there is joint agreement to do so. However, while the intentional component of joint activity is what allows for the response to summativism, it is also what leaves the view open to decisive counterexample. For simply jointly agreeing to lie does not make it the case that a group lies and merely refusing to jointly agree to lie does not necessarily prevent a group from lying. 
Hence, it is the very heart of a joint acceptance account of lying that is the problem.

5.8 Group Lies

We have seen that group lies cannot be understood in summative terms since a group’s lying can diverge significantly from the lies of its individual members. In particular, a group can fail to lie
even though every operative member does, and a group can lie even though no operative member does. We have also seen that the standard non-summative move of embracing a joint acceptance account doesn’t help here since a group can fail to lie even though every operative member jointly agrees to, and a group can lie even when there is no joint acceptance to do so. So where does that leave us? I propose that we understand group lies in the following way: G-LIE: A group, G, lies to B if and only if (1) G states that p to B, (2) G believes that p is false, and (3) G intends to be deceptive to B with respect to whether p in stating that p. The key lesson that we have learned earlier is that group lies need to be, in an important sense, made by the group rather than by the individual members. G-LIE captures this by having the group be the agent at the center of the view. Moreover, condition (1) can be understood in terms of the account of group assertion offered in Chapter 4, and (2) can be fleshed out in terms of the view of group belief developed in Chapter 1. As for (3), unlike with group belief and group assertion, much work has been done on group intentions,31 so this condition can be filled out with one’s preferred view.32 Group lies, then, should be understood in terms of groups offering either coordinated or authority-based assertions. What is important is that just as groups can assert that p without any individual member asserting that p, groups can lie that p without any individual member lying that p. This divergence between what groups do and what their members do can be explained precisely through spokespersons possessing authority to assert on behalf of the group. This is why group assertion is the core feature of group lying: while others might have the authority to speak or assert on our behalf, it is far more puzzling how others might have the authority to believe or intend on our behalf. Thus, the account of group assertion offered in Chapter 4 is what enables an understanding of the distinctiveness of group lies. Moreover, the account of authority-based group assertion developed in the previous chapter provides the framework for understanding not only group lies, but also other phenomena in the neighborhood. For instance, while the statement in MANIPULATED SPOKESPERSON is clearly a group lie, the assertion in AUTONOMOUS SPOKESPERSON from Chapter 4 might be better characterized as group bullshit. This is because while Philip Morris clearly believes that it is false that smoking does not cause lung cancer in MANIPULATED SPOKESPERSON, thereby satisfying (2) of G-LIE, the company does not have any relevant beliefs about disease X in AUTONOMOUS SPOKESPERSON. In this sense, it might fit better with Harry Frankfurt’s description of bullshit, which, it may be recalled, he describes as follows: It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it. When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. 
He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose. (Frankfurt 2005, pp. 55–6)

Because Philip Morris doesn’t have any relevant beliefs about disease X in AUTONOMOUS SPOKESPERSON but states that it is safe merely to suit its economic purposes, the assertion seems to be a case of group bullshit. My authority-based account of group assertion provides the resources
for capturing this: because the spokesperson has the authority to assert on behalf of Philip Morris when the safety of smoking is concerned, the assertion about disease X is Philip Morris’s, and thus the company is the one bullshitting, not the spokesperson. Finally, it is worth pointing out that the model of group assertion and group lies presented in this book might be viewed as providing the framework for an account of group action more broadly. That is, group action in general might be understood as either coordinated or authority-based, with the latter involving, not always a spokesperson, but another agent who has the authority to act on behalf of the group. In this way, the view developed here can shape our grasp of group agency and, therewith, group responsibility in ways that go far beyond lying and bullshitting.
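Before turning to the conclusion, it may help to set out, in the same informal notation used above, how G-LIE differs structurally from the summative and joint acceptance views rejected in this chapter. The labels remain a convenience only; what matters is that in G-LIE the three conditions are predicated of the group itself, with condition (1) cashed out by the account of group assertion from Chapter 4, condition (2) by the account of group belief from Chapter 1, and condition (3) by one’s preferred account of group intention:

$$\mathrm{SS}^{*}{:}\quad \mathrm{GroupLie}(G, B, p) \;\leftrightarrow\; \text{most } m \in \mathrm{Op}(G, p) \text{ satisfy } \mathrm{Lie}(m, B, p)$$

$$\mathrm{JAA}{:}\quad \mathrm{GroupLie}(G, B, p) \;\leftrightarrow\; \text{most } m \in \mathrm{Op}(G, p) \text{ jointly agree to lie to } B \text{ in stating that } p$$

$$\text{G-LIE}{:}\quad \mathrm{GroupLie}(G, B, p) \;\leftrightarrow\; \mathrm{States}(G, B, p) \,\wedge\, \mathrm{BelFalse}_{G}(p) \,\wedge\, \mathrm{IntDeceive}(G, B, p)$$

The counterexamples of this chapter show that the two sides of the biconditional can come apart in both directions for SS* and JAA, whereas G-LIE locates the stating, the false belief, and the intention to be deceptive in the group agent directly.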

5.9 Conclusion

In this chapter, I have provided an account of group lies. On my view, a group’s lying cannot be understood merely in terms of features that take place at the level of its members, such as their offering individual lies or jointly agreeing to lie. Instead, it is the group itself that lies, in virtue of the group’s stating that p, the group’s believing that p is false, and the group’s intending to be deceptive with respect to whether p in stating that p. A central virtue of my account is that it appreciates the unique relationship that exists between most groups and their spokespersons, as well as the subtle and complex interactions made possible by that relationship, such as the possibility that what a group says may come apart from what its individual members say. In these ways, my view provides the basis not only for understanding how groups are responsible for their linguistic behavior, but also for determining when it is appropriate to trace this responsibility to the individual members of the group and the spokespersons who represent them.

1 http://www.reuters.com/article/2013/09/30/us-bp-trial-idUSBRE98T13U20130930, accessed July 20, 2019. 2 Lackey (2018b) is the only exception. 3 Proponents of various versions of the traditional view include Isenberg (1964), Chisholm and Feehan (1977), Williams (2002), and Mahon (2008). 4 See Sorensen (2007), Fallis (2009), and Carson (2010). 5 Such a move is made explicitly by Fallis: shortly after presenting counterexamples to the condition that lying requires the intention to deceive, he concludes, “These cases show that lying is not always about deception” (Fallis 2009, p. 43). 6 For an analysis of what it means for one to state that p, see Chisholm and Feehan (1977). 7 Quoted from The American Heritage Dictionary in Sorensen (2010). 8 This is a slightly modified version of an example found in Carson (2010). 9 Though he calls them “cynical assertions,” Kenyon (2003) also discusses bald-faced lies. However, because he assumes the truth of LIE-T, he concludes that such assertions are not lies. This seems problematic, not only because bald-faced lies are called lies in our ordinary talk, but also because our corresponding actions support this talk, e.g., we would charge one with perjury for offering a bald-faced lie on the stand, we would regard someone as a liar who repeatedly made such assertions, and so on. 10 This is a modified version of a case found in Carson (2010). 11 Carson also includes a condition requiring the actual falsity of the proposition that is being stated. But to my mind, the necessity of this condition is decisively refuted by a case that he himself discusses (Carson 2010, p. 16). On this point, then, I am
in agreement with Augustine when he writes, “…a person is to be judged as lying or not lying according to the intention of his own mind, not according to the truth or falsity of the matter itself” (Augustine 1952 [395], p. 55). Broncano-Berrocal (2013) objects to my account of lying on the grounds that I fail to acknowledge that lies are necessarily false but, as should be clear, I regard this as a virtue of, rather than a problem with, my view. 12 See Frankfurt (2005) for a discussion of bullshit. 13 According to Carson, one warrants the truth of a statement when one makes a statement in a context where “one promises or guarantees, either explicitly or implicitly, that what one says is true” (Carson 2010, p. 26). Moreover, whether one warrants the truth of a statement is independent of what one intends or believes. 14 According to Sorensen, the only condition on assertion is that it must have “narrow plausibility,” where this is understood as follows: “someone who only had access to the assertion might believe it.” “Wide plausibility,” in contrast, is “credibility relative to one’s total evidence” (Sorensen 2007, p. 255). Moreover, “[m]uch of what we say does not constitute assertion. We signal a lack of assertive force by clear falsity (as with metaphor) or by implausibility” (Sorensen 2007, p. 256). 15 Carson (2010) is interested in both lying and deception, but is clear that he regards the latter as not necessary for the former. 16 For further discussion of the distinction between withholding and concealing information, see Carson (2010,pp. 56–7). 17 Fallis (2014) objects to my treatment of bald-faced lies as follows: “it is clear that the student does not aim to conceal his confession. There is no confession to be concealed since he has not confessed” (p. 90). My claim is not that the student is concealing his confession, but that he is concealing evidence which would be conveyed through his confession—namely, his knowledge or memory of what actually took place. 18 It should be noted that there is the relevant common knowledge only in the case of the bald-faced lie and the coercion lie. In the knowledge-lie, Crassus obviously does not know the identity of Spartacus. 19 Staffel (2011) challenges that Sorensen’s knowledge-lies are counterexamples to LIE-T by claiming that he assumes that deception occurs only when someone is brought to flat-out believe a false proposition. She argues, however, that “[t]his notion of deception is implausibly narrow, because it overlooks the possibility of deceiving someone by merely making her more confident in a falsehood” (Staffel 2011, p. 301). While I agree with Staffel both that the conception of deception that is assumed in the arguments against LIE-T is too narrow and that a speaker can deceive a hearer by making her more confident in a falsehood, I am obviously interested in a different notion of deception in this paper. Also, Staffel grants that there are “atypical” cases in which knowledge-lies fail to deceive, but this is not true when my broader notion of deception is at work. It should also be noted that this point—that one can deceive another by making her more confident in a false belief—was already made by Krishna (1961) and Chisholm and Feehan (1977). (Thanks to Don Fallis for the references.) 20 See Lackey (2007 and 2008). 21 An article in The New York Times (February 12, 2007, “Believing Scripture but Playing by Science’s Rules”) about Dr. Marcus R. Ross, a creationist who also holds a geosciences Ph.D. 
in paleontology, makes clear that the situation described in CREATIONIST TEACHER is by no means merely a thought experiment. As the author of the article writes, “For him, Dr. Ross said, the methods and theories of paleontology are one ‘paradigm’ for studying the past, and Scripture is another. In the paleontological paradigm, he said, the dates in his dissertation are entirely appropriate. The fact that as a young earth creationist he has a different view just means, he said, ‘that I am separating the different paradigms.’” (I am grateful to Cristina Lafont for bringing this article to my attention.) 22 For other cases of selfless assertion that pose a problem for LIE-F, LIE-C, and LIE-S, see Lackey (2007). 23 In Fallis (2009, pp. 51–3), he discusses at length how this conversational norm can be overridden or defeated. This case can also be understood as Stella choosing to violate Grice’s first norm of quality—Do not make statements that you believe to be false—in order to obey his second norm of quality—Do not say that for which you lack adequate evidence. For more on this, see Grice (1989) and note 25 below. 24 It is important to keep in mind that Carson believes that one can intend to warrant the truth of a proposition even when one is lying, and thus one can promise one’s hearer that what one says is true, even when one knows that it is false. It is comparable to making a promise that one knows one cannot keep. 25 Interestingly, Fallis considers a version of my CREATIONIST TEACHER (which was used for a different purpose in the paper that he cites), but does not seem to recognize the full force of the case. He writes: Norms of conversation can ‘clash’ with each other as well as with other interests that we have .…For example when a teacher who believes in creationism has to give a lesson on evolution, Grice’s first maxim of quality comes into conflict with Grice’s second maxim of quality. If the teacher violates the norms against saying what she believes to be false solely in order to obey the norm against saying that for which she lacks adequate evidence, some…might want to say that she is not lying. In order to accommodate that intuition, my definition might be modified to include an exemption for such cases. (Fallis 2009, p. 52, note 74) It is unclear what sort of “exemption” could be added to Fallis’s account of lying to respond to this counterexample that would not simply be ad hoc. Moreover, by regarding this case as requiring only such a slight modification, it seems to have blinded Fallis to the far deeper point that there is a necessary connection between lying and deception. The positive view that I defend respects this point, provides a unified account of lying, and does not need to resort to such ad hoc moves.

26 I here set aside LIE-S since it does not provide a third condition that is intended to be a substitute for the intention-to-deceive requirement. 27 Fallis (2009) presents a counterexample to Carson’s view where a witness to a murder follows up his statement that “Tony was with me at the time of the murder” by saying, “Of course, you know I am really bad with dates and times” (p. 49). Carson responds to this case as follows: “If the proviso ‘you know that I am bad with dates’ is intended to weaken, but not remove, the assurance of truth, then my…definition counts this statement as a case of lying. On the other hand, if the proviso is intended to completely remove or nullify any assurance of truth then the statement is not a lie” (Carson 2010, pp. 38–9). Even if this response works with respect to Fallis’s case, it does not seem at all plausible with respect to SABOTAGING FRIEND. For there is no doubt that Fran is lying to Sam, yet she also clearly intends to nullify any assurance of truth by explicitly disavowing responsibility for the truth of the statement. 28 See, for instance, Tuomela (2004). 29 Suppose that one were to argue that this is not enough to prevent the statements to B from being group lies because M1–M3 might be implicitly relying on a kind of authority from being known to be experts on tobacco. In this case, even though they are not speaking in their official capacity, what they say may be taken to represent the group’s view anyway. This complication can be avoided by simply adding to the case that B is entirely unaware that M1–M3 work at Philip Morris. 30 Given that Fricker (2012) argues for a broadly joint acceptance account of group testimony—where a group’s testifying requires a joint commitment to trustworthiness—it is not implausible to think she might espouse a joint acceptance account of group lying—where a group’s lying requires a joint commitment to untrustworthiness. 31 See, for instance, Tuomela (2006), Chant and Ernst (2007), Ludwig (2007), Gilbert (2009), Pacherie (2013), and Kopec and Miller (2018). 32 This is not entirely true, as some views of group intention will be unable to accommodate most of the instances of lying discussed in this paper. For instance, accounts of intention that require common knowledge, such as Bratman’s (1993) highly influential view of shared intention, would be at odds with the view of group lies developed here.

References Adler, Jonathan E. 2002. Belief’s Own Ethics. Cambridge, MA: The MIT Press. Alston, William P. 1988. “The Deontological Conception of Epistemic Justification.” Philosophical Perspectives 2: 257–99. Antony, Louise. 2011. “Against Langton’s Illocutionary Treatment of Pornography.” Jurisprudence 2: 387–401. Audi, Robert. 1997. “The Place of Testimony in the Fabric of Knowledge and Justification.” American Philosophical Quarterly 34: 405–22. Audi, Robert. 1998. Epistemology: A Contemporary Introduction to the Theory of Knowledge. London: Routledge. Audi, Robert. 2006. “Testimony, Credulity, and Veracity,” in Jennifer Lackey and Ernest Sosa (eds.), The Epistemology of Testimony. Oxford: Oxford University Press, 25–49. Augustine. 1952 [395]. “Lying” in Roy. J. Deferrari (ed.), Treatises on Various Subjects, Volume XVI. New York: Fathers of the Church, 53–120. Austin, J. L. 1962. How to Do Things with Words. Cambridge, MA: Harvard University Press. Bergmann, Michael. 1997. “Internalism, Externalism and the No-Defeater Condition.” Synthese 110: 399–417. Bird, Alexander. 2010. “Social Knowing: The Social Sense of ‘Scientific Knowledge’.” Philosophical Perspectives 24: 23–56. Bird, Alexander. 2014. “When Is there a Group that Knows? Distributed Cognition, Scientific Knowledge, and the Social Epistemic Subject,” in Jennifer Lackey (ed.), Essays in Collective Epistemology. Oxford: Oxford University Press, 42–63. BonJour, Laurence. 1980. “Externalist Theories of Epistemic Justification.” Midwest Studies in Philosophy 5: 53–73. BonJour, Laurence. 1985. The Structure of Empirical Knowledge. Cambridge, MA: Harvard University Press. BonJour, Laurence and Ernest Sosa. 2003. Epistemic Justification: Internalism vs. Externalism, Foundations vs. Virtues. Oxford: Blackwell Publishing. Bratman, Michael. 1993. “Shared Intention.” Ethics 104: 97–113. Briggs, Rachael, Fabrizio Cariani, Kenny Easwaran, and Branden Fitelson. 2014. “Individual Coherence and Group Coherence,” in Jennifer Lackey (ed.), Essays in Collective Epistemology. Oxford: Oxford University Press, 215–39. Broncano-Berrocal, Fernando. 2013. “Lies and Deception: A Failed Reconciliation.” Logos and Episteme 4: 227–30. Burge, Tyler. 1993. “Content Preservation.” The Philosophical Review 102: 457–88. Burge, Tyler. 1997. “Interlocution, Perception, and Memory.” Philosophical Studies 86: 21–47. Cariani, Fabrizio. 2011. “Judgment Aggregation.” Philosophy Compass 6: 22–32. Cariani, Fabrizio. 2013. “Aggregating with Reasons.” Synthese 190: 3123–47. Carson, Thomas L. 2010. Lying and Deception: Theory and Practice. Oxford: Oxford University Press. Carter, J. Adam. 2015. “Group Knowledge and Epistemic Defeat.” Ergo 2: 711–35. Chant, Sara Rachel and Zachary Ernst. 2007. “Group Intentions as Equilibria.” Philosophical Studies 133: 95–109. Chisholm, Roderick M. 1989. Theory of Knowledge, 3rd edn. Englewood Cliffs, NJ: Prentice-Hall. Chisholm, Roderick M. and Thomas D. Feehan. 1977. “The Intent to Deceive.” The Journal of Philosophy 74: 143–59. Christensen, David. 2004. Putting Logic in Its Place. Oxford: Oxford University Press. Coady, C. A. J. 1992. Testimony: A Philosophical Study. Oxford: Clarendon Press. Cohen, L. Jonathan. 1989. “Belief and Acceptance.” Mind 98: 367–89. Cohen, L. Jonathan. 1992. An Essay on Belief and Acceptance. Oxford: Clarendon Press. Cohen, Stewart. 2004. “Knowledge, Assertion, and Practical Reasoning,” in Ernest Sosa and Enrique Villanueva (eds.), Philosophical Issues 14: 482–91. Corlett, J. Angelo. 1996. 
Analyzing Social Knowledge. Lanham, MD: Rowman & Littlefield. Corlett, J. Angelo. 2007. “Analyzing Social Knowledge.” Social Epistemology 21: 231–47. Craig, Edward. 1990. Knowledge and the State of Nature: An Essay in Conceptual Synthesis. Oxford: Clarendon Press. Davidson, Donald. 2001. Essays on Actions and Events. Oxford: Oxford University Press. DeRose, Keith. 2002. “Assertion, Knowledge, and Context.” The Philosophical Review 111: 167–203. Dietrich, Franz. 2005. “Judgment Aggregation: (Im)possibility Theorems.” Journal of Economic Theory 126: 286–98. Douven, Igor. 2006. “Assertion, Knowledge, and Rational Credibility.” The Philosophical Review 115: 449–85. Elgin, Catherine Z. 2002. “Take It from Me: The Epistemological Status of Testimony.” Philosophy and Phenomenological Research 65: 291–308. Fallis, Don. 2006. “Epistemic Value Theory and Social Epistemology.” Episteme 2: 177–88. Fallis, Don. 2009. What Is Lying?” The Journal of Philosophy 106: 29–56. Fallis, Don. 2014. “Are Bald-Faced Lies Deceptive After All?” Ratio 28: 81–96. Fantl, Jeremy and Matthew McGrath. 2002. “Evidence, Pragmatics, and Justification.” The Philosophical Review 111: 67–94.

Fantl, Jeremy and Matthew McGrath. 2009. Knowledge in an Uncertain World. Oxford: Oxford University Press. Faulkner, Paul. 2006. “On Dreaming and Being Lied To.” Episteme 3: 149–59. Foley, Richard. 1987. The Theory of Epistemic Rationality. Cambridge, MA: Harvard University Press. Frankfurt, Harry G. 2005. On Bullshit. Princeton, NJ: Princeton University Press. Fricker, Elizabeth. 1987. “The Epistemology of Testimony.” Proceedings of the Aristotelian Society, 61 (supp): 57–83. Fricker, Elizabeth. 1994. “Against Gullibility,” in Bimal Krishna Matilal and Arindam Chakrabarti (eds.), Knowing from Words. Dordrecht: Kluwer Academic Publishers, 125–61. Fricker, Elizabeth. 1995. “Telling and Trusting: Reductionism and Anti-Reductionism in the Epistemology of Testimony.” Mind 104: 393–411. Fricker, Elizabeth. 2006. “Second-hand Knowledge.” Philosophy and Phenomenological Research 73: 592–618. Fricker, Miranda. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press. Fricker, Miranda. 2010. “Can There Be Institutional Virtues?” in Tamar Szabo Gender and John Hawthorne (eds.), Oxford Studies in Epistemology. Oxford: Oxford University Press, 235–52. Fricker, Miranda. 2012. “Group Testimony: The Making of a Collective Good Informant.” Philosophy and Phenomenological Research 84: 249–76. Fumerton, Richard. 2004. “Epistemic Probability.” Philosophical Issues 14: 149–64. Gettier, Edmund. 1963. “Is Justified True Belief Knowledge?” Analysis 23: 121–3. Gilbert, Margaret. 1987. “Modelling Collective Belief.” Synthese 73: 185–204. Gilbert, Margaret. 1989. On Social Facts. London and New York: Routledge. Gilbert, Margaret. 1993. “Agreements, Coercion, and Obligation.” Ethics 103: 679–706. Gilbert, Margaret. 1994. “Remarks on Collective Belief,” in Frederick F. Schmitt (ed.), Socializing Epistemology: The Social Dimensions of Knowledge. Lanham, MD: Rowman & Littlefield, 235–55. Gilbert, Margaret. 2002. “Belief and Acceptance as Features of Groups.” Protosociology 16: 35–69. Gilbert, Margaret. 2004. “Collective Epistemology.” Episteme 1: 95–107. Gilbert, Margaret. 2009. “Shared Intention and Personal Intentions.” Philosophical Studies 144: 167–87. Gilbert, Margaret. 2013. Joint Commitment: How We Make the Social World. Oxford: Oxford University Press. Gilbert, Margaret and Daniel Pilchman. 2014. “Belief, Acceptance, and What Happens in Groups: Some Methodological Considerations,” in Jennifer Lackey (ed.), Essays in Collective Epistemology. Oxford: Oxford University Press, 189–212. Goldberg, Sanford C. 2015. Assertion: On the Philosophical Significance of Assertoric Speech. Oxford: Oxford University Press. Goldberg, Sanford C. 2017. “Should Have Known.” Synthese 194: 2863–94. Goldman, Alvin. I. 1979. “What Is Justified Belief?” in George Pappas (ed.), Justification and Knowledge. Dordrecht: Reidel, 89–104. Goldman, Alvin. I. 1986. Epistemology and Cognition. Cambridge, MA: Harvard University Press. Goldman, Alvin. I. 2004. “Group Knowledge Versus Group Rationality: Two Approaches to Social Epistemology.” Episteme 1: 11–22. Goldman, Alvin. I. 2014. “Social Process Reliabilism: Solving Justification Problems in Collective Epistemology,” in Jennifer Lackey (ed.), Essays in Collective Epistemology. Oxford: Oxford University Press, 11–41. Graham, Peter J. 1997. “What Is Testimony?” The Philosophical Quarterly 47: 227–32. Grice, Paul. 1989. Studies in the Way of Words. Cambridge, MA: Harvard University Press. Hagemann, Thomas A. and Joseph Grinstein. 1997. 
“The Mythology of Aggregate Corporate Knowledge: A Deconstruction.” George Washington Law Review 65: 210–47. Hakli, Raul. 2007. “On the Possibility of Group Knowledge without Group Belief.” Social Epistemology 21: 249–66. Hakli, Raul. 2011. “On Dialectical Justification of Group Beliefs,” in Hans Bernhard Schmid, Daniel Sirtes, and Marcel Weber (eds.), Collective Epistemology. Frankfurt: Ontos Verlag, 119–53. Hardwig, John. 1985. “Epistemic Dependence.” The Journal of Philosophy 82: 335–49. Hardwig, John. 1991. “The Role of Trust in Knowledge.” The Journal of Philosophy 88: 693–708. Harman, Gilbert. 1973. Thought. Princeton NJ: Princeton University Press. Hawthorne, John. 2004. Knowledge and Lotteries. Oxford: Oxford University Press. Hawthorne, John and Jason Stanley. 2008. “Knowledge and Action.” The Journal of Philosophy 105: 571–90. Huebner, Bryce. 2014. Macrocognition: A Theory of Distributed Minds and Collective Intentionality. New York: Oxford University Press. Hughes, Justin. 1984. “Group Speech Acts.” Linguistics and Philosophy 7: 379–95. Hutchins, Edwin. 1995. Cognition in the Wild. Cambridge, MA: The MIT Press. Isenberg, Arnold. 1964. “Deontology and the Ethics of Lying.” Philosophy and Phenomenological Research 24: 465–80. Kallestrup, Jesper. 2016. “Group Virtue Epistemology.” Synthese doi: 10.1007/s11229-016-1225-7. Kenyon, Tim. 2003. “Convention, Pragmatics, and Saying ‘Uncle’.” American Philosophical Quarterly 40: 241–8. Klausen, Søren Harnow. 2015. “Group Knowledge: A Real-World Approach.” Synthese 192: 813–39. Klein, Peter. 1985. “The Virtues of Inconsistency.” The Monist 68: 105–35. Kolodny, Niko. 2007. “How Does Coherence Matter?” Proceedings of the Aristotelian Society 107: 229–63.

Kopec, Matthew and Seumas Miller. 2018. “Shared Intention Is Not Joint Commitment.” Journal of Ethics and Social Philosophy 13: 179–89. Krishna, Daya. 1961. “‘Lying’ and the Compleat Robot.” British Journal for the Philosophy of Science 12: 146–9. Kukla, Rebecca. 2012. “‘Author TBD’: Collaboration in Contemporary Biomedical Research. Philosophy of Science 79: 845–58. Kyburg, Henry E. 1961. Probability and the Logic of Rational Belief. Middleton, CT: Wesleyan University Press. Kyburg, Henry E. 2001. “Probability as a Guide in Life.” The Monist 84: 135–52. Lackey, Jennifer. 2006. “The Nature of Testimony.” Pacific Philosophical Quarterly 87: 177–97. Lackey, Jennifer. 2007. “Norms of Assertion.” Noûs 41: 594–626. Lackey, Jennifer. 2008. Learning from Words: Testimony as a Source of Knowledge. Oxford: Oxford University Press. Lackey, Jennifer. 2010. “Acting on Knowledge.” Philosophical Perspectives 24: 361–82. Lackey, Jennifer. 2013. “Lies and Deception: An Unhappy Divorce.” Analysis 73: 236–48. Lackey, Jennifer. 2014a. “A Deflationary Account of Group Testimony,” in Jennifer Lackey (ed.), Essays in Collective Epistemology. Oxford: Oxford University Press, 64–94. Lackey, Jennifer. 2014b. “Socially Extended Knowledge.” Philosophical Issues 24: 282–98. Lackey, Jennifer. 2016. “What Is Justified Group Belief?” The Philosophical Review 125: 341–96. Lackey, Jennifer. 2018a. “Group Assertion.” Erkenntnis 83: 21–42. Lackey, Jennifer. 2018b. “Group Lies,” in Eliot Michaelson and Andreas Stokke (eds.), Lying: Language, Knowledge, Ethics, and Politics. Oxford: Oxford University Press, 262–84. Lahroodi, Reza. 2007. “Collective Epistemic Virtues.” Social Epistemology 21: 281–97. Langton, Rae. 2009. Sexual Solipsism: Philosophical Essays on Pornography and Objectification. Oxford: Oxford University Press. Levi, Isaac. 1962. “On the Seriousness of Mistakes.” Philosophy of Science 29: 47–65. List, Christian. 2005. “Group Knowledge and Group Rationality: A Judgment Aggregation Perspective.” Episteme 2: 25–38. List, Christian. 2014a. “Three Kinds of Collective Attitudes.” Erkenntnis 79(9): 1601–22. List, Christian. 2014b. “When to Defer to Supermajority Testimony—and When Not,” in Jennifer Lackey (ed.), Essays in Collective Epistemology. Oxford: Oxford University Press, 240–9. List, Christian and Phillip Pettit. 2002. “Aggregating Sets of Judgments: An Impossibility Result.” Economics and Philosophy 18: 89–110. List, Christian and Phillip Pettit. 2004. “Aggregating Sets of Judgments: Two Impossibility Results Compared.” Synthese 140: 207–35. List, Christian and Phillip Pettit. 2011. Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford: Oxford University Press. Ludwig, Kirk. 2007. “Collective Intentional Behavior from the Standpoint of Semantics.” Nôus 41: 355–93. Ludwig, Kirk. 2014. “Proxy Agency in Collective Action.” Nôus 48: 75–105. Mahon, James Edwin. 2008. “The Definition of Lying and Deception,” in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (fall 2008 edn). Available at: https://plato.stanford.edu/entries/lying-definition/ accessed April 27, 2020. Makinson, David. 1965. “The Paradox of the Preface.” Analysis 25: 205–7. Makinson, David. 2012. “Logical Questions Behind the Lottery and Preface Paradoxes: Lossy Rules for Uncertain Inference.” Synthese 186: 511–29. Mathiesen, Kay. 2011. “Can Groups Be Epistemic Agents?,” in Hans Bernhard Schmid, Daniel Sirtes, and Marcel Weber (eds.), Collective Epistemology. Frankfurt: Ontos Verlag, 23–45. McDowell, John. 1994. 
“Knowledge by Hearsay,” in Bimal Krishna Matilal and Arindam Chakrabarti (eds.), Knowing from Words. Dordrecht: Kluwer Academic Publishers, 195–224. McKinnon, Rachel. 2013. “The Supportive Reasons Norm of Assertion.” American Philosophical Quarterly 50: 121–35. McMahon, Christopher. 2003. “Two Modes of Collective Belief.” Protosociology 18/19: 347–62. Meijers, A. W. M. 2002. “Collective Agents and Cognitive Attitudes.” Protosociology 16: 70–85. Mokyr, Joel. 2002. The Gifts of Athena. Princeton, NJ: Princeton University Press. Nozick, Robert. 1981. Philosophical Explanations. Cambridge, MA: The Belknap Press. Owens, David. 2000. Reason without Freedom: The Problem of Epistemic Normativity. London: Routledge. Owens, David. 2006. “Testimony and Assertion.” Philosophical Studies 130: 105–29. Pacherie, Elisabeth. 2013. “Intentional Joint Agency: Shared Intention Lite.” Synthese 190: 1817–39. Pauly, Marc and Martin van Hees. 2006 “Logical Constraints on Judgement Aggregation.” Journal of Philosophical Logic 35: 569–85. Pettit, Philip. 2003. “Groups with Minds of Their Own,” in Frederick Schmitt (ed.), Socializing Metaphysics. New York: Rowman & Littlefield, 167–93. Plantinga, Alvin. 1993. Warrant and Proper Function. Oxford: Oxford University Press. Platts, Mark. 1979. Ways of Meaning. London: Routledge and Kegan Paul. Pollock, John. 1986. Contemporary Theories of Knowledge. Totowa, NJ: Rowman & Littlefield. Quinton, Anthony. 1975/1976. “Social Objects.” Proceedings of the Aristotelian Society 75: 1–27. Ragozino, Anthony. 1995. “Replacing the Collective Knowledge Doctrine with a Better Theory for Establishing Corporate Mens
Rea: The Duty Stratification Approach.” Southwestern University Law Review 24: 423–72. Reed, Baron. 2006. “Epistemic Circularity Squared? Skepticism about Common Sense.” Philosophy and Phenomenological Research 73: 186–97. Reynolds, Steven L. 2002. “Testimony, Knowledge, and Epistemic Goals.” Philosophical Studies 110: 139–61. Riggs, Wayne D. 2008. “Epistemic Risk and Relativism.” Acta Analytica 23: 1–8. Ritchie, Katherine. 2013. “What Are Groups?” Philosophical Studies 166: 257–72. Ross, Angus. 1986. “Why Do We Believe What We Are Told?” Ratio 28: 69–88. Rupert, Robert D. 2005. “Minding One’s Cognitive Systems: When Does a Group of Minds Constitute a Single Cognitive Unit?” Episteme 1: 177–88. Rupert, Robert D. 2011. “Empirical Arguments for Group Minds: A Critical Appraisal.” Philosophy Compass 6: 630–9. Saul, Jennifer Mather. 2012. Lying, Misleading, and What Is Said. Oxford: Oxford University Press. Schmitt, Frederick F. 1994. “The Justification of Group Beliefs,” in Frederick F. Schmitt (ed.), Socializing Epistemology: The Social Dimensions of Knowledge. Lanham, MD: Rowman & Littlefield, 257–87. Schmitt, Frederick F. 2006. “Testimonial Justification and Transindividual Reasons,” in Jennifer Lackey and Ernest Sosa (eds.), The Epistemology of Testimony. Oxford: Oxford University Press, 193–224. Searle, John. 1995. The Construction of Social Reality. New York: Free Press. Silva Jr., Paul. 2019. “Justified Group Belief Is Evidentially Responsible Group Belief.” Episteme 16: 262–81. Sorensen, Roy. 2007. “Bald-faced Lies! Lying without the Intent to Deceive.” Pacific Philosophical Quarterly 88: 251–64. Sorensen, Roy. 2010. “Knowledge-Lies.” Analysis 70: 608–15. Staffel, Julia. 2011. “Reply to Roy Sorensen, ‘Knowledge-Lies’.” Analysis 71: 300–2. Stalnaker, Robert. 1984. Inquiry. Cambridge, MA: The MIT Press. Stanley, Jason. 2005. Knowledge and Practical Interests. Oxford: Oxford University Press. Tollefsen, Deborah. 2007. “Group Testimony.” Social Epistemology 21: 299–311. Tollefsen, Deborah. 2009. “Wikipedia and the Epistemology of Testimony.” Episteme 6: 8–24. Tollefsen, Deborah. 2014. “Review of Bryce Huebner’s Macrocognition: A Theory of Distributed Minds and Collective Intentionality.” Notre Dame Philosophical Reviews. Available at: https://ndpr.nd.edu/news/macrocognition-a-theory-ofdistributed-minds-and-collective-intentionality/, accessed April 26, 2020. Tuomela, Raimo. 1992. “Group Beliefs.” Synthese 91: 285–318. Tuomela, Raimo. 1993. “Corporate Intention and Corporate Action.” Analyse und Kritik 15: 11–21. Tuomela, Raimo. 1995. The Importance of Us. Stanford, CA: Stanford University Press. Tuomela, Raimo. 2004. “Group Knowledge Analyzed.” Episteme 1: 109–27. Tuomela, Raimo. 2006. “Joint Intention, We-Mode and I-Mode.” Midwest Studies in Philosophy 30: 35–58. Tuomela, Raimo. 2011. “An Account of Group Knowledge,” in Hans Bernhard Schmid, Daniel Sirtes, and Marcel Weber (eds.), Collective Epistemology. Frankfurt: Ontos Verlag, 75–117. Unger, Peter. 1975. Ignorance: A Case for Skepticism. Oxford: Oxford University Press. van Fraassen, Bas C. 1980. The Scientific Image. Oxford: Clarendon Press. Velasquez, Manuel. 2003. “Debunking Corporate Moral Responsibility.” Business Ethics Quarterly 13: 531–62. Welbourne, Michael. 1979. “The Transmission of Knowledge.” The Philosophical Quarterly 29: 1–9. Welbourne, Michael. 1981. “The Community of Knowledge.” The Philosophical Quarterly 31: 302–14. Welbourne, Michael. 1986. The Community of Knowledge. Aberdeen: Aberdeen University Press. 
Welbourne, Michael. 1994. “Testimony, Knowledge and Belief,” in Bimal Krishna Matilal and Arindam Chakrabarti (eds.), Knowing from Words. Dordrecht: Kluwer Academic Publishers, 297–313.
Wigmore, John H. 1904. “The History of the Hearsay Rule.” Harvard Law Review 17: 437–58.
Williams, Bernard. 2002. Truth and Truthfulness: An Essay in Genealogy. Princeton, NJ: Princeton University Press.
Williams, Michael. 1999. Groundless Belief: An Essay on the Possibility of Epistemology, 2nd edn. Princeton, NJ: Princeton University Press.
Williamson, Timothy. 1996. “Knowing and Asserting.” The Philosophical Review 105: 489–523.
Williamson, Timothy. 2000. Knowledge and Its Limits. Oxford: Oxford University Press.
Williamson, Timothy. 2005. “Contextualism, Subject-Sensitive Invariantism and Knowledge of Knowledge.” The Philosophical Quarterly 55: 213–35.
Wray, K. Brad. 2001. “Collective Belief and Acceptance.” Synthese 129: 319–33.
Wray, K. Brad. 2003. “What Really Divides Gilbert and the Rejectionists.” Protosociology 18/19: 363–76.

Index
For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on occasion, appear on only one of those pages.
ACLU of Illinois 19
Adler, Jonathan 159 n. 25
aggregation 13, 25 n. 13, 28–30, 37–44, 49, 55, 71–76, 78–80, 91, 96–98, 129, 135–137
Alston, William 34 n. 24
Antony, Louise 142 n. 6
Audi, Robert 82 n. 28, 150 n. 19, 163 n. 32
Augustine 166, 170 n. 11
Baril, Anne 152 n. 20
Bergmann, Michael 82 n. 28
Bird, Alexander 7, 8 n. 11, 14, 19 n. 1, 22 n. 8, 111–115, 120–121, 126 n. 24
BonJour, Laurence 82 n. 28–29
Bratman, Michael 147 n. 13, 186 n. 32
Briggs, Rachel 50 n. 33, 99 n. 45
Broncano-Berrocal, Fernando 170 n. 11
Burge, Tyler 82 n. 28, 163 n. 32
Cariani, Fabrizio 50 n. 33, 72 n. 18, 78 n. 24, 99 n. 45
Carson, Thomas 30 n. 20, 166 n. 4, 168–172, 175 n. 24, 177 n. 27
Carter, J. Adam 37 n. 28
Chant, Sara Rachel 186 n. 31
Chisholm, Roderick M. 82 n. 28, 166–167, 173 n. 19
Christensen, David 81 n. 26
Coady, C.A.J. 142 n. 6
Cohen, L. Jonathan 25 n. 12, 144 n. 9
Cohen, Stewart 107 n. 52, 159 n. 25
coherence 50, 99–100 see also incoherence
collective action 3, 17, 141
  proxy agent 14, 17, 94, 116 n. 8, 134, 140–141, 146 n. 11
collective knowledge 7, 14, 72 n. 19, 111–112, 128–137
collective responsibility 2–3 see also group responsibility
Collins, Rep. Chris 1–2
Comesaña, Juan 107 n. 52
Condorcet jury theorem 91
Corlett, J. Angelo 23–24, 72 n. 19
Craig, Edward 155
Davidson, Donald 122 n. 19
defeater 37 n. 28, 57, 62, 81–82, 87–89, 106 n. 51, 108, 123–127
DeRose, Keith 119, 159 n. 25
Dietrich, Franz 28 n. 17, 72 n. 18
discursive dilemma 29 see also doctrinal paradox
divergence arguments 56–57, 59, 62, 67–68, 71
  different evidence cases 56–57, 59, 62–63, 68–71
  different epistemic risk settings cases 56, 58–59, 62, 70–71
doctrinal paradox 29, 97 n. 39 see also discursive dilemma
Douven, Igor 159 n. 26
Easwaran, Kenny 50 n. 33, 99 n. 45
Elgin, Catherine Z. 142 n. 6
Ernst, Zachary 186 n. 31
Fallis, Don 30 n. 20, 58 n. 3, 166 n. 4–5, 169–170, 172–173, 175–177
Fantl, Jeremy 94 n. 35, 117, 119
Faulkner, Paul 163 n. 32
Feehan, Thomas D. 166–167, 173 n. 19
Fitelson, Branden 50 n. 33, 99 n. 45
Foley, Richard 81 n. 26
Frankfurt, Harry G. 32–33, 35, 170 n. 12, 187
Fricker, Elizabeth 82 n. 28, 142 n. 6, 159 n. 25
Fricker, Miranda 22 n. 8, 142 n. 5–6, 154–155, 184 n. 30
Fumerton, Richard 101 n. 48
Gettier, Edmund 56 n. 2, 74–75
Gilbert, Margaret 6–8, 19 n. 1, 22–27, 35 n. 26, 57 n. 3, 186 n. 31
Goldberg, Sanford C. 128 n. 27, 160
Goldman, Alvin I. 26 n. 14, 71–77, 82–83, 92, 96, 101 n. 48
Graham, Peter J. 142 n. 6
Grice, Paul 175 n. 23, 176 n. 25
Grinstein, Joseph 129
group assertion 3, 14, 17–18, 138–140, 146 n. 12, 147, 149–151, 154, 158–159, 161–163, 186–187
  authority-based 14, 138, 140, 148–150, 161–163, 186–188
  coordinated 14, 139, 149–150, 156, 163, 186, 188
  joint acceptance account 154
group belief 4, 6, 10, 12–14, 17–54, 113, 124
  base fragility 45–48, 50, 54, 78 n. 24
  direction of fit 13, 44, 50, 54, 147
  eliminativism 4, 52 n. 35
  Group Agent Account 13, 20, 48–52, 54
  joint acceptance account 12, 17, 20, 24–29, 32–38, 40–41, 43–45, 49, 124
  judgment fragility 41–43, 49
  premise-based aggregation account 28–29, 37, 39–44
  summative view 20–21, 23, 30 n. 18, 40, 48, 50, 52 n. 35
group bullshit 30, 33–37, 39–41, 43, 51–52, 187
  Group Bullshit Desideratum 34, 37
group lies 4–5, 36–41, 43, 48, 51–54, 165–166, 178–188 see also lies
  Group Lie Desideratum 31, 34, 37
  joint acceptance account 183–186
  summativism 178–179, 181, 183, 185–186
group phenomena 3, 6, 105, 179, 183, 185
  deflationary views 3–4, 13–17, 55, 71–72, 75, 88, 92, 95, 98, 107, 109, 138–139, 159–161, 163
  inflationary views 3, 12–18, 55–56, 59–60, 63–64, 67, 71, 92, 95, 98, 100, 107, 109, 111–112, 115, 138–139, 158–159, 161–163
groups 3–20, 23 n. 9, 25 n. 13, 26–27, 29, 32, 34, 48, 53–57, 64, 69–70, 94–96, 99–100, 104, 108, 112–113, 115–116, 120 n. 17, 124, 126–128, 134–135, 138–139, 141–143, 146 n. 11, 148–149, 151, 154–155, 157, 159, 162–164, 166, 179, 183, 186, 188
  deliberative and non-deliberative 9–11, 25 n. 13
  established and non-established 7–8, 11, 112
  operative members 26–27, 29–30, 32–33, 48–50, 54, 64–65, 67, 97–101, 103–106, 111, 133–134, 145, 179–186
  organic 112–113
  subject to normative evaluation 9, 12
  versus mere collections 6, 8–11
  with “minds of their own” 3, 20, 53, 115
group responsibility 123, 188 see also collective responsibility
Hagemann, Thomas A. 129
Hakli, Raul 25 n. 12, 34, 52 n. 35, 61, 144 n. 9
Halliburton 19
Hardwig, John 163 n. 32
Harman, Gilbert 104 n. 50
Hawthorne, John 82 n. 28, 94 n. 35, 101 n. 48, 117, 119, 159 n. 25
Horn, Michael 1
Huckabee Sanders, Sarah 19
Huebner, Bryce 19 n. 1, 126–127
Hughes, Justin 158 n. 24
Hutchins, Edwin 7, 111–112
incoherence 12, 46, 49 n. 32, 50 see also coherence
Isenberg, Arnold 166 n. 3
justified group belief 3, 6, 13–14, 18, 55–56, 71 n. 16, 73, 79–80, 91–98, 100, 102, 104–106, 107–109, 111, 128, 133–134, 137, 161
  Collective Evidence Problem 84–88, 101, 105
  Condorcet-inspired account of justified group belief 91–95
  Defeater Problem 71, 77, 81–84, 98, 100–101
  deflationary summativism 71–72, 75, 88, 92, 95, 98, 107–109
  Group Epistemic Agent Account 13–14, 56, 95, 97–98, 100–101, 103–107, 109–110
  Group Justification Paradox 13, 71, 77–81, 83 n. 30, 84, 98, 100–101
  Group Normative Obligations Problem 88–89, 101, 106
  Illegitimate Manipulation of Evidence Problem 13, 67, 70, 95, 100–101
  joint acceptance account 6, 13, 55, 59–68, 71, 95, 98, 100, 108–109, 113, 124
Kallestrup, Jesper 6 n. 9, 57 n. 3, 69 n. 15, 126 n. 24
Kenyon, Tim 168 n. 9
Klausen, Søren Harnow 7, 112 n. 2, 127
Klein, Peter 81 n. 26
Kolodny, Niko 50 n. 33, 99 n. 45
Kopec, Matthew 186 n. 31
Krishna, Daya 173 n. 18
Kukla, Rebecca 127 n. 25
Kyburg, Henry E. 81 n. 26, 101 n. 48
Lackey, Jennifer 15 n. 16, 30 n. 19–20, 82 n. 28–29, 94 n. 36–37, 118 n. 12, 138–139, 142 n. 6, 150 n. 16, 156, 158–159, 161–163, 165 n. 2, 174 n. 20
Lafont, Cristina 174 n. 21
Lahroodi, Reza 8 n. 11
Langton, Rae 142 n. 6
Lauffer, Nathan 42 n. 29
Leonard, Nick 103 n. 49
Levi, Isaac 104, 58 n. 3
lies 15, 19 n. 1, 30, 32–33, 53–54, 155, 165–188 see also group lies
  bald-faced 166–168, 170–173, 178
  coercion 166, 169–171, 173, 178
  knowledge 166, 168–172, 178
List, Christian 8, 19 n. 1, 28 n. 17, 39, 41–43, 60–63, 72, 74, 91, 99 n. 44, 116
Ludwig, Kirk 94 n. 35, 140–146, 186 n. 31
Mahon, James Edwin 166 n. 3
Makinson, David 81 n. 26
Mathiesen, Kay 57–59, 70
McDowell, John 82 n. 28, 163 n. 32
McGrath, Matthew 94 n. 34, 117, 119
McMahon, Christopher 34
Meijers, A.W.M. 34, 52 n. 35
Miller, Brian 107 n. 52
Miller, Seumas 107 n. 52, 186 n. 31
Mokyr, Joel 72 n. 19
National Semiconductor 2–3, 15–17
Nozick, Robert 82 n. 28
Owens, David 163 n. 32
Pacherie, Elisabeth 186 n. 31
Pauly, Marc 28 n. 17, 72 n. 18
Pettit, Philip 3 n. 7, 8, 19 n. 1, 20 n. 7, 28, 29, 37, 40, 53 n. 37, 72, 115–116
Pilchman, Daniel 35 n. 26
Plantinga, Alvin 82 n. 28, 163 n. 32
Platts, Mark 44
Pollock, John 82 n. 28
Preface Paradox 49 n. 32, 80–81, 99
Quinton, Anthony 4, 12
Ragozino, Anthony 129
Reed, Baron 82 n. 28–29
Reller, Fredric 31
Reynolds, Steven L. 159 n. 25, 163 n. 32
Riggs, Wayne D. 58 n. 3, 99 n. 45
Ritchie, Katherine 19 n. 1
Ross, Angus 163 n. 32, 174 n. 21
Rupert, Robert D. 19 n. 1
Saul, Jennifer Mather 182–183
Schmitt, Frederick F. 6, 22–24, 57, 59 n. 5, 60–61, 69–70
Searle, John 140
selfless assertion 167, 173–174
Silva Jr., Paul 97 n. 39, 106 n. 51
social knowledge 14, 111–115, 119, 121–123, 125–128, 137
Sorensen, Roy 30 n. 20, 166–170, 173 n. 19
Sosa, Ernest 82 n. 28
spokesperson 2, 14–15, 17–18, 94, 120, 139–164, 181–188
  bad 153
  incompetent 153
  negligent 153
  rogue 151–153
Staffel, Julia 173 n. 19
Stalnaker, Robert 25 n. 12, 144 n. 9
Stanley, Jason 94 n. 35, 117, 119
Thomson, Mark 69 n. 15
Tollefsen, Deborah 22 n. 8, 24 n. 10, 121 n. 18, 127, 155–156, 158 n. 24
Trump Administration 19
Tuomela, Raimo 24 n. 10, 26–28, 64 n. 13, 72 n. 19, 179 n. 28
Unger, Peter 159 n. 25
van Fraassen, Bas C. 25 n. 12, 144 n. 9
van Hees, Martin 28 n. 17, 72 n. 18
Velasquez, Manuel 2 n. 6, 122 n. 21
Welbourne, Michael 163 n. 32
Wigmore, John H. 68
Williams, Bernard 166 n. 3
Williams, Michael 82 n. 28
Williamson, Timothy 94 n. 35, 117, 163 n. 32
Wray, K. Brad 25 n. 12, 34, 52 n. 35, 144 n. 9