
RELIABLE KNOWLEDGE AND SOCIAL EPISTEMOLOGY Essays on the Philosophy of Alvin Goldman and Replies by Goldman

Grazer Philosophische Studien INTERNATIONALE ZEITSCHRIFT FÜR ANALYTISCHE PHILOSOPHIE

FOUNDED BY Rudolf Haller EDITED BY Johannes L. Brandl, Marian David, Maria E. Reicher, Leopold Stubenberg

VOL 79 - 2009

Amsterdam - New York, NY 2009

RELIABLE KNOWLEDGE AND SOCIAL EPISTEMOLOGY Essays on the Philosophy of Alvin Goldman and Replies by Goldman

Edited by

Gerhard Schurz & Markus Werning

The GPS is published with the support of the Institut für Philosophie of the Universität Graz and the Forschungsstelle für Österreichische Philosophie, Graz, and is sponsored by the following institutions: Bundesministerium für Bildung, Wissenschaft und Kultur, Wien; Abteilung für Wissenschaft und Forschung des Amtes der Steiermärkischen Landesregierung, Graz; Kulturreferat der Stadt Graz.

The paper on which this book is printed meets the requirements of "ISO 9706:1994, Information and documentation - Paper for documents - Requirements for permanence".
Layout: Thomas Binder, Graz
ISBN: 978-90-420-2810-4
E-Book ISBN: 978-90-420-2811-1
ISSN: 0165-9227
© Editions Rodopi B.V., Amsterdam - New York, NY 2009
Printed in The Netherlands

TABLE OF CONTENTS

Introduction . . . vii

I. Veritism, Externalism and Strong versus Weak Knowledge: General Reflections on Goldman's Social Epistemology

Elke BRENDEL: Truth and Weak Knowledge in Goldman's Veritistic Social Epistemology . . . 3
Christoph JÄGER: Why to Believe Weakly in Weak Knowledge: Goldman on Knowledge as Mere True Belief . . . 19
Gerhard SCHURZ: Meliorative Reliabilist Epistemology: Where Externalism and Internalism Meet . . . 41

II. Problems of Reliabilism

Thomas GRUNDMANN: Reliabilism and the Problem of Defeaters . . . 65
Peter BAUMANN: Reliabilism—Modal, Probabilistic or Contextualist . . . 77

III. The Value of Knowledge

Erik J. OLSSON: In Defense of the Conditional Probability Solution to the Swamping Problem . . . 93
Joachim HORVATH: Why the Conditional Probability Solution to the Swamping Problem Fails . . . 115
Christian PILLER: Reliabilist Responses to the Value of Knowledge Problem . . . 121
Markus WERNING: The Evolutionary and Social Preference for Knowledge: How to Solve Meno's Problem within Reliabilism . . . 137

IV. Problems of Social Knowledge

Michael BAURMANN & Geoffrey BRENNAN: What Should the Voter Know? Epistemic Trust in Democracy . . . 159
Oliver R. SCHOLZ: Experts: What They Are and How We Recognize Them—A Discussion of Alvin Goldman's Views . . . 187

V. Understanding Other Minds

Albert NEWEN & Tobias SCHLICHT: Understanding Other Minds: A Criticism of Goldman's Simulation Theory and an Outline of the Person Model Theory . . . 209

VI. The Philosopher Replies

Alvin I. GOLDMAN: Replies to Discussants . . . 245

Grazer Philosophische Studien 79 (2009), vii–xiv.

INTRODUCTION TO RELIABLE KNOWLEDGE AND SOCIAL EPISTEMOLOGY—THE PHILOSOPHY OF ALVIN GOLDMAN

Gerhard SCHURZ and Markus WERNING

This volume documents the results of a workshop on Alvin Goldman's epistemology, entitled "Reliable Knowledge and Social Epistemology. The Philosophy of Alvin Goldman", which was organized by the editors in May 2008 at the University of Düsseldorf. Goldman was present at the workshop and commented on every talk. This special volume of Grazer Philosophische Studien contains the written versions of all papers given at the workshop, together with the written versions of Alvin Goldman's replies.1 Further materials about the workshop, including photos of the workshop and a video of Goldman's evening lecture, can be found on the homepage of the Düsseldorf Theoretical Philosophy Department (see www.phil-fak.uni-duesseldorf.de/philo/personal/thphil, click "workshops").

1. There are two exceptions: one talk did not make it into a written version, and one additional note by Joachim Horvath was added to this volume.

The volume is divided into five chapters containing the papers on Goldman's philosophy, and a sixth chapter that contains Goldman's replies to all of the papers, with two exceptions: Goldman does not address Olsson's paper of chapter 3, which defends Olsson's solution to the value-of-knowledge problem, nor does he address Horvath's critical note on Olsson's paper.

The contributions to the first chapter (Brendel, Jäger, and Schurz) address general questions of social epistemology, veritism, and externalism, including critical reflections on Goldman's conception of 'weak knowledge' as opposed to 'strong' (i.e., justified) knowledge. In the first part of her contribution, Elke Brendel argues that, contrary to Goldman's view, certain forms of pragmatism (especially that of William James) and of social constructivism (for example, that endorsed by John Dupré) are not incompatible with Goldman's veritistic approach to social epistemology. In his reply Goldman admits that there are milder passages of the respective authors which one could quote, and he explains why he has concentrated his criticism on the more radical formulations of these positions. Brendel also criticizes Goldman's weak conception of knowledge as mere true belief. Similar criticisms are launched in the papers of Schurz and Jäger, and Goldman summarizes his replies to these criticisms in a separate section of chapter 6.

Christoph Jäger's entire paper is devoted to Goldman's thesis that in ordinary language "knowledge" sometimes means just "weak knowledge" in the sense of true belief. In his joint paper with Olsson, Goldman infers this thesis from the premise that in some situations knowledge means nothing more than the negation of ignorance. Jäger agrees on the validity of this inference, but he doubts Goldman's premise. Jäger argues that in the examples put forward by Goldman (and similarly by Hawthorne), by which Goldman tries to support his premise, at least one of the following two semantic principles of knowledge is violated: (i) knowledge requires firm belief, and (ii) knowledge has to be generated in a rational way; for example, it must not be based on sources which the believer knows or believes to be unreliable. Jäger's alternative explanation of Goldman's (and Hawthorne's) examples is based on the premise that the ability to give the right answer to a question does not entail knowledge of the right answer. In his reply Goldman argues (among other things) that usually a situation in which the source of information is believed to be unreliable does not even generate weak knowledge. He also emphasizes that he does not identify firmness of belief with a subjective probability of 1, as Jäger seems to assume.

Gerhard Schurz starts his paper with the observation that Goldman's social epistemology has a meliorative dimension, i.e., it intends to improve the epistemic practice of human societies. As Schurz continues in section 2 of his paper, in order to fulfill this meliorative task, true beliefs which count as knowledge must not only be reliable, but must also be recognizable as being reliable by the members of the given society. Based on this insight, Schurz develops an externalist-internalist hybrid conception of justification by adding reliability indicators to the externalist conception of knowledge, which, arguably, produce a veritistic surplus value for the social spread of knowledge. Schurz's paper has two further sections: one in which he criticizes Goldman's weak notion of knowledge, and another on certain meliorative rules of epistemology, including a proof of Goldman's conjecture concerning the veritistic value of the rule of maximally specific evidence, and a refutation of Goldman's thesis that rule-circular arguments may be meliorative. Among many other points, Goldman argues in his reply that the meliorativity of knowledge should not be conceived as a conceptual condition for knowledge, but as an independent question.

The next chapter (Grundmann and Baumann) discusses problems which are involved in the search for an adequate explication of reliabilism. According to the simple reliabilist account of justification, a belief is justified if it was produced by a reliable mechanism. Thomas Grundmann's paper addresses the problem that simple reliabilism seems to be incompatible with the defeasibility of justification, i.e., the fact that the justificatory quality of a belief-producing process may be neutralized by counterevidence. In a 1979 paper, Goldman proposed a solution to this problem which involves an internal condition. Grundmann criticizes this solution because the necessity of this condition for reliability is unclear, and because the internal character of this condition leads away from pure externalism. At the end of his paper, Grundmann suggests an account of reliability which involves the conception of a properly functioning cognitive system. In his reply to Grundmann, Goldman proposes a refined solution to the problem of defeaters, which is a synthesis of elements from reliabilism and evidentialism. Goldman rejects Grundmann's view that such a synthesis is not a genuine form of externalism, for the following reason: for Goldman, externalism is a weak position which claims that the notion of justification involves (merely) some external elements, while internalism is a strong position which claims that all elements of justification must be internal.

What makes a type of process P which produces a certain type of belief reliable? This is the starting question of Peter Baumann's paper. According to Baumann, what makes P reliable is its truth ratio, i.e., the ratio of true versus false beliefs produced by the process. But now the central question becomes this: should the truth ratio of the given process P be evaluated only in the actual world, or also in certain possible worlds (which is Goldman's position)—and if so, in which possible worlds? Goldman gave up his 1986 suggestion, which identifies the relevant worlds in which reliability is evaluated with the normal worlds, because this position makes justification dependent on one's internal beliefs about which worlds are normal. Baumann then focuses on the view that the relevant worlds are those which are 'sufficiently close' to the actual world. Baumann argues, however, that the notion of closeness between possible worlds is inescapably unclear and polysemous. Finally, Baumann turns to the conditional probability solution, which defines reliability in terms of probabilities in the actual world. This solution suffers from the reference class problem because, according to Baumann's diagnosis, the problem of choosing an appropriate process type, of which the given process token is an instance, is not uniquely defined but again polysemous. In his reply Goldman presents (among other things) some new approaches to the problem of specifying appropriate process types, and some new ways of handling demonic worlds.

The third chapter (Olsson, Horvath, Piller, and Werning) collects the papers of the workshop which were devoted to the problem of the value of knowledge. This chapter starts with Erik Olsson's paper, in which he defends and clarifies his earlier developed conditional probability (CP) solution to the value-of-knowledge problem against a number of objections. He argues that a reliably produced true belief—i.e., knowledge for the reliabilist—can be more valuable than mere true belief even if the belief in question is made no more valuable through becoming known. What sounds paradoxical at first glance is accounted for by reference to the state of knowledge, which comprises a state of reliable acquisition. The latter component of the state of knowledge is responsible for the extra value. For it is not only indicative of the truth of the belief thus acquired but also of the production of future true beliefs to be acquired by the same process type. States of knowledge are thus supposed to make future true beliefs more likely than mere states of true belief.

Joachim Horvath's note points to a problem in Olsson's conditional probability solution to the value-of-knowledge problem. To repeat, according to this solution the possession of a true belief makes the acquisition of true beliefs of the same type in the future more probable if the former belief has been reliably rather than unreliably produced. Due to the counterfactual element in this condition, Horvath argues that it is not at all clear that this claim is true, because the closest world in which my knowledge that p was not reliably produced is not one in which I do not possess the reliable mechanism which has actually produced my belief, but one in which, for example, some abnormal external situation has occurred which causes the resulting unreliability but occurs so rarely that it does not significantly lower the probability. Olsson replies by recurring to the assumptions of non-uniqueness, cross-temporal access, learning, and generality, which are supposed to hold in the actual "knowledge" world in a lawful manner for reliable processes and thus are also valid in all not too remote counterfactual worlds. These assumptions guarantee that problems of the same kind as in the actual knowledge situation would occur, in relatively close counterfactual worlds, with some significant probability and would likely be tackled by means of processes of a similar kind. According to Olsson, this assures that the probability of true belief in close possible worlds is significantly lower in the mere-true-belief scenario than in the knowledge scenario.

Christian Piller has stronger sympathies with Goldman's solution to the value-of-knowledge problem by means of value autonomization and type-instrumentalism than with Olsson's CP solution. He nevertheless criticizes the consequentialist framework in which Goldman and, traditionally, most reliabilists operate, and pushes in a deontological direction. Within a consequentialist framework, the value of the likely good—in a veritistic epistemic context to be identified with likely true belief—vanishes once the good—true belief—is in fact achieved or not achieved. For a consequentialist, only the outcome matters. The situation is different from a deontological point of view: here, even if we fail after having done what we ought to have done, a positive deontic fact remains, namely the fulfillment of a deontic norm. While Goldman's type-instrumentalism allows a belief produced by some reliable process to inherit instrumental veritistic value from the type of all beliefs so produced even if it is not true, Piller objects that instrumental value can never justly be attributed to unsuccessful means. Value autonomization is no way out either, Piller claims, unless some new normative feature comes into play: possibly, the deontic fact that we have indeed produced our beliefs in a way we ought to have.

As Markus Werning argues, the dilemma for Olsson is that he can either be read as saying (a) that the probability of having more true beliefs in the future is greater conditional on having knowledge than on having a merely true belief, or (b) that having knowledge makes it likely that one's future beliefs will be true. Whereas (a) follows from the reliabilist analysis of knowledge, it is consistent with a common-cause scenario and warrants neither a direct causal relationship between knowledge and future true belief nor a means-end relation transferring extra value from the future to the current belief. In contrast, (b) does imply a proper means-end relation, but it is not guaranteed by reliabilism and is, moreover, false in most cases. The weakness of the CP solution has implications for Goldman's value autonomization solution, argues Werning: the two solutions are not independent of each other, because value autonomization is to make sure that knowledge is intrinsically valued more in all cases on the basis that it is instrumentally more valuable in normal cases. However, the question why it is so in normal cases involves the CP solution as an answer.
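Put schematically (in our own notation, which is neither Olsson's nor Werning's), the comparative claim at issue in reading (a) is the inequality

    P(true beliefs in the future | S knows that p at t) > P(true beliefs in the future | S merely truly believes that p at t),

where, on the reliabilist analysis, knowing that p amounts to having a reliably produced true belief that p. Reading (b), by contrast, is the stronger, non-comparative claim that the left-hand probability is itself high, i.e., that knowledge makes future true belief likely.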


Werning proceeds with a discussion of the purpose of having extra value for knowledge. He first discusses the evolutionary question of how knowers could evolve rather than just truthful believers, and then turns to the field of social epistemology. Here he develops an account of how we can manipulate our testimonial environment in an epistemically beneficial way by valuing reliably produced true belief more than mere true belief. In his replies to Piller and Werning, Goldman emphasizes his preference for type-instrumentalism over value autonomization as a solution to the value-of-knowledge problem and acknowledges the strength of Werning's objections to the conditional probability solution.

The fourth chapter (Baurmann & Brennan, and Scholz) addresses two specific aspects of the social dimension of knowledge: the relation between knowledge and democracy, and the definition and recognition of expertise. Michael Baurmann and Geoffrey Brennan critically discuss Goldman's position on what the core voter question "Which of the candidates in an election would produce a better outcome set from my point of view?" implies for the core voter knowledge, i.e., what the voter should know in a representative democracy if the democracy is to function optimally. Their first target is Goldman's alleged outcome-oriented view. Voting is essentially a trust problem. The trustor, i.e., the voter, makes himself vulnerable to the trustee, the candidate to be elected, by an act of trust-giving, the delegation of political power. If the voter were required to predict or at least estimate the overall outcome likely to be generated by a certain elected candidate, the voter would need sufficient epistemic means to evaluate the trustworthiness of the candidate, including his competence, resources, incentives, and dispositions. However, the voter faces massive limits in this regard, especially since future circumstances may affect the candidate's trustworthiness and the generated outcome. Given these predicaments, Baurmann and Brennan favor a process-oriented view. Their second target is Goldman's presumption of an instrumental view of voting as a means to influence collective decisions. Given the marginal de facto effects of a single vote, they argue for an expressive view of voting, which is more comparable to cheering or booing than to an effective influence on a decision. Finally, the two authors explore the conditions under which citizens can have epistemic trust in democracy, relevant authorities such as experts and institutions, and social as well as personal relationships. In his reply Goldman largely welcomes a process-oriented interpretation of his views and defends an instrumental against an expressive view of voting.


Oliver Scholz is concerned with the problem of how non-experts can recognize expertise. The problem comes in two variants: (i) How can a non-expert know that someone is an expert in a given domain (the non-expert/expert problem)? (ii) How can a non-expert justifiably choose one putative expert in the domain as more credible than another putative expert (the non-expert/two-experts problem)? In many cases laypersons are not in a position to make informed judgments about the truth or justifiedness of a putative expert's beliefs in his domain. Scholz points out in rich detail that the problem of expertise has a long history, ranging from Plato and Aristotle through Augustine and the logician Bocheński to modern epistemology. Scholz then turns to a criticism of Goldman's veritistic definition of an expert as one who has considerably more beliefs in true propositions and fewer beliefs in false propositions within his domain than the vast majority of people, and who possesses a set of skills and methods for the apt and successful deployment of this knowledge to new questions in the domain. Scholz argues that laypersons generally have fewer and less sophisticated beliefs within the domain than experts do and thus run a much lower risk of entertaining false beliefs than experts. Given that Goldman's definition might therefore turn out to be materially inadequate, Scholz pleads for incorporating other epistemic values, such as justification, coherence, and understanding, into the definition. In his reply Goldman largely leaves open whether other epistemic values should be adopted, but also points out the virtues of a veritistic approach to expertise.

The one paper of the fifth chapter (Newen & Schlicht) discusses another part of Goldman's cognitive epistemology, namely his simulation theory of mindreading. Albert Newen and Tobias Schlicht connect their criticism of Goldman's approach to mindreading with a review of three major accounts: (i) At the core of theory-theory accounts is the assumption that subjects employ a theory-like body of beliefs (which may be either innate or acquired) to ascribe mental states to others in order to make sense of their behavior. (ii) Interaction theory assumes that subjects can, at least in situations of social interaction, where embodied practices of movements, facial expressions, gestures, etc. are prevalent, directly perceive what other people are up to. (iii) Simulation theory à la Goldman hypothesizes that processes of simulation play a central role in mindreading, where a process is understood as a simulation of another process if the former duplicates, replicates, or resembles the latter in some significant respect. Newen and Schlicht then argue that Goldman has to overstretch his notion of simulation to cover both low-level mindreading (targeting, e.g., emotions, feelings, sensations of pain, and basic intentions) and high-level mindreading (e.g., attributing complex propositional attitudes like beliefs and desires). Regarding low-level mindreading, Newen and Schlicht argue that empirical phenomena relating to the mirror neuron system are better understood as mechanisms of direct perception or "registration", as purported by interaction theory, rather than as mechanisms of simulation. With respect to high-level mindreading, the authors object that simulation theory has to be supplemented by a number of theoretical beliefs in order to work, so that one at best ends up with a hybrid simulation-plus-theory theory. In their closing section Newen and Schlicht sketch their person model theory, which largely makes use of the distinction between non-conceptual and conceptual representations of persons' mental states. The former are involved in low-level mindreading and constitute a (usually unconscious) person schema; the latter are involved in high-level mindreading and make up a (usually conscious) person image. In his reply Goldman gives an extensive refutation of Newen and Schlicht's criticism. Among other things, he denies any substantial functional differences between conscious and non-conscious processes and defends simulation as a suitable umbrella notion covering the processes involved in low-level and high-level mindreading.


I. VERITISM, EXTERNALISM AND STRONG VERSUS WEAK KNOWLEDGE: GENERAL REFLECTIONS ON GOLDMAN’S SOCIAL EPISTEMOLOGY

Grazer Philosophische Studien 79 (2009), 3–17.

TRUTH AND WEAK KNOWLEDGE IN GOLDMAN'S VERITISTIC SOCIAL EPISTEMOLOGY

Elke BRENDEL
Universität Mainz

Summary
Goldman's project of a veritistic social epistemology is based on a descriptive-success account of truth and a weak notion of knowledge as mere true belief. It is argued that, contrary to Goldman's opinion, pragmatism and social constructivism are not necessarily ruled out by the descriptive-success account of truth. Furthermore, it is shown that it appears questionable whether Goldman has succeeded in showing that there is a weak notion of knowledge. But even if such a weak notion of knowledge can be defended, it can result in a complete separation of knowledge from epistemic value, which does not seem to be in accordance with Goldman's concept of societal knowledge.

1. Introduction

In his groundbreaking book Knowledge in a Social World (Goldman 1999) Alvin Goldman develops a veritistic approach to social epistemology in great detail. Social epistemology is a recently developed branch of epistemology that stresses the multiple social dimensions of knowledge. Social epistemologists emphasize the fact that we are an integral part of a community and that the acquisition and justification of our beliefs is massively determined by various forms of social interaction. These social dimensions of knowledge are either negated or neglected in traditional epistemologies, in particular in classical rationalism and empiricism. Traditional epistemological enterprises are mainly focused on private and asocial ways of knowing. Traditional epistemology is therefore mainly individualistic in character, since its central goal is to identify and assess methods and processes of knowledge acquisition from the perspective of an isolated individual subject. As a result, the proper use of certain cognitive abilities of epistemic subjects, such as reasoning, sense perception, memory, and introspection, is regarded as the only legitimate source of knowledge in most traditional epistemologies.

In order to account for the social factors in knowledge acquisition, in particular for the important role of testimony as an essential source of knowledge, social epistemologists like Goldman claim that traditional epistemology needs to be extended to a theory of societal knowledge. Goldman does not regard his project of a social epistemology as a complete paradigm shift in epistemology. Instead, Goldman's social epistemology is a continuation of classical (individual) epistemology. Classical (individual) epistemology and social epistemology are both essentially truth-oriented. They both regard the pursuit of truth as the ultimate motivation of our epistemic endeavours, and they both evaluate our epistemic processes and practices in relation to their contributions to the acquisition of true beliefs and the avoidance of false beliefs. Whereas individual epistemic accounts assess the truth-conduciveness of those practices that are typically considered non-social, it is the main concern of Goldman's veritistic social epistemology to examine the veritistic properties of various social-epistemic practices, such as testimony, argumentation, technology-based communication, and speech regulation, that operate in different areas of the social world (science, law, democracy, education). So veritistic social epistemology can also be described as a normative discipline whose main concern is to assess the impact of social-epistemic practices on knowledge, where knowledge is understood in the "weak sense of true belief" (Goldman 1999, 5).

Goldman's veritistic social epistemology sharply contrasts with certain postmodern and radical constructivist positions in which knowledge is, according to Goldman, exclusively determined by interpersonal and cultural processes, and in which a realistic conception of truth as a cross-cultural concern of epistemic subjects and as an objective goal of all of our epistemic endeavours is rejected. Goldman not only distances himself from proponents of anti-realist radical constructivist accounts (whom he calls "veriphobes") but also from proponents of epistemic, pragmatic, and relativist theories of truth, since they deny "the basic correspondence idea that what makes sentences or propositions true are real-world truth makers." (Goldman 1999, 68). Goldman contends that a social epistemology worth the name should go beyond mere social doxology and should account for the truth-orientation of our various social epistemic practices. That is why a social epistemology must be veritistic in character and should embrace the root correspondence idea that truth is defined non-epistemically as a certain successful descriptive relation between a proposition (belief, statement, etc.) and a portion of an independently existing external reality.

In the following, I am going to make two main critical remarks about Goldman's veritistic conception of social epistemology. I agree with Goldman that a social epistemology should be truth-oriented and that it should evaluate social-epistemic processes with regard to their contribution to the production of true beliefs and the avoidance of error. But I do not agree with some aspects of Goldman's account of truth and knowledge within his veritistic social epistemology. My first criticism concerns Goldman's own version of a correspondence account of truth and his extremely negative attitude towards pragmatism and social constructivism. To my mind, Goldman's conception of truth consists in an extremely weak "root idea" about truth which is in no way specific to correspondence theories of truth. Many proponents of epistemic, pragmatic, or deflationist approaches to truth subscribe to Goldman's requirement that truth bearers should describe reality. In particular, certain ideas that are associated with epistemic and pragmatic accounts of truth seem to form a more adequate and useful veritistic basis for the project of social epistemology than a mere correspondence approach. Furthermore, I think that Goldman's radical attack on pragmatists and social constructivists in his Knowledge in a Social World is unfair to many philosophers who are sympathetic to versions of those theories but who do not embrace radical non-realistic ideas. I would even claim that social epistemology is well suited to incorporate certain ideas of pragmatism or social constructivism.

My second criticism concerns Goldman's weak understanding of knowledge as mere true belief. Of course, it is not truth simpliciter that epistemic subjects strive for, but truths that are somehow interesting and relevant for the epistemic project in question. That is why Goldman thinks that veritistic values should only be assessed relative to the agent's interests. It is also almost universally acknowledged among epistemologists that knowledge excludes some kind of epistemic luck. A mere true belief that we arrive at via an unreliable belief-forming method cannot count as knowledge. So it seems that we not only strive for relevant and interesting truths, but also for truths that are in some way reliable or epistemically safe. So, in order to attain a conception of knowledge that is in accordance with his conception of veritistic value and with a certain reliability requirement, Goldman should embrace a stronger conception of knowledge.


2. Goldman’s Descriptive Success account of truth vs. pragmatism I will now elaborate on my first criticism. In the second chapter of Knowledge in a Social World Goldman offers the following “Descriptive Success” (DS) account of truth: (DS)

An item X (a proposition, a sentence, a belief, etc.) is true if and only if X is descriptively successful, that is, X purports to describe reality and its content fits reality. (Goldman 1999, 59)

Goldman admits that a full theory in terms of (DS) needs to explain, in particular, what portions of reality are considered to be truth-makers, what determines a descriptive content, and what exactly the relation of "fitting" between the content of X and reality consists in. In trying to develop a full-fledged correspondence theory of truth by spelling out its core notions, such as that of the correspondence relation between a truth-bearer and the worldly truth-maker, some problems for the project of defining truth in terms of a correspondence relation have emerged. In particular, although correspondence theories can be appropriate for limited language-fragments with bivalent, non-ambiguous descriptive sentences, they nevertheless seem to reach their limits with regard to natural language phenomena such as vagueness, figurative speech, and self-reference. Without any specifications of the core notions of (DS), the requirement of the descriptive success approach that "what makes sentences true are worldly truth-makers" is not one that only correspondence theories of truth endorse. Goldman admits that at least the "positive part" of deflationist theories of truth (in which truth is, for example, described as a device for semantic ascent) is compatible with the core correspondence idea spelled out in his DS-account (Goldman 1999, 66). He nevertheless explicitly excludes epistemic, pragmatic, and relativist theories of truth as incompatible with the DS-account (Goldman 1999, 68). But, to my mind, many proponents of epistemic, pragmatic, and relativist theories would not deny that the truth or falsity of a sentence is somehow related to how the world is.

According to Goldman, the great pragmatist William James defines truth exclusively in terms of the usefulness a proposition has to the prospective believer, and therefore disconnects truth completely from reality (Goldman 1999, 42). But this interpretation of James's truth theory does not do justice to James's actual concern as a pragmatist philosopher. Just like Goldman, James points out that we have a vital interest in acquiring true beliefs. The possession of true beliefs is of utmost importance for us in order to successfully find our way through the world (James 1991, 89). Furthermore, and most importantly, James does not deny the core idea of a correspondence theory: "Truth", James writes quite at the beginning of his lecture on pragmatism's conception of truth, "as any dictionary will tell you, is a property of certain of our ideas. It means their 'agreement', as falsity means their disagreement, with 'reality.' Pragmatists and intellectualists both accept this definition as a matter of course." (James 1991, 87). To be sure, James does not equate truth with usefulness. For James it is an "impudent slander" if critics accuse him of claiming that whatever a person finds pleasant and useful to believe fulfils every pragmatic requirement and should therefore be called "true" (James 1991, 102f.). James happily subscribes to the core idea of a correspondence theory that truth involves a relation to reality. That is why he writes, for example, that "[t]ruths emerge from facts" (James 1991, 99) and that we are under the influence of the "coercions of the world" such that we feel an "immense pressure of objective control under which our minds perform their operations." (James 1991, 103).

But James also points out that the core idea of a correspondence account is not much more than a platitude and as such is of little use for answering the difficult questions about the nature of truth. First of all, many ideas (such as "power", "spontaneity", etc.) do not correspond directly to reality. They are, as James put it, rather symbols than copies of realities (James 1991, 94). Furthermore, in most cases we cannot directly confront our beliefs with reality. We only gain knowledge about the past by its effects on the present. James is also quite aware of the theory-ladenness of observation and the underdetermination of theories by empirical evidence. But these factors are not necessarily obstacles to correctness and objectivity in scientific knowledge. There are, according to James, objective criteria of scientific rationality that govern and control our pursuit of truth. So, for example, "[…] consistency both with previous truth and with novel fact is always the most imperious claimant." (James 1991, 96). Admittedly, James's views on truth are sometimes quite unclear and ambiguous, and it seems to be impossible to offer a fair and consistent interpretation of his truth-theory without employing a principle of charity.1 As pointed out above, there is textual evidence that his pragmatist theory is not necessarily incompatible with the project of a veritistic social epistemology. In particular, he does not deny the core idea of a correspondence theory according to which a true statement successfully describes reality. James and many other pragmatists are not radical anti-realists who deny the existence of an external reality or the important role that reality plays in the acquisition of knowledge.

1. Richard Kirkham even contends that "[t]here is hardly any theory of truth James did not endorse at one time or another". In order to interpret James's philosophical writings on truth in a profitable way, he is also prepared to ignore some of his remarks as "not what he meant" and to offer "some explanation and integration that the original author does not provide" (Kirkham 1992, 88).

As a pragmatist, James was eager to overcome scepticism and to reject the Cartesian strategy of seeking absolute certainty via the method of universal doubt. But nevertheless, he avoids commitment to a specific ontology of facts and remains neutral with respect to the question of whether there is an external mind-independent reality: "That reality is 'independent' means that there is something in every experience that escapes our arbitrary control. […] There is a push, an urgency, within our experience, against which we are on the whole powerless, and which drives us in a direction that is the destiny of our belief. That this drift of experience itself is in the last resort due to something independent of all possible experience may or may not be true. There may or may not be an extra-experiential 'ding an sich' that keeps the ball rolling, or an 'absolute' that lies eternally behind all the successive determination which human thought has made." (James 1987, 865). Just like James, Goldman is not committed to a special metaphysical doctrine about reality. He contends that his DS-theory "is entirely neutral on the question of what "reality" consists in" (Goldman 1999, 65). So it seems that James's and Goldman's conceptions of reality are not fundamentally different.

James's pragmatist or instrumentalist account of truth does not necessarily have to be interpreted as a theory within a metaphysical project of identifying defining conditions of truth in terms of usefulness or utility, but rather as a doctrine about what makes epistemic practices successful in reaching our various epistemic goals, such as having beliefs about the world that are explanatorily coherent and that have a high predictive power. In order to arrive at these goals we do have a pragmatic interest in acquiring true beliefs. One of James's main concerns is to point out the various factors that determine our processes of verifying our beliefs. These processes are driven inter alia by empirical data, by coherence with already accepted "truths", and also by pragmatic considerations such as interests, usefulness, fruitfulness, etc. I cannot see that such a pragmatist account is in any way hostile to the veritistic approach to social epistemology that Goldman has in mind.

3. Veritistic social epistemology vs. social constructivism

In a similar vein I would like to argue that Goldman's critique of social constructivism does not do justice to many philosophers associated with social constructivism. Goldman characterizes social constructivism by six claims. The first two claims express a radical anti-realist attitude: what we call "true" are merely the products of social constructions, not features of an external world; there is no language-independent reality; there are no objective facts of the matter that make our statements true or false. Even if there were transcendent truths, those truths would be inaccessible to us, as the third claim asserts. The fourth claim consists in a certain version of epistemic relativism: there are no neutral, transcultural epistemic standards for settling disagreements. In claims five and six, any attempt to attain truth is rejected, since putatively truth-oriented practices are mere instruments of domination or repression and are corrupted by various self-serving interests (Goldman 1999, 10).

Admittedly, there might be social constructivists who subscribe to such radical anti-realist and relativist positions. But to my mind the main concern of most social constructivists is to emphasize that many epistemic endeavours are social enterprises and as such are determined by social or political interests and goals. This does not necessarily lead to "veriphobia" or to radical anti-realism, for example with regard to science. It is one thing to claim that scientific facts only come into existence when they are invented and constructed by scientists, as Bruno Latour seems to hold when claiming that before Robert Koch discovered tuberculosis the bacillus had no real existence (see Boghossian 2006, 26). It is another, and less radical, thing to claim that scientific investigations (such as investigations into infectious diseases) as well as scientific descriptions and conceptualizations (such as the conceptualization of tuberculosis) are in part determined (and as such constructed) by human interests and goals. The latter claim in no way implies that the phenomena scientists investigate and conceptualize do not exist independently of them and do not have real natural properties that scientists can detect. John Dupré, for example, argues that "social constructivism is in some ways a fairly banal doctrine, and that the controversy that has surrounded it derives from further claims that are wrongly alleged to follow from it." (Dupré 2004, 83). Dupré contends that few social constructivists "deny that scientific belief has some important dependence on interactions with the world". But, according to Dupré, social constructivists also point out that in most scientific disciplines "interactions with nature are insufficient to determine scientific belief", and since scientists have personal, social, or political goals, these social factors also have an influence on scientific belief formation (Dupré 2004, 74). These social influences can be obstacles to the scientific pursuit of truth and can corrupt scientific objectivity. Nonetheless, conceding that science has social influences does not mean that science is not truth-oriented. Science, as Dupré argues, can overcome these obstacles by correcting the errors that derive from these social influences (Dupré 2004, 82).

So I am not quite sure whether there are really many philosophers who would subscribe to all six of Goldman's radical claims of "veriphobia". At least there is a branch of social constructivism whose proponents would be quite sympathetic to Goldman's project of a veritistic social epistemology. In his paper "What is Social Epistemology? A Smorgasbord of Projects" (Goldman 2002) Goldman distinguishes between weak and strong versions of social constructivism. Whereas proponents of weak constructivism claim that people's representations or concepts are socially constructed, proponents of strong constructivism defend the much more dramatic thesis that things, their properties, and facts are only social constructs (Goldman 2002, 195f.). The account of veritistic social epistemology that Goldman defends in his Knowledge in a Social World seems to be perfectly compatible with this weak form of social constructivism. So, as with the version of pragmatist accounts of truth I outlined above, (weak) social constructivism is not necessarily hostile to veritistic approaches to social epistemology.

4. Knowledge as mere true belief vs. veritistic value

I now turn to my second critical remark about Goldman's veritistic social epistemology. Goldman characterizes veritistic epistemology as a discipline that "is concerned with the production of knowledge, where knowledge is here understood in the 'weak' sense of true belief." (Goldman 1999, 5). According to Goldman, dealing with issues concerning strong knowledge, i.e., knowledge in which, apart from true belief, an additional internalist or externalist condition of justification is required, would be a digression from the main concern of his book.


Goldman characterizes a strict use of the term "knowledge" as one that "conforms to some standard, ordinary sense of the term in colloquial English (as judged by what epistemologists who attend to ordinary usage have identified as such)." (Goldman 2002, 183). Mere true belief is, according to Goldman, a weak yet strict sense of "knowledge", according to which "knowledge" simply means "information possession": "In this weaker sense, to know that p is simply to possess the information that p, where 'information' entails truth but 'possession' merely entails belief, not necessarily justified or warranted belief." (Goldman 1999, 185).

First of all, I find it quite implausible to claim that "possessing the information that p" always implies "believing (the fact that) p". After reading the yellow press this morning, I might possess some (true) information about a love affair of two celebrities. But since I don't trust stories about celebrity love affairs in the yellow press, I don't believe the story. In order to believe p, a strong positive degree of conviction that p is true is necessary. So people can possess the information that p without believing that p. For the same reason, neither "being aware of (the fact that) p" nor "being cognizant of (the fact that) p", which Goldman considers "rough synonyms" of "know" in the weak sense (Goldman/Olsson 2009), necessarily implies "believing the true proposition p".

Second, it is disputable whether there really is a strict sense of "knowledge" according to which knowledge amounts to mere true belief. It has been argued, for example, that crediting a person with knowledge normally conversationally implicates the assumption that this person has some reason for that belief. A mere true guess wouldn't count as knowledge, since it wouldn't be a case of belief in the first place.2

2. For a detailed criticism of Goldman's account of knowledge as true belief see Le Morvan 2005.

But even if there are cases in colloquial English of "knowledge" in Goldman's weak sense, it is still questionable whether a weak account of knowledge is suited for his project of a veritistic social epistemology. Goldman contends that our dominant epistemic goal is "to obtain true belief, plain and simple." (Goldman 1999, 24). Admittedly, our epistemic endeavours are truth-oriented. We strive for true beliefs since we are interested in being correctly informed. The possession of true beliefs and the absence of erroneous beliefs usually help us in successfully navigating through life and in satisfying our interests and needs. But mere true beliefs, plain and simple, are not per se epistemically valuable. To be sure, people's dominant epistemic goal is not the mere collection of random or trivial true beliefs, or beliefs that are of no importance and relevance for our epistemic projects. Unless, for example, the correct information about the exact number of leaves on a particular tree in my backyard at a certain time is of some relevance for an epistemic project, having this information does not seem to be a desirable goal we should aim at.

Goldman seems to be aware of the fact that, from an epistemic point of view, the possession of interesting true beliefs is more valuable than the possession of mere true beliefs that are uninteresting, trivial, or irrelevant. While introducing his account of veritistic value (V-value), i.e., the measure he employs in order to evaluate (social) practices with regard to their contribution to knowledge, Goldman assumes that the V-values of belief states "should always be assessed relative to questions of interests." (Goldman 1999, 89). That is why he only assigns V-values to belief states on the assumption that the epistemic subject has an interest in knowing whether the respective proposition is the case:

    Suppose, then, that S has an interest in a yes/no question: 'Is it the case that P?' I shall abbreviate such a question Q(P/-P). Then we can assign the following V-values to the three possible states in the trichotomous scheme. If S believes the true proposition, the V-value is 1.0. If he rejects the true proposition, the V-value is 0. And if he withholds judgement, the V-value is .50. The first state constitutes knowledge, the second error, and the third ignorance. (Goldman 1999, 89)

Besides this simple trichotomous approach, Goldman suggests a second analysis of ascribing veritistic values to belief states. This analysis presupposes degrees of belief (DBs) that an agent S has at a certain time t vis-à-vis a proposition P. These degrees of belief are equated with the agent's subjective probabilities. The V-value of a person's degree of belief with respect to Q(P/-P) is identical to the person's DB in P if P is true; it is identical to 1 minus the person's DB in P if P is false (Goldman 1999, 90). As in the trichotomous approach, the agent S's belief states have veritistic values "when they are responses to a question that interests S" (Goldman 1999, 88). If the application of an epistemic practice results in an increase of the veritistic value of an agent's belief state from a time t1 to a later time t2, this practice deserves epistemic credit. If applying it decreases that value, the practice deserves epistemic discredit (Goldman 1999, 90).
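To make the two measures just described explicit (a schematic restatement in our own notation, not Goldman's), write DB_S(P, t) for S's degree of belief in P at time t. Then, for a question Q(P/-P) that interests S,

    V(S, P, t) = DB_S(P, t)          if P is true,
    V(S, P, t) = 1 − DB_S(P, t)      if P is false,

and a practice applied between t1 and t2 earns credit or discredit according to the sign of ΔV = V(S, P, t2) − V(S, P, t1). The trichotomous scheme is the special case in which DB_S(P, t) takes only the values 1.0, .50, and 0. In the goldfinch example discussed below, Jane's DB in the true proposition P falls from 0.95 to 0.80, so ΔV = 0.80 − 0.95 = −0.15 < 0, even though, as will be argued, the practice she applies deserves no discredit.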


simple, but having interesting true beliefs (with a high degree of belief ), as epistemically valuable. As a consequence, knowledge as merely believing something true is not epistemically valuable per se since knowing need not have veritistic value. To illustrate this point, let’s assume that a person S is not at all interested in football. In particular, he does not care at all about who won the champions league in 1998/1999. In a boring conversation between his friends about football, S picks up the (correct) information that Manchester United won the champions league in 1998/1999 (2:1 against Bayern Munich). As a result, he forms the true belief that Manchester United won the champions league 1998/1999. Since S has no interest in this information, his knowing that Manchester United won the champions league 1998/1999 has no veritistic value.3 To be sure, a mere increase of the V-values of a belief state does not necessarily deserve positive epistemic credit. Consider a person S who develops a vital interest in a question Q(P/-P), where P is true, and as a result applies a practice S to question Q. Even if S’s degree of her belief state vis-à-vis P decreases after applying S, S need not be epistemically discredited. Let us assume, for example, that the question (Q) of whether the bird who is singing in Jane’s backyard every morning is a goldfinch (P) or not begins to interest Jane at time t1. Let us further assume that the bird is in fact a goldfinch. At t1 Jane does not know much about goldfinches. Since a picture of a goldfinch in a book resembles the bird in her backyard, the degree of her belief that the bird in her backyard is a goldfinch is 0.95, i.e., DB(P) = 0.95 at t1. Jane became more and more interested in Q and started to read more scientific books on ornithology. She also consulted experts on this topic. In particular, she learned that a canary can look very similar to a goldfinch. As a result, her new DB vis-à-vis P at t2 slightly decreases to 0.80, because she became more scrupulous and more cautious with respect to Q. She still believes that the bird is a goldfinch with a relatively high degree, but she is also aware of the possibility that the bird could be a canary. As a consequence, she cannot sustain her former strong belief in P at t1. So, the V-value of Jane’s belief state decreased from t1 to t2. But reading scientific books and consulting experts does of course not deserve negative credit. A decrease in the V-value of a belief state with respect to a question because of an expert’s awareness of relevant error3. That is why James Maffie even writes (Maffie 2000, 251f.) that since according to Goldman it is interesting knowledge—not knowledge simpliciter—that is epistemically valuable, “knowledge is no longer an epistemological notion! Indeed, it is no more an epistemological notion than is belief ”.

13

possibilities hitherto unknown or ignored is not epistemically blameworthy. Therefore, Goldman’s account of assessing epistemic practices in terms of veritistic values of belief states need not only refer to agent’s interests, but also to some kind of a reliability requirement. Although reading scientific books and consulting experts can result in a decrease of V-values of belief states in some singular cases, this epistemic practice nevertheless seems to have a good track record with regard to its causal contribution toward high V-values of belief states.4 That is why Goldman contends that a practice can only be evaluated vis-à-vis its veritistic outcome if we take into account the performance of this practice “across a wide range of applications, both actual and possible”. We must therefore consider the veritistic “propensities” of social practices (Goldman 1999, 91). This means that we must consider the reliability of practices (whether they already have a track record or whether they haven’t been employed so far) with respect to their veritistic impact. Consequently, methods or practices which are not reliable but lead to (epistemically lucky) true beliefs in single cases—as, for example, in Gettier cases or in Goldman’s famous “barnfaçade” example (Goldman 1976), or in cases of true beliefs that arise out of wishful thinking—are, according to Goldman’s weak conception of knowledge, knowledge-producing but they do not deserve any epistemic credit. So, in defining knowledge as mere true belief, neither the possession of knowledge nor practices of acquiring knowledge are per se epistemically valuable. In Goldman’s account, the notions that have positive epistemic value are true belief states with a high V-value and practices with high veritistic propensities that deserve positive credit because they increase the V-values of the belief states of epistemic subjects. These notions are analysed in terms of the agent’s interests and in terms of certain reliability requirements, respectively—and as such they are richer than Goldman’s notion of (weak) knowledge. I find it puzzling that Goldman’s account of knowledge is divorced from his account of the epistemic evaluation of belief states and epistemic practices. Our truth-oriented epistemic practices are interest-driven, and it is interesting knowledge that Goldman has in mind when he provides the framework for the employment of veritism in his social epistemology in terms of V-value and V-evaluation. Furthermore, the mere production of a true belief or of a true belief with a high DB is not a sufficient 4. Goldman contends that the veritistic notions of reliability and power that he employed, in particular, in Goldman 1986, and Goldman 1987, “are reflected or encapsulated in the proposed veritistic measure.” (Goldman 1999, 90, footnote 16)

14

condition for positive epistemic credit, even if the belief is a response to a question that interests the epistemic subject S. In order to deserve positive epistemic credit, the practice employed by S that produced the true belief must have a high veritistic propensity, i.e., the true belief must also be reliably produced. Therefore, knowledge in a social world amounts to much more than to mere true belief. So, why doesn’t Goldman analyse knowledge as a certain kind of a reliably (via non-social or social practices) produced true belief with a high V-value right at the outset? This richer notion of knowledge would be more in keeping with Goldman’s conception of a veritistic social epistemology than the weak notion of knowledge as mere true belief. A conception of societal knowledge of the kind that interests Goldman shouldn’t deprive itself of epistemically valuable assets. If a weak and strict sense of “knowledge” is at all conceivable, it does not seem to play any significant role in Goldman’s project of a veritistic social epistemology. 5. Conclusion To conclude, Goldman’s project of a veritistic social epistemology is based on a weak notion of truth and a weak notion of knowledge as mere true belief. Goldman believes that these notions provide a sufficient foundation for his veritistic approach to social epistemology. In particular, he contends that his descriptive-success account expresses the core idea of a correspondence theory of truth and rules out all epistemic, pragmatic, and relativist truth-theories as well as all social constructivist theories on the grounds that they all suffer from “veriphobia”. I tried to show that Goldman does not do justice to all pragmatist and social constructivist accounts. As a matter of course, proponents of less radical pragmatist and social constructivist accounts, such as William James or John Dupré, are happy to embrace the idea that wordly entities determine the truth of propositions or other truth bearers. But since our access to the external world is more or less indirect and influenced by many social factors, they are more concerned with questions of how these factors determine our processes of verifying and justifying our beliefs than with questions of how truth can ideally be defined. As such, pragmatism and social constructivism are not necessarily opposed to the project of a veritistic social epistemology. I furthermore tried to argue that it appears to be questionable whether Goldman has succeeded to show that there is a weak and strict sense of


“knowledge” according to which knowledge amounts to information possession. But even if a weak and strict conception of knowledge can be defended, such a weak notion of knowledge as mere true belief can result in a complete separation of knowledge from epistemic value, since in Goldman’s account the truth-conduciveness of a social practice in a single case is not per se an epistemically valuable feature. Truth-oriented social practices are only epistemically valuable in so far as they help epistemic subjects to increase the veritistic values of their beliefs in a reliable way relative to their interests. If the goals of our epistemic endeavours are true belief states with high veritistic values and the employment of epistemic practices with high veritistic propensities, putative knowledge as mere true belief does not have any relevant place in Goldman’s veritistic social epistemology. Although my critical remarks do not affect the general idea of Goldman’s veritistic social epistemology, they are intended to prompt a reconsideration of Goldman’s understanding of truth and (weak) knowledge in his framework of Knowledge in a Social World.

REFERENCES

Boghossian, Paul 2006: Fear of Knowledge. Oxford: Clarendon Press.
Dupré, John 2004: “What’s the Fuss about Social Constructivism?”. Episteme June 2004, 73–85.
Goldman, Alvin 1976: “Discrimination and Perceptual Knowledge”. Journal of Philosophy 73, 771–791.
— 1986: Epistemology and Cognition. Cambridge/Mass.: Harvard University Press.
— 1987: “Foundations of Social Epistemics”. Synthese 73, 109–144.
— 1999: Knowledge in a Social World. Oxford: Oxford University Press.
— 2002: “What is Social Epistemology? A Smorgasbord of Projects”. In: Alvin Goldman, Pathways to Knowledge. Private and Public. Oxford: Oxford University Press, 182–204.
Goldman, Alvin / Olsson, Erik 2009: “Reliabilism and the Value of Knowledge”. In: A. Haddock / A. Millar / D. Pritchard (eds.), Epistemic Value. Oxford: Oxford University Press, 19–41.
James, William 1987: The Meaning of Truth (1909). In: William James: Writings 1902–1910. New York: The Library of America.
— 1991: Pragmatism (1907). Amherst, NY: Prometheus Books.


Kirkham, Richard L. 1992: Theories of Truth. Cambridge/Mass.: The MIT Press.
Le Morvan, Pierre 2005: “Goldman on Knowledge as True Belief”. Erkenntnis 62, 145–155.
Maffie, James 2000: “Alternative Epistemologies and the Value of Truth”. Social Epistemology 14, 247–257.


Grazer Philosophische Studien 79 (2009), 19–40.

WHY TO BELIEVE WEAKLY IN WEAK KNOWLEDGE: GOLDMAN ON KNOWLEDGE AS MERE TRUE BELIEF

Christoph JÄGER
Universität Innsbruck

Summary

In a series of influential papers and in his groundbreaking book Knowledge in a Social World Alvin Goldman argues that sometimes “know” just means “believe truly” (Goldman 1999; 2001; 2002b; Goldman & Olsson 2009). I argue that Goldman’s (and Olsson’s) case for “weak knowledge”, as well as a similar argument put forth by John Hawthorne, is unsuccessful. However, I also believe that Goldman does put his finger on an interesting and important phenomenon. He alerts us to the fact that sometimes we ascribe knowledge to people even though we are not interested in whether their credal attitude is based on adequate grounds. I argue that when in such contexts we say, or concede, that S knows that p, we speak loosely. What we mean is that S would give the correct answer when asked whether p. But this doesn’t entail that S knows that her answer is right or that S knows that p. My alternative analysis of the Hawthorne-Goldman-Olsson examples preserves the view that knowledge requires, even in the contexts in question, true (firm) belief that is based on adequate grounds.

1. Weak knowledge and firm belief

Every now and then in the history of epistemology some ingenious philosopher offers an argument designed to show that knowledge reduces, at least in certain contexts, to mere true belief. The first to toy with this idea was Plato (in his Meno). Among the most recent is Alvin Goldman. Plato eventually rejects the reduction tout court. Goldman, by contrast, has argued in various places that there is “one sense of ‘know’ in which it means, simply, believe truly” (Goldman & Olsson 2009, 1; cf. also Goldman 2002b, 185f.; 2001, 164f.; 1999, 24f.). In what follows I defend Plato’s rejection against Goldman’s endorsement. Knowledge, I argue, does not reduce to mere true belief, at least not in

the kinds of context and for the kinds of reason that Goldman takes to be his main witnesses.1

In his most recent presentation of his argument, Goldman (with Erik Olsson) begins with the claim that there are contexts in which knowledge contrasts with ignorance and in which, for a specified person and fact, knowledge and ignorance are exhaustive alternatives.

For example, Diane either knows p or is ignorant of it. The same point can be expressed using rough synonyms of ‘know’. Diane is either aware of (the fact that) p or is ignorant of it. She is either cognizant of p or ignorant of it. She either possesses the information that p or she is uninformed (ignorant) of it. (Goldman & Olsson 2009, 1)

The argument proceeds by way of a reductio. Suppose that knowledge were, in contexts of the kind in question, justified true belief, or justified true belief that in addition fulfilled an anti-Gettier condition. Then, if p were the case but S failed to know that p, this could be so because S failed to meet the justification condition or the anti-Gettier condition or both. Thus, since the supposition is that in this context failing to know that p means being ignorant of p, S could be said to be ignorant of p despite holding the true belief that p. However, Goldman and Olsson argue, such a result would be “plainly wrong” or at least “highly inaccurate, inappropriate and/or misleading” with regard to the notion of ignorance (Goldman & Olsson 2009, 3). We can summarize this argument as follows. Suppose that:

1. Knowledge is, in every context, justified true belief (JTB) plus some further condition X. (Supposition)
2. In certain contexts, knowledge contrasts with ignorance and these alternatives are exhaustive. (Premise)

1. Authors who opt for the view that “know” ought to be analyzed invariantly as “believe truly” include Isaac Levi (1980), Crispin Sartwell (1991, 1992), and, in the German-speaking philosophical literature, Franz von Kutschera (1982), Georg Meggle (1997), and, with qualifications, Wolfgang Lenzen (1980). For a sympathetic discussion of Sartwell see Ansgar Beckermann (2001). An approving discussion of von Kutschera can be found in (Beckermann 1997); for a critical discussion of von Kutschera see (Brendel 1999, chapter 1). The most detailed defense in recent German epistemological literature of the view that there are certain circumstances in which we use “know” in the sense of holding a true (firm) belief is presented by Ernst (2002; see especially part 2). A discussion of these authors is beyond the scope of this paper, yet I believe that much of what is to follow is relevant to their arguments as well.


3. There are contexts in which ignorance contrasts with JTB plus X and these alternatives are exhaustive. (From 1 and 2)
4. One can fail to have JTB plus X regarding p but hold the true belief that p. (Premise)
5. Hence one can be ignorant of p despite having a true belief that p. (From 3 and 4)
6. (5) is false. (Premise)

(6) yields the reductio. The argument is that, since (2), (4), and (6) are true and the inferences from (1) and (2) to (3) and from (3) and (4) to (5) are valid, (1) is false. The inferences in this argument are indeed valid, and neither do I want to question premise (4) or premise (6), the latter resting on the view that being ignorant of p entails lacking a true belief that p. However, why should we think that (2) is true? Goldman and Olsson’s argument for this claim adopts an example from John Hawthorne (2002), which they (re)formulate as follows:

If I ask you how many people in the room know that Vienna is the capital of Austria, you will tally up the number of people in the room who possess the information that Vienna is the capital of Austria. Everyone in the room who possesses the information counts as knowing the fact; everybody else in the room is ignorant of it. It doesn’t really matter, in this context, where someone apprised of the information got it. Even if they received the information from somebody they knew wasn’t trustworthy, they would still be counted as cognizant of the fact, i.e., as knowing it rather than as being unaware of it. (Goldman & Olsson 2009, 1f.)

If someone “possesses the information” that p, does he/she believe that p? That seems to be the intended reading, a reading also suggested in an earlier paper where Goldman presents the story as follows:

… we would want to include anyone in the room who believes or possesses the information that Vienna is the capital of Austria, even if he acquired the information in an unjustified fashion. For example, even if his only source for this fact was somebody he knew full well was untrustworthy (but he believed him anyway) he should still be counted as knowing that Vienna is the capital of Austria. This seems, intuitively, exactly right—at least for one sense of the term ‘know’. (Goldman 2001, 165)2

2. Here the story is adapted from Hawthorne’s discussion of the example at the Rutgers Epistemology Conference in 2000. An early version of Hawthorne’s example was published in Hawthorne (2000).


Neither of these passages is explicit about whether the knowledge ascriber knows that the subjects received their information from somebody they knew wasn’t trustworthy. Yet the intended reading seems to be that the ascriber is indeed aware of this. This idea also underlies the following version of the example in Goldman (2002b):

Suppose a teacher S wonders which of her students know that Vienna is the capital of Austria. She would surely count a pupil as knowing this fact if the pupil believes (and is disposed to answer) that Vienna is the capital of Austria, even if the student’s belief is based on very poor evidence. The teacher would classify the pupil as one of those who ‘know’ without inquiring into the basis of his/her belief, and even in the face of evidence that it was a poor basis. (Goldman 2002b, 185f.)

Here Goldman explicitly maintains that the knowledge ascriber would count a student as one who knows the fact in question even when aware that the student’s epistemic basis is poor. Is that claim right?

An initial worry is that questions such as “How many people in the room know that Vienna is the capital of Austria?” or “Which of the students know that …?” are leading questions in contexts in which it is known that at least one of the candidates could come up with the correct answer. The formulations suggest that “none” is not among the expected replies and that such a response would be inappropriate. Suppose the speaker had put his query in a more neutral way, for example by asking: “Are there any people in the room (except for you and myself) who know that …?”, or “Which students, if any, know that …?”. In that case the respondent, being aware that each candidate knows that his/her source is untrustworthy, might well have replied: “no”, or “none”, respectively.

Why is this? If S knows that Vienna is the capital of Austria, S holds the true belief that Vienna is the capital of Austria. How firm will this belief be? Goldman and Olsson don’t address this issue. According to a widely held view, however, knowledge involves firmly held belief, i.e., belief in the sense of very high, or even maximal, confidence (conviction, certainty). Note that it is this kind of belief that figures, for example, in Hawthorne’s original construal of the story. He writes:

Even if someone was given the information by an informant that they knew full well they shouldn’t trust (who happened to be telling the truth on this


occasion), you will in this context count him as knowing what the capital of Austria was (so long as he had the firm belief). [Footnote Hawthorne:] Of course, someone who didn’t in fact trust their informant and merely used the informant as a basis for guessing an answer—being altogether unsure on the inside—would not count. (Hawthorne 2002, 253f., emphasis C.J.)3

If someone knows full well that his informant is untrustworthy, is it plausible to assume that he generates a firm belief, in the sense of high, or even maximal, conviction, in what the informant says? No; this will typically not be the case. At least for people who are minimally epistemically rational (in the situation at hand), the following propositions form an inconsistent triad:

(1) Knowledge requires firm belief.
(2) S is confronted with a piece of information p from a source that S knows isn’t trustworthy (in questions of the kind at issue).
(3) S knows that p (solely) on the basis of the fact described in (2).

In Goldman and Olsson’s example, (2) and (3) are assumed to be true. Hence, if their argument is to work, they must either reject (1) or take on board the idea that we can appropriately ascribe knowledge to a person even if he is highly irrational when generating a firm belief in the proposition in question. Let us begin by considering the first option.

2. Super-weak knowledge

(1) is shorthand for the claim that knowledge always requires firm belief (conviction, subjective certainty). Accordingly, this proposition may be rejected because one thinks that knowledge never requires firm belief; or because one thinks that at least sometimes, in certain circumstances, it doesn’t require firm belief. This latter claim would do the job for Goldman and Olsson. More specifically, they might retort that in order not to be ignorant in the contexts they envisage one need not hold a firm belief, but only some weaker kind of credal attitude. Weak knowledge, they may argue, requires only weak belief.4 So what, exactly, is weak belief?

3. A slightly different and, according to Hawthorne, amended version of the example appears in Hawthorne (2004). I shall discuss this later version below.

4. Two more radical options would be to maintain (i) that knowledge never requires any kind of belief, whether firm or weak, or (ii) that at least in certain circumstances knowledge requires neither firm nor weak belief. For example, David Lewis once remarked (1996, 556) that he would “even allow knowledge without belief, as in the case of the timid student who knows the answer but has no confidence that he has it right, and so does not believe what he knows.” Prima facie, this may suggest an interpretation along the lines of (ii). However, note that Lewis portrays his timid student as not being confident that he has it right. Hence what the student lacks is firm belief. Lewis thus seems to be using “believe” here in the sense of firm belief (conviction, being certain), and in that case his timid student example, even if it were convincing, would not in fact yield an argument for the view that knowledge, at least in certain contexts, doesn’t require any kind of belief. Second, I shall argue (in section 5) that if we say that a subject “knows the answer” to a question, even though we are aware that she doesn’t believe that her answer is true, we merely mean that she is in a position to produce words that can be used to express the “right” proposition. But this does not entail that the subject knows that the answer is right and hence not that she has the relevant propositional knowledge. On this topic, cf. also Colin Radford (1966), who presents examples which he thinks show that “neither being sure that P nor having the right to be sure that P, can be necessary conditions of knowing that P” (4). Another epistemologist who opts for the view that knowledge doesn’t require firm belief is Keith Lehrer. The required kind of credal attitude Lehrer calls “acceptance” (cf., for example, Lehrer 1990, 10f.).


In earlier publications, Goldman has expressed reservations about modeling degrees of belief in terms of subjective probabilities (see Goldman 1986, 324–28). Yet he does endorse the view that for many epistemological purposes an approach along such lines is a “tolerable idealization” (1999, 88). In such a framework, firm belief that p is to be modeled as a doxastic state in which the subject assigns a subjective probability of 1 to p. “Weak belief”, on the other hand, is an umbrella term for credal attitudes corresponding to a confidence interval of subjective probability assignments of 0.5 < Pr(p) < 1. Let us call weak knowledge that only involves weak belief, understood in this sense, super-weak knowledge. Super-weak knowledge is merely true weak belief. Would recourse to super-weak knowledge solve our problem?

To begin with, note that knowledge that only involves weak belief is a technical notion that departs significantly from our pretheoretical usage of the term “know”. Suppose Diane asks Alfred: “Do you know when the next train to Vienna leaves?” “Yes”, he replies, and presents the correct answer: “It will depart at 8:15 a.m.” “Are you sure?” “No”, says Alfred. “That I’m not. I guess 8:15 is right. But I received the information from someone who was strolling around in front of the train station. Admittedly, the guy seemed a bit drunk and did not appear to be very reliable.” Surely, a most natural reaction for Diane would be to reply: “OK, but why do you say then you know when the train leaves?”

Goldman thinks that his notion of weak knowledge captures one ordinary way of using the term “know”. Weak knowledge, he argues, corresponds to a strict use of “know” that “conforms to some standard, ordinary sense of the term in colloquial English (as judged by what epistemologists who attend to ordinary usage have identified as such)” (Goldman 2002b, 182). Elsewhere he writes:

Is there any ordinary sense of ‘know’ that corresponds to true belief, or have I invented it? I believe there is an ordinary sense. (Goldman 1999, 24)
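For convenience, the taxonomy now on the table can be displayed schematically. (The display is a gloss added here; the notation and the strict inequalities are read off from the surrounding text, and the firm-belief reading of weak knowledge follows Hawthorne’s construal quoted above.)

\[
\begin{array}{ll}
\text{firm belief that } p: & \Pr(p) = 1 \\
\text{weak belief that } p: & 0.5 < \Pr(p) < 1 \\
\text{weak knowledge that } p: & p \text{ is true and } \Pr(p) = 1 \\
\text{super-weak knowledge that } p: & p \text{ is true and } 0.5 < \Pr(p) < 1
\end{array}
\]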

However, examples such as the one above illustrate that we would be disinclined to concede that S knows that p, in any ordinary sense of the term, if we assume that S only holds a weak (true) belief that p. Super-weak knowledge does not seem to meet Goldman’s ordinary usage constraint. However, note that Goldman also says that, should it turn out that his ordinary usage view is untenable, he’d be happy to treat weak knowledge as a term of art (1999, 24).

Yet a second point, which undermines this option as well, is this. In our story about Diane and Alfred the protagonists suspect, but don’t know, that their source is untrustworthy. In Goldman’s examples the subjects’ situation is epistemically worse (or clearer, if you wish). In these examples the candidates do know that their source is untrustworthy. But then it is hard to see why they would form any belief at all that the capital of Austria is Vienna (and not, for example, Graz or Innsbruck). Why would any minimally rational subject under such circumstances even assign a probability greater than 0.5 to the information in question? (We are still assuming, with Goldman, that the subject has no independent evidence for the truth of the proposition.) If someone whom I know suffers from severe schizophrenia tells me that the Martians have landed, this would not motivate me to assign a probability greater than 0.5 to this proposition. (At least so I hope.) The problem with the Hawthorne-Goldman-Olsson example thus is that a subject would not normally come to “possess the information that p” even in the sense of generating a weak true belief that p when p is stated, or in some other form presented to the subject, by someone he or she knows to be an untrustworthy informant. At least for minimally epistemically rational subjects, the following propositions form a second inconsistent triad:

(1*) Knowledge requires belief.
(2) S is confronted with a piece of information p from a source that S knows isn’t trustworthy (in questions of the kind at issue).
(3) S knows that p (solely) on the basis of the fact described in (2).
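The rationale behind this second triad can be put in conditional-probability terms, anticipating the reasoning spelled out below. (The notation is a gloss added here: let A_p stand for the event that the source presents the information that p.) If S knows that the source errs more often than not on questions of this kind, then

\[
\Pr(\neg p \mid A_p) > \Pr(p \mid A_p), \quad \text{hence} \quad \Pr(p \mid A_p) < 0.5 ,
\]

so a subject whose only ground for p is A_p, and who updates on it, remains below the threshold for even weak belief.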


So far I have ignored the following complication. Following Goldman’s and Olsson’s setup of the story, (2) says unqualifiedly that S knows that their informant is untrustworthy. But in what sense of “know”? In some reliabilist sense? Or in some (other) sense of justified true belief + X? Or should we perhaps read the statement in one of the weak senses of “know” just discussed (weak knowledge involving firm belief, or super-weak knowledge)? Suppose S enjoys merely super-weak knowledge that her informant is untrustworthy. For example, imagine that instead of a drunken loiterer Alfred asks an eight-year-old child, little John, about the train schedule. Alfred believes that John is fairly intelligent and that he has put considerable effort into memorizing the schedule. Still, Alfred cannot rid himself of the feeling that he shouldn’t believe little John. Alfred is by no means certain that John is wrong. In fact he even believes weakly that John is right. Yet he isn’t sure enough. Alfred, let us suppose, is right: John is in fact wrong about the schedule. In this story, Alfred has super-weak knowledge that John is not trustworthy (he weakly and truly believes that John is not trustworthy). Might not this be a situation in which Alfred can still (justifiably) form a weak belief that what John tells him is true?

No. Even if we only weakly believe that some (potential) informant is untrustworthy, when that informant is our only source of information we should not, and normally would not, assign a probability greater than 0.5 to the information (or potential information5) retrieved from that source. For if a given source is untrustworthy, the conditional probability that the (potential) information presented by that source is false is greater than the conditional probability that this (potential) information is true.

I am not denying that there are circumstances in which we would, and should, believe a statement made by an informant of whom we have reason to believe he is untrustworthy. We would believe such an informant if we had either independent positive evidence for the proposition in question or overriding reasons for believing that our initial mistrust was unwarranted. However, in both cases the subject holds a true belief + X, where X is some fairly complex epistemic property. Situations of this kind involve an assessment of the epistemic force both of the (potentially) undermining defeater for the belief in question (“informant A, who claims that p, is untrustworthy”) and of meta-defeaters (for example: “other, apparently reliable, informants also say that p”, “on the present occasion A—despite

5. The term “information” is often used in a sense that rules out “false information” as a contradictio in adjecto. We may, however, use “information” and “informant” in a more liberal sense that doesn’t have “veritistic” implications.


his usual untrustworthiness—appears to be trustworthy”, etc.). In such circumstances the belief condition for weak knowledge could be fulfilled. Yet, the subject’s credal attitude would not simply constitute the complement of ignorance, in Goldman’s sense of having a mere true belief. Instead, S’s (true) belief would be justified (and in some fairly complex way).

The Goldman-Olsson-Hawthorne argument for weak knowledge thus runs into a dilemma. At least minimally rational epistemic subjects would, under the circumstances sketched in the examples, refrain from forming even a weak belief that the (potential) information they obtain is true. Since the authors do not reject the view that knowledge requires belief, the stories they offer in support of weak knowledge are therefore not coherent. On the other hand, if these stories were spelled out in such a way that it could coherently be maintained that the subjects generate at least a weak belief, then this would have to be on account of complex epistemic reasons they have, i.e., of reasons which override their reasons for believing that what their informant says is probably false. In the first case a minimally epistemically rational subject would refrain from forming any belief at all; in the second case the belief he does form would be justified. Either way, the subject fails to acquire even super-weak knowledge.

3. The rationality constraint

I have qualified my claim that the subjects in the Hawthorne-Goldman-Olsson type of example would not even form a weak belief, by adding: “at least if they are minimally epistemically rational” (in the situation at hand). What if we drop this qualification? In fact this seems to be what Goldman and Olsson, as well as Hawthorne, implicitly opt for in claiming that the pupils do form the belief that Vienna is the capital of Austria even though they are well aware that their source is untrustworthy. The question is whether we—if “we” refers to an average competent speaker of English—would indeed ascribe knowledge in such cases.

Consider the following example, which is closely analogous to the Hawthorne-Goldman-Olsson case. Tom wants to know what the capital of Zimbabwe is. He encounters a machine that is loaded with thirty index cards displaying the names of the thirty largest cities in Zimbabwe, including the capital. When Tom types in the question: “What is the capital of Zimbabwe?” and pushes a button, the machine spits out one card at random. Tom knows that the machine works in this way, and that it contains exactly one card


with the name of the capital. He knows therefore that the information he will receive is far more likely to be incorrect than correct; he knows that the chance of receiving the correct response is 1/30. Tom thus knows that he shouldn’t trust that the machine will provide him with a correct response to the question what the capital of Zimbabwe is. He pushes the button, picks the card and—solely on that basis—forms the belief that the city named on the card is the capital of Zimbabwe. As it happens, the name is correct (“Harare”). Would we say that Tom’s true belief is an instance of knowledge? Clearly not. The situation, I maintain, is analogous in all relevant respects to the one where someone trusts a human informant he believes to be untrustworthy.6

My original argument was that if (1*) is true, then (3) entails that S believes that p, but that this is ruled out by (2) if S is at least minimally rational. We have now considered the option of dropping the rationality constraint. In that case (2) doesn’t rule out that the belief condition for knowledge as stated in (1*) is fulfilled. Yet, if that condition is fulfilled because S holds an epistemically irrational belief, then we would not—contrary to (3)—ascribe knowledge either. Cut the pie any way you like, knowledge can’t properly be ascribed.

It may be worth adding that, were we to ask Tom himself, he would normally deny that he knows. Similarly, the students asked about the capital of Austria would normally deny that they know. (I tested the latter kind of case, but with “Harare”, in my epistemology class.) None of the students would normally say that he or she knows what the capital is if he or she is aware that the source is unreliable and thus delivered, in all probability, the wrong result. It’s like a lottery case. Although you don’t know that you have lost, you would not normally consider yourself to know that you have won either.

4. Weak knowledge and belief suspension about source reliability

Before considering an alternative explanation of the examples, let us look at two more moves that may be suggested on behalf of weak knowledge. First, why not drop the even-if clause in the Hawthorne-Goldman-Olsson

6. Note that even on purely reliabilist grounds Tom’s belief should not be classified as knowledge. For the process or method he employs—trusting a source that (i) he believes to be unreliable and that (ii) actually does produce far more false than true results (in all past, present, and future occasions of use, as we may assume)—is also de facto unreliable.


argument? This clause says that we would ascribe knowledge even if (we knew that) the subjects knew that their source is untrustworthy. Suppose it is conceded that my argument, as so far developed, is on target regarding cases in which the subjects know that their source is untrustworthy. “All right, then”, it may be responded, “so let us delete the even-if clause, and the Hawthorne-Goldman-Olsson argument stands!”

In Knowledge and Lotteries, Hawthorne presents what he declares to be an amended version of his original example. This later version doesn’t tell us what epistemic attitude the subjects have towards their informant. Hawthorne has us imagine the following case:

I give six children six books and ask them each to pick one of the books at random. All but one contains misinformation about the capital of Austria. I ask the children to look up what the capital of Austria is and commit the answer to memory. One child learns ‘Belgrade’, another ‘Lisbon’, another ‘Vienna’, and so on. I ask an onlooker who has witnessed the whole sequence of events (or someone to whom the sequence of events is described) ‘Which one of the schoolchildren knows what the capital of Austria is?’ or ‘How many of the children know what the capital of Austria is?’ It is my experience that those presented with this kind of case will answer, not by saying ‘None of them’, but by selecting the child whose book read ‘Vienna’—even though that child was only given the correct answer by luck. (Hawthorne 2004, 68f.)
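A quick calculation, added here for illustration, shows how far both random-source cases fall below the doxastic threshold from section 2 (the figures come from the examples themselves; the comparison with the 0.5 threshold is mine):

\[
\Pr(\text{Tom's card is correct}) = \tfrac{1}{30} \approx 0.03, \qquad \Pr(\text{a given child's book is correct}) = \tfrac{1}{6} \approx 0.17 ,
\]

both well short of the $\Pr(p) > 0.5$ required for even weak belief.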

First, note that in this example the question, again, is not which one of the children—if any—knows, or how many of them—if any—know, that the capital of Austria is Vienna. It is instead which one knows, or how many know, what the capital of Austria is. So this example suffers from the same problem that has already been discussed in section 1. “Which one … knows what the capital … is?” as well as “How many … know what the capital … is?” are leading questions in a context in which it is shared knowledge that at least one of the children could present the correct answer. The formulations suggest that the response “none!” is not expected and would very likely be conversationally inappropriate. I will come back to this point in the next section.

The second point is that in the present form of the story the children’s attitudes towards their informant are significantly underdescribed. In this version, the assumption seems to be that they do not mistrust the person who distributes the books. But this still requires a case distinction. If they don’t believe that their source is untrustworthy they either (i) believe that it is trustworthy or (ii) suspend belief as to whether it is trustworthy

or untrustworthy. The way Hawthorne presents his example suggests that—contrary to what was envisaged in Goldman and Olsson’s version of the school example—it is (i) that he has in mind. (“One child learns ‘Belgrade’, another ‘Lisbon’, another ‘Vienna’ …”) However, if that is the idea, the story clearly fails to yield a good case for weak knowledge! For if S acquires the true belief that p on the basis of her belief that her source is trustworthy, S doesn’t acquire a “mere true belief”, but a true belief that is (however weakly) justified. The belief is at least “subjectively justified”, as we may say, for example in the sense of its being internally rational for the subject to form that belief. On certain deontological accounts of justification one could also declare the subject to be epistemically blameless when she forms the belief on the basis of thinking that her informant is trustworthy. Such notions are of course internalist notions of justification. Yet, the kind of positive epistemic status under consideration is not confined to internalism. The process of belief formation may plausibly be described as a process or method of the type “trusting a teacher (an informant) who usually provides her pupils (hearers) with correct information”. At least in that case the children also enjoy justification in some standard process reliabilist (and externalist) sense. Hence, if it is assumed that S believes that the source of their information is trustworthy, then the knowledge one might be inclined to ascribe to S would—contrary to what is required for weak knowledge—not constitute the complement of ignorance.

What we are left with, therefore, are cases in which S is told that p (or is provided in some other way with the information that p), but neither believes nor disbelieves that the source is trustworthy. Here the answer is analogous to the one I have given above. If S suspends belief as to whether his informant is trustworthy, then if we knew this and were asked whether S knows the information he has obtained, we would, other things being equal, assume that S’s epistemic reaction displays some basic level of epistemic rationality. Accordingly, we would assume that S suspends belief as to whether what he or she was told is true, and hence not normally ascribe knowledge.7 So much for the suggestion of dropping the even-if-clause in the Hawthorne-Goldman-Olsson argument.

7. It may be worth emphasizing that this conclusion has no implications for non-epistemic forms of rationality. The fact that from an epistemic point of view the proper attitude for S is belief suspension regarding p does not of course entail that it may not be rational for S in some non-epistemic sense to act as if he/she believed that p, or to act on the assumption that p. You are lost in the mountains. A fellow mountaineer who appears to be familiar with the territory tells you that the only chance to reach the valley before nightfall is to take the trail to your left. A signpost which appears to be well maintained by the local mountaineer’s club directs you to the trail on your right. Suppose your evidence for the truth of either suggestion is evenly balanced. Even so, you had better not deliberate until sunset about which trail you should take.


There is yet another move that may be suggested on behalf of weak knowledge. So far we have, with Goldman and friends, considered only cases in which at least the knowledge ascriber knows that the source on which S bases his/her answer is untrustworthy. What if we drop that constraint? Suppose John, who knows nothing about the teaching situation in a certain class, enters the classroom and witnesses the teacher asking: “What is the capital of Austria?” Only Lisa replies “Vienna”. When John is asked which of the children, if any, knows that Vienna is the capital of Austria, might he not appropriately reply “Lisa”? If so, it may be suggested, it doesn’t alter the situation if we add that unbeknownst to John the teacher has distributed various books with only one (received by Lisa) containing the correct information, and that the children are well aware that their sources are untrustworthy.8

The response to this is that a knowledge ascription would not in fact be appropriate in this case. John’s answer would be false and, I maintain, he would accordingly, in normal circumstances, retract his claim that Lisa knows when he is informed that it was by sheer luck that she got hold of the right information. The reason is that, under normal circumstances, John would assume some basic epistemic rationality on Lisa’s part, which precludes her from holding either the firm or the weak belief that Vienna is the capital of Austria if she knows that her source is untrustworthy. And if she generates that belief nonetheless, her “epistemic behavior” would display a high degree of epistemic irrationality, which would again preclude her from being correctly classified as a knower.

From what has been said so far I conclude that, as it stands, the Hawthorne-Goldman-Olsson argument for weak knowledge fails. However, I don’t wish to deny that their examples highlight an interesting and important phenomenon that calls for explanation. I argued that, if the questions in the stories were framed in a more neutral way (“Which of the students, if any, knows that …?”), it is doubtful whether a typical addressee would indeed simply pick the student who utters the right words. However, I don’t wish to dispute that if the question is phrased in one of the ways these authors envisage, there may be circumstances in which the addressee

8. For this point I am indebted to an anonymous referee of the Grazer Philosophische Studien.


is indeed inclined to “count those pupils” who reply “Vienna”. The question remains: Why would this be so? Unfortunately, it is always much easier to criticize an analysis of a phenomenon than to come up with a plausible alternative. I don’t have the space here to present a full account of what I think might constitute such an alternative. Yet in the final section of this paper I shall at least outline an explanation of the examples that does not invoke weak knowledge.

5. Outline of an alternative explanation of the examples

I have argued that a question such as “Which of the students, if any, knows what the capital of Austria is?” would not, in contexts of the type Goldman discusses, typically be answered by mentioning the candidate who comes up with the correct answer. I also claimed that such a question, if it lacks the “if any” qualification, is a leading question when it is shared knowledge between speaker and hearer that some candidates, as we may say, “possess the correct answer”. More precisely, when both (i) know that the other knows this and (ii) assume of their interlocutor that he knows that the other knows this, a question such as the unqualified “Which of the students knows what the capital of Austria is?” suggests that the answer “none” is not expected. Instead, it invites the hearer to mention the candidate who has uttered the right word(s).

This description of the case is, I think, intuitively highly plausible. For example, compare the case to a multiple choice exam that asks “Which of the following five propositions are correct?” The supposition clearly is that at least one of the propositions listed is correct. If you were the examiner and—as is customary, for example, in British universities—your questions had to be checked by the Faculty’s exam board before you were allowed to use them, this way of phrasing the question would certainly not pass if there were no correct answer among the options offered. So there is initial evidence for the view that the which-question in the Hawthorne-Goldman-Olsson example suggests to the hearer that, in the context in question, at least one person should be counted even if it is known that every candidate knows that their source is untrustworthy.

Can this point be substantiated on a more theoretical level? More specifically, are there theories of meaning and communication within which this can be fleshed out? Suppose this were the case. Even so, it may be argued, what is wrong with leading questions? Such devices are included among our standard conversational practices. So why should not


Goldman point to such usage and take it to show that there are contexts in which “knowing that p” simply means “believing truly that p”?

Let us consider Hawthorne’s 2004 version of the example. Exactly one child in the room, suppose it is Lisa, has been so lucky as to receive a book that contains the correct information. Suppose again that the hearer is aware that Lisa knows her source is untrustworthy, and someone asks an onlooker, “Which of the children knows that Vienna is the capital of Austria?” To begin with, note that Hawthorne himself indicates that things are somewhat fishy here. In a footnote to the 2004 passage quoted above he concedes:

I note in passing that a few informants claimed to have slightly different intuitions as between ‘Which one of the schoolchildren knows what the capital of Austria is?’ and ‘Which one of the schoolchildren knows that Vienna is the capital of Austria?’. (Hawthorne 2004, 69)

However that may be, let us suppose that the addressee does respond “Lisa” when asked either of these questions. What he means, I suggest, is in neither case that, strictly speaking, Lisa knows that the capital is Vienna. Instead, what he means is a proposition that could also, and more appropriately, be expressed by saying: “Lisa knows, or possesses, the correct answer” or, still more appropriately: “Lisa is acquainted with, and disposed to utter, a word that can serve to give the correct answer”. If the respondent doesn’t use any of these sentences, this is because the context and the ways the questions are put conversationally license, in response, the simple utterance of “Lisa”.

What is crucial is that Lisa’s possessing the right answer, in the mere sense of being acquainted with the right word, does not entail that she knows that her answer is right or that she knows that the capital of Austria is Vienna. Compare the case once more to Tom and the card machine. What we would say when Tom receives the “Harare” card is that, due to a lucky accident, he has got hold of the right name. This may, in certain circumstances, also be expressed by saying that he knows, or possesses, the right answer. But since it was by sheer luck that Tom came to possess the right answer, and since Tom knows this, we would not normally say that he knows that his answer is right and hence knows that the capital of Zimbabwe is Harare.

If a subject is able to give the correct answer, it may be asked, why would she not be able to infer propositional knowledge of the correct answer from this ability?9 The subject cannot infer this because she doesn’t know that

9. This question has been raised by an anonymous referee.


she can give the right answer. If she knows that her source is untrustworthy, she doesn’t know—indeed does not even believe—that she was in fact lucky enough to get hold of the right information. In fact, if she is minimally epistemically rational, she even believes that what her book (or the teacher, the index card, or whatever) tells her is probably false. If the story is such that, when asked, the subject is nonetheless disposed to utter certain words that can be used to give the correct answer, then this may plausibly be explained by the fact that she sees this as her only chance (however small) to hit the truth.

My proposal is thus that when in the cases under consideration we concede that the subject knows, we speak loosely, assuming for the conversational purposes at hand that what we say is precise enough. To corroborate this interpretation, I shall now take a closer look at the speech acts performed in the example.

Consider Paul Grice’s famous Cooperative Principle (CP). This principle says, “contribute what is required by the accepted purpose or direction of the conversation” (cf. Grice 1989, 26f.). One of CP’s “supermaxims” concerns what Grice calls “conversational manners” and prescribes: “Be perspicuous!” Grice invokes CP in an attempt to explain implicature, which—as analyzed by Grice—is a phenomenon that pertains to assertive utterances. (Very roughly, to implicate that p is the case is to mean or imply that p is the case by saying that something else is the case.) However, in its general form stated above CP applies to non-assertive utterances as well. Gricean implicature is only one type of indirect speech act, and even though Grice’s notion of implicature may not be directly applicable to non-assertive utterances, questions can be used as well to perform indirect speech acts. By asking a question one can, we may say, conversationally imply that something is the case without saying that it is the case.

According to John Searle, indirect speech acts are speech acts in which “the speaker communicates to the hearer more than he actually says by way of relying on their mutually shared background information, both linguistic and nonlinguistic, together with the general powers of rationality and inference on the part of the hearer” (Searle 1975, 60f.). More specifically, the machinery needed to explain the indirect part of indirect speech acts includes “a theory of speech acts, certain general principles of cooperative conversation (some of which have been discussed by Grice […]), and mutually shared factual background information of the speaker and the hearer, together with an ability on the part of the hearer to make


inferences” (ibid.). A standard example is the question “Can you reach the salt?”, as uttered for example during dinner. In such a context the speaker normally utters this sentence not, or not merely, to ask a question, but also as a request to pass the salt. A key question regarding such indirect speech acts is, as Searle notes, “that of how it is possible for the hearer to understand the indirect speech act when the sentence he hears and understands means something else” (Searle 1975, 60). Basically, Searle’s (generally plausible) answer is that the hearer (H) infers the relevant facts about the speaker’s intentions by invoking some general principles of conversation and shared factual background knowledge. For example, H will interpret the question “Can you reach the salt?” as a request to pass the salt by the following kind of reasoning. (What follows is a simplified and slightly modified sketch of Searle’s account in (1975, 73f.) and (Searle 1979), where he explains indirect speech acts partly in terms of Gricean conversational implicature).

“S has asked me whether I have the ability to pass the salt. I may assume that he is cooperating in our conversation and thus that his utterance has some aim or point (principle of conversational cooperation). The context is not such as to indicate any theoretical interest on S’s part in my ability to pass the salt; for clearly, S knows that I have that ability (background knowledge). Hence his utterance is probably not meant just as a question, but has some further illocutionary point. People often use salt at dinner; there is no salt within S’s reach, but I can reach the salt (background knowledge). Therefore, since there is no other plausible illocutionary point, S is probably requesting that I pass him the salt.”

Searle also notes a number of general facts about sentences used to perform directive indirect speech acts. The most important ones are the following (cf. 1975, 67–69). (i) Such sentences do not have an imperative force as part of their meaning. (ii) They are not ambiguous between an imperative and a nonimperative illocutionary force. (iii) Yet such sentences are standardly used to issue directives. (iv) They are idiomatic, but (v) don’t constitute idioms (they don’t work, for example, like: “This is where the rubber hits the road”). (vi) When such sentences are uttered as requests, they retain their literal meaning and are uttered with, and as having, their literal meaning. (vii) Even when they are uttered with the illocutionary point of a directive, the literal illocutionary act is also performed.

I have suggested that the question posed in the Hawthorne-Goldman-Olsson example (“Which of the students know …?”) is an indirect speech act. Assuming, details aside, that Searle’s account is on the right track, the


task is to provide an analysis, and explanation, of the Hawthorne-Goldman-Olsson case that is analogous in the relevant respects to the story about indirect speech acts just outlined. I think that such an account can indeed be given. Uttering the sentence, “Which of the students know what the capital of Austria is?” constitutes, not the complex indirect speech act of asking-a-question-plus-making-a-request, but the complex indirect speech act of asking-a-question-plus-making-an-assertion. According to this proposal, what the speaker means when he/she utters this sentence in contexts of the kind in question could also be expressed by uttering (more awkwardly): “Some of the students—even though they know they received their information from an untrustworthy source—possess the correct answer to the question what the capital of Austria is. Which ones?”

In order to see that this proposal matches the fundamental tenets of Searle’s account of indirect speech acts, I shall now state some facts that correspond to the features (i)-(vii) above. I will then sketch the reasoning by which a hearer can indeed understand the speaker’s utterance as the indirect speech act the latter performs. Here are some relevant facts about the sentence, “Which of the students know what the capital of Austria is?”, as uttered in a context of the kind envisaged by Goldman.

(i) The assertive force of that sentence is not part of its meaning. Witness the fact that its literal utterance can coherently be supplemented with “bracketing” its assertive force, as in: “Which of the students know what the capital of Austria is? (Note that the answer may be ‘none’!).” Compare the multiple-choice test that asks: “Which—if any—of the following propositions is correct?” Here, too, what would otherwise be an assertive component of the indirect speech act performed by uttering the unqualified “Which of the following propositions is correct?”, is explicitly cancelled. Or consider the following children’s trick question: “You have a box filled with a hundred pounds of stones and another one filled with a hundred pounds of feathers. Which one is heavier?” Depending on their age and stage of education, the children who respond will either go for the box of stones or spot the catch and reply, with a smile: “neither”. However, in the latter case they will not typically accuse the questioner of having asked an incorrect question and argue that the meaning of the sentence “Which one is heavier?” doesn’t allow for the response “neither”. Yet, why does a corresponding utterance work, for some subjects and in some contexts, as a trick question? Because in the contexts in question it does produce the assumption that one of the boxes is indeed heavier.


(ii) The sentence is not ambiguous between the illocutionary force of a question and that of an assertion. This seems intuitively clear. Moreover it may also be pointed out, with Searle, that the onus of proof would seem to be on those who are inclined to maintain that the sentence is ambiguous. For “one does not multiply meanings without necessity” (Searle 1975, 67f.).

(iii) The sentence can standardly be used to conversationally imply an assertion. Several arguments for this have already been laid out in this paper. Remember, for example, the fact that sometimes it will be appropriate explicitly to neutralize the assertive force of the question by qualifying it, as in: “Which of the students, if any, knows …?” In certain contexts this qualification is necessary precisely because otherwise the question would be taken to imply that there is at least one candidate who knows the correct answer.

(iv) The sentence is clearly idiomatic; but it is (v) not an idiom.

(vi) When the sentence is uttered (also) to make the assertion in question (i.e., “There is at least one student who possesses the correct answer”), it still has its literal meaning.

(vii) When it is uttered with the illocutionary point of an assertion, the literal illocutionary act is also performed. After all, the assertion involved is made by way of asking a question. Moreover, the utterance may subsequently be reported by reporting the literal illocutionary act (cf. Searle 1975, 70).

Having described these features of indirect speech acts, let us finally reconstruct the reasoning through which the hearer will normally understand the speaker’s utterance as the indirect speech act. The hearer’s reasoning, I suggest, will roughly proceed along the following lines:

“S has asked me which of the students know what the capital of Austria is. Both S and I know—and S knows that I know and that I know that S knows—that the candidates know that they have received their information from an untrustworthy source. Yet unbeknownst to one of the candidates (Lisa) she got hold of the correct answer. Neither the speaker nor I have reason to assume that Lisa is not minimally epistemically rational. So Lisa—despite in some moderate sense possessing the correct answer—fails to know that Vienna is the capital of Austria because she doesn’t even believe this. I may assume, however, that S is cooperating in the conversation and trying to be perspicuous. So, had S considered


it an option for me to respond: ‘None (of the students knows that it is Vienna)’, S would have indicated this—for example by adding: ‘… if any’ to his question. The only candidate who can be counted in this context is Lisa. Therefore, when S is asking ‘Which of the students know what the capital of Austria is?’, S probably means this: ‘Some of the students—even though they know they received their information from an untrustworthy source—possess the correct response to the question what the capital of Austria is. Which ones?’”

H certainly does not have to go through any conscious process of inference to derive this conclusion. Instead, he may simply “hear” the speech act as involving the assertion. Admittedly, this is not an uncomplicated story; but that holds as well for Searle’s explanation of the—arguably simpler—indirect speech act “Can you reach the salt?”. (Searle’s exposition takes about two pages; cf. Searle 1975, 73–75.)

With the above conclusion, H is in a position to give an appropriate response. If he/she is cooperative, the response will be “Lisa”. And while this is, in the given context, a grammatical ellipsis for: “Lisa knows that Vienna is the capital of Austria”, what H means, on the basis of his/her understanding of S’s utterance, is that Lisa possesses, in the moderate sense of being acquainted with, the correct answer to the question of what the capital of Austria is. H’s response is an indirect speech act as well. More precisely, it constitutes an elliptical indirect speech act. In this case, however, the illocutionary force of the indirect component corresponds to the illocutionary force of the direct component: both speech acts are assertive.

In Knowledge in a Social World Goldman says that he believes “there is an ordinary sense of ‘know’ in which it means ‘truly believe’” (24). I believe—and I believe I believe truly—that the above discussion casts considerable doubt on Goldman’s argument for this view. However, as noted above, Goldman also writes that, should his ordinary-sense claim turn out to be untenable, he will be “prepared to proceed cheerfully with ‘weak knowledge’ as a term of art (or technical term)” (ibid.). If my arguments are on target, Goldman’s ordinary-sense claim is problematic. This does not debar him from switching to the term-of-art view. Indeed, we have seen over the last ten years how the notion of weak knowledge can facilitate pursuing novel and important epistemological projects, projects concerning which Goldman has once more presented pioneering, insightful, and inspiring work. Hence my conclusion


is not that the notion of weak knowledge should be dismissed root and branch. Yet I should like to recommend: Let us believe weakly in weak knowledge.10

10. For valuable comments and discussions I am indebted to Wayne Davis, Alvin Goldman, Michael Gorman, Katherine Munn, Gerhard Schurz, and, especially, an anonymous referee of Grazer Philosophische Studien.

LITERATURE

Beckermann, Ansgar 1997: “Wissen und wahre Meinung”. In Wolfgang Lenzen (Hg.), Das weite Spektrum der analytischen Philosophie—Festschrift für Franz von Kutschera. Berlin, New York: De Gruyter, 24–43.
— 2001: “Zur Inkohärenz und Irrelevanz des Wissensbegriffs. Plädoyer für eine neue Agenda in der Erkenntnistheorie”. Zeitschrift für philosophische Forschung 55, 571–593.
Brendel, Elke 1999: Wahrheit und Wissen. Paderborn: mentis-Verlag.
Davis, Wayne 1998: Implicature: Intention, Convention, and Principle in the Failure of Gricean Theory. Cambridge: Cambridge University Press.
Ernst, Gerhard 2002: Das Problem des Wissens. Paderborn: mentis-Verlag.
Goldman, Alvin I. 1986: Epistemology and Cognition. Cambridge, Mass.: Harvard University Press.
— 1999: Knowledge in a Social World. Oxford: Clarendon Press.
— 2001: “Social Routes to Belief and Knowledge”. The Monist 84, repr. in id., 2002a, 164–181.
— 2002a: Pathways to Knowledge: Private and Public. Oxford: Oxford University Press.
— 2002b: “What is Social Epistemology? A Smorgasbord of Projects”. In id., 2002a, 182–204.
Goldman, Alvin I., and Erik J. Olsson 2009: “Reliabilism and the Value of Knowledge”. To appear in Epistemic Value, ed. by D. Pritchard, A. Millar, and A. Haddock, Oxford University Press, 19–41. (Page references are to the manuscript version of this paper.)
Grice, Paul 1989: Studies in the Way of Words. Cambridge, Mass.: Harvard University Press.
Hawthorne, John 2000: “Implicit Belief and A Priori Knowledge”. Southern Journal of Philosophy 38, Spindel Conference Suppl., 191–210.
— 2002: “Deeply Contingent A Priori Knowledge”. Philosophy and Phenomenological Research 65 (2), 247–269.

39

Hawthorne, John 2004: Knowledge and Lotteries. Oxford: Clarendon Press. Kutschera, Franz von: Grundfragen der Erkenntnistheorie. Berlin, New York: De Gruyter. Lehrer, Keith 1990: Theory of Knowledge. London: Routledge. LeMorvan, Pierre 2005: “Goldman on Knowledge as True Belief ”. Erkenntnis 62, 145–155. Lenzen, Wolfgang 1980: Glauben, Wissen und Wahrscheinlichkeit, Wien, New York: Springer. Levi, Isaac 1980: The Enterprise of Knowledge. Cambridge, Mass.: MIT Press. Lewis, David 1996: “Elusive Knowledge”. Australasian Journal of Philosophy (74), 549–567. Meggle, Georg 1997: Grundbegriffe der Kommunikation, 2nd ed. Berlin, New York: de Gruyter. Radford, Colin, “Knowledge—By Examples”. Analysis 27 (1966), 1–11. Sartwell, Crispin 1991: “Knowledge is Mere True Belief ”. American Philosophical Quarterly 28, 157–165. — 1992: “Why Knowledge is Mere True Belief ”. The Journal of Philosophy 89, 167–180. Searle, John 1975: “Indirect Speech Acts”. in Syntax and Semantics, ed. Peter Cole and Jerry L. Morgan, New York, San Francisco, London, 59–82. — 1979: “Indirect Speech Acts”. in id., Expression and Meaning: Studies in the Theory of Speech Acts, Cambridge: Cambridge University Press.


Grazer Philosophische Studien 79 (2009), 41–62.

MELIORATIVE RELIABILIST EPISTEMOLOGY: WHERE EXTERNALISM AND INTERNALISM MEET

Gerhard SCHURZ
Heinrich-Heine-Universität Düsseldorf

Summary
In sec. 1.1 I emphasize the meliorative purpose of epistemology, and I characterize Goldman’s epistemology as reliabilist, cognitive, social, and meliorative. In sec. 1.2 I point out that Goldman’s weak notion of knowledge is in conflict with our ordinary usage of ‘knowledge’. In sec. 2 I argue for an externalist-internalist hybrid conception of justification which adds reliability-indicators to externalist knowledge. Reliability-indicators produce a veritistic surplus value for the social spread of knowledge. In sec. 3 I analyze some particular meliorative rules which have been proposed by Goldman. I prove that obedience to the rule of maximally specific evidence increases expected veritistic value (sec. 3.1), and I argue that rule-circular arguments are epistemically worthless (sec. 3.2). In the final sec. 3.3 I report a non-circular justification of meta-induction which has been developed elsewhere.

1. Meliorative reliabilist epistemology in a Goldmanian perspective

1.1 Characterization of meliorative reliabilist epistemology

I think that one of the central tasks of epistemology is meliorative. Meliorative reliabilist epistemology seeks to improve the epistemic practices of human societies in regard to the supreme epistemic goal, which according to reliabilism consists in truth-conduciveness, or—to use Goldman’s (1999) concept—in veritistic value. For this purpose meliorative epistemology must be able to tell convincingly which belief-producing methods are more and which are less reliable. In other words, it should be able to recommend certain epistemic methods or methodologies to society. For example, in my view meliorative epistemology should be able to demonstrate the superiority of scientific, fact-and-argumentation-based epistemic methods over purely authority-based epistemic methods, including religious dogmas. I don’t want to imply that meliorative epistemology should entail a dogmatic stance against all sorts of religion from the start—but if certain aspects of religious beliefs should have an epistemically rational justification, then meliorative epistemology should be able to tell us why this is so. In any case, meliorative epistemology should be able to take a stance in regard to controversies between fundamentally different worldviews and their associated epistemic methods. Several authors have emphasized that contemporary analytic epistemology should not forget its meliorative purposes (cf. Shogenji 2007, Schurz 2008a). Bishop and Trout (2005) have criticized Standard Analytic Epistemology (SAE) by arguing that it fails to serve any meliorative purposes at all. The two authors think that, in contrast to SAE, cognitive psychology has achieved admirable meliorative success, whence SAE should be replaced by cognitive psychology. I think that this conclusion of Bishop and Trout is clearly wrong, because psychology cannot answer certain fundamental epistemological questions, for example concerning the rationality of empirical induction or the existence of an external reality. But nevertheless I think that Bishop and Trout’s criticism is correct insofar as a drastic change in the orientation of SAE is necessary in order to achieve meliorative relevance. For example, Bishop and Trout demonstrate that meliorative psychology is primarily concerned with methods of 2nd order reasoning which, for example, demonstrate the superiority of certain types of prediction strategies. In contrast, most contemporary epistemologists think that 2nd order justifications are either unnecessary or impossible or both. Instead they are concerned with so-called epistemic intuitions, among which they want to find those which make up the best-calibrated system of intuitive epistemology. But modern cognitive psychology has demonstrated again and again how unreliable and often even irrational human epistemic intuitions can be—from egocentric biases and overconfidence to fundamental probabilistic or logical errors. In this respect I agree with Bishop and Trout that meliorative epistemology should definitely not base its theories on humans’ epistemological intuitions. Nevertheless I also agree with standard epistemologists that intuitions have some role to play in the architecture of meliorative epistemology. But in my view, this role has to be confined to setting up the goals of epistemology (such as veritistic value or computational speed), while instrumental questions concerning optimal means for achieving these goals should definitely not be answered by appeal to intuitions but by recourse to experience and argument.


Bishop and Trout’s criticism does not apply to the new epistemological wave of reliabilism which has been created by Alvin Goldman. Goldman’s reliabilist epistemology (e.g. 1986, 1988, 1993) is veritistic and cognitivistic—it orients epistemology towards a clearly defined goal, namely the systematic achievement of true beliefs by reliable belief-forming methods or processes. In (1999), Goldman adds to his reliabilist framework the social dimension of the production of knowledge, with its high degree of division of labor. In the same book Goldman also suggests a variety of rules for the acquisition of beliefs which are clearly meliorative insofar as Goldman intends to show that obedience to them increases the veritistic value of one’s beliefs. For example, ch. 4 of (Goldman 1999) deals with the veritistic value of basing one’s beliefs on evidence and testimony, ch. 5 with strategies of argumentation, ch. 6 with technologies of communication, ch. 8 argues for the superiority of science (by appeal to meta-induction in the sense of sec. 3.3 of this paper), and ch. 10 deals with the advantage of independent testimonies. In conclusion, Goldman’s epistemology is reliabilist, cognitive, social, and meliorative. I strongly sympathize with these four aspects. There is, however, one aspect of Goldman’s epistemology which does not have my full agreement: Goldman’s concept of justification is externalist, while I will argue that the concept of justification has to include internal conditions in order to serve its meliorative purposes. The rest of this paper is organized as follows: in the next subsection 1.2 I will discuss a problem in Goldman’s social reliabilist framework: his weak notion of knowledge. In section 2 I will work out my philosophical thesis that in order to be meliorative, epistemology should include internal conditions in the definition of justification and knowledge. In section 3 I will then examine some particular meliorative epistemic rules which are discussed in (Goldman 1999), including a discussion of the possibility of justifying induction in a non-circular way.

1.2 A problem in Goldman’s weak notion of ‘knowledge’

In (1999) Goldman argues that there exists a weak notion of knowledge which he equates with mere true belief, as opposed to the strong notion of knowledge as justified true belief. Goldman focuses on this weak notion of knowledge in order “to circumvent the intricate issues that surround the notion of S(trong)-knowledge” (1999, 24). Goldman also argues that in most ordinary (or common sense) contexts, people’s usage of “knowledge” coincides with ‘weak knowledge’. I think that the following facts (F1) and (F2) speak strongly against Goldman’s thesis of ‘weak knowledge’:

(F1) In (almost) all ordinary contexts it is meaningful and rationally coherent to assert the following in regard to a proposition p: I believe p but I don’t know p.

(F2) If, in a given context, ‘knowledge’ meant ‘true belief’, then the assertion “I believe p but I don’t know p” would be rationally incoherent—more precisely, it would contradict the rationality principles (R1–5) below.

While fact (F1) is an empirical assertion, fact (F2) is a logical assertion which is proved below. Although according to my intuitions fact (F1) is pretty evident, I know that this isn’t so for other philosophers, whence I will provide evidence for fact (F1). In all situations of the following sort it may be required, by reason as well as by responsibility, that you tell your partner that “you believe it but you don’t know it”:

• you and your partner have to decide between two options with pretty deterministic results, and wrong decisions are costly;
• you may either know which is the right option by experience or memory—in which case you should tell your partner “I know it” (provided your perception and memory are functioning properly);
• or you may have a belief about which is the right option that is merely based on reasons of plausibility—and exactly this is the situation in which you should tell your partner that you merely believe it but you don’t know it.

Situations of this kind may occur in ‘almost all contexts’. For example, think of a situation in which you and your partner are on a walk, and arriving at a crossroads you are uncertain about the right way back to your final destination, but your time is already running out. For decision-theoretic reasons it is extremely important in such a situation, for you as well as for your partner, to know whether you merely believe or whether you know that your suggested option is the right option. It depends on the balance of costs and gains whether mere reasons of plausibility are sufficient for you to choose the option which you believe is the better one, or whether you require knowledge, and hence an extremely high degree of rationally justified belief. The latter alternative is appropriate if the risk of making an error is much greater than the gain of hitting upon the truth. For example, you terminate your present job in order to get a new one not if you believe but only if you (believe to) know that the new job will be better paid than your old one. In contrast to fact (F1), fact (F2) is a matter of logical proof. It is entailed by the following five rationality principles for rational believers:

(R1) Rational believers know what the notion of knowledge means in the given context—in particular, whether it means ‘true belief’ or something more.
(R2) Whenever rational believers believe a finite set of propositions p1,…,pn, and p1,…,pn entails q, then rational believers believe q.
(R3) A rational believer S believes p iff S believes that p is true.
(R4) A rational believer S believes p iff S believes that she believes p.
(R5) Rational believers never believe inconsistencies.

Proof of fact F2:
(1) S believes: (S believes p and not (S knows p))  [assumption]
(2) In the context of (1), ‘knowledge’ means ‘true belief’  [assumption]
(3) S believes: (S believes p)  [from (1) and (R2)]
(4) S believes: (not (S knows p))  [from (1) and (R2)]
(5) S believes: ((S knows p) iff (S believes p and p is true))  [from (2) and (R1)]
(6) S believes: (not (S believes p and p is true))  [from (4), (5) and (R2)]
(7) S believes: (not (p is true))  [from (3), (6) and (R2)]
(8) S believes: (not-p is true)  [from (7) by (R2)]
(9) S believes not-p  [from (8) by (R3)]
(10) S believes p  [from (3) by (R4)]
(11) S believes: (p and not-p)  [from (9), (10) by (R2)]
(12) S violates (R5)  [from (11)]
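The derivation can also be machine-checked. The following sketch (my formalization, not part of the paper) compresses steps (1)–(12), with ‘S knows p’ already unfolded to ‘S believes p and p’ via assumption (2) and principle (R1); principle (R3) becomes trivial once ‘p is true’ is identified with p:

```lean
-- A minimal sketch of fact (F2), assuming a belief operator Bel over Prop
-- together with the closure and reflexivity principles used in the proof.
axiom Bel : Prop → Prop                                        -- "S believes …"
axiom R2  {a b : Prop}   : Bel a → (a → b) → Bel b             -- closure (unary)
axiom R2' {a b c : Prop} : Bel a → Bel b → (a ∧ b → c) → Bel c -- closure (binary)
axiom R4  {a : Prop}     : Bel (Bel a) → Bel a                 -- one half of (R4)
axiom R5  {a : Prop}     : ¬ Bel (a ∧ ¬ a)                     -- no believed contradictions

-- "I believe p but I don't know p", with knowledge read as true belief:
theorem factF2 (p : Prop) (h1 : Bel (Bel p ∧ ¬ (Bel p ∧ p))) : False := by
  have h3  : Bel (Bel p)         := R2 h1 (fun h => h.1)                  -- step (3)
  have h6  : Bel (¬ (Bel p ∧ p)) := R2 h1 (fun h => h.2)                  -- steps (4)-(6)
  have h9  : Bel (¬ p)           := R2' h3 h6 (fun h hp => h.2 ⟨h.1, hp⟩) -- steps (7)-(9)
  have h10 : Bel p               := R4 h3                                 -- step (10)
  have h11 : Bel (p ∧ ¬ p)       := R2' h10 h9 (fun h => h)               -- step (11)
  exact R5 h11                                                            -- step (12)
```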

The rationality principles (R1–5) are standard. (R4) requires reflexive awareness of one’s beliefs, and that is obviously required for rational persons who are able to discriminate between what they believe and what they believe-to-know. (R2) requires a certain amount of ‘logical omniscience’, but this idealization is harmless. Even if the principles (R1–5) are satisfied not for all but only for some propositions, this is sufficient to produce contradictions, if those beliefs which the person believes to believe but not to know are about propositions for which principles (R1–5) are satisfied. In conclusion, facts (F1) and (F2) imply that Goldman’s thesis about the weak notion of knowledge in ordinary contexts is wrong. Assuming that fact (F1) is correct, the conclusion is that in (almost) all ordinary contexts (maybe with a few strange exceptions) knowledge means more than mere true belief—because one may always reasonably assert that one believes but does not know a given proposition p.

2. Meliorative externalism: an externalist-internalist hybrid conception

For traditional internalist epistemology, the condition of justification describes an internal and mentally accessible property (or capacity) of the knowing person. The rise of externalist epistemologies was mainly triggered by two problems of internalist epistemologies. First, the so-called regress problem: it seems that no justification can be complete in the sense that every premise on which it depends—including second-order premises concerning the reliability of non-deductive argument-patterns—can have a non-circular justification as well (cf. Bergmann 2003, ch. 1.3; Grundmann 2003, ch. 3.2). If the regress problem is unsolvable, then no internalist justification can be sufficient to exclude the possibility of radically skeptical scenarios. Second, the so-called Gettier problem (Gettier 1963): it may happen that one’s internal justification of a true belief depends on lucky accidents of the environment (e.g., when a person perceives a non-faked barn in the midst of faked barns). Internalists have concluded from the two problems that the condition of internal justification is too weak, and that a fully satisfying notion of knowledge needs a further condition (cf. Lehrer 1990, 134f.). Externalists, however, have suggested removing the internal conception of justification completely, and replacing it by a purely externalist notion of justification. The most prominent explication of an externalist notion of justification is Goldman’s condition of reliability, which has been explicated by Goldman in several different versions (compare Goldman 1979, 1986, 1988), and which I suggest explicating as follows:

(Ext) A person’s belief-in-p is justified iff the (suitably selected type of) process P which produced the person’s belief-in-p is reliable in the relevant (suitably selected class of) circumstances C, which means that the probability that P leads in C to a true belief of the given person is (sufficiently) high.

The question of how the type of belief-producing process P and the relevant class of circumstances C are selected is called the reference class problem. One usually requires, roughly speaking, that (a) the circumstances in C contain all epistemically relevant aspects of the actual circumstances but otherwise are as normal as possible, and (b) the type of process P contains all factors which were causally relevant to the formation of the person’s belief in p. I cannot deal with the reference class problem here, which is treated in the papers of Grundmann and Baumann in this volume. The externalist notion of justification as a reliable belief-forming process no longer depends on the mentally accessible properties of the believing subject—it is, rather, an objective property of the world, whether or not the believing subject is aware of this property or justified in believing that it holds. For example, we are externally justified in performing inductions iff the world is inductively uniform, whether or not we or anyone else can justify that the world is inductively uniform. This sounds strange to the internalist (cf. Fumerton 1995, 171–181), but is typical of the externalist’s notion of justification. As a consequence, a central internalist knowledge principle breaks down in the resulting externalist conception of knowledge—the so-called K-K-principle, or reflexivity of knowledge: if one knows p, then one also knows that one knows p. In this regard, the externalist’s notion of knowledge deviates from the ordinary concept, since the K-K-principle seems to be deeply entrenched in the ordinary concept of knowledge. This is made plain by the following example: You are in a situation (similar to the one in sec. 1.2) in which you have to decide between two options, or ways, and your partner tells you “the way towards the left is the right one”. You ask back “do you really know this?” and your partner gives his first answer (A1): “Yes”. You are still not certain enough and ask again “do you really know this?”, and now your partner replies with the answer (A2): “I don’t know”. Intuitively, the two answers (A1) and (A2) are rationally incoherent. But according to externalism, they are not incoherent at all, because (A1) means “I know that p” and (A2) means “I don’t know whether I know that p” (where p expresses the suggested option). Since the K-K-principle is invalid for externalism, (A1) and (A2) are coherent. On the other hand, the fact that we have the strong intuition that the two answers given by our partner are incoherent shows that we implicitly assume the K-K-principle to be valid.


However, the fact that the externalist conception of justification leads to a violation of the deeply entrenched K-K-principle does not by itself constitute a strong disadvantage of externalist knowledge. The externalist may argue that he buys this disadvantage in order to avoid the regress and Gettier problems—and apart from that, human intuitions are not always sound anyway. In the next section, however, I will show that the externalist’s re-definition of knowledge has a much stronger disadvantage: it deprives knowledge of much of its meliorative function, because purely external knowledge, without internalist knowledge-indicators, cannot be recognized as knowledge and hence has difficulties spreading through society.

2.1 Including reliability-indicators in meliorative externalist knowledge

By (Ext)-knowledge I understand true belief which satisfies the condition (Ext) for justification. Now let us ask: which properties must a piece of (Ext)-knowledge of a person possess in order to have a meliorative function for the epistemic practice of the person’s society? Obviously, the person’s belief must be recognizable as being justified—as being reliable enough to be taken over by other persons. One may argue that indicators of the reliability of a belief-producing process always exist, in the form of the person’s success record. But this is not true. First of all, cognitive processes are typically non-observable, because one cannot look into another person’s mind. They rather have to be indicated. For example, if a person tells me that God spoke to her last night, or that her heart tells her something, then indicators concerning the type of this process are largely lacking. Second, even if the type of cognitive process is recognized by indicators, the track record concerning the trustworthiness of this person need not be accessible. Third, even if such a track record is accessible, the estimation of the person’s reliability based on this track record presupposes the operation of induction, which itself is in need of 2nd order justification. Therefore I suggest the following two additional conditions for meliorative (Ext)-knowledge:

(MelExt) Subject S’s (Ext)-knowledge p is meliorative iff the (kind of) process by which S’s belief-in-p was produced carries some indicators of its reliability. Thereby, a property ψ(P) of a belief-producing (kind of) process P is an indicator of reliability iff ψ(P) is objectively correlated with the reliability of the process P, and
(1.1) ψ(P) is mentally accessible to the justification-relevant subject(s) X, and
(1.2) the justification-relevant subject(s) X can demonstrate (by way of arguments) that ψ(P) indicates either (1.2.1) the reliability of P, or at least (1.2.2) the optimality of P in regard to reliability.

Conditions (1.1) and (1.2) of (MelExt) bring back the internalist justification requirements for knowledge—at least for that version of internalism which I call reliability-internalism. Reliability-internalism understands justification as a system of arguments which indicates the reliability of the belief-forming process.1 This is very different from deontological internalism or virtue internalism, which understand justification as the satisfaction of certain intuitively given epistemic norms or rules (cf. Alston 1989, 85ff; Greco 2004). The resulting notion of knowledge which satisfies the conditions of (MelExt) is called meliorative externalist knowledge, in short (MelExt)-knowledge. (MelExt)-knowledge is an externalist-internalist hybrid notion of knowledge which combines an externalist reliability condition with an internalist justification condition. Other authors have also suggested such hybrid notions (e.g. Alston 1988, Henderson, Horgan and Potrč 2007, Comesaña 2008), because these notions enjoy the advantages of both the externalist and the internalist perspective. I see two major advantages of the hybrid notion of knowledge, as opposed to the purely externalist notion. First, (MelExt)-knowledge satisfies the K-K-principle, because—roughly speaking—if reliable processes are furnished with reliability-indicators, then they are reflexive. Second, and more importantly, I will show in subsection 2.2 that (MelExt)-knowledge has a veritistic surplus value over simple (Ext)-knowledge, concerning the social spread of knowledge. Several subtleties are involved in the definition (MelExt), which I explain now.

1. Of course, not all kinds of ‘plausible arguments’ are admissible. More precise explications of admissible argument structures have to be left to future work. For example, chains of arguments must not be circular (see sec. 3.2 of this paper). Deduction and induction can only be justified by way of optimality justifications. Assuming deduction and induction, many more specific reliable belief-producing processes can be justified as reliable by induction (and deduction) from their success records. Etc.


(1.) First-person versus third-person internalism: In traditional first-person internalism, the believing subject S and the justification-relevant subject X are identical. In this case, the believing subject him- or herself possesses adequate means to demonstrate the reliability of his or her beliefs. For the social spread of knowledge, however, it is only necessary that conditions (1.1–2) are realized by some members X of the community, not necessarily by the believer him- or herself, but for example by certain experts who evaluate the reliability of informants. In this understanding we get the account of third-person internalism which has been developed in (Schurz 2008a), and the similar account of so-called community-internalism which has been developed in (Shogenji 2007). Without going into details, let me briefly point out the differences between the two accounts. In Shogenji’s account, the expert has to be an actual member of the community. This account implies a certain amount of community-relativism, and Shogenji is well aware of this problem (2007, p. 33, fn. 35). Schurz (2008a) prefers the possibilist version, which merely requires the existence of a naturalistically possible subject X satisfying conditions (1.1) and (1.2) of (MelExt). This avoids cultural relativity, at the cost that it does not guarantee actual but merely possible meliorative effects. The difference between externalism and third-person internalism can be explained by means of the following example: if there were a God who told his adherents the truth during their sleep but whose reliability is not scientifically detectable, then the adherents of this God would possess (Ext)-knowledge without possessing (MelExt)-knowledge, not even in the weak possibilist third-person sense.

(2.) Indicators as reasons—weak (1st order) vs. strong (2nd order) justification: An indicator ψ(P) is mentally accessible to the justification-relevant subject X iff X has the disposition to become aware of ψ(P) when confronted with the process P by which the person S produced her belief. The presence of mentally accessible indicators supervenes on the mental properties (dispositions) of S and of X, and in this sense conditions (1.1) and (1.2) are internalist conditions, even in the case of third-person internalism, in which X ≠ S. Indicators which are mentally accessible to X are nothing but (internal) reasons of X for X’s belief that S’s belief-in-p is justified. Condition (1.1) is an internalist first order justification condition: it requires that X possess reasons for the justifiedness of S’s belief-in-p, for example that S’s belief-in-p was based on perceptual evidence of S. Condition (1.2), on the other hand, captures internalist second order justification conditions: X must be able to demonstrate the reliability, or at least the reliabilist optimality, of the process that produced S’s belief-in-p. Condition (1.2) is admittedly rather strong, and various authors (including, e.g., Alston 1988, or Conee and Feldman 1985) have argued that the internalist condition should be restricted to (1.1). Condition (1.2), however, is important for the rational dialogue between adherents of different world-views. World-views typically differ from each other not only in their belief systems, but also in their favored epistemic methods, or in other words, in their suggested knowledge-indicators. For example, while the scientist suggests empirical confirmation as knowledge-indicator, the religious believer prefers agreement with the bible as supreme knowledge-indicator. For a rational discussion of such alternatives one obviously needs good second order justification strategies. The following example illustrates the importance of the possession of knowledge-indicators by confronting the knowledge claim of an empirical scientist with that of a religious fundamentalist.

The empirical scientist says: Life is the result of evolution; I conclude this from the empirical evidence by induction or abduction. The externalist analysis: This is knowledge if it was caused by evidence via a reliable cognitive mechanism—though I don’t know whether this is the case.

The religious fundamentalist says: Life has been created by an omnipotent God; I conclude this from the fact that sometimes God seems to speak to me. The externalist analysis: This is knowledge if it was caused by this God in a reliable way—though I don’t know whether this is the case.

The internalist analysis: Scientific induction/abduction can be justified as being reliable by way of a 2nd order justification, but blind faith in God cannot be so justified in any way. While the pure externalist has to remain neutral regarding the knowledge-claims of the opposing camps, the meliorative (internalist) externalist can differentiate between the two knowledge-claims in terms of their internalist justification status.


2.2 The epistemic surplus value of reliability-indicators for the social spread of knowledge

In the domain of cultural evolution (cf., e.g., Boyd and Richerson 1985, Mesoudi et al. 2006), acquired beliefs are transmitted horizontally, between the members of a population (e.g. a tribe), and vertically, from generation to generation. For the spread of knowledge (as opposed to erroneous beliefs of all sorts) it is of utmost importance that reliably produced beliefs reproduce faster than unreliably produced beliefs (e.g., beliefs based on blind faith). For this purpose, reliably produced beliefs have to be recognizable by the members of the population via reliability-indicators. Let me illustrate my point by an example. Assume a pre-modern population of humans with a subpopulation of purported information-providers (medicine men, priests, etc.), out of which, say, only 10% are truly reliable informants, who base their information on empirical induction instead of relying on intuition or religious faith. The situation is illustrated in fig. 1.

[Fig. 1: The problem of social spread of reliable information. Three nested circles: the reliable informants (dark inner circle) within the purported informants (light circle) within the population of epistemic subjects.]

As long as the members of the population cannot discriminate the informants in the light from those in the dark circle in fig. 1, the reliable information in the dark circle will be of little use for the increase of veritistic value, because this information cannot spread through the society—the frequency of its believers cannot grow as opposed to the frequency of believers in the pseudo-information of the purported informants. For example, in a primitive religious tribe a genius who can reliably heal diseases will be unable to compete with the witch doctors as long as the members of the tribe cannot discriminate reliable from non-reliable healing practices. But the ability to discriminate between reliable and non-reliable informants requires exactly the presence of reliability-indicators according to the conditions of (MelExt). This is the reason why (MelExt)-knowledge can spread in an epistemic population much faster than mere (Ext)-knowledge. This constitutes the veritistic surplus value of (MelExt)-knowledge over (Ext)-knowledge. One might object that the greater reliability of the information in the dark circle as compared to that in the light circle will become evident to the other members of the population as soon as this information is put to the test in practice. However, the point of social knowledge based on the division of epistemic labor is exactly that it is impossible for the information-user to test the success record of every purported expert. This does not mean, however, that the users of purported expert information must trust the experts blindly. They will rather look for the presence of reliability-indicators: good reasons which the expert (or someone else) can give for the truth of the expert’s information. It follows that information which is furnished with reliability-indicators will be more attractive to information-users and, hence, will spread faster than information without such reliability-indicators.
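This spreading dynamics can be made concrete with a toy replicator model (a sketch of my own; the ‘attractiveness’ parameters are invented for illustration and are not the author’s):

```python
# Toy replicator dynamics: the share x of agents holding the reliably produced
# true belief grows only if that belief is more attractive to imitators,
# which is what a recognizable reliability-indicator provides.

def evolve(x0, attract_true, attract_pseudo, generations=25):
    x, history = x0, [x0]
    for _ in range(generations):
        # Discrete replicator update: a belief is adopted in proportion to
        # its current frequency times its attractiveness to imitators.
        mean_attract = x * attract_true + (1 - x) * attract_pseudo
        x = x * attract_true / mean_attract
        history.append(round(x, 3))
    return history

# Without indicators both beliefs look alike to hearers (equal attractiveness):
print(evolve(0.10, 1.0, 1.0)[-1])   # stays at 0.10 -- bare (Ext)-knowledge stalls
# With an indicator the reliably produced belief is preferentially adopted:
print(evolve(0.10, 1.2, 1.0)[-1])   # climbs toward 1 -- (MelExt)-knowledge spreads
```

The design mirrors the argument: the only difference between the two runs is whether imitators can tell the reliably produced belief apart, which is exactly what a reliability-indicator supplies.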

I think that the argument from cultural evolution explains why the internalist K-K-condition of knowledge—for example in the form “if you don’t know whether you know it, then you don’t know it”—is so deeply entrenched in our society. Moreover, note that not only the social spread of true beliefs, but also the social learning of reliable cognitive processes requires that reliability can be cognitively detected and understood by way of reliability-indicators. Without such indicators, higher forms of cultural evolution based on learning by teaching would hardly be possible. Let me compare my point with Goldman and Olsson’s solution to the value-of-knowledge problem. Goldman and Olsson (2009) argue that the veritistic surplus value of reliably produced true belief over mere true belief lies in the fact that the possession of reliably produced true beliefs increases the frequency of having true beliefs of the same type in the future. For if one possesses a reliably produced true belief, then one possesses a systematic mechanism which produces these beliefs with a high truth rate. In the same sense, possessing a reliably produced true (socially communicated) belief together with reliability-indicators increases the frequency of members of the society who possess true beliefs of the same type. For if one possesses reliability-indicators for one’s socially communicated beliefs, this increases the frequency of conspecifics who believe one. Now: if the first kind of surplus value is a reason to include the condition of reliability in the definition of knowledge, then why should the second surplus value not also be a reason to include the condition of knowledge-indicators in the definition of knowledge? Maybe this constitutes an argument in favor of (MelExt)-knowledge which is even acceptable for reliabilists.

3. Rules of meliorative epistemology

In this final section I discuss some concrete rules of meliorative epistemology. For each such rule, one should be able to demonstrate its epistemic rightness—which means that obedience to the rule increases the truth-frequency of one’s beliefs, or (even stronger) the veritistic value of one’s degrees of belief. In my view, demonstrations of epistemic rightness are the decisive difference between effectively meliorative epistemology and mere intuition-based epistemology. Before I turn to the question of the epistemic rightness of some particular rules, let me list some typical meliorative rules which have been studied in the literature:

(Rule R1) Evidence: Base your hypothetical beliefs on probabilistically relevant evidence—e.g., by way of Bayesian conditionalization (Goldman 1999, 121).
(Rule R2) Maximally specific evidence: Base your hypothetical beliefs on a set of relevant evidence which is as comprehensive as possible (Goldman 1999, 145f; Carnap 1950, 211).
(Rule R3) Condorcet jury theorem: Base your hypothetical beliefs on as many conditionally independent pieces of evidence as possible (Shogenji 2005).
(Rule R4) Prediction methods: Statistical prediction methods are more reliable than intuitive expert predictions (Bishop and Trout 2005).

The demonstration of the epistemic rightness of these rules depends on certain presuppositions. For example, the epistemic rightness of (R1) and (R2) depends on the assumptions that the evidence statements are true, and that the subjectively estimated likelihoods are at least close to the objective likelihoods (cf. § 3.1). In particular, the demonstration of the rightness of all four rules depends on the presupposition that the reliability of given kinds of evidence or indicators can be justified by induction. But how should one justify induction? At this point meliorative epistemology meets the fundamental skeptical challenges of second order justification: how can one justify the most fundamental reasoning processes without committing the fallacy of a circle or an infinite regress?

In the final subsections 3.2–3.3 I will discuss this question, drawing on the example of the problem of induction. In the next subsection 3.1 I will show—using rules (R1–2) as examples—what demonstrations of epistemic rightness may look like, thereby attempting to close an open question of Goldman (1999).

3.1 The veritistic value of the rule of maximally specific evidence

Goldman defines the quantitative veritistic value V(±h) of a person’s belief in regard to a two-fold alternative of hypotheses ‘h versus non-h’ (±h) as follows (1999, 88–90):

V(±h) = B(h*), where h* is the true element of {h, non-h}, and ‘B(x)’ is the rational degree of belief (subjective probability) of the given person in x.

This definition can be extended to the degree of belief (of a person) in an n-fold partition H = {h1,…,hn} of hypotheses (a set of n mutually disjoint and exhaustive possibilities) as follows:

V({h1,…,hn}) = B(h*), where h* is the (unique) true element of {h1,…,hn}.

Assume that P(e|h) is a measure of the objective probability (the so-called likelihood) of an evidence e given the truth of the hypothesis h. According to the rule of conditionalization, if e becomes known and B is the old belief function, then the new degree of belief in hk is equated with B(hk|e), and according to Bayes’ rule, B(hk|e) is computed as

B(hk|e) = P(e|hk)·B(hk) / B(e) = P(e|hk)·B(hk) / Σ_{1≤i≤n} P(e|hi)·B(hi).

Assume that E is an experiment with the possible outcomes {e1,…,ek}. The objective probabilities of these outcomes given the unknown true hypothesis h* of the partition {h1,…,hn} are denoted by P(ej|h*). Then the expected veritistic value of one’s beliefs in partition H is given as

Vexp(H|E) = Σ_{1≤i≤k} P(ei|h*)·B(h*|ei).

The question of the meliorativity of rule (R1) now amounts to the following: does conditionalization of one’s beliefs over a partition of hypotheses on acquired evidence (out of a partition of evidences) always increase one’s expected veritistic value? Goldman and Shaked (1992) prove that this is indeed the case, provided the evidence is probabilistically relevant (and the priors nonextreme). In other words, it holds that Vexp(H|E) > V(H). Intuitively speaking, the reason for this result is that it is more probable (i) to obtain evidence which probabilistically favors a true hypothesis than (ii) to obtain evidence which probabilistically favors its negation; and because in case (i) conditionalization increases the veritistic value, while in case (ii) conditionalization decreases the veritistic value, one achieves an increase of veritistic value in the long run. In (1999, 145f) Goldman conjectures that the application of rule (R2) also increases veritistic value, but he reports that so far a proof of this conjecture is missing. In the following I give such a proof. Theorem (1.1) is the precise version of the Goldman-Shaked result (the epistemic rightness of rule R1), and theorem (1.2) is the precise version of Goldman’s conjecture (the epistemic rightness of rule R2). The reason why I have included theorem (1.1) is that my proof of theorem (1.2) will make use of it.

Theorem 1: Assume a partition of hypotheses H = {h1,…,hn} with the (unique) true element h*, and two partitions of evidences E := {e1,…,ek} and F = {f1,…,fm}, where (a) the likelihoods are objective: B(ei|h*) = P(ei|h*), and B(fj|h*∧ei) = P(fj|h*∧ei), (b) E is relevant to h* (P(ei|h*) ≠ P(ei)), and F is relevant to h* conditional on E (P(fj|h*∧ei) ≠ P(fj|ei)), and (c) the prior B(h*) is nonextreme (≠ 0, 1). Then:
(1.1) The objectively expected change of the veritistic value of one’s beliefs in H under conditionalization on the partition of evidences E is positive.
(1.2) This change is strictly greater under conditionalization on the more comprehensive partition E×F = {ei∧fj : 1≤i≤k, 1≤j≤m} than under conditionalization on the less comprehensive partition E.

Proof: Concerning theorem (1.1): The expected change of the veritistic value of one’s beliefs in H under conditionalization on E is given as the term in (1), since V(H) := B(h*):

(1) Σ_{1≤i≤k} P(ei|h*)·(B(h*|ei) − B(h*)).

A nice proof that the term in (1) is strictly positive is found in Goldman and Shaked (1992, p. 246f.). This proves theorem (1.1). Concerning theorem (1.2): Likewise, the expected change of the veritistic value of one’s beliefs in H under conditionalization on the partition of evidences E×F is given as:

(2) Σ_{1≤j≤m} Σ_{1≤i≤k} P(ei∧fj|h*)·(B(h*|ei∧fj) − B(h*)).

I will mathematically transform the term in (2) into a sum of the term in (1) and something strictly positive, and this proves theorem (1.2). By trivial extension of the right bracket in (2) and since P(a∧b|c) = P(a|b∧c)·P(b|c), we can transform (2) into:

(3) Σ_{1≤j≤m} Σ_{1≤i≤k} P(fj|h*∧ei)·P(ei|h*)·((B(h*|ei∧fj) − B(h*|ei)) + (B(h*|ei) − B(h*))).

Obviously, (3) is the sum of the following two terms (4) and (5):

(4) Σ_{1≤j≤m} Σ_{1≤i≤k} P(fj|h*∧ei)·P(ei|h*)·(B(h*|ei) − B(h*)).
(5) Σ_{1≤j≤m} Σ_{1≤i≤k} P(fj|h*∧ei)·P(ei|h*)·(B(h*|ei∧fj) − B(h*|ei)).

Since the second and the third factor of the product in (4) (from left to right) are independent of the index j, (4) can be rewritten as follows:

(4*) Σ_{1≤i≤k} P(ei|h*)·(B(h*|ei) − B(h*))·(Σ_{1≤j≤m} P(fj|h*∧ei)),

and since Σ_{1≤j≤m} P(fj|h*∧ei) = 1, (4*) reduces to

(4**) Σ_{1≤i≤k} P(ei|h*)·(B(h*|ei) − B(h*)),

which is equal to the term in (1). (5) can be rewritten as

(5*) Σ_{1≤i≤k} P(ei|h*)·(Σ_{1≤j≤m} P(fj|h*∧ei)·(B(h*|ei∧fj) − B(h*|ei))).

The right factor of the term in (5*), Σ_{1≤j≤m} P(fj|h*∧ei)·(B(h*|ei∧fj) − B(h*|ei)), describes the expected change of the veritistic value of beliefs in H, conditional on having obtained evidence ei, under further conditionalization on the evidence partition F. Note that (i) the conditionalized functions P(·|ei) and B(·|ei) are probability resp. belief functions, (ii) the Goldman-Shaked result that term (1) is strictly positive holds for all functions P and B, thus also for P(·|ei) and B(·|ei), and (iii) F is relevant to h* conditional on ei. The facts (i)–(iii) and the Goldman-Shaked result imply that the right factor of the term (5*) is strictly positive (for every choice of i). So (5*) is strictly positive, whence (2) = (1) + (5*) is greater than (1). Q.E.D.
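Theorem 1 can also be checked numerically. The following sketch (my illustration; the prior and the likelihoods are invented, chosen so that conditions (a)–(c) of Theorem 1 hold) computes the expected gains (1) and (2) for a two-hypothesis example:

```python
from itertools import product

B = {"h1": 0.4, "h2": 0.6}            # nonextreme prior; h1 is the true h*
Pe = {"h1": {"e1": 0.8, "e2": 0.2},   # objective likelihoods P(e|h)
      "h2": {"e1": 0.3, "e2": 0.7}}
Pf = {"h1": {"f1": 0.9, "f2": 0.1},   # P(f|h and e); for simplicity chosen
      "h2": {"f1": 0.4, "f2": 0.6}}   # independent of e, still relevant to h*
TRUE = "h1"

def post(h, e=None, f=None):
    """B(h | e [, f]) by Bayes' rule, using the objective likelihoods."""
    def like(hi):
        return (Pe[hi][e] if e else 1.0) * (Pf[hi][f] if f else 1.0)
    return like(h) * B[h] / sum(like(hi) * B[hi] for hi in B)

# Term (1): expected veritistic gain under conditionalization on E alone.
gain_E = sum(Pe[TRUE][e] * (post(TRUE, e) - B[TRUE]) for e in ("e1", "e2"))

# Term (2): expected gain under the more comprehensive partition E x F.
gain_EF = sum(Pe[TRUE][e] * Pf[TRUE][f] * (post(TRUE, e, f) - B[TRUE])
              for e, f in product(("e1", "e2"), ("f1", "f2")))

print(f"(1) = {gain_E:.3f},  (2) = {gain_EF:.3f}")   # 0.144 and 0.249
assert 0 < gain_E < gain_EF                          # theorems (1.1) and (1.2)
```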

3.2 Why rule-circular arguments are worthless even for externalists

Several externalists (e.g. van Cleve 1984), and also Goldman, have argued that circular justifications of cognitive practices are not vicious, but may be virtuous. Goldman argues that the rule-circular justification of the reliability of a given belief-producing rule which uses the same rule can have veritistic or even meliorative value (1986, 104, fn. 21; 1999, 85). Here I disagree with Goldman, and I illustrate my disagreement by means of Salmon’s famous counter-argument (1957, 46) to the circular justification of induction.

Internalist reconstruction of circular ‘justifications’ of (counter-)induction:

The inductivist: Past inductions have been successful. Therefore, by the rule of induction: Inductions will be successful in the future.

The counterinductivist: Past counterinductions have not been successful. Therefore, by the rule of counterinduction: Counterinductions will be successful in the future.

The internalist concludes from the symmetry that both ‘justifications’ are epistemically worthless. In contrast, for the externalist both justifications are ‘correct’ in the following sense: the circular justification of induction is correct in worlds where inductive inferences are reliable, and the circular justification of counterinduction is correct in worlds where counterinductive inferences are reliable.

I think that the fact that a conclusion as well as the opposite conclusion can be ‘justified’ by this rule-circular kind of ‘argument’ makes rule-circular arguments melioratively worthless, also for externalists, in spite of the semantic move in the externalist understanding of the notion of “justification”. What is worse, a similar rule-circular justification can even be construed for the blind-trust-in-God rule of religious fundamentalists:

(Rule TG, “trust-in-God”): If you hear God’s voice in your mind saying p, then infer that p is true.

The reliability of this rule is justified as follows: I feel God’s voice saying to me that the rule (TG) is reliable, from which I infer by (TG) that (TG) is reliable. For the externalist this argument is correct in worlds in which (TG) is reliable. I conclude that rule-circular arguments definitely do not belong to the repertoire of meliorative epistemic rules.

3.3 Outlook on the regress problem: Optimality instead of reliability justifications

If rule-circular justifications are worthless, how can we then defend the rule of induction, as opposed to some alternative rules of belief formation about events in the future or about non-observable events, which sound ‘weird’ to the scientist, but have been used by humans since the earliest times of their possession of culture (such as pure intuition, clairvoyance, hearing God’s voice, dreaming the future, reading the future from the stars, from the flight of the birds, from coffee grounds, etc.)? I think that Hume is right in that we cannot demonstrate the external success, i.e. the reliability, of induction (or of other cognitively ultimate rules such as abduction, about which I cannot speak in this paper). But we can compare competing cognitive methods from within our system of beliefs (in a quasi-Kantian sense). In particular, we can use epistemic optimality arguments as a means of stopping the justificational regress. Epistemic optimality arguments are a game-theoretical generalization of Reichenbach’s best alternative account. They do not show that induction must be successful, but they intend to show that induction is an optimal prediction method, i.e., that its predictive success is maximal among all methods of prediction which are available to us. Even in radically skeptical scenarios where induction fails, induction can be optimal, provided that all other prediction methods are also doomed to failure in these scenarios. In other papers (e.g. Schurz 2008b) I have developed an optimality approach to the problem of induction in terms of prediction games. A prediction game consists of a countably infinite sequence of (discrete or real-valued) events and a finite set of prediction methods (or players) which predict, at each discrete time, the next event, and whose predictions are accessible to the meta-inductivist. Optimality cannot be demonstrated for object-inductive methods, which apply induction at the level of events. What theorem 2 below asserts is that optimality holds for certain meta-inductive methods. A meta-inductive prediction method observes the success rates of all (accessible) prediction methods, and calculates an “optimal” prediction from the predictions of the accessible methods according to their so-far success rates. If one method is constantly dominating, the meta-inductivist chooses this method as her single favorite; otherwise she predicts according to a weighted average.

Theorem 2 (Schurz 2008b): There exists a (weighted-average) meta-inductive prediction method whose predictive success rate is strictly long-run optimal in all possible prediction games (i.e. converges towards the maximal predictive success rate), and whose short-run loss is upper-bounded by the square root of the number of competing methods divided by the discrete time.

The optimality-justification of meta-induction is mathematically analytic. It implies, however, an a posteriori justification of object-induction, i.e. of induction applied at the level of events: for we know by experience that in our real world non-inductive prediction strategies have not been successful so far, whence it is so far meta-inductively justified to favor object-inductivistic strategies. In this way, meta-induction yields an indirect justification of the common-sense argument that it is reasonable to perform object-induction, because so far it has turned out to be superior. This argument is no longer circular, because meta-induction can be justified in a non-circular way. This optimistic prospect for solving the regress-problem by optimality arguments concludes my paper.
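To make the strategy behind Theorem 2 concrete, here is a simplified sketch of a weighted-average meta-inductive predictor (my own simplification, not Schurz’s exact definition: success is measured as 1 minus the absolute deviation, and weights are ‘attractivities’, i.e. a method’s positive success-rate surplus over the meta-inductivist):

```python
import random

def weighted_meta_inductivist(events, methods):
    """Predict each event as a weighted average of the accessible methods'
    predictions, weighting each method by its positive success-rate surplus
    over the meta-inductivist's own success rate so far."""
    succ = [0.0] * len(methods)     # cumulative success, 1 - |event - prediction|
    mi_succ, history = 0.0, []
    for event in events:
        n = len(history)
        preds = [m(history) for m in methods]
        atts = [max(0.0, s / n - mi_succ / n) for s in succ] if n else []
        if n and sum(atts) > 0:
            weights = [a / sum(atts) for a in atts]
        else:                       # no method beats the MI yet: spread evenly
            weights = [1 / len(methods)] * len(methods)
        mi_pred = sum(w * p for w, p in zip(weights, preds))
        mi_succ += 1 - abs(event - mi_pred)
        succ = [s + 1 - abs(event - p) for s, p in zip(succ, preds)]
        history.append(event)
    return mi_succ / len(events), [s / len(events) for s in succ]

random.seed(0)
events = [1.0 if random.random() < 0.7 else 0.0 for _ in range(5000)]
methods = [lambda h: 1.0,                            # always predicts 1
           lambda h: sum(h) / len(h) if h else 0.5]  # object-inductive method
mi_rate, rates = weighted_meta_inductivist(events, methods)
print(mi_rate, rates)   # the MI's success rate approaches max(rates)
```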


REFERENCES

Alston, William P. 1988: “An Internalist Externalism”. Synthese 74, 265–283.
— 1989: Epistemic Justification. Ithaca, London: Cornell University Press.
Bergmann, Michael 2003: Justification without Awareness. Oxford: Clarendon Press.
Bishop, Michael A. and Trout, J. D. 2005: Epistemology and the Psychology of Human Judgment. Oxford: Oxford University Press.
Boyd, Robert and Richerson, Peter J. 1985: Culture and the Evolutionary Process. Chicago: University of Chicago Press.
Carnap, Rudolf 1950: Logical Foundations of Probability. Chicago: University of Chicago Press.
Comesaña, Juan 2008: “Evidential Reliabilism”. To appear in Noûs.
Conee, Earl and Feldman, Richard 1985: “Evidentialism”. Philosophical Studies 48, 15–34.
Fumerton, Richard 1995: Metaepistemology and Skepticism. London: Rowman & Littlefield.
Goldman, Alvin 1979: “What is Justified Belief?”. In George Pappas (ed.), Justification and Knowledge. Dordrecht: Kluwer, 1–23.
— 1986: Epistemology and Cognition. Cambridge, Mass.: Harvard University Press.
— 1988: “Strong and Weak Justification”. Philosophical Perspectives 2, 51–70.
— 1993: Philosophical Applications of Cognitive Science. Boulder: Westview Press.
— 1999: Knowledge in a Social World. Oxford: Oxford University Press.
Goldman, Alvin and Moshe Shaked 1992: “An Economic Model of Scientific Activity and Truth Acquisition”. In Alvin Goldman, Liaisons: Philosophy Meets the Cognitive and Social Sciences. Cambridge, Mass.: MIT Press, ch. 12.
Goldman, Alvin and Erik J. Olsson 2009: “Reliabilism and the Value of Knowledge”. In Adrian Haddock, Allan Millar, and Duncan Pritchard (eds.), Epistemic Value. Oxford: Oxford University Press, 19–41.
Greco, John 2004: “Virtue Epistemology”. The Stanford Encyclopedia of Philosophy.
Grundmann, Thomas 2003: Der Wahrheit auf der Spur. Paderborn: mentis.
Henderson, David, Horgan, Terry and Matjaž Potrč 2007: “Transglobal Evidentialism-Reliabilism”. Acta Analytica 22, 281–300.
Lehrer, Keith 1990: Theory of Knowledge. London: Routledge.
Mesoudi, Alex, Whiten, Andrew and Kevin N. Laland 2006: “Towards a Unified Science of Cultural Evolution”. Behavioral and Brain Sciences 29, 329–347.
Reichenbach, Hans 1949: The Theory of Probability. Berkeley: University of California Press.
Salmon, Wesley C. 1957: “Should We Attempt to Justify Induction?”. Philosophical Studies 8 (3), 45–47.
Schurz, Gerhard 2008a: “Third-Person Internalism: A Critical Examination of Externalism and a Foundation-Oriented Alternative”. Acta Analytica 23, 9–28.
— 2008b: “The Meta-Inductivist’s Winning Strategy in the Prediction Game: A New Approach to Hume’s Problem”. Philosophy of Science 75, 278–305.
Shogenji, Tomoji 2005: “Justification by Coherence from Scratch”. Philosophical Studies 125 (3), 305–325.
— 2007: “Internalism and Externalism in Meliorative Epistemology”. Online paper, http://www.ric.edu/faculty/tshogenji/workprogress.htm.
Swinburne, Richard 1979: The Existence of God. Oxford: Clarendon Press. (Revised 2nd ed. 2004.)
Van Cleve, James 1984: “Reliability, Justification, and Induction”. In P. A. French et al. (eds.), Causation and Causal Theories, Midwest Studies in Philosophy 4, 555–567.


II. PROBLEMS OF RELIABILISM

Grazer Philosophische Studien 79 (2009), 65–76.

RELIABILISM AND THE PROBLEM OF DEFEATERS

Thomas GRUNDMANN
Universität zu Köln

Summary
It is widely assumed that justification is defeasible, e.g. that under certain conditions counterevidence removes the prior justification of beliefs. In this paper I will first (sect. 1) explain why this feature of justification poses a prima facie problem for reliabilism. I will then try out different reliabilist strategies for dealing with the problem. Among them I will discuss conservative strategies (sect. 2), eliminativist strategies (sect. 3) and revisionist strategies (sect. 4). In the final section I will present an improved revisionist approach to defeaters that is able to overcome the main shortcomings of the other approaches.

1. What is the problem?

It is widely assumed that justification (and on some accounts even warrant (see Plantinga 2000, 359)) is defeasible by counterevidence.1 If an epistemic agent is justified in believing that p at time t and if at time t′ she acquires either evidence for the falsity of p (i.e., a rebutting defeater) or evidence for the unreliability of the source of her belief that p (i.e., an undercutting defeater), then the belief’s justification is removed at t′. In short, defeaters are evidence which removes justification. Two different kinds of defeaters have to be distinguished: rebutting and undercutting defeaters. Let us first consider a typical example of a rebutting defeater: David sees at some distance what he takes to be a sheep and thus forms the belief that there is a sheep in the field. He knows that Frank is the owner of the field. On the next day, Frank tells David that there has never been a sheep in that field, while Frank owns a dog that looks like a sheep from the distance and often strolls around in the field. David thereby acquires a rebutting defeater for his belief that there was a sheep in the field. In this case, what David is told by Frank is incompatible with the truth of what he believes. If David holds on to his belief it becomes unjustified. Thus, it would be epistemically appropriate for David to believe that there was a dog, not a sheep, in the field. Consider now the following example of an undercutting defeater: Betty knows that she has taken a drug which has a 50% chance of causing hallucinations. Suddenly, she happens to have a completely unexpected experience. While she is on a lonesome hiking-trip, it suddenly seems to her as if the ground is shaking. On the basis of this impression she believes that she just experienced an earthquake. She is not justified in her belief. Her knowledge of the drug’s side-effects undermines her experiential reason for believing that an earthquake just occurred. It would be epistemically appropriate for her to withhold her belief on the matter.

Why is there some tension between reliabilism and the existence of defeaters? According to simple reliabilism, S is justified in believing that p at t if and only if S’s belief at t is based on a reliable belief-producing mechanism. In effect, simple reliabilism claims that reliable processes are necessary and sufficient for a belief to be justified. Now recall the examples of defeaters given above. In both cases—the sheep case as well as the drug case—it seems possible that the agent’s belief was produced by a reliable process. Consider the following true story about the sheep case: Unknown to Frank, there was one of his sheep in the field, but not his dog. And if his dog had been in the field, David would have been able to distinguish it from a sheep. Hence, David’s belief that a sheep was in the field was reliably produced. According to simple reliabilism, his belief would be classified as justified. But this can’t be true, since intuitively its justification was defeated by what David was told about the situation. Being reliably produced is thus not sufficient for being justified. And this contradicts simple reliabilism. Consider next the drug case. Let us assume that Betty is resistant to the hallucinatory side-effects of the drug, though she does not know about it. Moreover, she really experienced an earthquake on her hiking-tour. Hence, her belief that she was facing an earthquake was reliably produced. Again, given simple reliabilism, her belief would count as justified. But intuitively it isn’t justified, because its justification is removed by Betty’s knowledge about the general side-effects of the drug. In this case, the defeater may be misleading, but it successfully neutralizes the justificatory quality of Betty’s belief nonetheless. So, contrary to what simple reliabilism claims, being reliably produced is again not sufficient for being justified. To put the problem in a nutshell: Simple reliabilism is incompatible with the widely acknowledged defeasibility of justification. Whereas simple reliabilism implies that reliably produced belief is sufficient for justification, the defeasibility of justification implies that reliable production is not sufficient for justification.

How should the reliabilist respond to this problem of compatibility? BonJour (1980, 1985) recommended giving up on reliabilism and adopting a version of epistemic internalism instead. From an internalist point of view defeaters do not pose any problem. According to internalism a belief is justified if the relevant psychological evidence rationally supports the belief. Now, by acquiring further evidence a formerly justified belief may no longer be rationally supported by the resulting total evidence. So, the problem would be solved by adopting some kind of epistemic internalism. But there are certainly more promising strategies available to the reliabilist. Firstly, she could insist that simple reliabilism and the defeasibility of justification are in fact compatible and only appear to be incompatible at first glance. This would be the strategy of conservative reliabilism. Secondly, the reliabilist could bite the bullet and simply deny the existence of defeaters in general. This would be the eliminativist strategy. Thirdly, the reliabilist could try to revise her position, trying to integrate defeaters into reliabilism. Call this revisionary reliabilism.

2. Conservative replies to the problem

What are the prospects of conservative reliabilism? The reliabilist might argue that, according to simple reliabilism, the justificatory status of a belief depends on the reliability of the whole process responsible for entertaining it. Very often, the relevant process is the originating cause of the belief. But this need not always be the case. It also might happen that a belief was acquired in a certain way and is now sustained by a completely different process. In such cases, the current justificatory status of the belief depends completely on the reliability of the sustaining process. In the sheep case, David’s belief was originally acquired by employing perception. When David is later informed that there was no sheep in the field, but continues to believe the contrary, then the relevant belief-sustaining process has changed. Ignoring the available counterevidence is at least part of the new belief-sustaining process. Now, the proponent of simple reliabilism might claim the following about David’s cognitive situation:

1. For a comprehensive overview of defeaters see Grundmann (forthcoming a). Compare also Grundmann (forthcoming b) with respect to the relevance of introspective self-knowledge to the function of undercutting defeaters.


Whereas the original perceptual process is reliable, it is plausible to assume that believing p in the face of ignored counterevidence is unreliable on the whole. Therefore, it might seem as if simple reliabilism were able to explain why a reliably acquired belief becomes unjustified when counterevidence is being ignored.

I am not that optimistic about maintaining conservative reliabilism. In fact, I believe that such conservatism is a dead end. Here is the most severe objection to this view: it relies on the general assumption that ignoring the available counterevidence is an essential part of the cognitive process sustaining the belief. If such ignorance were an essential part of the sustaining process, then it would be quite reasonable to expect that this type of process often leads to error and thereby is unreliable. But ignoring counterevidence can also just be an accidental by-product of an otherwise very reliable process. Consider the following two cases.

Case One: John is a highly ambitious philosopher. He wants to come up with some new, original theory about epistemic defeaters. Finally, he happens to have a very good idea on this topic based on an inference to the best explanation. Now, there are some crucial and obvious objections to his new theory around, and he knows of these objections. Nevertheless he is so strongly biased towards his own theory that he just ignores the available counterevidence and holds on to his theory. He ignores the counterevidence because he is generally influenced by a strong confirmation bias. In this case it seems obvious that the sustaining process of John's belief is highly unreliable.

But now consider case two: Frank has worked hard on the details of a scientific theory which he holds to be true. He needs all his concentration to go on. Although in principle he is very sensitive to counterevidence, this time he happens to overlook some relevant counterevidence just by accident. In this case it seems likely that the sustaining process of Frank's belief is reliable, because this type of process does not regularly make people ignore the available counterevidence. If this is true, then the following conclusion seems reasonable. It is possible that someone (namely Frank in case two) is ignoring available counterevidence, even though his belief is sustained by a reliable process. If we assume that ignoring available counterevidence generally removes justification, then being sustained by a reliable process is not sufficient for a belief being justified. I therefore don't think that conservative reliabilism is a tenable position.


3. Eliminativism about defeaters

What about denying the existence of defeaters, a strategy Fred Dretske (2000) refers to as "Mad Dog Reliabilism"? A proponent of this strategy would insist that reliably produced beliefs remain justified, even if counterevidence is available to the believer. This is an odd view, since the defeasibility of justification is strongly suggested by our intuitions about justification. Of course, one need not take all these intuitions at face value. But then one had better have a story at hand that can explain away these intuitions. Mylan Engel (1992) offered such a story. Distinguishing between personal and doxastic justification, he maintains that our intuitions about defeasibility concern personal, rather than doxastic, justification. Hence, a person is not justified (rational or responsible) in holding on to her belief in the face of counterevidence she is aware of. But according to Engel, this does not imply that the belief she holds on to would itself become unjustified. Engel's position differs from "Mad Dog Reliabilism" in so far as it tries to accommodate our intuitions about defeaters. But I think that Engel still does not take these intuitions seriously enough, since we obviously have the intuition that the epistemic quality of the belief as such is affected by available counterevidence.

4. Revisionary approaches

So far we have seen that neither conservative reliabilism nor a position that denies defeaters is a promising strategy for the reliabilist. Therefore, it seems unavoidable that the reliabilist change her position to a certain extent in order to integrate defeaters into her account. So, let us look more closely at revisionary accounts of reliabilism. Goldman (1979, 20) suggests the following modification of simple reliabilism:

(G) S is justified in believing that p at time t, if and only if
1. S's belief is based on a reliable process, and
2. there is no conditionally reliable process which S could have used and which, if it had been used, would have resulted in S's not believing p at t.

Clause (2) is an extension of simple reliabilism which pays tribute to the defeasibility of justification.


In his comment on this proposal Goldman makes it sufficiently clear that he does not mean clause (2) to imply that justification is defeated by the fact that newly gathered evidence would yield a different doxastic attitude. According to Goldman, justification is only defeated by already acquired counterevidence that would make belief-revision internally rational. Goldman's suggestion seems to be extensionally adequate. As far as I can see, it licenses the right cases as justified. Consider David's case again: since Frank told him that there has never been a sheep in the field, there is a conditionally reliable process at David's disposal which, if it had been used by David, would have resulted in David's not believing that there was a sheep in the field. David just could have inferred that there was no sheep in the field from what Frank told him. So, condition (2) of Goldman's account is not satisfied. Hence, we get the desired result: according to (G) David's belief is no longer justified.

Yet, (G) is still not fully satisfying. First, it seems to be fairly ad hoc. For, the suggestion amounts to the claim, in reliabilist terms, that a belief is justified if and only if it is reliably produced and there are no defeaters (which would lead to belief-revision in internally rational agents). It fails to explain why internally rational counterevidence removes justification.2

2. Notice that on Goldman's view a defeater may be false or even highly unreliable evidence. Condition (2) only requires that the available process is conditionally reliable. So, the inference from the evidence to not believing p must be valid. But the input can be false and unreliable.

Second, it is not clear why Goldman's proposal (G) is still a version of reliabilism. If we put aside the technical details, reliabilism explains all justificationally relevant features as being objectively conducive to the goal of truth. But one does not see how condition (2) fits into this general picture. (2) excludes cases in which someone does not adapt her beliefs to her internally available evidence. But (2) does not tell us why this internal adaptation is instrumentally good with respect to the goal of truth. The example used by Goldman (1979, 18) seems to point in the opposite direction. In that example Jones has reliable memory beliefs about his past. But his parents try to deceive him by telling him a false story according to which his memory is completely corrupted. Jones does not believe his parents, but persists in believing his memory. From a reliabilist point of view Jones' reaction seems perfectly in order. He does not care about misleading counterevidence and persists in believing the truth. But Goldman admits that after having heard what his parents told him Jones is no longer justified in holding his memory beliefs. Now, it might be possible to go this way even as a reliabilist. But the reliabilist then owes us an answer to the question why sensitivity to counterevidence (no matter whether it is true or false) is objectively truth-conducive. Goldman does not give this answer.

Thirdly, it seems natural to say that Jones does not do what he epistemically should do when he persists in believing his memories. The defeaters he has are normative defeaters. Interestingly, Goldman himself describes the case in normative terms:

So what we can say about Jones is that he fails to use a certain (…) process that he (…) should have used. (…) So, he failed to do something which, epistemically, he should have done. (…) The justificational status of a belief is not only a function of the cognitive processes actually employed in producing it; it is also a function of processes that (…) should be employed. (Goldman 1979, 20; my emphasis)

Goldman here implicitly accepts that Jones is committed to certain epistemic obligations. But he doesn't tell us where these obligations come from. Furthermore, (G) does not imply any normative statements. So, within Goldman's account the normativity of normative defeaters remains unexplained.

Here is another suggestion of how to integrate defeaters into the general reliabilist framework, which comes close to proposals by Alvin Plantinga (2000, 359–366) and Michael Bergman (2006, Ch. 6):

(IR) S is justified in believing that p at t, if and only if
(1) S's belief is based on a reliable process, and
(2) there is no mental state at t in S's representational system which makes believing that p internally irrational (no-defeater condition).

This conception seems to be closely related to the solution suggested by Goldman's (G). I even think that both are approximately equivalent. Clause (2) of (G) excludes that there is some internal evidence available to the epistemic agent from which the withholding of p could be derived by a valid inference. If such evidence is available and the epistemic agent holds on to p, believing p will be internally irrational since it contradicts some internally available evidence. On the other hand, if S has a mental state m which makes believing that p irrational, then there is a conditionally reliable process (i.e. a valid inference) available that would lead from m to not believing p.


The interesting thing about Plantinga and Bergman is that, in contrast to Goldman, they offer explanations for condition (2) which can answer the question why counterevidence removes justification and also, at least in Plantinga's case, explain the normativity of certain defeaters. For Plantinga a justified belief must not depend on a malfunction of the cognitive system, and properly functioning cognitive systems would remove internally irrational beliefs. Since a defeater for believing that p makes that belief internally irrational, the system can tolerate that belief only if it is not properly functioning, i.e. if it is not working as it should. Hence, believing that p in the face of internally rational counterevidence is unjustified.

Yet although Plantinga does explain why defeaters remove justification, his explanation remains problematic. It depends on a supra-naturalistic account of proper functions (see Plantinga 1993b). Moreover, the normative notion of proper functioning has nothing to do with reliability. The proper functioning of internal rationality, in particular, has nothing to do with getting at true beliefs. Plantinga understands internal rationality as a matter of proper function "downstream from experience" (Plantinga 2000, 365). Since internal representations may be radically false, there is no truth-connection inherent to internal rationality.

In contrast to Plantinga, Bergman does not need the normative concept of proper functioning. According to him, a belief is justified if it is reliably produced and, in addition, rational "from the inside". There have to be accessible evidential states, like beliefs or experiences, whose contents support the truth of the beliefs that are based on them from a first-person perspective. My perceptual belief "There is something red in front of me" is justified if I have the experience of something red in front of me and my visual faculties are reliable on that occasion. We may call this position "Evidential Reliabilism".3

3. I owe this term to Frank Hofmann. See also Comesana forthcoming.

If a belief is held without sufficient evidential support, it is internally irrational. From this perspective, defeaters can be understood as pieces of evidence that destroy or neutralize the evidential support of a belief and thereby remove its prior justification. Consider again the sheep case: When David acquires the belief that a sheep is in the field by using visual perception, both necessary conditions of justification are satisfied: (i) David possesses the supporting visual evidence that something in front of him looks like a sheep and (ii) his visual faculties are working reliably in those circumstances. However, when David is told that there was no sheep in the field, his evidential basis has changed.


If he considers both that something in the field looked like a sheep and that there was no sheep in the field, then his belief that a sheep was in the field is no longer evidentially supported. His belief is still reliably produced, but lacks the necessary evidential support.

Although Evidential Reliabilism gives a cogent answer to the question why defeaters remove justification, there are a number of strong objections to this position. First, it is simply not true that every justification requires supporting evidential states, as Evidential Reliabilism claims. Consider, for example, introspective beliefs. It is a widely held view that they are not based on any evidence. If I acquire the belief that I experience something red right now and if I acquire this belief via introspection, then I do not base my belief on something like an inner experience of the experiential state in question.4 Rather, I have an immediate belief about my current perceptual state. Even if this belief were false, since it might get something wrong about the content of my state, it would nevertheless have some positive epistemic quality. If it cannot be knowledge (since the belief is false), it must be justification. Hence, it must be possible for an introspective belief to be justified without being based upon evidence.

4. See Shoemaker 1996, 207. Some philosophers, e.g. Sosa 2007, 45, claim that in introspection the mental state itself is the evidence for the introspective belief about it. But there are severe objections to this view. First, introspective beliefs do not only represent the content of the first-order state, but also that it is a mental state and which kind of mental state it is (desire, experience, belief etc.). Now, since two mental states are related evidentially solely in virtue of their content, my first-order experience of something red cannot be evidence for my introspective belief that I have an experience of something red. The evidential basis for the claim that it is an experience is simply missing in the first-order state. Second, it seems reasonable to assume that there are justified introspective beliefs which are false, maybe due to mistakes of inattention or bias through strongly misleading expectations. Let us, for example, assume that I have the experience of a certain shade of red4, but I represent it as an experience of red12. In this case, the content of my first-order experience is not an evidential basis for my mismatching introspective belief. Still my introspective belief may be justified, though false.

Or consider testimonial justification. In that case, we often do base our beliefs on evidential states, namely the utterances we hear. But even if we directly recognize their meaning, this does not evidentially support the truth of what we are told. Assume that you hear someone utter that p. This evidence alone does not support your belief that p. There is no evidential connection between uttering that p and p being the case. Therefore, even in the case of testimony we lack supporting evidence. Or consider finally the case of self-evident propositions, such as "1 + 1 = 2". They seem to be justified. But their justification does not rely on any evidence (contrary to what the term suggests).


We are just attracted to assent to the propositional content of self-evident propositions by entertaining them in thought. But entertaining a propositional content surely is not the evidence which justifies the belief (compare Sosa 2007, 55). For, we are entertaining all kinds of propositions which may be unjustified. In short: Evidential Reliabilism cannot explain why non-evidential justification, which obviously exists, is defeasible.

Second, Evidential Reliabilism is a mixture of reliabilism and internalism. On this view, defeaters have an explanation that is completely internalist in nature (compare Alston 1989). Therefore, it does not give us a thoroughly reliabilist account of defeaters. Third, Evidential Reliabilism does not explain the normativity of normative defeaters.

5. An improved revisionary approach

So far I have argued (1) that a promising reliabilist account should leave room for defeaters and (2) that all existing reliabilist accounts that satisfy (1) either do not give an adequate explanation of why defeaters remove justification or give an explanation which is not reliabilist in spirit. Finally, I want to present my own account of defeaters, which is supposed to overcome these shortcomings and remain completely reliabilist in spirit. Here is my suggestion:

(TG) S is justified in believing that p at time t, if and only if
(1) S's belief is based on a reliable process, and
(2) there is no conditionally reliable process available to S which (i) a properly functioning cognitive system of the kind to which S belongs would have used and (ii) which would have resulted in S's not believing p at t, and
(3) the proper function mentioned in (2) can be explained with respect to getting at true beliefs.

In order to demonstrate that (TG) is completely reliabilist in spirit, one first has to show how belief-inhibitory processes can be classified as reliable. Of course, these processes do not lead to true beliefs more often than to false beliefs. But we can call them "reliable" if and only if they eliminate false beliefs more often than true beliefs when their input is true.
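Before turning to the second point, this sense of "reliable" for belief-inhibitory processes can be made concrete with a minimal simulation sketch. The sketch is my illustration, not Grundmann's, and all numbers and function names are invented for the purpose: fed true counterevidence, the process discards false beliefs at a much higher rate than true ones, and the overall truth-ratio of the belief set goes up as a result.

    import random

    def inhibit(beliefs, drop_if_false=0.8, drop_if_true=0.1):
        # A belief-inhibitory process: on true input (counterevidence),
        # it eliminates false beliefs far more often than true ones.
        kept = []
        for is_true in beliefs:
            drop_chance = drop_if_true if is_true else drop_if_false
            if random.random() >= drop_chance:
                kept.append(is_true)
        return kept

    def truth_ratio(beliefs):
        return sum(beliefs) / len(beliefs)

    random.seed(0)
    beliefs = [random.random() < 0.6 for _ in range(10000)]  # True = true belief
    print(truth_ratio(beliefs))           # roughly 0.60 before inhibition
    print(truth_ratio(inhibit(beliefs)))  # roughly 0.87 afterwards

The filter never adds beliefs; it counts as reliable in the stated sense because false beliefs heavily predominate among the beliefs it eliminates.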


Secondly, clause (2) mentions, in contrast to Goldman's account, belief-inhibitory processes which a properly functioning cognitive system would have used. (TG) thereby pays tribute to the normative dimension of defeaters. Thirdly and most importantly, clause (3) requires that being sensitive to internal counterevidence is itself truth-conducive or at least somehow valuable in getting at the truth (and avoiding errors). This clause distinguishes (TG) from, for example, Plantinga's view, according to which avoiding defeaters is a purely internal affair.

How can internal rationality of belief-formation, as required by (2), be understood as truth-conducive? Let me roughly sketch how such an explanation might look. My intention is to give a completely naturalist explanation of proper functions, as has been suggested by Ruth Millikan (Millikan 1993). According to Millikan, A has the proper function F if and only if A originated as a reproduction of some prior item that has performed F in the past, and A exists because of this prior performance. Let us apply this definition to belief-revising cognitive systems. By correcting errors or sources of error, the cognitive system usually improves the overall truth-ratio of its beliefs. The overall reliability of a cognitive system will be massively improved if its beliefs are rationally sensitive to errors or sources of error. This capacity can be implemented by the cognitive system only in so far as the system is sensitive to what looks to be counterevidence from the inside. Under normal conditions (i.e. if the cognitive perspective on the world is reliable on the whole), avoiding internal irrationality will massively eliminate error. Now, a cognitive system that eliminates error is better adapted to its environment than a cognitive system that does not, and the former thereby gains a reproductive advantage. This explains in a naturalist manner how subsequent cognitive systems acquire the proper function of being rationally sensitive to counterevidence. That the system functions properly only if it avoids internal irrationality holds even in cases (like Jones' case) where the available counterevidence would be misleading. If counterevidence is ignored in any particular case, the cognitive system is malfunctioning in sustaining that belief. In this case, the belief is no longer justified, since condition (2) of (TG) is not satisfied.

Let me conclude by pointing out what, on my view, are the advantages of (TG) over (G) and (IR). Whereas Goldman's (G) cannot explain the normative dimension of defeaters, (TG) can say that a cognitive system which is not sensitive to counterevidence does not function as it should, since it does not fulfil its proper function. Neither (G) nor (IR) really explains why the no-defeater condition (2) is reliabilist in spirit. (TG), with clause (3), promises an answer to that question.


We have seen why and how a cognitive system may adopt the proper function of being rationally sensitive to internal counterevidence in order to get at the truth and reproduce itself.5

5. I am especially grateful to Alvin Goldman, Joachim Horvath, Jan Sprenger and to an anonymous referee for their detailed discussion and comments on previous versions of this paper. For other helpful commentary, I want to thank Peter Baumann, Frank Hofmann, Christoph Jäger, Christian Piller, Tobias Starzack and Woldai Wagner.

REFERENCES

Alston, William 1989: "An Internalist Externalist". In: William Alston, Epistemic Justification. Ithaca/London: Cornell University Press, 227–245.
— 2002: "Plantinga, Naturalism, and Defeat". In: James Beilby (ed.), Naturalism Defeated? Ithaca/New York: Cornell University Press, 176–203.
Bergman, Michael 2006: "Defeaters". In: Michael Bergman, Justification without Awareness. Oxford: OUP, 153–177.
BonJour, Laurence 1985: The Structure of Empirical Knowledge. Cambridge (MA): Harvard University Press.
Comesana, Juan forthcoming: "Evidentialist Reliabilism". Noûs.
Dretske, Fred 2000: "Epistemic Rights without Epistemic Duties". Philosophy and Phenomenological Research 60, 591–606.
Engel, Mylan 1992: "Personal and Doxastic Justification in Epistemology". Philosophical Studies 67, 133–150.
Goldman, Alvin 1979: "What is Justified Belief?". In: George Pappas (ed.), Justification and Knowledge. Dordrecht: Reidel, 1–23.
Grundmann, Thomas forthcoming a: "Defeasibility Theories". In: Sven Bernecker & Duncan Pritchard (eds.), The Routledge Companion to Epistemology. London: Routledge.
— forthcoming b: "Introspective Self-Knowledge and Reasoning: An Externalist Guide". Erkenntnis 71.
Millikan, Ruth 1993: "In Defense of Proper Functions". In: Ruth Millikan, White Queen Psychology and Other Essays for Alice. Cambridge (MA): MIT Press, 13–29.
Plantinga, Alvin 1993b: Warrant and Proper Function. Oxford: OUP.
— 2000: "The Nature of Defeaters". In: Alvin Plantinga, Warranted Christian Belief. Oxford: OUP, 359–366.
Shoemaker, Sydney 1996: The First Person Perspective and Other Essays. Cambridge: Cambridge University Press.
Sosa, Ernest 2007: A Virtue Epistemology. Vol. I. Oxford: OUP.


Grazer Philosophische Studien 79 (2009), 77–89.

RELIABILISM—MODAL, PROBABILISTIC OR CONTEXTUALIST

Peter BAUMANN
Swarthmore College

Summary

This paper discusses two versions of reliabilism: modal and probabilistic reliabilism. Modal reliabilism faces the problem of the missing closeness metric for possible worlds, while probabilistic reliabilism faces the problem of the relevant reference class. Despite the severity of these problems, reliabilism is still very plausible (also for independent reasons). I propose to stick with reliabilism, offer a contextualist (or, alternatively, harmlessly relativist) solution to the above problems, and suggest that probabilistic reliabilism has the advantage over modal reliabilism.

Reliabilism about knowledge has it that knowledge is reliable true belief, that is, true belief which has been acquired in a reliable way. We do not have to take this as a reductive definition of knowledge. It seems hopeless to try to give a reductive definition of any ordinary (and philosophically interesting) concept, like the concept of knowledge. Let us rather see it as an indication of a conceptually necessary condition of knowledge:

(KR) Necessarily, if S knows that p, then S acquired the belief that p in a reliable way.

The general idea of knowledge reliabilism strikes me as being very plausible, and Alvin Goldman deserves major credit for developing hints by Frank Ramsey into a full-blown theory (cf. Ramsey 1990a, 91–94, 1990b, 110; Goldman 1992a, 1992b, 1986, 1988, 2008; cf. also Armstrong 1973, Dretske 1981 and Nozick 1981). The basic idea can be formulated in such generality that it covers both internalism and externalism about knowledge. It can also be applied to other epistemic notions, like the concept of epistemic justification, or to non-epistemic notions, like the concept of moral character.

In the following I restrict myself to general process reliabilism and will not go into the different forms of "local" reliabilism (sensitivity, safety, relevant alternative accounts, etc.; cf. Becker 2007 for a combination of sensitivity with Goldman's account).

So, reliabilism looks great in general but things become tricky when we look at the details. What exactly is meant by "ways of belief acquisition" (or "processes", as I will simply call them here, covering both what Goldman (1986, 93) calls "processes" and "methods") and how should they be individuated? And what exactly is meant by "reliable"? Let me start with the last question. A first, rough answer says that a process is reliable just in case it tends "to produce beliefs that are true rather than false" (Goldman 1992b, 113). Here is one interpretation of this remark: A process P is reliable just in case the ratio t/f of the number t of true beliefs to the number f of false beliefs resulting from the process is above a certain value r.1 The value of "r" need not be a precise one and might well vary with context. The general idea here is that

(R) P is reliable iff t/f > r.2

(R) only holds for what Goldman calls "belief-independent processes" (cf. 1992b, 117); we don't have to go into the complications having to do with belief-dependent cases here (where we would have to add the condition that the basing beliefs are true). I also skip the necessity operator for the sake of simplicity, here as well as in the formulations of the following principles. We certainly have to add certain bells and whistles, like the restriction to normal (human) subjects and to normal circumstances for the running of the process. These additional constraints are important and not without problems; however, since not much depends on this here, I will disregard them for the sake of simplicity.

1. The ratio t/f is taken to approximate a limit. I am using the term "result" in a causal sense here. What if P also produces beliefs without truth-values? What if we give up the idea of bivalence? What if the process does not produce any beliefs? We can disregard these questions here because the main points below do not depend on these issues.

2. I am assuming—also for the sake of simplicity—that all beliefs are created equal in the sense that the truth of some beliefs does not contribute more to the reliability of the process than the truth of some other beliefs. I am also assuming that any unequal distribution of true and false beliefs is due to the nature of the process running.
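For concreteness, (R) amounts to nothing more than a threshold test on the ratio of true to false outputs. Here is a trivial sketch of my own, with an arbitrarily chosen value for r:

    def reliable_R(t, f, r=4.0):
        # (R): P is reliable iff t/f > r, where t and f are the numbers
        # of true and false beliefs resulting from the process.
        if f == 0:
            return t > 0  # no false outputs: above any finite ratio
        return t / f > r

    print(reliable_R(90, 10))  # True: 90/10 = 9 > 4
    print(reliable_R(60, 40))  # False: 60/40 = 1.5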


Interesting questions arise: What do we mean by "beliefs resulting from the process"? All such beliefs, past, present and future, ever acquired by any subject? How can we refer to all such beliefs in our reliability judgments when we only have a very restricted sample of beliefs available? Are we justified in assuming that our sample is representative? I don't want to go further into this because there is an even more interesting and more worrisome problem: Should we restrict t/f to the actual world? After dealing with this question (sections 1–2), I will discuss a non-modal, probabilistic alternative (sections 3–4).

1. Modal reliability

Goldman has proposed a negative answer to the last question: According to him, the notion of reliability is a counterfactual, modal one (cf. 1992a, 1986, 48, 107, 1988, 61ff.; cf. Becker 2007, 11f., 32, 89–92 in support, but cf. also critically McGinn 1984, 537ff.). We thus have to modify (R):

(M1) P is reliable in the actual world iff in the actual world (but cf. Goldman 1988, 61) and in some suitably qualified possible worlds t/f > r.

It seems obvious that a process can be reliable even if t/f < r (or ≤ r) in some worlds. It would be asking way too much if the ratio had to be above r in all possible worlds. Human vision can be reliable even if there are strange worlds in which it is useless. This raises a difficult question: In which worlds does t/f have to be above r in order to give us reliability for P? In (1986) Goldman proposed to restrict the set of possible worlds relevant here to what he called "normal worlds": worlds which fit our general beliefs about the actual world (cf. 107). This gives us

(M2) P is reliable in the actual world iff in normal worlds t/f > r.

There are several problems with this proposal—as Goldman later (cf. 1988, 62; cf. also Becker 2007, 33f.) acknowledged: Which beliefs held by whom fix the set of normal worlds? Goldman thus gave up the restriction to normal worlds but stuck with the idea that the notion of reliability is a modal notion. How else could one restrict the set of possible worlds?


One major problem with the idea of normal worlds is that it relativizes the set of the relevant possible worlds to our (true or false) beliefs about the actual world. What if all or most of these beliefs are false? Call a world "close" just in case it is similar in the relevant respects (whatever those are) to the actual world. Should we really say that a process is reliable even if in close but anormal worlds t/f < r (or ≤ r)? It seems to make much more sense to say instead that

(M3) P is reliable in the actual world iff in close (normal or anormal) worlds (including the actual world) t/f > r.

Again, here and elsewhere we don't have to take the idea of giving a full definition too seriously. I am assuming that this comes "close" to what Goldman agrees to nowadays. Let me therefore go into some problems I see with (M3) and then move on to an alternative, non-modal, account of reliability.

2. Problems with closeness

The main problem is, of course, this: What determines whether a possible world is close to the actual world? Is there something—some matter of fact—which determines closeness? It is amazing that not much work has been done so far on this crucial question. One quick response would be to say that the further characterization of the process will determine the set of close worlds. Take vision as an example and consider a world which is so radically different from the actual world that we would not even say that vision exists in that world. The laws of optics might be so different that we would not want to call anything subjects are involved with in that world "vision". We thus have to restrict ourselves to worlds in which vision exists, and those worlds are the close ones. The problem with this kind of response is that it is too quick. Why should there not be remote worlds in which vision exists? Think of a world in which people are often envatted for some time. Or why should there not be close worlds in which vision does not exist? Such a world might not be epistemologically close but could still be metaphysically close. Or would we have to "define" closeness in epistemological terms? But why? And even if we did and had to: there could still be "close" (in that sense) worlds in which vision would not give us the right ratio of true and false beliefs. Think of a possible world in which people are often very absentminded and confused.


The existence of such a world should, however, not make us deny the reliability of "our" vision (except if close worlds are only worlds where vision gives a high ratio of true beliefs—but such a stipulation would trivialize the point made). So, the question remains wide open: What makes a possible world close to the actual world?

Following Lewis (1973, 48–52, 66f., 1986, 20–27), most people would explain closeness in terms of similarity. This is fine as it stands but it does not solve our problem. Everything is similar in some respect to everything else—so, which similarities count when it comes to closeness of worlds? Suppose that in the actual world I am not a brain in a vat, have never been one and will never be one. Compare the actual world to one possible world (WEIRD) in which I am not envatted but in which the laws of nature are very different from the actual ones; compare it also to another possible world (VAT) in which the laws of nature are the same as in the actual world but in which I am envatted from time to time (cf. my 2005, 232–237 as well as Neta 2003, 16, fn. 51 and Grobler 2001, 293). Is VAT closer to the actual world than WEIRD? Ask an epistemologist and you get one answer; ask a physicist (or even a metaphysician) and you get a different answer. Lewis (1979) tried to give criteria dealing with such cases but without much success, I think (cf. also Fine 1975, 451–458; Jackson 1977, 4–8; Slote 1978, 20–25; Bowie 1979; Heller 1999, 116; I cannot go into this further here). Interestingly, Lewis himself conceded that similarity, and thus the closeness of worlds, is context-dependent (cf. 1973, 50ff., 66f., 91–95; cf. also Williams 1996 and Heller 1999, 505–507).

All this suggests that there simply is no such thing as the one and only closeness (or remoteness) metric for possible worlds (no matter whether closeness is spelled out in terms of similarity or in other ways). At least, I think we have no good reason to believe that there is such a thing. If this is correct, then there can be two (or more) different closeness rankings Ra and Rb such that one and the same process P comes out as being reliable, given Ra, but as not reliable, given Rb. If the world in which I am sometimes envatted is to count as close, then my vision might not count as being reliable, while it could come out as reliable if that world was to count as a remote one. In a similar way, the answer to the question whether S knows that p will depend on the chosen closeness ranking. How bad is this consequence? It depends.


Our principle

(M3) P is reliable in the actual world iff in close (normal or anormal) worlds (including the actual world) t/f > r

would require either a further relativization,

(M4) P is reliable in the actual world (given some closeness ranking) iff in worlds (including the actual world) which are close to the actual world3 according to that ranking t/f > r,

or a contextualist interpretation according to which the truth value of the right-hand side of (M3) depends on and varies with the closeness ranking used and presupposed by the attributor of reliability. In a similar way, knowledge attributions will have to be relativized or contextualized. It is not clear whether Goldman would want to have any of this. Should we rather give up knowledge reliabilism then?

3. The actual world is close to itself.

3. Probabilistic reliability

But why think of reliability as modal? Why not explain the notion probabilistically (where the concept of probability is not a modal one)? This approach has, I think, certain advantages. But let me start with a sketch of probabilistic reliability (cf. for this kind of approach: Kvart 2006). The basic idea is to explain reliability in terms of the conditional probability (Pr) of acquiring a true belief (T) as a result of a process aiming to settle a given question (I skip reference to the actual world from now on):

(P1) P is reliable iff Pr (T/P happens) > s.

Here and in the following further developments of the principle, "P" refers to a type (not token) process; for the sake of simplicity, I also skip the quantifiers (it being obvious what the full and cumbersome principles would look like). The value of "s" needs to be high enough (it need not be a precise one, though, and might well vary with context).4 We can assume that it is unrealistic to expect that s = 1 for any process. Furthermore, in normal cases of reliable processes we can expect that s > .5.

4. Again, we can disregard cases where the process results neither in a true nor in a false belief. Similar to the modal principles above, (P1) and related principles only hold for belief-independent processes. We also have to restrict ourselves to normal subjects and normal circumstances.


(P1) isn't the complete story yet because the probability of a true belief might not in any way be due to the process. I want to include the idea of the efficacy of the process in the idea of the reliability of a process. It should make a difference whether—everything else remaining the same—P happens or does not happen. Let us therefore modify (P1) in the following way:

(P2) P is reliable iff (a) Pr (T/P happens) > s, and (b) Pr (T/P does not happen) ≤ s.

However, the additional clause (P2b) seems too strong: A process can still be reliable even if it raises the probability of a true belief just a bit from a value which is above s. It therefore seems more appropriate to say that

(P3) P is reliable iff (a) Pr (T/P happens) > s, and (b) Pr (T/P happens) > Pr (T/P does not happen).
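To fix ideas, here is a small sketch of (P3) of my own; the probability values are invented. Clause (b) does the efficacy work: a process can clear the threshold s and still fail (b) if a true belief would have been just as likely without it. The example also shows where (P2) was too strong:

    def p2_reliable(pr_T_given_P, pr_T_given_notP, s=0.5):
        # (P2): (a) Pr(T/P happens) > s and (b) Pr(T/P does not happen) <= s.
        return pr_T_given_P > s and pr_T_given_notP <= s

    def p3_reliable(pr_T_given_P, pr_T_given_notP, s=0.5):
        # (P3): (a) Pr(T/P happens) > s and
        #       (b) Pr(T/P happens) > Pr(T/P does not happen).
        return pr_T_given_P > s and pr_T_given_P > pr_T_given_notP

    # A subject who would mostly be right anyway (0.7 > s), but whose
    # process still raises the probability of truth a bit:
    print(p2_reliable(0.75, 0.70))  # False: (P2b) wrongly disqualifies it
    print(p3_reliable(0.75, 0.70))  # True: (P3) counts it as reliable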


4. Reference class problems

Now, how does (P3) (or similar probabilistic principles) fare as an explanation of the notion of reliability? There is a famous and notorious problem for reliabilism, the so-called generality problem (cf., e.g., Feldman 1985, 160ff., Alston 1995 and Goldman 1992b, 115f., 1986, 49ff.). I think it is not just one amongst several problems of the theory. Rather, it can be generalized in such a way that some very basic questions about the adequacy of a probabilistic account of reliability arise. So, what is the problem? The problem is simply how to individuate processes of belief acquisition. One certainly does not want to be so specific that only one token of the process type exists. In that case, we would only have a reliability of 1 or of 0 but nothing in between. It would also not be advisable to individuate processes extremely broadly: The process of using one's cognitive apparatus does not seem to have some definite reliability. So, the correct individuation of a process is neither too narrow nor too broad. It should lie in the middle—but where exactly? This is such a huge problem because even if we ignore the extremes it is still (at least very often) possible to find two (or more) different ways of individuating the process such that according to one the process is reliable while according to the other it isn't.

Consider this case. Julie is looking at the sky and notices an airplane; she can even see that it is an Air France one. Just looking at the sky is not a reliable way of finding out what kind of airplane is flying by. But looking at the sky under today's very special visibility conditions and after having taken eyedrops is a reliable method or process. And Julie has just done that. However, she is also suffering from a particular kind of headache which makes object recognition very difficult. Looking at the sky with that kind of headache, even under today's very special visibility conditions and after having taken eyedrops, is not a reliable method or process. So, does Julie have a reliable true belief (or knowledge) that an Air France plane is passing by? This depends on whether there is one and only one right way of picking out and describing the process. If not, then there is no fact of the matter concerning reliability. To many, this would seem like a very bitter theoretical pill to swallow. And it has proven extremely difficult to find a solution to the generality problem. Goldman (1986, 50) proposes that the narrowest type of process which is causally operative in the production of the belief is the relevant one. However, what are we going to say if there is no strict causal relation between the process and its outcome? What if there is only a probabilistic one? How should we choose between a broader process with higher probabilistic correlation and a narrower process with lower probabilistic correlation? Also: Why should we go with the narrowest such process? I will come back to this and related problems below.

At this point, I do not want to go much more into the generality problem because it is just one special case of a much broader problem: the problem of the relevant reference class (cf., e.g., Gillies 2000, 119ff.). Here is an example. Mary is graduating from her university. What is the probability that she will find a job within the first three months after graduation? Statistics have it that 65% of female university graduates find a job within three months after graduation. However, only 55% of graduates from Mary's university are so successful. Fortunately, Mary is from an upper-class background—and 90% of upper-class graduates don't have to wait longer than 3 months for their first job. However, Mary had a child in her first term, and young mothers struggle to find jobs after graduation. And so on. What then is the relevant reference class into which Mary belongs and which determines her job chances?
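Mary's predicament can be put in miniature; this is a sketch of mine, where the first three figures are the ones just quoted and the last is invented, since the text gives none. Each reference class yields a perfectly well-defined probability, and the statistics themselves do not single out one of them:

    # Pr(job within 3 months) relative to different reference classes:
    job_rate = {
        "female university graduates": 0.65,
        "graduates of Mary's university": 0.55,
        "upper-class graduates": 0.90,
        "young mothers at graduation": 0.30,  # invented value; text gives none
    }
    for ref_class, rate in job_rate.items():
        print(f"{ref_class}: {rate:.2f}")
    # Every line is correct for its class; "Mary's probability of finding
    # a job" is undefined until a reference class has been chosen.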


To get back to cognitive processes: A particular type of process P is the relevant one just in case the token process belongs to the relevant reference class of all processes of type P. Now, our more general reference class problem remains even if we could solve the generality problem. Why? Suppose we don't worry about what the relevant process is and simply try to determine its reliability. Consider the good old fake barn case (cf. Goldman 1992a, 86). Ernie finds himself in front of a real barn. He looks at it and acquires the true belief that there is a barn in front of him. Is, say, looking at an object of that size from that distance and under these conditions (let us just call this "looking") reliable? What if the following is also the case: All the other barns on the farm are fake, no other farm in the village has fake barns, but all other villages in the county are full of them? Again, is looking reliable? The answer to this question depends on what the relevant spatial reference class is: the area around Ernie and the barn he is looking at, the farm, the village, or the county? I don't see how there could be a matter of fact which determines exactly one relevant spatial reference class. But couldn't we use probabilities to solve the problem? There is a certain probability, one might want to say, that Ernie will only ever look at this one barn, and another probability that he will never travel through the county, etc. However, the same questions can be raised with respect to these probabilities: What are their relevant reference classes? A regress of the reference class is lurking here.

Similar questions can be raised about temporal reference classes: with respect to which temporal intervals ought we to judge the reliability of the process? Suppose that Ernie's reliability varies with the time of the day, day of the week, season, etc., and we get a similar problem. Again, it is very doubtful whether there is a relevant reference class. One general strategy to solve the reference class problem is to go with the narrowest probabilistically relevant reference class. In our temporal case above this would be the time of looking at the barn (let us assume that there is an uncontroversial beginning and end of the process and that probabilities don't change during the process). However, there are several problems with this strategy. One has to do with the fact that very often the narrowest reference class will have just one element. One difficulty here is that we might not have statistical information about single cases or about unique instantiations of a given cluster of properties. A deeper problem has to do with the question whether talk about probabilities in such a single case is meaningful at all. Even if we set these worries aside, there would still be the question why narrowness should matter in the first place.


It is true that Ernie looked at the barn at time t, but this does not entail that time t is the relevant time for the determination of the reliability of his vision. Much more could and should be said about this but I can leave it at that here. Space and time are just two aspects. More aspects could be added. Let us also not forget that the same kind of problem arises with respect to the question what the relevant process or method used was. There is thus a whole bunch of reference class problems. The prospects of solving the reference class problem—in the sense of identifying a unique relevant reference class—seem dim (cf., e.g., Fetzer 1977; Hájek 2007). Even if one does not want to go that far and deny that there is a solution to the problem, one would still have to admit that we just don't know what determines relevant reference classes. Given that our judgements about reliability and knowledge depend on this, this is still bad or interesting enough.

What does all this imply for the notion of reliability? As long as there is no variation of the probabilities along the relevant dimension (space, time, etc.): not much. However, we cannot make the assumption that this will always or even very often be the case. There will be at least some cases—and not too few—where there is such a variation of the probabilities. And in those cases, we will have no unique answer to the question whether the person has acquired her belief in a reliable way. If one is a reliabilist, one will therefore also have to conclude that at least in some cases there is no unique answer to the question whether S knows that p. Again, the question is: How bad is that? And again, it depends. Our simple principle

(P1) P is reliable iff Pr (T/P happens) > s

would require relativization to reference classes:

(P5) P is reliable with respect to a given set of reference classes RC iff Pr (T/P happens under the conditions determined by RC) > s.

One might object that (P5) only gives us part of the story because we take the method or process as given and thus unaffected by the indeterminacy of the reference class. However, this need not be a problem. Let "P" be a description of the method which is "meagre" enough so as not to allow for variations in the relevant probabilities. Everything else can then be subsumed under the "circumstances" under which the process takes place.


This is acceptable because nothing forces us to distinguish between process and circumstances in any particular way. If we go with a meagre enough description of the process, we will get a useful version of (P5). An alternative would be to go contextualist and argue that the truth value of the right-hand side of (P1) and its kind depends on and varies with the reference classes chosen by the speaker. Indirectly, this will, of course, also lead to a relativism or contextualism about "knowledge", given reliabilism. Again, I wonder what Goldman would say. One option would be to give up reliabilism in order to avoid all that. However, I think that there is no strong enough reason to do that. Reliabilism has a lot of independent plausibility. And as far as I am concerned, contextualism does not look like a bad option at all.

5. Conclusion

Finally, what about the alternative between modal interpretations and probabilistic interpretations of "reliability"? Aren't they more or less on a par, at least with respect to the issues discussed here? I don't think so. I think there are clear advantages on the side of the probabilistic version. Let me quickly mention two. First, closeness rankings of possible worlds seem restricted to ordinal rankings, while the apparatus of probability theory can capture more than that and represent relations between differences of probabilities. Second, probability theory is closer to home if you're a naturalist than modal logic. The natural sciences are happy to use probability theory but seem to have little use for modal notions. I would therefore propose three things (in the light of all of the above): stick with reliabilism, go for a probabilistic version of it, and accept the contextualist implications of all that. How happy Alvin Goldman would be with that, I don't know.

REFERENCES

Alston, William P. 1995: "How to Think about Reliability". Philosophical Topics 23, 1–29.
Armstrong, David M. 1973: Belief, Truth and Knowledge. Cambridge: Cambridge University Press.
Baumann, Peter 2005: "Varieties of Contextualism: Standards and Descriptions". Grazer Philosophische Studien 69, 229–245.


Becker, Kelly 2007: Epistemology Modalized. New York & London: Routledge.
Bowie, G. Lee 1979: "The Similarity Approach to Counterfactuals". Noûs 13, 477–498.
Dretske, Fred I. 1981: Knowledge and the Flow of Information. Cambridge/MA: MIT Press.
Feldman, Richard 1985: "Reliability and Justification". The Monist 68, 159–174.
Fetzer, James H. 1977: "Reichenbach, Reference Classes, and Single Case Probabilities". Synthese 34, 185–217.
Fine, Kit 1975: "Critical Notice" [of Lewis 1973]. Mind 84, 451–458.
Gillies, Donald 2000: Philosophical Theories of Probability. London: Routledge.
Goldman, Alvin I. 1992a: "Discrimination and Perceptual Knowledge". In: Alvin I. Goldman, Liaisons. Philosophy Meets the Cognitive and Social Sciences. Cambridge/MA & London: MIT Press, 85–103.
— 1992b: "What Is Justified Belief?". In: Alvin I. Goldman, Liaisons. Philosophy Meets the Cognitive and Social Sciences. Cambridge/MA & London: MIT Press, 105–126.
— 1986: Epistemology and Cognition. Cambridge/MA & London: Harvard University Press.
— 1988: "Strong and Weak Justification". Philosophical Perspectives 2, 51–69.
— 2008: "Reliabilism". In: Edward Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2008 Edition). URL = http://plato.stanford.edu/entries/reliabilism.
Grobler, Adam 2001: "Truth, Knowledge, and Presupposition". Logique et Analyse 44 (173–175), 291–305.
Hájek, Alan 2007: "The Reference Class Problem is Your Problem too". Synthese 156, 563–585.
Heller, Mark 1999: "The Proper Role for Contextualism in an Anti-Luck Epistemology". Philosophical Perspectives 13, 115–130.
Jackson, Frank 1977: "A Causal Theory of Counterfactuals". Australasian Journal of Philosophy 55, 3–21.
Kvart, Igal 2006: "A Probabilistic Theory of Knowledge". Philosophy and Phenomenological Research 72, 1–43.
Lewis, David 1973: Counterfactuals. Oxford: Blackwell.
— 1979: "Counterfactual Dependence and Time's Arrow". Noûs 13, 455–476.
— 1986: On the Plurality of Worlds. Oxford: Blackwell.
McGinn, Colin 1984: "The Concept of Knowledge". In: Peter French / Theodore Uehling, Jr. / Howard Wettstein (eds.), Midwest Studies in Philosophy 9 (Causation and Causal Theories). Minneapolis: University of Minnesota Press, 529–554.
Neta, Ram 2003: "Contextualism and the Problem of the External World". Philosophy and Phenomenological Research 66, 1–31.


Nozick, Robert 1981: Philosophical Explanations. Cambridge/MA: Harvard University Press.
Ramsey, Frank Plumpton 1990a: "Truth and Probability". In: Frank Plumpton Ramsey, Philosophical Papers (ed. D. H. Mellor). Cambridge: Cambridge University Press, 52–94.
— 1990b: "Knowledge". In: Frank Plumpton Ramsey, Philosophical Papers (ed. D. H. Mellor). Cambridge: Cambridge University Press, 110–111.
Slote, Michael A. 1978: "Time in Counterfactuals". Philosophical Review 87, 3–27.
Williams, Michael 1996: Unnatural Doubts. Epistemological Realism and the Basis of Scepticism. Princeton: Princeton University Press.


III. THE VALUE OF KNOWLEDGE

Grazer Philosophische Studien 79 (2009), 93–114.

IN DEFENSE OF THE CONDITIONAL PROBABILITY SOLUTION TO THE SWAMPING PROBLEM

Erik J. OLSSON
Lund University

Summary

Knowledge is more valuable than mere true belief. Many authors contend, however, that reliabilism is incompatible with this item of common sense. If a belief is true, adding that it was reliably produced doesn't seem to make it more valuable. The value of reliability is swamped by the value of truth. In Goldman and Olsson (2009), two independent solutions to the problem were suggested. According to the conditional probability solution, reliabilist knowledge is more valuable in virtue of being a stronger indicator of future true belief than mere true belief is. This article defends this solution against some objections.

1. Introduction

It is commonly agreed that knowledge is more valuable than mere true belief. Many also believe that process reliabilism is incompatible with this item of common sense.1 The reason is the so-called swamping problem: if a belief is already true, adding that it was reliably produced doesn't seem to make it more valuable. The value in reliable production seems to lie merely in its being indicative of the truth of the belief thus produced. But if that belief is already assumed true, no further value is conferred by assuming, in addition, that the acquisition process was reliable.

1. Classical formulations of process reliabilism can be found in (Goldman 1979 and 1986).

In Goldman and Olsson (2009), we pointed out that a reliabilist can defend her theory against the swamping objection by referring to the distinct value pertaining to reliabilist knowledge in virtue of the fact that such knowledge makes future true belief more likely.2

2. I use "probability", throughout this article, in its objective sense. I frequently refer to one thing q making another thing p more likely. By this I mean that the objective probability of p is greater given q than given non-q. I use "q is indicative of p" and "q improves the prospect of p" etc. as mere linguistic variations of "q makes p more likely".

In other words, the fact that a person knows that p, in the reliabilist sense of "knows", is indicative of that person's acquiring further true beliefs. This we called the conditional probability (CP) solution. This value is attained normally and not in every single case. We went on to formulate a second and independent solution to the swamping problem, which may be called the type-instrumentalism and value autonomization (TIVA) solution. Among other things, this solution would explain why some people tend to think that knowledge is always more valuable than mere true belief. In this paper, I will take the opportunity to discuss some objections that have been raised against the CP solution. But first I will give a more detailed account of the swamping problem and explain, in greater depth, how the CP solution bears on it.

2. The CP solution to the swamping problem

The swamping argument, as endorsed by Kvanvig (2003), Swinburne (1999), Zagzebski (2003) and others, may be presented schematically as follows:

(S1) Knowledge equals reliably produced true belief (simple reliabilism).
(S2) If a given belief is true, its value will not be raised by the fact that it was reliably produced.
(S3) Hence: knowledge is no more valuable than unreliably produced true belief.

The characteristic swamping premise, (S2), derives support from an appeal to a principle of veritism:

(Veritism) All that matters in inquiry is the acquisition of true belief.

If S's belief is already assumed true and all that matters in inquiry is the acquisition of true belief, then learning that S's belief was reliably produced does not make the belief more valuable, just as (S2) says.3

3. For discussions of veritism, see (Goldman 1999 and 2002).


The standard reaction to the swamping argument is to reject (S1), that knowledge equals reliably acquired true belief. The CP solution is different in this regard. Rather than questioning the argument's premises, it purports to shed doubt on its validity. What it proposes is that reliabilist knowledge can be more valuable than mere true belief even if the belief in question is made no more valuable through becoming known. This is so because a state of knowledge can be more valuable than a state of mere true belief. This works because a state of knowledge is also a state of reliable acquisition and as such it is valuable not only as an indicator of the truth of the belief thus acquired but also as indicative of the production of further true beliefs (of a similar kind), namely, true beliefs resulting from reapplications of the reliable method in question. This is a reason why knowledge is more valuable than mere true belief even if the truth of both the premises employed by the swamping argument is granted.

How can it be that the probability of future true belief is greater conditional on knowledge as opposed to conditional on mere true belief? This conditional probability claim is true in virtue of certain empirical conditions characterizing our world. They are the conditions of non-uniqueness, cross-temporal access, learning and generality. By non-uniqueness, the same kind of problem will tend to arise over and over again. Once you encounter a problem of a certain type, you are likely to encounter a problem of the same type at some later point. Finding the way from point A to point B is that kind of problem. By cross-temporal access, a method that was used once will tend to be available also when the same type of problem arises in the future. This is true, for instance, of the method of consulting a map or your car's GPS navigation equipment. By the learning assumption, if a particular method solves a problem once, and you have no reason to believe that it did so unsuccessfully, then you will tend to use the same method again, if available. If, for example, you rely at point A on your GPS and arrive safely at point B, you are likely to rely on it again in the future. By generality, finally, if a method was reliable in the past, its reliability is unlikely to be discontinued. Rather, the method is likely to remain reliable in the future. Clearly, there are a lot of exceptions to these empirical regularities—problems arising only once in a lifetime, navigation equipment that is stolen or suddenly breaks down etc.—but this is of little consequence in the present context so long as those regularities hold in normal circumstances.


sense, than mere true belief. In my view, this is precisely what was to be demonstrated.4

For the record, it is not my view that the CP solution captures the whole sense in which knowledge is more valuable than mere true belief. I also happen to believe that the fact that a person knows that p, in the reliabilist sense, as opposed to merely truly believes that p, makes that person’s belief that p more stable. More precisely, the probability that a given true belief will remain in place is greater given that it was reliably as opposed to unreliably acquired, and stability, as Williamson (2000) observes, is practically valuable in cases of prolonged action. For a detailed account of the stability aspect of reliable belief acquisition, see (Olsson 2007).5

4. Some researchers take a different stance on this matter. Thus Jonathan Kvanvig thinks that knowledge is always more valuable than mere true belief (at least for knowledge of non-trivia). The issue was dealt with at some length already in Goldman and Olsson (2009). Recently, Kvanvig has published an extended critique of the CP solution focusing partly on its lack of compliance with his view that the distinct value of knowledge should be attained universally. A reply to Kvanvig can be found in Olsson (to appear). The purpose of the present article is to focus on other objections to the CP solution.
5. See also (Olsson 2008). It might be thought that stability is not always a good thing. As the world changes, the truth values of sentences also change. What was once true can become false at a later point in time. For instance, the sentence “Ronald Reagan is the president of the United States” was once true, and yet now it is false. Rather than focusing on retaining true beliefs, it is better, so the objection goes, to be adapted to the environment in the sense that one, at any given time, believes what is true and is ready to stop believing what is, or has become, false. This objection hinges on the use of sentences that are temporally indeterminate, i.e. time is not an explicit part of the sentence but is supplied by the context of utterance. The answer to the objection is that we can always switch to what Quine (1960, 191–195) called eternal sentences, in which time, place etc. are explicitly mentioned, without suffering any theoretical loss. If we do, the truth values of sentences will no longer be sensitive to the time of utterance. What was once true will always be true. Retaining true beliefs is therefore to be recommended after all.
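The stability claim just made can be given a probabilistic rendering in the same spirit as the CP claim itself. The following sketch is mine and purely illustrative, not a formula from Goldman and Olsson: writing B_t(p) for S’s believing that p at time t, T for p’s being true, and R for the belief’s having been reliably acquired, the claim is that, for any later time t' > t,

\[
P\bigl(B_{t'}(p) \mid B_t(p) \wedge T \wedge R\bigr) \;>\; P\bigl(B_{t'}(p) \mid B_t(p) \wedge T \wedge \neg R\bigr).
\]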


3. Kvanvig’s chocolate objection

In his useful 2003 book, Kvanvig presented an alternative version of the swamping argument based on a chocolate example. In (Kvanvig to appear), he returns to the example in the context of a prolonged critical discussion of the CP solution, arguing that it presents a severe obstacle to the latter. There, the example is stated in the following form:

Suppose I want chocolate. I google to find places close to me. I get two webpages: one entitled “places that sell chocolate in Waco”; the other “places likely to sell chocolate in Waco”. We may assume accuracy for both lists, and that the second list is generated from correlations: places that sell food are likely to sell chocolate, places that sell hard candy are too, etc. … We then note [that] … [i]f all I care about is chocolate, it would be no better to use the list of places that both sell chocolate and are likely to than to use the list of places that sell chocolate.

What the example demonstrates is that “truth plus likelihood of truth is not preferable to truth alone” (21). Kvanvig draws the lesson that “one better not identify justification with statistical likelihood of truth” (21). So far so good. It is still unclear, though, how this is supposed to be relevant to process reliabilism, which does not equate knowledge with true belief that is statistically or objectively likely to be true. In anticipation of this objection, Kvanvig writes:

After all, if objective probability itself succumbs to the swamping problem, why would the fact that there is an etiological relationship to a process or method responsible for that probability relieve the theory of the problem? Such a causal relationship to methods or process doesn’t seem to be the kind of feature that adds value beyond the value of true belief, so there is no apparent reason here to think that ordinary process reliabilism is in a better condition with respect to the swamping problem than is the simple objective probability theory [i.e. the theory that equates knowledge with true belief that is objectively likely to be true].

Kvanvig goes on to note that both the CP solution and the solution that focuses on value autonomization “go beyond the identification of justification with objective likelihood of truth, and thus provide some hope of avoiding the swamping problem” (23). However, Kvanvig finds that this initial hope quickly vanishes on closer inspection. His lack of enthusiasm for the CP solution is reflected in the following passage: Once we appreciate the nature of the swamping problem as a problem concerning properties of belief that are non-additive of value in the presence of true belief, it becomes hard to see how the above proposal is helpful at all. In the analogy involving chocolate, I don’t even know how to begin thinking about applying this idea to new businesses of the same type, conditional on the first list (places that sell chocolate) and the third list (places that both sell chocolate and are likely to).


The problem with Kvanvig’s criticism is that it is question-begging in the present argumentative context. Taking the chocolate analogy for granted, as Kvanvig does, involves focusing at the outset exclusively on the objective likelihood of the belief produced and abstracting from everything about a reliable process that doesn’t amount to that process indicating the truth of that belief. According to the CP solution, by contrast, reliable acquisition is indicative not only of the truth of the belief thus produced, but also of the future acquisition of true beliefs (of the same general kind). Excluding, as it does, the CP solution from the start, Kvanvig’s analogy cannot be taken as a point of departure in a neutral examination of the possible merits of that proposal.6

Does this mean that the CP solution for some reason is inapplicable to cases involving beliefs about where to buy chocolate? Not at all. The CP solution entails that, if you obtained your belief about where to buy chocolate from a reliable website, you are more likely to acquire further true beliefs about where to buy chocolate in the future, the reason being, roughly speaking, that you are likely to use the same website again and that the website is likely to remain reliable.

6. For a discussion of the chocolate example, see also Goldman and Olsson (2009) and Olsson (to appear).
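The structure of this reasoning can be displayed schematically. The notation below is my own illustrative shorthand, not Goldman’s and Olsson’s: let π be the probability that a problem of the same kind recurs (non-uniqueness), α the probability that the method originally used is then still available (cross-temporal access), υ the probability that it is reused (learning), and ρ the probability that its truth ratio persists (generality). If r_rel and r_unrel are the truth ratios of a reliable and an unreliable belief-forming method, respectively, then, to a first approximation,

\[
P(F \mid \text{reliably acquired true belief}) \approx \pi\,\alpha\,\upsilon\,\rho\,r_{\mathrm{rel}},
\qquad
P(F \mid \text{mere true belief}) \approx \pi\,\alpha\,\upsilon\,\rho\,r_{\mathrm{unrel}},
\]

where F is the acquisition of a further true belief of the same kind. Since r_rel > r_unrel, the inequality the CP solution needs falls out so long as the four empirical conditions keep the common factors substantial.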


4. Do we value future true belief?

The purpose of this section is to discuss two objections that were raised by Isaac Levi in response to the CP solution.7 In Levi’s view, the value problem, as it was posed by Plato, is an artificial one, the reason being that there does not seem to be any context where there is a genuine choice to be made between knowledge and mere true belief. This is especially true from a first-person perspective, for it doesn’t appear that a person could be faced with a decision whether to know or merely truly believe that p. If so, why should we care which is more valuable? How can a solution to the value problem, whatever it turns out to be, matter in our inquiries and deliberations?

As it stands, this is a general objection to the fruitfulness of trying to account for the extra value of knowledge, which, as such, is not directed specifically at the CP solution. I will respond to it anyway. I am willing to grant that, from where X is standing, there is no genuine choice between knowing that p or merely believing truly that p. This may be due to the fact that a person cannot coherently grant both that p is true and that her belief has no reliable basis. From her perspective, all her (true) beliefs are cases of reliabilist knowledge.8 Rather, my first move will be to question the validity of the inference from the lack of first-person concern to the lack of real import. My approach to the value problem is thoroughly externalist. The fact that a creature has knowledge at a particular instance improves the prospects for that creature to acquire more of the same in the future. This fact need not be something that the creature in question appreciates or is even intellectually capable of appreciating. But it is a fact nonetheless and, one might add, a valuable fact. The ultimate value in having true beliefs is plausibly practical: having true representations—now and in the future—allows a creature to navigate successfully in its environment, which is of obvious survival value, whether the creature in question is aware of it or not.

But let us grant, for the sake of the argument, that what is valuable to us must be something that we are, or can be, consciously concerned with in our own inquiries and deliberations. Even so, what the objection leaves out is that, while the difference between X’s knowing reliably that p and X’s merely believing truly that p may be elusive, or even non-existent, from X’s own point of view, there is clearly a difference from another person’s, Y’s, perspective. Y can note that X’s true belief that p was not reliably acquired. Y can then reason, in accordance with the CP solution, to the conclusion that X would be better off had he based his belief on a reliable method. For then X would have been more likely to obtain further true beliefs in response to similar inquiries in the future. Y may now decide to inform X about the unreliability of the process leading up to X’s belief and to point X to a reliable method of arriving at the same belief. So, while it may be difficult, if not impossible, for X to appreciate the difference between knowledge and mere true belief, and hence difficult, if not impossible, for X to make that difference matter in his conduct, it is quite possible for another person Y not only to appreciate the difference but to let that difference influence his conduct.

Levi, however, thinks that this reply will not work. The reason, he submits, is that we attribute no value to obtaining future true beliefs. Indeed, in his 1980 and 1991 books, Levi rejected, on independent grounds, what

7. Personal communication. What follows is an excerpt of a longer email correspondence.
8. As I argue in (Olsson 2004), F. P. Ramsey, the first modern reliabilist, seems to have held this view.


he calls “messianic realism”, i.e., the view that we care now about avoiding error or gaining truth not only at the next stage of inquiry but also further down the line.9 This is not the place for a detailed scrutiny of Levi’s complex argument against this view. I will confine myself to making an observation that I believe sheds prima facie doubt on the proposal that it would be unreasonable, in all cases, to attach value to the prospect of arriving at true beliefs in the future. At least, it shows that that proposal is in serious need of clarification. Suppose X is embarking on a journey to Larissa. On the way there are three junctions. In order to get to Larissa, X must take the correct road at each junction. X knows this, but this, alas, is all he knows. In particular, X doesn’t know in advance what road to take at each junction. X now confronts the first junction, asking himself whether he should choose “left” or “right”. This is his first inquiry. X knows, at this point, that he will face two more inquiries of a similar kind at the two remaining junctions. If Levi is right, however, the inquirer attributes no value to obtaining further true beliefs in response to those remaining inquiries. But this seems implausible. X, by assumption, wants to go to Larissa. But in order to do so he must make the right choice at each of three junctions. Surely, then, X cares about each and every one of these choices now. Pace Levi, X desires now to make the right choice at each junction. Let us add some more details to the example in an attempt to make it even more compelling. X now decides to ask a local resident, Z, about the way. Z tells X to take the road to the right, which indeed is the correct choice, and volunteers to accompany X to Larissa. Believing Z to be a reliable local guide, X agrees. Let us compare the following two scenarios: First scenario: Unbeknownst to X, Z is unreliable and his recommendation a mere lucky guess. Hence, X’s true belief, arrived at via Z’s testimony, that the road to the right leads to Larissa is not a case of reliabilist knowledge. Second scenario: Z’s recommendation is not only true but was also reliably acquired. Clearly, X is better off in the second scenario than he is in the first. Because Z used a reliable method to find the way at the first junction, Z is 9. Cf. (Levi 1991, 161): “I suggest we distinguish between being concerned to avoid error at the next change in belief state and being concerned to avoid error not only at the next stage but at n (>2) or all stages down the line … I shall call those who advocate avoidance of error as a desideratum of inquiry in the first sense secular realists and those who favor one of the other two sense messianic realists.”


likely to have available the same method (map, memory, etc.) at the other two junctions, and he is likely to use that method there as well. Finally, since the method used is likely to remain reliable, its reemployment is likely to lead to further correct recommendations, so that X will eventually reach his destination. The same could not be said of the first scenario. Z, to be sure, is likely to reemploy the same method but, being unreliable, that method is relatively likely to yield one or more false recommendations at the remaining junctions.

We can test our intuitions as follows: Focusing once more on the first scenario, let us assume that, at the first junction, X’s friend Y comes forward pointing out that, while Z will get it right this time, he will do so as a matter of luck. If Levi were right in saying that X should not care about future inquiries, X should not find Y’s remark about Z at all relevant. Since Z gives the right recommendation now, Z’s unreliability could only be a matter of concern in the future, and, says Levi, we should not care now about our future inquiries. But, to repeat, this sounds wrong in my ears. Of course X would find Y’s remark directly relevant to his concerns. It should make X strongly hesitant to accept Z’s offer to help.10

10. In our email communications, Levi replied to this argument of mine by referring to a manuscript he had recently written in response to Edward Craig’s 1990 book Knowledge and the State of Nature. The manuscript, entitled “Knowledge as True Belief”, will be published in a forthcoming book which I am editing with Sebastian Enqvist called Science in Flux.
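To make the comparison between the two scenarios vivid, one can attach numbers to them. The figures are mine and purely illustrative; nothing in the argument depends on their exact values. Suppose each recommendation by a reliable guide is correct with probability 0.9, and each recommendation by a lucky guesser with probability 0.5, independently across junctions. Conditional on the first recommendation being correct, as it is in both scenarios, the probability that X gets correct directions at the two remaining junctions is

\[
0.9^2 = 0.81 \quad\text{in the second scenario, as against}\quad 0.5^2 = 0.25 \quad\text{in the first},
\]

which is just the difference the CP solution predicts.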


5. Does the CP solution succeed in identifying a genuine (surplus) value?

Granted that we value future true belief, does the fact that reliabilist knowledge raises the probability of future true belief really make such knowledge more valuable? If so, what kind of value are we talking about? These issues are raised in a recent paper by Markus Werning (see Werning 2009). There is much in Werning’s carefully crafted paper that I agree with and find helpful. In particular, I consider his own proposals, as developed at the end of his article, insightful and intriguing, though I have yet to figure out exactly how those suggestions fit with my own thinking on the matter. However, I am not convinced by Werning’s criticism of the CP solution. Let me try to explain why.

First of all, I agree with Werning that the value which that solution attributes to knowledge is not of an instrumental kind. In other words, Werning is correct to observe that the value-adding mechanism which is at work here is not subsumable under what he calls “the means-to-end relation” (nor have Goldman or I claimed it is). Nor is it a case of intrinsic or final value. If, as Werning seems to take for granted, instrumental value is the only legitimate kind of extrinsic (non-final) value, it would follow that the CP solution fails to account for the added value of knowledge, which is precisely the conclusion drawn by Werning. However, the view that instrumental value exhausts the domain of extrinsic value is implausible or at least highly controversial. There is, among other things, also what we may call “indicator value”, the value accruing to something in virtue of its indicating something good. This is acknowledged by leading value theorists like Michael Zimmerman, who writes in his entry on intrinsic and extrinsic value in the Stanford Encyclopedia of Philosophy:

Many philosophers write as if instrumental value is the only type of extrinsic value, but that is a mistake. Suppose, for instance, that the results of a certain medical test indicate that the patient is in good health, and suppose that this patient’s having good health is intrinsically good. Then we may well want to say that the results are themselves (extrinsically) good. But notice that the results are of course not a means to good health; they are simply indicative of it. (Zimmerman 2002)

The idea behind the CP solution is that knowledge has extra value precisely in that sense: its extra value derives from its indicating something good, namely future true beliefs, which have final value in the veritist framework. This, again, is not a kind of instrumental value, but it is still a kind of extrinsic value.11

There is another claim by Werning that I would like to challenge, though somewhat less confidently. At one point, he writes: “[t]he extra value of reliably produced true belief as opposed to simply true belief cannot be accounted for by the means-to-end relation”. Werning may have meant to say only that the CP solution doesn’t account for the extra value of knowledge by relying on a means-to-end relation. Thus interpreted, he is of course right. But it is still worthwhile to consider the claim he is actually making, which I tend to believe is at best only partly correct. I tend

11. I take the opportunity to express my gratitude to my colleagues Wlodek Rabinowicz and Toni Roennow-Rasmussen for originally drawing my attention, in personal communication, to the distinction between instrumental and indicator value, and for suggesting that the kind of extra value that the CP solution attributes to knowledge is of the latter sort.


to think that the extra value of knowledge can, in some degree at least, be accounted for by the means-to-end relation. This may come as a surprise considering the fact that I have emphasized the non-instrumental character of the CP solution, a solution I endorse. The explanation is that I have, as I mentioned earlier, another account of the value of reliabilist knowledge according to which such knowledge receives surplus value from making the true belief in question more stable (which is good from the point of view of acting successfully over time). Although I usually spell out what I just wrote in probabilistic terms, I am inclined to think that it is true in a sense of “making more stable” that is stronger than a mere conditional probability relation, presumably to the point of expressing a means-to-end relation of the kind Werning would like to see. The two accounts—the CP solution and the stability solution—are compatible, and I view them as complementary. If my conjecture about reliabilist knowledge being instrumentally valuable with respect to stability is correct, the extra value of knowledge can be accounted for by the means-to-end relation, but that wouldn’t be a complete account.

6. Does the CP solution solve the general value problem?

According to several authors, among them Kvanvig (2003) and Pritchard (2007), there are in fact two value problems to consider. One is the traditional Platonic value problem of showing that, despite some indications to the contrary, knowledge is more valuable than mere true belief. We may refer to this as the special value problem. There is also what we may call the general value problem, namely that of establishing that knowledge is more valuable than ignorance.12

To see that these two problems are indeed different, consider a full-blown reliabilist account of knowledge according to which knowledge amounts to reliably acquired true belief that satisfies an anti-Gettier clause. Solving the special value problem for this account by showing that knowledge, thus construed, is more valuable than mere true belief does not automatically mean solving the general value problem (for that account). For even if knowledge, thus understood, is more valuable than mere true belief, it could still be the case that such knowledge turns out to be no more valuable than reliably acquired true belief that does not satisfy the anti-Gettier clause. In the rest of the section I will focus, as I did in the formulation of the swamping argument, on simple reliabilism, i.e. reliabilism without an anti-Gettier clause.

Several commentators have argued that, while the CP solution may solve the special value problem for simple reliabilism, it fails to solve the general value problem for that account. According to one critic, an opponent of simple reliabilism may concede that “true belief plus the existence of a reliable method” is better than true belief only, and that this solves the special value problem for simple reliabilism, but still deny that this is very helpful for the purposes of defending simple reliabilism against a more general swamping objection. For that more general purpose, the advocate must also claim that “true belief plus existence of a reliable method together with the fact that the belief was acquired through that method” is better than just “true belief and the existence of a reliable method”. But the CP solution does not seem to give any support for the evaluative significance of the part about how the belief was acquired.13 More precisely, the claim is that the probability of future true belief of the same kind as p will be more or less the same whether we conditionalize on “true belief acquired through a reliable method” or merely on “true belief plus the existence of a reliable method”. This would mean that, when it comes to being indicative of future true belief, there is no difference between simple reliabilist knowledge and what is less than such knowledge, i.e., ignorance.

But I believe that there is a probabilistic difference in the two cases. The reason why it seems reasonable to believe that conditionalizing on “true belief acquired through a reliable method” significantly raises the probability of more true beliefs of the same kind is, to repeat, that certain empirical conditions are plausibly satisfied, at least in our world. According to one of them, cross-temporal access, a method that was used once is often available when similar problems arise in the future. According to another, the learning assumption, a method that was once employed in an unproblematic fashion will tend to be reemployed on similar occasions. Crucial here is obviously the actual use or employment of the method in question. The mere fact that the method “exists”, whatever that means

12. This distinction corresponds to Pritchard’s (2007) distinction between the primary and secondary value problems.
13. This objection was legitimately raised by Erik Carlson in response to an earlier version of the CP solution (personal communication).


more precisely, seems insufficient for the purpose of raising the probability of future true belief with reference to these empirical regularities. There is another objection in the same category that I would also like to say a few words about. As a preliminary, note that to acquire a false belief in a reliable manner makes good sense on the common interpretation of reliability as not necessarily amounting to complete reliability. By saying that a method was reliable we usually mean to say merely that it is sufficiently reliable for the purposes at hand, i.e. that it is reliable at least to degree d where d is a contextually determined threshold parameter. For instance, vision at close range counts as a reliable process in most contexts even if it happens occasionally to give rise to false beliefs. Now reliably acquired true belief, according to the objection, does not seem to be any more valuable than reliably acquired false belief on the CP solution. For the probability that S acquires more true beliefs in the future does not seem to be affected by whether we conditionalize on “S reliably acquired a true belief ” or on “S reliably acquired a false belief ”. In both cases S used a reliable method. By the cross-temporal and learning assumptions, there is a significant likelihood that S will reuse the method, thus probably acquiring more true beliefs. Whether the belief that was produced on the first occasion is true or false is apparently immaterial to the probabilistic argument upon which the CP solution is based. Therefore, if the value of a state depends merely on the extent to which that state is indicative of future true belief, reliably acquired true belief is no more valuable than reliably acquired false belief, and hence no more valuable than reliably acquired belief.14 Compelling as it sounds, there are two difficulties pertaining to this objection to the CP solution. First, one may question whether the probability of future true belief is actually unaffected by the truth value of the belief produced on the first occasion. To see why this is so, it is important to consider the exact formulation of the learning assumption that underlies the CP approach. According to that assumption, when spelled out in full, a method will tend to be reemployed when the occasion presents itself provided that the previous employment was unproblematic. A problematic employment would be one resulting in some sort of conflict or surprise. In this respect there seems to be a difference between true and false belief. The fact that the belief produced on the first occasion was false makes it 14. This objection was raised by a member of the audience at the Goldman workshop in Düsseldorf in May 2008.


more likely that the agent’s future expectations will be frustrated, making the learning assumption inapplicable. To illustrate, if, on its first occasion of use, a generally reliable GPS device wrongly recommends taking a left turn in order to get to Larissa, there is some probability that this will lead to a disappointing result (e.g. a dead-end street) which will lower the chance of a reemployment of the same method of navigation in the future (relative to what would have happened had the recommendation been correct).

Second, and more obviously, the value of a state is not merely a function of how good the state is as an indicator of future true belief. Some states have final value. A state of true belief is a case in point since such a state, according to veritism, has a value that is not derived from other valuable states. This means that the epistemic value of a state depends on (at least) two factors: (1) the final epistemic value of that state, and (2) the value that state has as an indicator of future states of final value. Even if we grant that reliably acquired true belief and reliably acquired false belief are on a par with respect to their capacity for indicating future true belief (an assumption that we just found wanting), there is still a substantial difference in the first value aspect. A state of reliably acquired true belief has a final value that a state of reliably acquired false belief lacks. It could even be argued that a state of reliably acquired false belief has negative veritistic value. In any case, the total value accruing to a state of reliably acquired true belief is greater than the total value accruing to a corresponding state of reliably acquired false belief.

7. Is the CP solution compatible with externalism?

According to externalist epistemology, of which process reliabilism is but one expression, knowledge does not depend on higher order reflective capacities but can be present also in animals and young children. In order for a subject to know something, it is sufficient that it have a belief or representation of the situation that is true and, in the case of reliabilism, was acquired in a de facto reliable manner. It is not a requirement that the subject have higher-level insight into these matters. It is, for instance, not required that the subject mentally represent the belief acquisition process, let alone represent it as reliable.

Consider now the following explanation from (Goldman and Olsson 2009) of the learning assumption in connection with the problem of navigating from point A to point B: “A further empirical fact is that, if you have used a given method before and the result has been unobjectionable, you are likely to use it again on a similar occasion, if it is available. Having invoked the navigation system once without any apparent problems, you have reason to believe that it should work again. Hence, you decide to rely on it also at the second crossroads.” This talk about using a given method, having reasons to believe that it should work again and deciding, in that case, to rely on it sounds uncomfortably internalistic and raises the question whether the CP solution is really compatible with externalism.

Why? The CP solution is certainly compatible with an externalist analysis of knowledge. If providing an externalist account of knowledge is taken to exhaust the externalist epistemological project, then clearly there is not much to complain about. But externalist epistemology may be taken in a narrower sense to include as well certain preconceptions about the value of knowledge. It is consonant with externalism, one could hold, to insist not only that lower level creatures can know things but also that they are in a position to enjoy the distinct benefits of their knowledge. If, by contrast, the empirical assumptions underlying the CP solution turn out to be essentially internalist assumptions, the resulting surplus value would be attainable only for introspectively relatively advanced knowers.15

In reply, I would like to point out, first, that the conditions of non-uniqueness, cross-temporal access, learning and generality were introduced in (Goldman and Olsson 2009) in connection with the higher-level cognitive task of human navigation using a GPS device. In that context, it was natural—though admittedly slightly misleading—to formulate them in relatively high-level terms. Second, and more to the point, in the paragraph immediately following the GPS example we went on to explain more carefully the role played by the empirical assumptions:

15. Christoph Jäger raised this interesting objection in his talk at the 2008 Düsseldorf workshop on Goldman.


of M again on that occasion. By generality, M is likely to be reliable for solving that similar future problem as well. Since M is reliable, this new application of M is likely to result in a new true belief. Thus the fact that S has knowledge on a given occasion makes it to some extent likely that S will acquire further true beliefs in the future. The degree to which S’s knowledge has this value depends on how likely it is that this will happen. This, in turn, depends on the degree to which the assumptions of non-uniqueness, cross-temporal access, learning and generality are satisfied in a given case. Here, the empirical assumptions underlying the CP solution are formulated in terms of (objective) likelihoods and without recourse to non-eliminable internalistic vocabulary. For instance, there is no talk here of either reasons or decisions. Admittedly, the learning assumption is still spelled out in terms of the likelihood that the subject S will make use of the method M. This reference to the use of a method, however, is not intended to imply any conscious reflection on S’s part. It is the same use of “use” as in “The cat uses its teeth to clean its claws”. I take it, therefore, that the CP solution is compatible, after all, not only with the broader but also with the narrower conception of externalism.16 8. Is the central CP claim really true? In this penultimate section I will discuss, at some length, a clever and thought-provoking objection raised by Joachim Horvath. The objection is useful to study more carefully because of the light it sheds on the CP solution and its implications.17 16. This is not quite the whole story. As I mentioned, I also believe that the fact that a person knows, rather than merely believes, that p makes that person’s belief that p more stable. Stability, moreover, is practically valuable when acting over time. However, as explained in (Olsson 2007), the extra value due to stability hinges on the satisfaction of a condition of track-keeping. The individual has to keep some sort of record of where she got a given belief from. The condition of track-keeping is a (modest) internalist condition. The upshot is that an individual, in order to enjoy the full benefits of knowing, needs to be equipped with a, perhaps rudimentary, capacity to reflect upon her processes of belief acquisition. Whether “reflect” is too strong a word here is something that I choose to leave open for further investigation. 17. See (Horvath 2009). In his manuscript, Horvath has already taken some of our previous personal communications into account. Rather than reiterating those parts I will confine myself to what I take to be the most central remaining issue.


Horvath states the central CP claim succinctly as follows:

(CPC) P(F/SKp) > P(F/SBTp)

Here F stands for “future true beliefs of a similar kind”, SKp for “S knows that p”, and SBTp for “S merely truly believes that p”. As part of the stage-setting Horvath now makes the following remark:

How are we to evaluate such a contrastive claim about two objective probabilities? Suppose that S actually knows that p. Then, it will be contrary to fact that S merely truly believes that p in the same situation. So, we have to imagine how probable more future true beliefs of the same kind would be for S, if S did not know but merely truly believed that p in that situation. Thus, Goldman’s and Olsson’s contrastive probability claim (CPC) seems to have an implicit counterfactual dimension, for it can never be true of any epistemic subject S that she knows that p and merely truly believes that p in the very same situation.

Let us pause here for a moment. I agree with Horvath that there is a counterfactual dimension to (CPC). The reason, which may not coincide entirely with that given by Horvath himself, is that (CPC), as I think of it, is something of a natural law, and natural laws, as Hempel and others have observed, support counterfactual statements.18 For instance, the law asserting that iron melts when heated supports the statement that this piece of iron would melt, if it were subjected to heat. Similarly, the proposed law asserting that the probability of S’s obtaining further true beliefs is higher conditional on S’s knowing that p as opposed to conditional on S’s merely believing truly that p entails the following counterfactual: if this proposition p were known, as opposed to merely truly believed, by S, the probability of S’s obtaining further true beliefs would be higher. Equivalently,

(CC) If the proposition p were merely truly believed, as opposed to known, by S, the probability of further true beliefs would be lower.

18. See, for instance, (Hempel 1965, 339).


Horvath now argues that, given some reasonable assumptions, this counterfactual is in fact false. This, if correct, would indeed be damaging to the CP solution, as it would entail that (CPC), the central CP claim, is also false. Horvath’s first dialectical move as part of his attempt to demonstrate the falsity of (CC) is to make the following observation:

Human subjects typically have a number of reliable belief-producing mechanisms at their disposal, like perception or memory, as well as a number of unreliable belief-producing mechanisms, like biased reasoning or testimony based on an unreliable informant. In order to evaluate (CC) according to the standard Lewisian semantics of counterfactuals, we need to consider the closest possible world where its antecedent is true, that is, the next possible world where S does not know that p but merely truly believe that p—or, more precisely, the next possible world where S has an unreliably produced true belief that p instead of a reliably produced true belief that p like in the actual world.

Horvath now introduces a further assumption to the effect that, in the actual world, S’s knowledge that p is based on S’s reliable faculty of visual perception. He continues: Now, the closest possible world where S truly believes that p based on an unreliable mechanism does not seem to be a world where S’s visual capacities are seriously impaired, for this would be a rather drastic change compared to the actual world. More plausibly, the closest world where S has an unreliably produced true belief that p might be one, for example, where S bases his belief that p on the testimony of an unreliable informant of his acquaintance (or on one of the other unreliable mechanisms that are actually at her disposal).

As Horvath proceeds to note, it is a plausible empirical assumption that we can acquire most of our beliefs in more than one way. But if so, then … the closest possible world where the antecedent of (CC) is satisfied will typically be one where S disposes of the very same mechanisms of belief-production as in the actual world, but simply makes use of one of her unreliable mechanisms, like the testimony of an unreliable informant, instead of one of her reliable mechanisms, like visual perception. However, in such a world it is not the case that the probability of more future beliefs of the same kind is lower than 0.8, that is, lower than the corresponding actual probability. For, in a close possible world like that, all of S’s actual belief-producing mechanisms are still present in an unmodified way.


Hence, … given my plausible empirical assumptions about normal human subjects, the probability of more future true belief is typically not lower conditional on merely truly believing that p than conditional on knowing that p. But then, reliably produced true belief does not have the extra value of making future true belief more likely than unreliably produced true belief in actual human subjects.

As Horvath reports, I have in our previous correspondence objected to the implicit assumption that the closest possible world where S truly believes that p based on an unreliable mechanism is either one where S’s visual capacities are seriously impaired or one in which S makes use of another mechanism altogether. There is also the possibility that S’s visual capacities are impaired, but only slightly. They could be impaired only to the extent required for the reliability of those capacities to fall below a given reliability threshold. Clearly, this need not entail serious impairment.

The point I would like to raise now, however, is a different one. Let us return to Horvath’s claim that since, in the second scenario in which S uses a different method that is in fact unreliable, all of S’s actual belief-producing mechanisms are still present in an unmodified way, the probability of future true belief will be just as high as in the knowledge scenario. I doubt that this claim is true. What Horvath overlooks is that it may well be a significant fact that S, in order to solve the problem at hand, chooses to ask an in fact unreliable witness rather than to take a look for himself.

To see why this is so, first note that the empirical conditions that underlie the CP solution must be assumed true in the actual “knowledge” world. Moreover, since they have the status of laws, worlds in which they don’t hold are relatively remote from the actual world. This means that, in the alternative world we are now considering, let us call it W, the assumptions of non-uniqueness, cross-temporal access, learning and generality still hold true. Let us consider more carefully the world W in which S relies on the in fact unreliable witness. By the non-uniqueness assumption, S is likely to encounter the same kind of problem again. By cross-temporal access, the same method is likely to be available then, which in this case means asking the unreliable witness. By the learning condition, S is now likely to use the same method again. But since the method is unreliable, chances are now that this will lead to a false belief. At least this is more likely than if S had had knowledge based on visual perception (which we assume, with


Horvath, to be reliable to the same degree in the scenarios under consideration). Hence, if we compare the two scenarios—knowledge versus mere true belief—at the second shot, it is more likely that a true belief will ensue in the knowledge scenario than it is in the mere true belief scenario. I see no reason, therefore, to think that the probability of further true belief would be no higher in the knowledge scenario as compared to the true belief scenario. The CP claim still seems to me to be, in Horvath’s words, “clearly and determinately true”. 9. Conclusion According to the CP solution to the swamping problem, reliabilist knowledge has surplus value in virtue of indicating not only the truth of the belief that was acquired but also the truth of future beliefs adopted in response to similar problems. This holds to a lesser degree of a mere true belief. Since it is valuable to have true beliefs in the future, this proposal, in my view, solves the longstanding value problem first raised in Plato’s dialogue Meno for the attractively simple theory of process reliabilism. I am grateful for this opportunity to respond to some of the objections that have been raised against this proposal, which has forced me to think more carefully about its consequences. So far, however, I have not seen any objection to that approach that survives critical scrutiny. Acknowledgements: I benefited greatly from the input I received from speakers and members of the audience during the Düsseldorf Goldman conference in May 2008. Apart from Alvin himself, who is always a great source of inspiration and insight, and the people I have already credited, I would like to thank, in particular, Peter Baumann, Thomas Grundmann, Christian Piller, and Gerhard Schurz.

REFERENCES

Craig, Edward 1990: Knowledge and the State of Nature. Oxford: Oxford University Press.
Goldman, Alvin I. 1979: “What is Justified Belief?” In George Pappas (ed.), Justification and Knowledge. Dordrecht: Reidel. Reprinted in Alvin I. Goldman, Liaisons: Philosophy Meets the Cognitive and Social Sciences. Cambridge, MA: MIT Press 1992.
— 1986: Epistemology and Cognition. Cambridge, Mass.: Harvard University Press.
— 1999: Knowledge in a Social World. Oxford: Oxford University Press.
— 2002: “The Unity of the Epistemic Virtues”. In Alvin I. Goldman, Pathways to Knowledge. New York: Oxford University Press.
Goldman, Alvin I. and Erik J. Olsson 2009: “Reliabilism and the Value of Knowledge”. In Adrian Haddock, Alan Millar and Duncan H. Pritchard (eds.), Epistemic Value. Oxford: Oxford University Press, 19–41.
Hempel, C. G. 1965: Aspects of Scientific Explanation: and Other Essays in the Philosophy of Science. New York: Free Press.
Horvath, Joachim 2009: “Why the Conditional Probability Solution to the Swamping Problem Fails”. Grazer Philosophische Studien, this volume.
Kvanvig, Jonathan L. 2003: The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
— to appear: “The Swamping Problem: Pith and Gist”. Social Epistemology.
Levi, Isaac 1980: The Enterprise of Knowledge. Cambridge, Mass.: The MIT Press.
— 1991: The Fixation of Belief and Its Undoing. New York: Cambridge University Press.
— to appear: “Knowledge as True Belief”. In Erik J. Olsson and Sebastian Enqvist (eds.), Science in Flux. Netherlands: Springer.
Olsson, Erik J. 2004: “F. P. Ramsey on Knowledge and Fallibilism”. Dialectica 58 (4), 549–557.
— 2007: “Reliabilism, Stability, and the Value of Knowledge”. American Philosophical Quarterly 44 (4), 343–355.
— 2008: “Knowledge, Truth, and Bullshit: Reflections on Frankfurt”. Midwest Studies in Philosophy XXXII, 94–110.
— to appear: “Reply to Kvanvig on the Swamping Problem”. Social Epistemology.
Pritchard, Duncan H. 2007: “Recent Work on Epistemic Value”. American Philosophical Quarterly 44, 85–110.
Quine, W. V. O. 1960: Word and Object. Cambridge, Mass.: The MIT Press.
Swinburne, Richard 1999: Providence and the Problem of Evil. Oxford: Oxford University Press.
Werning, Markus 2009: “The Evolutionary and Social Preference for Knowledge: How to Solve Meno’s Problem within Reliabilism”. Grazer Philosophische Studien, this volume.
Williamson, Timothy 2000: Knowledge and Its Limits. Oxford: Oxford University Press.
Zagzebski, Linda 2003: “The Search for the Source of Epistemic Good”. Metaphilosophy 34, 12–28.
Zimmerman, Michael 2002: “Intrinsic vs. Extrinsic Value”. In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Summer 2009 Edition. URL: http://plato.stanford.edu/entries/value-intrinsic-extrinsic/. First published in 2002; substantially revised in 2007.


Grazer Philosophische Studien 79 (2009), 115–120.

WHY THE CONDITIONAL PROBABILITY SOLUTION TO THE SWAMPING PROBLEM FAILS Joachim HORVATH Universität zu Köln Summary The Swamping Problem is one of the standard objections to reliabilism. If one assumes, as reliabilism does, that truth is the only non-instrumental epistemic value, then the worry is that the additional value of knowledge over true belief cannot be adequately explained, for reliability only has instrumental value relative to the non-instrumental value of truth. Goldman and Olsson reply to this objection that reliabilist knowledge raises the objective probability of future true beliefs and is thus more valuable than mere true belief. I argue against their proposed solution to the Swamping Problem that the conditional probability of future true beliefs given knowledge is not clearly higher than given mere true belief.

The Conditional Probability Solution to the Swamping Problem is an interesting recent attempt by Goldman and Olsson (2009) to defang an important objection to epistemic reliabilism. According to the Swamping Problem, it cannot be explained why knowledge is more valuable than mere true belief if one assumes simple reliabilism about knowledge and epistemic value monism, that is, the claim that truth is the only non-instrumental epistemic value (cf. Kvanvig 2003). On these assumptions, the standard swamping argument takes the following form (cf. Goldman/Olsson 2009):

(S1) Knowledge equals reliably produced true belief (simple reliabilism).
(S2) If a given belief is true, its value will not be raised by the fact that it was reliably produced (because of value monism).
(S3) Hence: knowledge is no more valuable than unreliably produced true belief.

A good reliabilist solution to the Swamping Problem has to specify what exactly it is about a reliably produced true belief that makes it more valuable than the corresponding unreliably produced true belief, that is, why premise (S2) of the swamping argument is false.1 To this end, Goldman and Olsson claim that “under reliabilism, the [objective] probability of having more true belief (of a similar kind) in the future is greater conditional on S’s knowing that p than conditional on S’s merely truly believing that p” (Goldman/Olsson 2009), together with some plausible empirical background assumptions (ibid.). In the following, I will argue for the most radical and straightforward response to the Conditional Probability Solution, namely that the probability of more future true belief simply is not higher conditional on knowledge than conditional on mere true belief.

At the heart of Goldman’s and Olsson’s solution lies the contrastive probability claim that, for a given epistemic subject S, the probability of future true beliefs of a similar kind (F) is higher if S knows that p instead of merely having a true belief that p. Their rationale for that claim is that knowing that p, but not merely truly believing that p, entails the presence of a reliable belief-producing mechanism, which in turn leads to a higher probability of future true beliefs of the same kind—at least given some plausible empirical background assumptions like, for example, the future availability and applicability of the same belief-producing mechanism (cf. Goldman/Olsson 2009).2 More formally, Goldman’s and Olsson’s contrastive probability claim can be put as follows:

(CPC) P(F/SKp) > P(F/SBTp)

How are we to evaluate such a contrastive claim about two objective probabilities? Suppose that S actually knows that p. Then, it will be contrary to fact that S merely truly believes that p in the same situation. So, we have to imagine how probable more future true beliefs of the same kind would be for S, if S did not know but merely truly believed that p in that situation. Thus, Goldman’s and Olsson’s contrastive probability claim (CPC) seems to have an implicit counterfactual dimension, for it can never

1. The anti-reliabilist, on the other hand, will simply reject premise (S1).
2. Note that Goldman’s and Olsson’s solution at best establishes an extra value of knowledge that is contingent upon actual human subjects and their actual environment. Thus, the Conditional Probability Solution fails anyway if it should be plausible that knowledge necessarily is more valuable than mere true belief.


be true of any epistemic subject S that she knows that p and merely truly believes that p in the very same situation. But it is important, of course, that as much of the surrounding situation as possible is held constant in order to make sure that the difference in epistemic value between S’s knowing that p and S’s merely truly believing that p is due only to the difference between being a reliably produced true belief and being an unreliably produced true belief. Therefore, it seems, (CPC) commits Goldman and Olsson to counterfactual claims of the following sort: Given that S actually knows that p and that, say, the actual probability of more future true beliefs of the same kind is 0.8, it holds that if S did not know that p but merely truly believed that p, the conditional probability of more future true beliefs of the same kind would be less than 0.8, that is, less than the actual probability of more future true beliefs of the same kind, which is conditional on knowing that p. More formally, this can be captured as follows:

(CC) (¬SKp ∧ SBTp) □→ P(F) < P@(F)

(where “□→” is the counterfactual conditional and P@(F) is the actual probability of F). However, given some plausible empirical assumptions about actual human subjects, this counterfactual is arguably false. Human subjects typically have a number of reliable belief-producing mechanisms at their disposal, like perception or memory, as well as a number of unreliable belief-producing mechanisms, like biased reasoning or testimony based on an unreliable informant. In order to evaluate (CC) according to the standard Lewis/Stalnaker semantics of counterfactuals, we need to consider the closest possible world where its antecedent is true, that is, the next possible world where S does not know that p but merely truly believes that p—or, put differently, the next possible world where S has an unreliably produced true belief that p instead of a reliably produced true belief that p like in the actual world. Let’s make the further assumption that S’s actual belief that p is based on her reliable faculty of visual perception. Now, the closest possible world where S truly believes that p based on an unreliable mechanism does not seem to be a world where S’s visual capacities are seriously impaired, for this would be a rather drastic change compared to the actual world. More plausibly, the closest world where S has an unreliably produced true belief that p might be one, for example, where S bases her belief that p on the testimony of an unreliable informant of her acquaintance (or on one of the other unreliable mechanisms that are actually at her disposal). It is a further plausible empirical assumption that normal human subjects


can acquire most of their beliefs in more than one way, that is, based on more than one belief-producing mechanism. If this is so, then the closest possible world where the antecedent of (CC) is satisfied will typically be one where S has the very same mechanisms of belief-production at her disposal as in the actual world, but simply makes use of one of her unreliable mechanisms, like the testimony of an unreliable informant, instead of one of her reliable mechanisms, like visual perception. However, in such a world it is not the case that the probability of more future true beliefs of the same kind is lower than 0.8, that is, lower than the corresponding actual probability. For, in a close possible world like that, all of S’s actual belief-producing mechanisms are still present in an unmodified way. Therefore, given my plausible empirical assumptions about normal human subjects, the probability of more future true belief is typically not lower conditional on merely truly believing that p than conditional on knowing that p. But then, reliably produced true belief does not have the extra value of making future true belief more likely than unreliably produced true belief in actual human subjects.

Here is what one may object to this argument.3 Since reliability comes in degrees, there has to be some threshold of reliability above which a true belief constitutes knowledge and below which a belief would merely be justified, yet fail to be knowledge. Let us be rather strict about this threshold and assume that it is 1 (but nothing much hangs on the precise value of the threshold). Now, consider again a case where S actually knows that p. What is the closest possible world where S merely truly believes that p instead of knowing that p, given a reliability threshold of 1 for knowledge? This is a world, it seems, where S has acquired her belief that p by means of the very same belief-producing mechanism, but where the reliability of that mechanism is just a little bit below the threshold required for knowledge, say 0.999999. But then, the likelihood of future true beliefs of the same kind as p seems to be negatively affected, contrary to what I have claimed above.

In reply to this objection, I would first like to note that the probability of future true beliefs is only minimally affected, that is, the advantage or extra value of knowledge over mere true belief would thus be minimal as well. I take it, though, that our core intuition about the extra value of knowledge is that knowledge is clearly or significantly more valuable than mere true belief. So, the objection may already concede that our core intuition with regard to the value of knowledge cannot be fully accommodated by the Conditional Probability Solution.

Secondly, there are two different possibilities for how the reliability of S’s actual belief-producing mechanism, e.g. visual perception under optimal conditions, could be lowered from 1 to 0.999999. On the one hand, it could be that the mechanism is placed in slightly less favorable circumstances than in the actual world. On the other hand, it could be that the mechanism itself works slightly less effectively even in the very same circumstances. Thus, reliability could be lowered because the relevant mechanism is externally impeded or because it is internally defective. Now, is the next possible world where the reliability of S’s belief that p is lowered from 1 to 0.999999 a world where S’s visual perception is externally impeded or one where it is internally defective compared to the actual world?

In the first case, the closest world is one where S’s vision is only impeded in this one perceptual situation, for anything else would be a stronger deviation from the actual world and thus a more distant possibility. But then it is hard to see how the future likelihood of true beliefs could be negatively affected in this world compared to the actual world. For, if all of S’s belief-producing mechanisms work internally just as well as in the actual world and only one external situation, namely the present one, is slightly less favorable than in the actual world, then it simply can’t be the case that the probability of future true beliefs is affected in any way. In the second case, the external perceptual circumstances of S would be the same as in the actual world but her visual capacities would have a small internal defect that lowers their overall reliability from 1 to 0.999999. Here, it is indeed true that the probability of future true beliefs of the same kind as p would be slightly negatively affected.

However, the crucial question for the evaluation of our counterfactual conditional (CC) is which of these two cases constitutes the closer possibility. If it is the first kind of case, external impediment, then (CC) would be false because the probability of future true beliefs is the same as in the actual world. But if it is the second case, internal defect, then the likelihood of future true beliefs would indeed be slightly lower than in the actual world. Thus, Goldman and Olsson need the second possibility to be closer than the first one in order to support their contrastive probability claim (CPC). My own intuition is that the first possibility is slightly closer than the second, so that (CPC) would still lack the required counterfactual support. Admittedly, people have contrary intuitions here and some judge the possibility of internal defect to be slightly closer than external

3. Thanks to Erik Olsson for the objection.

119

ment, so that (CPC) would be supported in the required way. However, the very fact that (CC) is so hard to evaluate because there are (at least) two almost equally close possibilities for the truth of its antecedent already spells trouble for the Conditional Probability Solution. For, in order to be a convincing solution to the Swamping Problem it should be reasonably clear and determinate that the probability of future true beliefs is higher in the knowledge case than in the case of mere true belief. But, as the foregoing reflections reveal, it is either not clear or not fully determinate that this is so, which in turn casts doubt on (CPC), Goldman’s and Olsson’s central claim about objective conditional probabilities. Thus, Goldman’s and Olsson’s Conditional Probability Solution to the Swamping Problem fails because, in the actual world, its central contrastive probability claim is either likely to be false or at least not clearly and determinately true.4

REFERENCES

Goldman, Alvin and Erik J. Olsson 2009: "Reliabilism and the Value of Knowledge". In: Adrian Haddock, Alan Millar, and Duncan Pritchard (eds.), Epistemic Value. Oxford: Oxford University Press, 19–41.
Kvanvig, Jonathan 2003: The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.

4. Thanks for inspiration to the participants of the workshop Reliable Knowledge & Social Epistemology—The Philosophy of Alvin Goldman at Heinrich-Heine-Universität Düsseldorf, May 19–20, 2008, and special thanks to Thomas Grundmann and Erik Olsson for helpful comments and encouragement.


Grazer Philosophische Studien 79 (2009), 121–135.

RELIABILIST RESPONSES TO THE VALUE OF KNOWLEDGE PROBLEM

Christian PILLER
University of York

Summary
After sketching my own solution to the Value of Knowledge Problem, which argues for a deontological understanding of justification and understands the value of knowing interesting propositions in terms of the value we place on believing as we ought to believe, I discuss Alvin Goldman's and Erik Olsson's recent attempts to explain the value of knowledge within the framework of their reliabilist epistemology.

We ought to know many things. For example, we ought to know who we are and what we are doing, where we come from and where we are going. We ought to know our address, the names of our children, and what the colour of ripe strawberries is. In all these cases, we talk about what we ought to know and not what we ought to have true beliefs about. If we become interested in something that happened and in who was involved, then we want to know the answers to these questions. Again, our aim is knowledge and not mere true belief, i.e. true belief that falls short of knowledge. Why are our epistemic obligations and ambitions focussed on knowledge, when there are less demanding epistemic states, like true belief, that would, it seems, serve us equally well?

I will pursue this question, which Plato was the first to raise in the Meno, along traditional lines. Assuming that justification marks the difference between knowledge and mere true belief—justification which, ideally, would also satisfy an anti-Gettier condition—we can ask why we should care about our beliefs being justified. I suppose a plausible answer to this second question will run along the lines that justified beliefs are (and are known to be) more likely to be true than unjustified beliefs. Assuming that we aim at true beliefs and that, in some sense, it would be good to believe truly, this idea, in turn, leads to a more general question. Why should we care about what is likely to be good?

In the first section of this paper, I argue that such a concern cannot be vindicated by appeal to the goodness of what is likely to be good. After all, what is likely to be good might not be good at all. We have to understand this concern as a concern for satisfying a basic deontological requirement. Preferring the likely over the unlikely good is simply how our preferences ought to be, even if by having such a preference we prefer what, as it turns out, is worse to what is better. Preferring how we ought to prefer is—like believing how we ought to believe—something we value. I will sketch how such a deontological understanding of justification solves the Value of Knowledge Problem. In a nutshell, we value being as we ought to be, because we value being active in the pursuit of the good. When our beliefs are justified we believe as we ought to believe and, thus, we value knowledge over mere true belief.

Against this background of my own solution to the Value of Knowledge Problem, I then turn, in sections 2 and 3, to Olsson's and Goldman's recent attempts to solve the Value of Knowledge Problem. The basis of my solution is a deontological understanding of justification, whatever the right theory of justification may be. Goldman and Olsson are reliabilists. Given their respective methods of acquisition, justified beliefs are more likely to be true than unjustified beliefs. Olsson's idea is that having reliable beliefs makes it more likely to have reliable beliefs in the future. How this idea could provide any explanation of the value of reliable beliefs, however, remains unclear. I have stronger sympathies for Goldman's solution. My disagreement with him is limited to the consequentialist framework in which he operates—a framework which is independent of his reliabilism and against which I argue in the first section.

1. The normativity of what is likely to be good

Suppose you have a choice between two things A and B. If you choose A, this will quite likely, but not with certainty, lead to some good. If you choose B, this might lead to the same good, but it will do so only if choosing A will not. Because it is quite likely that choosing A will lead to the good, the likelihood of B leading to the good is quite small. You care about the good. Apart from their likelihood of leading to the good, you do not care about either A or B. You can think of the good as a large sum of money, which is the single prize of a lottery.
The two things A and B might then be non-overlapping sets of free lottery tickets. A would be a large pile of tickets (up to all except one); B would be just a few tickets (as little as one) for the same lottery. Which of the two things should you choose? You ought to choose what has the higher likelihood of success.1 If, as I assume here, you know that one thing is much more likely to lead to a good than another, then this is the thing you ought to choose. To me this is obvious. So obvious that I cannot offer any further argument for it.2

I have started with a deontic claim—a claim about what one ought to choose. Is this deontic claim supported by any evaluative truth? Should we choose the likely good over the unlikely good because the likely good is better than the unlikely good? We can certainly say that choosing the likely good is better. But does this indicate that we ought to choose it because it is better? If we cannot say the latter, the evaluative claim will only be a shadow of the deontic claim but not its ground. Do you choose the better thing when you choose the big pile of lottery tickets? I certainly believe now that it is better to have the big pile than the single ticket. Suppose I find out that the unlikely has happened and the single ticket wins the prize. I will stand corrected. I thought one in my pile would win and, on this basis, I thought that it is better to have the big pile of tickets. However, I was wrong. The single ticket has won. It was the better thing to have all along, though I was in no position to know that when I made my choice. Thus, which of the two things, the big pile of tickets or the single ticket, is better depends on which ticket will win. We have to wait for the draw to find out which evaluative comparison between the two options is the correct one.

1. If the two options differed in their potential gains and losses we would have to rely not on the likelihood but on the expectation of goodness, and we would have to consider the agent's attitude towards risk. A and B, in our example, have the same potential gains (the prize of the lottery) and the same losses (remaining at the status quo). Risk is, therefore, irrelevant and the expectation of goodness is determined simply by the likelihood of the gain.
2. This normative claim entails that the action which is, taking everything into account, rational, i.e. the action that we ought to do, need not be successful. To me this further claim is obvious as well. As a philosophical thesis about rationality it rests, in my view, on the normative thesis that is my starting point. G. E. Moore has famously denied both claims. "The only possible reason that can justify any action," Moore (1993, 153) writes, "is that by it the greatest possible amount of what is good absolutely should be realized". And in Ethics (Moore 1912, 73) he says that the notions of expediency and of duty—which just are the notions of the successful and of what we ought to do—will always apply to the same action. I deal with Moore's view in (Piller 2003) and in (Piller 2009). In the latter paper I develop the argument of this first section more fully.
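To make the structure of the lottery choice explicit, here is a worked illustration of my own; the specific numbers are not in Piller's text. Suppose the lottery has 100 tickets and a single prize worth G, with A being 99 tickets and B being the remaining one. Then

P(good | choose A) = 99/100 and P(good | choose B) = 1/100,

so the expected amount of good from A is 0.99 G and from B is 0.01 G. After the draw, however, exactly one of the two options is worth G and the other is worth nothing, whichever pile happens to contain the winning ticket. The deontic claim concerns the first, probabilistic comparison; the evaluative comparison, on Piller's view, is settled only by the draw.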
I have argued for the following two claims. First, a deontic claim: We do not need to know which ticket wins in order to make a rational choice between the likely good and the unlikely good. We ought to choose the likely good. Secondly, an evaluative claim: We need to know which ticket wins in order to assess the comparative goodness of the two options. One of them is good, the other is useless. Something interesting follows from these two claims. What we ought to do is not determined by the goodness of the alternatives.3 Consequentialism is false.4

3. I said that the principle that we ought to choose (what we know to be) the more likely good is not grounded in any facts about the goodness of the alternatives. Thus, I will call it a deontological principle. It bears an obvious resemblance to the Hypothetical Imperative. I leave it open whether this principle is grounded in something else, e.g. in facts about agency. For such an account see Korsgaard 1997.
4. By 'consequentialism' I mean the view that the good determines the right. (Subjective consequentialism, i.e. the view that an agent's beliefs about goodness determine rightness, is not affected by my criticism.) There are two ways to defend consequentialism against this argument. Both ways keep the deontic in line with the evaluative. First, one could deny the deontic claim. What I say about evaluations, namely that we have to wait for the draw in order to find out which alternative is better, would then also hold for what we ought to do. This is G. E. Moore's position. According to it, we believed with good justification that we ought to take the big pile but, as it turns out, we were wrong. Actually, we ought to have opted for the single ticket. Secondly, one could deny the evaluative claim. I claim that we can know what we ought to do beforehand. Likewise, according to this defence of consequentialism, we can know the evaluative facts beforehand. Before the draw the big pile is better than the single ticket. The result of the draw does not reveal the 'correct' evaluation. Rather it changes the evaluative facts. First the big pile was better but now, after the single ticket has won, the single ticket is better. If we follow John Broome (1991) and understand decision theory as a theory of the structure of the good, such a probability-relative notion of goodness is central to decision theory. I discuss and reject both of these defences of consequentialism in (Piller 2009).

Have I proceeded too quickly? One notion, which for many lies at the heart of the matter, has not yet been introduced. It is the notion of instrumental value. A naïve consequentialist—and I mean this to contrast with Broome's decision-theoretic consequentialism—might look to the notion of instrumental value to defend his position.5 How does this notion help? We are trying to explain the normative significance of the likely good in value-theoretic terms. Within consequentialism, its defender might argue, there is a sense in which our preference for the likely good is a preference for what is better.

5. Note that decision theory does not distinguish between expected utility and utility. (It would be exactly this difference, however, which would form the basis of a distinction between instrumental and intrinsic value.) The lack of this distinction is at its very heart. The representation theorem shows that a system of preferences which satisfies the axioms of decision theory can be represented by a utility function which has the expected utility property.
The likely good is better than the unlikely good because it has more of a certain kind of value, namely instrumental value. What is likely to be good is, I have argued, normatively significant. The notion of instrumental value tries to turn this normative significance into a rather strange kind of value.6 It is strange because it violates some basic intuition we have about what it is to be of value. Let us say that something is of value if it makes an evaluative difference, maybe not always but at least in some circumstances. For example, your happiness is of value because it makes a difference. Whatever my happiness, if we add yours, we improve the situation overall. Comparing the likely and the unlikely good, however, we realize that whether the good has been achieved or not, the presence of the likely (or unlikely) good makes no such difference. The likely good is as good as the unlikely good in the presence of the good. The likely good is as good as the unlikely good in the absence of the good. This suggests that there is no evaluative difference between the likely and the unlikely good.7 The normative significance of what is likely to be good does not rest upon any evaluative facts.

6. Ross (1939, 257), in my view, expresses a similar scepticism when it comes to the notion of instrumental value. 'I do not wish to call the usage of "good" as equivalent to "a means to good" improper. It is a perfectly sound idiomatic use of the word. But it is clearly to be distinguished from the sense of "good" as "good in itself" or "intrinsically good" or "good apart from its results", and it will be better, in speaking or writing philosophy, not to say "good" when we mean "useful as a means to what is good in itself", but to use this phrase or an equivalent.'
7. In the literature on the Value of Knowledge Problem, philosophers focus exclusively on the first of these two cases. The presence of the good, they say, swamps the value of the likely good. They only see one side of the issue because they refer to (one part of) the general problem as the 'Swamping Problem'. This notion has become standard in discussions of the Value of Knowledge Problem; see (Pritchard 2007) for a detailed list of references. The other part is missed because when we fail and do not achieve the good, there is no good there that could do any 'swamping'. The general problem is obscured by the use of an inappropriate name.

This rejection of consequentialism helps us to account for yet another fact which hardly fits into the consequentialist framework. In terms of the good that we pursue in our choice between A and B, no justification for the choice of A is forthcoming. A, the justified choice, might, as we know, produce no such good at all. Nevertheless, there seems to be something good about A. Failing to get the good whilst having chosen the likely good is, it seems to us, better than failing to get the good whilst having chosen the unlikely good. At least, we did what we ought to have done in the first case, whereas we behaved stupidly in the second case.
If one fails, it is better to fail because the world has conspired against one than to fail because of one's own incompetence.8 Such feelings are common. They show the value we place on our own contribution to achievements. They show that we value our agency. Within a consequentialist framework we are pulled in two different directions. The value of the likely good is said to be derivative value only. It vanishes in the presence as well as in the absence of the good from which it has been derived. Nevertheless, looking at these things from the perspective of agency, there is some value in choosing the likely good which remains. Is the likely good, in virtue of being a likely good, also 'intrinsically' good? Are pure instruments in virtue of their instrumentality intrinsically valuable? Rejecting consequentialism helps us to resolve this puzzle. If we fail after having done what we ought to have done, something positive remains: we have done what we ought to have done. This deontic fact is not based on the value of the alternative we have chosen. Lacking an evaluative basis, we can now use this deontic fact as a new ground for value. We often feel that it is good to have done what we ought to have done, even if what we have done fails to achieve its proper objective.9

8. The feeling I talk about is usually stronger when we do not get the good. However, it is also present in case we do get the good. Benefiting through our own agency is, after all, usually valued more than benefiting through pure luck.
9. Similar ideas about the value of agency can be found in (Riggs 2002, Greco 2002 and Sosa 2007).

For the purposes of this paper, it is less important to be convinced by the suggested solution to the general problem than to see that the Value of Knowledge Problem, especially as it arises within reliabilism, is just an instance of what I have been talking about. I have talked generally about the good and the likely good. In the context of the Value of Knowledge Problem, the good is true belief. If a belief is the result of a reliable belief-producing process, then it is, when compared with a belief from an unreliable source, more likely to be true. Reliably produced beliefs, thus, play the role of the likely good. The same puzzle arises. It is hard to explain why a reliably produced true belief, i.e. a belief that will usually, under reliabilist premises, amount to knowledge, is better than a true belief not so produced. If we hold a true belief, have we not thereby achieved the only good that matters? And if we fail, how can it matter whether our belief stems from generally reliable or unreliable sources? I answer these questions by reference to the good achieved in believing as we ought to believe.
We need a deontological understanding of justified belief to underpin a thus understood value of rational believing. Although reliabilism is not incompatible with a deontological understanding of justification, traditional reliabilists have generally tried to solve the Value of Knowledge Problem within a consequentialist framework. If my analysis of the underlying general problem is correct, such attempts will fail. I now turn to a discussion of the two solutions suggested in Goldman and Olsson (2009). I will name them according to the joint authors' preferences and talk about Olsson's solution first before I switch to Goldman's solution.

2. Olsson's idea

Is a true belief which has been produced by a reliable process better than a true belief not so produced? If it is, then the reliabilist has shown that (reliabilist) knowledge will be better than mere true belief and he will thereby have solved the Value of Knowledge Problem.10

According to our first solution, if a true belief is produced by a reliable process, the composite state of affairs has a certain property that would be missing if the same true belief were not so produced. Moreover, this property is a valuable one to have—indeed, an epistemically valuable one. Therefore, ceteris paribus, knowing that p is more valuable than truly believing that p. What is this extra valuable property that distinguishes knowledge from true belief? It is the property of making it likely that one's future beliefs of a similar kind will also be true. (Goldman and Olsson 2009, 12)

On reliabilist assumptions, knowing that p is better than merely having a true belief that p because of the future benefits which, given certain empirical assumptions, are indicated by knowing that p.

10. More accurately, the reliabilist will have shown that true and justified belief is better than true and unjustified belief. The question of the evaluative significance of satisfying an anti-Gettier condition is not answered by such a proposal. Let me also register my doubts here that knowledge is always better than mere true belief. Most of what is happening is without any interest to me. For example, I simply do not care about what you keep in your fridge (it is really none of my business). I suppose you keep some milk. However, in case I am wrong, I really do not mind. In general, I do not mind either about the truth or the justification of beliefs I happen to have about matters that are of no interest to me (as long as they do not cast doubt on my ability to get it right in cases when it does matter to me). Such a restriction to matters a person is or should be interested in affects the Value of Knowledge Problem. I pursue these matters further in my (2009). In this paper, I will simply assume that we are dealing with cases one is legitimately interested in.
What are these assumptions? Olsson lists the conditions of non-uniqueness, cross-temporal access, generality and learning. The method or process which generates beliefs applies to kinds of situations which we repeatedly encounter (non-uniqueness), and it is available to the agent at different times (cross-temporal access) whilst preserving its reliability (generality). Furthermore, the agent tends to stick to successful employments and abandons unsuccessful ones (learning). The first three of these assumptions relate to belief-producing processes; the last assumption refers to a feature of the agent. Together they guarantee that the agent will, eventually, make repeated use of the most reliable methods whilst abandoning unreliable ones. The idea here is clear enough. Repeated use of reliable methods in the right sort of circumstances has indeed a higher chance of giving you correct results than repeated use of unreliable or less reliable methods. All this is true. What is less clear, however, is how this fact about the consequences of a repeated use of reliable methods is relevant to the question whether it is better to know something than to have a merely true belief about it.

I will raise two problems for Olsson's view. First, Olsson wants to tell us what is good about a belief coming from a reliable source. The value-grounding property he picks out, however, is, in my view, not a property of the belief the value of which he is trying to explain. Secondly, the explanation he provides takes the form 'what is good about a reliably produced belief is that it makes the occurrence of further reliably produced beliefs more likely'. This can hardly count as an explanation of the value reliability is supposed to bestow on beliefs.

On the basis of perception and memory, I believe that Lund is a pretty town. Does this belief have the value-conferring property which is, as Olsson has told us, 'the property of making it likely that one's future beliefs of a similar kind will also be true'? Does this belief of mine (or the fact that I hold it) make it more likely that I will form beliefs of the same kind by relying on the same reliable sources? How could an episode of believing have such powers? It rather is the availability and appropriateness of the methods employed, and the assumed fact that one will home in on such methods in forming one's beliefs, which assure us that the valuable property will be realized. Olsson's four assumptions alone guarantee that someone who forms beliefs at all will come to form his or her beliefs by the most reliable methods. The chance of having true future beliefs, given Olsson's assumptions, is higher than the chance of having true future beliefs given that I stick to unreliable methods and abandon reliable ones (i.e. given the falsity of the learning assumption).
This conditional probability will remain high however we expand (within reason) its condition. Olsson thinks that a reliably produced belief, given the assumptions, indicates that the likelihood of future true beliefs is high. This is true. However, the same holds for a headache, a reliably produced false belief and an unreliably produced belief (be it true or false).11 Given that, as a matter of fact, I will home in on reliably produced beliefs, having a headache is the same 'good news' regarding my future beliefs as is having a reliably produced true belief.12

Suppose Olsson were right and it were a property of believing that p as a result of a reliable method that I will have a high chance of having true beliefs about similar matters in the future. There is a further problem with Olsson's proposal. Remember that we are trying to explain why knowing that p is better than merely truly believing that p. On the reliabilist approach, when we know that p we have a belief that p which is, given the method of its acquisition, likely to be true. The problem is to explain the evaluative significance of this fact. Olsson's answer is the following: It is good that your belief is likely to be true because, if it is likely to be true, you will also get beliefs in the future which are likely to be true. 'What is so great about having rabbits?' I ask my friend. 'Well', he answers, 'if you have a couple of rabbits to start with, you will have even more rabbits in the future'. One cannot explain what is good about something by saying that it will lead to more of the same thing. If all that being likely to be true gives you is more of the same (other things likely to be true), we do not understand what good it is.13

11. Let us think about this matter in terms of conditional probabilities. Consider the probability of your team scoring a goal under the condition that they win. This probability is very high. After all, one cannot win a football game without scoring (at least as long as the game is decided on the pitch). This probability is not increased any further if we add to the condition that they win the further condition that they have a strong centre forward. In fact, under the assumption that they will win we can also add that the goalie has a headache and, again, this would not change the conditional probability in question. Although Olsson indicates that he senses this problem in emphasising that the valuable property is, in his terms, not a property of the belief alone but of the 'resulting composite state of affairs', he fails to realize that the assumptions which make up his 'composite state of affairs' make it irrelevant what kind of epistemic state is added.
12. In the version of this paper available at the Düsseldorf workshop, I write, 'what provides the agent with a high chance of future success is the fulfilment of the empirical assumptions. If we can call this high chance a property of the belief and add in a loud voice "When these assumptions hold", then we can also call it a property of my headache "When these assumptions hold"'. In his contribution to this volume, Olsson considers one instance of this objection. Given that I will form my beliefs by reliable methods, a reliable true belief indicates future true beliefs to the same extent as a reliable false belief. Olsson's main reply is that a reliably produced false belief, when discovered as such, lowers the chance of re-employment of the method. This sounds plausible in Olsson's example of using a GPS device. But is it generally true? Having misjudged the colour of an object, I will not abandon the method of judging the colour of objects by looking. In general, the mistakes and successes in particular examples can have all sorts of effects. All we know from Olsson's assumptions is that, in the end, the agent will use the most reliable methods available. The road to this destination will throw up different obstacles for different agents, and the stipulations of one particular example do not indicate which detours an agent will have to take on his way to the use of reliable methods. On a general level, we can note the following things. (a) Mistakes can be epistemically beneficial. Being misinformed by the generally reliable Jones on one occasion might lead to a properly restricted view regarding his areas of competence. (b) A false belief produced by an unreliable process will, under the learning assumption, make me abandon predictive methods based on this process. This is a good thing as it speeds up the way to the destination. (c) True beliefs from reliable sources can be a bad thing as they may make us overly confident in the use of this method. We might extend the method and use it even in domains where other methods would have a higher reliability. (d) A true belief produced by an unreliable method can be a good thing. Realizing the luck I had when relying on an unreliable method, I become more cautious. Olsson understands the question whether a reliably formed true belief is better than a true belief from an unreliable source as the question how we reach the destination of using the most reliable methods more quickly. Suppose that in one case the reliably produced true belief has the negative effect mentioned in (c). Then this belief might well be less good, in the sense explained, than an unreliably produced true belief (which may or may not have some negative effect on the directness of reaching the point of using the most reliable methods) and, in this regard, the reliably produced true belief could well be much worse than an unreliably produced false belief (which, on this occasion, might only speed up the way to the use of the most reliable methods). To summarize: First, given Olsson's assumptions, a true belief from a reliable source is not evidence for one's future beliefs being true—the assumptions alone guarantee that one will home in on the most reliable methods. Secondly, Olsson might want to argue that a true belief from a reliable source makes one reach the point of using the most reliable methods more quickly. As I indicated above, this may be true in some cases but need not hold generally. Thus, Olsson fails to provide a sufficiently general basis from which to argue for a general advantage of reliably caused true belief over true belief from an unreliable source. Thirdly, the oddity of his proposal comes to light when we realize that the extra value which, according to Olsson, (reliabilist) knowledge is supposed to have over true belief attaches also to false beliefs from unreliable sources.
13. A further, less fundamental, problem is the following. Suppose I use a reliable method to decide some important question, like whether I will still be alive tomorrow. Knowing that I will not be alive tomorrow is not any better than having a true belief about the same thing (because there is no future for me in which to use reliable methods). However, knowing that I will be alive tomorrow is better than having the corresponding true belief. The same method might fail or succeed in adding value to true belief depending on whether I will be able to use the method again. I suppose the real problem in the background here is my first worry. The implausibility of this evaluative difference points to the idea that what, according to the Olsson solution, is evaluatively significant is not really a property of the belief. See Olsson's contribution to this volume for references to the notes he and Jonathan Kvanvig have exchanged over this issue.
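The screening-off point in footnote 11 can be stated compactly; the following formal gloss is mine, not Piller's. If a condition C entails an outcome A (as winning entails scoring), then for any further condition H compatible with C,

P(A | C ∧ H) = 1 = P(A | C),

so adding H (a strong centre forward, the goalie's headache) cannot raise the conditional probability. Analogously, once Olsson's empirical assumptions are already part of the condition, adding that the present belief is a reliably produced true belief raises the probability of future true beliefs no further.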
3. Goldman’s argument I turn to the second of the suggested solutions, Goldman’s solution. Goldman does not tackle the question whether reliabilist knowledge is better than mere true belief directly. His professed task is to explain why there is the general tendency to attribute higher value to knowledge than to mere true belief. Psychological explanations of value attributions can come apart from judgments of value. For example, think about the Nietzschean project of a genealogy of morals in which the provided explanation of moral beliefs is intended to undermine these very beliefs. On the other end of the spectrum we find realist explanations which explain value attributions in terms of values and our ability to recognize them. Goldman stands in between these two extremes. His thought, if I understand him correctly, is that once we have a satisfying psychological account of value attributions, why not turn this into an account of value. In this he echoes Mill who tells us that ‘the sole evidence it is possible to produce that anything is desirable, is that people actually desire it’. I agree with Mill and Goldman. How else should we find out about what is good if not by looking at our attitudes? Goldman’s proposal contains two ideas. The first is what he calls ‘typeinstrumentalism’, which he explains as follows. ‘When tokens of type T1 regularly cause tokens of type T2, which has independent value, then type T1 tends to inherit (ascribed) value from type T2. Furthermore, the inherited value accruing to type T1 is also assigned or imputed to each token of T1, whether or not such a token causes a token of T2’ (Goldman and Olsson 2009, 16). Simplifying somewhat, we can say that money generally buys us pleasure. Some money, however, will remain unspent. Nevertheless, according to Goldman, it remains valuable because it is of a type the tokens of which usually produce pleasure. Let me rephrase Goldman’s first idea in terms close to how I presented the problem at the beginning of this paper. All things, which are, judged by their nature, likely to produce some good have instrumental value. They are, in this respect, good things. Now, there is something true about this idea, and there is something wrong about it. Taking it at face value, Goldman’s ‘type-instrumentalism’ strikes me as false. It was not a good thing to take the train that derailed and killed me. Usually, trains bring one to the desired destination. Such killer trains, however, do not seem to inherit any goodness from their safe relatives. The same holds when we think about bad means. The winning lottery ticket did not inherit the

131

The winning lottery ticket did not inherit the property of being a useless piece of paper from its million useless cousins. Thus, we should not accept Goldman's principle of type-instrumentalism. Nevertheless, I understand why Goldman sees some plausibility in this claim. What he tries to capture is the idea that we always ought to care about the likely good. In other words, the likely good is normatively significant. If the argument which I presented at the beginning of this paper is correct, it would be a mistake to look for the value of good means. We prefer good means to bad means (even if they do not achieve their end) because doing so is a normative requirement. It is what instrumental rationality (restricted to good ends) amounts to.

Let me continue with Goldman's solution. He thinks he has established that all things which are, judged by their nature, likely to produce some good have some (namely instrumental) value. Now Goldman invokes his second idea—the idea of value autonomization. 'The main possibility we suggest is that a certain type of state that initially has merely (type-)instrumental value eventually acquires independent, or autonomous, value status. We call such a process value autonomization.' (Goldman and Olsson 2009, 17) In his recent contribution to the Stanford Encyclopedia of Philosophy, Goldman (2008, section 4) uses similar words: 'It is further suggested that sometimes a type of state that initially has merely instrumental value eventually acquires independent, or autonomous, value status'.

On realist premises, this idea sounds rather odd. Something has instrumental value if it can cause something good. Something has intrinsic value if it simply has such value—no causal capacity is required. How can something that only is good because of what it causes suddenly become good independent of any causal capacity? This would indeed be mysterious. Here, Goldman thinks, the switch in perspective from value to valuing is important. 'Value autonomization', he writes (Goldman and Olsson 2009, 19), 'is a psychological hypothesis, which concerns our practices of ascribing or attributing value to various states of affairs'.

Let us look at one of the examples Goldman has pointed at. John Stuart Mill (Utilitarianism, Chapter 4, 36) talks about value autonomization as a psychological process and illustrates it by the love of money.
'There is nothing originally more desirable about money than about any heap of glittering pebbles. Its worth is solely that of things which it will buy; the desires for other things than itself, which it is a means of gratifying. Yet the love of money is not only one of the strongest moving forces of human life, but money is, in many cases, desired in and for itself; the desire to possess it is often stronger than the desire to use it, and goes on increasing when all desires which point to ends beyond it (…) are falling off. (…) From being a means to happiness, it has come to be itself a principal ingredient of the individual's conception of happiness.'

Let me try to fill out some more details of this story. The move from wanting money in order to buy things to wanting money for its own sake occurs because people who have considerable amounts of money see something new in possessing money: it gives them a feeling of power. This awareness of power is valued for its own sake. It simply is a nice thing to be able to tell oneself that one can buy anything and anyone one wants (or that one is coming increasingly closer to such a state). Nothing (or almost nothing) is beyond one's reach. Other examples of 'value autonomization' exhibit the same feature. The long walk to work, which at first was just tedious, becomes valued for its own sake. One gradually sees it in a different light. Familiarity with the things one encounters on the way can create some attachment and a feeling of belonging. The importance we place on habits (and what comes along with them) facilitates such changes. Walking to work might be seen as an integral part of one's day. Why should one change now, even though a newly established bus link would provide a more comfortable means of transport? One has found something new in one's walking, something one could identify with. The details do not matter. We are all familiar with some variations of this story. 'Value autonomization', thus understood, need not require any fundamental shift in one's evaluative outlook at the base level. It is rather a process which makes us aware of evaluative features we had some disposition to like all along. What has changed is that by engaging with the means we see what was unpleasant to start with in a new and more pleasing light.

On Goldman's view, any justified belief participates in the instrumental value of the type, even if it is not true. A general psychological mechanism is then allowed to turn what used to be instrumental value into intrinsic value. Against the first idea, I have objected that instrumental value should not be attributed to unsuccessful means. Nevertheless, the fact that they make truth antecedently likely has, I have argued, normative significance. The mistake I diagnose is, thus, a subtle mistake. The idea that preference could be translated into facts about comparative value—an idea which I would endorse in principle—founders on the problems of understanding instrumental value as value which I have outlined in the first section.
first section. Goldman’s second move—his appeal to the process of value autonomization—is by itself insufficient. In the terminology of value realism, which Goldman uses, the move is implausible. Things that were purely instrumentally good cannot become intrinsically good without a change in their properties. And even in terms of value attributions, we would need to be directed towards some new feature which we have come to see in being an epistemic agent whose beliefs are justified and which would help us to understand such a process. Goldman has not told us what this new perspective on reliability and justification would be. Actually, my own solution to the Value of Knowledge Problem offers such a new perspective. We see believing on the grounds of reliable methods as believing as we ought to believe. Have I, in the end, provided my own defence of reliabilism against the charge that it cannot explain the specific value of knowledge? It depends on what it is to act well. Now I have to lay my cards on the table. I do not think that reliability or, in general terms, an increase in likelihood of the good is enough. The lottery example which I used to elicit intuitions supporting my view works only under the assumption that the agent knows the relevant probabilities. If one uses a reliable method, without any awareness and regard for its reliability, then, in my view, one does not act well. One can only act well by acting in light of one’s own conception of what it is to act well. Here is not the place to defend such an internalist conception of normativity.14 In any case, such a conception is not, in principle, incompatible with the spirit of reliabilism. Maybe it is part of any reliable method that one does not act against one’s own judgement about a situation. Even internalism might, in the end, have a reliabilistic foundation.15

14. I take some steps in this direction in (Piller 2007).
15. I want to thank Gerhard Schurz and Markus Werning for inviting me to the workshop in Düsseldorf at which this paper was first presented, and I want to thank all the participants for helpful discussions.

REFERENCES

Broome, John 1991: Weighing Goods. Oxford: Blackwell.
Goldman, Alvin 2008: "Reliabilism". Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/entries/reliabilism/.
Goldman, Alvin and Erik J. Olsson 2009: "Reliabilism and the Value of Knowledge". In Adrian Haddock, Alan Millar, and Duncan Pritchard (eds.), Epistemic Value. Oxford: Oxford University Press, 19–41. (Page references are to the manuscript version of this paper.)
Greco, John 2002: "Knowledge as Credit for True Belief". In Michael DePaul and Linda Zagzebski (eds.), Intellectual Virtue: Perspectives from Ethics and Epistemology. Oxford: Oxford University Press, 111–134.
Korsgaard, Christine 1997: "The Normativity of Instrumental Reason". In Garrett Cullity and Berys Gaut (eds.), Ethics and Practical Reason. Oxford: Oxford University Press, 215–254.
Mill, John Stuart 1992: Utilitarianism. Indianapolis: Hackett.
Moore, George Edward 1912: Ethics. London: Williams & Norgate.
— 1993: Principia Ethica. Thomas Baldwin (ed.). Cambridge: Cambridge University Press.
Olsson, Erik J. forthcoming: "In Defense of the Conditional Probability Solution to the Swamping Problem". Grazer Philosophische Studien, this volume.
Piller, Christian 2003: "Two Accounts of Objective Reasons". Philosophy and Phenomenological Research 67, 444–451.
— 2007: "Ewing's Problem". European Journal of Analytic Philosophy 3, 43–65.
— 2009: "Valuing Knowledge: A Deontological Approach". Ethical Theory and Moral Practice 12, 413–428.
Pritchard, Duncan 2007: "The Value of Knowledge". Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/entries/knowledge-value/.
Riggs, Wayne D. 2002: "Reliability and the Value of Knowledge". Philosophy and Phenomenological Research 64, 79–96.
Ross, W. D. 1963: The Foundations of Ethics (1st ed. 1939). Oxford: Clarendon Press.
Sosa, Ernest 2007: A Virtue Epistemology. Oxford: Oxford University Press.


Grazer Philosophische Studien 79 (2009), 137–156.

THE EVOLUTIONARY AND SOCIAL PREFERENCE FOR KNOWLEDGE: HOW TO SOLVE MENO'S PROBLEM WITHIN RELIABILISM*

Markus WERNING
Heinrich-Heine-Universität Düsseldorf

Summary
This paper addresses various solutions to Meno's Problem: Why is it that knowledge is more valuable than merely true belief? Given both a pragmatist as well as a veritist understanding of epistemic value, it is argued that a reliabilist analysis of knowledge, in general, promises a hopeful strategy to explain the extra value of knowledge. It is, however, shown that two recent attempts to solve Meno's Problem within reliabilism are severely flawed: Olsson's conditional probability solution and Goldman's value autonomization solution. The paper proceeds with a discussion of the purpose of having a higher value of knowledge as opposed to merely true belief, both in evolutionary and social terms. It claims that under a reliabilist analysis of knowledge it can be explained how knowers could evolve rather than just truthful believers. Subsequently, the paper develops an account of how we can manipulate our testimonial environment in an epistemically beneficial way by valuing reliably produced true belief more than just true belief, and so gives an indirect justification of the extra value of knowledge.

Even though every instance of knowledge is an instance of true belief, knowledge—at least in most contexts—is regarded as more valuable than a merely true belief with the same content. When a person believes something true on the basis of, say, a lucky guess, reading tea leaves, or wishful thinking, that is, without knowing it, most of us would say that she is in a less valuable state than if she had knowledge.

* The main argument of the paper is based on a semi-published master's thesis (Werning, 1997) I wrote more than 10 years ago. Special thanks go to Dirk Koppelberg who first raised my interest in reliabilism. I am very grateful to Alvin Goldman, Erik J. Olsson, Ludwig Fahrbach, Gerhard Schurz, and Leopold Stubenberg for helpful comments on earlier drafts of the paper.

The doctrine of the extra value of knowledge (see Goldman & Olsson 2009, henceforth "G&O") is as old as epistemology itself and was first introduced by Plato. For him the doctrine gave rise to a problem that he proposes in his dialogue Meno and which is now known to epistemologists as Meno's Problem (Kvanvig 1998, Koppelberg 2005). Plato puts forward the problem as one of rational choice. Assume our rational agent has the desire to go to Larissa. He has to choose between a guide who knows how to get there and a guide who truthfully believes how to get there, but does not know. Since the probability that the agent's desire will be fulfilled, everything else being equal, depends solely on the truth values of the guides' beliefs, it is as rational to choose the second guide as it is to choose the first one. For, the fact that the first guide, in addition to having a true belief, also knows the way does not increase the probability of success. Plato uses the Greek adjective ophelimos 'profitable, useful' to express that true/correct belief (orthe doxa) is not less useful than knowledge (episteme) (Platon 1968, Meno 97c). The conclusion of his critical reasoning can thus be summarized as the claim: True belief has just the same utility as knowledge. The question for us is: Why is it still rational to value knowledge higher than merely true belief?

I would like to stress that Meno's Problem in its original version is phrased in terms of practical rationality and attaches mere instrumental value to truth. The truth of a belief is valuable—so Plato apparently implies—solely because it increases the probability that one's desires will be fulfilled. Meno's Problem in its original pragmatic version thus consists of the following three propositions, which apparently form an inconsistent set:

MP1. Extra value of knowledge. A person's knowledge is more valuable than a person's merely true belief with the same content.
MP2. Rational belief evaluation. A person's belief is the more valuable, the more probable it makes successful action.
MP3. No pragmatic difference. A person's knowledge makes successful action more probable only insofar as the person's merely true belief with the same content would make successful action more probable.

In this paper I will argue that a version of reliabilism provides a solution to the problem and, as far as I can see, the only viable solution. I do, however, think that it does so for other reasons than those G&O have proposed in their article. I will begin with some clarifications regarding Meno's Problem and continue with a discussion of its relation to the so-called Swamping Problem and the value of truth.
I will then discuss two ideas by G&O: the conditional probability solution and the theory of value autonomization. After a criticism of their proposal, I will turn to a problem which is structurally analogous to Meno's Problem, but regards the evolution of knowers. In the final section I will explore how a relatively straightforward solution to the Evolutionary Problem can be transferred to the human social case. The main idea is that valuing instances of knowledge in others (and ourselves) more than instances of merely true belief is itself a means to make our own beliefs more likely to be true, given the conditions under which we may influence our testimonial environments. The underlying, psychologically well-founded assumption is that valued practices—in our case: grounding one's beliefs on reliable processes—are more likely to be repeated in the future than unvalued ones. The proposed solution thus is fully coherent with the general epistemological attitude that G&O label psychological naturalism.

Clarifications

Before I develop my argument, the three propositions deserve some additional comments. Clarifications are needed also because there are a number of dissolutions of this problem that easily come to one's mind, but in my eyes are insufficient. First, one might argue that the doctrine of the extra value of knowledge, MP1, is simply false. Kutschera (1981), e.g., argues that knowledge fully consists in a subjective and an objective component. The subjective component is credence or subjective probability. The objective component is truth or objective probability. Both components are maximal in true (firm) belief. According to Kutschera, knowledge hence consists in nothing but true belief (see also Beckermann 1997). Whereas Kutschera generally equates knowledge and true belief, contextualists (e.g., DeRose 1995) have argued that in certain contexts the epistemic standards are so low that true belief alone amounts to knowledge. The player in a quiz show who has the choice between two possible answers might, e.g., be said to have known the answer simply on the basis that he gave a true response without further deliberation. Goldman (1999) advocates the view that the word "know" is polysemous in that it has both a weak and a strong sense. The objections against MP1 would certainly deserve more discussion than is possible in this paper.
In accordance with the epistemological mainstream, I will here simply assume that MP1 is true for an appropriate use of the verb "to know".1

The formulation of MP2 is intended to mean that, all other things being equal, a particular belief is more valuable if it causes the probability of successful action—that is, action that fulfills the desires of the subject—to increase, rather than if it failed to do so. MP2 does not make any comparative claims about the values of beliefs in unrelated scenarios and presupposes that the subject's desires are held fixed. MP2 does not intend to state that the caused increase in the probability of successful action, across scenarios, is the sole determinant of a belief's value. It does, however, imply that a particular belief would be strictly less valuable if it did not cause the probability of successful action to increase to the degree it actually does. One might say in accordance with MP2 that, in a given scenario, the value of a belief is a strictly monotonic function of the caused increase in the probability of successful action.2

1. What strikes me as wrong with Kutschera's argument is that in knowledge, rather than in merely true belief, the subjective and the objective components are linked to each other in an appropriate way. As Nozick (1981) and Dretske (1971) have pointed out, in the case of knowledge the person believes something because it is true, where the because-relation is spelled out in terms of counterfactual dependency or in some other way. To the contextualists, one might reply that even in contexts of low epistemic standards the latter can be raised in a reasonable way such that knowledge and true belief fall apart.
2. If we could assume that the value v of a particular belief, the caused increase D in the probability of successful action, and all other factors x1, x2, … determining the value of the belief are metrizable, and that v = v(D, x1, x2, …) is a function partially differentiable in D, MP2 would come down to the claim that the partial derivative ∂v/∂D is strictly greater than 0 at all points of the domain of v. This would not be equivalent to the sole-determinant claim that if v(D, …) = v(D′, …), then D = D′.
3. I call this pragmatist principle weak because, unlike principles favored by classical pragmatists who define truth in terms of success, our formulation is fully consistent with the principle being a factual rather than an analytical claim.

Another clarification addresses a presupposition of MP3 that one might call, in memory of William James's (1907/1949) pragmatist theory of truth, a weak pragmatist principle. It links the truth of a belief to the probability (P) of successful action:

WPP. Weak Pragmatist Principle.3 Let it be a causal background assumption, c, that a person's belief that p, b[p], and her desire that q, d[q], causally explain the person's behavior in a given case. Then the satisfaction (Sat) of the desire and the truth (True) of the belief are probabilistically related in the following way:
P(Sat(d[q]) | c ∧ True(b[p])) > P(Sat(d[q]) | c ∧ ¬True(b[p])).

The principle thus states that the probability that a person's desires be satisfied is strictly greater given that her beliefs are true than given that her beliefs are false, when in both cases the beliefs and desires explain the person's behavior. A successful action is one whose consequences satisfy the person's desires or, to put it in terms of decision theory, maximize subjective utility. Now there certainly are particular situations in which acting on the basis of a false belief de facto has better consequences than acting on the basis of a true belief would. Driving a car, I have the desire to cross an intersection safely. I have the true belief that the traffic light is on red and slam on the brakes. The driver behind me did not see the red light and bumps into my car. Had I falsely believed that the light was on green, I would not have stopped and would have crossed the intersection safely—no other cars were passing by. The probabilistic formulation of the principle, however, abstracts from the circumstances of a specific situation. The probability of successful action in the weak pragmatist principle is conditioned solely on the behavioral relevance of the beliefs and desires and on the truth in contrast to the falsity of a belief with a certain content.

Epistemic values

The introduction of the Weak Pragmatist Principle leads us to the question of whether beliefs, as the maxim of rational belief evaluation (MP2) purports, are to be evaluated principally according to how much they increase the probability of successful action. Many epistemologists would claim that there is a specific epistemic value and that what determines the epistemic value of beliefs is the intrinsic goal of truth. This attitude is captured by the principle of veritism as stated by G&O:

VP. Veritist Principle. All that matters in inquiry is the acquisition of true belief.

The introduction of truth as an intrinsically valuable epistemic goal does, however, not solve Meno's Problem, but simply transforms it into a variant of the so-called Swamping Problem. It has been put forward, among others, by Jones (1997), Swinburne (1999), and Zagzebski (2003).
was named so by Kvanvig (2003). The Swamping Problem can be regarded as another inconsistent set: SP1. Extra epistemic value of knowledge. A person’s knowledge is epistemically more valuable than a person’s merely true belief with the same content. SP2. Epistemic belief evaluation (derived from veritism). The epistemic value of a person’s belief is determined by its closeness to the goal of truth. SP3. No difference in closeness to truth. Knowledge is no closer to the goal of truth than is merely true belief with the same content. The core of the Swamping Problem is that the property of being knowledge does not add any epistemic value to true belief if epistemic value consists in closeness to the goal of truth. If one accepts the Weak Pragmatist Principle the main difference between the Swamping Problem and Meno’s Problem is whether truth is regarded as intrinsically or instrumentally valuable, a difference that is important, but won’t be of much concern to us in this paper. It seems that only if one were to reject the probabilistic link between the truth of beliefs and the success of action, the two problems would fall apart substantially. But if this link is assumed to hold the two problems can be dealt with roughly in parallel. Contrasting epistemic rationality with practical rationality opens up the option to explain the extra value of knowledge by introducing further epistemic values in addition to truth. The teleological goal of maximizing coherence or the deontological compliance with certain epistemic obligations like that of avoiding contradictions might be good candidates here. As Sartwell (1992), however, points out, this strategy leads to a dilemma: Either those epistemic values are instrumentally valuable with regard to truth or they are intrinsically valuable in their own right. In the first case, the goal of maximizing coherence or the obligation to avoid contradictions would be regarded as means to approach the goal of truth. Being guided by those goals always brings you closer to the truth. Here the value of those aims is derived from the value of truth, SP2 stays in place and the Swamping Problem remains unsolved. In the second case, the additional epistemic values would be regarded as valuable whether following them brings you closer to the truth or takes you farther away from it, depending on the particular situation. Since the latter possibility is not excluded, a potential conflict of epistemic values is lurking. As Sartwell puts it, there


would “no longer [be] a coherent concept of knowledge” (180). The dilemma indicates that appealing to further epistemic values and thereby dissolving the Swamping Problem (rejecting SP2) and—via WPP—also Meno's Problem (rejecting MP2) fails to be a promising option.

The conditional probability solution

Goldman and Olsson propose a reliabilist way of solving the Swamping Problem and thus indirectly also Meno's Problem. Their proposal at first glance seems so attractive because they apparently refrain from introducing additional epistemic values. A reliable process is one that leads to true belief with some threshold probability (Goldman 1986). The reliabilist analysis of knowledge as reliably produced true belief (plus X) on the one hand implies a difference between knowledge and merely true belief.4 On the other hand, it apparently does so without further epistemic values, because the reliability of a belief-producing process is equivalent to its truth-conduciveness.5 But where does the extra value come from? Even though I agree with their overall attitude, I believe that the explanation G&O propose for the extra value of knowledge doesn't go through. G&O's proposal, in fact, consists of two solutions, which they claim “are independent, but […] also compatible with one another and perhaps complementary” (11). They call their first proposed solution the conditional probability solution (CP). The idea is the following: If a true belief is produced by a reliable process, the composite state of affairs has a certain property that would be missing if the same true belief weren't so produced. Moreover, this property is a valuable one to have—indeed, an epistemically valuable one. Therefore, ceteris paribus, knowing that p is more

4. The addition “plus X” is intended as a placeholder for some condition apt to counter Gettier-style examples against the classical definition of knowledge as justified true belief (Gettier 1963, Lehrer 1965, Goldman 1976). In this context it is important to notice that Goldman's (1986) process reliabilism identifies justified belief with reliably produced belief. The choice of X is regarded by G&O as largely irrelevant for the discussion of the Swamping Problem. For it is the justification of a belief that is supposed to raise the value of knowledge beyond that of merely true belief.

5. Some authors use the term “truth-conducive” in an absolute sense as “leading to truth”. Accordingly, a belief-producing process would be called truth-conducive only if it always produces beliefs that are true. In an infinite domain this is harder to attain than even 100% reliability. In this paper I opt for a probabilistic or statistical interpretation of “truth-conduciveness”, which levels out the contrast with the notion of reliability.


valuable than truly believing that p. What is this extra valuable property that distinguishes knowledge from true belief? It is the property of making it likely that one's future beliefs of a similar kind will also be true. More precisely, under Reliabilism, the probability of having more true belief (of a similar kind) in the future is greater conditional on S's knowing that p than conditional on S's merely truly believing that p. (12)
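In the notation of the Weak Pragmatist Principle above (the symbolization is my own, not G&O's), the claim just quoted amounts to a comparative statement about conditional probabilities:

P(True(b[p*]) / K(b[p])) > P(True(b[p*]) / True(b[p]) ∧ ¬K(b[p])),

where b[p*] is a future belief of S of a kind similar to b[p], and K(b[p]) abbreviates the condition that the true belief b[p] was produced by a reliable process.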

To analyze the CP solution, one has to distinguish between two statements:

a) The probability of S's having more true beliefs of a similar kind in the future is greater conditional on S's having the reliably produced true belief that p than conditional on S's merely truthfully believing that p.

b) S's reliably produced true belief that p makes it likely that S's future beliefs of a similar kind will be true.

There is a subtle but important difference between the formulations (a) and (b). Whereas (a) is just a comparative statement about conditional probabilities, (b) in addition to (a) makes a direct causal claim. The phrase “makes it likely” in (b), on its most common interpretation, implies that S's reliably produced true belief that p causes the probability of S's future beliefs of a similar kind being true to increase. While (b) presupposes a specific direction of the causal arrow, (a) is perfectly consistent with the assumption of a common cause and thus only an indirect causal link.

The truth of (a) follows from the definition of a reliable process as one that leads to true beliefs with a probability greater than or equal to some threshold probability Pr (Pr > 0.5). The reasoning goes as follows: If S has a reliably produced true belief that p, S has implemented some process type T that governs beliefs of kind K. The belief that p is of kind K. Every belief of kind K of S that is the outcome of a process of type T is true with a probability greater than or equal to Pr, because processes of type T are reliable. If one now makes the slightly oversimplifying assumption that beliefs of the same kind are always produced by belief-forming processes of the same type, future beliefs of S of kind K will be true with a probability greater than or equal to Pr. Since Pr is strictly greater than 0.5 and the prior probability of a belief being true is at most 0.5, we can conclude:6 The probability of S's

6. The prior probability of a statement depends on the “most natural” partition of the domain. Consider the statement p expressed by the sentence “The most expensive evening dress sold in Paris is red”. If we chose the partition {p, ¬p}, the prior probability of p would be 0.5. However, if we had chosen the partition {red, green, yellow, blue, white, black}, the prior probability would have been 1/6. Regardless of those problems, the prior probability of positive, non-disjunctive statements with natural predicates is never greater than 0.5.


future beliefs of kind K being true is greater conditional on S's having the reliably produced true belief that p than conditional on S's merely truthfully believing that p.

The good thing about (a) is that it is true, but the fly in the ointment is that the scenario is one of common cause:7

b[p] ← T → b[p*]

It is S's access to reliable processes of type T that explains the positive probabilistic correlation between having true beliefs in the future and having a reliably produced true belief now, provided the beliefs are of the same kind. A process of type T is the common cause of the two beliefs and its reliability is the common explanation of the likely truth of the two beliefs. However, it is not the likely truth of the present belief b[p] that explains the likely truth of the future belief b[p*]. Nor is it the particular event that b[p] was produced by some process of type T that explains the likely truth of the future belief b[p*]. For there is no causal dependency of the future belief on the present belief. Nor is there any direct causal dependency between the two particular events of belief production in the present and the future. This is evident if we consider the following scenario: Assume that on Monday Kim forms the perceptual belief b[p] that her brother wears a red shirt. On Tuesday she produces the perceptual belief b[p*] that her sister wears a red hat (precisely the same shade of red). We may assume that the belief-producing process t on Monday was of the same type T as the belief-producing process t* on Tuesday. Now, it certainly need not be true that if S had not had the belief b[p], she would not have had the belief b[p*]. It might well have happened that she met her sister on Tuesday, but missed her brother on Monday. Likewise it need not be true that if S's belief b[p] had not been produced by a tokening t of the process type T, S's belief b[p*] would not have been produced by a tokening t* of the same process

7. The drawing has only heuristic value. It will in particular be made explicit in the text when we speak of type causation (T causally explains b[p] and b[p*]) and token causation (t and t*, tokens of type T, cause b[p] and b[p*], respectively).


type T. The two beliefs do not causally depend on each other and neither do the two events of production. The common-cause scenario is excluded in formulation (b). The bad thing, however, is that this apparently makes (b) false. S's having the reliably produced belief that p does not cause future beliefs of the same kind to be true in the sense that S's having the reliably produced belief that p makes it likely that future beliefs of the same kind will be true. Unless there is an inferential or some other causal link between the present belief b[p] and the future belief b[p*], which is normally not the case for two arbitrary beliefs of a kind K, b[p*] being true is not causally grounded in b[p] being a reliably produced true belief:

b[p]      b[p*]   (no causal arrow between the two beliefs)
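The common-cause structure can be made vivid in a small simulation, which is entirely my own toy construction: the reliability values 0.9 and 0.3 and the 50% chance of implementing a reliable process type are arbitrary assumptions. In the code, the Tuesday belief is generated without any input from the Monday belief, so there is, by construction, no causal path from b[p] to b[p*]; still, the inequality of statement (a) shows up in the conditional frequencies, carried entirely by the common cause T:

import random

random.seed(0)
TRIALS = 200_000
know_n = know_hits = mere_n = mere_hits = 0

for _ in range(TRIALS):
    reliable = random.random() < 0.5      # does S implement a reliable type T?
    r = 0.9 if reliable else 0.3          # assumed toy reliabilities
    b_p_true = random.random() < r        # Monday belief b[p]
    b_pstar_true = random.random() < r    # Tuesday belief b[p*]; b[p] is not
                                          # an input here, so no causal path
                                          # leads from b[p] to b[p*]
    if not b_p_true:
        continue                          # condition on b[p] being true
    if reliable:                          # b[p] reliably produced: "knowledge"
        know_n += 1
        know_hits += b_pstar_true
    else:                                 # b[p] merely true
        mere_n += 1
        mere_hits += b_pstar_true

# Statement (a) comes out true (roughly 0.9 vs. 0.3), although the only
# link between the two beliefs is their common cause, the process type T.
print(know_hits / know_n, mere_hits / mere_n)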

What does this mean for our evaluative question? If the likely truth of future beliefs of the same kind were indeed causally grounded in the present true belief being reliably produced, we could argue that the present true belief is more valuable than it would be had it not been reliably produced. The rationale would be one of a means-to-end relation. If an end is valuable, whatever helps to bring about the end, i.e., makes the end more likely, is also valuable (provided that other values are not violated). If curing people with antibiotics is valuable, then producing antibiotics is also valuable. If the end of having true beliefs is valuable, then whatever makes it more likely to have true beliefs is also valuable. As we have seen, however, the present true belief that p being reliably produced does not make it likely in this causal sense that future beliefs of the same kind will be true. The extra value of reliably produced true beliefs as opposed to simply true beliefs cannot be accounted for by the means-to-end relation. What is valuable is the reliable process of type T, the common cause. The evaluation of the process itself is not at issue, though.

A defender of the CP solution might eventually want to propose an interpretation of “makes it likely” that (i) avoids the unavailable causal reading and (ii) still explains why the likely truth of the future belief increases the epistemic value of the reliably produced present belief. Olsson (this volume) seems to appeal to an epistemic reading of “makes it likely” as “is indicative of”:

c) S's reliably produced true belief that p is indicative of S's future beliefs being true, provided the beliefs are of a similar kind.


The question now is whether an epistemic but non-causal relation is apt to transfer value as a proper causal relation in a means-to-end scenario would. My suspicion is that it isn't. To stay with the example given above, an increase in antibiotics production helps to increase antibiotics treatment. An increase in antibiotics production also causes the pollution of drinking water with antibiotics to increase. Due to this common-cause scenario, the increase in antibiotics pollution is indicative of an increase in antibiotics treatment. Whereas the increase in treatment, however, is a good thing, the increase in pollution certainly is not. We usually say that the latter is a negative side effect of the increase in production, which itself is a proper means to increase treatment. The relation “is indicative of” does not per se transfer value. In many cases it certainly does, but only in those where it is grounded in a direct causal relation between its relata. It is hence rather questionable whether G&O's CP solution indeed shows how the identification of knowledge with reliably produced true belief (plus X) explains the extra value of knowledge.

Value autonomization

G&O's proposal to explain the extra value of knowledge includes a theory of value autonomization, which—as they acknowledge—might be regarded as “complementary” (11) to the conditional probability solution. They include it for two reasons: First, whether the truth-conduciveness of a belief-producing process can be projected onto future cases depends, among other things, on the “non-uniqueness, cross-temporal accessibility […] and generality” (G&O, 14) of the process.8 If a process that once led to a true belief happens to be unique, only accessible at a certain moment in time, or simply too specific to be repeated, its reliability would not imply anything factual about the likely truth of any future belief. Thus, a necessary condition for a reliably produced belief to even be indicative of the likely truth of future beliefs is that the process which led to the present belief is non-unique, cross-temporally accessible and sufficiently general. G&O concede that those conditions are not always fulfilled, but only normally. The second reason is that in a means-to-end scenario, if the end is intrinsically valuable, the means may inherit value from the end, but the

8. They also include learnability as a further constraint. There may, however, also be innate belief-forming processes with projectible reliability, which are neither learned nor learnable.


means is still not intrinsically, but only instrumentally valuable. If veritism holds, the truth of a future belief is intrinsically valuable. Now, even if it were the case—which as we saw is questionable—that a person's reliably produced true belief makes it likely (in this causal sense) that her future beliefs of a similar kind will be true, the present belief would gain additional epistemic value, but only of an instrumental sort. Value autonomization now is supposed to be a psychological mechanism that bridges the gap between “normally” and “always”, on the one hand, and between “instrumentally valuable” and “intrinsically valuable”, on the other hand. So far, G&O only purport to have shown that reliably produced true belief normally has additional instrumental value with respect to the goal of truth as compared to merely true belief. They concede that there is still an explanatory gap to our practice of value attribution. As far as our practice is concerned, we always attribute more epistemic value to knowledge than to merely true belief and we regard knowledge as intrinsically more valuable than merely true belief.9 G&O hold that there is a mechanism of promotion that starts off with an initial assignment of instrumental value in normal cases and leads to a general assignment of non-instrumental value:

The main possibility we suggest is that a certain type of state that initially has merely (type-) instrumental value eventually acquires independent, or autonomous, value status. We call such a process value autonomization. […] The value autonomization hypothesis allows that some states of affairs that at one time are assigned merely instrumental value are ‘promoted’ to the status of independent, or fundamental, value. (17–19, my emphasis)

I have no doubt that mechanisms of value autonomization might indeed exist. It is plausible to assume that what is normally instrumentally valuable will often, after some habituation, come to be generally regarded as intrinsically valuable. However, the shortcomings of G&O's treatment of the Swamping Problem are quite independent of the viability of a theory of value autonomization. The deficits are with the conditional probability solution, on which G&O's story of value autonomization seems to build. This is because the attribution of non-instrumental general value to knowledge as a result of some mechanism of value autonomization causally presupposes

9. As we have seen in an earlier section of this paper, there might be epistemic contexts—e.g., a quiz show—where true belief and knowledge fall together. Our statement here might thus need some qualification. The problem of weak epistemic standards is nevertheless orthogonal to the problem of non-projectible processes.


that instrumental value is normally attributed to knowledge in an initial phase. The initial attribution of instrumental value in normal cases, however, is sufficiently explained by G&O only if the CP solution suffices to explain the instrumental value of knowledge in normal cases. I have argued that this is not the case because we face a common-cause rather than a means-to-end scenario with regard to future true belief. G&O haven't shown how, in the first place, the property of being reliably produced adds any value to a true belief, even normally, when non-uniqueness, cross-temporal accessibility and sufficient generality of the belief-forming process are granted. For a belief's being reliably produced simply does not cause the probability of future beliefs being true to increase, even when the reliability of the process is projectible onto future cases. And since there is no direct causal relation, there is also no means-to-end relation between the present belief being reliably produced and future beliefs being true. Consequently, there fails to be a gain even of instrumental value that is grounded in the property of being reliably produced. G&O's value autonomization account of the extra value of knowledge is not a second, independent solution to the Swamping Problem—contrary to what they claim—but stands and falls with their conditional probability solution, a solution whose adequacy we found to be questionable.

The evolution of knowers

To shed some light on Meno's Problem and the related Swamping Problem, I would like to turn to a structurally analogous problem that we face in the evolution of human cognition. Disregarding sceptical doubts for a moment, when we look around us, we find lots of knowers rather than mere truthful believers. Even though some beliefs of ours might be false, much of what we believe is true; and moreover, the things we truthfully believe are, most of the time, also things we know. True beliefs that are based on lucky guesses, reading tea leaves, wishful thinking and the like are rare, at least if we hold them firmly. We might with some justification say that human brains are knowledge-producing machines. If being a knower, however, is a widespread trait among the human species, that trait should have an evolutionary explanation. It is very unlikely that the trait of being a knower is a mere epiphenomenon of evolution, as the beauty of butterflies perhaps is. The trait of being a knower must have had some evolutionary advantage for our predecessors.


When we go back in evolution to prehuman species, it is questionable whether the concepts of knowledge, belief and truth still apply. Davidson (1999), e.g., argues that those concepts cannot be applied to animate beings without a language capacity. To avoid objections of this kind, I will henceforth capitalize the relevant expressions and talk about KNOWLEDGE, BELIEF and TRUTH, thereby referring to mental states and properties of mental states in prehuman species that come closest to knowledge, belief and truth in the context of humans. When considering the evolution of human knowers and prehuman KNOWERS, we face a problem, the Evolutionary Problem, which is analogous to Meno's Problem. It consists of the following apparently inconsistent set of propositions:

EP1. The trait of being a KNOWER is evolutionarily more successful than the trait of being merely a TRUTHFUL BELIEVER.

EP2. A trait within a species is the more evolutionarily successful, the more it increases fitness.

EP3. The trait of being a KNOWER increases fitness only insofar as the trait of being a TRUTHFUL BELIEVER would increase fitness.

EP1 is apparently justified by the fact that being a KNOWER is an evolutionarily successful trait that goes beyond that of being a TRUTHFUL BELIEVER. The trait is not epiphenomenal and hence should be evolutionarily advantageous. EP2 is a quite general principle of post-Darwinian evolutionary theory, where the evolutionary success of a trait might be measured by the frequency with which it occurs among the species' members. EP3, finally, is a restatement of MP3 where successful action or behavior is biologically interpreted as evolutionary fitness. The reason for me to propose the Evolutionary Problem is that it has a straightforward solution, which by analogy might be transferred to Meno's Problem and the related Swamping Problem.

In a relatively unknown paper, “The Need to Know”, Fred Dretske (1989) compares the having of true beliefs to the having of synchronous representations. In order to survive and reproduce, animals—maybe a little more complex than protozoans—need to find food, flee predators, and mate with conspecifics. To succeed in doing so, an animal has to coordinate its behavior with its environment. Fleeing a predator means: run when a predator approaches. Continuously running, whether or not there is a predator in the environment, would be extremely inefficient. It would exhaust the organism and


as likely lead to its death as if it did not run away. Finding food means: go where edible things grow and eat what is nutritious. Eating everything alike would lead to intoxication, eating nothing to starvation. Mating is good, but not regardless of with whom. Passing on one's genes will only succeed if the mating partner is fertile, of the opposite sex, and apt in many other respects. To survive, reproduce and finally succeed in evolution, the organism must have the relevant information on when, where, what, and who, and this information must result in appropriate on-time behavior. The organism has to achieve a complex task of synchronization. The external target, be it food, a predator, or a mating partner, and the appropriate behavior have to be brought into synchrony. This is typically done by an internal state, for which Millikan (1996) coined the term pushmi-pullyu representation. This trigger-like state has both a descriptive and a directive aspect.10 It is of great survival, reproductive and evolutionary value for the organism that those pushmi-pullyu representations be synchronous with their respective targets: If a chimp is swinging from liana to liana, his fingers must close at exactly the moment the targeted liana lies between them. The success of virtually all behavior of non-human animals depends on the possession of synchronous pushmi-pullyu representations. It is fair to say that synchronous pushmi-pullyu representations are probably the simplest biologically realistic model of TRUE BELIEFS. The true beliefs of humans might be more complex as far as content and logical structure are concerned, and more decoupled from their targets, but they very likely stand in a continuous evolutionary line with synchronous pushmi-pullyu representations. Now, if synchronous pushmi-pullyu representations are so important for evolutionary success, how are they transmitted? The problem obviously is that synchrony is impossible to transfer from one generation to another. Since the environment is continuously changing, a representation that is synchronous with its target now might be asynchronous with its target in a second. The obvious answer is: what can be transmitted isn't synchrony, but mechanisms of synchronization—not TRUTH, but TRUTH-conducive

10. A good example of pushmi-pullyu representations is monkey alarm cries. These simple, syntactically unstructured, but target-specific cries have a directive component, “run and hide!”, and a descriptive component, “a lion is near”. The two components aren't separated. Neuroscientific studies indicate that many cortically realized representations of substances like tools and fruits are pushmi-pullyu: both descriptive features (form, color, etc.) and directive motor affordances (to be peeled in the case of a banana or to be turned in the case of a screwdriver) are part of those representations (Martin et al. 1996, Werning 2009).


processes. The whole purpose of perception is to synchronize certain internal states of the cortex with the corresponding external target objects. The blueprints for the mechanisms of synchronization, be it the architecture of the optic nerve or the anatomy of the ear, may well be encoded in the genes. Where the mechanisms of synchronization would be too coarse and stiff if encoded in the genes directly, at least routines for acquiring mechanisms of synchronization in development could be inherited. The solution to the Evolutionary Problem hence is that the trait of being a TRUTHFUL BELIEVER can only be inherited as the trait of being a KNOWER. We here presuppose the reliabilist assumption that KNOWLEDGE is TRUE BELIEF produced by a TRUTH-conducive process. Since synchrony/TRUTH cannot be transmitted from one generation to another, the only way to increase the chance for the next generation to have synchronous/TRUE representations is to transmit synchronizing, i.e., TRUTH-conducive, mechanisms. The evolutionarily relevant trait of being a TRUTHFUL BELIEVER is to be identified with the trait of being a KNOWER. Thus the inconsistency of EP1 to EP3 is resolved.

Keeping up with truth across time

Let's return to Meno's Problem and our original question: Why is it rational to value knowledge more than merely true belief? The solution of the Evolutionary Problem offers us two main options for dealing with Meno's Problem. The most radical analogy to draw would be to say that Meno's Problem just is a disguised version of the Evolutionary Problem: success in action is to be interpreted as evolutionary success, and to value a person's knowledge is to value her as having the trait of being a knower. Being merely a truthful believer is not an evolutionarily relevant trait at all because it cannot be passed on—this holds true at least for all sorts of time-bound beliefs, most importantly perceptual ones. The trait of being a truthful believer—as regards time-bound beliefs—can only be passed on as the trait of being a knower. What counts is the possession and transmission of truth-conducive processes. Each generation has to use its truth-conducive processes anew to build up representations of its ever-changing environment. Taking this option, knowledge would be more valuable than merely true belief because it is part of a valuable trait in evolution, whereas merely true belief is not.
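The contrast between transmitting synchrony itself and transmitting mechanisms of synchronization, on which this option rests, can be illustrated with a small simulation. It is my own sketch, not Dretske's, and the coin-flip environment and the 0.9 reliability of the perception-like mechanism are arbitrary assumptions: an inherited snapshot of the parent's once-synchronous representation does no better than chance in a changing environment, whereas an inherited mechanism re-synchronizes each generation's representation anew.

import random

random.seed(0)
GENERATIONS = 10_000

def perceive(environment):
    # An inheritable synchronization mechanism: a (here 90% reliable)
    # mapping from the current environment to a representation of it.
    return environment if random.random() < 0.9 else not environment

snapshot = random.random() < 0.5          # the parent's once-true representation
snapshot_hits = mechanism_hits = 0

for _ in range(GENERATIONS):
    env = random.random() < 0.5           # the environment keeps changing
    snapshot_hits += (snapshot == env)    # inherited synchrony: goes stale
    mechanism_hits += (perceive(env) == env)  # inherited mechanism: re-synchronizes

# Roughly 0.5 for the inherited snapshot vs. roughly 0.9 for the mechanism.
print(snapshot_hits / GENERATIONS, mechanism_hits / GENERATIONS)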


What speaks against drawing this radical analogy are a number of important disanalogies between the evolutionary scenario and the rational choice scenario proposed by Plato. First, increasing biological fitness and increasing utility, i.e., the degree to which desires are fulfilled, are quite independent aims. Many of our desires have nothing to do with reproduction or survival, which are the main definientia of biological fitness. Sometimes the two aims are even in conflict: in modern societies many people, e.g., have the explicit desire not to reproduce. Second, beliefs in general cannot be identified with pushmi-pullyu representations. As Millikan (2004) has worked out in detail, beliefs, unlike pushmi-pullyus, are typically deprived of directive aspects and often spatially or temporally detached from their targets. The behavioral role of beliefs thus goes beyond a mere coordination of on-time behavior with the presence of a certain target. The truth of a belief cannot in general be reduced to synchrony with a target. Third, the solution to the Evolutionary Problem focuses on processes that can be transmitted in a genetic or, at least, mimetic way. Even though a great deal of our knowledge, e.g., perceptual or grammatical knowledge, might indeed depend on processes that are passed on in either of the two ways, there are probably many reliable belief-forming processes that are not genetically transmitted. The solution to the Evolutionary Problem may thus be regarded as a valuable contribution to answering the question of how we have come to be knowers, but it is not a comprehensive explanation of why knowledge is more valuable than true belief.

A more indirect lesson to draw from the solution of the Evolutionary Problem is that the extra value of knowledge might have something to do with keeping up with truth across time. In the evolutionary scenario this has an intergenerational interpretation, but there might also be an interpersonal, social understanding. A large part of our knowledge depends on the testimony of others: reports, gossip, narrations, newspaper articles, TV news etc., all taken as expressions of beliefs.11 But again: why should we value a reported belief that's true but based on an unreliable source less than a reported belief that's true and based on a reliable source? From the epistemic perspective of truth as well as the pragmatic perspective of rational choice, both reported beliefs, it seems, are on a par. Giving credence to the first brings us as close to the truth as giving credence to the second. However, would valuing a reliably produced reported true belief more than an unreliably produced one really make no difference with

11. For an elaboration of the notion of testimony in epistemology, see Coady (1992).


regard to the goal of truth?—Perhaps not in the present, but very likely in the future. By valuing reliably produced beliefs more, we have a chance to manipulate our testimonial environment in a positive way. The underlying assumption is that valued practices are more likely to be repeated in the future than unvalued ones. There is a manifold of mechanisms that seem to support this assumption. They range from psychological reinforcement through social sanctions to economic market dynamics. When a child's belief is evaluated, and the assignment of value is expressed by praise if the belief is based on evidence rather than hearsay, we reinforce certain doxastic dispositions. Future beliefs of the child are thus more likely to be true, and the child's reporting of its beliefs will bring us closer to the truth ourselves. If we favor those uttered beliefs of our friends that are reliably formed over those that are due to guessing and other unreliable processes, we may, in the long run, attract certain friends to us more than others and thus, by a kind of social selection, make it more likely that we will arrive at true beliefs in the future when those are again based on our friends' testimony. Finally, valuing certain beliefs over others may even have economic consequences. If we value a newspaper whose authors most often form their beliefs in reliable ways more than a newspaper whose authors less often do so, we might be ready to pay a higher price (and we should, if truth is our primary doxastic goal!). The truth-conducively produced newspaper will more likely flourish and its distribution will spread. Our likelihood of arriving at true beliefs in the future will increase.

It is useful to compare the social scenario to the evolutionary scenario. In the latter, the factor responsible for the spread of a trait in a species is an increase in fitness. In the social case, the factor responsible for the spread of a practice in society is an increase in value. Just as TRUTH/synchrony, being an on-time property of pushmi-pullyu representations, cannot spread within the species directly by a mechanism of inheritance, but only through the inheritance of mechanisms of synchronization, so what can be reinforced by psychological, social, and economic mechanisms in a society is usually not the truth of present beliefs, but truth-conducive processes that will be in effect in the future. Since the success of our own truth-seeking activities strongly depends on the testimonial beliefs of others being reliably produced, we have developed a culture of positive and negative sanctions regarding the production history of beliefs. The extra value of knowledge is manifest in a practice of providing positive and negative reinforcement which favors reliably produced beliefs over others. Valuing instances of knowledge more than instances


of merely true belief is itself a means to make our own beliefs more likely to be true—in the strong causal sense of increasing their probability of being true. The extra value of knowledge is instrumentally grounded in the ultimate goal of truth, which we want to achieve not only now, but also in the future. The key idea for solving Meno's Problem, and, via the bridge of the Weak Pragmatist Principle, also the Swamping Problem, is to regard value not as a causally inert property of doxastic states, but as a property that, in psychological, social, and economic ways, has behavioral effects on the choices we make.12

12. It should be noted that the position defended here is not one that equates or even conflates “being valuable” and “being valued”. It is rather a position of epistemic value realism, which, as is common for realist positions, assumes that the epistemic value of doxastic states has causal power with regard to human preferences. A doxastic state's property of being valuable is the cause of our valuing that state. The property is somewhat analogous to a fruit's property of being sweet. A certain fruit's being sweet may justly be said to be the cause of an animal's preference for this fruit over other fruits that are less sweet. In the case of sweetness, uncovering the underlying causal mechanisms may be a rather ambitious aim that may involve a complex evolutionary story. A similarly complex story remains to be told in the case of epistemic value.
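The social mechanism sketched in the last paragraphs can likewise be put into a toy model. Again, this is my own construction: the three sources, their reliabilities, and the multiplicative reinforcement rule are assumptions made for the example, and the model crudely supposes that the truth of past reports is observable. Preferring sources whose reports have proved true is itself truth-conducive: the probability that the next report we receive is true rises as the selection runs.

import random

random.seed(0)

# Assumed sources with hidden reliabilities (probability of a true report).
sources = {"careful friend": 0.9, "guessing friend": 0.5, "tabloid": 0.3}
weights = {name: 1.0 for name in sources}    # how much we value each source

def consult():
    # Consult a source with probability proportional to how much we value it;
    # then value sources whose reports prove true more, and the others less.
    # The multiplicative rule stands in for praise, social sanctions, and
    # market dynamics.
    names = list(sources)
    name = random.choices(names, weights=[weights[n] for n in names])[0]
    report_true = random.random() < sources[name]
    weights[name] *= 1.1 if report_true else 0.9
    return report_true

early = sum(consult() for _ in range(500)) / 500
for _ in range(5_000):                       # a long phase of "social selection"
    consult()
late = sum(consult() for _ in range(500)) / 500

# The share of true reports rises toward the best source's reliability.
print(early, late)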

REFERENCES

Beckermann, Ansgar 1997: “Wissen und wahre Meinung”. In Wolfgang Lenzen (ed.), Das weite Spektrum der Analytischen Philosophie: Festschrift für Franz von Kutschera. Berlin: Walter de Gruyter, 24–43.
Coady, C. A. J. 1992: Testimony. Oxford: Oxford University Press.
Davidson, Donald 1999: “The Emergence of Thought”. Erkenntnis 51(1), 511–21.
DeRose, Keith 1995: “Solving the Skeptical Problem”. Philosophical Review 104, 8–51.
Dretske, Fred 1971: “Conclusive Reasons”. Australasian Journal of Philosophy 49, 1–22.
— 1989: “The Need to Know”. In Marjorie Clay & Keith Lehrer (eds.), Knowledge and Skepticism. Boulder: Westview Press, 89–100.
Gettier, Edmund 1963: “Is Justified True Belief Knowledge?”. Analysis 23, 121–3.
Goldman, Alvin I. 1976: “Discrimination and Perceptual Knowledge”. The Journal of Philosophy 73, 771–91.
— 1986: Epistemology and Cognition. Cambridge, MA: Harvard University Press.
— 1999: Knowledge in a Social World. Oxford: Oxford University Press.


Goldman, Alvin I. & Erik J. Olsson 2009: “Reliabilism and the Value of Knowledge”. In Duncan Pritchard, Alan Millar & Adrian Haddock (eds.), Epistemic Value. Oxford: Oxford University Press, 19–41. (Page references are to the manuscript version of this paper.)
James, William 1949: “Lecture VI: Pragmatism's Conception of Truth”. In William James, Pragmatism. New York: Longmans Green & Co. (year of original: 1907), 197–236.
Jones, Ward E. 1997: “Why Do We Value Knowledge?”. American Philosophical Quarterly 34, 423–39.
Koppelberg, Dirk 2005: “Zum Wert des Wissens: Das Menon-Problem”. Logical Analysis and History of Philosophy 8, 46–55.
Kvanvig, Jonathan L. 1998: “Why Should Inquiring Minds Want to Know? Meno Problems and Epistemological Axiology”. The Monist 81, 426–51.
— 2003: The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
Lehrer, Keith 1965: “Knowledge, Truth, and Evidence”. Analysis 25, 168–75.
Martin, Alex, Cheri L. Wiggs, Leslie G. Ungerleider & James V. Haxby 1996: “Neural Correlates of Category-Specific Knowledge”. Nature 379, 649–52.
Millikan, Ruth 1996: “Pushmi-pullyu Representations”. Philosophical Perspectives 9, 185–200.
— 2004: Varieties of Meaning. Cambridge, MA: MIT Press.
Nozick, Robert 1981: Philosophical Explanations. Oxford: Clarendon Press.
Platon 1968: Œuvres complètes, Tome III, 2e Partie: Gorgias, Ménon (A. Croiset & L. Bodin, eds.). Paris: Les Belles Lettres.
Sartwell, Crispin 1992: “Why Knowledge Is Merely True Belief”. Journal of Philosophy 89, 167–80.
Swinburne, Richard 1999: Providence and the Problem of Evil. Oxford: Oxford University Press.
von Kutschera, Franz 1981: Grundfragen der Erkenntnistheorie. Berlin: de Gruyter.
Werning, Markus 1997: Erkenntnis und Schlußfolgerung. Master's thesis, Free University of Berlin, Berlin. (http://tinyurl.com/5mgqnk)
— 2009: “Complex First? On the Evolutionary and Developmental Priority of Semantically Thick Words”. Philosophy of Science 76(5). (In press.)
Zagzebski, Linda 2003: “The Search for the Source of Epistemic Good”. Metaphilosophy 34, 12–28.


IV. PROBLEMS OF SOCIAL KNOWLEDGE

Grazer Philosophische Studien 79 (2009), 159–186.

WHAT SHOULD THE VOTER KNOW? EPISTEMIC TRUST IN DEMOCRACY

Michael BAURMANN
Heinrich-Heine-Universität Düsseldorf

Geoffrey BRENNAN
Australian National University / Duke University / The University of North Carolina at Chapel Hill

Summary
Alvin Goldman develops the concept of “core voter knowledge” to capture the kind of knowledge that voters need to have in order that democracy function successfully. As democracy is supposed to promote the people's goals, core voter knowledge must, according to Goldman, first and foremost answer the question of which electoral candidate would perform most successfully in achieving a given voter's ends. In our paper we challenge this concept of core voter knowledge from different angles. We analyse the dimensions of political trustworthiness and their relevance for the voter; we contrast two alternative orientations that the voter might take—an “outcome-orientation” and a “process-orientation”; and we discuss how an expressive account of voting behaviour would shift the focus in regard to the content of voter knowledge. Finally, we discuss some varieties of epistemic trust and their relevance for the availability, acquisition and dissemination of voter knowledge in a democracy.

1. A veritistic theory of voter knowledge

Alvin Goldman's stimulating and multifaceted book Knowledge in a Social World explores the possibilities by which human knowledge can be increased via social institutions and processes. He calls this normative project “veritism”: “Under veritism we are asked to select the social practices that would best advance the cause of knowledge.” (Goldman 1999, 79) One of the domains in which Goldman tries to find answers to this question is democracy. His starting point is the suggestion that “the successful functioning of democracy, at least representative democracy, depends on the acquisition

of certain types of knowledge by particular actors or role-players”. As the essence of democracy for Goldman is rule of the people for the people by means of voting, “voter's knowledge is the first place to look for forms of knowledge that are central to democracy” (315). Following Christiano, Goldman interprets having a vote in a certain group as having a certain type of resource that enables one to influence that group's collective decisions. What then is the role that knowledge ought to play from this point of view in a well-functioning democracy? Whatever this role may be, the diagnosis by political scientists of the actual state of affairs seems to be clear: “ordinary American citizens have a minimal, even abysmal, knowledge of textbook facts about the structure of American government, the identity of their elected officials, and fundamental facts about contemporaneous foreign policy” (317). The picture for German voters may not be as grim as for Americans, but it surely falls far short of the ideal of completely informed rational decision-makers who consider all potentially relevant facts before casting their vote. However, Goldman rightly argues that we cannot evaluate the average voter's knowledge adequately and think of possible remedies if we do not have a firmly grounded idea about the kind and depth of knowledge a well-functioning democracy actually demands: “What kinds of knowledge (or information) is it essential that voters should have?” (320) We have to specify the kinds of facts that are critically important for voters to have before we can think in the spirit of veritism about social practices and institutions that would best advance the cause of relevant knowledge in the domain of democracy. Accordingly, Goldman's first task is to specify “core voter knowledge”, a type of knowledge that voters in a representative democracy should have if the democracy is to function optimally (320). Goldman develops such a specification on the basis of a particular view about the aim of representative democracy: according to this view, democracy is supposed to promote the citizens' goals or ends, and in a representative democracy, therefore, it is the duty of elected representatives to execute the best political means towards the achievement of these goals or ends. The citizenry itself will normally be some composite of egoistic and altruistic types. But, whatever each citizen's ends are: “it is assumed that he or she votes for electoral candidates on the basis of his or her estimate of how well the competing candidates would perform in achieving that voter's ends.” (321) For the sake of simplicity, Goldman ignores the problem of whether it is rational to vote given the low probability that a single vote will swing the election—an omission to which we will want to return.


The voter's ends are operationalised by Goldman as preference orderings over outcome sets. “Outcome sets” are the combinations of outcomes that have resulted from a certain politician being elected and holding office for a given term. The elements of an outcome set—for example, the level of employment, the cost of living, the crime rate, the quality of the environment—are directly valued by the voters so that, for each pair of outcome sets, a voter prefers one to the other or is indifferent between them. Consequently, if the result of the performance of a politician C is an outcome set O1 which a voter V rank-orders above an outcome set O2 which another politician C* would have produced as an elected official, then C was a better official from the point of view of voter V than C* would have been. Of course, the holder of an office is constrained by all sorts of restrictions; and the outcomes that result from that holder's term of office are a function of numerous factors. But as long as there are differences between the two outcome sets associated with any two candidates and as long as a given voter is not indifferent between these outcome sets, which one is elected should make a genuine difference to the voter. Based on this analysis, Goldman states the “core voter question” that a voter needs to ponder in deciding how to vote: “Which of the two candidates, C or C′, would, if elected, produce a better outcome set from my point of view?” (323) If a voter believes the true answer to this question, he has “core knowledge” and “it is reasonable to assume” that his vote will accord with his core belief: if he believes that C would produce a better outcome set than C′, then he will vote for C (324). According to Goldman, democracy is successful when the electorate has full core knowledge, that is, when every voter knows the true answer to his or her core question. Full core knowledge, under majority rule in a two-candidate election, guarantees that a majority of citizens get their more preferred outcome set; high levels of core knowledge can at least make such a result highly probable. This, says Goldman, “is a good or successful result from the standpoint of democracy's goals” (326). The greater the core knowledge, the better for democracy: “core voter knowledge is critically valuable for the realization of democratic ends.” (329) The concept of core voter knowledge serves Goldman as a decisive criterion for the importance or unimportance of other types of voter knowledge—for example knowledge about the candidates' past records, their policy platforms and promises, their ideologies, their personalities, skills, and political competences, their debts to interest groups, and so forth. For Goldman, the importance of all these other forms of knowledge


lies exclusively in their impact on the voter's core opinion (325, 329). Such knowledge is valuable if it contributes to core voter knowledge, and irrelevant otherwise. Similarly, social practices and policies that influence the circulation of political information and disinformation among voters should be assessed by their conduciveness to core voter knowledge. How can we, in realizing the veritistic program, improve core information for the voter and by these means improve core voter knowledge? Which facts and patterns exist in current democracies that are detrimental to adequate voter knowledge, and what could be the remedies? In regard to the information-seeking practices of voters themselves, Goldman discusses two problems, both of which have to do with the shortcuts voters are hypothesized to take. One is a tendency to listen to like-minded sources and to ignore conflicting sources of political information. This problem might be ameliorated, Goldman maintains, by implementing Fishkin's concept of a “national caucus” (Fishkin 1991). The idea is to assemble a representative sample of the citizenry for several days and let them debate political issues in depth with the candidates. The preferences and opinions of the delegates would then be polled and communicated to the public. In this way, better grounded opinions might influence the assessment of candidates by the other citizens. A second shortcut that voters are supposed to take in making their decision is “retrospective voting”. According to this hypothesis, voters simplify their decision between an incumbent and an opponent by judging how well the incumbent has performed during the current term in office and how well off voters are as a result. Goldman contends that it is obvious that the retrospective voting shortcut can be seriously misleading as a guideline to answering the core question. Even if the incumbent has performed well during the past term of office, the challenger might do even better the next time; and if the incumbent performed badly, the opponent might do even worse. Moreover, the retrospective approach does not adequately consider the importance of contextual factors for good or bad results of policies; and it is, in any case, applicable only to chief executives, since politicians in other positions can hardly be held responsible for the outcomes of politics in a certain term. Despite the perceived shortcomings of retrospective voting practices, however, Goldman does not recommend any special remedies. Goldman then turns to the behaviour of candidates and elected officials and the parties that endorse them. He takes the dominant aim of politicians and parties to be electoral victory, and argues that contenders will have strong incentives to communicate to the voters whatever they


think will contribute to that victory, whether or not the statements are true or accurate. Goldman mentions several measures that might counteract these incentives: systematic coverage of political advertisements, “in which reporters examine campaign ads for truthfulness and realism” (338); the application of laws that require candidates to disclose campaign contributions and expenditures and reveal who is paying for commercials and airtime; the “Freedom of Information Act”, which allows citizens access to information from federal agencies; and a system of proportional representation that encourages an extensive articulation of party platforms and programmes and thus a spread of detailed information for voters (as prevails specifically in Germany). Finally, Goldman turns to the pivotal role of the press in political information processing. “Ideally”, Goldman argues, “the press should comprise a set of experts who would report, interpret, and explain political events in a way that serves the veritistic interests of voters, especially their interest in core voter knowledge. Since ordinary citizens cannot be expected to acquire such knowledge entirely on their own, and since successful democracy depends on their acquiring such knowledge, the responsibility of promoting and facilitating this knowledge naturally falls to the press” (340). Goldman envisages two barriers to an adequate fulfilment of this role by the press. The first is the profit-orientation of commercial media, which results in a striving for popularity through the publication of superficial and merely entertaining stories. Goldman especially criticizes the tendency to present politics in a “strategic game schema” that emphasizes the competitive and horserace-like nature of politics, instead of interpreting election-related information within a “policy schema” more focused on citizen interests. The second problem Goldman identifies is the insufficient professional training of journalists and reporters. Currently, journalists are not required to have any systematic knowledge of history, the liberal arts, natural sciences, or sociological and economic analysis. Therefore, they are not equipped to fulfil the role of “public explainers” who put the events of the day in context. Goldman is not optimistic about the prospects for improvement of the press: certainly in the case of the commercial press he thinks it unrealistic to set expectations very high. He confines his hopes for media that do a responsible and commendable job from the veritistic perspective to publicly supported radio and television.


2. Discussion

2.1 Political trust

In what follows, our object is to broaden and complement Goldman's treatment rather than to criticise and revise it. In pursuing that objective, it will be useful to frame the analysis of the role of knowledge in a representative democracy in a slightly different way. We accept Goldman's point of departure that democracy is supposed to promote the citizens' goals or ends, and that in a representative democracy, therefore, it is desirable that elected representatives try to achieve these goals or ends as best they can. We can conclude from this elementary characterization that the successful functioning of a representative democracy depends on having representatives that are trustworthy: that they are motivated to pursue citizens' goals/ends, have the ability to discern what these goals/ends are, and have the capacity to achieve those goals/ends on the citizens' behalf. To use the term trustworthiness to characterize the essential feature of a democratic representative is to point to the fact that the relation between citizens and their representatives exhibits a strategic structure that can be characterized as a “trust-problem” (Lahno 2002). A trust-problem is embodied in situations in which one person, as the “trustor”, makes himself vulnerable to another person, the “trustee”, by an act of “trust-giving”. That a trustor makes himself vulnerable to a trustee signifies that the trustee can harm the trustor by his actions. The incentive for the trustor to take this risk lies in the fact that trust-fulfilment by the trustee would improve the situation of the trustor compared with a situation in which the trustor fails to make himself vulnerable to the potential trustee. Trust-problems, so understood, are a ubiquitous feature of human co-operation and coordination; and their structure is responsible for the fundamental dilemmatic character of social order, because incentives to abuse trust can prevent a mutually advantageous trust-relationship and harm the interests of both parties. The relation between citizens and their democratic representatives embodies a trust-problem because (a) in assigning political decision-making power to their representatives, the citizens make important aspects of their well-being dependent on the acts of their representatives, and in this sense make themselves vulnerable to them; (b) their incentive to do so rests on the hope that a delegation of political power to reliable representatives


can realize their interests better than without such a delegation; (c) the citizens express this hope in a variety of ways, but most centrally by casting their vote for candidates in democratic elections. We have said that a “trustworthy” representative both tries to promote the represented people's goals or ends and is also able to do so. We can be a bit more specific by enumerating at least four factors that are crucial in this respect (Baurmann 2007a):

1. Competence. To successfully promote the goals or ends of represented citizens, a politician in a democracy must possess appropriate intellectual and practical abilities. These abilities rest on a combination of political skills such as assertiveness, communicative competence, rhetorical talent, bargaining ability, strategic planning, visionary thinking, and empathy towards the electorate.

2. Resources. To be successful in politics also requires the factual means and opportunities to achieve one's objectives during a term in office. If a brilliant politician lacks the resources and political power to deploy her personal qualities successfully, she will not be able to realize her projects and wishes. Obviously, in a democracy, politicians can be constrained by manifold restrictions that hinder them from effectively influencing political decisions and implementing their plans.

3. Incentives. Material and immaterial benefits and costs, formal and informal rewards and sanctions, institutional checks and balances, social recognition and contempt can motivate officials to utilize their resources to promote the goals and interests of their electorate. But discretionary power and extrinsic incentives can also tempt politicians to behave opportunistically, to underachieve or to neglect their duties, to misuse their resources and political power for private goals and interests, and/or to manipulate or deceive the citizens.

4. Dispositions. Emotional bonds of solidarity, sympathy and benevolence; the internalisation of social values and norms; moral virtue and personal integrity—these can all motivate representatives to act for the well-being of the represented, for its own sake. Equally, emotional aversion and hatred, the internalisation of deviant values and norms, and moral vices and malice are potential reasons to misuse power and to harm the interests of the citizenry.

Dispositions of intrinsic motivation are of special importance because they can trump extrinsic incentives—in both directions. Extrinsic incentives


to behave opportunistically could be overridden by intrinsic motivation to behave in accordance with moral principles and ideals, just as extrinsic incentives which reward obliging behaviour could be invalidated by emotional repugnance, personal weakness and mischievous aims. What this list suggests is that the overall trustworthiness of politicians is dependent on a complex set of interconnected conditions and factors. Accordingly, it will not be an especially easy task to assess the trustworthiness of an official or a candidate for office. What should the voter know if he wants to form a considered judgement about the reliability and qualifications of a politician? If we agree with Goldman that citizens vote for competing electoral candidates on the basis of their estimate as to how well a candidate will perform in achieving their ends, and that these ends are adequately operationalised as preference orderings over outcome sets, then the demand for knowledge would indeed include the full range: a voter would then have to have knowledge of the competence and political skills of candidates, the resources and opportunities these candidates will probably have access to during their time in office, the hurdles they will face, the incentives that will have an impact on their decisions and performance, and last but not least the personal dispositions which will shape their intrinsic motivation in face of the temptations of power. In this case “core voter knowledge” would include a wide range of context-specific sub-types of knowledge, and the sources and bases of the relevant information would be accordingly differentiated and diverse. To judge the professional competence and political skills of candidates would require knowing their track records in different kinds of political situations; to estimate their resources and opportunities would demand a well-founded assessment of their future position, for example in their party or in a government, a prediction about the composition of government and parliament, and a prognosis of the possible development of the general political situation. To estimate the incentives that will have an impact on their political conduct requires knowledge ranging from the overall institutional structure of a political system and political culture to the influence of interest groups and the general stability of the political process in a country. To judge the personal dispositions and individual virtues and vices of a person presupposes knowledge of a quite different sort: facts about personality and past behaviour, even of a private sort.


Depending on the respective type and source of knowledge, the voter is confronted with different problems and obstacles—and risks. Past records may not be a good source of knowledge about the comparative advantages or disadvantages of competing candidates with regard to successfully realizing certain outcomes when in office. But past records serve much better if they are utilized to get information about the virtues and vices of different persons. Commercial media may not distribute qualified knowledge from political experts and “public explainers” or communicate the intricate details of candidates' political programmes. But they are likely to fare better in circulating information about the personal characteristics of politicians. In this sense, a bias towards the “strategic game schema” and a proclivity for reporting conflicts and scandals may not be entirely dysfunctional. Knowledge about incentives would presuppose knowledge about institutions, political culture and general facts in a society, and sources in this respect will range from “political education” to gossip and hearsay.

2.2 Outcome vs. process

Goldman concretizes the general presumption that democracy should promote the people's goals or ends by conceiving these goals or ends as preference orderings over outcome sets. For the moment let us accept this broadly instrumental picture. Our question is whether, given this view, and given the inevitable difficulties of predicting the future course of events, it makes sense for voters to focus their evaluations on policies or on candidate qualities. Suppose the voter is essentially egoistic: he seeks policy outcomes that will serve his personal interests. Of course, his preferences over specific outcomes cannot be unconditional. He wants to have clean water and clean air, but only if the costs of a healthy environment are not too large. He does not want his country to become engaged in a costly war, but will not want the government just to surrender to a foreign aggressor. He would like to have low taxes, but only if low tax levels do not risk costly social turmoil associated with a sense of injustice by the socially disadvantaged. The problem here is that the relevant “conditions” might change: the disadvantaged may become restive; external relations may become more tense; perceived environmental costs may increase or fall. Therefore, a self-interested citizen expects from politics that it will produce a state of affairs in which not only certain prefixed and enumerable outcomes are realized, but in which all his ends, goals and interests are

167

But at the beginning of a term, no voter will be able to foresee how things will work out or what policies are required to best promote his or her interest. Even if voters knew that a candidate would indeed produce a certain outcome set, they could not be sure ex ante how they would evaluate this outcome set in the future, because this evaluation will depend on other circumstances that may have altered in the interim.

From this it follows that the “egoistic” voter must switch her attention from “outcome” to “process”. As she cannot know at the beginning of a term what kind of outcome would serve her interests best at the end of or during the coming term, her chief concern must be that the procedure by which future collective decisions are reached is such that her personal interests are considered and weighed as strongly as possible—so that the outcome set, unknown and not yet specifiable, will then be optimal according to her preferences.

Similar conclusions can be drawn with regard to an “altruistic” voter. Let us suppose that the dominant preference of an “altruistic” voter is that politics produces “just” outcomes which include the interests of everyone. But there are at least two ways to ascertain the justness of an outcome. The first is to apply a “patterned” or “end-state” view of justice. That means that the justness of a given state of affairs is measured against criteria that directly evaluate the existing facts: whether, for example, a certain distribution of goods and burdens maximizes the utility of the greatest number, promotes the interests of the most disadvantaged or complies with egalitarian yardsticks—irrespective of the history of its development or the conditions of its origination. In this case the “altruistic” voter faces the same difficulty as her purely egoistic counterpart. Because of inevitably limited knowledge about the future she cannot specify in advance a concrete outcome set which will, at the end of a term, satisfy her criteria for justice. Therefore, she too is forced to switch her attention away from the outcome set to the process of politics, and to ask what qualities a process of collective decision-making must have in order to promote outcomes with “patterns” that, in the end, can count as “just”. Of course, the focus on processes will follow directly if justice is itself defined in process terms (as it is in certain entitlement theories of justice and procedural accounts of democracy).

It seems, then, that independent of the precise details of voter motivation, a shift from outcome-orientation to process-orientation in the attitudes of voters will be required.
We can leave it open here whether this shift will be complete or whether there will be a kind of mixture of outcome- and process-orientation. What is central here are the consequences such a shift would have for core voter knowledge and therefore for the veritistic program. The core question for the voter would no longer be “which is the best policy package?” (whether “best” is understood as “best for me” or “best” in some more normative sense) but rather “which of the candidates would, if elected, be likely to choose a better outcome set from my point of view?”

What qualities must a representative have from this perspective, and what kind of core voter knowledge is hence necessary? For an “egoistic” voter the main concern will be that his interests are taken into account as extensively as possible in the political process and in political decisions. Such a voter will have to discern the extent to which alternative candidates have internalised his particular interests. For an “altruistic” voter the main concern will be that the political process produces “just” outcomes which include everyone’s interest. From this it follows, at least on the “straightforward” view, that politicians in office should consider the interests of everyone as thoroughly and in as balanced a manner as possible—again, whatever their concrete role and the extent of their power may be. More sophisticated views may induce “egoistic” voters to assume that their personal interests would be better served if their representatives observed the limitations imposed by appropriate moral or political principles when in office and did not merely act as ruthless executors of their ideology. Conversely, an “altruistic” voter might think that the general welfare is better achieved if representatives act as advocates of their constituents’ interests and do not presume a vocation to act for the common good, relying on abstract properties of the process to generate the desired overall pattern of outcomes. And several positions between these extremes are conceivable.

However, these complications are not strictly relevant to the point we wish to make—which is that process-oriented voters in a representative democracy, whatever their motives, will be primarily interested in the characteristics of their empowered agents in their political roles. The “trustworthiness” of a candidate will depend in part on the extent to which he bases his decisions on the “right” reasons from the point of view of the voter. Of course, the other dimensions of trustworthiness will not lose their relevance. The competence of a politician, his resources to influence outcomes of the political process, the incentives he faces and his personal dispositions still play their role in the overall judgement of the voter.
But an important difference from the outcome-oriented voter remains: process-oriented voters will not make their judgement of “trustworthiness” contingent on the ability of a politician to produce a certain specified outcome set.

This different focus has some significant consequences within the veritistic perspective. Goldman is very sceptical about the veritistic value of “retrospective voting”, where voters are supposed to simplify their decision problem by asking how the incumbent has performed during his term in office. Goldman is right in his scepticism if retrospective voting is tantamount to answering the question of how well-off the voter is as a result of the incumbent’s current tenure. The prospects for retrospective voting brighten, though, if the voter focuses not upon outcomes but upon the behaviour of an incumbent during his term in office and the reasons on which he based his decisions. Even if the outcome were satisfactory but achieved for the “wrong” reasons, the voter could well conclude that prospects for the future would be better if the “right” reasons had determined the decisions of the incumbent. And, in contrast to the outcome-orientation, if voters are able to recognize the decision behaviour of officials, they do not have to estimate the influence of contextual factors in order to judge the “true” impact of the politician.

Furthermore, process-orientation has the additional advantage that in retrospective voting the voter also has a better chance to judge the qualities of the challenger of an incumbent. With an outcome-oriented approach this is difficult, because it is not easy to get evidence of the possible performance of a challenger with regard to producing certain future outcomes. But it is much easier to get evidence about the decision calculus of a challenger—the calculus that she will apply in future situations. There are many different contexts in which a challenger can convincingly reveal the reasons on which she will base her political decisions if she is elected to office. And her past performance in a different context can be telling in this regard—even when her relevant political experience is very thin.

All in all, there seem to be better chances for process-oriented voters to acquire core voter knowledge than for purely outcome-oriented voters. Their focus will be on the personal characteristics and intrinsic motivations of candidates—features which are revealed by facts about the candidates’ past and current behaviour and performance—not on the risky and complicated prognosis of what kind of outcome they will produce in a future term if elected to office. In short: voters’ attention will be rationally directed more towards candidates than towards policies.
2.3 Instrumental vs. expressive voting

Goldman’s concept of core voter knowledge could also be challenged in a more fundamental way. He follows Christiano in his interpretation that to have a vote is to have a certain type of resource for influencing collective decisions. Consequently, voters will use this resource to vote for electoral candidates on the basis of their estimation of how well candidates would perform in achieving the voter’s ends. From this point of view the vote is an instrument by which voters try to intervene in the world and to change the course of things in a way which best serves their preferences. This approach has a long history in the Rational Choice and Public Choice tradition.

But as an interpretation of what truly rational voting behaviour would require, this “instrumental” view of voting is deeply problematic—for the simple reason that the single voter in a fairly large group does not determine the result of an election, except in very special circumstances. Unlike decisions in the market place, for example, the voter does not actually choose between political options. The opportunity cost of V’s voting for candidate A is not candidate B forgone—just a vote for B forgone. So the idea of agents directly choosing policy packages (or the social outcomes that those packages produce) is defective.

In other places and collaborations (see Brennan and Lomasky 1993 and Brennan and Hamlin 2000) one of us has developed an alternative “expressive” view of voting behaviour, according to which the act of voting is to be seen more as a speech act by which a voter wants to express his support for a candidate or his approval of a policy, and in which his instrumental interests play only a minor or indirect role. Voting is to be thought of more as a matter of cheering at a football match—of “showing support”—than of choosing an assets portfolio. For example, on this view, voters can rationally vote for candidates even when the outcome of the election is determined (as Californian voters have been known to do in US Presidential elections, when the result has already been known). More to the point, the kinds of considerations that weigh in voter deliberation are connected to the factors that induce people to “cheer” rather than to the factors that might induce them to choose.
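The force of this point about the insignificance of the single vote can be made vivid with a standard back-of-the-envelope calculation (our illustration; the knife-edge 50/50 assumption is an idealization, not a claim of the authors). Suppose a voter faces $2n$ fellow voters, each of whom votes for candidate A independently with probability exactly $1/2$. The probability that her single vote is decisive (that it makes or breaks an exact tie) is

$$P(\text{pivotal}) \;=\; \binom{2n}{n}\, 2^{-2n} \;\approx\; \frac{1}{\sqrt{\pi n}}$$

by Stirling’s approximation. For an electorate of two million ($n = 10^6$) this is already below $0.06\%$, and once the underlying probability deviates even slightly from $1/2$ the figure collapses exponentially towards zero. This is the arithmetic behind the “veil of insignificance” invoked below.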


To specify what such considerations are is no small task. But things like the personal characteristics of candidates (charisma, charm, rhetorical appeal, even good looks), or the moral attributes of the candidate and/or the policies she is associated with, seem more likely contenders in most cases than the individual voter’s interests. Just as Rawls’ veil of ignorance serves to push individual interests into the background, so the “veil of insignificance” that characterises the individual voter’s actions will rationally reduce the role of self-interest and augment the role of directly “expressive” and symbolic factors. Of course, expressive voting does not exclude voting for the candidate who, in a voter’s estimation, will serve that voter’s ends best. But any voter who does this cannot plausibly do it instrumentally, to influence the collective decision in the “right” direction; she must vote that way because she wants to identify herself with a particular position and to express her affirmation and appreciation of a candidate who takes that position.

If we accept that the theory of expressive voting captures relevant aspects of voting in a democracy, then we have to adapt the concept of core voter knowledge accordingly. The main consequence will be that core voter knowledge no longer has a specified substance. The reason for this is that it is not fixed in advance what individual voters want to express by their voting in a democratic election. If voters want to express their approval of a candidate because they think that that candidate will probably produce the best outcome set from their point of view, then core voter knowledge as Goldman has specified it will remain the same. But voters in a democracy can, and in fact do, express quite different attitudes, beliefs and values by their votes. They can express by their vote that they very much appreciate an important singular outcome of a recent policy, without necessarily assuming that the incumbent will also be the one to produce the best outcomes in the future. For example, many German voters seem to have expressed their approval of Chancellor Schröder’s decision not to take part in the Iraq war, quite apart from his perceived qualities as a future leader. In the same way, voters can use their vote to express their disapproval of a singular outcome—for example, that the incumbent has not kept his election pledge on a certain question. Voters can express a general disenchantment with politics or politicians—for example, by casting their vote for a radical party—without necessarily hoping that this party will come into power. And voters may express their esteem for a politician who has acted especially admirably in a certain regard even if they do not assume that he is adept at producing overall good outcomes. An example could be the respect Chancellor Willy Brandt enjoyed at the ballot box after his politics of reconciliation.
Of course, this list of possibilities is just that: a list of possibilities. Unlike the instrumental account of voting, the expressive account is somewhat open-ended, and the limits it imposes on voter attitudes are extremely loose. All of these possibilities are, however, entirely consistent with rationality on the voter’s part. There is, of course, systematic evidence showing—for example—that a candidate’s vote share, other things equal, is significantly increased by his good looks (Leigh and Susilo 2008). But the important point here is that the qualities that induce “cheering” (and “booing”) are as likely to be connected with the perceived qualities of the candidate as with the policies that candidate promotes—and even in policy assessment they are unlikely to track voters’ prudential interests in any close way. From the perspective of expressive voting, the often bemoaned “personalization” of politics makes perfect sense. Core voter knowledge would therefore be a quite intangible and fluid phenomenon, and the core voter question would consequently be highly time- and context-dependent.

What are the implications for the veritistic agenda? Certainly not that the supply of reliable information and knowledge about politics and politicians should be reduced. But we have to face the fact that the nature of the political information demanded is likely to show substantial variation across voters, and for any one voter across time. Core voter knowledge for voter V is not the same as for voter V*, and for voter W at time t1 it is not the same as at time t2. Therefore, we have to place a question mark over the possible veritistic ideal that all voters should possess uniform and maximal knowledge about politics and politicians all the time.

The expressive voting account has some similarities with the problem of “rational ignorance”. The rational ignorance arguments emphasise the lack of incentive to acquire relevant political knowledge, given that no rational voter can expect the probability of his being determinative in an election to be other than very tiny. The expressive voting arguments take the same point of departure, but the conclusions drawn are rather different. Many “expressive” voters may be quite well informed about those aspects of politics that engage their expressive concerns—much in the same way as keen football fans often know a huge amount about their team members and their records and about football statistics in general (none of which information, incidentally, has any prudential relevance!). Even so, “rational ignorance” considerations still lurk in the undergrowth: nothing in the expressive account denies that many voters will know very little about the objects of their vote or the issues at stake in casting that vote one way or the other.
And we think that Goldman is rather too quick to set aside the “rational ignorance” challenge. In any “veritistic” enterprise in the democratic context, the question of what incentive voters have to acquire whatever information is deemed relevant must be a central one. The rational ignorance challenge is too basic to be set aside in the interests of simplification. As we have indicated, the expressive account of voting offers a reasoned account not just of levels of turnout (why people rationally vote in the numbers that they do) but also of why they may acquire information about the aspects that are relevant to electoral choices.

However, if the expressive voting theory is correct—whether as a supplement to an instrumental theory of voting or as a substitute for it, and whether applicable to all voters or just a subset—there are important follow-up questions for a theory of democratic information. One of the most salient is the question of how democratic elections can be made to reliably generate political outcomes that will best serve the ends or goals of the citizens. This question takes us well beyond the scope of this paper. But it can hardly be pretended that it is an unimportant one, or that it does not bear critically on the kind of information that democratic citizens will plausibly have reason to acquire.

2.4 Epistemic trust in democracy

From a veritistic perspective, societal, political and legal institutions of public knowledge production and distribution matter a great deal—both in general, and in relation to politically relevant knowledge in particular. These institutions determine to a large extent whether the production and distribution of knowledge is efficient, whether there is control of and competition between different sources, whether there is freedom of speech and information, and whether experts acquire adequate competence and sufficient resources and have incentives to distribute reliable information and useful knowledge. However, what is true for other kinds of institutions is also true for epistemic institutions: institutions are always embedded in a social and cultural environment that is a crucial factor for their efficiency and functioning. “Soft” factors like social norms and cultural values, history and tradition are important in determining whether institutions can actually realize the aims for which they were designed, or on the basis of which they are justified (Baurmann 2007b).
Both the institutional framework of a society and the social embeddedness of this framework and its impact are central to the project of realising veritistic ideals.

In the context of the present discussion we want to investigate a factor that seems to us of special importance for the veritistic agenda in general, and for the availability of knowledge in a democracy in particular: the role of trust in the acquisition, validation and utilisation of information (Hardwig 1991, Govier 1997). This role of trust is not so much a matter of the relation between voters and politicians as such; it concerns rather the role trust plays in relations among citizens within the epistemic division of labour, specifically when they want to gather information about the trustworthiness and other relevant personal characteristics of officials and candidates.

The relation between institutions and trust is generally intricate. On the one hand, well-designed and well-ordered institutions in politics, law or the economy can create and nurture trust. On the other hand, without trust even well-designed and well-ordered institutions can hardly function properly and produce the results that might be hoped for from them. The same is true for institutions that are designed to serve veritistic purposes in a democracy.

Where exactly does trust come into play when we are dealing with the ways in which voters can gain relevant knowledge? In the first place, whatever kind and range of knowledge voters need, it seems obvious that it cannot be acquired by individual voters entirely on their own. Voters will be dependent on testimony, on information and knowledge from other people and sources, in order to accumulate the necessary knowledge (Coady 1992, Matilal and Chakrabarti 1994, Schmitt 1994)—a fact that Goldman himself mentions when he discusses the pivotal function of the press. This dependence on external sources exists not only because individuals have a resource-problem and simply do not have the time or the opportunity to gather and validate all relevant information about politics and politicians entirely individually. Average citizens also have a competence-problem. If, for example, they want to know something about the typical incentives politicians face in office, or whether a certain policy is an appropriate instrument for bringing down unemployment or limiting household deficits, they will need expert assistance. Goldman is right to emphasise the role of professional experts and “public explainers” who can elucidate political issues for political laymen.

From this it follows that in order to identify trustworthy politicians, voters must identify trustworthy informants who can provide them with the kind of knowledge they need.
Not surprisingly, the requirements for being a trustworthy informant are much the same as the requirements for being a trustworthy representative: a trustworthy informant must be competent—he must possess appropriate cognitive and intellectual abilities as well as sufficient external resources to identify the relevant information—and he must be disposed to pass on that information accurately. Informants’ motivations to exploit their cognitive potential, to utilize their connections to discover useful information and to transmit their knowledge to recipients depend both on their incentives and on their dispositions; but incentives and dispositions can also tempt informants to behave opportunistically, to underachieve and/or to misuse their resources, and to deceive recipients with wrong, misleading or useless information.

Of course, different settings of information transfer demand different levels of trust. To judge the reliability and sincerity of information about the time of day does not require deep insight into the special competence, incentives or motivations of the informant (Fricker 1994). But for a typical voter, judging the special competence of political experts and “public explainers” is quite another task.

Two questions, then. First, what epistemic sources are relevant for voters who want to gain such knowledge? And second, what is at stake in assessing the reliability and trustworthiness of such sources?

Trust in epistemic authority

As already noted, the individual voter not only has a resource-problem in accumulating all relevant information about politics and politicians, but also a competence-problem. That means that the average voter is dependent—over a more or less wide range—on the additional information and knowledge of political experts and authorities in order to form a well-founded opinion about the trustworthiness of politicians, in general and in the concrete case. He may also look for advice and orientation from opinion leaders and spokespeople who are able to condense and articulate the interests and hopes of a group or community. Therefore, as Goldman points out, from a veritistic perspective it is highly important that, in a democracy, political experts and specialists are available who are professionally competent, possess personal integrity and can explain political complexities and problems to the public. But to have trustworthy experts and analysts is only half the battle. They must also be recognised as trustworthy—that is, actually trusted—by the public, so that the “truths” they reveal are believed and distributed.
To accomplish this, and to promote and secure trust in experts and authorities, numerous variants of rules and criteria are employed in all fairly developed societies—in politics as well as in other areas—to assign and identify the experts and authorities who are trustworthy (Fricker 1998, Manor 1995). This is obvious in the case of officially licensed indicators of scientific competence and academic expertise. Among the most important are certifications from approved educational institutions such as diplomas, degrees, credentials and testimonials; public acknowledgement of the certified qualifications by official accreditation and authorisation; and membership or employment in professional institutions or in the public service. These indicators give us reason to believe not only that the experts in our society are competent and able, but also that, provided normal conditions apply, they are acting according to appropriate extrinsic and intrinsic incentives (Baurmann 2009).

Less precise, but also clearly recognisable, are the more informal criteria that identify political experts and analysts as “reliable”. Sometimes these will be the same criteria as in the academic case. Far more important in modern democratic societies are experts who are labelled as authorities by their membership in the professional media—television, radio or newspapers. However, that trust is conferred on them via their membership in the professional media presupposes in turn that trust is invested in these media. And at this level we can observe criteria for differentiating between respectable and dubious media in a society: the media we should trust must fulfil certain requirements to be taken seriously as a source of information—for example, official accreditation of a newspaper or a television channel, or a certain degree of coverage and circulation.

To ensure that the knowledge of political experts and analysts is made available to the public and can really contribute to the knowledge of the voters, it is necessary that a community has reliable rules and criteria to identify the trustworthy sources and authorities—and that the people believe that these rules and criteria are indeed reliable and credible! Pathological deviations from a “healthy equilibrium” in this respect are possible in different directions. A society can be endowed with competent and trustworthy authorities and reliable epistemic institutions while people do not trust them and do not believe in the validity of the social criteria that label them, trusting incompetent and unreliable sources instead—as is the case when members of a fundamentalist denomination believe in the truth of the creationism propagated by their religious leader. Or the official licences of a society could themselves be corrupted—and people could thereby be led to trust incompetent or otherwise defective “authorities”, as, for example, in authoritarian or dictatorial regimes that preach the absolute certitude of their ideology.
Therefore, the veritistic enterprise must pay attention to the conditions that promote a healthy “epistemic equilibrium”. To produce and circulate the knowledge that voters should have in a democracy, we need efficient epistemic institutions which compile knowledge and make it available; and we need recipients who trust these institutions and sources and believe in the reliability of the information they offer. What conditions are conducive to widespread trust in the “official” epistemic sources in a democracy? To answer this question one has to answer another: by what means do average citizens judge the quality of the prevailing social and institutional mechanisms for identifying (political) experts and authorities in a society? How do citizens become confident that these mechanisms really do indicate competence, reliability and trustworthiness?

We cannot deal with these crucial questions in detail here (Goldman 2001). But whatever strategies and possibilities are, in principle, available to voters in this respect, one thing seems clear: voters will base their judgement of the trustworthiness of political experts, epistemic authorities and the professional media, and of the reliability of the respective rules and criteria for identifying them, not only on their individual information and knowledge, but also on information and knowledge they receive from others (Baurmann 2007a). As users of the media, for example, we will often notice whether information provided by the media is true or not, and we will see differences in this respect between different kinds of newspapers or television channels. But we could hardly come to a well-founded judgement on the basis of our individual experience alone. So again we have to rely on collective knowledge.

Social trust

Political experts and the professional media are not the only sources of voters’ knowledge about the performance and trustworthiness of politicians. Another important source consists in the personal experiences and insights of fellow citizens with regard to political issues and politicians. Moreover, the testimony of fellow citizens will be important for individuals in assessing the trustworthiness of political experts and analysts as well as the reliability of the media and other institutions of information.
That means that the question of epistemic trust arises again: if the judgement and knowledge of fellow citizens are important for individuals, what is the basis for their trust in these sources? Again, we can uncover a number of rules which incorporate criteria for distinguishing those of our ordinary fellow citizens whom we should trust with regard to certain issues from those whom we should mistrust. These rules are highly context-dependent and cover a wide range of areas: from trivial everyday questions, through religious and social subjects, right up to the problem which is of interest here—namely, whom we should trust as witnesses of the achievements and failures of policies and politicians, political experts and the media (Fricker 1994). The criteria specified by these rules are not specific and clear-cut; they are informal and socially evolved. Nonetheless, they serve the function of allowing a prima facie judgement of epistemic reliability and credibility. These rules lay the foundations for social trust and thereby—among other things—determine the scope and nature of the collective knowledge from which an individual can benefit.

In this respect there exists a continuous range of possibilities between two extremes (Baurmann 1997). At one extreme, epistemic trustworthiness is attributed in a highly generalized form. Rules of such generalized social trust entail the presumption of epistemic trustworthiness as a default position: a recipient should assume that an informant conveys the truth unless there are special circumstances which defeat this presumption. Such generalized epistemic trust presupposes that the relevant sources have epistemic competence with regard to the topic in question and that there are no extrinsic or intrinsic incentives to withhold the truth from others. A trivial example: under normal circumstances we trust that people on the street will give correct answers when asked for the time of day or for directions to a desired destination. Similarly, in our societies most people tend to believe most of the putative facts promulgated by the mass media.

The other extreme consists in attributing epistemic trustworthiness in a highly particularistic way. Individuals adhere to particularistic trust if they trust only members of a clearly demarcated group and generally mistrust the members of all other groups. Under this condition, their epistemic sources will be restricted to people who share the distinctive features which separate them from the rest of the world and grant them membership in an exclusive group. Particularistic trust is supported by rules which are the mirror image of those which embody generalized trust: rules of generalized trust state that one should trust everybody unless exceptional circumstances obtain; rules which constitute particularistic trust state that one should mistrust everybody, with the exception of some specified cases.
Paradigmatic examples of particularistic trust can be found in closed sects, radical political parties, ostracized groups and oppressed minorities.

The availability and distribution of knowledge in a community depends critically on which form of social trust prevails. Generalized social trust in the epistemic sense enables people to utilise a huge reservoir of collective knowledge at low cost. People gain access to a large number of different sources, all of which can provide them with some information and insight. In the democratic context specifically, individual voters can benefit from the experience of a huge number of other people in very diverse contexts and can base their political judgements on a broad assemblage of facts and data. In a high-trust society the individual will get a lot of information and criticism by happenstance, and on the cheap.

Particularistic trust, in contrast, has very negative consequences from an epistemic point of view. It restricts individuals’ chances of acquiring a solid foundation for their opinion formation. The aggregated collective knowledge on which they could base their judgement of the trustworthiness of politicians and the credibility of epistemic authorities and other sources will be severely limited. And particularistic trust does not only limit the available knowledge: if the collective knowledge of a particular group contains selective information and one-sided world views, the systematic lack of alternative information and views will contribute not only to unjustified mistrust of trustworthy persons and institutions, but also to unjustified trust in untrustworthy and unreliable persons and institutions.

From a veritistic point of view, the prevalence of particularistic trust in a society is therefore a serious threat. In politics, it limits the amount of collective knowledge accessible to individual voters and thereby restricts their chances of gathering core voter knowledge; and it carries the risk that voters will adopt wrong or misleading information which motivates them to cast their votes for untrustworthy and/or incompetent politicians. Particularistic trust is associated with the fragmentation of a society—a feature that poses a danger for democracy on a number of fronts, and the epistemic dangers belong among them.

If we ask which factors determine the scope of social trust, we are again confronted with an iteration of our problem: the rules of social trust also embody a kind of knowledge which is hardly at the disposal of one individual alone. Without the experience of others, the assessment of the rules of social trust would be based on thin evidence.
As single individuals we cannot acquire sufficient information about the average competence of the members of our society, the incentives they face in different social contexts and situations, and the motivations and attitudes they normally possess. To form a reasoned opinion about whether or not I am justified in trusting my fellow citizens, I have to know relevant facts about the institutions and the social structure of my community, the ethnic and political composition of the population, possible conflicts between the values and interests of different sub-groups, and much more.

Personal trust

So far we have referred to the fact that individuals place trust in experts, institutions and their fellow citizens by applying socially shaped criteria and rules. But this does not mean that there are no situations in which people base their judgements on individual evaluation. If favourable conditions obtain in relationships with particular persons, individuals can rely on their own knowledge and experience to assess whether these persons are competent, what kind of extrinsic incentives affect their behaviour, and what character and dispositions they reveal. We can characterize cases in which we come to trust other persons on such an “individualized” basis as instances of personal trust. The epistemic base for this kind of personal trust lies mainly in the context of ongoing and close relationships—connections that produce a lot of information about other persons. But we can have reasoned opinions about the trustworthiness of certain persons even under less favourable conditions: even if there is no direct relationship with a person but otherwise a regular or intensive flow of information and impressions, I may be in a position to make good guesses at the abilities, the situation and the character of the informant.

Personal trust need not be reciprocal. I can deeply trust other persons without their even knowing me. I can be the ardent follower of a political or religious leader, or be convinced of the trustworthiness of a famous scientist, a foreign correspondent or a news anchor. This kind of “detached” personal trust can be well-founded if it is based on sufficient evidence, and even being instantly impressed by the charisma of a person is not per se misleading or irrational: we possess a certain intuitive ability to judge trustworthiness and personal integrity—at least to a certain degree (Frank 1992, Baurmann 1996, 409ff.).

The larger the number of individuals I trust personally, the broader the potential reservoir of independent information and knowledge I can draw from to judge the validity of social rules and criteria for the credibility and trustworthiness of people, institutions and authorities.
This judgement, too, involves reference to testimony to a large extent—but it is testimony from sources whose quality I can evaluate myself. Therefore, I can ascribe a high “trust-value” to the testified information. In these cases my trust is based not only on predetermined rules and their more or less reliable indicators of trustworthiness, but on my own, sometimes careful, individual assessment of persons and situations. Information which stems from personal confidants therefore often overrides the recommendations of social rules and criteria.

I will also be inclined to ascribe a comparably high trust-value to information that stems from sources whose trustworthiness has been ascertained not by myself, but by the testimony of people I personally trust. In this way it is possible to profit from a more or less widespread network of personal trust relations, linked together by people who trust each other personally and thus simultaneously function as mutual trust-intermediaries (Coleman 1990, 180ff.). Such trust-networks pool information and knowledge and make that knowledge available to the individual at low cost or even for free. They represent important instances of “social capital” (Baurmann 2008).

The efficiency of personal trust-networks as information pools is enhanced if the networks cross the borders of families, groups, communities, classes or races. The more widespread and the larger the trust-networks, the more diverse and detailed the information they aggregate. Particularistic networks that only connect people of a certain category, or which are very limited in their scope, are constantly in danger of producing misleading, partial and one-sided information. The chances of individuals deriving from their trust-networks the quality and quantity of information they need to form a realistic and balanced picture of their world are therefore largely dependent on the coverage their trust-networks provide.

Trust-networks can remain latent and silent about the established social criteria for epistemic credibility and authority for a long period. Their special importance becomes evident when, for example, under a despotic regime a general mistrust towards all official information prevails. But personal trust-networks also provide fall-back resources in well-ordered societies with usually highly generalized trust in the socially certified epistemic sources (Antony 2006). Under normal circumstances in our societies we consult books, read newspapers, listen to the news and pay attention to our experts and authorities if we want to learn something about the world.
And even when we develop mistrust towards some of those authorities or institutions, we normally do so because we hear contrary “facts” promulgated by other authorities or institutions. Nevertheless, the ultimate touchstone of my belief in testimony can only be my own judgement. And it makes an essential difference whether I can base this judgement only on my own very limited personal information, or whether I can rely on the information pool of a widely spread personal network which is independent of socially predetermined criteria for epistemic credibility and authority. Of course, I can myself check for internal consistency and general plausibility, and compare different kinds of sources with each other—but my assessment becomes much more reliable if I can base it on the collective knowledge of a group that aggregates a huge amount of information from different areas and contexts.

We can conclude that personal trust-networks provide individuals with a pool of independent information about the trustworthiness of other people, groups, institutions, specialists, experts and politicians. Thus they improve the basis for a critical assessment of the validity of the formal and informal criteria a society develops for differentiating between reliable and unreliable sources of information and knowledge. The rules which guide and determine our social trust and our confidence in authorities and experts can be scrutinized by utilising the collective experience and knowledge embodied in our personal trust-networks.

Given the important function of trust-networks as ultimate sources of reliable information and testimony, a systematic restriction of their scope and an arbitrary limitation of their membership have serious consequences for the quality of the collective knowledge they incorporate. Exclusive networks consisting only of people who belong to a special and limited group can create a vicious circle together with social rules that prescribe particularistic social trust, whereas widespread personal networks can support and strengthen generalized social trust and contribute to the validity of individual knowledge. Therefore, the chances that people will get reliable information from their personal networks will be all the greater, the more open and inclusive these networks are.

Summary

The knowledge a voter should have in a democracy is held in many different repositories. It is collective knowledge that is not immediately available to the individual user.
Therefore, the veritistic programme should not be restricted to the supply side of knowledge; it must also consider the demand side and analyse the conditions under which individual recipients will have an incentive and a chance to participate in existing collective knowledge. One important condition, as we have tried to show, is trust in the epistemic sources.

We have referred to three kinds of epistemic sources that contribute to voter knowledge: political experts and “public explainers”, as they are present mainly in the professional media; the anonymous group of fellow citizens; and the members of personal networks. Each source commands a body of aggregated collective knowledge which is potentially important for individual voters. To utilise these different sorts of collective knowledge, voters must place trust in the reliability of the sources: they must trust political experts and authorities, they must believe in the reliability of the professional institutions of communication and information, they must place social trust in their fellow citizens and personal trust in the people who form their social networks. We have pointed out that each form of trust is based on different conditions and poses different kinds of problems of verification. But most important is the phenomenon that the different forms of trust are not isolated from each other: they are mutually dependent and embedded in an intricate hierarchy, with complicated interrelations between its different levels.

Provided that a society is actually blessed with reliable institutions of public knowledge and with trustworthy political experts and citizens, an “optimal” veritistic situation would be one in which voters trust their institutions and experts on the basis of the given social rules and criteria, exhibit generalized social trust, and possess a widespread personal trust-network, so that they can utilise collective knowledge as much as possible. An efficient epistemic constellation can be endangered on the “demand side”—without any changes on the “supply side”—in different ways: trust in institutions and experts can weaken because people begin to question the official rules for the reliability of sources—for example, if green activists challenge the expertise of scientists with regard to environmental protection; generalized social trust can begin to particularize because people begin to mistrust certain groups of fellow citizens—for example, when the cultural homogeneity of a population dissolves; personal trust-networks can become more and more exclusive because people increasingly restrict their personal trust—for example, if new social or political conflicts arise. In all these cases the collective knowledge available to a person will diminish—with the growing risk that that knowledge becomes biased, selective and one-sided.
Processes of trust erosion are also multilayered and interrelated. Suppose that individuals were to limit their personal trust-networks to a particular group of people; that these people exhibited a particularistic social trust—one, moreover, that includes only people who mistrust the official experts and epistemic institutions; and that each sub-group were to insist exclusively on the credibility of “alternative” experts. Then trust in the sources of collective knowledge might well break down in a cascade. The shape and scope of personal trust-networks will often play a crucial role in such a process.

For the veritistic agenda, therefore, many things matter—not only institutions but also the informal social facts and processes that determine how the available knowledge of a community is adopted and accepted. In the case of democracy, it seems highly likely that voters will have access to relevant core knowledge—whatever is judged to be the specific content of that knowledge—only if there is trust in political experts and institutions, which, in turn, will prosper only if that trust is embedded in highly generalized social trust and in inclusive, far-reaching personal trust-networks.

BIBLIOGRAPHY

Antony, Louise 2006: “The Socialization of Epistemology”. In: Robert E. Goodin and Charles Tilly (eds.), The Oxford Handbook of Contextual Political Analysis. Oxford: Oxford University Press, 58–76.
Baurmann, Michael 1996: Der Markt der Tugend—Recht und Moral in der liberalen Gesellschaft. Tübingen: J.C.B. Mohr (Paul Siebeck). (English translation: The Market of Virtue. Morality and Commitment in a Liberal Society. The Hague: Springer 2002.)
— 1997: “Universalisierung und Partikularisierung der Moral. Ein individualistisches Erklärungsmodell”. In: Rainer Hegselmann and Hartmut Kliemt (eds.), Moral und Interesse. München: Oldenbourg, 65–110.
— 2007a: “Rational Fundamentalism? An Explanatory Model of Fundamentalist Beliefs”. Episteme. Journal of Social Epistemology 4, 150–166.
— 2007b: “Markt und soziales Kapital: Making Democracy Work”. Politisches Denken. Jahrbuch 2006/2007. Politik und Ökonomie, 129–155.
— 2008: “Political Norms, Markets and Social Capital”. In: Jörg Kühnelt (ed.), Political Legitimization without Morality. Wien/New York: Springer, 161–180.
— 2009: “Fundamentalism and Epistemic Authority”. In: A. Aarnio (ed.), Varieties of Fundamentalism. The Tampere Club Series, Volume 3. Tampere, in press.
Brennan, Geoffrey and Alan Hamlin 2000: Democratic Devices and Desires. Cambridge: Cambridge University Press.
Brennan, Geoffrey and Loren Lomasky 1993: Democracy and Decision. Cambridge: Cambridge University Press.
Coady, C. A. J. 1992: Testimony. Oxford: Oxford University Press.
Coleman, James S. 1990: Foundations of Social Theory. Cambridge/London: Harvard University Press.
Fishkin, James 1991: Democracy and Deliberation: New Directions for Democratic Reform. New Haven: Yale University Press.
Frank, Robert H. 1992: Passions Within Reason. The Strategic Role of the Emotions. New York/London: W. W. Norton & Co.
Fricker, Elizabeth 1994: “Against Gullibility”. In: Bimal K. Matilal and A. Chakrabarti (eds.), Knowing from Words. Dordrecht: Springer, 125–161.
— 1998: “Rational Authority and Social Power: Towards a Truly Social Epistemology”. Proceedings of the Aristotelian Society 98, 159–177.
Goldman, Alvin I. 1999: Knowledge in a Social World. Oxford: Oxford University Press.
— 2001: “Experts: Which Ones Should You Trust?”. Philosophy and Phenomenological Research LXIII, 85–110.
Govier, Trudy 1997: Social Trust and Human Communities. Montreal & Kingston: McGill-Queen’s University Press.
Hardwig, John 1991: “The Role of Trust in Knowledge”. The Journal of Philosophy LXXXVIII, 693–708.
Lahno, Bernd 2002: Der Begriff des Vertrauens. Paderborn: mentis.
Leigh, Andrew and Tirta Susilo 2008: “Is Voting Skin Deep?” CEPR Research Paper RSSS ANU No. 583.
Manor, Ruth 1995: “My Knowledge, Our Knowledge, and Appeals to Authority”. Logique & Analyse 38, 191–207.
Matilal, Bimal K. and A. Chakrabarti (eds.) 1994: Knowing from Words. Dordrecht: Springer.
Schmitt, Frederick F. (ed.) 1994: Socializing Epistemology. The Social Dimensions of Knowledge. Lanham: Rowman & Littlefield Publishers.


Grazer Philosophische Studien 79 (2009), 187–205.

EXPERTS: WHAT THEY ARE AND HOW WE RECOGNIZE THEM—
A DISCUSSION OF ALVIN GOLDMAN’S VIEWS

Oliver R. SCHOLZ
Westfälische Wilhelms-Universität Münster

Summary

What are experts? Are there only experts in a subjective sense or are there also experts in an objective sense? And how, if at all, may non-experts recognize experts in an objective sense? In this paper, I approach these important questions by discussing Alvin I. Goldman’s thoughts about how to define objective epistemic authority and about how non-experts are able to identify experts. I argue that a multiple epistemic desiderata approach is superior to Goldman’s purely veritistic approach.

1. Introduction

What are experts? Are there only experts in a subjective or recognitional sense or are there also experts in an objective sense? And how, if at all, may non-experts recognize experts in an objective sense?1 In this paper, I approach these questions by discussing Alvin I. Goldman’s thoughts about expertise. I focus on two important contributions of his: (1) the section on Recognizing authority in Chapter 8 of Knowledge in a Social World (1999)2 and (2) the more recent essay “Experts: Which Ones Should You Trust?” (2001).3

1. I gratefully acknowledge support by the Deutsche Forschungsgemeinschaft (Project Grant no. Scho 401/4-2: “A Case Study in Applied Epistemology”). For valuable comments and advice, I am grateful to Alvin I. Goldman, Gerhard Schurz, Michael Baurmann, Thomas Bartelborth and Richard Schantz.
2. Goldman’s book Knowledge in a Social World (1999) is an essay in social epistemology with applications in various fields. Although Chapter 8, Section 12 contains the main treatment of expert recognition, KSW has several passages on related topics: pp. 123ff. (on estimating testimonial likelihoods); 150f. (on arguments from authority); 174–182 (on peer review); 226–229 (on authority in science); 305–311 (on expert witnesses in litigation); 334f. (on political authorities); 363–367 (on the student-teacher relationship) and 367–372 (on textbooks and other curricular materials). Cf. also the early discussion of the “problem of expert identifiability” in Goldman 1991, Section VIII (128ff.).
3. “Experts: Which Ones Should You Trust?” (2001) is an essay in applied social epistemology. It has already been reprinted twice, namely in Goldman 2002a (139–163) and in Selinger/Crease (eds.) 2006 (14–38). Section 8, Identifying the Experts (16–19), of Goldman 2006 contains a review of the highlights of Goldman 2001.

The phenomenon of expertise is a special case of the general social phenomenon that each of us depends on other persons—that we do not severally suffice for our own needs, but each of us needs many things, especially many other persons as helpers, as Plato put it (Republic 369b6–7). Here, we are concerned with epistemic dependence. Besides symmetric forms of epistemic dependence4, there are asymmetric forms of epistemic dependence: relationships of layperson to expert (e.g. patient–doctor), or novice to expert (e.g. pupil–teacher; student–professor). No one can deny that we all depend in countless ways on experts. The ubiquity of asymmetric epistemic dependence is especially obvious in modern societies with a high degree of division of labour and specialization, but it has been part of the human condition at least since antiquity. Thus, it should come as no surprise that the nature and value of expertise is a major theme already in Socrates’ teaching and in Plato’s dialogues (cf. Gentzler 1995, LaBarge 1997 and 2005).

4. E.g.: When I want to know how you are, I have to ask you; if you want to know how I am, you have to ask me.

Our dependence on experts raises epistemological, ethical and political questions (cf. Selinger/Crease (eds.) 2006, 4f.). Like Goldman, I will focus on the epistemological concerns. From this point of view, the topic of expertise is closely related to the topic of testimony. Indeed, the problem of identifying and evaluating expert testimony is a special case of the general epistemological problem of testimony and trust—to be sure, a very acute case (cf. Goldman 2001, 85–89; idem 2002, x). As should be clear, our topic is not only of theoretical interest; it is of urgent practical importance.


2. What experts are

2.1 Introduction

In the 60s and 70s, the renowned Polish logician Joseph M. Bocheński developed the fundamentals of a logic of authority.5 We may approach our topic “expertise”, which is closely related to epistemic authority, by considering some of its theorems. Quite obviously:

(1.1) Authority is a relation.

A bit more precisely:

(1.2) Authority is a triadic relation obtaining between a person x who has authority (the bearer of authority), another person y for whom x is an authority (the recognizing subject) and a domain of authority (the field).

Accordingly, the general formal structure of authority is this:

(A) x is an authority for y in the domain D. For short: A(x,y,D).

5. In an Appendix to The Logic of Religion (1965), Bocheński first presented a brief introduction to the logic of authority that he later expanded in a paper “An Analysis of Authority” (1974) and finally in a little book with the title Was ist Autorität? [What is Authority?] (1974). Cf. also (Menne 1969).

The important restriction of authority to a certain field or domain D has been emphasized early on. We find it already in Plato: an expert must have a grasp of a well-defined subject matter or art (cf. LaBarge 2005, p. 16). In the tradition of Aristotelian and Ciceronian Topics, the so-called “locus ab auctoritate” was associated with the maxim “unicuique experto in sua scientia credendum est” (Petrus Hispanus 1972, 76 [my emphasis]), i.e.: any expert is to be believed in his science.6 Joachim Jungius (1587–1657) lists among his four maxims for the correct use of testimony: “I. Probato artifici in sua arte credendum est […]” (The proven expert is to be believed in his art.) This formula is explained as follows: “hoc est, Peritis in quaque arte, scientia, professione, praesertim si consentiant, fides habenda est” (Jungius 1957, 339). By inserting the clause “praesertim si consentiant” (especially when they agree with each other), Jungius alludes to the possibility of rival expert judgments; but the problem is not pursued any further.

6. For additional references cf. (Schröder 2001, 45–48, 126–129).
Bocheński suggests the following definition of “x is an authority for y in domain D”:

(1.3) x is an authority for y in domain D if and only if y, in principle, accepts everything that is asserted to him by x in domain D.

This strikes me as a very strong sense of “authority”. More importantly, it is a definition of “authority” in a reputational or subjective sense. As we shall see below, we also need a concept of authority in an objective sense.

Of fundamental importance is a distinction emphasized by Bocheński between two major kinds of authority: (a) epistemic authority7 and (b) deontic authority. To the domain of epistemic authority belong classes of statements; to the domain of deontic authority belong classes of imperatives. In what follows, we will deal with experts in the sense of epistemic authorities.

7. In KSW, Goldman used the general term “authority”, but it is quite clear that he was dealing with epistemic authority. In (Goldman 2001), he has made explicit that he is focussing on “cognitive or intellectual experts” and “expertise in the cognitive sense” (Goldman 2001, 91). He contrasts “cognitive expertise” with “skill expertise”, but does not mention “deontic authority (or expertise)”. Bruce D. Weinstein distinguishes between “performative expertise” (i.e. an ability “to perform a skill well”) and “epistemic expertise” (i.e. an ability to offer “strong justifications for a range of propositions in a domain”) (see especially Weinstein 1993, 58ff.). The most fine-grained analysis of authority I know of is to be found in the works of Richard T. De George (1985). (De George 1970, idem 1976) and (De George 1985, Chapter 3) focus on epistemic authority.

But let us first return to Bocheński’s definition:

(1.3) x is an authority for y in domain D if and only if y, in principle, accepts everything that is asserted to him by x in domain D.

As mentioned above, what Bocheński has defined here is “authority” in a subjective sense. In addition to this concept of subjective authority, we need for many purposes a concept of objective authority. This is especially true when we are dealing with the question of whether and how a non-expert is able to identify or recognize an expert. It should be clear that the interesting problem is not whether and how a non-expert is able to know whether someone is taken to be an authority by another person (typically, there are rather good sociological indicators for this), but whether and how a non-expert is able to know whether someone is an expert in an objective sense.


2.2 A note on terminology

Whereas Goldman and many others tend to use the terms "non-expert", "layperson" and "novice" interchangeably, I want to suggest the following terminological conventions: Let's agree to use "non-expert" or "layperson" (relative to a domain D) as neutral generic terms for all forms of non-expertise. In addition, we may use "ignoramus" (relative to D) for a non-expert in D doomed to stay a non-expert in this domain, typically without an ambition or prospect of becoming an expert in D at some future time, and, by contrast, "novice" (relative to D) for someone who is at present a non-expert in D but has a prospect of becoming an expert in this domain at some future time. Thus, e.g., a student of archaeology is a novice relative to this domain whereas I am an ignoramus in this field.

2.3 Goldman's definitions

In Knowledge in a Social World, Alvin I. Goldman suggested the following explication of objective authority: (Official Def. KSW) "Person A is an authority in subject S if and only if A knows more propositions in S, or has a higher degree of knowledge of propositions in S, than almost anybody else." (Goldman 1999, 268)8 To be sure, in KSW he already allowed for an expansion to cover potential knowledge, but he did not build it into his official definition.9 In "Experts: Which Ones Should You Trust?" (2001), he partly supplemented, partly modified his earlier proposal. As he now emphasizes:

8. Cf. "[…] experts in a given domain […] have more beliefs (or high degrees of beliefs) in true propositions and/or fewer beliefs in false propositions within that domain than most people do (or better: than the vast majority of people do)." (Goldman 2001, 91.)
9. In his early discussion of the "problem of expert identifiability" in (Goldman 1991), he suggested the following definition: "[…] let us define an expert as someone who either (1) has true answers to core questions in the domain (i.e., believes or assigns high probability to these answers), or (2) has the capacity (usually through the possession of learned methods) to acquire true answers to core questions when they arise." (ibid. 129).


"Expertise is not all a matter of possessing accurate information. It includes a capacity or disposition to deploy or exploit this fund of information to form beliefs in true answers to new questions that may be posed in the domain. This arises from some set of skills or techniques that constitute part of what it is to be an expert." (Goldman 2001, 91) Borrowing a different but well-established terminology from the Cognitive Sciences, we might say that expertise is a productive and systematic capacity, projectible to new cases of application. Accordingly, in 2001, Goldman offers a different official definition of objective expertise: (Official Def. EWOSYT) "[…] an expert […] in domain D is someone who possesses an extensive fund of knowledge (true belief) and a set of skills or methods for apt and successful deployment of this knowledge to new questions in the domain." (Goldman 2001, 92.) If we draw together the different threads in these quotations, we get something like the following definition of objective expertise: (Expertobj-Truth) Person A is an expert in domain D if and only if: (1) A has considerably more beliefs in true propositions and/or fewer beliefs in false propositions within D than the vast majority of people do; and (2) A possesses a set of skills and10 methods for apt and successful deployment of this knowledge to new questions in D.

2.4 Comments on Goldman's definitions

There are several prima facie difficulties associated with (Expertobj-Truth). Let me begin with the issue of vagueness: The definition contains a lot of vague predicates. This need not be a real difficulty because "being an expert" is itself a vague predicate, so that Goldman may fairly contend that his definiens accurately reflects the vagueness of the definiendum. More serious difficulties might be involved in Goldman's veritistic approach, which manifests itself in his definitions of objective expertise. Goldman's explications are intended to be purely veritistic, i.e. they are orientated towards one single epistemic desideratum, namely truth. (To be sure, Goldman intends to include a second desideratum: the avoidance of errors, i.e. false beliefs. See below.)

10. My "and" seems more adequate than Goldman's "or" (cf. Goldman 2001, 92).
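Before turning to the difficulties, it may help to fix the schematic form of this definition. The notation is mine, not Goldman's: let T(A, D) and F(A, D) be the numbers of A's true and false beliefs within D, let "most" stand for the vast majority of people, and let Deploy(A, D) abbreviate clause (2). Then:

(Expertobj-Truth, schematically) Expert(A, D) if and only if [T(A, D) ≫ T(most, D) and/or F(A, D) ≪ F(most, D)] and Deploy(A, D).

Note that the definition leaves open how the two comparative clauses are to be weighed against each other when they pull in different directions; this will become relevant immediately below.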


To begin with, let me emphasize that I am in great sympathy with Goldman's attacks on veriphobia and epistemic relativism and, particularly, in sympathy with his objectivism about expertise. Nevertheless, I think that his myopic concentration on truth may be unduly restrictive and, in some respects, even materially inadequate. In this vein, I want to bring up the following points for discussion: (1) The fundamental problem is this: Whereas the non-expert may only have a few general and coarse opinions about D, the expert will typically entertain thousands of highly special and sophisticated beliefs about D. (Think, e.g., of the early days of astronomy.) Thus, the expert runs a much greater risk of entertaining false beliefs than the layperson does. Consequently, the following situation may easily arise: A non-expert may have more true and fewer false beliefs about D than the expert. Or, at least, it might happen that a non-expert may have fewer false beliefs about D than the expert. (2) The addition of "a set of skills and methods for apt and successful deployment of this knowledge to new questions in D" seems apt; but then a question arises: Is the resulting explication still purely veritistic? It seems that Goldman has tacitly introduced an additional epistemic desideratum. (3) This brings me to my overall suggestion: In a full-fledged theory of expertise, all epistemic values and desiderata should be taken into account: besides truth and other truth-conducive desiderata (such as justification or rationality), especially features of systems of beliefs such as coherence and understanding (cf. Alston 2005). This is not the place to work out a full-fledged theory of expertise; a book would be needed to fulfil this task. At least to illustrate my proposal, I ask you to consider the following partial definitions of objective expertise: (Expertobj-Justification) Person A is an expert in domain D if: (1) A has considerably more justified beliefs and/or fewer unjustified beliefs within D than the vast majority of people do; and (2) A possesses a set of skills and methods for apt and successful deployment of these justified beliefs to new questions in D. (Expertobj-Coherence) Person A is an expert in domain D if: (1) A has a more coherent set of beliefs within D than the vast majority of people do and (2) A's beliefs within D cohere better with the rest of his beliefs than is the case with the vast majority of people. (Expertobj-Understanding) Person A is an expert in domain D if: A has a considerably better understanding of domain D than the vast majority of people do.
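Schematically, and merely as a first approximation of my own, the multiple desiderata approach generalizes the pattern given above. Let d1, …, dn range over the epistemic desiderata (truth, justification, coherence, understanding, …), measured within D, and let w1, …, wn be their respective weights:

(Expertobj-Multi) Person A is an expert in domain D if: w1·d1(A, D) + … + wn·dn(A, D) ≫ w1·d1(most, D) + … + wn·dn(most, D), and A possesses the skills and methods for apt deployment of these epistemic goods to new questions in D.

Which desiderata to admit, how to measure them, and how to weigh them against each other is, of course, precisely what a full-fledged theory of expertise would have to settle.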


Of course, it remains to be spelled out in detail which factors contribute to a belief set's being justified or coherent and to the understanding of a specified domain. Nevertheless, it seems clear that a multiple desiderata approach along these lines promises to overcome the indicated difficulties in Goldman's purely veritistic approach.

3. How we are able to recognize experts

3.1 Goldman's approach in Knowledge in a Social World (1999)

In KSW, Goldman characterizes the pertinent epistemological problem by way of the following question: (Q-KSW) "How is a novice to identify an authority?" (KSW 267)11 In accordance with the terminological conventions introduced above, I prefer the following more neutral and thus more general formulation of the problem: (Q*-KSW) How is a non-expert to identify an expert? Since asking other authorities would only push the problem one step further, the root problem is: "the problem of direct identification of an authority by a novice" (Goldman 1999, 268). Goldman gives us a schematic example: "Consider a judge, J, who wishes to determine (directly) whether a candidate authority, A, really is an authority in a given subject. Assume that judge J himself is not an authority, but wishes to determine whether A is. If A is an authority, he will know things—perhaps many things—that J does not know. If J himself does not know them, however, how can he tell that A does? Suppose A claims to know proposition P, and J does not (antecedently) believe P.

11. The problem whether and how laypersons are able to recognize experts was already discussed by Plato (Charmides 170d–171; cf. Gentzler 1995, LaBarge 1997 and 2005) and by Aristotle, who asked: How does one recognize the moral expert, the phronimos? (cf. Biondi Khan 2005). The ancients, however, did not go very far in analyzing, much less in solving the problem. St. Augustine, in his De utilitate credendi, called it a "difficillima quaestio", i.e., a very difficult question, which he put in the memorable form: "For how will we fools be able to find a wise man?" (De util. cred. 28; Augustine 1947, 429.)


Should J credit A with knowledge of P? If J does not already believe P, there are two possibilities (restricting ourselves to categorical belief rather than degrees of belief): either J believes not-P, or he has no opinion. In either case, why would J credit A with knowing P? If J believes not-P, he should certainly not credit A with knowing P. If he has no opinion about P, he could credit A with knowledge on the basis of pure faith, but this would hardly count as determining that A knows that P. Although he could place blind trust in A, this would not be a reliable way of determining authority. Moreover, he cannot follow this policy systematically. If two people, A and A', each claims to be an authority, and A claims to know P, while A' claims to know not-P, how is J to decide between them when he himself has no opinion?" (Goldman 1999, 268) Several problems should be distinguished here: (The Non-Expert/Expert Problem) How can a non-expert know that someone is an expert in a given domain D? And: How can a non-expert know how good an epistemic authority someone is? (The Non-Expert/2 Experts Problem) How (if at all) can a non-expert justifiably choose one putative expert E1 in the domain as more credible than another putative expert E2? And what, if anything, might be the epistemic reasons for such a choice? It should be obvious that each of us is continually confronted with these problems. They are often described as though they were unsolvable in principle. Thus, we read in Augustine: "But when the fool tries to find out who that one is [= the wise man; ORS], I do not at all see how he can clearly distinguish and know him. […] Accordingly as long as anyone is a fool, he cannot be completely sure of finding a wise man through whom, if he obey him, he can be freed from so grievous an evil as foolishness." (De util. cred. 28; Augustine 1947, 429f.) In more recent times, the situation of the layperson has likewise been characterized as hopeless: "[…] if I am not in a position to know what the expert's good reasons for believing that p are and why these are good reasons, what stance should I take in relation to the expert? If I do not know these things, I am in no position to determine whether the person really is an expert. By asking the right questions, I might be able to spot a few quacks, phonies, or incompetents, but only the more obvious ones. For example, I may suspect that my doctor is incompetent, but generally I would have to know what doctors know in order to confirm or dispel my suspicion.


Thus, we must face the implications of the fact that laymen do not fully understand what constitutes good reasons in the domain of expert opinion." (Hardwig 1985, 340f.) The special difficulty of the Non-Expert/Expert Problem becomes perspicuous when we compare it to a different problem (cf. Kitcher 1992, 249f.; idem 1993, 314–322; Goldman 2001, 89f.): (The Expert/Expert Problem) How can an expert E1 in domain D assess the epistemic authority or trustworthiness of another expert E2 in D? In this case a direct calibration is possible: One expert uses his own expert opinions about D—somewhat in the manner of a measuring instrument—to evaluate a target expert's degree of epistemic authority in D. In the Non-Expert/Expert Scenario (especially the Non-Expert/2 Experts Scenario) a direct calibration along these lines is not possible. The layperson either has no beliefs about D or else he does not have enough confidence in them to use them in the process of the evaluation of the expert beliefs. Thus, many are inclined to give a skeptical answer to our problems: (Skepticism about the Non-Expert/Expert Problem) For a non-expert (in D), it is not possible to know who is an expert (in D). (Skepticism about the Non-Expert/2 Experts Problem) For a non-expert (in D), it is not possible to know which of two conflicting experts (in D), E1 and E2, is to be trusted. To be sure, the consequences of such skepticism would be quite dramatic and unpalatable: One would have to renounce the rational use of a vital source of knowledge and justification. We would be left with blind trust or equally blind distrust.12 Thus, there is ample reason to look for a more optimistic assessment: (Moderate Epistemic Optimism concerning the Non-Expert/Expert Problem) For a non-expert (in D), it is, in principle, possible to know who is an expert (in D). (Moderate Epistemic Optimism concerning the Non-Expert/2 Experts Problem) For a non-expert (in D), it is, in principle, possible to know which of two conflicting experts (in D), E1 and E2, is to be trusted.

12. John Hardwig sometimes uses expressions like "blind trust" (cf. Hardwig 1991, 693, 699).
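Before we turn to Goldman's response, it is worth making the logical shape of the opposition explicit (the quantificational glosses are mine):

(Skepticism, general form) For every non-expert N (in D) and every scenario S: it is not possible in S for N to know who is an expert in D.
(Moderate Optimism, general form) There is at least one realizable scenario S such that it is possible in S for a non-expert N to know who is an expert in D.

So construed, the optimist's burden is merely existential: exhibiting a single realizable scenario of direct expert identification suffices to refute the skeptical thesis. This is exactly the strategy Goldman pursues with the scenarios discussed in what follows.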


In KSW, Goldman approaches the problem by giving it a temporal dimension: "The first crucial step in solving the problem is to give it a temporal dimension.13 Although J cannot credit A with knowing something now that J does not know now, he can certainly credit A with knowing things before he, J, knew them." (Goldman 1999, 268) Against this background, he proceeds to ask: (Q**-KSW) In which scenarios is an empirical determination of authority possible for the non-expert? In order to show that expert identification by non-experts is, in principle, possible, i.e. that skepticism can be avoided, Goldman lists four simple scenarios (Goldman 1999, 269f.; cf. idem 1991, 129ff.). The first two are ones in which J is able to observationally verify the putative expert's verdict. (S 1) Goldman's first example is taken from the domain of American geography: "[…] A might assert at time t that St. Paul is the capital of Minnesota, a proposition which J, at t, either denies or doubts." (Goldman 1999, 269) J may verify the truth of the proposition by either travelling to Minnesota or by consulting an incontestably reliable atlas or encyclopedia. When he, at the later time t', also knows that St. Paul is the capital of Minnesota, he is in a position to concede that A knew the proposition before him. (S 2) The second class of scenarios has to do with propositions about the repair or treatment of malfunctioning systems, such as machines, computers, gadgets, economies or organisms (Goldman 1999, 269; cf. idem 1991, 130). Let's call them system-repair scenarios. Here the personal verification is a slightly more complex matter (J has to check the system before the treatment, the application of the treatment suggested by A, and the state of the system after the treatment), but the general procedure is the same. (S 3) The third scenario is one in which J obtains argumentation-based evidence of A's epistemic authority:

13. Plato already hinted at the importance of the temporal dimension for the assessment of expertise. Cf. the so-called "argument from the future" in Plato's Theaetetus (177c6–179b9), which is added to the "argument from experts" (170a6–c1; cf. Cratylus 386c2–d2 and Euthydemus 285c9–287b1) directed against the truth relativism of Protagoras (cf. Puster 1993). Aristotle, who endorsed Plato's "argument from the future", provides a summary of the basic idea: "And again with regard to the future, as Plato says, surely the opinion of the physician [i.e., the expert; ORS] and that of the ignorant man [i.e., the layperson; ORS] are not equally weighty, for instance, on the question whether a man will get well or not." (Metaphysics 1010b11–14)


Roughly speaking, while arguing with A, J can come to know A's superior authority by being persuaded by A on many occasions of the truth of propositions in D (cf. Goldman 1999, 269). (S 4) In the fourth scenario J, finally, faces the task of "comparing the relative authority of two competing candidates, each of whom might be more knowledgeable than J is" (KSW 270). This is the Non-Expert/2 Experts Problem that is much more thoroughly explored in Goldman's essay "Experts: Which Ones Should You Trust?" (2001), to which we now turn.

3.2 Goldman's approach in "Experts: Which Ones Should You Trust?" (2001)

As its title indicates, in this essay Goldman focuses on the Non-Expert/2 Experts Problem right from the beginning and asks, accordingly: (Q-EWOSYT) "Can novices, while remaining novices, make justified judgments about the relative credibility of rival experts? When and how is this possible?" (Goldman 2001, 89) Prima facie, Goldman uses a somewhat different approach in this article. Instead of listing exemplary scenarios in which the identification of experts seems plausibly possible, he proceeds by asking the fundamental question: (Q*-EWOSYT) Which sources of evidence might a non-expert have and use in a Non-Expert/2 Experts Situation? Goldman comments on five possible sources of evidence (and on the associated strategies for exploiting them): (A) Arguments presented by the contending experts to support their own views and criticize their rivals' views. There is a whole spectrum of cases. Some experts only state their opinions without adducing any evidence. Some experts do publish their reasons in journals or at professional conferences; but even in this form, mostly, they will not reach the layperson. In the normal case, the non-expert will not even become acquainted with these publications; and where he occasionally does, he probably could not understand them. At best, he gets a superficial second-hand representation of the evidence and arguments in the popular media, in which much is left out or oversimplified.


This is the rule; but of course it is at least possible to obtain better evidence from scenarios of type (A): The layperson may witness a full-scale debate between the experts or read a detailed documentation of it. But, quite obviously, this is an area where our practices, institutions and media could be considerably improved. Within the experts' discourse, it is important to distinguish between esoteric and exoteric claims (Goldman 2001, 94, 106f.). Epistemically esoteric statements belong to the relevant domain of expertise; their epistemic status (truth value; degree of justification; etc.) is inaccessible to the non-expert. (Statements the layperson does not even understand may be termed "semantically esoteric statements".) In addition, non-experts are commonly unable to assess the inferential relations (and hence the strength of support) between the adduced evidence and the suggested conclusion. Epistemically exoteric statements, by contrast, are located outside the domain of expertise; their epistemic status may well be accessible to the non-expert—either at the time of their assertion or at least at a later time. Thus, the temporal dimension, emphasized already in KSW, again becomes important. Needless to say, the esoteric statements are the ones that make trouble for the non-expert. He would have to—but normally cannot—(1) understand the expert statements, (2) assess their epistemic status, and (3) assess the support relations between the adduced evidence and the suggested conclusion. A direct argumentative justification would be one where someone becomes justified in believing the argument's conclusion by justifiedly believing the premisses and their (strong positive) support relation to the conclusion. Since in a Non-Expert/2 Experts Situation there will be many esoteric statements involved, an expert's argument will often fail to produce direct argumentative justification in a hearer. Fortunately, there is the possibility of indirect justification by arguments. One speaker in a debate may display what may be called "dialectical superiority" over the other. This superiority may be a plausible indicator for the non-expert of greater expertise on one side of the debate. From the observed performances of the putative experts the non-expert may reason—via an inference to the best explanation—to a conclusion as to their respective levels of expertise. After that he may make a second inference from greater expertise to a higher probability of accepting a justified and true proposition. There are many plausible indicators of expertise that are accessible to the non-expert:


dialectical superiority, self-assured demeanor, the comparative quickness and smoothness of the responses, etc. To be sure, some of these indicators are problematic, i.e. possibly unreliable.14 For example, demeanor can be trained where expertise is missing.15 (B) Agreement from additional putative experts on one side or other of the subject in question. (C) Appraisals by "meta-experts" of the experts' expertise. Since (B) and (C) both have to do with the appeal to further experts, these sources may be taken together. Route (B) invites the non-expert to consider whether other experts agree with E1 or with E2. Under (C) the non-expert should seek evidence by consulting the assessments of meta-experts, i.e. third parties who have their own expertise. The meta-experts do not directly evaluate propositions from the domain D of expertise, but they judge the respective level of expertise of E1 and E2. Ratings and credentials—such as academic degrees, professional accreditations, etc.—can be viewed as special cases of source (C). The most important point Goldman makes concerning the agreement-based evidence of types (B) and (C) is a warning: A greater number of people agreeing with a given expert opinion is not always a reliable indicator of the truth or degree of justification of that opinion. Put differently, there are scenarios where the amount of consensus does not tell us anything about the epistemic value of the expert statement in question. Goldman mentions two pertinent examples: (1) the case of a guru with slavish followers; (2) the case of rumors (Goldman 2001, 98ff.). As he shows in some detail, in these types of case, trustworthiness does not increase with the number of witnesses. (D) Evidence of the experts' interests and biases vis-à-vis the question at issue. When the non-expert has good evidence that the assertions of one expert E1 are influenced by interests and biases, and no evidence for such bias in a rival expert E2, then he is justified in placing greater trust in the unbiased expert.

14. The role of demeanor was, of course, recognized in the tradition of rhetoric. Hume, e.g., notes: "We entertain a suspicion concerning any matter of fact, when the witnesses […] deliver their testimony with hesitation or, on the contrary, with too violent asseverations." (Hume 1975, 112f.; cf. Brewer 1998, 1622)
15. In times when there is a market for demeanor, it becomes an especially untrustworthy guide, as Scott Brewer and Alvin I. Goldman have emphasized (see Brewer 1998, 1622ff. and Goldman 2001, 95f.).


As Goldman notes, this advice "comes directly from common sense and experience." (Goldman 2001, 104) The distorting effects of, e.g., economic interests are familiar to all of us. Fortunately, information bearing on an expert's relevant interests belongs to the pieces of information that are potentially accessible to a non-expert. Admittedly, the relevant investigations might end in a stalemate when it transpires that the rival experts are both biased, and both to the same extent. Of greater significance—and potentially more dangerous—is a bias that infects a whole discipline or community. If all or most members of a given research group are infected by the same bias, the layperson will have a hard time assessing the epistemic value of additional testimonies from other experts or meta-experts (Goldman 2001, 107). (E) Evidence of the experts' past "track-records". A very important and, in favorable circumstances, possibly decisive source of evidence relevant to the assessment of conflicting expert statements is the evidence about the putative experts' past track records of epistemic success. At first glance, the problem may, again, seem hopeless: How can a non-expert assess past track records of experts? A non-expert typically will have no opinions about D, or anyway not enough confidence in them. How, then, can he have any relevant beliefs about past answers in D by which to judge the putative experts' degree of expertise? At this point, the distinction between esoteric and exoteric statements is once again relevant. It is important to see that this is not a categorical distinction such that each statement is either esoteric or exoteric. Instead, the epistemic status of a given statement may change from one time to another since the epistemic standpoint can change from one time to another. An example may illustrate the point: Consider the statement "There will be a total eclipse of the sun on March 21, 2010 in Central Europe". At the present time, this is an epistemically esoteric statement for most of us. Astronomical laypersons—in contrast to experts in this domain—cannot tell now whether the statement is true or false. Things change when the day has come. On March 21, 2010, ordinary people in Central Europe will be able to tell effortlessly whether there is a total eclipse of the sun or not. In this new epistemic position, the epistemic status will have changed: the question now is an exoteric one. (To tell you the truth: When the day has come, you won't see a total eclipse, only my 50th birthday. Please remember and send me a postcard!)


Of course, there are many examples of this kind. Expert statements often are or imply predictions. These may, but need not, be theoretical statements of the form: "At future time t, event e will occur." No less important are practical statements of the kind: "If you apply treatment X to system Y, it will immediately return to proper functioning" or "If you take medicine X, you will recover within one week". Before the treatment, such a statement is esoteric; after the treatment, it is exoteric and can easily be assessed by a non-expert. As we have seen, it is, in principle, possible for a non-expert to assess past expert statements retrospectively and, thus, to make use of the past "track records" of experts. This possibility of directly determining the degree of expertise of some experts makes it possible to draw plausible inductive or abductive inferences about considerably wider classes of experts. If, e.g., a non-expert N has good evidence that E1 has substantial expertise in domain D, and E1 is known to have trained persons C1 to Cn in the skills and methods relevant to D for many years, then N has plausible inductive grounds to expect C1 to Cn themselves to have obtained expertise in D to a significant degree. We are now in a position to evaluate the pessimistic stance of Saint Augustine, John Hardwig and others. As you might remember, Hardwig wrote: "[…] if I am not in a position to know what the expert's good reasons for believing that p are and why these are good reasons, what stance should I take in relation to the expert? If I do not know these things, I am in no position to determine whether the person really is an expert." (Hardwig 1985, 340) Hardwig is right when he says that a non-expert N will typically lack the expert's reasons for believing, say, p. He is wrong when he concludes: "If I do not know these things, I am in no position to determine whether the person really is an expert." N may have access to reliable indicators that the candidate expert has good reasons for believing p; and N may have access to reliable indicators that one expert has better reasons for believing his conclusion than his rival has for his. Just as it is possible to know that a given instrument is reliable without understanding how it works, it is possible to know that someone is an expert without understanding how he possesses and applies his expertise (Goldman 1999, 270).
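The inductive step from verified track records to expertise can also be given a simple probabilistic gloss. The following Bayesian sketch is mine, not Goldman's, and it idealizes heavily by assuming independent and equally diagnostic test cases: let H be the hypothesis that candidate E is an expert in D, and let e1, …, en be n past statements by E in D that have meanwhile become exoterically checkable and have all turned out true. If experts are more likely than non-experts to get such questions right, i.e. P(ei | H) > P(ei | not-H), then by Bayes' theorem

P(H | e1 & … & en) = P(H)·P(e1 | H)·…·P(en | H) / [P(H)·P(e1 | H)·…·P(en | H) + P(not-H)·P(e1 | not-H)·…·P(en | not-H)],

which approaches 1 as n grows. The crucial point is that only the exoteric outcomes enter the calculation; at no point does the layperson need access to the esoteric reasons behind E's answers.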


4. Summary

Thus, Goldman succeeds in showing that the Non-Expert/Expert Problem and the Non-Expert/2 Experts Problem are both soluble. Flat-out skepticism about expert identification and expert comparison is not warranted. I completely agree with these conclusions. Nevertheless, let me reiterate my complaint and the ensuing suggestion: In a full-fledged theory of expertise, all epistemic values and desiderata should be taken into account: besides truth and other truth-conducive desiderata (such as justification or rationality), especially features of systems of beliefs (such as explanatory coherence and understanding). This will make possible (1) more adequate explications of "epistemic authority" and "expert" in an objective sense and (2) a more complete list of sources, methods and strategies which may help to identify and assess experts.

BIBLIOGRAPHY

Alston, William P. 2005: Beyond "Justification". Dimensions of Epistemic Evaluation. Ithaca and London: Cornell University Press.
Aristotle 1928: Metaphysica. Translated by W.D. Ross (= The Works of Aristotle, Volume VIII), Second Edition. Oxford: Clarendon Press.
Augustine 1947: The Advantage of Believing. Translated by Luanne Meagher, O.S.B. In: The Fathers of the Church. A New Translation, Volume 4. Washington, DC: Catholic University of America Press, 381–442.
Biondi Khan, Carrie-Ann 2005: "Aristotle's Moral Expert: The Phronimos". In: Lisa Rasmussen (ed.), Ethics Expertise: History, Contemporary Perspectives, and Applications. Dordrecht: Springer, 39–53.
Bocheński, Joseph M. 1965: The Logic of Religion. New York: New York University Press.
— 1974a: "An Analysis of Authority". In: Frederick J. Adelmann, S.J., Authority. Chestnut Hill and The Hague: Springer, 56–85.
— 1974b: Was ist Autorität? Freiburg. Reprinted in: idem 1988: Autorität, Freiheit, Glaube. Sozialphilosophische Studien. Munich/Vienna: Philosophia Verlag, 9–106.
Brewer, Scott 1998: "Scientific Expert Testimony and Intellectual Due Process". Yale Law Journal 107, 1535–1681.


Burge, Tyler 1993: "Content Preservation". The Philosophical Review 102, 457–488.
Coady, C. A. J. 1992: Testimony: A Philosophical Study. Oxford: Oxford University Press.
De George, Richard T. 1970: "The Function and Limits of Epistemic Authority". The Southern Journal of Philosophy VIII, 199–204.
— 1976: "The Function and Limits of Epistemic Authority". In: R. Baine Harris (ed.), Authority: A Philosophical Analysis. Alabama: The University of Alabama Press, 76–93.
— 1985: The Nature and Limits of Authority. Lawrence, Kansas: University Press of Kansas.
Gentzler, Jyl 1995: "How to Discriminate between Experts and Frauds: Some Problems for Socratic Peirastic". History of Philosophy Quarterly 3, 227–246.
Goldman, Alvin I. 1991: "Epistemic Paternalism: Communication Control in Law and Society". The Journal of Philosophy 88, 113–131.
— 1999: Knowledge in a Social World. Oxford: Oxford University Press. [KSW]
— 2001: "Experts: Which Ones Should You Trust?" Philosophy and Phenomenological Research 63, 85–110. Reprinted in: (Goldman 2002a, 139–163) and in: (Selinger/Crease (eds.) 2006, 14–38).
— 2002a: Pathways to Knowledge—Private and Public. Oxford: Oxford University Press.
— 2002b: "Précis of Knowledge in a Social World". Philosophy and Phenomenological Research 64, 185–190.
— 2006: "Social Epistemology, Theory of Evidence, and Intelligent Design: Deciding What to Teach". The Southern Journal of Philosophy XLIV, 1–22.
— 2007: "Social Epistemology". In: Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2007 Edition). URL = .
Hardwig, John 1985: "Epistemic Dependence". The Journal of Philosophy 82, 335–349.
— 1991: "The Role of Trust in Knowledge". The Journal of Philosophy 88, 693–708.
— 1994: "Towards an Ethics of Expertise". In: Daniel E. Wueste (ed.), Professional Ethics and Social Responsibility. Lanham, Md.: Rowman & Littlefield Publishers, 82–101.
Harris, R. Baine (ed.) 1976: Authority: A Philosophical Analysis. Alabama: The University of Alabama Press.
Hume, David 1975: An Enquiry Concerning Human Understanding. In: David Hume, Enquiries concerning Human Understanding and concerning the Principles of Morals, ed. by P.H. Nidditch. Oxford: Oxford University Press.


Jungius, Joachim 1681: Logica Hamburgensis. Ed. by Johannes Vagetius, Hamburg; Reprint ed. by Rudolf W. Meyer, Hamburg 1957.
Kitcher, Philip 1992: "Authority, Deference, and the Role of Individual Reason". In: Ernan McMullin (ed.), The Social Dimensions of Science. New York: University of Notre Dame Press, 244–271.
— 1993: The Advancement of Science. New York: Oxford University Press.
LaBarge, Scott 1997: "Socrates and the Recognition of Experts". In: Mark L. McPherran (ed.), Wisdom, Ignorance and Virtue: New Essays in Socratic Studies. Edmonton: Academic Printing & Publishing.
— 2005: "Socrates and Moral Expertise". In: Lisa Rasmussen (ed.), Ethics Expertise: History, Contemporary Perspectives, and Applications. Dordrecht: Springer, 15–38.
Menne, Albert 1969: "Zur formalen Struktur der Autorität". Kant-Studien 60, 289–297.
Petrus Hispanus [Peter of Spain] 1972: Tractatus (Summulae logicales). Ed. by L.M. de Rijk. Assen: Van Gorcum.
Puster, Rolf W. 1993: "Das Zukunftsargument—ein wenig beachteter Einwand gegen den Homo-mensura-Satz in Platons Theaitetos". Archiv für Geschichte der Philosophie 75, 241–274.
Rasmussen, Lisa (ed.) 2005: Ethics Expertise: History, Contemporary Perspectives, and Applications. Dordrecht: Springer.
Scholz, Oliver R. 2001: "Das Zeugnis anderer—Prolegomena zu einer sozialen Erkenntnistheorie". In: Thomas Grundmann (ed.), Erkenntnistheorie. Positionen zwischen Tradition und Gegenwart. Paderborn: mentis, 354–375, 391–394. [2nd Edition 2003]
— 2008: "Erkenntnistheorie, soziale". In: Stefan Gosepath, Wilfried Hinsch, and Beate Roessler (eds.), Handbuch der Politischen Philosophie und Sozialphilosophie. Berlin & New York: Walter de Gruyter, 287–290.
Schröder, Jan 2001: Recht als Wissenschaft. Geschichte der juristischen Methode vom Humanismus bis zur Historischen Schule. Munich: Beck.
Selinger, Evan and Robert P. Crease (eds.) 2006: The Philosophy of Expertise. New York: Columbia University Press.


V. UNDERSTANDING OTHER MINDS

Grazer Philosophische Studien 79 (2009), 209–242.

UNDERSTANDING OTHER MINDS: A CRITICISM OF GOLDMAN'S SIMULATION THEORY AND AN OUTLINE OF THE PERSON MODEL THEORY

Albert NEWEN & Tobias SCHLICHT
Ruhr-Universität Bochum

Summary

What exactly do we do when we try to make sense of other people, e.g. by ascribing mental states like beliefs and desires to them? After a short criticism of Theory-Theory, Interaction Theory and the Narrative Theory of understanding others as well as an extended criticism of the Simulation Theory in Goldman's recent version (2006), we suggest an alternative approach: the Person Model Theory. Person models are the basis for our ability to register and evaluate persons having mental as well as physical properties. We argue that there are two kinds of person models, nonconceptual person schemata and conceptual person images, and both types of models can be developed for individuals as well as for groups.

Consider Ralph. Ralph is strolling along the beach, where he sees a man wearing a brown hat, black sunglasses and a trench coat. He has seen this man several times before in town, and his strange and secretive behaviour has made Ralph suspicious. Since the man, let's call him Ortcutt, always tries to cover his face and turns around all the time to see if he is being followed, etc., Ralph has come to believe that Ortcutt might be a spy. Since Ralph finds this exciting, he follows him. Now, Ortcutt is in fact a spy, and when he turns around and notices Ralph, he starts walking faster, takes his cell phone out of his pocket and makes all kinds of wild gestures while talking to someone. Ralph, in turn, comes to believe that the man in the brown hat believes that Ralph has recognized him as a spy and that his cover has been blown. Only now does it occur to Ralph that it might not have been such a good idea to show so much interest in the man, and he runs away. How does Ralph acquire this belief about what Ortcutt might be thinking? This question is an instance of the more general question of how we understand others,

how we come to know what they believe and desire or intend to do, what they feel and perceive. Typically, when we think about what others are (or might be) thinking, we represent them as having mental states (processes or events1) like beliefs, desires, emotions and so on. This mental capacity of ours is sometimes called mentalizing or mindreading, and it has been among the most discussed topics in recent philosophy of mind and cognitive science. This research area has been transformed profoundly by recent developments in the cognitive neurosciences and developmental psychology. In the last decade, there has been an intensive investigation into the neural mechanisms underlying the capacities associated with mindreading, and we have also learned a lot about some of the relevant capacities displayed by young children at various ages. Thus, research in this field has become essentially interdisciplinary. But despite the scientific progress in the empirical disciplines, there is still no consensus about how we should best understand and conceptualize these capacities subsumed under the name of mindreading. What exactly do we do when we try to make sense of other people by ascribing mental states like beliefs and desires to them? How should we best characterize the mechanisms that are executed in us when we exercise this capacity?

1. Theory-Theory, Simulation Theory, Interaction Theory, and Narrative Theory

Four systematic positions can be distinguished under which most theories that are currently on the table can be subsumed (while some accounts are hybrids of these approaches). According to the so-called 'Theory-Theory' (TT), when we ascribe mental states like beliefs and desires to a person, we employ a folk-psychological theory similar to a scientific theory (e.g. Gopnik and Wellman 1994, Gopnik and Meltzoff 1997). Without such a theoretical embedding we cannot make sense of other people's behaviour. This idea stems largely from experiments showing that children gradually learn about people and start to explicitly represent other people's propositional attitudes at around the age of four years, when they are capable of ascribing false beliefs to others (Wimmer and Perner 1983). By that age, children have acquired mental state concepts by observing others and have thus formed such a (rudimentary) theory.

1. In general, we will use the notions 'mental state', 'mental process', 'mental event' interchangeably and do not want to make any ontological commitments regarding these notions.

210

This theory may, on the basis of new observations, be revised during the child's cognitive development, just like a scientific theory may be revised given new observations. On this view, Ralph is like a scientist, trying to make sense of his observations by positing mental states as theoretical entities, and ascribing them to Ortcutt's mind—just like a scientist may posit theoretical entities like quarks and strings to explain certain observations. A competing version of Theory-Theory is based on a modular approach to the mind; it distinguishes various innate modules and claims that one such specific innate mechanism in our brain is designed particularly to understand other minds (a modular version of the Theory-Theory is defended by Baron-Cohen et al. 1985, Baron-Cohen 1995, Leslie 1987). On this view, Ralph employs this innate mechanism in order to understand Ortcutt's behaviour. What these approaches have in common is the contention that we employ a rather detached theoretical stance towards people, analogous to scientists who employ a theoretical stance towards their subject matter.

211

a detached theoretical stance towards the other. Such pragmatic interaction is, according to Gallagher, to a large extent characterized not only by what people are saying, but also by their embodied practices, including bodily movements, facial expressions, gestures, and so on. The central claim of this view is that we can, at least in most cases, directly perceive what other people are up to; neither theoretical inference nor simulation are thus the most pervasive ways of understanding others, they are seldom necessary.2 Thus, according to this view, Ralph can somehow directly perceive what Ortcutt is up to. His beliefs and desires can be ‘read off’ his behaviour on the basis of Ortcutt’s embodied communicative practices such as displaying nervous movements, turning around many times, covering his mouth with his hand while talking on the phone and so on. Another recent development is Hutto’s (2008) so-called ‘Narrative Practice Hypothesis’, which states that from the beginning of childhood we are exposed to and engage in various narrative practices; in direct encounters but also in various other situations we are exposed to stories about people acting for reasons. Such stories form the basis of our acquisition of the forms and norms of folk psychology. Thus, Ralph may understand Ortcutt’s behaviour on the basis of his (stereotypical) knowledge about spies, which he may have acquired via the relevant stories, e.g. from reading novels or watching movies. We do not claim that this is an exhaustive list of positions that one may develop on this issue, but they are the ones that have been discussed extensively in the recent literature and distinguishing between them suffices for the purposes of this paper. The bulk of this paper is devoted to a critical discussion of the Simulation-Theory of mindreading. More specifically, we will focus on Alvin Goldman’s recent elaborate defense of this theory (Goldman 2006, forthcoming). To characterize the main criticism right at the beginning it is helpful to distinguish two demands: 1. We can ask which mental states someone else might have and how we come to know about this. 2. We can try to estimate the decision someone is going to make presupposing knowledge about the other person’s initial mental states (especially the relevant beliefs and desires). We argue that Goldman’s theory of high-level mindreading focuses only on the second question and thereby only deals with a very special case of understanding other minds which cannot be generalized. This case misses the main task 2. For a critical discussion of the notion of ‘direct perception’ in this context see Van Riel 2008 and Gallagher 2008a, b.

212

of a theory of understanding other minds (without additions it simply involves the mistake of a petitio) since the simulation of decision-making already presupposes an initial understanding of the other’s beliefs and desires. In his recent approach, Goldman introduces a theory of low-level mindreading which deals with the relevant question 1, but as we argue below, (i) it cannot account for almost all propositional attitudes and (ii) it is not clear why it should be evaluated as being a case of mental simulation. Therefore, Goldman’s Simulation Theory suffers from severe gaps given that it wants to offer a complete theory of understanding other minds. We grant that it is an important progress that he introduces the distinction between ‘low-level’ and ‘high-level’ mindreading as two radically different ways of understanding others. Our positive account will benefit from it. After offering a detailed characterization of Goldman’s theory in section 2, we put forward several objections to his approach (sections 3 and 4), where section 3 is devoted to low-level mindreading and section 4 concerns high-level mindreading. We argue that the two accounts suggested by Goldman are so essentially different in kind and in complexity, that it is unmotivated to subsume both of them under the same umbrella of a generalized Simulation-Theory. It is explanatorily more fruitful to accept a multi-level theory of understanding other minds, based on the insight that we have very different strategies and mechanisms at our disposal for understanding others. Whether and when we employ these various strategies depends not only on our prior relation to the person whose ‘mind’ we wish to understand, but also on their behavioural patterns which we observe and on the context of the situation in which the observed person displays these patterns. Thus, we need a new alternative account in order to capture all cases of understanding others. In section 5, which is the constructive part of the paper, we suggest that we essentially rely on ‘person models’ to understand other minds. We introduce and explain this notion and distinguish two different kinds of person models: person schemata and person images. Person schemata are sufficient to establish a non-conceptual understanding while person images are constitutive for a conceptual understanding. Person models in general are used for selfunderstanding as well as for understanding other minds.

213

2. Goldman’s Simulation-Theory 2.1 The general structure of Goldman’s theory When evaluating the alternative accounts of mindreading mentioned above, one needs to keep in mind that they do only exclude each other if each of them is interpreted as making the strong and universal claim that only one of them is the single (or at least the most pervasive) strategy we use to understand others (Cf. e.g. Baron-Cohen 1995, 3f, Goldman 2002, 7f ). Indeed, it seems that if proponents of these various approaches would not make this strong claim, then there might not even have been such a lively debate in the past twenty years or so. Once one allows for different kinds or strategies of mindreading, both simpler and more complex ones, then also hybrid accounts combining elements of some of them are possible. Goldman defends such a hybrid theory, “a blend of ST and TT, with emphasis on simulation” (2006, 23). One of the reasons why he no longer subscribes to a pure Simulation-Theory is the phenomenon of self-ascribing current mental states for which the simulation routine just does not make sense.3 In order to highlight the essential structure of the Simulation-Theory, it helps to contrast it with the structure of the Theory-Theory. Here, it is important to note that Goldman discusses the differences between these two main rival views only in the special context of predicting a decision, i.e. of someone’s prediction of what another person shall decide on the basis of given beliefs and desires. As already mentioned, Goldman owes us a story how we come to know the initial propositional attitudes while the Theory-Theory explicitly accounts for them: It is an essential ingredient of Theory-Theory that the attributor employs a background belief in a folk-psychological law, e.g. a law about means-end reasoning. For example, Ralph may run away since he believes both that Ortcutt has the initial belief that he has been exposed by someone, that he desires to get rid of this person and that (generally) ‘in situations where their cover is blown, spies usually decide to consult a colleague or their boss to ask them whether they should kill the guy who blew their cover’. 3. Another reason is that he accepts that Simulation-Theory cannot account for our understanding of other minds in the numerous cases of people suffering from mental diseases, which involve radically different experiences (e.g. thought insertions in schizophrenia, the experiences which are connected to Cotard Syndrome, and so on). It will be argued below that the additions necessary to account for such phenomena radically change the Simulation-Theory such that it is no longer adequate to characterize it in the intended way.


Ralph's beliefs about Ortcutt's initial mental states result from wondering about how to make sense of the target's behaviour. The target's presumed beliefs are treated like hypothetical theoretical entities and are in turn fed into a reasoning mechanism, which then yields further beliefs (or rather, an inference) as output. The result is, first, the attributor's belief that the observed person is a spy and that he noticed that his cover has been blown. Then the attributor uses his reasoning mechanism to infer that the target person, in this case Ortcutt, decides to kill him. According to Goldman, ST just presupposes the same initial mental states, but it tells a different story about how they are used by the attributor: the attributor uses the "information that T desires g … to create a pretend desire" (Goldman 2006, 28). Similarly, the attributor creates pretend beliefs, which are supposed to match the target's initial beliefs. These pretend mental states are then fed into the attributor's own decision-making mechanism, resulting in a pretend decision, which, crucially, does not result in an action. Instead of being carried out or acted upon, this (pretend) decision leads to a genuine (not pretend) belief about what the target will decide to do in this situation. Thus, on this account, Ralph asks himself what he would do if he faced Ortcutt's situation and thus creates the pretend belief that his cover has been blown and the pretend desire to get rid of the man who exposed him, only to reach the pretend decision to kill this man. Then, instead of acting upon this decision, he projects it onto Ortcutt. This schema characterizing ST has the following important features: First, the pretense involved in the creation of pretend propositional attitudes is a special kind of imagination. In contrast to imagining that something is the case, e.g. that someone is elated or that one sees a car, one imagines feeling elated or seeing a car. That is, one creates a state that is phenomenologically more similar to the real feeling or perception since one enacts the relevant state. Therefore, Goldman calls the relevant kind of pretense 'enactment imagination'. It involves a deliberate creation of a mental state with a special phenomenal character (Goldman 2006, 149). This state is then projected onto the other subject. A further feature of the mindreading process is the process of "quarantining". In order for the simulation routine to work, it is crucial that the attributor's own mental states do not interfere with the pretend states. Thus, in the example, Ralph needs to "quarantine", i.e. isolate or 'repress', his own idiosyncratic beliefs and desires (Goldman 2006, 29). Failing to do so may result in an egocentric bias that contaminates the evaluation of Ortcutt's mental states.


Thus, according to Goldman, third-person attribution of a decision (high-level mindreading) consists of

(i) creating pretend propositional attitudes (in a special way, through enactment imagination),
(ii) using a (the same) decision-making mechanism (as in the first-person case), and
(iii) projecting the product of this decision-making process onto another person (attributing the decision), while quarantining those mental phenomena that are specific only to oneself and not to the other person.

Goldman tries to generalize this model to account even for basic forms of understanding other minds while introducing some modifications. In general, Simulation-Theory can be distinguished negatively from Theory-Theory by the rejection of the belief in a psychological law (or generalization) posited by TT, but it can also be positively characterized by positing this two-stage process of mindreading, namely the simulation stage and the projection stage (Goldman 2006, 40). The simulation stage demands a process P in the attributor that duplicates, replicates or resembles the relevant process P* realized in the person observed, and it should always result in a first-person attribution of a mental state. The second stage is then the projection of this type of mental state onto the other subject.

2.2 Low-level and high-level mindreading

Let us critically examine these core features while adopting Goldman's useful distinction between low-level and high-level mindreading. Mindreading in general comprises all cases of evaluating the mental state(s) of another person, including the language-based attribution of a mental state to a person. Now, in the last section, the general pattern of mindreading postulated by ST has been introduced with a focus on propositional attitudes like beliefs and desires, and on the prediction of a decision made by someone else. According to Goldman's distinction, this is a typical case of high-level mindreading, to be contrasted with low-level mindreading. The latter is defined as a process which is "comparatively simple, primitive, automatic, and largely below the level of consciousness" (Goldman 2006, 113). It typically targets relatively basic mental states like emotions, feelings, sensations like pain, and basic intentions, and it is usually grounded


in basic perceptual information. A paradigm case of low-level mindreading is thus face-based recognition of emotion. According to Goldman, such low-level mindreading is based on a mirroring process "that is cognitively fairly primitive" (ibid.). Thus, low-level mindreading may be caused or generated by the activation of 'mirror neurons'. These neurons, which were discovered about ten years ago in macaque monkeys, are activated both when the monkey executes a goal-directed hand action (reaching for and grasping a peanut, say) and when the monkey observes another individual (be it a monkey or a human being) executing a similar action (Rizzolatti et al. 1996, Gallese et al. 1996, Rizzolatti and Craighero 2004). According to Goldman, in order for a genuine mirroring process to take place, it is not enough that mirror neurons be activated endogenously. This may be the result of mere accidental synchronisation. Instead, they have to be activated in an observation mode, which excludes imagination-based mirroring, like motor imagery, from counting as a case of mindreading (cf. Goldman forthcoming). Therefore, although mirroring alone does not constitute mindreading, low-level mindreading may be based upon it or caused by it. To mention only one empirical example, it has been shown that activating a specific neural circuit underlying the experience of disgust is also causally efficacious in the normal recognition of this emotion in others, while failing to activate it (because of a brain lesion, for example) prevents both the capacity to experience it and the capacity to recognize it in and attribute it to others (Wicker et al. 2003). A mirroring event needs to be supplemented by a classification of the target's mental state(s) and a projection (or imputation) of that classified state onto the target. Although a case of mindreading demands both a simulation and a projection stage, the simulation stage need not involve multiple steps, but may be constituted by a "single matching (or semimatching) state or event" (Goldman 2006, 132). But not all mindreading is caused by or based upon mirroring, as Goldman emphasizes. This is so partly because "some forms of mindreading are susceptible to a form of error to which mirror-based mindreading isn't susceptible" (Goldman forthcoming). Such errors are typically egocentric "failures of perspective-taking" or of inhibition of the self-perspective, which simply cannot happen in mirroring. Secondly, the definition of a 'mirroring process' explicitly excludes imagination-driven events, while mindreading can sometimes be initiated by the imagination (e.g. when one learns about the other person's situation from an informant). High-level mindreading then is defined as follows:


‘High-level’ mindreading is mindreading with one or more of the following features: (a) it targets mental states of a relatively complex nature, such as propositional attitudes; (b) some components of the mindreading process are subject to voluntary control; and (c) the process has some degree of accessibility to consciousness. (Goldman 2006, 147)

Thus, high-level mindreading is paradigmatically illustrated by the evaluation of a decision someone is going to make, as in the example above. First of all, we wish to emphasize that we applaud Goldman’s intention to introduce a distinction between two such radically different kinds of mindreading instead of trying to account for all cases of mindreading with one single mechanism or strategy. The problem is that his way of drawing this central distinction is rather sketchy and ultimately does not withstand close scrutiny. For example, it is not clear whether the relevant criterion is the type of mental state to be attributed (a sensation or a propositional attitude) or whether it is the question of whether the mindreading process is conscious or not. Moreover, Goldman merely demands that in the case of high-level mindreading ‘one or more’ of the relevant features be present. He apparently does not intend to introduce necessary or sufficient conditions, and it is disputable whether he points out adequate conditions. A further problem is that although he mentions these crucial differences between low-level and high-level mindreading, Goldman still claims that both are essentially cases of simulation and that they can thus both be accounted for by his two-stage framework of ‘simulation plus projection’. In the following two sections, we take issue both with the way Goldman draws the distinction in the first place and with his interpretation of these two strategies of mindreading as cases of simulation, by looking more closely at both strategies, starting with low-level mindreading.

3. Problems of the Simulation-Theory of low-level mindreading

As has been explained above, low-level mindreading is supposed to proceed in two steps: a first step of registering a type of mental state, e.g. an emotional or painful experience, and a second step of projecting the emotional or sensational state in question onto another subject. Registering the emotional state is supposed to be constituted by a mirroring process on the neuronal level.

Mirroring another person’s emotional state amounts to the activation of the same neurons in the observer’s brain that would be activated if the observer felt the emotion or pain herself. This mirroring process is not subject to the conscious control of the observer. Rather, it happens automatically and remains unconscious. In order for this to be a case of mindreading, a second step needs to follow: the observer needs to attribute the emotion or sensation in question to the other. This cannot be done unconsciously; it is rather a conscious and deliberate action. According to Goldman, the attribution of the mental state to the other person involves projection. He suggests that in every case of understanding others we first detect the mental state as a state of ourselves, secondly attribute it to ourselves, and thirdly project it onto the other person. More explicitly, we find the following steps in Goldman’s account of low-level mindreading (see Goldman 2006, 128):

1. Visual representation of the target’s facial expression (seeing a face)
2. Activating a somatosensory representation of what it would feel like to make that expression (registering the type of emotion, sometimes on the basis of mirror neurons)
3. Experiencing the emotion (including a self-attribution of the type of emotion)
4. Projecting the self-attributed mental state onto another person (while quarantining idiosyncratic mental dispositions)

Figure 1

Let us now critically examine steps 2, 3 and 4, in which simulation and projection are realized in cases of low-level mindreading. We all agree on what the minimal basis of low-level mindreading is:


In the case of underlying mirror neuron activation we register a type of mental state, e.g. tooth pain, independently of representing the person having it: “A simulation process can consist, minimally, of a single matching (or semimatching) state or event.” (Goldman 2006, 132) Our critical question here is why we should still characterize this kind of registering as a case of simulation. It is radically different from simulation in the case of high-level mindreading, since the crucial element of enactment imagination is missing and the generalized condition of ST, namely that the representation “duplicates, replicates or resembles the mental process P in the other person in some significant respects”, remains radically underspecified. Since the mirror neuron processes are unconscious, the “significant respects” cannot involve any conscious features of mental phenomena. The candidates for simulation are then processes sharing the functional role of the unconscious automatic processes underlying the mental phenomena of another person. Simulation is thus reduced to a resembling representation which involves neither any similar conscious experience nor any state of pretending. It is not useful to subsume both processes, in the cases of high-level and low-level mindreading, under the same label of “simulation”. Gallagher suggests that we should instead interpret mirror neuron activation in terms of direct perception (Gallagher 2007). After all, being in ‘observation mode’ is part of Goldman’s (forthcoming) definition of a mirroring process. One may take that literally and argue that in many cases we can simply observe, i.e. ‘perceive’, other people’s mental states; we can just ‘see’ them in their embodied practices (gestures, facial expressions, etc.). For example, we can often see that someone is disgusted or in pain simply by looking at their facial expressions. Why would we need to posit a simulation process? Gallagher’s (2007) alternative perception-based account is more parsimonious and persuasive here. Furthermore, even in cases where we have not yet experienced the relevant mental state ourselves, we are still able to arrive at an attribution of a mental state. It seems that in such cases Simulation-Theory has nothing to say; other strategies may be needed.4

4. Goldman may try to treat such cases as exceptions, which he can account for since he defends a hybrid of ST and TT. Here, a strategy of theoretical explanation seems to be relevant. The problematic presupposition in this reply is that it involves—without sufficient reason—the claim that those cases are exceptions. We will argue in our constructive part (see section 5) that as adults we regularly have to attribute mental states that we do not experience ourselves. Otherwise we could not understand the majority of people in sufficient detail. Here we may have to switch to high-level mindreading even in the case of emotions and sensations.


Goldman’s step 2 also underestimates the fact that the mirror neurons only represent a type of mental state without being sufficient for a self-other distinction. The fact that mirror neurons fire irrespective of whether the monkey executes the action or merely observes the other executing it suggests that this firing merely encodes an action plan but is otherwise completely neutral with respect to who is performing the action. Far from solving the problem of understanding others, the mirror neuron discovery seems to give rise to the question of what further mechanism enables us to distinguish our own actions (or mental states) from those of others. It points to the need for some further system, sometimes called a ‘Who-system’ (Georgieff and Jeannerod 1998, de Vignemont and Fourneret 2004), for registering a mental state as a state of ourselves (and not of someone else): the self-other representation is installed by a process at least partly independent of mirror neurons. This makes it plausible that the information about the type of the mental state is combined either with the self-representation or with the other-representation, but not with both. It may be part of the format in which an action plan is encoded that it is either first-personal (proprioceptive etc.) or third-personal (outer perception), and this difference in format might be realized in a different neural mechanism that interacts with the mirror system. Let us illustrate this with an example: if I see an angry face, my mirror neurons may be activated and represent the anger, but the information that the activation is based on the visual input of a face automatically leads to an other-representation of the mental event. Once such an other-representation of anger is established, we only need to express the content linguistically in order to attribute it adequately to the other person. On this account, no intermediate linguistic self-attribution is needed. Even if we granted that in all cases of observing mental states some experience is produced by mirror neurons inside me, such an experience need not lead to a self-attribution of the mental state. But this is what Simulation-Theory claims when it posits that, in general, understanding others proceeds by modelling the other’s mental states with one’s own first-person experience. While Goldman concedes that simulation is radically impoverished in the case of low-level mindreading, he suggests that projection is the same in low-level and high-level mindreading. The case of low-level mindreading is also supposed to involve the self-attribution of the mental state (step 3 in his model), which then leads to an attribution to another person: “If, in addition, the observer’s classification of his own emotion is accurate, his attribution of that same emotion to the target will also be accurate.” (Goldman 2006, 129)


What is the status of the proposed self-attribution? It seems that Goldman faces a dilemma here: if he claims that the self-attribution is a conscious event, then this stands in contrast to our phenomenological observations (and to his general characterization of low-level mindreading as an unconscious process): we simply do not consciously self-attribute pain or disgust when observing someone else’s pain or disgust (at least in most everyday cases).5 But if Goldman claims that the self-attribution in low-level mindreading is an unconscious event, then we simply lack sufficient empirical evidence for it. We can offer an alternative explanation, which is more parsimonious and does not involve a projection on the basis of a self-attribution. As explained above, low-level mindreading is particularly manifested in the recognition of basic mental states like emotions, sensations (e.g. pain) and simple intentions (to grasp something, say). There is strong evidence that recognizing basic emotions like anger, fear, sadness, etc. on the basis of the perception of facial expressions is a strongly modularized process: if both amygdalae are damaged, it seems to be impossible to experience fear and to register fear in other people. The relevant brain areas are ‘mirroring’ areas, underlying both the experience of fear and the registration of fear in others (Damasio 1999, 66, Goldman 2006, 115f.). But what is important here is that in order to describe the process of recognizing fear in another person—in normal cases—we just need to presuppose a self-other distinction in addition to the registration of fear. And such representations come in various degrees of complexity: a non-conceptual self-other distinction is already available to a cognitive system like a human being (and to other animals with a minimal behavioral complexity) on a very basic level of bodily self-acquaintance (Bermúdez 1998, Newen and Vogeley 2003, Newen and Vosgerau 2007, Vosgerau and Newen 2007). The combination of registering fear with a non-conceptual ‘other-representation’ is sufficient to register ‘fear in the other person’. To arrive at an attribution of fear, this other-representation of fear is expressed in natural language. Therefore, our alternative view ideally involves three closely connected elements of understanding an emotion in someone else: 1. the non-conceptual registration of the type of emotion, 2. the combination of a non-conceptual other-representation with the non-conceptual registration of a type of emotion, and 3. the expression of this content in natural language.

5. An exception may be a case where I am very closely related to the person who is suffering, e.g. if my child has burned her finger on the hot cooking plate. Although I may experience pain consciously in such cases, the most relevant experience is not pain but concern (about what to do next). Moreover, we would have to distinguish between real pain (on the basis of a burned finger) and mere empathic pain (which is not caused by burning one’s finger). Also, it seems that in the case of contagion (for example, laughing or yawning) we have a third case to distinguish.


No projection from self to other is involved in this model.

1. Visual representation of the target’s facial expression (seeing a face)
2a. Registering the type of emotion (sometimes on the basis of mirror neurons)
2b. Application of the self-other representation
3. Natural language attribution (either self-attribution or other-attribution)

Figure 2

This is the standard scenario for understanding others on the basis of visual information. In most cases, visual information about a facial expression displaying an emotion triggers a representation of the type of emotional state in question and an other-representation, which then leads to an attribution of an instance of that type of emotion to the other person whose facial expression triggered the representation of the emotion. But note that visual information about a facial expression does not necessarily lead to an attribution of an emotion to another person. It may also lead to a self-attribution of that emotion, e.g. when one looks in a mirror and receives information about one’s own facial expression. In that case, the visual information triggers the application of a self-representation instead of an other-representation, based on prior knowledge about mirrors and about ourselves.


If a simpler story is available, especially one that involves less demanding cognitive mechanisms, then the principle of parsimony can be employed against Goldman’s Simulation-Theory of low-level mindreading, in favour of an alternative, perception-based approach.6 Let’s now turn to our criticism of Goldman’s Simulation-Theory of high-level mindreading.

4. Problems of the Simulation-Theory of high-level mindreading

According to Goldman, while low-level mindreading is fully automatic and usually applies to emotions, sensations and intentions, high-level mindreading crucially involves “enactment imagination” as a cognitively high-level activity which is at least potentially under our conscious control. First of all, Goldman conceptualizes pretending as an operation or process, not as a distinct mental attitude in addition to belief and desire (Goldman 2006, 47), because otherwise we could not make intelligible what a pretend belief or a pretend desire is supposed to be. More specifically, pretense is supposed to be a kind of imagination. Goldman distinguishes various kinds of imagining: one can imagine that something is the case, e.g. that someone is elated. One can also imagine feeling elated or seeing a car. That is, imagining something may not amount to a supposition but to “conjure up a state that feels, phenomenologically, rather like a trace or tincture of elation … When I imagine feeling elated I do not merely suppose that I am elated; rather, I enact, or try to enact, elation itself” (ibid.). This is what Goldman calls “enactment imagination”, and it is the crucial process underlying high-level mindreading: the mental states that are projected onto another person are supposed to be the results of this process. We will argue in the following that Goldman’s account of high-level mindreading, put forward as a hybrid of ST and TT, is unpersuasive as long as it is supposed to be a variant of Simulation-Theory, simply because of the elements of Theory-Theory that it relies upon at various points. The major problem with Goldman’s Simulation-Theory of high-level mindreading is that it does not even get off the ground. In short, it cannot provide an explanation of how we come to attribute mental states to another person, and this is the core of mindreading.

6. This also holds against Goldman’s interpretation of mirror neuron activity in terms of simulation (Gallese and Goldman 1998).


Recall that mindreading may be understood in two ways: one may ask how we recognize mental states in others, and one may ask how we estimate the decision someone else is going to make on the basis of (our knowledge about the person’s) initial beliefs and desires. These are two different questions, and since, arguably, the second presupposes an answer to the first, the first is at the core of mindreading. In presenting his account of high-level mindreading, Goldman presupposes that we already know the target’s beliefs and desires. His account thus presupposes what it is supposed to explain. Our criticism, in short, is that ST does not provide an answer to the first question but only to the second, and in doing so presupposes an alternative account of what it means to recognize or understand the beliefs and desires of others. And in this regard, Theory-Theory is more attractive than Simulation-Theory, especially since theoretical assumptions enter Goldman’s hybrid account anyway. Let us elaborate this objection in more detail. Goldman’s model of high-level mindreading concerns decision-making, i.e. it starts with the attributor’s beliefs about the initial mental states the target supposedly has (Goldman 2006, 26–30). In elaborating his model, Goldman explicitly says that Theory-Theory and Simulation-Theory start with the same assumptions on the part of the attributor regarding the target’s initial mental states. In both cases, the attributor thinks that the target has the belief that p and the desire for g. Goldman says that the two accounts only differ with respect to how the attributor uses these presumed mental states, or what the attributor does with them. So a crucial element of Simulation-Theory, which Goldman does not elaborate in any detail, is the initial “information that T desires g” (Goldman 2006, 28) which the attributor supposedly has at her disposal. In order for the attributor to create (in herself) the correct pretend mental states, she needs to know in advance which mental states the target is undergoing, i.e. what the target initially believes and desires. Obviously, only if the attributor knows that the target desires g can she create a pretend desire for g instead of a pretend desire for f. But importantly, this first step already constitutes what needs to be explained, namely mindreading or mental state ascription (the first question posed above). Thus, the simulation routine can only get off the ground given some prior knowledge about the target’s initial beliefs and desires. Arguably, Goldman needs to tell a story about how the attributor arrives at her beliefs about the initial beliefs and desires of the target. If he cannot tell such a story, then the simulation routine does not have any explanatory power by itself. Therefore, the question arises of how this initial “information” acquisition should be spelled out.


In order to make good his case for the claim that a simulation routine is the essential ingredient in the mindreading process, Goldman would have to supplement his account with a story about how the attributor comes to hypothesize that the target has the initial desire for g, and this story would need to be formulated in terms of simulation. But as we submitted above, it seems that simulation cannot do the job, since it always presupposes knowledge of the mental phenomena that can then be pretended. The alternative theories do not seem to face this problem. A more theoretical explanation, for example, does not depend on this condition. According to Theory-Theory, the attributor comes to posit the specific beliefs and desires of the target on the basis of her observation of the target’s behaviour (which she cannot make sense of merely on the basis of the pure perceptual information). These hypothesized initial mental states are evaluated against some folk-psychological “generalizations” (to avoid the term “law”) in order to come up with a further hypothesis about the target’s decision.7 At this point, it may be useful to briefly elaborate what the belief in a “theory” may amount to. Some proponents of Theory-Theory have argued that it is akin to a scientific theory (Gopnik and Meltzoff 1997, Gopnik 1993). This gave rise to a number of objections and a heated debate about how cognitively demanding TT is; we agree that this is problematic, since it is not even agreed upon what a scientific theory really is, and because this may be too demanding when it comes to infants and their capacity to understand others. For our purposes, a theory may be understood as a systematically interconnected set of beliefs regarding a set of phenomena. The relevant class of phenomena in this context is mental states, and the relevant set of beliefs can be characterized as a certain limited number of generalizations. The advantage of Theory-Theory over Simulation-Theory in the case of high-level mindreading is that it is designed to deal with the first of the two questions above, which Simulation-Theory does not even attempt to answer. The claim that Theory-Theory is more persuasive than Simulation-Theory can be further justified by emphasizing the extent to which Goldman relies on elements of Theory-Theory at other points.

7. If a proponent of Simulation-Theory wants to include in his account knowledge of folk-psychological principles as well as the evaluation of mental states on the basis of observing behavior by using these principles, then this is not a modest modification into a hybrid account, since the application of these principles then does the essential work in the process of understanding other minds. Even if Goldman’s model of decision-making were correct, it would just be a very special case of understanding others, presupposing the important basic case (see below).


Goldman’s account makes use of a further hidden premise before the projection stage. Before the attributor projects her own pretend states onto the target, she tacitly assumes that this particular target (but also other people in general) is “like her” in the relevant respects. That is, the attributor tacitly believes that people are equipped with the same decision-making mechanisms and arrive at pretty much the same decisions, given certain beliefs and desires. Otherwise, the attributor would have no justification (or, more weakly, motivation) whatsoever to assume that her own pretend belief, desire and decision—arrived at by enactment imagination—resembles or even matches the target’s belief, desire and decision. So the attributor believes that the target is like her in relevant (cognitive) respects. What is the status of this belief? Because of its generality and universality, it seems reasonable to regard it as a belief in a generalization (if not a psychological law) about people, just as suggested above. Moreover, it seems that this belief in the semblance of decision-making processes in the attributor and the target contains what others have called a “rationality assumption” (Dennett 1987). This rationality constraint enters the story because of the relation between the target’s presumed initial belief and desire, which are of course interrelated: it needs to be assumed that the attributor believes that, given the desire for g and the belief that action m will lead to g, one should rationally arrive at the decision to do m. Otherwise, she would neither be justified (motivated) in arriving at her own pretend decision herself, nor would she have reason to believe that the target should arrive at this decision; that is, the projection would be unmotivated. But seen in this light, it is unclear why Goldman so vehemently opposes what he calls “Rationality Theory” (Goldman 2006, 53–68). Apart from the fact that Goldman’s account of Dennett’s theory is at times unfair, it is clear that this ‘Rationality Theory’ does not claim that we always think that other people make rational decisions. As Dennett (1971) emphasizes, at a certain point we may have to give up the rationality assumption, namely when we try to make sense of other people’s behaviour and this behaviour does not fit our model of rational decision-making. It seems that not even Simulation-Theory can do without the attributor’s assumption of a minimal rationality, in her own case as well as on the side of the target. Otherwise it is impossible to predict what someone may decide; at least, one can never be sure, for if the target may be completely irrational, anything is possible.


Furthermore, Goldman introduces the process of quarantining, according to which pretence requires that one isolate or ‘repress’ one’s own idiosyncratic beliefs and desires, in order to account for the case in which the attributor notices that he has a (radically or partially) different mind-set from the other person. But how can Simulation-Theory account for such a process? Here we have marked a further feature which is best understood as a theoretical component. To be able to notice that the other person has essentially different mental states (a different mind-set) from mine, I have to represent a minimal model of the other person and a minimal model of myself, and I have to compare both ‘person models’ to register relevant differences. This observation provides essential support for our positive account, which we shall call the ‘person model theory’ (see section 5). To sum up these points: it seems that at three junctures Goldman has to make use of theoretical assumptions that have nothing to do with simulation. First, at the beginning of high-level mindreading, he invokes “information” regarding the target’s initial mental states that is supposed to be at the attributor’s disposal; and it is plausible that this information is arrived at by some sort of inferential process, as posited by TT. Secondly, after the alleged simulation routine has taken place, the projection of the pretend decision arrived at is based on the belief that this pretend decision matches the target’s decision, which in turn presupposes the attributor’s belief in the resemblance of self and other (in relevant cognitive respects). Finally, to account for the exclusion of idiosyncratic mental states of the attributor, Goldman has to presuppose “quarantining”, which introduces a third theoretical component. Goldman foresees the second criticism, formulating it as the objection that ST ultimately collapses into TT. Against this, he presents various replies. First, he says that it would not be a total collapse of ST into TT if a “theoretical box” were added to the story, since the overall process would still be simulational in nature. But since we have identified not just one but three theoretical assumptions or inferences that have to be added to the story, one wonders why the intermediate stages of the mindreading process should be framed as simulations, and why this should matter very much, given the theoretical assumptions.8

8. But Goldman also rejects the objection that unless the attributor believes that the target is relevantly like her, she would not be justified in attributing her own decision to the target. Goldman’s reason for this rejection is that he wants to distinguish this question of justification from the question of how mindreading actually works. But one need not put the objection in terms of justification. Instead, one can ask what would motivate the attributor to arrive at her own (pretend) decision to do m on the basis of her pretend mental states rather than at the (pretend) decision to do n. It seems that only the assumption of a rational link between the initial belief and desire can make this intelligible.


Goldman’s second major reply is to reject the claim that the belief in resemblance-to-self amounts to a belief in a psychological law (Goldman 2006, 31). If it is not a belief in a psychological law, then the proponent of TT arguably has no argument for his position. Again, this dispute boils down to the debate about what counts as a “theory” according to Theory-Theory. As we hinted above, it is not easy to settle this dispute, partly because not even philosophers of science have a clear and uncontested answer to this question. We offered a minimal proposal, namely that a theory consists of a set of beliefs about a class of phenomena. If one grants that the number of beliefs required for something to be a theory can be very small, then the crucial beliefs that Goldman needs to posit in his own account justify calling them a belief in a theory. But when the fate of Simulation-Theory is at stake, the point is not so much whether we want to call the relevant beliefs a psychological law or a theory. The point is rather that these beliefs are further crucial ingredients of the whole story and that they are not explained in terms of simulation. A final problem arises if we take a closer look at the underlying neural mechanisms in the case of high-level mindreading. High-level mindreading in general is supposed to be based on the so-called ‘mindreading network’, which includes the medial prefrontal cortex and the temporo-parietal junction (Fletcher et al. 1995, Gallagher et al. 2000, Frith and Frith 2003, Saxe and Kanwisher 2003). But we have to distinguish between first-person and third-person attribution. According to Simulation-Theory, we would expect the neural correlate of third-person attribution to include all those brain areas that are activated in the case of first-person attribution of mental states, since self-attribution constitutes an essential stage in the simulation-and-projection process. But recent empirical evidence does not support that expectation: the study of Vogeley et al. (2001) has shown that first- and third-person attribution have different neural correlates and, most importantly, that the correlate for third-person attribution does not include the significant activations of first-person attribution (see also Vogeley and Newen 2002). These points together raise the question of what makes the Simulation-Theory of high-level mindreading attractive in the first place, since the other crucial elements of the overall account have nothing to do with simulation. Goldman’s contention that the whole process is still essentially a simulation routine is just that, a contention.


Moreover, Nichols and Stich (2003) have correctly pointed out that Goldman is in danger of using the term ‘simulation’ for too many processes, which are too different in kind to form a theoretically interesting category. This objection is especially pressing when one puts low-level simulation and high-level simulation together in one category. It seems that an account of high-level mindreading that relies merely on theory will be simpler and more parsimonious than a hybrid account that combines simulation and theory. At this point it looks like Goldman’s Simulation-Theory does not withstand closer scrutiny. While what Goldman calls high-level mindreading is best explained in terms of the application of a theory, his low-level mindreading is best explained in terms of perception (or ‘registration’).

5. The person model theory of understanding other minds: An outline

Before we introduce our own positive alternative account, we shall finally turn to our objections to the way Goldman draws his distinction between low-level and high-level mindreading. As mentioned above, we applaud his general intention to draw such a distinction, since there is empirical evidence of a twofold system of understanding other minds (e.g. Olsson and Ochsner 2007). But Goldman leaves it largely unclear what his criteria are.9 Even more importantly, we suggest that we have to apply the distinction twice over: we should distinguish, first, kinds of mental phenomena and, secondly, strategies of understanding other minds. Let us begin by illustrating the distinction between different kinds of mental phenomena: 1. Concerning emotions, it is already commonplace to distinguish basic emotions (like joy, anger, fear, sadness), which do not involve any higher cognitive processes, from cognitive emotions, which essentially involve propositional attitudes (Zinck and Newen 2008). Ekman (Ekman et al. 1969) has shown that there are culturally universal facial expressions of basic emotions. Damasio (2003) also argues that we have to account for what he calls primary emotions. Disagreement only concerns the question of which emotions exactly count as basic. We can define basic emotions by the underlying (relatively) modular processes that are independent of higher-order cognition (Zinck and Newen 2008), and we may generalize this to define basic mental phenomena in general (like colour vision, where a local lesion leads to achromatopsia).

9. De Vignemont (2009) also criticizes Goldman’s distinction and points out some problems regarding the compatibility of the way he draws the distinction with the empirical evidence of two neural networks for mindreading.


Pain, disgust and fear are then basic mental phenomena without higher cognition being involved, since they are realized by modular brain activations: mirror neuron activation in the cases of pain and disgust, amygdala activation in the case of fear, etc. Shame, jealousy, love, envy, etc. are cognitive emotions, since they essentially involve propositional (or relational) attitudes; e.g., in addition to a basic feeling, envy presupposes the belief that someone has a valuable object which I do not have, but really want to have yet cannot get, given my abilities and further conditions (Zinck and Newen 2008). We propose to apply this distinction regarding emotions to all mental states. That is, in general, we can distinguish basic mental phenomena, which are realized by modular brain processes, from high-level mental phenomena, which essentially involve propositional (or relational) attitudes. 2. The second distinction concerns the way of understanding someone’s mental phenomena. Face recognition is a well-known modular process, relying essentially on activations in the ‘fusiform face area’ (Kanwisher 2001). This is a typical example of an unconscious modular process which realizes a registration of the other person’s face, and since this representation is coupled with the detection of basic emotions, we have here a basic process of registering other minds independent of high-level cognition. Such a registration can then produce an adequate reaction which still does not involve any higher-order cognition: the position of a slightly bent head, for example, signals sympathy and is understood as such on an unconscious level by mere registration. The behavioural response of also signalling sympathy is caused by this registration of the other’s mental condition (Frey 1999). One may dispute that this form of registering the other’s mind already counts as a case of understanding the other’s mind. Nevertheless, we suggest classifying it as mindreading, while we wish to highlight that the relevant strategy involved is a non-conceptual form of understanding. The alternative strategy is a conceptual way of understanding other minds, which essentially involves conceptual and propositional representations (the distinction between these kinds of representations is elaborated in Newen and Bartels 2007). This is what Goldman has in mind. That is, what he calls low-level and high-level mindreading are both cases of the conceptual way of understanding other minds, because they always involve a linguistic attribution of mental states.


We can account for Goldman’s distinction by granting that there is a conceptual understanding of other minds which can be further divided into two forms according to the relevant mental phenomena: a conceptual understanding of basic mental phenomena (low-level mindreading) and a conceptual understanding of high-level mental phenomena (high-level mindreading). But in addition, there are important non-conceptual forms of understanding others not captured by Goldman’s distinction. With these distinctions at hand we can develop an outline of a new theory of understanding other minds.

5.1 What are the central claims of this account?

We suggest that we develop ‘person models’ of ourselves, of other individuals and of groups of persons. These person models are the basis for the registration and evaluation of persons as having mental as well as physical properties. Since there are two ways of understanding other minds (non-conceptual and conceptual mindreading), we propose that there are two kinds of person models. Very early in life we develop non-conceptual person schemata: a person schema is a system of sensory-motor abilities and basic mental phenomena10 realized by non-conceptual representations and associated with one human being (or a group of people); the schema typically functions without awareness and is realized by (relatively) modular information processes. Step by step, we also develop person images: a person image is a system of conceptually represented and typically consciously registered mental and physical phenomena related to a human being (or a group of people). Person models are created for other people but also for myself.11 In the case of modelling myself we can speak of a self-model, which we develop on the non-conceptual level as a self schema and on the conceptual level as a self image. A person schema is sufficient to allow newborn babies to distinguish persons from inanimate objects, manifested in neonate imitation, and is also sufficient for seven-month-old babies to separate persons from animals (Pauen 2000).

10. Mental phenomena include different ontological types: states, events, processes and dispositions. So not only stable mental phenomena are included but also situational experiences (like tokens of perceptions, emotions, attitudes, etc.). In a more detailed explication of the theory it would be useful to distinguish situational person schemata (stored only in working memory) from dispositional person schemata (stored in long-term memory). This has to be done in another paper.

11. The distinction between person schema and person image is based on Shaun Gallagher’s distinction between body schema and body image. Establishing a person schema of my own body amounts to Gallagher’s body schema, while a person image of my own body is what he introduced as body image (Gallagher 2005, 24).


We have already mentioned the observation of a non-conceptual understanding of other minds through unconsciously registering the position of someone’s head as signalling sympathy (Frey 1999).12 Those registrations are part of a situational person schema which influences our interaction even though we are not consciously aware of it. On the basis of such non-conceptual person schemata, young children learn to develop conceptual person images. These are models of individual subjects or groups. In the case of individual subjects they may include names, descriptions, stories and whole biographies involving both mental and physical dispositions as well as their manifestations. Person images are essentially developed not only through observation but also by telling, exchanging and creating stories (or ‘narratives’).13 Person images presuppose the capacity to consciously distinguish the representation of my own mental and physical phenomena from the representation of someone else’s mental and physical phenomena. This ability develops gradually, reaching a major and important step when children acquire the so-called theory-of-mind ability (operationalized by the false-belief task, see Wimmer and Perner 1983). Person schemata are closely related to basic perceptual processes. Therefore, we adopt Gallagher’s view that we can sometimes just directly perceive mental phenomena, but take it to be true only for basic mental phenomena. Person images presuppose higher-order cognitive processes, including conceptual and propositional representations, underlying a conscious evaluation of the observations. Here our background knowledge plays an important role in evaluating the mental phenomena. So on our view, the theory of direct perception is implausible for these complex phenomena.14 To sum up: the understanding of other minds is based on unconsciously established person schemata and consciously developed person images (if the latter are already established in the course of cognitive development), while both are normally closely interconnected.

12. We leave open the question to what extent person schemata are constituted by inborn or by learned dispositions. The examples mentioned above indicate that they involve properties of both kinds.

13. This is the true aspect of the narrative approaches to understanding other minds mentioned above (e.g. Hutto 2008). But narratives are only one method of establishing a person model. Representatives of the narrative approach underestimate other sources, like perceptions, feelings, interactions etc., which often do not involve narratives.

14. This is acknowledged by Gallagher (2005), since he supplements his theory of direct perception with a narrative view akin to Hutto’s (2008).


5.2 What is the evidence for our view?

As far as the non-conceptual understanding of other persons is concerned, an important ability is biological motion detection: just on the basis of point-light detection of a movement, we can see whether an observed person is a man or a woman and whether s/he moves happily or angrily (Bente et al. 1996, Bente et al. 2001). This is a very basic observational ability which allows us to register basic intentions of actions as well as basic emotions, without necessarily being conscious of it. Hobson and colleagues (Moore et al. 1997, Hobson 2002) showed that exactly this ability to perceive a biological movement as displaying a certain emotion (like anger or happiness) is impaired in autistic children. They do not understand bodily movements as expressions of emotions. These examples illustrate the capacity for a non-conceptual understanding of other minds, and they indicate that we (at least healthy subjects) can directly perceive these basic mental phenomena. To support the latter claim, we rely on the classical study by Heider and Simmel (1944), who showed that typical kinds of movements are immediately seen by us as intentional actions even if they are realized by geometrical figures. Furthermore, recent studies using that paradigm show that autistic patients characteristically lack this ability (Santos et al. 2008). Direct perception of basic mental phenomena like basic intentions and emotions is a standard ability, and lacking it has dramatic consequences for social interactions, because the person schemata then lack the standard information we normally receive. Furthermore, there is empirical evidence that we not only develop person schemata but also person images. Here we rely on folk-psychological evidence, on the one hand, and on the theory-of-mind ability, on the other, which allows us to establish complex person images. We can develop person images of individuals but also of groups. Such person models of groups are also called ‘stereotypes’. Stereotypes are an essential part of characterizing groups. One function of stereotypes is to provide an economical way of dealing with other persons (Macrae & Bodenhausen 2000). Besides minimizing cognitive effort, they also play an important role in social identification: situating oneself inside some groups but outside others seems to be a constitutive process of developing a social identity. It has been shown that even independently of competitive conditions we start to support in-group members (of a group we belong to) and disadvantage members of the out-group (Doise & Sinclair 1973, Oakes et al. 1994).


The existence of stereotypes is also supported by recent studies which try to identify the relevant neural correlates of stereotypes in social comparisons, claiming that the medial prefrontal cortex is essentially activated in these cases (Volz et al. under review). So there is not only evidence from folk psychology that we rely on stereotypes in classifying people but also some support from recent social neuroscience. Concerning person images of individuals, we all share the intuition that we develop a very rich and detailed image of people with whom we are very familiar, our husband or wife or our kids, say. The treatment of such a specific image as the image of an individual can again be disrupted in pathological cases: a patient may think that the person image of his brother can be instantiated in very different people (Fregoli’s syndrome involves a too coarse-grained individuation of person models), or a patient may suffer from Capgras’ syndrome. The latter patients have the delusional belief that one of their closest relatives, e.g. their wife, has been replaced by an impostor. They typically say things like ‘this person looks exactly like my wife, she even speaks and behaves like my wife, but she is not my wife’ (Davies and Coltheart 2001); due to a lacking feeling of familiarity, they insist on a too fine-grained individuation of person models, such that no one can satisfy it. Such pathological cases can be accommodated nicely within our general framework of person models. The general functional role of person models is to simplify the structuring and evaluation of social situations and to initiate adequate behaviour. An additional special functional role of stereotypes consists in stabilizing one’s self-estimation, since there is a strong tendency to have positive stereotypes of one’s own in-group members and negative stereotypes of the out-groups (see Volz 2008, 19). So there is empirical evidence in support of the person model theory for both levels, non-conceptual and conceptual mindreading.

5.3 What are the advantages of the person-model theory?

The thesis that we can directly perceive basic mental phenomena avoids the implausible claim, central to Theory-Theory, that we always have to rely on theoretical assumptions and make inferences when we try to understand other minds. Young infants at around one year of age do not seem to rely on any theory, even if we presuppose only a basic understanding of what a theory is.15


We argued that there is a non-conceptual understanding of other minds. We can also avoid the shortcomings of Simulation-Theory, which cannot account for the high-level attribution of the beliefs and desires that are used as input for a decision-making process—a deficit we pointed out in Goldman’s theory. Furthermore, Goldman is forced to include Theory-Theory in his hybrid account, since he cannot otherwise account for our understanding of people with radically different mental dispositions (like people suffering from schizophrenia, autism, Capgras’ syndrome etc.). In order to capture this, we offer the notion of a conceptual understanding of others by creating, using and improving person images of individuals and groups, which allow us to estimate quickly the mental situation of others. Finally, we can avoid the implausible implication of a pure Interaction-Theory that even complex mental states can be directly perceived. Instead, we offer the view that person schemata are essentially based on direct perception, while person images rely essentially on the interpretation of situations involving background knowledge and the construction of narratives. As we have emphasized in our criticism of Goldman’s Simulation-Theory, we have to acknowledge various quite different strategies of understanding others. We have distinguished, broadly, non-conceptual from conceptual understanding. Which of these strategies is (or needs to be) employed in order to understand another person depends crucially (a) on the person in question and our prior relation to and familiarity with that person (that is, basically, on the richness of our person model regarding that person), (b) on the situation and context and, finally, (c) on the type and complexity of the mental state(s) in question. All three dimensions have to be taken into account in developing a persuasive theory of understanding other minds.16

15. One version of Theory-Theory is the so-called Child-Scientist Theory. Its representatives argue that the understanding of other minds starts without an ability to understand false beliefs; this ability is learned—in a scientific fashion—in the first four years of life. For a critical discussion see Goldman (2006, Chap. 4). Regarding our view, it is sufficient to say that even according to child-scientist views it is not justified to attribute a theory to children before they have learned to master the false-belief task. There has recently been some debate about the onset of this competence, which we cannot touch on in this article.

16. For example, when we are very close to a person, we may rely on a non-conceptual way of understanding. We rarely need to theorize about what our own children may think or feel, for instance, because they are very close to us and we have developed a rich person model of them. But we may also rely on this non-conceptual strategy if we observe a complete stranger displaying a very familiar type of behaviour. If, by contrast, we see a person for the first time displaying behaviour that is quite strange to us, then we may need to employ various strategies at once in order to understand what she is up to; we may need to consult person images of other people and our own person model of ourselves. Similarly, when our children reach puberty, they may display quite strange types of behaviour, such that we may need to theorize about what the kids are up to, despite our rich person model of them.


That being said, we wish to clarify the relation between the person model theory and simulation theory: we can account not only for the observation that simulation strategies are sometimes used, but we can also indicate to what extent they are used. The simulation strategy of high-level mindreading as suggested by Goldman (estimating the actual beliefs and desires of someone, introducing those attitudes into my reasoning systems to produce a decision, and projecting this decision onto someone else) is used in situations in which I have evidence that another person is psychologically very similar to me: if I believe that the other person is of the same gender and age and in the same professional and private life situation, then I start to understand that person mainly on the basis of simulation. If I discover differences as time goes by, I start to quarantine individual differences and step by step develop an individual person image different from my self-image, which then becomes the basis of understanding that person. Such cases of strong psychological similarity are rare.17 Quarantining our own beliefs and desires is not a problem in our theory, because once I have developed a person image of Peter, for example, I can always rely on that image to understand Peter, and I may rely on another person image (or images), if faced with limits in understanding Peter’s behaviour, in order to improve and adjust my person model of him. But then it is in no way clear that we (sometimes or even always) use our person image of ourselves. If I have evidence that in a certain situation Peter behaves more like Karl (whose behaviour is also very different from mine), I immediately switch to my person image of Karl, using it as a pattern to understand Peter. Our introductory story of Ralph, who believes that Ortcutt is a spy, can also be nicely accounted for: we all usually acquire a stereotype of a spy by reading crime stories and watching movies. Such a stereotype is used by Ralph to understand Ortcutt’s behaviour although he has only very sparse observational information about him. The person-model theory can also account for the observation that, ontogenetically, human beings gain a better understanding of other people step by step. After first relying only on person schemata, we then develop person images, which become richer and richer. We can learn to understand a person who is psychologically radically different from us without ever being able to simulate her.

17. If a person has only a very impoverished self-model, then of course s/he can more easily detect people who seem to be like her. It is implausible that I have to rely on simulation to understand decisions which are common to almost all humans, because it is cognitively much more economical to rely simply on our stereotypes of common human behaviour.


We can simply enhance our repertoire of person images by acquiring (or adopting) person images which we find in storytelling, literature, the sciences and so on. We also acquire person images of familiar persons, with detailed knowledge of them, even if they are essentially distinct from ourselves in their psychological dispositions. Another advantage of the person model theory is that it can account for the fact that we sometimes make explicit evaluations of a person that do not fit with our behaviour towards her: if I consciously evaluate a person as trustworthy and friendly while at the same time my nonverbal communication signals that I am suspicious and notice some aggressiveness, then this can be described as a conflict between my person image and my person schema of the other.18 Finally, it is an advantage that we offer a theory which can account for the fact that, normally, first-person understanding and third-person understanding are roughly on the same level. Person schemata are the product of automatic psychological processes which develop and are used to treat first- as well as third-person information. To construct complex person images we have to learn the classifications of mental and physical dispositions, which are then used in both cases, for myself and for other people. If someone has a strong tendency to use only their own psychological dispositions and mind-set to understand other people, then this leads to a strong egocentric bias which limits adequate social interaction. An extreme example of such a bias is manifested in egomania. To sum up: this alternative view avoids the disadvantages and shortcomings of Theory-Theory, Simulation-Theory, Interaction-Theory, and the Narrative Practice Hypothesis while retaining their benefits. At the same time, it can account for many of our folk-psychological intuitions as well as scientific results in psychology and neuroscience. We are therefore optimistic that this sketch of the person-model theory can be further developed into a full-blown theory.

18. Along the same lines, some cases of self-deception can be characterized as cases in which my self-image is different from my self-schema.

REFERENCES

Baron-Cohen, Simon 1995: Mindblindness. An Essay on Autism and Theory of Mind. Cambridge, Mass.: MIT Press.
Baron-Cohen, Simon, Alan M. Leslie and Uta Frith 1985: “Does the autistic child have a ‘theory of mind’?” Cognition 21, 37–46.


Bente, Gary, Ansgar Feist and Stephen Elder 1996: “Person perception effects of computer simulated male and female head movement”. Journal of Nonverbal Behavior 20, 213–228.
Bente, Gary, Nicole C. Krämer, Anita Petersen and Jan Peter de Ruiter 2001: “Computer animated movement and person perception. Methodological advances in nonverbal behavior research”. Journal of Nonverbal Behavior 25(3), 151–166.
Damasio, Antonio R. 2003: Looking for Spinoza. Joy, Sorrow, and the Feeling Brain. London: Heinemann.
Davies, Martin, Max Coltheart, Robyn Langdon and Nora Breen 2001: “Monothematic delusions: Towards a two-factor account”. Philosophy, Psychiatry, and Psychology 8, 133–158.
Dennett, Daniel C. 1971: “Intentional Systems”. The Journal of Philosophy 68, 87–106.
— 1987: The Intentional Stance. Cambridge, Mass.: MIT Press.
de Vignemont, Frédérique 2009: “Drawing the boundary between low-level and high-level mindreading”. Philosophical Studies 144(3), 457–466.
de Vignemont, Frédérique and Pierre Fourneret 2004: “The sense of agency: a philosophical and empirical review of the ‘Who’ system”. Consciousness and Cognition 13(1), 1–19.
Doise, Willem and Anne Sinclair 1973: “The categorization process in intergroup relations”. European Journal of Social Psychology 3, 145–157.
Ekman, Paul, E. Richard Sorenson and Wallace V. Friesen 1969: “Pan-cultural elements in facial displays of emotion”. Science 164, 86–88.
Fletcher, P., F. Happé, U. Frith, S. C. Baker, R. J. Dolan, R. S. Frackowiak and C. D. Frith 1995: “Other minds in the brain: a functional imaging study of ‘theory of mind’ in story comprehension”. Cognition 57, 109–128.
Frey, Siegfried 1999: Die Macht des Bildes. Der Einfluss der nonverbalen Kommunikation auf Kultur und Politik. Göttingen: Huber.
Frith, Uta and Christopher D. Frith 2003: “Development and neurophysiology of mentalizing”. Philosophical Transactions of the Royal Society of London, Series B 358, 459–473.
Gallagher, H. L., F. Happé, N. Brunswick, P. C. Fletcher, U. Frith and C. D. Frith 2000: “Reading the mind in cartoons and stories: an fMRI study of ‘theory of mind’ in verbal and nonverbal tasks”. Neuropsychologia 38, 11–21.
Gallagher, Shaun 2001: “The practice of mind: Theory, simulation, or interaction?” Journal of Consciousness Studies 8(5–7), 83–107.
— 2005: How the Body Shapes the Mind. Oxford: OUP.
— 2007: “Simulation trouble”. Social Neuroscience 2(3), 353–365.
— 2008a: “Direct perception in the intersubjective context”. Consciousness and Cognition 17, 535–543.


— 2008b: "Another look at intentions: A response to Raphael van Riel's Seeing the invisible". Consciousness and Cognition 17, 553–555.
Gallagher, Shaun and Andrew N. Meltzoff 1996: "The Earliest Sense of Self and Others: Merleau-Ponty and Recent Developmental Studies". Philosophical Psychology 9, 213–236.
Gallagher, Shaun and Dan Zahavi 2008: The Phenomenological Mind. An Introduction to Philosophy of Mind and Cognitive Science. London: Routledge.
Gallese, Vittorio, Luciano Fadiga, Leonardo Fogassi and Giacomo Rizzolatti 1996: "Action recognition in the premotor cortex". Brain 119, 593–609.
Georgieff, Nicolas and Marc Jeannerod 1998: "Beyond consciousness of external reality. A 'Who' system for consciousness of action and self-consciousness". Consciousness and Cognition 7(3), 465–477.
Goldman, Alvin I. 1989: "Interpretation Psychologized". Mind and Language 4, 161–185.
— 2006: Simulating Minds. The Philosophy, Psychology, and Neuroscience of Mindreading. Oxford: OUP.
— forthcoming: "Mirroring, mindreading and simulation". To appear in Jaime A. Pineda (ed.), Mirror Neuron Systems: The Role of Mirroring Processes in Social Cognition.
Gopnik, Alison 1993: "How we know our minds: The illusion of first-person knowledge of intentionality". Behavioral and Brain Sciences 16, 1–15, 90–101.
Gopnik, Alison and Andrew N. Meltzoff 1997: Words, Thoughts, and Theories. Cambridge, Mass.: Bradford, MIT Press.
Gopnik, Alison and Henry M. Wellman 1994: "The 'Theory-Theory'". In: Lawrence A. Hirschfeld and Susan A. Gelman (eds.), Mapping the Mind: Domain Specificity in Culture and Cognition. New York: Cambridge University Press, 257–293.
Gordon, Robert M. 1986: "Folk Psychology as Simulation". Mind and Language 1, 158–171.
Heal, Jane 1986: "Replication and Functionalism". In: Jeremy Butterfield (ed.), Language, Mind, and Logic. Cambridge: Cambridge University Press, 135–150.
Heider, Fritz and Marianne Simmel 1944: "An experimental study of apparent behavior". American Journal of Psychology 57, 243–259.
Hobson, Peter 2002: The Cradle of Thought. London: Macmillan.
Hutto, Daniel D. 2008: Folk-Psychological Narratives. Cambridge, Mass.: MIT Press.
Jacob, Pierre and Marc Jeannerod 2003: Ways of Seeing. The Scope and Limits of Visual Cognition. Oxford: OUP.
Kanwisher, Nancy 2001: "Neural events and perceptual awareness". Cognition 79, 89–113.


Leslie, Alan M. 1987: "Pretense and Representation: The origins of 'Theory of Mind'". Psychological Review 94, 412–426.
Macrae, C. Neil and Galen V. Bodenhausen 2000: "Social cognition: Thinking categorically about others". Annual Review of Psychology 51, 93–120.
Meltzoff, Andrew N. and Jean Decety 2003: "What imitation tells us about social cognition: a rapprochement between developmental psychology and cognitive neuroscience". Philosophical Transactions of the Royal Society of London, Series B 358, 491–500.
Meltzoff, Andrew N. and M. Keith Moore 1977: "Imitation of facial and manual gestures by human neonates". Science 198, 75–78.
Moore, D. G., R. P. Hobson and A. Lee 1997: "Components of person perception: An investigation with autistic, non-autistic retarded and typically developing children and adolescents". British Journal of Developmental Psychology 15, 401–423.
Newen, Albert and Andreas Bartels 2007: "Animal Minds: The Possession of Concepts". Philosophical Psychology 20(3), 283–308.
Newen, Albert and Kai Vogeley 2003: "Self-Representation: The Neural Signature of Self-Consciousness". Consciousness and Cognition 12, 529–543.
Newen, Albert and Gottfried Vosgerau 2007: "A representational theory of self-knowledge". Erkenntnis 67, 337–353.
Nichols, Shaun and Stephen P. Stich 2003: Mindreading. An Integrated Account of Pretence, Self-Awareness and Understanding Other Minds. Oxford: OUP.
Oakes, Penelope J., S. Alexander Haslam and John C. Turner 1994: Stereotyping and Social Reality. Malden, MA: Blackwell.
Olsson, Andreas and Kevin N. Ochsner 2007: "The role of social cognition in emotion". Trends in Cognitive Sciences 12(2), 65–71.
Pauen, Sabina 2000: "Wie werden Kinder 'Selbst'-Bewusst? Entwicklung in früher Kindheit". In: Albert Newen and Kai Vogeley (eds.), Selbst und Gehirn. Menschliches Selbstbewusstsein und seine neurobiologischen Grundlagen. Paderborn: Mentis, 291–312.
Ratcliffe, Matthew J. 2007: Rethinking Commonsense Psychology: A Critique of Folk Psychology, Theory of Mind and Simulation. Basingstoke: Palgrave Macmillan.
Reddy, Vasudevi 2008: How Infants Know Minds. Cambridge, Mass.: Harvard University Press.
Rizzolatti, Giacomo and Laila Craighero 2004: "The mirror neuron system". Annual Review of Neuroscience 27, 169–192.
Rizzolatti, Giacomo, Luciano Fadiga, Vittorio Gallese and Leonardo Fogassi 1996: "Premotor cortex and the recognition of motor actions". Cognitive Brain Research 3, 131–141.


Santos, Natacha S., Nicole David, Gary Bente and Kai Vogeley 2008: "Parametric induction of animacy experience". Consciousness and Cognition 17(2), 425–437.
Saxe, Rebecca and Nancy Kanwisher 2003: "People thinking about people: The role of the temporo-parietal junction in 'theory of mind'". NeuroImage 19, 1835–1842.
Van Riel, Raphael 2008: "On how we perceive the social world. Criticizing Gallagher's view on direct perception and outlining an alternative". Consciousness and Cognition 17(2), 544–552.
Vogeley, K., P. Bussfeld, A. Newen, S. Herrmann, F. Happé, P. Falkai, W. Maier, N. J. Shah, G. R. Fink and K. Zilles 2001: "Mind reading: Neural mechanisms of theory of mind and self-perspective". NeuroImage 14, 170–181.
Vogeley, Kai and Albert Newen 2002: "Mirror Neurons and the Self Construct". In: Maxim Stamenov and Vittorio Gallese (eds.), Mirror Neurons and the Evolution of Brain and Language. Amsterdam and Philadelphia: John Benjamins, 135–150.
Volz, Kirsten G. 2008: "Ene mene mu—insider und outsider". In: Ricarda Schubotz (ed.), Other Minds. Die Gedanken und Gefühle anderer. Paderborn: Mentis, 19–30.
Volz, Kirsten G., Thomas Kessler and D. Yves von Cramon (under review): "Ingroup as part of the self: In-group favoritism is mediated by medial prefrontal cortex activation". Social Neuroscience.
Vosgerau, Gottfried and Albert Newen 2007: "Thoughts, motor actions and the self". Mind and Language 22(1), 22–43.
Wicker, Bruno, Christian Keysers, Jane Plailly, Jean-Pierre Royet, Vittorio Gallese and Giacomo Rizzolatti 2003: "Both of us disgusted in my insula: The common neural basis of seeing and feeling disgust". Neuron 40, 655–664.
Wimmer, Heinz and Josef Perner 1983: "Beliefs about beliefs: representation and constraining function of wrong beliefs in young children's understanding of deception". Cognition 13, 103–128.
Zinck, Alexandra and Albert Newen 2008: "Classifying Emotion: A Developmental Account". Synthese 162(1), 1–25.


VI. THE PHILOSOPHER REPLIES

Grazer Philosophische Studien 79 (2009), 245–288.

REPLIES TO DISCUSSANTS*

Alvin I. GOLDMAN
Rutgers University

I. Introduction

Initiated and organized by Gerhard Schurz, a workshop on various aspects of my research—especially epistemology—was held in Düsseldorf, Germany in May 2008. I was greeted there by a bevy of friendly philosophers with challenging critiques of positions I have defended either recently or over the years. The papers to which this article responds are products of that workshop. I have greatly enjoyed the opportunity to rethink these issues under the pressure of the thoughtful reflections of twelve astute philosophers. (I do not reply to Erik Olsson, because his contribution relates to a previous collaboration of ours and contains no critical comments directed at me.) If I occasionally go on at length, it is because the points at issue are important ones. I thank all of the contributors, and especially Gerhard Schurz. Vielen Dank!

II. Formulating and interpreting reliabilism (Grundmann, Baumann)

Many papers in this collection address the prospects of reliabilism about justification or knowledge. I therefore begin these responses with the question of how, exactly, reliabilism can best be formulated, interpreted and possibly revised. In this initial section I examine two papers that pose significant problems for reliabilism and weigh alternative formulations and interpretations to see which alternatives offer the brightest prospects.

The first paper is by Thomas Grundmann, who asks how reliabilism should deal with defeaters. Grundmann begins with the simplest version of reliabilism found in the inaugural statement of justificational reliabilism, "What Is Justified Belief?" (Goldman 1979).

* I am indebted to Blake Roeber for valuable discussion of most of the papers commented upon here.

This simple version of reliabilism says that a belief is justified if and only if it is formed by reliable belief-forming processes. As recognized even there, however, this formula has problems with defeating evidence that the subject ignores. In Grundmann's drug case, Betty forms a belief that she has just experienced an earthquake because it feels to her as if the ground is shaking, and this belief-forming process is reliable. However, Betty knows that she has recently taken a drug with a 50% chance of causing hallucinations. So, intuitively, Betty's belief isn't justified, because its prima facie justifiedness is defeated by her knowledge that she has taken the drug. She isn't entitled to trust her experience given her information about the drug. Thus, the simple analysis needs to be modified to accommodate defeaters.

Grundmann considers three possible strategies by which reliabilism might handle defeaters: (1) conservative reliabilism, (2) eliminativism, and (3) revisionary reliabilism. Conservative reliabilism tries to deal with unjustified belief that arises from (undercutting) defeaters by building the process of ignoring-available-counterevidence into the whole process responsible for sustaining the belief. This approach is regarded as a dead end. Eliminativism insists that reliably produced belief remains justified even if there is counterevidence available to the believer. Grundmann dismisses this view as poorly motivated. He then turns to revisionary approaches. One example of a revisionary approach is the one I offered in (Goldman 1979), i.e., adding a supplementary condition to the basic reliable-process condition requiring that there be no reliable process available to the subject such that, had it been used, it would have resulted in the subject's not believing p at t. Grundmann grants that this proposal is extensionally adequate, but he regards it as unsatisfying for two reasons. First, he says it is ad hoc. Second, he says it is not clear that it's a version of reliabilism. Reliabilism explains all justificationally relevant features as being objectively conducive to the goal of truth. But how, Grundmann asks, does the supplementary condition—there is no reliable process such that, had it been used, it would have yielded non-belief in p—fit this general picture? Requiring a subject to adapt his beliefs to internally available evidence comports better with internalism than with externalist reliabilism.
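Put schematically, the simple analysis and the supplementary condition look like this (a rough gloss of my own, not an official formulation):

\[
\textit{Simple reliabilism:}\quad J(S, p, t) \;\leftrightarrow\; \exists \pi \, [\, \pi \text{ produces } S\text{'s belief in } p \text{ at } t \;\wedge\; \mathrm{Rel}(\pi) \,]
\]
\[
\textit{Supplementary condition:}\quad \neg \exists \pi' \, [\, \mathrm{Rel}(\pi') \;\wedge\; \mathrm{Avail}(\pi', S, t) \;\wedge\; (S\text{'s using } \pi' \;\Box\!\!\rightarrow\; S \text{ does not believe } p \text{ at } t) \,]
\]

Here "\(\Box\!\!\rightarrow\)" is the counterfactual conditional. Grundmann's question is where the second, subject-relative clause fits into a picture on which all justificationally relevant features are objectively truth-conducive.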


This line of criticism, I submit, misinterprets the proper relationship between reliabilism, externalism and internalism. Under the standard interpretation, internalism is the thesis that all justifiers of a belief (i.e., conditions relevant to a belief's justificational status) are internal.1 Externalism is simply the denial of internalism. It is the thesis that not all justifiers of a belief are internal. In other words, externalism holds that some justifiers (or J-factors) are external. Assuming that reliabilism is a (paradigmatic) form of externalism, what does this imply about admissible justifiers under reliabilism? Does it imply that a theory fails to qualify as a version of reliabilism if it admits any internalist justifiers? Surely not. It would be wrong to argue that a theory that features any internal condition for justification thereby fails to be an externalist theory and therefore fails to be a form of reliabilism. To clarify this point, consider the following recursive principle of justification that appears in "What Is Justified Belief?"

(6B) If S's belief in p at t results ("immediately") from a belief-dependent process that is (at least) conditionally reliable, and if the beliefs (if any) on which this process operates in producing S's belief in p at t are themselves justified, then S's belief in p at t is justified. (Goldman 1979/1992, 117)

This principle of justification adverts to beliefs on which belief-forming processes operate as among the factors relevant to justification. Surely these beliefs are internal states. So principle (6B) invokes at least one type of internal state as a justifier. Moreover, (6B) also adverts to belief-forming processes as justifiers, and these processes are equally internal affairs. Does Grundmann mean to assert that these features of reliabilism disqualify it from being an externalist theory? I don't see him as making so radical a claim. Hence, it doesn't seem right to hold that any theory that admits beliefs and other mental states (e.g., perceptual and memorial experiences) as defeaters automatically abandons externalism.

However, Grundmann has additional grounds for discontent with my treatment of defeaters in "What Is Justified Belief?" The supplementary condition I provide, he says, fails to explain why internal adaptation to the evidence is instrumentally good with respect to the goal of truth. In other words, it doesn't explain why sensitivity to counterevidence is objectively truth-conducive. If it isn't truth-conducive, though, then my account is not reliabilist through and through (as Grundmann evidently wants it to be).

1. For example, Conee and Feldman (2001) hold that the supervenience base of justifiedness consists entirely of mental states and conditions.


As an improvement over my condition (referred to as "G"), Grundmann proposes the following revised theory:

(TG) S is justified in believing that p at time t if and only if (1) S's belief that p is based on a reliable process, (2) there is no conditionally reliable process available to S which (i) a properly functioning cognitive system of the kind to which S belongs would have used and (ii) which would have resulted in S's not believing p at t, and (3) the proper function mentioned in (2) can be explained with respect to getting at true beliefs.

Grundmann proceeds to argue that belief-inhibitory processes can be classified as reliable, at least if they eliminate false beliefs more often than true beliefs. He takes this to hold when they are part of a properly functioning cognitive system. A prime virtue of (TG), in his eyes, is that it makes clear how taking account of defeaters improves the overall truth-ratio of the system's beliefs.

I agree that it's a virtue of any account of defeaters (in the reliabilist framework) that it should show how, or reflect the fact that, being sensitive to defeaters improves the overall truth-ratio of a subject's beliefs. Perhaps (TG) does a better job of this than the corresponding principle (G) that I had formulated. On the other hand, I don't think it's the responsibility of a principle to explain why satisfaction of the defeater-sensitivity condition should be correlated with improvement in overall truth-ratio, as long as it is so correlated. And I would have thought—indeed, did think—that this correlation would be transparent enough in the case of (G).

Grundmann is presumably attracted to the proper-function feature of (TG) because he likes the proper-function idea generally, especially in the "naturalistic" form advanced by Millikan (1993) (not the supernaturalistic form it takes in Plantinga 2000). I am less attracted to proper functionalism because I am not convinced it works in detail, either as a theory of content or as an epistemology. One of its major problems in the epistemological area is the familiar swampman problem. Someone who recently popped into existence ex nihilo in the swamp would not have an appropriate evolutionary history to qualify as a possessor of proper functions, and hence would be disqualified by clause (2) of (TG) from having justified beliefs. Intuitively, however, it seems that a swampman would be capable of having justified beliefs.


In the interest of full disclosure, I would be remiss not to mention a forthcoming article (Goldman, forthcoming a) in which I tentatively propose a somewhat different theory of justification from my long-standing form of reliabilism, a theory that might handle defeaters rather easily but would probably be greeted unenthusiastically by Grundmann. The article sketches the attractions of a synthesis, or hybrid, of reliabilism and evidentialism. On the topic of inferential justification, it endorses a two-component theory. One component appeals to causal processes of belief-formation and the other appeals to relations of evidential support between (justifiably) believed premises and a target proposition. The theory says that a subject S is partly justified in believing proposition p if S satisfies just one of the two components with respect to p, and is fully justified in believing p if S satisfies both components.

An example that motivates the theory features Shirley and Madeleine. Shirley is highly incompetent at determining inferential support relations. When considering how much her total evidence supports a hypothesis, she usually gets confused, throws up her hands, and arrives at a degree of credence simply by guessing. On one occasion she makes such a guess, and by luck her guess is exactly right. She assigns degree of belief .45 to H when that is fully appropriate given her evidence. Madeleine has exactly the same background evidence as Shirley; but Madeleine is a highly proficient confirmation theorist. She applies proper inferential procedures to arrive at the same degree of belief in H, i.e., .45. Shirley and Madeleine have both done one epistemic thing well: assigned the right degree of belief to H. In another respect, however, Madeleine's epistemic performance is much better than Shirley's. Madeleine arrives at her degree of belief in fine fashion, whereas Shirley's method of choice is incompetent. A good way to assess their respective overall justificational statuses with respect to H is to credit Shirley with being partly justified and Madeleine with being fully justified.

How might this approach be used to deal with defeaters? A defeater is a newly acquired piece of evidence that, in combination with one's previous total evidence, renders it inappropriate to continue believing a target proposition (or appropriate to reduce one's degree of belief in it). Ignoring a defeater—i.e., maintaining one's prior belief or degree of belief despite the defeater—automatically yields unjustifiedness, at least on the evidential dimension. What shall we say about defeat on the reliable-process dimension? There are two possibilities here.


One possibility is to argue that the defeat notion can be captured in terms of reliable processes—specifically, omitted reliable processes. This might not add a separate and additional component of justifiedness, but would re-express the defeat notion in reliabilist-friendly terms. A second possibility is to contend that a reliable-process treatment of defeat adds a separate component for justificational assessment, additional to the evidentialist component. Thus, there might be a Shirley* and a Madeleine*, who perform equivalently with respect to a defeater on one dimension but not equivalently on the other dimension.

Let us illustrate this type of case: a case in which there is a "correct" response to a defeater in at least one respect. Suppose that Shirley* encounters an item of evidence D that defeats her prior evidence that strongly supported H. When D is combined with the prior evidence, it makes it inappropriate for her to believe H; indeed, the doxastic attitude that "fits" the new body of total evidence is considerably under .50. Shirley* takes notice of D, but as usual she is flummoxed by it. She is confused as to how it interacts with the previous evidence she had that favors H. Thus, she winds up guessing about the proper impact of her total body of evidence on H, and assigns the degree of belief .30. By luck, this is a fitting degree of belief. Is Shirley* justified in holding this doxastic attitude toward H? I would say that she is partly justified, insofar as she selects a suitable degree of belief. She isn't fully justified, however, because the way she arrives at this degree of belief is quite incompetent.

Now consider Madeleine*. She has exactly the same evidence as Shirley*, both in the initial stage and in the later stage, after acquiring evidence D. Madeleine* takes note of D, but unlike Shirley* she insightfully discerns its relationship to her total body of evidence and uses a reliabilistically competent process to select the degree of belief .30 for H. What is Madeleine*'s justificational status vis-à-vis H? I would say she is fully justified vis-à-vis H. She is justified both on the evidential, or fittingness, dimension and also on the psychological-process dimension. Like our previous Shirley-Madeleine example, this shows that it is useful and intuitive to have a two-component theory of justification, which works as well in the case of defeaters as it does in the general case.

How does this help with our original problem about the formulation of an adequate form of reliabilism? Ignoring, at this point, the evidential dimension of justification, we can formulate the reliabilist dimension as follows:

(RJ) S's belief (or degree of belief) in P is reliabilistically justified at t if and only if this belief (or degree of belief) is produced by a reliabilistically competent process or sequence of processes that are suitably applied to one's total bodies of evidence held at the times in question.


Of course, since we are generalizing the theory of justification to degrees of belief (not just binary belief), we would have to spell out what a "reliabilistically competent" process consists in; and this is a non-trivial task. But this task is set aside for present purposes. The novel point is that, by including a requirement that processes be suitably applied to one's total body of evidence, one disqualifies beliefs and degrees of belief that ignore evidential defeaters, because they fail to reflect the total body of evidence.2

Grundmann may be disappointed by my willingness to abandon pure reliabilism. I am sorry to disappoint, but I feel obliged to go where the (philosophical) evidence drags me. If he can persuade me that the hybrid view is hollow, unnecessary, or unhelpful, I shall be happy to abandon it. A different complaint Grundmann might register is that the new theory fails to exhibit the normative element in the way that (TG) does, for example. But I do not think that normative language must be built explicitly into the analysis. The analysis aims to specify the non-normative states and processes on which justifiedness (and unjustifiedness) supervene. It does not ignore the normative dimension of justifiedness, but simply provides the supervenience base of this normative property.

Peter Baumann discusses the problem of interpreting the notion of a reliable process in process reliabilism. Simple reliabilism says that a belief is justified if and only if it is produced by a reliable belief-forming process.3 Presumably this means: produced by a reliable process token.

2. Of course, (RJ) uses the term "evidence," an epistemic term. Thus, as stated, (RJ) does not present the kind of substantive analysis, wholly free of epistemic terms, that reliabilism originally presented as a desideratum (Goldman 1979). However, talk of evidence is readily replaced by talk of experience and (antecedently) justified belief, laying the groundwork for a recursive analysis of the kind promoted in (Goldman 1979). (RJ) also accommodates very readily the defeater-based solutions I have previously offered to certain counterexamples to reliabilism. In the case of BonJour's clairvoyants, for example, I argued (Goldman 1986) that some of these clairvoyants have evidence that defeats, or undermines, the proposition that they possess clairvoyant powers, and hence defeats, or undermines, a belief that comes "out of the blue" to the effect that the President is in New York City.

3. Actually, the analysandum on which Baumann focuses is knowledge rather than justification. This is slightly problematic for the following reason. As argued in (Goldman 1986), two notions of reliability must be used in connection with knowledge: "global" and "local" reliability. Global reliability is the reliability of a process across all or many of its uses. Local reliability is a strong modal notion, which excludes error possibility completely, at least in the closest or most "relevant" situations in which the target proposition would be false. However, this error exclusion applies only to a putative case of knowledge, and in the modal "vicinity" of its occurrence. The global or general reliability of a process is not so tightly constrained. General reliability is the only kind necessary for justification (according to process reliabilism). Hence, justification is the analysandum on which I shall concentrate here.


But what does a token's reliability consist in? How can a reliability number be assigned to a token process? Only process types have associated reliability numbers (other than 1 or 0). Given that a process token is instantiated by indefinitely many types, which type should be selected to represent the token's reliability? This is the familiar generality problem, to which we shall return a bit later. It is not Baumann's main problem, however. He is interested in what makes a process type count as reliable. I have offered different answers to this difficult question, and it's not clear which of these answers—if any—is satisfactory. Baumann divides the most attractive approaches into two types: modal approaches and probabilistic approaches. He reviews a number of proposals that have been considered, and indicates the problems they face. Oddly, he doesn't consider two of the proposals I find most pertinent—or why such proposals seem to be needed.

A salient difficulty for process reliabilism (about justification) is the demon-world case, in which an epistemic subject has visual experiences similar to yours or mine but none of these experiences is veridical—and nobody else has veridical visual experiences in this world. Intuitively, however, beliefs people would form by applying to these experiences the same belief-forming processes we use in the actual world would also be justified in the demon-world. This shows that the reliability score of a process in the world of an example cannot be the correct formula for scoring a process type's reliability. Yet that formula is the one many people are initially tempted to apply. A better interpretation I sometimes suggest is that the reliability of a process should first be judged by the truth ratio of its belief outputs in the actual world, and then this reliability score should be rigidified. Rigidification implies that visually-based beliefs produced by the same process in a demon-world will be just as justified as similarly produced beliefs in the actual world. This idea can also be expressed in terms of a two-dimensional semantics treatment of reliability (see Comesaña 2002). Although these proposals are helpful with the demon-world case, I would not claim that they resolve all problems.

Baumann does not find modal interpretations very promising. He prefers a conditional probability approach. His initial proposal (in a slightly more detailed rendering) is this:


"process P is reliable if and only if the conditional probability of having a true belief, given that P happens, exceeds some (reasonably high) threshold s." One problem here is that this formula expresses a different truth-related notion from reliability, namely question-answering, or problem-solving, power (see Goldman 1986, chap. 6). Reliability is attained if, among the beliefs that a process generates, a high proportion are true. But reliability does not require a process to generate true beliefs very frequently; most of the time it might generate mere suspensions of judgment, which don't count against reliability. Power, by contrast, is a magnitude that does require frequent generation of true beliefs. Suspensions of judgment, like false beliefs, are counted negatively. So the probabilistic definition Baumann really wants, I suggest, runs as follows: "process P is reliable if and only if the conditional probability of a belief's being true, given that it is caused by P, exceeds s."

Baumann says that he means his formulas to include the idea of "efficacy". This is correct if all he means is that the processes in question should cause the beliefs in question. But his formulations are sometimes compatible with the idea that a process should be efficacious with respect to a belief's being true. This is not exactly what we need. Reliable processes are not generally efficacious with respect to the truth-values of the propositions believed, but only with respect to the beliefs—i.e., believings—in those propositions. In other words, cognitive processes don't generally cause facts in the external world that render their belief outputs true. They merely cause beliefs in propositions that are true independently of the process. Thus, the formula I suggest at the end of the preceding paragraph is closer to what Baumann seeks, I believe.
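In schematic terms, the two probabilistic notions in play differ as follows (again a gloss, not a formula from Baumann's paper or from mine):

\[
\textit{Power:}\quad \Pr(\, S \text{ forms a true belief} \mid P \text{ operates} \,) > s
\]
\[
\textit{Reliability:}\quad \Pr(\, b \text{ is true} \mid b \text{ is a belief caused by } P \,) > s
\]

Conditioning on P's mere operation lets suspensions of judgment drag the first quantity down; conditioning on a belief's actually being produced excludes them from the second.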


Baumann rightly worries about the adequacy of the conditional probability interpretation. The chief problem he pinpoints is the familiar generality problem, which, in the context of a probability interpretation, is naturally formulated as a "reference class" problem. In the theory of justification, we start with a belief that is produced by a token process. But the processes featured in the conditional probability interpretations are process types. (He is explicit about this.) How, then, does an evaluator or theorist move from a process token to a selected process type, since—as is generally acknowledged—each process token of a particular (temporally dated) belief instantiates indefinitely many types? On what principled basis does an evaluator or theorist choose a particular type as the appropriate one to use in making a justification judgment?

This is the usual way of formulating the generality problem for reliabilism (see Conee and Feldman 1998). When so formulated, it is indeed unclear that any satisfactory answer has been provided. But perhaps the usual way of formulating matters has a mistaken presupposition. Maybe judgments of justification do not require the choice of a unique process type—yet nonetheless rely on reliability scores. This is the suggestion of Mark Wunderlich (2003). What we are interested in, as judges of justificational status, is the comparative status of multiple pairs of beliefs. As Wunderlich argues, this may not require us to select a unique process type for each member of such a pair and then compare their reliability scores. Instead, says Wunderlich, each token process may be associated with a profile of types. Two profiles can be compared by first establishing a correspondence between the types in each profile and then comparing the members of these pairs for reliability. For example, if every type in the first profile is more reliable than its mate in the second profile, then the first profile has greater overall reliability than the second profile. Thus, reliability scores can generate comparisons of justifiedness even if there is no way to select a unique type for each token. Of course, only in rather special cases will it be true that every type in one profile has a higher reliability score than its mate in the other profile. So the criterion of comparative justifiedness just sketched won't help with very many cases. But Wunderlich also explores other criteria that are less restrictive.
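The dominance criterion just sketched can be stated compactly (the notation is a gloss, not Wunderlich's own). Let token processes \(\pi_1\) and \(\pi_2\) be associated with type profiles \(\langle T^1_1, \dots, T^1_n \rangle\) and \(\langle T^2_1, \dots, T^2_n \rangle\), matched member by member, and let \(r(T)\) be the truth-ratio of type \(T\). Then:

\[
\forall i \, \bigl( r(T^1_i) > r(T^2_i) \bigr) \;\Rightarrow\; \pi_1 \text{ has greater overall reliability than } \pi_2.
\]

This is strict dominance over whole profiles, which is exactly why it delivers a verdict only in rather special cases.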


It remains to be seen whether Wunderlich's "vector reliability" approach will prove satisfactory; it has yet to be examined in the literature. What I suggest here is that there are novel ways to address the generality problem that may indeed be promising. So reliabilism is not saddled with the dead-end prospect that Conee and Feldman (1998) provide. Baumann seems pessimistic about finding any solution. I am not quite so pessimistic.4

4. Another promising tack, of an entirely different nature, is currently being explored by Erik Olsson. I leave it to Olsson to expound his ideas in future work.

III. Reliabilism and the value of knowledge (Piller, Werning)

Christian Piller considers the two solutions Erik Olsson and I offer (Goldman and Olsson 2009) to the value problem for reliabilism, the problem of how reliabilism can explain the extra value that knowledge has as compared with true belief. We call this the "extra value of knowledge" (EVOK) problem. Piller discusses the two solutions in the context of his own interesting views about normative consequentialism in general. As he indicates, Olsson and I had different ideas about the best response available to reliabilism. Piller discusses both solutions, but here I respond only to his discussion of the second solution, the one I favor.

An important difference between Olsson's response to EVOK and mine, as Piller indicates, is that Olsson accepts the challenge in the usual terms, whereas I do not. In its usual terms, the challenge is to explain why knowledge has more "objective" value than mere true belief. Olsson's solution is offered in these very terms (though he rejects the assumption that knowledge is always more valuable than true belief). I prefer to address the challenge by changing the terms of the "evidence." It is not a datum, by my lights, that knowledge is more valuable than mere true belief. The datum I accept (provisionally) is that people regard knowledge as more valuable than mere true belief. The applicable problem for reliabilism, then, is to explain why we value knowledge more than true belief. The common thread of our responses is that knowledge is true belief formed by a reliable process, and the process component somehow adds value to the true belief component, so that a composite state of affairs consisting of a true-belief-caused-by-a-reliable-process has (or is assigned) more value than the same true belief would have if it were caused by an unreliable process.

The solution I offered has two elements: type-instrumentalism and value autonomization. The second element receives greater emphasis in the published paper. For example, the section containing this solution is called "Value Autonomization." Here, however, I want to give greater weight to the first element: type-instrumentalism. I also want to modify what I previously said about value autonomization.

To review, the chief difficulty posed by the so-called "swamping problem" (or EVOK) is that any value a reliable process brings to the table is derived from the true belief it causes. This value, then, cannot be added to the true belief to create extra value, because that would involve double counting. Whatever (epistemic) value is contained in the reliable process derives from its being a means, or instrumentality, to the production of the true belief. But all of this epistemic value is contained in the true belief itself, which is part of the composite state of affairs that knowledge consists in (according to reliabilism). So the instrumental value of the reliable process cannot really contribute any extra value.

My reply to this argument (in the original paper) is that it presupposes a token-instrumentalist account of surplus value.


It assumes that whatever value is contained in a token reliable process derives wholly from the singular causal fact relating the token process to its token belief output. When the token belief output is true, the token reliable process accrues some instrumental value from it in virtue of two things: (i) the true belief has (fundamental) epistemic value, and (ii) the token reliable process bears a singular causal relation to that true belief. My strategy is not to deny that this is one way in which reliable processes can accrue value. What I deny is that it is the only way they can do so.

A second way value instrumentalism might proceed is via type-instrumentalism, a distinct but little-noticed version of instrumentalism about value (I have not encountered it elsewhere). Suppose that a certain type of belief-forming process reliably generates true beliefs. That is, a high proportion of the beliefs it generates are true. Given that true beliefs are (epistemically) valuable, this type of process then accrues type-instrumental value. Since reliability does not require 100% accuracy, some tokens of the process generate false belief tokens rather than true ones. Do these process tokens accrue instrumental value? According to type-instrumentalism, they do. They inherit epistemic value from their associated process type, by virtue of being instances of this type (which has type-instrumental value). In cases in which the token process does not generate a true token belief, token-instrumentalism and type-instrumentalism render different judgments about the value-status of the token process. Whereas token-instrumentalism says that it lacks any epistemic value (more precisely, it lacks what I elsewhere call "veritistic" value), type-instrumentalism allows that it does have instrumental value. We might call this type-inherited instrumental value. At least, such a token has default instrumental value in virtue of inheritance. This value can be overridden, however, by other instrumental relationships, whether type-inheritance or singular-causal relationships.

In the case of processes that do cause true beliefs, they can enjoy two types of instrumental value. First, they can possess token-instrumental value by virtue of standing in singular causal relations to true belief tokens. Second, they can possess type-inherited instrumental value by inheriting such value from a process type, most of whose tokens cause true belief tokens. In the latter case, obviously, a process token acquires some but not all of its instrumental value from the true belief token that it causes. The preponderance of its instrumental value, though, is acquired (ultimately) from different true belief tokens. The upshot is that the EVOK problem is averted.


The composite state of affairs consisting of the reliable-process token together with the true belief token it produces can have more value than a counterpart composite would have in which the true belief token is caused by an unreliable process. The first composite has extra value derived from sources other than the true belief that it causes, so this extra value can redound to the composite without reliance on double-counting.

The foregoing articulation of type-instrumentalism (some nuances of which are new to this presentation) and its bearing on the swamping problem proceeds in terms of the standard formulation of the issue, i.e., in terms of objective value possession rather than value attribution. I assume, however, that type-instrumentalism and its implications can also be re-formulated in terms of value attribution, my favored terms. I won't undertake such a formulation here. But the remainder of my discussion will highlight value attribution.

Piller resists type-instrumentalism about value, just as he resists the more familiar token-instrumentalism. With respect to type-instrumentalism, he offers the following counterexample: "It was not a good thing to take the train that derailed and killed me. Usually, trains bring one to the desired destination. Such killer trains, however, do not seem to inherit any goodness from their safe relatives." I respond that this is by no means clear. If trains reliably bring one to one's desired destination, and if train X was scheduled to travel to that destination, then taking train X was a "choiceworthy" action. Choiceworthiness is a kind of value. To be sure, this particular taking of train X also had a salient disvalue, since it involved a derailment that killed the agent. The upshot is that the choiceworthiness value of taking the train is overridden by another, disvaluable feature. This does not obliterate the fact that taking train X on this occasion had the property of choiceworthiness. Moreover, its choiceworthiness arises in part—as type-instrumentalism maintains—from its being a token of a type with instrumental value, viz., reliably bringing passengers to desired destinations.

Piller would deny, of course, that choiceworthiness is a kind of value. Instead, he would consider choiceworthiness a deontic status, a thesis he elaborates at the beginning of his paper. In general, he tries to explain away all temptation to talk of instrumental value by substituting deontic status for such (putative) value. This approach has a straightforward problem. Non-actions as well as actions are often regarded as bearers of instrumental value. Rain that falls on the lawn, for example, is valuable as a means to keeping the lawn attractive. Rain, however, has no deontic status. (Thanks to Holly Smith for the general point as well as the example.)


What about value autonomization? Our paper gives two slightly different characterizations of autonomy or autonomization. The first characterization is that an autonomous value is a value of an event or property that isn't "dependent, on a case-by-case basis, on the value of" its results. Our second characterization of autonomization is that of "promoting" a value from one status to another, specifically, from instrumentally valuable to fundamentally valuable. Here I want to emphasize and slightly modify the first of these characterizations, while discarding the second entirely. What we should have said—given that the revised topic concerns modes of valuing rather than types of value—is something like the following. An act of valuing a given event or property is autonomous, or non-instrumental, when it does not rest on any belief to the effect that the target event or property is (or will be) a cause of some valued outcome. Thus, in the case of knowledge ascription and valuation à la reliabilism, the attributor assigns value to a specified belief-generating process without assuming, or presupposing, that this instance of the process results in a true belief. (Of course, the attributor must realize that the agent's process results in a true belief. Otherwise, the target cognitive act has no prospect of qualifying as knowledge. But the valuation of the process component of the knowledge state does not depend on, or presuppose, the resultant belief's being true.) An act of valuation that satisfies this independence condition has the crucial property of not contributing toward an act of double counting. The attributor's valuation of the process type (and hence token) may historically derive from his (or his community's) recognition that the process type in question usually leads to true beliefs. But this leaves his current application of this value free of any prior commitment to the proposition that the token process generates a truth. The reliabilist reconstruction of knowledge valuation is thereby inoculated against the swamping problem.

Notice that under this new interpretation of autonomy, no commitment to "fundamental" value is involved. Hence, Piller's objection (against the old version) that putative acts of value autonomization always rely on fresh "noticings" of fundamental values goes by the boards. His point, whether correct or incorrect, no longer bears on the new kind of autonomy crucial to the account.

Markus Werning. What about the genesis of our valuation of reliable belief-forming processes? How should that story go? Werning suggests an interesting part of the story. I particularly like the social component he presents near the end of his paper. Here are some pieces of that component of his story:


By valuing reliably produced beliefs more, we have a chance to manipulate our testimonial environment in a positive way. The underlying assumption is that valued practices are more likely to be repeated in the future than unvalued ones … When a child's belief is valued and the assignment of value is expressed by praise if the belief is based on evidence rather than hearsay, we enforce certain doxastic dispositions … Since the success of our own truth-seeking activities strongly depends on testified beliefs of others being reliably produced, we have developed a culture of positive and negative sanctions regarding the production history of beliefs. The extra value of knowledge is manifest in a practice of enforcing and disenforcing which favors reliably produced beliefs over others.

In our paper (Goldman and Olsson 2009), we offered an ever-so-tiny hint of the importance of social origins in the valuation of reliable processes (see footnote 14). A much wider story is certainly in order, and other parts of Werning's evolutionary model are quite congenial in this regard. However, I won't explore this territory any further, given its diminished importance for present purposes in light of my present revision of the second solution to the EVOK problem.

Although Werning hits many nails on their heads in discussing the evolutionary background of the valuation of reliable belief formation, I think he goes astray in interpreting the relationship between the first and second solutions to the EVOK problem that Olsson and I offered. Here is how we introduced our solutions:

[W]e propose two distinct solutions to the EVOK [extra-value-of-knowledge] problem. The solutions are independent, but they are also compatible with one another and perhaps complementary. (Goldman and Olsson 2009; emphases added)

We indicated that the two of us had different preferences between the two independent solutions. Olsson favored (and indeed developed) the conditional probability (CP) solution, and I favored (and developed) the second. In light of this, I don't think of myself as having a "stake" in the CP solution's viability, and won't try to defend it here. I leave that task to Olsson, who makes a highly pertinent response in his paper that appears here (Olsson, this volume). At the same time, I shall continue to defend the second solution—modified, as indicated above, to emphasize type-instrumentalism as more central to the solution than value autonomization.


I don't defend this solution as part of a "package deal," i.e., as joined with the CP solution. I disagree with Werning's apparent diagnosis that the two solutions are "joined at the hip." So I am in a position to concede the strength of Werning's objection to the CP solution, but I find Olsson's reply to it quite persuasive. In any case, it poses no threat to the type-instrumentalist solution, which is entirely independent.

Nonetheless, let me review Werning's interesting observations about the CP solution. First, the solution receives two formulations of the basic idea, which are not equivalent to one another. Second, each formulation has its problems. The two formulations are:

(a) The probability of S's having more true belief of a similar kind in the future is greater conditional on S's having the reliably produced true belief that p than conditional on S's merely truthfully believing that p.

(b) S's reliably produced true belief that p makes it likely that S's future beliefs of a similar kind will be true.

Werning agrees that formula (a) is generally true under reliabilism. The trouble is that it doesn't guarantee any causal relationship between a reliably generated true belief and a similar future true belief. The possibility of a common-cause situation is what excludes any such guarantee. But in a common-cause situation, where the reliably generated true belief fails to cause the future true belief, the reliably caused true belief cannot obtain any extra value in virtue of an instrumental relationship to the future belief. Only singular causal relations enable instrumental value to be acquired. This is how I understand Werning, at any rate.5 As concerns formula (b), the problem is its falsity. In common-cause situations, the likely truth of future true beliefs of the same type is not "causally grounded" in the first true belief being reliably produced.

5. Werning does not spell out the step from the lack of a singular causal relation to the absence of instrumental value creation. But I assume this is his worry.
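Formula (a) amounts to a bare inequality (schematically, in a gloss of my own rather than Werning's notation):

\[
\Pr(\, \text{future true beliefs of a similar kind} \mid \mathrm{RelTrueBel}_S(p) \,) \;>\; \Pr(\, \text{future true beliefs of a similar kind} \mid \mathrm{TrueBel}_S(p) \,)
\]

The point to notice is that such an inequality can hold merely because the reliable process is a common cause of the present true belief and the future ones; it does not require that the present belief itself be a means to the future beliefs.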


Suppose, for the sake of argument, that we granted the force of these points against the CP solution to EVOK. What would the implications be for the second, type-instrumentalist solution—or the value-autonomization solution, as it is billed in the Goldman-Olsson paper? Werning sees one motivation for the value-autonomization solution as the desire to cover "all" cases of knowledge, not just "most" cases. This correctly recounts one rationale given in our joint paper. But if one already prefers the type-instrumentalist solution, this is not a strong additional motivation. And it is no reason to require the two solutions to be joined at the hip. If the second solution works, it works all by itself and has the indicated strength; there is no reason why it should be wedded to the CP solution.

Werning seems to locate another reason, however, why the value-autonomization solution hinges on the CP solution. He argues as follows:

[T]he attribution of non-instrumental general value to knowledge as a result of some mechanism of value autonomization causally presupposes that instrumental value is normally attributed to knowledge in an initial phase. The initial attribution of instrumental value in normal cases, however, is sufficiently explained by G&O only if the CP solution achieves to explain the instrumental value of knowledge in normal cases. I have argued that this is not the case because we face a common-cause rather than a means-to-end scenario with regard to future true belief. … Consequently, there fails to be a gain even of instrumental value that is grounded in the property of being reliably produced. G&O's value autonomization account of the extra value of knowledge is not a second, sovereign solution to the Swamping Problem … but stands and falls with the … conditional probability solution. (This volume, 148f.)

There is a confusion here, however. The relevant causal connection needed for the type-instrumentalism/value-autonomization solution is only a causal connection between a reliable process and a token belief that it causes. This has nothing to do with causal relations between the composite of such events (a state of knowledge) and future true belief tokens. The latter is what Werning worries about in his critique of the CP solution. But his worry raises no problem for a causal relation between a belief-generating process and the belief it generates.

Let me next turn to the (substantial) modifications I have made in my present formulation and defense of the type-instrumentalist solution. This solution does not really need the thesis of value autonomization. It doesn't have to show that the value of a reliable process that causes a true belief must be fundamental (or non-instrumental, at any rate) for it to pass the "no double counting" stricture. As argued above in my reply to Piller, there is such a thing as type-inherited instrumental value. This is a species of instrumental value that reliable processes may enjoy, and it may be the kind of value such a process contributes to a composite consisting of a token reliable process and a true belief that it causes.


Since this sort of value isn't acquired directly from the true belief it causes, it is a kind of "extra" value that doesn't violate the no-double-counting stricture. It is not, however, a species of fundamental value. I no longer need to defend a story of value autonomization for reliable processes, because extra (i.e., independent) value can be achieved in the type-inheritance fashion even if it is a species of instrumental value. So Werning's worries, whatever their merit as directed against the original account in the Goldman-Olsson paper, are not genuine worries for the modified view presented here.

IV. Is There a Weak Sense of 'Know'? (Jäger, Brendel, Schurz)

Christoph Jäger examines the suggestion that there is a "weak" sense of 'know' in which it means "believe truly." The suggestion to this effect is not meant to deny that there is another sense of 'know' in which it means something in the vicinity of popular analyses of the form, "believe truly and justifiably (plus X)" (where 'X' is replaced with a suitable anti-Gettier clause). Jäger focuses on arguments I offer in Knowledge in a Social World (1999a) and "Reliabilism and the Value of Knowledge" (Goldman and Olsson 2009), and similar ones John Hawthorne gives in several places (e.g., Hawthorne 2002). Jäger disputes the conclusion that there is such a weak sense of 'know', offering several responses and alternative diagnoses of the evidence.

Jäger's major themes are stated briefly in the first paragraph of his paper and developed in the remainder. His first suggestion is that when we concede that S knows that p even though we are uninterested in whether her credal attitude is based on adequate grounds, we are speaking loosely. Although "loose speech" is one possible way to explain the uses adduced in support of the weak-knowledge thesis, Jäger does not tell us in detail what he means by "loose speech." Nor does he give a detailed defense of why a loose-speech explanation is superior to the hypothesis that 'know' is polysemous in the fashion indicated.

A different complaint he lodges against the adduced evidence is that the questions posed to the ascriber are "leading questions," and hence (presumably) non-diagnostic. Such questions as "How many people in the room know that Vienna is the capital of Austria?" or "Which of the students know that …?" imply that at least one of the candidates can give the correct answer. The wording of the question implies that "none" is an inappropriate answer.


Thus, the respondent is impelled to assent to one or another knowledge ascription to answer the question appropriately. This is a biased test, he suggests. So let's change the test.6 Suppose a teacher says to her students, "Who knows what city is the capital of Austria? If you know, write the city's name on a piece of paper and hand it in." Some students submit the correct answer, viz., Vienna. Would the teacher, or an observer, say of these students that they know the answer? Certainly! (At least if nothing additional is built into the scenario.) What, then, should we infer about the sense of 'know' that lies behind such verbal ascriptions, or preparedness to make them? Writing the answer strongly suggests that the students think, or believe, that Vienna is the capital of Austria, and the answer, of course, is correct. So it would explain the teacher's (or the hypothetical observer's) knowledge ascription to suppose that she implicitly has a mental representation of 'know' in which it means "truly believe." And it would be plausible to add that this was all she intended to ask about when she said, "Who knows what city is the capital of Austria?" She meant: "Who has a true belief with the content '… is the capital of Austria'?" Moreover, there are no leading questions in this scenario.

What about the belief condition? Must the belief be firm, as Jäger urges? And if firm belief is required, is that a difficulty for the view I put forward? I have not taken a clear-cut position about the strength of belief required for weak knowing. Is this a serious flaw? For years epistemologists have debated the analogous question for the strong sense of knowing (JTB + X) without reaching much consensus. Why should one have to take a stance on this matter to provide a substantial case for the existence of a weak sense of 'know'?

Jäger gets tangled in some unfortunate knots in pursuing the matter, even when it comes to textual interpretation. For example, he correctly notes that in Knowledge in a Social World (1999a, 88) I allude to the difficulty of translating the belief categories of the "trichotomous" approach—belief, suspension, and disbelief—into the categories of the degree-of-belief (or subjective probability) scheme. He then interprets me as equating firm belief with a subjective probability of 1.0. I am puzzled what textual evidence in Knowledge in a Social World leads him to this (unintended) interpretation.

6. Thanks to Holly Smith for the example.

[S]hould plain ‘belief’ in the trichotomous scheme be translated as ‘DB 1.0’? That would be misleading, since not all belief involves total certainty. (1999a, 88)

Does this imply that “firm” belief must be equated with certainty, or 1.0? Surely not. “Firm belief” might be equated with, say, any subjective probability greater than 0.95. Jäger finds the firm belief option (construed as subjective probability = 1.0) unpalatable, so he proceeds to conclude (without warrant) that weak knowledge must involve weak belief, where weak belief is any subjective probability between 0.5 and 1.0. He calls this “superweak” knowledge, and argues that this is an unsuitable view. But why must a proponent of weak knowledge adopt this option? Higher choices of a threshold for firm belief are certainly available. Granted, the choice of any specific probability threshold is problematic. Among other things, lottery problems arise. But this isn’t a special difficulty for a weak-knowledge hypothesis. Pinpointing a threshold level of belief for strong knowledge is an equally pressing problem, but nobody would jettison commitment to a strong sense of knowledge on this flimsy basis.
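In schematic terms (again my shorthand, offered for illustration rather than as a considered proposal), the disputed readings of the belief condition come to this:

$$\text{firm belief:}\quad \mathrm{DB}_S(p) > t \ \text{ for some high threshold } t \ (\text{e.g., } t = 0.95)$$
$$\text{Jäger’s “superweak” reading:}\quad 0.5 < \mathrm{DB}_S(p) < 1$$

On the first reading, weak knowledge that p is simply a true belief meeting the chosen threshold; nothing in the weak-knowledge hypothesis forces the second reading.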

The strength-of-belief question leads Jäger to pose a supposedly new problem for the weak-knowledge hypothesis. “The problem with the Hawthorne-Goldman-Olsson example thus is that a subject would not normally come to ‘possess the information that p’ even in the sense of generating a weak true belief that p when p is stated, or in some other form presented to the subject, by someone they know to be an untrustworthy informant.” This is because a minimally rational subject will not even weakly believe p if she believes—indeed knows—that her only source of information that p is untrustworthy. How does this seemingly innocuous point generate a problem for the weak-knowledge hypothesis? If the subject does not generate a strong enough belief when she knows that her source is untrustworthy, then she clearly lacks weak knowledge (of p). Why is that a problem? If the subject does not generate a strong enough belief in this circumstance, then, to be sure, it would be inaccurate to ascribe knowledge to her—even weak knowledge. But that is hardly a difficulty for the weak-knowledge view.

Notice that it isn’t implied by the weak-knowledge hypothesis that potential knowledge attributors never pay attention to the adequacy of a subject’s evidence (including the trustworthiness of a source). Adequacy of evidence clearly comes into play whenever a potential attributor is thinking of applying the strong sense of knowledge. When does that transpire, you may ask? What conditions prompt speakers to deploy the weak sense of ‘know’ and what conditions prompt them to deploy the strong sense? These are jackpot questions, of course. Unfortunately, semantical and pragmatic theories of language and speech are insufficiently developed to address these kinds of questions, even in uncontroversial cases of polysemy. When it comes to ‘know’, we are still in an early stage of marshalling evidence for and against polysemy. It is no embarrassment to lack a general account of when one sense of ‘know’ is mentally accessed in a speaker’s head and when another sense of ‘know’ is so accessed.

This is no reason to avoid speculation, however. A plausible hypothesis is that whenever questions of adequacy of evidence or informational source are specifically raised in conversation, the strong sense of ‘know’ is mentally accessed, and the speaker decides, based on available evidence, whether or not it is applicable. If no questions of evidence or informational source concerning the subject are on the conversational table, a speaker may be interested in the applicability of the weak sense.

All this can explain Jäger’s report concerning a test he used with his own class. The class received putative information about the capital of Zimbabwe from cards delivered by a machine. They were told that the machine had only a 1/30 chance of being correct. None of Jäger’s students said that they knew (based on what their card said) what the capital of Zimbabwe is. If my previous hypothesis is right, the salient unreliability of the information they received would trigger in their minds the strong sense of ‘know’ rather than the weak one. If the strong sense of ‘know’ was before their minds when they declined to claim knowledge, this provides no negative evidence about the existence of a weak sense of knowledge stored in their semantic repertoire. It just shows that they didn’t use such a weak sense on the occasion in question. This does not imply, or support, the nonexistence of such a sense of ‘know’.

It is unfortunate that the debates about weak knowledge, on both affirmative and negative sides, have focused so heavily on a narrow class of examples. I am as guilty as anyone of contributing to this situation. So let me now turn to a different example I adduced in Knowledge in a Social World, one that hasn’t been discussed again by me or by anybody else (so far as I know). I believe I can now strengthen the case in support of a weak sense of ‘know’. Consider the following case. Smith (running in from the street) blurts out to Jones:

(1) You don’t want to know what just happened to your new car that was parked in the street.

What does Smith mean by this utterance? Is (2) a good paraphrase of (1), as implied by the standard analysis of (strong) knowing?

(2) You don’t want to have a justified unGettierized belief in a certain (true) proposition describing what happened to your new car that was parked in the street.

(2) correctly captures the idea, conveyed by (1), that there is some state Jones wants to avoid (or should avoid) involving the true proposition in question. But does (2) correctly identify which state this is? Is the state of affairs to be avoided that of having a justified unGettierized belief in the proposition? Notice some possible ways of avoiding this state. It could be avoided by having an unjustified belief in the proposition; or by having a Gettierized belief in the proposition. Is either of these methods a good way to avoid the state Smith means to specify in uttering (1)? No. These states of affairs are not what Smith’s language conveys. What Smith’s utterance conveys is that Jones should avoid being aware or apprised of this truth—that is, should avoid believing it—because he would be enraged, upset, depressed, or whatever if he did come to believe it. If this is right, then what is conveyed by ‘know’ in (1) is the meaning “believe [the truth in question].” This is the weak sense of ‘know’ I have been promoting. It helps capture just the meaning Smith would naturally express by (1). The strong sense of ‘know’, by contrast, provides a totally implausible reading of what is naturally expressed by (1).

Elke Brendel devotes a small chunk of her paper to the issue of weak knowledge, and I briefly reply to that chunk here. Brendel suggests that some of the paraphrases offered of ‘know’ in the weak sense aren’t adequate, including “possess information” and “be cognizant of (a fact)”. In neither case is it entailed that the knower believes the proposition, whereas, she argues, a strong positive degree of conviction is necessary. I am tempted to agree that these two paraphrases of weak knowledge are not quite accurate, for the reason she offers, but this is no objection to the preferred paraphrase “believe truly.” Furthermore, in a different frame of mind, I am tempted to point out that knowing itself—whether in the weak or the strong sense—doesn’t always seem to entail positive conviction. For example, the following sentence seems quite acceptable: “Although I knew that p was the case, I just couldn’t bring myself to believe it.” This use of ‘know’ parallels Brendel’s yellow-journalism case about the celebrity love affair. It shows that not only “information” can be possessed without conviction, but “knowledge” as well.

Gerhard Schurz also has a brief segment of his paper dedicated to criticizing the existence of a weak-knowledge concept. He states two principles:

(3) In (almost) all ordinary contexts it is meaningful and rationally coherent to assert the following in regard to a proposition p: I believe p but I don’t know p.

(4) If, in a given context, knowledge would mean true belief, then the assertion “I believe p but I don’t know p” would be rationally incoherent—more precisely, it would contradict [certain] rationality principles [which Schurz later enumerates].

I am prepared to accept (3) but not (4). Indeed, I think one can never say that “knowledge” would mean true belief in a given context—that is, would have a specific sense fixed by the context independent of the utterance in which it occurs. Which sense ‘knowledge’ or ‘know’ has in a given context depends on the sentence uttered. Different sentences might evoke different senses in the minds of hearers than the true belief sense. This might be true of “I believe p but I don’t know p.” This does not prove that there is no sense in which ‘know’ means “truly believe.”

Which sense is likely to be assigned to ‘know’ when used in the sentence “I believe p but I don’t know p”? Two senses seem promising. One is the standard meaning offered by epistemologists in which ‘S knows p’ entails ‘S justifiably believes p’. Another is the sense that epistemologists occasionally acknowledge for the predicate ‘knows that p’, namely, “is sure (certain) of p”. In this latter sense, “I believe p but I don’t know p” could be paraphrased as “I believe p but I’m not subjectively sure, or certain, of p”. Notice that this sense of ‘know’ resembles the weak sense insofar as it entails nothing about justification or Gettier-proofness. If one would like some evidence for a subjective certainty sense of ‘know’, here is a case in point: “In reading the detective novel, Jeremy just knew that the butler was the murderer until the final chapter, when it turned out that he wasn’t.” This example does not support the weak sense directly, because it isn’t a case in which ‘know’ means “believe truly.” But it helps to show that ‘know’ partakes of polysemy, which should make one more receptive to the weak sense of ‘know’.

Schurz offers a long and complex argument against the existence of weak knowledge, which appeals to numerous principles of rationality to establish (4). Many of these principles strike me as highly dubious, so the “proof” he adduces rests on debatable grounds.

However, I won’t detail my doubts about these principles, for reasons of space. I shall content myself with the point made above, namely, that (4) is highly questionable because the sense of a polysemous term can rarely be fixed in advance simply by a context alone without considering the sentence in which it occurs. It may be true that utterance of a specified sentence (or sentence schema) tilts away from the weak sense of ‘know’. But, as we saw in discussion of Jäger, finding utterances or contexts that dictate paraphrases featuring different senses of ‘know’ than the weak one does nothing to show that there is no weak sense.

V. Social epistemology: veritism, meliorism, and expertise (Brendel, Schurz, Scholz)

The main target of Elke Brendel’s paper is the project of veritistic social epistemology developed in Knowledge in a Social World. She begins her paper by reviewing various positions I criticize in laying the ground for the veritistic conception. These positions include pragmatism and social constructivism, which (on my glosses) embody rejections of realism about truth. Brendel complains that my interpretation of William James is too extreme, and in fact “James’s and Goldman’s conception[s] of reality are not fundamentally different.” When it comes to social constructivism, she argues that social constructivism does “not necessarily lead to ‘veriphobia’ or to radical anti-realism.” John Dupre is cited as someone who offers a moderate, indeed “banal,” form of social constructivism, which merely asserts that interactions with nature are insufficient to determine scientific belief.

Brendel may well be correct that, for each of these doctrines or philosophers, some passages can be found that support mild viewpoints rather than radical and outlandish ones. As she herself admits, however, her mild interpretations don’t cover all of the relevant texts. About James, she writes: “Admittedly, James’s views on truth are sometimes quite unclear and ambiguous and it seems to be impossible to offer a fair and consistent interpretation of his truth-theory without employing a principle of charity.” About social constructivism she writes: “Admittedly, there might be social constructivists who subscribe to such radical anti-realist and relativist positions.” In short, she does not dispute that there are very radical formulations of pragmatism (to be found in James) and very radical formulations of social constructivism (to be found among some social constructivists).

She just prefers to concentrate on the milder and more sensible versions of each view.

Why do I concentrate on the more radical versions? The answer is twofold. First, the more radical versions have generally been picked up by other writers and popularized as the canonical “isms” in question. I followed suit and presented canonical versions of pragmatism and social constructivism. Second, and more importantly, I discuss these radical views to remove possible impediments and obstacles to my own veritistic (true-belief oriented) approach to social epistemology. To clear the ground for my positive view, there was no need to bother with mild versions of these doctrines, because they pose no threats to my position. Threats come only from the extreme views that deny the existence of mind-independent, or verification-independent, truth. In other words, it’s the intellectual I.E.D.’s—the improvised explosive devices—that need to be cleared away! That’s why I concentrate on the comparatively extreme variants of the views in question. (In fact, Knowledge in a Social World does take note, albeit briefly, of the fact that milder versions of social constructivism exist.)

Perhaps the heart of Brendel’s article, however, concerns the role that interests play in Knowledge in a Social World (KSW) and the relationship of interests to my account of knowledge in that work. She first provides an account of how KSW makes the veritistic value of a belief state (including degrees of belief) partly a function of the agent’s interest in the question that the content of the belief answers. Specifically, when the agent has no such interest, then no veritistic value (V-value) attaches to the belief state. From there KSW proceeds to define the instrumental V-value of a social practice π as the average of the V-values of the belief-states to which π causally contributes, across a wide range of actual and possible applications.
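In rough symbols, and as a paraphrase rather than a quotation of the book’s official formulation, the definition runs:

$$V(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n} V(B_i),$$

where $B_1, \dots, B_n$ are the belief states, across actual and possible applications, to whose production π causally contributes; $V(B_i)$ is the veritistic value of $B_i$ (for a degree of belief, the degree assigned to the true answer of the question at issue); and belief states concerning questions in which the agent takes no interest contribute nothing to the sum.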

Now Brendel does not criticize my introduction of an interest-relevant component in the epistemic credit-worthiness of states and practices. Her complaint comes a bit later. What troubles, or puzzles, Brendel is my failure to use the notion of a reliable practice, together with that of increases and decreases in V-value, to analyze the notion of knowledge. (This harks back to her dissatisfaction with the weak notion of knowledge.) She writes: “So, why doesn’t Goldman analyze knowledge as a kind of reliably (via non-social or social practices) produced true belief with a high V-value right at the outset? This richer notion of knowledge would be more in keeping with Goldman’s conception of a veritistic social epistemology than the weak notion of knowledge as mere true belief. A conception of societal knowledge of the kind that interests Goldman shouldn’t deprive itself of epistemically valuable assets.”

Brendel is right that I could have availed myself of the resources she mentions to define a richer notion of knowledge than I worked with in KSW. But I simply wasn’t focused on providing an analysis of the strong sense of knowledge. Needless to say, this is a core part of classical individual epistemology, and I have contributed to that part of epistemology in the past. But in KSW I wanted to focus my efforts elsewhere. This analytical project simply wasn’t a principal aim of that book. I wanted to show how and why epistemology can also be directed at institutional policies and practices that bear on epistemic outcomes, and I didn’t want the discussion to be ensnarled in the usual debates over the analysis of (the strong sense of) knowledge.

In addition, the reliabilist approach to knowledge has engendered controversy. Rightly or wrongly, I suspected that if a reliabilist analysis of knowledge were the linchpin of the book, most of the attention would focus on that feature, potentially obscuring what I regarded as new and important. By focusing on mere true belief as the core notion, this distraction could be avoided. Moreover, many of the quantitative treatments developed in various chapters could proceed smoothly in terms of degrees of belief and truth-values. It was highly doubtful that analogous treatments could be developed using the reliabilist notion of strong knowledge.

Finally, the move Brendel here proposes—of incorporating social practices into the account of knowledge—did not and does not strike me as correct. Strong knowledge is a function of the psychological, not the social, processes an agent uses to form a belief. Social processes aren’t necessary at all for (strong) knowledge acquisition. Even when social processes are among the inputs to an agent’s psychological processes, the social processes need not be reliable. (This was not discussed in detail in KSW.) There is no intimate tie, then, between strong-knowledge acquisition and reliable social processes, which were to be the prime target of the book. There was, then, no compelling reason to focus on strong knowledge. For purposes of social epistemology, I wanted to re-center the focus of attention. A more catholic approach is adopted in my current writing (Goldman 2009; forthcoming b), where I invite social epistemologists to select from a list of epistemic desiderata on which to concentrate, including weak knowledge, strong knowledge, justification, rationality, etc. But the choice made in KSW still seems to me to be a reasonable one, even if other choices would have been reasonable as well.

I turn next to the central part of Schurz’s paper, which is directed largely at social epistemology. More precisely, it is a mix of social and individual epistemology, since it imports considerations from social epistemology to motivate a certain account of what it is for an individual to know. Schurz defends a hybrid internalist-externalist account of knowledge (and justification) by appeal to a certain class of indicator properties that are said to play a role in the societal spread of information. As indicated earlier, I am not averse to hybrid theories of justification, involving types of elements usually regarded as externalist and types of elements usually regarded as internalist. But the kinds of elements highlighted by Schurz do not, in my opinion, succeed in reaching the goals he formulates. Nor is it clear that the connection he hopes to establish between social epistemology and the analysis of knowledge is wholly successful.7

7. In my discussion of Schurz, I received especially helpful advice from Blake Roeber.

Schurz takes process reliabilism to be the prototype of externalism. He adds that the externalist notion of justification as a reliable belief-forming process “no longer depends on the mentally accessible properties of the believing subject.” This is a slight overstatement, because processes that lead to belief may be accessible, even according to process reliabilism. Many of the mental inputs to such a process will commonly be accessible, including perceptual-experience inputs or prior-belief inputs. Still, the reliability theory as a whole is considered externalist because some of its elements—e.g., the reliability of the process used—are external.

Although Schurz appears to approve of many features of reliabilism, he says that it “breaks down” by failing to support the reflexivity of knowledge, or the K-K principle. This principle says that if one knows, then one also knows that one knows. Schurz claims that the “ordinary notion” of knowledge has the K-K principle “deeply entrenched” in it. Hence, it is a weakness of simple reliabilism (or pure externalism) that it does not endorse the K-K principle. There are two problems here. First, Schurz nowhere demonstrates that the ordinary notion of knowledge entails, or presupposes, the K-K principle. He offers only a bald assertion to this effect. Second, this thesis is highly questionable, because the K-K principle seems to lead to skepticism via a vicious regress, and it is far from clear that this is the appropriate fate of the ordinary concept of knowledge.

Next Schurz claims that the externalist’s “re-definition” of knowledge has another, much stronger disadvantage, i.e., it deprives knowledge of much of its meliorative function. Why does he say this? Because “without internalist knowledge-indicators, purely external knowledge cannot be recognized as knowledge and hence has difficulties to spread through society.” Now, the meliorative dimension of social epistemology is one I prominently emphasized in Knowledge in a Social World, so I am pleased to see Schurz give it a lot of emphasis as well. Moreover, I agree that the role of knowledge indicators is potentially a very important one, as Edward Craig (1990) among others has stressed. But it’s an open question whether Schurz manages to identify a critical new link between knowledge-indicators, meliorism, and the need for injecting an internalist element into the analyses of justification and knowledge. I remain unconvinced, for reasons to be explained.

What properties must a piece of knowledge possess, Schurz asks, for it to enjoy a meliorative function in a knower’s society? It must be recognizable as being justified, he replies, as being reliable enough to be taken over by other persons. In order for such a piece of knowledge to be recognizable (as such), there must be indicators of the reliability of the process that led to the belief. This leads Schurz to propose the following principle, which he apparently views as a new analysis of knowledge, one that supersedes the familiar process-reliabilist analysis:

(MelExt) Subject S’s (Ext)-knowledge p is meliorative iff the (kind of) process by which S’s belief-in-p was produced carries some indicators of its reliability.

He then gives two conditions for indicatorship. The first is that there must be a property of the belief-forming process that is mentally accessible to the relevant subject. The second is that the subject can demonstrate—by way of arguments—that the process is reliable (in one of two senses he specifies). He claims that the two conditions bring us back to internal requirements for knowledge, thereby rendering MelExt an externalist-internalist hybrid notion of knowledge. He further claims that MelExt satisfies the K-K principle, because if reliable processes are furnished with reliability indicators, then they are reflexive. Finally, he claims that MelExt knowledge has a veritistic surplus value over simple Ext knowledge, concerning the social spread of knowledge.

The way this account is presented, however, seems to run two separate things together. Although Schurz claims that MelExt gives us a distinct kind of knowledge, its formulation gives us something quite different.

The formula labeled “MelExt” does not provide necessary and sufficient conditions for knowledge, but rather necessary and sufficient conditions for the meliorativity of knowledge. Knowledge and the meliorativity of knowledge are two different things. Externalist knowledge might be fine as it stands whenever it has the meliorativity property. (Doubtless the meliorativity property is a good property for knowledge to have.) But Schurz doesn’t show that ordinary externalist (reliabilist) knowledge fails to possess meliorativity. He speaks as if MelExt presents a rival species of knowledge; but really it just tells us when other species of knowledge are or are not meliorative. (It is not entirely clear whether meliorativity is meant to be predicated of types of knowledge or of specific knowledge tokens. I think the latter makes more sense; but Schurz may be thinking of the former.)

Schurz suggests that the social spread of accurate information plausibly depends, in prototypical situations, on the ability of a population of epistemic subjects to discriminate between reliable and unreliable informants. He further contends that the ability to discriminate between reliable and unreliable informants requires exactly the presence of reliability indicators according to the condition of MelExt. This is the reason why (MelExt)-knowledge can spread in an epistemic population much faster than mere (Ext)-knowledge. But this is not correct.8 Suppose that one group of scientists has MelExt knowledge while a second group of scientists has only Ext knowledge of the things they study. The lay members of this society mistakenly think that the second group has MelExt knowledge of the things they study. In this case the mere Ext knowledge of the second group will spread just as quickly as the MelExt knowledge of the first group.

8. The counterexample that follows is due to Blake Roeber.

Schurz also adverts to the conditional probability solution to the value-of-knowledge problem advanced by Goldman and Olsson (2009). This solution finds surplus value in a true belief in virtue of its being caused by a reliable process.
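Roughly, and as a paraphrase rather than the paper’s exact formulation, the surplus-value claim compares two conditional probabilities:

$$P\big(\text{future true belief of a similar kind} \,\big|\, \text{true belief that } p \text{, reliably produced}\big) \;>\; P\big(\text{future true belief of a similar kind} \,\big|\, \text{true belief that } p\big).$$

Reliable production, on this view, raises the probability of future true belief and thereby confers value beyond that of the present true belief itself.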

Why can’t reliability indicators play the same role in adding surplus epistemic value in the case of socially communicated beliefs? Schurz writes: “if the first kind of surplus value is a reason to include the condition of reliability in the definition of knowledge, then, why should the second surplus value not also be a reason to include the condition of knowledge-indicators in the definition of knowledge?”

This passage confirms the idea that Schurz thinks of himself as arguing for the thesis that possession of knowledge-indicators should be built into the definition of ‘knowledge’. He thereby writes as if he is offering a new theory of knowledge (or of justification). But his original condition “MelExt” doesn’t really do that. It just offers a definition of meliorativity, a property that might sometimes attach to knowledge. This is a different beast.

Finally, let us turn to the question of whether a system of arguments, which is introduced in condition (1.2), really indicates the reliability of the cognitive process(es) used. He admits that this condition is “rather strong”, and this is certainly true. It appears from other discussion that Schurz intends this condition to introduce a requirement for the non-circular demonstration of reliability of the processes one uses. How should we react to this proposal? It may be conceptually possible that some processes (e.g., inductive processes) can have their reliability non-circularly demonstrated (as Schurz may have shown elsewhere), but it is far from clear that this is conceptually possible for all justifying processes. Moreover, it is clear that only a handful of epistemologists, at most, would have any such non-circular demonstrations available to them. Schurz’s highly demanding theory ostensibly implies that no other believers would have justification or knowledge. That would surely reduce the extent of knowledge in society!

Furthermore, we should not forget that condition (1.2) is supposed to introduce an indicator property. What is the indicator here, and who is supposed to have the ability to recognize it? Schurz seems to suggest that this indicator relationship holds when other people can tell that a given person possesses this kind of reliability-demonstrating system of arguments. But how many people in societies past or present have been able to tell whether somebody else possesses any such indicator property (i.e., possesses a demonstration that passes the stringent criteria Schurz imposes)? Hardly anybody—only people with impressive epistemological sophistication. Obviously, such people are very few in number in any society. So, condition (1.2) imposes an incredibly high bar for the social spread of information! Fortunately, the real facts about the social spread of information are much more lax. People learn things from others without being able to tell whether their informants pass the Schurz-test of non-circular demonstrability. So all is relatively well in the arena of social learning, much better than it would be if the Schurz-test were a genuine requirement of informational transmission.

Oliver Scholz provides a careful treatment of my views on the epistemology of expertise, along with a broad historical survey of literature on the problem.

I have no comments to offer on this survey, except to thank him for providing this fund of information, which attests to the longstanding interest in this topic. I limit my remarks to Scholz’s critical comments and suggestions concerning my proposals. His comments are largely confined to the definitions of an expert offered in Knowledge in a Social World and in “Experts: Which Ones Should You Trust?” (EWOST; Goldman 2001). The core of the definitions of (objective) expertise adverts to knowledge possession and error avoidance. ‘Knowledge’ is here understood in the weak sense of true belief. In EWOST the primary condition of being an expert in domain D is that one have “considerably more beliefs in true propositions and/or fewer beliefs in false propositions within D than the vast majority of people do.”

Scholz’s chief criticism is that I am unduly myopic in concentrating on truth (or true belief). This makes my account unduly restrictive and, in some respects, even materially inadequate. He argues that other epistemological notions could fruitfully be used in a more full-fledged theory of expertise. In particular, justifiedness, coherence, and understanding could contribute to a more complete definition of expertise. Basically, I accept this criticism. To put my choice of definition(s) in context, however, note that KSW was a systematic and unified approach to social epistemology using “veritistic” notions throughout (where “veritism” concerns true belief or degrees of belief). Thus, my original definition in KSW tried to stay within this framework, and EWOST was written in KSW’s shadow. Nowadays, when I paint a general picture of social epistemology (Goldman 2009; forthcoming b), I customarily allow other epistemic values or desiderata, beyond true and false belief, to enter the picture, in particular such notions as justification and rationality.

Apart from the quest for unity, there is a substantive reason to emphasize true belief, as compared with justifiedness or coherence, for example. This concerns the problem—arguably the most salient problem in the sphere of expertise—of identification: how can a non-expert identify an expert or make comparisons of expertise? If it’s primarily a matter of determining who has true or false beliefs, it is arguably a more manageable task than determining who has justified or unjustified beliefs. Even a non-expert can check up on the correctness of a putative expert’s beliefs when new evidence creates an epistemic situation that reveals the truth to everyone. When the date of the predicted eclipse arrives, even a non-expert can determine which putative experts predicted correctly or incorrectly that there would (or wouldn’t) be an eclipse on that date.
If the key question were whether one of these putative experts had been justified in making (or denying) this prediction, the non-expert would have to know a lot more about the putative experts in question. They would need to know what other beliefs the putative expert had—probably a great mass of beliefs—to decide whether his predictions were justified (or coherent). The problem of expert identification might thereby become, in practical terms, intractable. One couldn’t decide who is an expert without becoming exceedingly knowledgeable about each candidate’s background beliefs, which is typically unfeasible.

The “fundamental” problem Scholz adduces for my primary definition is that a non-expert may have only a few general and coarse opinions about D, whereas an expert will have thousands of highly special and sophisticated beliefs about D. Thus, the expert runs a much greater risk of having false beliefs than the layperson. As Scholz describes the scenario, the non-expert may even have more true beliefs and fewer false beliefs (about D) than the expert. I don’t find this scenario very plausible, however. Presumably, an expert will have almost all of the same true beliefs of a coarse kind as the non-expert plus many true ones that the non-expert lacks. So when it comes to true beliefs, it is unlikely that an expert will have fewer than the non-expert. However, since an expert may well have many more beliefs within D than the non-expert, it is realistic to expect an expert (often) to have more false beliefs than a non-expert. This is Scholz’s second problematic scenario. The upshot is that an improved definition of expertise along these lines would have to place greater weight on the true belief dimension than the error avoidance dimension. How to formulate this, exactly, is a problem that remains on the table; one crude schematic form such a definition might take is sketched below.
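For illustration only (the additive form and the weights are placeholders, not a considered proposal), such a weighted definition might score candidate experts as follows:

$$E_D(S) \;=\; \alpha\,\big|T_D(S)\big| \;-\; \beta\,\big|F_D(S)\big|, \qquad \alpha > \beta > 0,$$

where $T_D(S)$ and $F_D(S)$ are the sets of true and false propositions within domain D that S believes, and the inequality $\alpha > \beta$ encodes the extra weight accorded to the true-belief dimension. Expertise in D would then require one’s score to considerably exceed that of the vast majority.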

Introduction of the justification desideratum could be helpful here. Even when experts fail to get the truth—as happens frequently in complex and arcane subject-matters—they will typically have better justified beliefs (or degrees of belief) than their non-expert cousins. This is a legitimate ground for introducing the justification component into a revised definition, but it complicates the prospects for expert identification, as explained above.

VI. Knowledge and democracy (Baurmann and Brennan)

Michael Baurmann and Geoffrey Brennan have produced an extremely interesting set of reflections on knowledge, democracy, and trust in society, which take as their jumping-off point Chapter 10 (“Democracy”) of Knowledge in a Social World. I cannot hope to address all the issues raised in their paper, but I’ll respond to three of their themes.

Baurmann and Brennan begin with a reconstruction of the theoretical perspective advanced in Chapter 10. Among other things, they pinpoint the fact that what I call “core voter knowledge” is to be understood in terms of the outcome-sets that would be produced by each of the candidates who appeal for voter V’s vote in a given race. Core voter knowledge for voter V is knowledge of which of the two candidates would, if elected, produce a better outcome-set by V’s lights, i.e., in terms of V’s preferences, be they egoistic or altruistic. Furthermore, Chapter 10 shows that when the electorate has full core knowledge—that is, when all voters know the true answers to their core questions—and when they vote accordingly, democracy “succeeds” in the sense that a majority of citizens get their more preferred outcome-sets (of those available from the candidates in the races). High levels of core knowledge make such a result highly likely.
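The logic of the full-knowledge case can be displayed in a bare two-candidate rendering (a simplification, in my notation, of the Chapter 10 result). Let candidates $C$ and $C'$ compete before $n$ voters, and let

$$M \;=\; \{\, i : C\text{’s outcome-set is better than } C'\text{’s by voter } i\text{’s lights}\,\}.$$

With full core knowledge and voting in accordance with it, exactly the members of $M$ vote for $C$, so (ties aside) $C$ wins iff $|M| > n/2$. Whichever candidate wins, the voters who truly preferred the winner’s outcome-set form a majority and obtain their more preferred outcome-set.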

In section 2.2 (“Outcome vs. Process”), Baurmann and Brennan argue for the superiority of a “process” orientation to what they call my “outcome” orientation. Here are some crucial passages that express this idea:

Even if voters did know that a candidate would indeed produce a certain outcome set, they cannot be sure ex ante how they would evaluate this outcome set in the future because this evaluation will depend on other circumstances that may have altered in the interim … It seems then that … a shift from outcome-orientation to process-orientation in the attitudes of voters will be required … The core question for the voter would no longer be “which is the best policy package” … and [but] rather “which of the candidates would, if elected, be likely to choose a better outcome set from my point of view?” (This volume, 168f.)

Baurmann and Brennan clearly mean to favor the process-orientation over the outcome-orientation, but something goes awry in the final sentence of this passage. This sentence endorses a view about the core voter question that virtually coincides with the one I endorse. The sentence puts forward as the correct core voter question just what I propose in Chapter 10 (and what they earlier characterize me as endorsing). So it is mysterious, at this stage, wherein lies the difference between my position and the one they mean to endorse. Their intention may be better expressed in a later passage:

[A]n important difference with the outcome-oriented voter remains: process-oriented voters will not make their judgment of [candidate] “trustworthiness” contingent on the ability of a politician to produce a certain and specified outcome set. (This volume, 170. Emphasis added.)

Apparently, Baurmann and Brennan impute to me the idea that core voter knowledge consists in the voter’s knowing (having true belief about) which specific outcome-set each candidate would produce. Chapter 10 does not advocate this, however. Rather, it advocates precisely the view Baurmann and Brennan themselves endorse in the passage quoted earlier, i.e., the view that the core voter question is the question of which candidate, if elected, would produce the better outcome-set. Core voter knowledge consists in knowing (having true belief) to the effect that one candidate, C, would produce a better outcome-set than the other candidate, C’. Having knowledge of this sort does not necessarily require any belief about the specific outcome-set that either candidate would produce. Thus, on this first topic, Baurmann and Brennan’s disagreement with me seems to be based on a misunderstanding.

When spelling out the suspected difference between our views, Baurmann and Brennan emphasize the need for voters to weigh the personal characteristics and intrinsic motivations of candidates, not simply their stated policy positions. But this is entirely compatible with my theoretical position. The way I characterize core voter questions, or core voter knowledge, does not exclude the possibility that the best way to determine which candidate would produce the better outcome-set is by considering their personal characteristics and motivations rather than their policy statements.

In section 2.3 (“Instrumental vs. Expressive Voting”) Baurmann and Brennan defend an “expressive” as contrasted with an “instrumental” approach to voting. This raises a lot of difficult questions that cannot be fully canvassed here. But let’s make a start. The instrumental view (not a term I use) is described in two ways. First, it holds that a vote is a resource by which elections (or other collective decisions) can be influenced. Second, it is the view that single voters directly choose a policy package, in the sense that such single voters determine the result of an election. I accept the first way of characterizing voting power, but not the second.

Obviously, it rarely happens in large elections that the margin of victory is a single vote. Hence, it rarely happens that we can say truly of any individual vote (or voter) that, had that vote been cast differently, the outcome would have been different. But it doesn’t follow from this that none of the voters have a causal effect on the outcome. Rather, the votes of those who vote on the winning side are all contributing causes toward the victory.
The situation is analogous to a case in which a car is stuck in the mud, or the snow, and the driver manages to recruit a large number of helpers to help push it out. With everybody pushing, there is far more than enough force exerted on the car to push it out. What shall we say about each pusher? It would be wrong, of course, to say that each one is a “pivotal” pusher, in the sense that the desired outcome (freeing the car from the mud or snow) would not have occurred if that pusher hadn’t pushed. But it would also be wrong to deny to each of the pushers the role of being a cause of the desired effect. To deny this is to imply that nobody caused this effect, which is ridiculous. (It is correct, of course, to say that nobody caused it single-handedly.) It also seems wrong to say that nobody had an instrumental effect on the outcome. Thus, I would embrace an instrumental view if this means merely that a vote gives a person power to influence an election. I would not embrace an instrumental view if this is interpreted to imply that a single vote frequently serves as a pivotal determinant, or decisive determinant, or “direct” determinant (in Baurmann and Brennan’s phrase) of an electoral outcome (see Goldman 1999b).

There is also a separate—though ostensibly related—question on which Baurmann and Brennan offer a theory but on which I, in effect, remain silent. This is the question of how voters (most voters, anyway) view their voting actions. What is the significance of their act of voting, in their eyes? Yet a different question is what would be a rational view of voting. Baurmann and Brennan, like many political scientists and theorists, regard the instrumental conception of voting as “irrational”. If this is correct, it would be irrational of voters to view their voting actions as instrumental. But it doesn’t follow from this that they don’t view their voting actions as instrumental (in one or another sense of this term). They may be irrational. I am not always sure which question Baurmann and Brennan are addressing in section 2.3.

For my part, Chapter 10 does not offer any specific account of how voters view their voting actions. It simply assumes that their voting decisions are in line with their beliefs as to which candidate would yield a better outcome-set. Is this assumption unrealistic? Is the expressivist theory more realistic? These are difficult matters to decide. I find the expressivist idea initially implausible because an individual vote never has a proposition-sized “message” attached to it, e.g., “I am voting against Jones because I didn’t like his doing X during his previous term of office.” The notion that votes send messages is usually the product of election interpreters putting a particular twist on a large block of votes. Any expressivist intent of an individual’s vote is normally quite inscrutable, especially if the voter isn’t interviewed in exit polls, for example.
In any case, the main moral I mean to derive from my model is that the “success” of democracy—as judged by my specified measure of success—is tied to an electorate’s level of core voter knowledge. That is an interesting theoretical finding, whatever the story turns out to be of how voters intend their own voting actions.

In section 2.4 of their article (“Epistemic Trust in Democracy”), Baurmann and Brennan deliver a fascinating discussion of the role of social trust in democracy. I restrict my comment here to a very narrow slice of this material: their distinction between generalized and particularistic social trust. This is an interesting distinction, which they deploy to interesting effect, but I remain dubious of the thesis that selective information and one-sided world views are the special province of particularistic systems rather than wider societies. History provides many examples of whole societies with “one-sided” world views, judged by standards of enlightened thought. Moreover, large societies have sometimes been dominated by nationalistic world-views whereas smallish groups within them may have held less nationalistic—and hence less “one-sided”—outlooks. Germany in the Nazi period might be a good example of this. While there is prima facie reason to expect smaller groups to restrict trust to fewer people—namely, members of that very group—there is no necessity here. Whole countries can see the rest of the world as hostile to them, and therefore deserving of distrust, whereas segments within those countries can be more internationally oriented and trusting. Thus, one cannot sustain the general thesis that more selective and one-sided information is always associated with particularistic trust rather than generalized trust.

VII. Simulation and mindreading (Newen and Schlicht)

Albert Newen and Tobias Schlicht offer a sophisticated and detailed critique of the Simulation Theory of mindreading presented in my book Simulating Minds (2006), as well as an outline of their own favored theory, the Person Model Theory. Both their critique and their positive theory appeal to philosophical considerations as well as to the extensive literature in several branches of empirical cognitive science. All of this material would repay careful thought and consideration for anyone interested in this topic. Given present limitations of space (and the time and energy of the author), my response is restricted to the critical issues raised, indeed, only a selection of those issues.
As Newen and Schlicht indicate, two of the central features of the theory I offer are the following. First, although it highlights simulational aspects of human mindreading, it does not claim that simulation exhausts the processes or mechanisms involved in mindreading. It only hypothesizes that simulation “often” plays a role, an important role, in mindreading, i.e., in the process(es) of making mental-state attributions (especially to others). It does not exclude the use of theorizing processes. Indeed, it clearly acknowledges (pp. 43–46) that simulation and theory could well complement one another in a variety of ways. Different possible forms of complementarity include using simulation for certain types of mindreading tasks and theorizing for others, or using elements of both simulation and theorizing in one and the same (token) mindreading task. A second major theme is the value of distinguishing two different forms of simulation, low-level and high-level simulation, and assigning different places to them in the overall battery of mindreading procedures.

The first theme especially bears emphasis, because in several places Newen and Schlicht criticize the approach on the grounds that it doesn’t successfully defend pure simulationism. This criticism is misplaced because the approach doesn’t purport to defend pure simulationism.

Chapter 2 of Simulating Minds offers a collection of definitions concerning simulation in general, mental simulation, and mental simulation for mindreading. A few of the relevant definitions are along the following lines (details omitted in the interest of brevity). The generic idea of simulation is that a process P simulates another (possible) process P′ if and only if P duplicates, replicates, or resembles P′ in some significant respects (relative to the purposes of the task). More precisely, this definition concerns successful simulation. A related notion of simulation is attempted simulation, which can be defined as follows. Process P is an attempted simulation of process P′ if and only if P is executed with the aim of duplicating or matching P′, or has the function of duplicating or matching P′.

After presenting these and related definitions, I advance the thesis that cognitive simulations are often used to answer questions, specifically, third-person mindreading questions. In other words, cognitive simulations are often used to determine or form beliefs about other people’s mental states, i.e., to attribute or assign mental states to them. In general, execution of these simulational processes can be either conscious or non-conscious affairs. Low-level mindreading is usually non-conscious whereas high-level mindreading has “some degree of accessibility to consciousness” (147).
However, consciousness versus non-consciousness does not play a pivotal role in the story (defense of this stance is presented below).

Turning to the criticisms raised in the Newen-Schlicht paper, I divide my responses into three categories. One criticism focuses on the bipartite treatment of simulation—its division into low-level and high-level species. Newen and Schlicht argue that the two species of simulation are so essentially different in kind and complexity that it’s unmotivated to subsume them under the same umbrella. A second series of criticisms focuses on inadequacies in the account of low-level simulation, and a third series focuses on inadequacies in the account of high-level simulation. I take up these groups of criticism in the indicated order.

Is it “unmotivated” to subsume low-level and high-level simulational mindreading under a single umbrella? Not at all. The chief respect of thematic unity or commonality is the theme of replication or resemblance, as explained above. In sharp contrast to the theory-theory of mindreading, low-level and high-level variants of simulation share the property of reusing the same (or similar) cognitive operations or neural machinery in the process of arriving at a mental attribution (to another person); or at least trying to do so. This separates the simulation account of mindreading, at either level, from the theory-theory and rationality-theory accounts, as well as from the newer accounts mentioned by Newen and Schlicht. True, there is arguably more “complexity” in high-level simulation than in low-level simulation. What an attributor tries to replicate in high-level simulation is generally a larger stretch of cognition than what is replicated in low-level simulation. But whenever a single species is sub-divided into two sub-species, there must be some differences in kind, quality, or magnitude. Why not a difference in complexity? Types of molecules differ enormously from one another in size and complexity, but this doesn’t deter physicists or chemists from considering all of them one kind of physical unit: molecules. Similarly for cells.

Newen and Schlicht make much of the fact that low-level simulation is non-conscious whereas high-level simulation is conscious. (Actually, I only portray high-level simulation as having “some degree” of accessibility to consciousness, which varies across cases.) They assume that the consciousness/non-consciousness divide is a major psychological divide; but I would dispute this contention. A large swath of contemporary psychological research raises doubts about the existence of any interesting cognitive tasks that are the exclusive province of conscious processes.9

9. An extensive and illuminating review of this literature is provided by Karen Shanton (in preparation), in her doctoral dissertation at Rutgers University.
If so, then although a distinction can certainly be drawn between conscious and nonconscious mental events, it is questionable whether there are substantial functional differences between the conscious and the non-conscious. For example, although the pursuit of goals might seem to be the distinctive province of the conscious mind, Hassin et al. (2009) report evidence that goal pursuit can also be non-conscious. The same sort of finding has been made for a large number of basic psychological tasks. Thus, the mere fact (if it is a fact) that high-level mindreading is conscious and low-level mindreading is non-conscious would not mark a categorical difference between the two from a psychological vantage point.

I now turn to the Newen-Schlicht critique of low-level simulational mindreading. A first point follows up on the immediately preceding discussion of conscious versus non-conscious performances. In discussing low-level simulational mindreading, which features mirror processes, Newen and Schlicht correctly say that the mirror-produced activation of pain or emotion in an observer is a non-conscious event. In order to be a case of mindreading, however, a second step needs to follow: the observer must attribute the sensation or emotion to the target. They then remark: “This cannot be done unconsciously; it is rather a conscious and deliberate action.” What is the basis or evidence for this claim? They present no such evidence, and none, I submit, exists. Attributing a state to another is forming a belief about the other. Why must this be conscious? Cognitive science is replete with nonconscious representations. Why should this case be any different? What precludes a non-conscious attribution of a mental state to a target? As to the question of deliberateness, most epistemologists agree that belief and belief-formation are rarely if ever voluntary or deliberate affairs. There would be little if any dissent from psychologists on this point. So what supports Newen and Schlicht’s contention here?

Their next point is that since an observer’s mirror event is unconscious whereas the target’s (model’s) pain or emotion event is conscious, how can one claim a resemblance between them? The answer is that the resemblance lies along two dimensions: neural and functional. The neural dimension is paramount in establishing mirroring phenomena, whether at the level of single neurons or of neural regions and networks. In single-cell recordings, the very same neuron is found to be activated both in planning a specific type of motor action and in observing someone else perform that action; or in undergoing a painful sensation and in observing someone else be pricked by a pain-inducing instrument.
Other empirical findings support functional resemblances between mirroring events, although observation-caused mirror activations typically have their functionally associated behavioral outcome inhibited.

Yet another objection is that mirror neurons are insufficient for a self-other distinction, which is needed for third-person mindreading. I respond: true, mirroring per se does not suffice for self-other discrimination. One cannot tell from a mirror event alone whether it was produced autonomously or exogenously, and therefore whether to interpret it in first-personal or third-personal terms. But there is other evidence available to the cognitive system, roughly contemporaneous with a mirror event, that can provide information to make this discrimination. Here is an analogy from a slightly different domain. Externally caused tactile stimulation, such as a tickle, is experienced as more intense than the same stimulation when self-produced (Blakemore et al. 1998). How can the system tell how it should interpret, or experience, the stimulation? Information doesn’t come from the stimulation per se. Setting the details aside, the system has information that bears on whether it was self-produced or externally produced. And this is true for mirroring as well. Thus, it isn’t necessary that a self-other discrimination be made by a mirroring process itself; still, such a discrimination can be made. Thus, there is no mystery that undercuts the very possibility of a cognitive system using a self-experienced mirror event as part of the basis for attributing a mental state to another. Notice that I have never claimed that mirroring events in and of themselves constitute acts of third-person attribution, only that such events—along with other events—can be causes of acts of third-person attribution.

Finally, what are the problems Newen and Schlicht pose for high-level simulational mindreading? One problem is what I’ll call the “regress” problem. My favorite illustration of high-level mindreading is predicting someone else’s decision. If you approach this task via the simulation heuristic, you would start with prior beliefs about the target’s desires and beliefs, then you would imaginatively adopt these same desires and beliefs (in pretend mode), and so forth. Newen and Schlicht interrupt this story at the first step: “How did you get your prior beliefs about the target’s desires and beliefs? Could they have been obtained by simulation?” Well, yes, they could have been obtained by earlier acts of simulation, even acts of high-level simulation.
But how were those earlier acts performed, and will this series of simulational operations ultimately terminate? Essentially what worries Newen and Schlicht is akin to the familiar regress of justification in epistemology (although they don’t phrase their problem in this fashion). Just as epistemic justification (à la foundationalism) must eventually terminate in “basic beliefs” that aren’t inferred from other beliefs, so simulational mindreading should be able to deliver cases of “basic” acts of (high-level) simulational mindreading that do not depend on previously acquired beliefs about a target’s mental states. How are such “basic” acts of (high-level) simulational mindreading possible?

My first reply is that basic acts of simulational mindreading are indeed possible and presumably exist. For example, to assign a current perceptual belief to a target, a mindreader might observe the target’s perceptual environment, then imagine being in the target’s shoes and seeing a certain portion of the scene. She then attributes a belief to the target that is generated by the visual experience thus imagined. No prior beliefs about the target are utilized. Thus, there are cases of “basic” high-level simulational mindreading. There is no principled reason, however, why even a pure simulation theory should be required to show that basic acts of simulational mindreading are executed by high-level simulational mindreading (as Newen and Schlicht seem to assume). Such acts could be executed by low-level simulation instead. A mindreader might attribute to a target an intention to kick a ball by undergoing a mirrored kicking intention. This would be a basic act of low-level simulational mindreading. After attributing this kicking intention to the target, one could later use knowledge of the intention to help execute a further act of mindreading, this time using a high-level simulation heuristic. In short, the feasibility of basic acts of simulational mindreading is not terribly problematic.10

10. I now realize that it would have been instructive to discuss these possibilities in Simulating Minds.

Yet another criticism raised by Newen and Schlicht resurrects a familiar kind of collapse argument against the simulation theory. The suggestion is that all acts of simulation involve a tacit assumption on the part of the attributor that the target is “like me” in relevant respects. Doesn’t this amount, however, to a belief in a generalization—indeed, a psychological generalization—about people? Isn’t this the stuff of which theory-theory is made? So doesn’t ST ultimately collapse into TT?

I responded to this kind of objection in Simulating Minds (30–34), and I still think that this response was reasonably adequate. First, collapse champions go overboard in imputing tacit beliefs to people, and these excessive imputations should be resisted. Second, the “like me” principle is not the kind of lawlike psychological generalization theory-theory requires. TT requires intra-personal diachronic laws, whereas the “like me” principle is an interpersonal synchronic generalization. Finally, what ST postulates is that the simulation “mode” is not a kind of principle but an operating procedure. Under this procedure, initiated and controlled by the imagination, a person mentally enacts a different “role”—a counterfactual role—than the one she is actually in, a role mentally associated with a different person. A competent use of this procedure involves creation and maintenance of a different mental “space,” in which the enacted role often instantiates different (“pretend”) mental properties than her actual current properties. This requires erecting a “border” or “barrier” between the created mental space and her ordinary space—a barrier that simulationists often refer to as “quarantine.” Unfortunately, this border is readily encroached upon, so that actual current properties (feelings, beliefs, etc.) seep across it into the specially created space. This leads to a pattern of so-called “egocentric errors,” which are extensively documented by psychological research (Simulating Minds, 164–173). Newen and Schlicht seek to treat egocentric errors, or quarantine failure, as a theorizing failure, but ST contends that they are more parsimoniously treated as the malfunction of an operating procedure.

Finally, consider an empirically based objection to ST offered by Newen and Schlicht. According to ST, they explain, we would expect the neural correlate of third-person attribution to include brain areas activated in the case of first-person attribution of mental states, since self-attribution constitutes an essential stage in the simulation-and-projection process. But studies by Vogeley et al. (2001) and Newen and Vogeley (2003) suggest that first- and third-person attribution have different neural correlates. I do not dispute the relevance of these findings. But contrasting findings are reported by Mitchell et al. (2005) and others. Mitchell et al. cite a number of fMRI studies in which medial prefrontal cortex (MPFC), especially the ventral sector thereof, was selectively engaged during tasks requiring self-reflection, self-referencing, or introspection (see Simulating Minds, 162f., for review). Mitchell et al. take these findings to be congenial to ST, because MPFC is one of the regions long thought to be associated with (high-level) theory of mind, and a study of their own lends further support to this notion. Additional support for ST comes from work by Jean Decety and colleagues on how the brain distinguishes first- and third-person perspectives (see Simulating Minds, 211ff., for review). Given the conflict between these groups of studies and those of Vogeley et al., it is premature to draw any firm conclusion at this time.

This completes my discussion of the Newen-Schlicht paper. I have not replied to all of the critical points they raise, or to their own interesting positive approach. But any additional treatment would exceed both the space allotted to me here and my own time constraints.

REFERENCES

Blakemore, Sarah-J., Daniel M. Wolpert and Chris D. Frith 1998: “Central Cancellation of Self-Produced Tickle Sensation”. Nature Neuroscience 1(7), 635–640.
Comesaña, Juan 2002: “The Diagonal and the Demon”. Philosophical Studies 110, 249–266.
Conee, Earl and Richard Feldman 1998: “The Generality Problem for Reliabilism”. Philosophical Studies 89, 1–29.
— 2001: “Internalism Defended”. In: Hilary Kornblith (ed.), Internalism and Externalism. Malden, MA: Blackwell.
Craig, Edward 1990: Knowledge and the State of Nature. Oxford: Clarendon Press.
Goldman, Alvin I. 1979: “What Is Justified Belief?” In: George Pappas (ed.), Justification and Knowledge. Dordrecht: Reidel. Reprinted in: Alvin I. Goldman 1992: Liaisons: Philosophy Meets the Cognitive and Social Sciences. Cambridge, MA: MIT Press.
— 1986: Epistemology and Cognition. Cambridge, MA: Harvard University Press.
— 1999a: Knowledge in a Social World. Oxford: Oxford University Press.
— 1999b: “Why Citizens Should Vote: A Causal Responsibility Approach”. Social Philosophy and Policy 16(2), 201–217. Reprinted in: David Estlund (ed.) 2002: Democracy. Malden, MA: Blackwell.
— 2001: “Experts: Which Ones Should You Trust?”. Philosophy and Phenomenological Research 63, 85–110. Reprinted in: Alvin I. Goldman 2002: Pathways to Knowledge, Public and Private. New York: Oxford University Press.
— 2006: Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading. New York: Oxford University Press.
— 2009: “Systems-Oriented Social Epistemology”. In: Tamar Gendler and John Hawthorne (eds.), Oxford Studies in Epistemology.
— forthcoming a: “Toward a Synthesis of Reliabilism and Evidentialism? Or: Evidentialism’s Problems, Reliabilism’s Rescue Package”. In: Trent Dougherty (ed.), Evidentialism and Its Discontents. New York: Oxford University Press.
— forthcoming b: “Why Social Epistemology is Real Epistemology”. In: Duncan Pritchard, Adrian Haddock and Alan Millar (eds.), Social Epistemology. Oxford: Oxford University Press.
Goldman, Alvin I. and Erik J. Olsson 2009: “Reliabilism and the Value of Knowledge”. In: Adrian Haddock, Alan Millar and Duncan Pritchard (eds.), Epistemic Value. Oxford: Oxford University Press.
Hassin, Ran R., John A. Bargh and Shira Zimerman 2009: “Automatic and Flexible: The Case of Nonconscious Goal Pursuit”. Social Cognition 27(1), 20–36.
Hawthorne, John 2002: “Deeply Contingent A Priori Knowledge”. Philosophy and Phenomenological Research 65(2), 247–269.
Millikan, Ruth 1993: “In Defense of Proper Functions”. In: Ruth Millikan, White Queen Psychology and Other Essays for Alice. Cambridge, MA: MIT Press.
Mitchell, Jason P., Mahzarin R. Banaji and C. Neil Macrae 2005: “The Link Between Social Cognition and Self-Referential Thought in the Medial Prefrontal Cortex”. Journal of Cognitive Neuroscience 17, 1306–1315.
Newen, Albert and Kai Vogeley 2003: “Self-Representation: The Neural Signature of Self-Consciousness”. Consciousness and Cognition 12, 529–543.
Olsson, Erik J. this volume: “In Defense of the Conditional Probability Solution to the Swamping Problem”. Grazer Philosophische Studien.
Plantinga, Alvin 2000: “The Nature of Defeaters”. In: Alvin Plantinga, Warranted Christian Belief. New York: Oxford University Press.
Shanton, Karen in preparation: “The Cognitive Unconscious: Philosophical Implications”. Doctoral dissertation, Rutgers University.
Vogeley, Kai, P. Bussfeld, Albert Newen, S. Herrmann, et al. 2001: “Mind Reading: Neural Mechanisms of Theory of Mind and Self-Perspective”. NeuroImage 14, 170–181.
Wunderlich, Mark E. 2003: “Vector Reliability: A New Approach to Epistemic Justification”. Synthese 136, 237–262.

GPS Editors
Johannes L. Brandl (Universität Salzburg)
Marian David (University of Notre Dame)
Maria E. Reicher (Universität Graz)
Leopold Stubenberg (University of Notre Dame)

Editorial Assistant
Martina Fürst (Universität Graz)

Editorial Board
Peter Baumann, Monika Betzler, Victor Caston, Thomas Crisp, Dagfinn Føllesdal, Volker Gadenne, Hanjo Glock, Robert M. Harnish, Reinhard Kamitz, Thomas Kelly, Andreas Kemmerling, Jaegwon Kim, Peter Koller, Wolfgang Künne, Karel Lambert, Keith Lehrer, Hannes Leitgeb, Joseph Levine, Alasdair MacIntyre, Georg Meggle, Thomas Mormann, Edgar Morscher, Herlinde Pauer-Studer, Christian Piller, Edmund Runggaldier, Heiner Rutte, Werner Sauer, Alfred Schramm, Gerhard Schurz, Geo Siegwart, Peter Simons, Barry Smith, Thomas Spitzley, Matthias Steup, Mark Textor, Thomas Uebel, Ted Warfield, Nicholas White

Subscription Rates
There is no annual subscription rate. Each volume has its own price, varying from 60 to 90 US dollars. Individual subscribers get a 50% discount.

Information for Contributors
See the inside of the back cover.

Information for Contributors
GPS publishes articles on philosophical problems in every area, especially articles related to the analytic tradition. Each year at least two volumes are published, some of them as special issues with invited papers. Reviews are accepted only by invitation.

Manuscripts in German or English should be submitted electronically as an e-mail attachment, either in MS Word or in RTF format, prepared for anonymous reviewing (i.e. without the author’s name and affiliation), together with an English abstract of 60–100 words. Footnotes should be kept to a minimum, and references should be incorporated into the text in (author, date, page) form. An alphabetical list of references should follow the text.

A submitted paper will normally be sent to a referee. Authors are responsible for correcting proofs. Corrections that deviate from the text of the accepted manuscript can be tolerated only in exceptional cases. Authors will receive 25 free offprints of their article.

Manuscripts should be sent to the following address: [email protected]

Email Addresses
Editors:
[email protected]
[email protected]
[email protected]
[email protected]
Editorial Assistant:
[email protected]

GPS online
Further information about GPS (back volumes, electronic publication, etc.) is available at the publisher’s web site: http://www.rodopi.nl, section “Series & Journals”.