Table of contents:
Contents
Contributors
About the Editor
Chapter 1: Introduction – Michael Devitt at Eighty
Part I: Philosophy of Linguistics
Chapter 2: Invariance as the Mark of the Psychological Reality of Language
2.1 Introduction
2.2 The Very Idea of ‘Psychological Reality’
2.2.1 Grammars and Psychologism
2.2.2 Chomsky on Psychological Reality
2.2.3 Minimal Realism
2.2.4 General Remarks
2.3 Devitt’s ‘Master Argument’
2.4 Questioning Premise (2): Applying the Distinctions
2.4.1 Chomsky and Devitt’s Three Distinctions
2.4.2 Products Before Competence?
2.4.3 Invariance
2.5 Questioning Premise (4): Interpreting a Grammar
2.5.1 The Intuitive Conception of Competence
2.5.2 Chomsky’s Own Words
2.5.3 Symbols
2.5.4 Intuitions and Aboutness
2.5.5 Non-intuitive Data
2.5.6 Representation
2.6 Concluding Remarks
References
Chapter 3: Priorities and Diversities in Language and Thought
3.1 The Language of Thought and Diversities in Cognitive Format
3.2 Language as Expressing Thought: Diversities in Linguistic Function
3.3 Universal Grammar and the Psychology of Language Processing
3.3.1 UG-Violating Strings
3.3.2 UG-Conforming Complexities
3.4 Priorities, Sufficiencies, and Speculations
References
Part II: Theory of Reference
Chapter 4: Theories of Reference: What Was the Question?
4.1 Introduction
4.2 A Brief Look at the Development of NTR
4.2.1 Descriptivism and Its Critique
4.2.2 The Historical Chain Picture
4.2.3 The Varieties of Reference
4.2.4 The Qua Problem
4.2.5 Can Reference Never Change?
4.3 What Was the Question?
4.3.1 The “Main Problem” of the Theory of Reference
4.3.2 The Millian View and Frege’s Puzzles
4.3.3 Shared Meanings
4.3.4 Meaning, Understanding, and Manifestability
4.4 New Forms of Descriptivism
4.4.1 Rigidified Descriptions
4.4.2 Causal Descriptivism
4.4.3 Nominal Descriptivism or Metalinguistic Descriptivism
4.4.4 A Theory of Meaning?
4.4.5 Substantial and Trivial Versions of Descriptivism
4.5 Kind Terms
4.6 Back to the Millian View?
References
Chapter 5: Multiple Grounding
5.1 Devitt vs Kripke
5.2 Reference Change
5.3 Confusion
5.4 Degrees of Designation
5.5 Semantic Coordination
5.6 Coreference De Jure
5.7 Mental Files
5.8 Coordination via Proper Names
5.9 Conclusion
References
Chapter 6: Reference and Causal Chains
References
Chapter 7: The Qua-Problem for Names (Dismissed)
7.1 Introduction
7.2 Brief Background: The Causal Theory of Reference
7.3 Why Focus on Names?
7.4 The Qua-Problem for Names
7.5 In Defense of a “Dismissive” Response
7.6 Devitt and Sterelny’s (Tentatively) Proposed Solution
7.7 Failed Grounding?
7.8 A Trio of Objections
7.8.1 Missing the Point
7.8.2 A Different Kind of Problem but a Problem Nonetheless
7.8.3 Empty Names Left Unexplained
7.9 Referring to felis catus
7.10 The Curious Origins of an Apocryphal Problem
References
Chapter 8: Language from a Naturalistic Perspective
8.1 What to Expect
8.2 Proper Names
8.3 Two-Dimensionalism
8.4 A Second Application of Linguistic Modesty
8.5 Twin Earth for Two-Dimensionalists
8.6 The Internalism-Externalism Debate
References
Chapter 9: Michael Devitt, Cultural Evolution and the Division of Linguistic Labour
9.1 The Division of Linguistic Labour
9.2 Intuition and Evidence
9.3 Two Conceptions of Cultural Evolution
9.4 Is Vertical Transmission Different from Horizontal Transmission?
References
Part III: Theory of Meaning
Chapter 10: Still for Direct Reference
10.1 Direct-Reference Theory
10.1.1 Direct Reference, Semantic Content, Millianism, and Russellian Propositions
10.1.2 Direct Reference, Attitude Ascriptions, and Shakespearean Attitude Ascriptions
10.1.3 Direct Reference, Definite Descriptions, and Scope Ambiguity
10.2 Devitt’s Methodology and Initial Theory of Meaning
10.2.1 Devitt’s Methodology
10.2.2 Devitt’s Initial Theory of Meaning
10.3 Some Potential Hindrances to Dialogue Between Devitt and Direct-Reference Theorists
10.3.1 Direct Reference and Devitt on Propositions
10.3.2 Direct Reference and Devitt on Conventional Meaning
10.3.3 Devitt’s Notions of Opacity and Transparency, and Being Shakespearean
10.4 Devitt’s Revised Theory of Opaque Attitude Ascriptions
10.4.1 Monolingual Non-English Speakers
10.4.2 Bilingual Speakers and Devitt’s Revised View
10.5 From Devitt’s Revised Theory to Shakespearean Attitude Ascriptions
10.5.1 Devitt’s Revised Theory and Substitution of ‘Bernard’ for ‘Ortcutt’
10.5.2 A Translation Relation that Hinges on Co-reference
10.5.3 Well-Established, Frequently Used Translation
10.5.4 Distinct-Language-Only Translation Relations
10.5.5 How a Devittian Might Resist the Above Argument
10.5.6 What the Above Argument Does, and Does Not, Show About Devitt’s Revised Theory
10.5.7 Reflections on Direct-Reference Theory and the Preceding Argument that Devitt’s Revised Theory Implies Shakespearean Attitude Ascriptions
10.6 Replies to Devitt’s Arguments Against Direct-Reference Theory
10.6.1 The Identity Problem and the Opacity Problem for Direct Reference
10.6.2 A Direct-Reference Reply to Devitt’s Identity and Opacity Problems
10.7 More on Direct-Reference Theory and Explanation of Behavior
10.7.1 Truth-Conditions for ‘Because’ Sentences
10.7.2 ‘Because’ Sentences and Explanation
10.7.3 Explanations and Identity
10.7.4 Direct Reference and True ‘Because’ Sentences
10.7.5 Direct Reference and Explanation
10.8 Devitt’s Reply
10.9 Conclusion
References
Chapter 11: Naming and Non-necessity
11.1  The Examples
11.2  A Purported Proof
11.3  Quasi-a-priority
11.4 Kripke’s Revised Case
References
Chapter 12: Against Rigidity for General Terms
12.1 Introduction
12.2 Against Rigid Essentialism
12.3 Against Rigid Expressionism
12.4 Conclusion
References
Chapter 13: Devitt and the Case for Narrow Meaning
13.1 Narrow Content
13.2 Syntactic Psychology
13.3 Narrow Psychology
13.4 Defending Narrow Psychology
13.5 Objections to the 1989 Picture
13.6 Abandoning Narrow Psychology
13.7 Against the Functional-Role View of Narrow Meanings
13.8 Explaining “Wide” Behavior
13.9 The Problem of Psychosemantics
13.10 Against Two Further Candidates for Narrow Meaning
References
Chapter 14: Languages and Idiolects
14.1 Introduction
14.2 To Be Defended
14.3 Objections and Replies
References
Part IV: Methodology
Chapter 15: Explanation First! The Priority of Scientific Over “Commonsense” Metaphysics
15.1 Introduction
15.2 Scientific vs. Commonsense Realism
15.2.1 Secondary Properties: Color
15.3 Language
15.3.1 Ontology
15.3.2 Linguistic Explanation
15.4 The A Priori
15.4.1 A Working vs. an Explanatory Epistemology
15.4.2 Is Quinean Holism a Good Abduction?
15.4.3 Is a Naturalistic A Priori Obscure?
15.5 Conclusion
References
Chapter 16: Experimental Semantics, Descriptivism and Anti-descriptivism. Should We Endorse Referential Pluralism?
16.1 Introduction
16.2 Two Distinctions: Use vs. Reflections on Use, and Use vs. Interpretation. Testing Use
16.3 On Referential Pluralism
References
Part V: Metaphysics
Chapter 17: Scientific Realism and Epistemic Optimism
17.1 Introduction
17.2 Devitt’s Formulations of Scientific Realism
17.3 Metaphysical and Scientific Issues
17.4 Confidence
17.5 Scientific Realism
References
Chapter 18: Species Have Historical Not Intrinsic Essences
18.1 Millian Kinds
18.2 Essences
18.3 Biological Taxa
18.4 Devitt on (Partly) Historical Essences
18.5 Historical over Intrinsic Essences
18.6 Conclusion
References
Part VI: Michael Devitt’s Responses
Chapter 19: Stirring the Possum: Responses to the Bianchi Papers
19.1 Philosophy of Linguistics
19.1.1 The Linguistic Conception of Grammars (Collins, Rey)
19.1.1.1 Introduction
19.1.1.2 The “Master Argument”
19.1.1.3 Linguistic Realism and Explanation
19.1.1.4 The Paraphrase Response
19.1.1.5 Criticism of the Paraphrase Response
19.1.2 The Psychological Reality of Language (Camp)
19.2 Theory of Reference
19.2.1 Reference Borrowing (Raatikainen, Sterelny, Horwich, Recanati)
19.2.2 Grounding (Raatikainen, Recanati)
19.2.3 Kripkean or Donnellanian? (Bianchi)
19.2.4 The Qua-Problem for Proper Names (Raatikainen, Reimer)
19.2.5 Causal Descriptivism (Raatikainen, Jackson, Sterelny)
19.2.5.1 Jackson
19.2.5.2 Sterelny
19.3 Theory of Meaning
19.3.1 Direct Reference (Braun, Horwich)
19.3.2 Descriptive Names and “the Contingent A Priori” (Salmon, Schwartz)
19.3.3 Rigidity in General Terms (Schwartz)
19.3.4 Narrow Meanings (Lycan, Horwich)
19.3.5 The Use Theory (Horwich)
19.4 Methodology
19.4.1 Putting Metaphysics First (Rey)
19.4.2 “Moorean Commonsense” (Rey)
19.4.3 Intuitions (Martí, Sterelny, Jackson)
19.4.4 Experimental Semantics (Martí, Sterelny)
19.5 Metaphysics
19.5.1 The Definition of “Scientific Realism” (Godfrey-Smith)
19.5.2 Biological Essentialism (Godman and Papineau)
19.5.2.1 Introduction
19.5.2.2 Summary of Argument for Intrinsic Biological Essentialism (IBE)
19.5.2.3 G&P on Alice and Artifacts
19.5.2.4 Implements
19.5.2.5 Species
References
Index

Philosophical Studies Series

Andrea Bianchi  Editor

Language and Reality from a Naturalistic Perspective
Themes from Michael Devitt

Philosophical Studies Series Volume 142

Editor-in-Chief
Mariarosaria Taddeo, Oxford Internet Institute, Digital Ethics Lab, University of Oxford, Oxford, UK

Executive Editorial Board
Patrick Allo, Vrije Universiteit Brussel, Brussel, Belgium
Massimo Durante, Università degli Studi di Torino, Torino, Italy
Phyllis Illari, University College London, London, UK
Shannon Vallor, Santa Clara University, Santa Clara, CA, USA

Board of Consulting Editors
Lynne Baker, Department of Philosophy, University of Massachusetts, Amherst, USA
Stewart Cohen, Arizona State University, Tempe, AZ, USA
Radu Bogdan, Dept. Philosophy, Tulane University, New Orleans, LA, USA
Marian David, Karl-Franzens-Universität, Graz, Austria
John Fischer, University of California, Riverside, Riverside, CA, USA
Keith Lehrer, University of Arizona, Tucson, AZ, USA
Denise Meyerson, Macquarie University, Sydney, NSW, Australia
François Recanati, Ecole Normale Supérieure, Institut Jean Nicod, Paris, France
Mark Sainsbury, University of Texas at Austin, Austin, TX, USA
Barry Smith, State University of New York at Buffalo, Buffalo, NY, USA
Nicholas Smith, Department of Philosophy, Lewis and Clark College, Portland, OR, USA
Linda Zagzebski, Department of Philosophy, University of Oklahoma, Norman, OK, USA

Philosophical Studies aims to provide a forum for the best current research in contemporary philosophy broadly conceived, its methodologies, and applications. Since Wilfrid Sellars and Keith Lehrer founded the series in 1974, the book series has welcomed a wide variety of different approaches, and every effort is made to maintain this pluralism, not for its own sake, but in order to represent the many fruitful and illuminating ways of addressing philosophical questions and investigating related applications and disciplines. The book series is interested in classical topics of all branches of philosophy including, but not limited to:

• Ethics
• Epistemology
• Logic
• Philosophy of language
• Philosophy of logic
• Philosophy of mind
• Philosophy of religion
• Philosophy of science

Special attention is paid to studies that focus on:

• the interplay of empirical and philosophical viewpoints
• the implications and consequences of conceptual phenomena for research as well as for society
• philosophies of specific sciences, such as philosophy of biology, philosophy of chemistry, philosophy of computer science, philosophy of information, philosophy of neuroscience, philosophy of physics, or philosophy of technology; and
• contributions to the formal (logical, set-theoretical, mathematical, information-theoretical, decision-theoretical, etc.) methodology of sciences.

Likewise, the applications of conceptual and methodological investigations to applied sciences as well as social and technological phenomena are strongly encouraged. Philosophical Studies welcomes historically informed research, but privileges philosophical theories and the discussion of contemporary issues rather than purely scholarly investigations into the history of ideas or authors. Besides monographs, Philosophical Studies publishes thematically unified anthologies, selected papers from relevant conferences, and edited volumes with a well-defined topical focus inside the aim and scope of the book series. The contributions in the volumes are expected to be focused and structurally organized in accordance with the central theme(s), and are tied together by an editorial introduction. Volumes are completed by extensive bibliographies.

The series discourages the submission of manuscripts that contain reprints of previous published material and/or manuscripts that are below 160 pages/88,000 words. For inquiries and submission of proposals authors can contact the editor-in-chief Mariarosaria Taddeo via: [email protected]

More information about this series at http://www.springer.com/series/6459

Andrea Bianchi Editor

Language and Reality from a Naturalistic Perspective
Themes from Michael Devitt

Editor
Andrea Bianchi
Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Parma, Italy

ISSN 0921-8599  ISSN 2542-8349 (electronic)
Philosophical Studies Series
ISBN 978-3-030-47640-3  ISBN 978-3-030-47641-0 (eBook)
https://doi.org/10.1007/978-3-030-47641-0

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Michael Devitt


Contents

1  Introduction – Michael Devitt at Eighty (Andrea Bianchi)  1

Part I  Philosophy of Linguistics

2  Invariance as the Mark of the Psychological Reality of Language (John Collins)  7
3  Priorities and Diversities in Language and Thought (Elisabeth Camp)  45

Part II  Theory of Reference

4  Theories of Reference: What Was the Question? (Panu Raatikainen)  69
5  Multiple Grounding (François Recanati)  105
6  Reference and Causal Chains (Andrea Bianchi)  121
7  The Qua-Problem for Names (Dismissed) (Marga Reimer)  137
8  Language from a Naturalistic Perspective (Frank Jackson)  155
9  Michael Devitt, Cultural Evolution and the Division of Linguistic Labour (Kim Sterelny)  173

Part III  Theory of Meaning

10  Still for Direct Reference (David Braun)  193
11  Naming and Non-necessity (Nathan Salmon)  237
12  Against Rigidity for General Terms (Stephen P. Schwartz)  249
13  Devitt and the Case for Narrow Meaning (William G. Lycan)  267
14  Languages and Idiolects (Paul Horwich)  285

Part IV  Methodology

15  Explanation First! The Priority of Scientific Over “Commonsense” Metaphysics (Georges Rey)  299
16  Experimental Semantics, Descriptivism and Anti-descriptivism. Should We Endorse Referential Pluralism? (Genoveva Martí)  329

Part V  Metaphysics

17  Scientific Realism and Epistemic Optimism (Peter Godfrey-Smith)  345
18  Species Have Historical Not Intrinsic Essences (Marion Godman and David Papineau)  355

Part VI  Michael Devitt’s Responses

19  Stirring the Possum: Responses to the Bianchi Papers (Michael Devitt)  371

Index  457

Contributors

Andrea Bianchi  Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Parma, Italy
David Braun  Department of Philosophy, University at Buffalo, Buffalo, NY, USA
Elisabeth Camp  Department of Philosophy, Rutgers University, New Brunswick, NJ, USA
John Collins  School of Politics, Philosophy, Language, and Communication, University of East Anglia, Norwich, UK
Michael Devitt  Philosophy Program, City University of New York Graduate Center, New York, NY, USA
Peter Godfrey-Smith  School of History and Philosophy of Science, University of Sydney, Sydney, Australia
Marion Godman  Department of Political Science, Aarhus University, Aarhus, Denmark
Paul Horwich  Department of Philosophy, New York University, New York, NY, USA
Frank Jackson  School of Philosophy, Australian National University, Canberra, ACT, Australia
William G. Lycan  Department of Philosophy, University of Connecticut, Storrs, CT, USA
Genoveva Martí  Department of Philosophy, ICREA and University of Barcelona, Barcelona, Spain
David Papineau  Department of Philosophy, King’s College London, London, UK; City University of New York Graduate Center, New York, NY, USA
Panu Raatikainen  Degree Programme in Philosophy, Faculty of Social Sciences, Tampere University, Tampere, Finland
François Recanati  Chaire de Philosophie du langage et de l’esprit, Collège de France, Paris, France
Marga Reimer  Department of Philosophy, University of Arizona, Tucson, AZ, USA
Georges Rey  Department of Philosophy, University of Maryland, College Park, MD, USA
Nathan Salmon  Department of Philosophy, University of California, Santa Barbara, CA, USA
Stephen P. Schwartz  Department of Philosophy and Religion, Ithaca College, Ithaca, NY, USA
Kim Sterelny  School of Philosophy, Research School of the Social Sciences, Australian National University, Acton, Canberra, ACT, Australia

About the Editor

Andrea Bianchi is an associate professor at the University of Parma. He has published a number of papers on various topics in philosophy of language and philosophy of mind, and is especially interested in foundational issues concerning language. His current research focuses on the relationships between language and thought and the nature of the primal semantic relation, reference. Among other things, he has edited On Reference (Oxford University Press 2015).


Chapter 1

Introduction – Michael Devitt at Eighty

Andrea Bianchi

It is difficult to deny, I believe, that during the last forty years or so Michael Devitt has been a leading philosopher in the analytic field. The purpose of this volume is to celebrate his many important contributions to philosophy on the occasion of his eightieth birthday.

Born to Australians in Kuala Lumpur, Malaysia, Devitt was initially raised in Sydney – and anyone who has had the chance to meet him knows just how Australian he is – but at the age of eight moved to England, where he spent all of his youth. There, after a passionate reading of Russell’s The Problems of Philosophy, he started to become interested in philosophy. Back in Australia for various reasons, in 1962 he enrolled at the University of Sydney, where he majored in philosophy and psychology. In 1967 he moved to the United States (an unprecedented choice for an Australian philosopher) to take a Ph.D. in philosophy at Harvard University, where he had W.V. Quine as his supervisor and Hilary Putnam among his teachers. Back in Australia again in 1971, he taught at the University of Sydney for seventeen years, before returning to the United States to occupy a position first, in 1988, at the University of Maryland and then, in 1999, at CUNY’s Graduate Center, which he contributed to making one of the top places for studying, and doing research in, philosophy. A tireless traveler, throughout his career Devitt continuously gave talks and participated in conferences all around the world, disseminating ideas within the philosophical community, fostering the philosophical debate, and building deep intellectual as well as human relationships everywhere.

Together with Quine, from whom he inherited his unabashed naturalism and the animadversion to the a priori, and Putnam, a thinker who had a deep influence on Devitt’s philosophical development was Saul Kripke. In fact, in 1967 Devitt attended a series of lectures by the young Kripke at Harvard, which anticipated those on naming and necessity given at Princeton University in 1970 – as he likes to recall, he missed only one of them to take part in a rally against the Vietnam war. Impressed by them – he was among the first to realize how revolutionary Kripke’s ideas were –, Devitt decided to work on the semantics of proper names and other singular terms (a topic to which he had been already introduced by C.B. Martin in Sydney) and elaborated his causal theory of reference, which brought him international fame. His Ph.D. dissertation, The Semantics of Proper Names: A Causal Theory (1972), was devoted to it, as well as his first philosophical article, “Singular Terms” (1974), his first book, Designation (1981), and dozens of later publications. In the following years, Devitt defended the related, and “shocking,” idea that meanings can be causal, non-descriptive, modes of presentation, and began to be interested in the more general issue of the nature of language. This led him to argue, first, in Ignorance of Language (2006), against Chomskyan orthodoxy, claiming that languages are external rather than internal; and, second, in Overlooking Conventions, which is about to appear for Springer, against various forms of contextualism in the philosophy of language. On philosophy of language he also wrote, together with one of the contributors to this volume, Kim Sterelny, an opinionated and very successful introduction, Language and Reality (1987), whose title (which he did not like) inspired that of this book (which, alas, he likes no better).1

But Devitt’s philosophical interests extend way beyond philosophy of language. He is famous for vigorously defending realism (in his second, successful, book, Realism and Truth, 1984), against various, once trendy, forms of constructivism – from Kant through Goodman and the “renegade” Putnam to post-modernism –, which are less trendy now perhaps thanks to his criticisms too. Moreover, he has always been interested in methodology and metaphilosophy: he has tried to get clear about the role and nature of intuitions, he has criticized the widespread idea that we may have a priori knowledge from a naturalistic perspective, and he has insisted on Putting Metaphysics First, as the title of a collection of his essays (2010) declares. And he has also contributed to philosophy of mind, advocating a version of the representational theory of mind, and, more recently, to philosophy of biology, where he has argued in favor of a version of biological essentialism.

I first met Devitt in April 2005. I had just come back to Italy from Los Angeles, where I had spent one year doing research at UCLA after finishing my graduate studies. Invited by the late Eva Picardi, he and Stephen Neale came to Bologna, the city where I was living at the time, to discuss the referential use of definite descriptions, a topic made famous by Keith Donnellan. I admit that I was quite surprised to discover that even outside California people were able to say sensible things on the subject.

However, my human and intellectual relationship with Michael did not begin until some years later, when, in September 2009, we were both speaking at a conference on meaning organized by Alex Burri in Erfurt, Germany. We started to argue about reference, and we are not through with it yet. Afterwards, Devitt came various times to Parma (because of the quality of its food, he would probably gloss), to give talks and take part in workshops and conferences at my university. We have also frequently met elsewhere: in Bologna, in Rome, a couple of times in Barcelona thanks to another of the contributors to this volume, Genoveva Martí, a couple of times in Dubrovnik. And, more recently, in his wonderful house (“Versailles on Hudson”!) in Upstate New York. Although we disagree on various issues, as my contribution to this volume also witnesses, on each of these occasions I learned a lot from him. And, of course, it was always fun.

Most, if not all, contributors to this volume came to know Devitt much earlier than me. All renowned philosophers from all over the world, they are former students or colleagues, but first of all friends, of his. And they have all used the chance offered to them by this celebration of his eightieth birthday to add another twist to their, often long-lasting, intellectual exchange with him, engaging with many aspects of his philosophical work.

As should have become clear from what I have written so far, Devitt likes to argue, or, as they colorfully put it in Australia, “to stir the possum” (Stirring the Possum was indeed his suggestion for the title of this volume, a suggestion which, to his dismay, was eventually rejected because of its potential obscurity to non-Australian readers). Philosophy advances this way, he says. Thus, he wrote extensive replies to all the contributions to this volume, which, organized, like the contributions themselves, into five parts (Philosophy of Linguistics, Theory of Reference, Theory of Meaning, Methodology, and Metaphysics), are collected at the end of it and reveal his current stand on many of the issues he has been interested in during his long career. And I am pretty sure that the show will go on: many of these exchanges will continue, back and forth, for years. Thanks, Michael!

1  Just for the record, Devitt and Sterelny wanted to call their book Language, Mind, and Everything, inspired on the one hand by the opening of Quine’s “On What There Is” and on the other by the Ultimate Question of Life, the Universe, and Everything in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy. The publisher found the title too jocular.

Part I

Philosophy of Linguistics

Chapter 2

Invariance as the Mark of the Psychological Reality of Language

John Collins

Invariants are the concepts of which science speaks in the same way as ordinary language speaks of “things”, and which it provides with names as if they were ordinary things. Born (1953: 149)

Abstract  Devitt articulates and defends what he calls the ‘linguistic conception’ of generative linguistics, where this position stands in contrast to the prevailing ‘psychologistic conception’ of Chomsky and generative linguists generally. I shall argue that the very idea of anti-psychologism vis-à-vis generative linguistics is premised upon a misunderstanding, viz., the thought that there are linguistic phenomena as such, which a linguistic theory may target directly, with psychological phenomena being targeted only indirectly. This thought is incorrect, for the ontology of a theory is ultimately what is invariant over and essential to the explanations the theory affords. In this light, linguistic theory is about psychological phenomena because the psychological states of speaker-hearers are the invariances of linguistic explanation, and there are no such invariances that involve externalia. What ultimately counts as psychological itself is partly determined by the very kind of explanations our best theories offer. In a nutshell, the explanations of generative theories neither entail nor presuppose an external linguistic reality, but do presuppose and entail a system of internal mind/brain states the theories seek to characterise.

Keywords  Noam Chomsky · Realism · Linguistic competence · Psychologism · Michael Devitt · Linguistic intuitions · Mental processes · I-language





2.1  Introduction

The major meta-theoretical issue throughout the history of generative linguistics has been the ontological status or ‘reality’ of the posits of the various generative theories. The received view is that generative linguistics is a branch of psychology, a component of the cognitive sciences. In this light, if taken to be true, a given theory is read as specifying various ‘psychologically real’ properties, rather than properties of mind-external entities however else construed (social artefacts, platonic entities, etc.). Michael Devitt’s book, Ignorance of Language (2006a), is the most developed and philosophically sophisticated assault on this received view, reflecting but improving upon the earlier attitudes of Katz (1981), Soames (1984, 1985), Katz and Postal (1991), Cowie (1999), and numerous others. Devitt articulates and defends what he calls the ‘linguistic conception’ of generative linguistics, where this position stands in contrast to the prevailing ‘psychologistic conception’ of Chomsky and generative linguists generally. Such anti-psychologism amounts to the claim that generative theories are really about an external language such that, in the first instance, the theories’ explanations pertain to the putative externalia that constitute the language, not the psychological (internal) states of speaker-hearers.

I shall argue that the very idea of anti-psychologism vis-à-vis generative linguistics is premised upon a misunderstanding, viz., the thought that there are linguistic phenomena as such, which a linguistic theory may target directly, with psychological phenomena being targeted only indirectly. This thought is incorrect, for the ontology of a theory is ultimately what is invariant over and essential to the explanations the theory affords. In this light, linguistic theory is about psychological phenomena because the psychological states of speaker-hearers are the invariances of linguistic explanation, and there are no such invariances that involve externalia. What ultimately counts as psychological itself is partly determined by the very kind of explanations our best theories offer. In a nutshell, the explanations of generative theories neither entail nor presuppose an external linguistic reality, but do presuppose and entail a system of internal mind/brain states the theories seek to characterise.1

Devitt’s book has attracted some strong criticism, but largely misdirected, according to Devitt, for the critics uniformly neglect his ‘master argument’.2 I shall (i) reconstruct and carefully analyse Devitt’s ‘master argument’, which, according to Devitt, has so confounded his critics, and (ii) question its two principal premises. First, though, we need to address how we ought to understand ‘psychological reality’ and ‘reality’ more generally, in the context of a theoretical inquiry.

1  Some positions admit both externalist and internalist commitments, such as those articulated by George (1989) and Higginbotham (1991). By the lights of the arguments to follow, the externalist aspects of such positions are questionable insofar as they flow from the kind of reasoning that informs the straightforward externalist positions. There are other positions that defend the notion of an external language in opposition to the generative approach, but these tend to be animated by the kind of concerns Devitt makes explicit (e.g., Lewis 1975; Wiggins 1997).

2  Thus: ‘none of [my] critics pays much attention to my argument for rejecting the psychological conception. The failure to address arguments against the psychological conception is traditional’ (Devitt 2006b: 574; cp. 2006a: 8). The critics Devitt has in mind are Collins (2006), Matthews (2006), and Smith (2006). Devitt repeated his charge at an ‘Author meets his critics’ session at the meeting of the Pacific Division of the APA (2007), by which time the benighted critics had swelled to include Collins (2007a), Higginbotham (2007), and Pietroski (2008): Smith [(2006)] and company do make some rather perfunctory attempts at [refuting the argument] … but they all fail dismally in my view... It is time that my Chomskian critics made a serious attempt to refute it. If the argument is mistaken, it should be fairly easy to say why: it is not an attempt to prove Fermat’s Last Theorem! Whatever the perceived failings of Devitt’s critics, I trust the present paper is at least a meeting of Devitt’s challenge.

2.2  The Very Idea of ‘Psychological Reality’

Following Chomsky, generative linguistics is broadly conceived by its practitioners to be a branch of psychology, ultimately human biology. This conception rests on the notion that the human language capacity is a biophysical phenomenon, which, of course, gives rise to complex communicative, social, and historical arrangements; indeed, it is hardly implausible to think that human culture largely depends upon our shared linguistic capacity. Generative theories, though, seek to abstract and idealise from such massive interaction effects in order to target a supposed distinctive core linguistic capacity.3 Thus, generative linguistics, in essential concert with other disciplines, seeks to explain the development and function of this capacity as an aspect of the human mind/brain. Such is what I mean by ‘psychologism’ in the linguistic realm. Of course, whether past or current theories have been successful in this endeavour is another matter.

3  For example:
Complex innate behavior patterns and innate “tendencies to learn in specific ways” have been carefully studied in lower organisms. Many psychologists have been inclined to believe that such biological structure will not have an important effect on acquisition of complex behaviour in higher organisms, but I have not been able to find serious justification for this attitude. (Chomsky 1959: 577 n. 48)
[T]here is surely no reason today for taking seriously a position that attributes a complex human achievement entirely to months (or at most years) of experience, rather than to millions of years of evolution or to principles of neural organization that may be even more deeply grounded in physical law... [Language] would naturally be expected to reflect intrinsic human capacity in its internal organization. (Chomsky 1965: 59)
The faculty of language can reasonably be regarded as a “language organ” in the sense in which scientists speak of the visual system, or immune system, or circulatory system as organs of the body. (Chomsky 2004: 380)



Devitt (2006a: 9) suggests that a theorist’s intention to demarcate a cognitive domain of inquiry offers little reason, by itself, for the conclusion that the technology of the theory (a grammar, let’s call it) should be credited with ‘psychological reality’. The move is ‘fast but dirty’, for ‘[i]t remains an open question whether the rules hypothesised by a grammar are psychologically real’. Devitt’s thought here is that merely intending a theory, no matter how apparently successful, to be about X doesn’t make it about X; it remains an open question that the theory might be true of some non-X domain. So, in particular, linguistic theory tells us about speaker-hearers alright, but it just doesn’t follow that the posits of the theory are ‘psychologically real’; the theory could tell us about speaker-hearers by way of being true of something else, where what a theory is true of should attract our proper ontological commitment, assuming we uphold the theory in the first place.

Well, just what is it to be psychologically real? Devitt entertains a host of construals familiar from the literature of the past forty years; at its simplest, though, ‘psychological reality’ describes ‘structures [that] are employed in speaking and understanding’ (Chomsky 1975: 160, quoted by Devitt 2006b: 574). Devitt glosses Chomsky’s description of structures being ‘employed in speaking and hearing’ as ‘on-line’ processing, the production of linguistic tokens (Devitt 2006a: 36; 2006b: 578; cp. Soames 1984: 154–156; 1985: 160).4 For Devitt, as we shall see, ‘psychological reality’ just is the domain of what he calls ‘processing rules’. So, Devitt’s worry is that the mere intent to be offering a theory of the mind/brain does not suffice to credit the theory’s posits with psychological reality; for, without further ado, it has not been established that the grammar subserves on-line linguistic production/consumption. With this lacuna unfilled, a grammar might reasonably be true of some non-psychological realm. Put another way, a speaker might ‘behave as if her behavior were governed by’ the grammar’s posits, while in fact it is not (Devitt 2006a: 57). It follows that we are not entitled from the acceptance or, indeed, corroboration of a grammar alone to declare the grammar’s posits to be psychologically real.5

4  The ‘on-line’ conception of psychological reality, although never defended by Chomsky, has been articulated, in some form or other, by many in the field; e.g., Levet (1974), Bresnan (1978), Fodor et al. (1974, 1975), Bever et al. (1976), Bresnan and Kaplan (1982), Fodor (1983), Berwick and Weinberg (1984), Soames (1984), and Pylyshyn (1991). The kind of ‘transparency’ model entertained by Miller and Chomsky (1963) did not take processing to be a criterion of reality for a grammar’s posits, but merely suggested that there was a structural concordance between the two, a claim that can be elaborated in different ways without disrespecting the competence/performance distinction (cp. Berwick and Weinberg (1984) and Pritchett (1992)). Although Devitt accepts that ‘on-line’ processing is the mark of the psychologically real, given his ‘linguistic realism’, he also holds that a grammar need only be respected by the processing rules, i.e., that is all a grammar tells us about psychology. Construed internally, in the way I shall suggest below, ‘respect’ indeed suffices for a perfectly good sense of psychological reality independently of any theory of processing.
5  Devitt’s point here is inherited from Quine’s (1972) suggestion that rule ‘following’, as opposed to ‘conforming’, involves consciousness of the rule. The distinction misses the obvious difference between behaviour being explained by a posited rule, regardless of consciousness, and behaviour merely conforming to any number of conceivable rules (Chomsky 1975). Devitt also echoes Soames’s (1984: 134) thought that ‘linguistic theories are conceptually distinct and empirically divergent from psychological theories of language acquisition and linguistic competence’.



Before we assess Devitt’s ‘master argument’ for the conclusion that generative linguistic theories are, in fact, about something non-psychological, I have three main points to make about the reasoning just presented, one to do with nomenclature, one exegetical, another more substantial, which bears on the general question of what notion of reality is supposed to be informing thoughts of what a theory is really about.

2.2.1  Grammars and Psychologism First, then, nomenclature. The notion of a grammar (particular and universal) is familiarly ambiguous between the linguist’s theory and what the theory is about. As we shall see, Chomsky (1986) resolves the ambiguity by coining the term ‘I-language’ for the type of cognitive state he takes the relevant theories to be about; we may retain the term ‘grammar’ to designate the theories themselves that take I-languages as their objects. In this sense, a grammar’s (/theory’s) structural description of a sentence will be a hypothesis, not about a sentence as an external type (or external tokens of a type), but about the capacity of a cognitive system to (amongst other things) represent various external media (sound waves or ink marks, say) in terms of abstract syntactic, phonological, and semantic features, much as theories of vision generate hypotheses about how organisms represent visual scenes. Of course, to adopt this resolution of the ambiguity will make it hard for externalists about the object of inquiry to state their claims coherently, as linguistic externalia on Chomsky’s conception will simply be dead sounds or gestures, as it were, not anything linguistic at all; the theory is about the cognitive resources that lend the According to Soames, the first claim of conceptual distinctness rests upon linguistic theory being animated by ‘leading questions’ that are independent of psychology and the second claim of empirical divergence rests upon the clear implausibility of taking the rules and principles posited by syntactic theories to be the actual causal springs of linguistic behaviour. As we shall see, the first claim amounts to a stipulation in favour of an externalist notion of language (the ‘leading questions of linguistics’ are open to both externalist or internalist construal; the choice between them cannot be decided by a priori fiat). The second distinction rests upon a conception of the relevant psychology as restricted to speech production/recognition. It is perfectly sensible to attempt to delineate the abstract structure the mind/brain realises without thinking that one is thereby specifying the actual causal processes involved in linguistic processing, whatever that might mean. Besides, it is not even the case that generative theorists have sought psychological theories in Soames’s sense. Soames (1984: 147–151) appears to be confused on the competence/performance distinction. He imagines that the traditional cognitivist approach is to insulate competence (narrowly construed) from any data from processing; competence is merely a theory of the ‘grammatical judgements of idealized speaker-hearers’ (1984: 148 n. 19). Little wonder that Soames (1984: 154–155) sees the linguist as facing a ‘dilemma’: on the one hand she seeks a psychological theory, on the other she insulates herself from the relevant data. In explicit contradiction of this reasoning, the first chapter of Aspects (Chomsky 1965) seeks to establish the empirical integration of performance and competence, without competence itself being competence to produce or consume anything. Competence is a standing state, abstractly specified, that enables the integration of distinct capacities called upon in linguistic behaviour (for discussion, see Collins 2004, 2007b).



externalia a linguistic life, not the externalia themselves. In order, then, to at least be able to state the conflicting interpretations of generative theory, I shall, pro tem, use ‘grammar’ in the theorist’s sense, and leave it open what the object of a grammar actually is – an I-language or something external (I shall return to this issue below). My second point about nomenclature concerns ‘psychologism’. After Frege (1884/1950), ‘psychologism’ is most often used abusively to denote any account that confuses the logical or the properly semantic with the psychological, which is understood to be a local, contingent mental set-up, a mere subjective matter, which is constitutively unable properly to realise the normative, logical, or modal character of thought. It would be far beyond my present scope adequately to discuss any of the many issues that attach to psychologism so construed. Two quick remarks will have to suffice. First, as already indicated, by ‘psychologism’ in the positive sense I shall employ, I do not mean any thesis about logic, normativity, meaning, concept possession, or the like; all I intend is that linguistic theory is concerned with the mind/brain, in a sense to be explained, as opposed to patterns of behaviour or some external media of symbols and signs. By such lights, one may happily accept the anti-psychologism of Frege and others who followed him. Of course, if the anti-psychologism that prevailed throughout much of twentieth century philosophy is extended to a kind of transcendental claim, whereby the philosopher a priori judges what is and isn’t possible in empirical inquiry, then the thesis should be flatly rejected. Secondly, to say that linguistic theory is psychologistic is not to suggest a reductionist attitude towards the relevant linguistic kinds. This will be a crucial feature of the following. It is perfectly coherent to view linguistic kinds as specified in a theory as essentially abstract, sui generis (Chomsky 1987). The theory will still be psychological if it only explains psychological phenomena.

2.2.2  Chomsky on Psychological Reality My initial exegetical point pertains to Devitt’s reading of Chomsky. Pace Devitt (2006a: 64), Chomsky has not ‘persistently suggested’ a processing conception of ‘psychological reality’; on the contrary, Chomsky’s (1955–56/75: 36) use of the notion comes from Sapir, and Chomsky has been persistently leery of its standing, if understood as being more substantial than the bare idea that linguistic theory targets psychological phenomena for its explananda. So, if a grammar is a cognitive theory, then, of course, the posited structures are understood to be ‘employed’ in cognitive activities. It does not follow, though, that such cognitive theories are ones of processing (production/consumption of tokens), or, in Soames’s (1984) terms, that there is an ‘isomorphism’ between the grammar and processing rules; indeed, such a consequence would contradict Chomsky’s position in Aspects, which explicitly separates competence as the object of theory from on-line processing (1965: 8–9). A competence theory is to be thought of as the abstract specification of a function (lexical items to complex structures) that imposes a set of conditions upon



processing, but is not a theory of the processing itself or reducible to it. By 1980, Chomsky (1980a: 106-109) diagnoses the appeals to the ‘mysterious property’ of psychological reality as an undue insistence that certain special kinds of evidence (e.g., from parsing or neurology) are required to establish that the posits of an otherwise accepted (evidentially supported) theory are real.6 Of course, we should like as much convergent evidence as possible, but if doubts over the psychological reality of certain theoretical posits boil down to just a concern that a certain kind of evidence is missing, then there is no issue about reality at all; the theory simply remains unsupported rather than true of some other reality.7 I think Chomsky’s deflation of the issue is in essence correct, but it misses a crucial step by not offering a positive construal of what psychological reality a competence theory might have, given the kind of theory it is, i.e., given that it is not a processing theory. That is, the sceptic of the psychological reality of linguistic posits is liable to think, ‘Bother different kinds of evidence! If a competence theory does not make claims about processes, then it is not a psychological theory, and so, if true, it must be true of something other than psychology’. Chomsky resists this urge to treat competence as really a form of processing, if a competence theory is to be psychologically real, but he leaves those with the urge unsatisfied. The way to bring resolution here is to offer a theory-relevant notion of ‘reality’ in general, from which it follows that linguistic theory is psychological notwithstanding the fact that it is not a processing theory. This takes us to the substantial issue of what we should mean by ‘reality’ when assessing a theory.

2.2.3  Minimal Realism The following conceptual claim seems to me unremarkable when we are not dealing with linguistics: Conceptual Thesis: If a theory T primitively explains D-phenomena, then at least some of T’s posits are minimally real over D, i.e., a theory is about what it explains. 6  Of course, not every term of an evidentially supported theory is understood to correspond to a real element of the domain; many elements will be wholly theory internal and at a given stage of inquiry it might not be clear what is real and what is not. This kind of complication, however, holds for any empirical inquiry, and poses no special problem for linguistics in particular. See Harman (1980) and Chomsky (1980b) for discussion. 7  Although my focus in this paper is on intuitive evidence, for that is the locus of much of the philosophical disputes, it bears emphasis that it is Chomsky’s long-standing position that ‘discoveries in neurophysiology or in the study of behaviour and learning... might lead us to revise or abandon a given theory of language or particular grammar, with its hypotheses about the components of the system and their interaction’ (Chomsky 1975: 37). In general, ‘We should always be on the lookout for new kinds of evidence, and cannot know in advance what they will be’ (Chomsky 1980a: 109). For overviews of the relevant evidence far beyond intuitive data, see, e.g., Jenkins (2000, 2004) and Anderson and Lightfoot (2002).



Let the following definitions hold: Primitive explanation: T primitively explains D iff T explains D-phenomena independently of other theories, and T does not explain anything non-D without being embedded in a larger theory. T is explanatorily invariant over D. Minimal reality: A theory’s posits are minimally real over D iff they are interpretable (not mere notational) elements of evidentially supported theories that explain D-phenomena. The thesis is unremarkable for it claims nothing more than that at least some of the posits of successful theories are counted as ‘real’ over some domain when the theories actually explain the relevant phenomena and that is all they explain on their own. Let’s consider two familiar examples from physics. We may say that Newtonian mechanics (or the classical form via Lagrange) primitively explains phenomena for which the classical concepts of mass and force apply in the above sense. So, mass and force are presupposed in every explanation, and the theory doesn’t explain anything beyond the application of these concepts without additional resources. Thus, Newton was led to his theory by consideration of the orbit of Earth’s moon, but the theory does not primitively explain the orbit, i.e., the theory says nothing about the particular mass of the moon or its distance from Earth or whatever masses might be affecting it; the theory is invariant over such contingencies. Otherwise put, even if there were no Earth or moon, or their masses and relative distance were different from what they are, the theory would not be refuted. All the theory primitively explains is the interaction of mass with force, not why we find the particular masses we do and their relative distances from one another. In such a sense, the theory is only minimally committed to certain kinds of interactions, given mass and force. For another example, consider the development of the theory of electromagnetism in the mid to late nineteenth century. The theory takes fields to be ‘physically real’, for the field equations primitively explain electromagnetic phenomena. On the other hand, it is not at all standard to take potentials or lines of force to be physically real, for they explain nothing that the fields do not explain. That is, while we can appeal to electric potential, say, describing the voltage carried by a wire, we know that it is not the potential itself that explains the current, for it is not invariant relative to the field. It is somewhat analogous to measuring the height of a mountain relative to sea-level as opposed to the lowest point of the Earth, or some other arbitrary point.8 It bears emphasis that I do not intend the notion of ‘minimal reality’ to decide on any issues in the philosophy of science as to the ultimate reality of fields or anything else; on the contrary, the expression describes those putative entities and relations towards which is directed one’s general philosophical position, be it empiricist, realist (with a capital ‘R’, if you like), structuralist, or something else. To be ‘real’ in this sense means that the posit is not arbitrary, conventional, or merely notational, but is an invariant feature of the theory’s explanations and so is counted as real, insofar as the theory is deemed true or successful. The present point, then, is simply

8  See Lange (2002: ch. 2) for a good discussion of these issues.




that one can be minimally realist about fields, but not potentials or lines of force, say, independently of any wider commitments about what such reality amounts to in some more metaphysically robust sense. My claim is that linguistics enjoys the same status as physics in this regard; linguistics need clear no extra hurdle in order for us to count its posits as real for the purpose of understanding the nature of the explanations and ontology of the science of the domain at hand. So, here is the substantive thesis (which mirrors one for field theories and electromagnetic phenomena): Substantive Thesis: Generative theories primitively explain psychological phenomena. Consequence: The posits of generative theories are psychologically real in a sense that concerns the interpretation of scientific theories. Devitt is perfectly right, then, that it ‘remains an open question’ what structures govern behaviour, have their hands on the causal steering wheel. But generative theories, qua competence theories, are not directly concerned with the governance of behaviour. This does not affect their psychological status, however, for they independently (/primitively) explain nothing non-psychological and do explain phenomena that are uncontroversially psychological. That, at least, will be the claim I shall seek to substantiate.

2.2.4  General Remarks

Before we consider Devitt's 'master argument', some general morals can be recovered from the preceding discussion. Lying behind the doubts about the psychological reality of a grammar's posits must be some selection of the following thoughts: (i) linguistic theory does not primitively explain psychological phenomena; (ii) to be genuinely psychological, linguistic theory must be rendered as a theory of processes or neuronal organisation; or (iii) the reality of psychological posits demands the satisfaction of a priori conditions or a ranking of the significance of different kinds of evidence, where both the conditions and ranking are peculiar to psychology. I think we can dismiss the third thought, for I take no relevant party to be happy with such a methodological dualism (this is the essence of Chomsky's deflationary attitude to the issue). We are left, then, with the first two thoughts. The second thought is really contingent on the first thought; for if a grammar in fact explains only psychological phenomena, then it is just a semantic stipulation to claim that the grammar remains non-psychological merely because it is not a processing theory. At any rate, if a grammar's explanatory domain were psychological, then even if one thought that a psychological theory must be one of processing, the idea that linguistics targets externalia wouldn't be advanced any. One would simply be left with a problem of how to classify linguistics and how its claims relate to a likely processing theory. The crucial thought, therefore, is the first one. It is only this thought that leads one to be genuinely sceptical of the psychological status of linguistic theory, because if the thought is true, then a grammar will not primitively explain psychological phenomena, which is just to say that a grammar is really about something other than psychology, even if it can be used, in concert with other theories, to explain such phenomena (the explanation will be indirect, non-primitive). As mentioned above, tackling these issues via Devitt is especially apt, for Devitt's argument is the most elaborated version of linguistic anti-psychologism in the literature and can be read as a distillation of the thought that there is something other than psychological phenomena that a grammar targets.

2.3  Devitt's 'Master Argument'

Devitt (2006a: ch. 2) does not present an argument as such; rather, (i) he offers three general distinctions that are intended to pertain to representational systems in general; (ii) he purports to show that the distinctions apply to language as conceived by generative linguistics in particular; and (iii) he concludes that the distinctions as so applied support a 'linguistic reality' construal of generative linguistics rather than a psychological one. Here is my reconstruction of Devitt's line of reasoning.9

(DA)
(1) a. There is a distinction between 'the theory of competence [and] its outputs/products or inputs' (2006a: 17).
    b. There is a distinction between 'the structure rules governing the outputs of a competence [and] the processing rules governing the exercise of the competence' (18).
    c. There is a distinction between 'the respecting of structure rules by processing rules [and] the inclusion of structure rules among processing rules' (22).
(2) These general distinctions apply to language as conceived by generative theories.
(3) Therefore:
    a. The theory of linguistic competence and its processing rules is distinct from the theory of the structure rules of the linguistic expressions that are the product of that competence.
    b. The theory of structure posits rules that the competence respects, but the rules need not be involved in processing.
(4) A grammar is best interpreted as a theory of the structure rules of linguistic expressions, not of linguistic competence.
(5) Therefore, a grammar is about linguistic reality (structure), not psychological reality.

Devitt (2006b: 576–577) encapsulates this argument as a challenge:

If the psychological conception of linguistics is to be saved, there must be something wrong either with the distinctions [(1a-c)] or their application to linguistics [(2)]. It's as simple as that. And if the problem is thought to lie not with the distinctions but with their application we need to be shown how human language is relevantly different from the bee's dance.

9. Devitt offered me something very similar to this argument in personal correspondence.

In line with Devitt’s challenge, for the purposes of my riposte, I shall only tackle the crucial premises (2) and (4); that is, I shall grant that Devitt’s three distinctions listed under (1) are in good standing for at least some non-linguistic systems (maybe the birds and the bees and the blacksmiths), but not for language in the form Devitt presents the distinctions. With the distinctions correctly construed, the conclusion does not follow, i.e., (5) is false. Thus, in terms of Devitt’s challenge, I shall primarily be questioning the application of the distinctions to language as conceived by generative theories, and leave unmolested the idea that the distinctions might apply elsewhere beyond language. It bears emphasis that the conclusions I seek to establish by way of challenging Devitt’s argument do not reflect any general metaphysical orientation; I reject linguistic externalia, not because they are metaphysically outrageous, or because I am inclined towards some species of idealism, but simply because they are explanatorily otiose: they neither constitute phenomena to be explained nor explain any phenomena.

2.4  Questioning Premise (2): Applying the Distinctions Devitt (2006a: ch. 2) presents his distinctions via a motley set of cases, from von Frisch’s dancing bees, via logic machines that spit out theorems, to blacksmiths and their horseshoes. The general idea is this: how a system (a bee or a blacksmith) manages to produce its outputs is one thing; the structure or properties of those outputs is another thing. Still, the processing mechanism that determines how the products are produced respects the structure of the outputs insofar as they are its outputs. For example, von Frisch gave us a splendid theory of the information communicated by a honeybee’s dance. This is a theory of the structure rules alone, for von Frisch didn’t tell us how the bees produce the dance; mind, however the bees do their thing, the enabling processes respect the structure of the dance, for that is what the bees’ mechanism is for, to produce a structure that may carry the appropriate information about the presence of nectar relative to the position of the hive. Likewise, a logic machine might follow all kinds of procedures in its production of well-formed formulae, but we have programmed it so that it respects the structure rules we have invented (e.g., our definition of a well-formed formula of first-order logic and what counts as validity). Devitt suggests that the same holds for language (cp. Soames 1984). We have a competence that produces external objects (sound waves, hand gestures, inscriptions, etc.) that constitute a linguistic reality. Our linguistic theories are about the structure of these objects under conventions of use that fix what is to count as nouns, verbs, etc., and their phrasal projections with all the attendant syntactic complexity. Linguistic theory is not about the internal processes or states of speakers that produce the strings that have complex high-level grammatical


properties; it is about such strings themselves. Still, the processes or mechanisms that do produce and consume the strings respect the linguistic properties of the strings, properties that linguistic theory is about. Even if we grant these distinctions for the bees, the machines, and blacksmiths, without further ado, it does not follow that the distinctions apply to language; after all, we have here nothing but analogies. Furthermore, the distinctions appear not to apply to the mammalian visual system or the immune system, say, neither of which produce external products such as horseshoes. Perhaps language is more like vision in this sense. Besides, even if one were inclined to think that the distinctions do apply to language, it would be nice to see precisely how they do. Devitt (2006a: 29–30), for sure, is aware of the lacunae. He offers a single substantive reason to think that the analogies are a compelling basis to think that his process/structure distinctions apply to human language. He writes: How could we make any significant progress studying the nature of competence in a language unless we already knew a good deal about that language? Just as explaining the bee’s dances is a prerequisite for discovering how the bee manages to produce those dances, so also explaining the syntax of sentences is a prerequisite for explaining how speakers manage to produce those sentences. (Devitt 2006a: 29)

Devitt’s thought here is that if linguistics on its psychologistic construal is worthwhile, then so must be linguistics on his construal, for both construals require a clear conception of the structure rules that are evident in (or at least recoverable from) the products of competence prior to inquiry into the psychological processes that produce such products. If this is so, then Devitt’s three distinctions appear to apply, which, in essence, simply distinguish structure from psychology, with the latter respecting the former.10 If one were already convinced of the existence of a linguistic reality such that a grammar is a theory of it, then Devitt’s analogical reasoning might bolster one’s conviction.11 But we precisely want a reason to think that there is such a reality that is relevant to linguistics and we are not given one here: Devitt’s argument reads his ‘linguistic reality’ into the metatheory of generative theories, as if the linguists’ appeal to structure must be about external structure, given that only processing rules are internal. I don’t imagine, of course, that Devitt is unaware that his distinctions presuppose what is in contention. His reasoning appears to be that since the distinctions are general, they enjoy default application to language as generatively

10. This line of reasoning is equivalent to that of Katz (1981: 70–73, 81–83) and Katz and Postal (1991: 524–525), who argue that a conception of a language must be prior to a conception of the putative underlying psychological states, for any evidence on such states must be indirect relative to direct evidence from the language itself. Soames (1984: 140) makes the same point, suggesting that psychological evidence is 'indirect' given a fixed 'pretheoretical' conception of language. In a related vein, Wiggins (1997: 509–510) claims that any psychological inquiry into 'speakers' 'presupposes the results' of a non-psychological inquiry into language.

11. Of course, there is a reality of ink marks, hand gestures, pixels, etc. The only issue is whether such a heterogeneous domain supports properties of the kind that concern linguistics, where these properties might depend upon the human mind/brain but not be part of it.


conceived. Devitt’s burden is to show that they do in fact apply to language; the burden of his opponent is to show that they don’t. The above quotation provides a challenge to Devitt’s opponents, for sure, but, I shall suggest, the distinction between processing and structure is a distinction internal to cognition, not between cognition and some other putative linguistic reality. Before I spell out this thought (§2.4.2), it is worthwhile to question the analogical nature of Devitt’s reasoning here.

2.4.1  Chomsky and Devitt’s Three Distinctions As we saw above, Devitt’s distinctions do apply to numerous systems, but they also fail to apply to numerous other systems, such as the ‘organs’ that Chomsky favours for his own analogical purposes. Moreover, Chomsky is pretty explicit in rejecting all of the distinctions, as Devitt construes them, precisely because they do presuppose an external ‘linguistic reality’. In short, Devitt’s distinctions might have some generality, but they are hardly a neutral, default conception of a cognitive system or other organic systems. Let us briefly see this. The first distinction is between a competence system and what it produces. Famously, Chomsky (1965: 3–4) does make a competence/performance distinction, but it is not Devitt’s distinction. For Chomsky, the distinction is between internal systems, some of which govern speech production and comprehension, and others that independently constrain such processes, but might be systematically misaligned with them for independent reasons (more on this below): ‘To study actual linguistic performance, we must consider the interaction of a variety of factors, of which the underlying competence of the speaker-hearer is only one’ (1965: 4). In this light, what Devitt calls ‘competence’, Chomsky would call ‘performance’, for Chomsky’s notion of competence by itself does not relate to the production of anything at all, let alone external tokens of strings. ‘Competence’ designates what a speaker-hearer knows in the abstract sense of conditions that apply to performance, but which hold independently of any production or consumption activity. Insofar, then, as Devitt’s ‘structure rules’ demarcate competence, they do not describe external types that internal processes respect, but rather abstractly specify internal factors that enter into an explanation of the character of the performance in concert with other language-­independent factors. Indeed, Chomsky has long been keen to point out that most uses of language are internal, integrated into thought, entirely lacking any external garb (Chomsky 1975, 2012; Hauser et al. 2005). For sure, the ensemble of systems does produce acoustic waves, hand gestures, inscriptions, etc., which we consume as language, but the rules/principles of the grammar do not have application to them in the first instance, as if it were the properties of the modalities of language use that linguistics targets. A grammar is supposed to explain the character of our capacity to produce and consume material as linguistic, not merely to describe the result of the capacity (the input and output) in linguistic terms. We shall return to this point shortly; pro tem, my present moral is twofold. First, Devitt’s distinction between competence and its product presupposes an external linguistic reality,


which is currently in dispute. Secondly, Chomsky does distinguish competence from any potential products, but not in such a way as to presuppose a ‘linguistic reality’, for the products are massive interaction effects that only have a structure understood relative to a cognitive system that may produce or consume them as linguistic; they don’t possess a structure that internal states respect. Devitt’s second distinction is between ‘processing rules’ and ‘structure rules’. Again, Chomsky does make some such distinction, for his claim is not that the rules/ principles of a grammar are an account of the mechanism that produces particular performance events. A grammar is construed as an abstract specification of the function (in intension) the human mind/brain realises, without any accompanying assumption as to how the function is realised.12 Adopting Devitt’s terms, we may say that a grammar is of structure that the mind/brain respects in its processing (consumption and production) of acoustic waves, hand gestures, etc., but such external material does not possess the hypothesised structure. We shall get to Devitt’s notion of respect shortly, but even on the sketch given, it should be obvious how processing and structure are not two distinct realms that require an external (as opposed to an internal) relation between them. They both relate to internal systems that set both gross and fine-grained constraints on linguistic behaviour. For instance, we can decide what sentence to use on an occasion, but we can’t decide to speak or understand Spanish or Navaho, if we were just to try hard, or were really smart. Also, once a competence (structure rules in Devitt’s sense) is acquired, it sets fine-­ grained constraints on what we can process. Consider, for example, the following case: (1) a. ∗What did Mary meet the man that bought? b. (What x)(Mary met the man that bought x) Here we see that (1a) has a potential interpretation that is perfectly coherent, but we just cannot interpret the string in that way. We shall consider other cases of this kind of phenomenon later. The present moral is that a grammar seeks to explain the constraints our competence places on the interpretations we can associate with ‘vehicles’ (sounds, etc.), where these constraints are not exhausted by or explicable in terms of the non-linguistic systems with which competence interacts. It is in such a sense that competence may be viewed as a body of ‘knowledge’ that is ‘used’ by independent systems, rather than being an abstraction or idealisation from those systems. Just how we are to understand this relation of constraint between competence and performance remains highly problematic, but the bare distinction is not one that Devitt is questioning. 12

12. For example: 'We do not know for certain, but we believe that there are physical structures of the brain which are the basis for the computations and representations that we describe in an abstract way. This relationship between unknown physical mechanisms and abstract properties is very common in the history of science... In each case the abstract theories pose a further question for the physical scientist. The question is, find the physical mechanisms with those properties.' (Chomsky 1988: 185)


What might be leading Devitt astray is a conflation of formal rules of generation with putative processing principles. In a formal sense, the rules that strongly generate structures will be equivalent to structure rules in the sense Devitt appears to be using the notion. This follows from the mathematical equivalence of membership conditions on a recursively enumerable set and a set of rewrite rules with a closure condition. Strings understood weakly (i.e., independent of a particular system of generative rules) have no inherent structure at all. Let us consider a toy example, which suffices to make the general point. Let the string 'aabb' have the structure '[Z a [Z ab] b]' because it was generated by the rules:

(i) Z → ab
(ii) Z → aZb

The same string can also have a distinct structure '[Z [Z [Z aa] b] b]' because it was generated by a different pair of rules:

(iii) Z → aa
(iv) Z → Zb

So, a string understood as strongly generated is a formal object, as it were, with an intrinsic structure that reflects the rules that generate that object as a member of a class of structures that the rules define. From this perspective, strings themselves have no structure at all and do not acquire any structure as if they could carry the structure with them independently of the rules that define the string as a member of the consequence class of the rules. Still, if we were to think of bare strings as externalia and the rewrite rules as processing rules, then one could be misled into thinking that the rules exemplified produced strings that do carry the structure indicated. Yet that just is to be confused about the character of the rules. The rules are all structure rules (in Devitt's sense). They generate a set of structures that are usable to characterise strings as meeting certain conditions and so as belonging to linguistic types, but they do not produce any strings at all, and no string acquires, still less retains, a structure from the rules. Viewed in terms of strong generative capacity, the set of strings a grammar weakly generates is a pointless abstraction, a stripping away of all but linear information to leave a concatenation of symbols. The reverse does not hold. Viewed weakly, a grammar generates no structure at all, and so no structure can be abstracted from it.

Devitt's third distinction is between processes respecting structure, and structure being included in the processing. Again, Chomsky cleaves to such a distinction, but not in Devitt's sense. The processes of the brain respect the grammar insofar as the grammar proves to be explanatory of cognitive phenomena; just how the grammar relates to brain states understood at a different level of abstraction (e.g., neuronal organisation) is an open question about which we know very little. Respect, in this sense, just means 'realise'. For Devitt, respect appears to be an external relation between structured outputs (external entities) and processing states. According to Devitt, the outputs clearly acquire their structure from the processing, wholly or in part, but somehow retain the structure like horseshoes on the floor of the smithy. To be frank, just what Devitt's notion of 'respect' amounts to remains obscure, but the


key thought seems to be that a grammar tells one nothing psychological beyond the fact that the mind/brain produces objects with the structure the grammar specifies. This claim, though, precisely tells us that the grammar is about the mind/brain, and it would be wholly about the mind/brain, if it turned out that the putative externalia that we talk about as having a structure were not to enter into the explanations the grammar provides. Thus, Devitt’s notion of respect only militates for externalism given the presupposition that there is an external linguistic reality that is respected. Read minimally, the respect condition merely says that a grammar specifies a class of structures or a function that generates the set to which mind/brain processes conform, which does not entail anything linguistically external at all. So, respect (to adopt Devitt’s jargon) may be read as an internal relation in the sense that we take the human brain to respect (realise) the constraints abstractly specified in the grammar, but there is nothing outside of the human brain that has a structure that demands respect. The rewrite rules exemplified above, say, do not describe strings, but describe conditions a cognitive system or a computer respects such that it can produce and consume the otherwise unstructured strings as possessing a specific compositional form. I have so far argued that even if Devitt’s distinctions do apply to a heterogeneity of systems (from bees to logic machines on to blacksmiths), they do not apply to language as Chomsky conceives of it. The reason they do not is that the distinctions presuppose an externalism of structured outputs. So, for Devitt’s distinctions to perform the job asked of them, Devitt must show that the kind of general internalist construal of the distinctions as just outlined does not suffice to support a ‘psychologistic conception’ at the expense of an externalist conception.
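Since the contrast between weak and strong generation is easy to lose in prose, here is a minimal sketch in Python of the toy 'aabb' case from the passage above; it is my own illustration, not anything drawn from Devitt or Chomsky, and the function names are invented for the purpose.

```python
# A minimal sketch (illustration only, not from the chapter): two toy grammars
# that weakly generate the same string 'aabb' while strongly generating
# different structures.

def structure_g1(n: int) -> str:
    """Grammar 1: Z -> ab | aZb. Bracketed structure for a^n b^n (n >= 1)."""
    if n == 1:
        return "[Z a b]"
    return f"[Z a {structure_g1(n - 1)} b]"

def structure_g2(k: int) -> str:
    """Grammar 2: Z -> aa | Zb. Bracketed structure for 'aa' followed by k b's."""
    if k == 0:
        return "[Z a a]"
    return f"[Z {structure_g2(k - 1)} b]"

def weak_yield(structure: str) -> str:
    """Throw away all constituency information, keeping only the terminal string."""
    return "".join(ch for ch in structure if ch in "ab")

s1 = structure_g1(2)   # derivation: Z -> aZb -> aabb
s2 = structure_g2(2)   # derivation: Z -> Zb -> Zbb -> aabb
print(s1)              # [Z a [Z a b] b]
print(s2)              # [Z [Z [Z a a] b] b]
assert weak_yield(s1) == weak_yield(s2) == "aabb"
```

The bare string is recoverable from either structure, but neither structure is recoverable from the bare string, which is just the asymmetry the text describes.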

2.4.2  Products Before Competence?

Let us turn now to Devitt's (2006a: 29) substantive claim that 'explaining the syntax of sentences is a prerequisite for explaining how speakers manage to produce those sentences'. Well, as suggested above, it is certainly true that one requires some conception of syntactic structure (the function computed) if one is to understand linguistic performance (processing), which is supposed to be the key moral of Chomsky's (1965) competence/performance distinction (see Collins 2007b). It does not follow, though, that syntax has a plausible externalist construal, as if syntax amounted to properties of ink marks or sound waves or something more abstract. Devitt appears to take such an externalist construal to be obvious (cp. Cummins and Harnish 1980; Katz 1981; Soames 1984; and Katz and Postal 1991). I shall return to pleas to common sense below. What I want to demonstrate now is that the very phenomena syntactic theory targets are psychological, not linguistic in some extra-mental sense. So, even though 'explaining the syntax of sentences is a prerequisite for explaining how speakers manage to produce those sentences', explaining syntax remains in the orbit of psychology.


Linguistic textbooks are full of example sentences that are described as ambiguous, marginal, unacceptable, etc. Prima facie, it would appear as if linguists are talking about the properties of the inscriptions, or at least some non-mental idealisation thereof. It thus seems as if a cognitive theory proper is one that explains how speaker-hearers can act in ways that respect the properties the linguist has specified, such as finding a string ambiguous in just the ways the linguist describes. If, however, one considers the theories and their intended explananda rather than the mode of presentation of the phenomena, it becomes clear that the properties of the inscriptions, understood as an independent domain, are not at issue. A linguistic theory primitively explains the speaker-hearer’s understanding, not properties of the inscriptions themselves. Our conception of the inscriptions, understood independently of the technology of a grammar (a linguistic theory), remain invariant under theoretical analysis. In crude terms, the grammar tells us nothing about the strings, but lots of things about how we interpret them. Unsurprisingly, linguistic theory is not the means to find out about ink marks, acoustics, pixel arrays, etc.; our conception of such entities remains constant under divergent grammatical analyses. For example, let us agree that we say ‘a sentence is ambiguous’ when competent speaker-hearers are robustly able to construe it as having two or more interpretations. Imagine, then, that we have settled on an analysis under which a given sentence is two ways ambiguous as opposed to three or four ways ambiguous. The analysis appears not to have explained anything about the string itself, why, say, it should be ambiguous independently of the interpreting capacities of particular speaker-hearers. Might our analysis explain something about the cognitive states of speaker-hearers? Clearly, if we construe the analysis to be a hypothesis about the mental structure speaker-hearers ‘employ’ to interpret the string, which is the phenomenon we are seeking to explain. Of course, the analysis might be wrong, but that is a different matter. On the other hand, attributing the structure to the external string explains nothing, for we still don’t know why speaker-hearers robustly respond to the string the way they do, which is the very explananda. Consider the old chestnuts: (2) a Visiting relatives can be boring b. Barking dogs can be boring (2a) is familiarly two ways ambiguous; (2b) is not ambiguous. The data here, however, are not the strings themselves, but the fact that speaker-hearers reliably can find just two interpretations for (2a), but just the one for (2b). A standard way of explaining this difference is to say that visiting in (2a) allows for a phonologically null subject PRO (to give one the reading where someone or other, most times the speaker, is visiting the relatives), but barking does not allow for such a covert subject, so only has the reading corresponding to Dogs that bark can be boring. This may seem arbitrary or merely descriptive, but it is not. Visit is transitive and bark is intransitive. Thus, with the participle forms, visiting allows for an elided subject, barking does not, for the subject of the root verb bark is provided, i.e., dogs. The reasoning generalises across present participles in English as the reader may check


for herself. See Chomsky (1955–56/75: 467–470) for an early transformational account of this kind of ambiguity. Imputing such complex properties to the strings, however precisely that is to be understood, does no explaining for us unless we also impute some 'grasp' of it to the speaker-hearers. After all, if they were differentially responding to the strings for some other reason than the one explained, our explanation would be fallacious. Yet once we properly credit the speaker-hearers with the requisite syntactic competence, it becomes opaque what reason there could be to further claim that the mental complexity the speaker-hearers must possess is recapitulated externally such that speaker-hearers can recover or recognise the relevant properties in the strings themselves. What was to be explained (the ambiguity phenomena) is explained without rerouting the syntactic properties through the otherwise syntactically unstructured strings, which would leave the phenomena unexplained without again crediting the speaker-hearers with the very structure at issue.

A grammar should also explain why there are perfectly coherent interpretations that are unavailable to us. Consider the following cases:

(3) a. He wants Fred to leave
    b. His brother wants Fred to leave

Note that we can't construe (3a) as meaning that Fred wants to leave, but we can construe (3b) as meaning that Fred's brother wants Fred to leave, i.e., the available construal of the pronouns he and his differs between the cases. Following Devitt's line, here we would be after explaining a difference between the strings that precludes a certain interpretation. Again, we are confronted with a cognitive phenomenon, for whatever properties we attribute to the strings do not entail that speaker-hearers should interpret them one way as opposed to a multitude of other ways. If the relevant analyses are attributed to speaker-hearers, then the differences between the cases in (3) are explained.13

13. According to standard accounts, the relevant difference is that, in (3a), he c-commands Fred, which rules out a joint construal. In (3b), his does not c-command Fred, and so a joint construal is permitted. C-command is a central relation of grammatical analysis: an item c-commands all and only its sisters and their daughters, where sisters and daughters are arboreal items related to a given item as if in a family tree.

The same reasoning exhibited in these two cases holds across the board. The phenomena linguistic theories seek to explain are cognitive phenomena, not phenomena essentially involving external entities. This is obvious when one reflects that the preponderant data are un/acceptability judgements. An explanation of why a string is acceptable or not must involve the informant to whom it is un/acceptable. We are not interested in explaining anything about a string itself beyond informants having the reactions to it they do, which does not depend upon any independently identifiable properties of the string itself. Or consider the productivity of language. It is often said that English (German, etc.) contains infinitely many sentences. That sounds like a claim about abstract entities, not cognitive states. But what are the empirical phenomena? The phenomena are that speaker-hearers display continuous


novelty and have no observed bound of competence. Now, these two facts do not necessitate a grammar that generates infinitely many structures, but the only way to preclude an infinity of structures would be to set some arbitrary sufficiently high bound on structural embedding or co-ordination. That would be an idle stipulation. Thus, we credit the speaker-hearers with a cognitive system of unbounded capacity that explains their continuous novelty. The idea that English, say, is an infinite set of objects is an abstraction, which plays no explanatory role in linguistics as far as I can see. Here is a concessive way of putting my general point against the claim that linguistic theory targets or presupposes linguistic externalia. Let us grant that, in some sense, external marks have linguistic properties and that we normally speak as if they do, at least when talking about texts, if not the ephemera of sounds. The question, now, is ‘How come we can invest all these sounds and marks with a linguistic life?’ That is a question about a cognitive phenomenon and requires a cognitive explanation, for rabbits and fish don’t do it, and it is wholly unobvious how we do it. But once the investment is up and running, as it were, and we unthinkingly do produce and consume various materials as linguistic, for all purposes other than theoretical explanation, it is obtuse, pointless, and inconvenient not to be uncritical and say, ‘That sentence has this and that structure’. Yet this is just a convenience.14 When a linguistics text tells us that a sentence is ambiguous, say, the claim is not that the exemplified string has some peculiar hidden structure or some high-level functional property or any other property as an external entity. The claim is simply that competent speaker-hearers robustly and reliably associate 2+ specific interpretations with tokens of such a string type, which is a phenomenon to be explained, rather than a given fact that enters into the desired explanation. Thus, an analysis that explains the ambiguity is not imputing properties to the string, but making explicit the divergent structures (2+ of them) speaker-hearers may cognitively employ to interpret the string. The string itself, qua an external entity, remains exactly as it was. Devitt’s error is to read our cognitive accomplishment (the

14. It is tempting to adopt a 'projectionist' position here, as if the mind projects linguistic structure onto a string such as to render it structured, much as a given opaque surface might be said to be coloured. I think this temptation should be resisted. In an obvious sense, there is a projection, for we do hear and read various materials as being structured and meaningful, even though they are just lifeless marks in our absence. Such a projection, however, is far too shallow to support an attribution of full syntactic structure to the external material. We can, indeed, distinguish word boundaries and (some) phrases to such a degree as to make their attribution to the string seem obvious, but obviousness quickly reduces to zero for the kind of structure and properties that linguistic theory posits, much of which has no morphological signature at all. Still, we can think of the mental structure linguistics posits as constraining our phenomenology, so that we have available to us a shadow or blueprint, as it were, of the actual constraints. I think these remarks critically bear on the position of Rey (2006a, b), who views linguistic structure as a kind of illusion our minds reliably generate. We might well have an illusion of words out there, but not the illusion of PRO, or of phonologically null copies, or relations of domination, etc. Again, such features seem to generate or at least constrain the character of our 'illusion', but they are not part of it (see Collins 2009, 2014).


production/consumption of strings as linguistic) back into the explanation of the accomplishment.

The accusation might be made that I am dealing with too wooden (too nominalist) a conception of 'linguistic reality' as ink marks or sound waves. After all, Devitt (2006a: 155) takes syntactic structure to be 'high-level functional' properties of strings fixed by the conventions of use of speaker-hearers. Being a noun, say, is supposed to be a property a word has thanks to playing a certain role in relation to other words as fixed by the regularities that hold in given languages. So, whether linguistic properties as so understood obtain or not, indeed, largely depends upon our cognitive states, but the properties are not themselves mental. Such a position has its deep problems.15 Fortunately, for present purposes, the only relevant issue is whether linguistic explanation requires there to be external syntactic properties, no matter how functional or high-level they might be. We have so far seen no reason to think so. My considerations are not directed towards nominalism, but any species of realism towards linguistic externalia.16 Devitt could appeal to Katz's (1981: 195–196) distinction between the source and the import of data; that is, (psychological) intuitions might be the source of our data, but the import of the data concerns the external types. The distinction is in good standing generally, but, again, we are after a reason to apply some such distinction to the present case of language and so far it appears that source matches import in the case of language.17

15. A functional specification of some traditional grammatical notions is not uncommon, and can be extended (Chomsky 1965: 68–74). Such a procedure, however, involves relations between linguistic categories or structural positions, and so cannot be a general characterisation of linguistic structure of the sort Devitt supposes is available. In essence, Devitt's problem is to identify some non-linguistic properties that might be conventionally recruited to carry syntactic properties by way of their role in communication or the general expression of thought. This task is difficult enough for simple cases of being a noun (I've never heard of any attempt to carry out such a programme), but looks impossible for empty categories that are defined in terms of syntactic position. For debate on this point in relation to PRO, see Devitt (2006b, 2008a, b) and Collins (2008a, b). Devitt (2006a: 39–40) readily acknowledges that linguistic reality largely supervenes/depends on the mind/brain. Such dependence, though, does not mean that linguistic reality is cognitive: dependence does not make for constitution; were it to, the only inquiry would be physics. The issue of supervenience, however, is irrelevant. The reason linguistics is about the mind/brain is not that language supervenes on the mind/brain, but that only cognitive phenomena are explained by the linguistic theories, and external factors are neither entailed nor presupposed by such explanations.

16. Devitt (2006a: 98–100) does rightly claim that evidence for a grammar is not restricted to intuitions. As previously noted, though, it doesn't follow that any such extra-intuitive evidence is nonpsychological. The only pertinent case Devitt mentions is corpus studies (2006a: 98–99). A corpus, however, is simply an example or database of constructions used. It only serves as evidence for a grammar on the basis of the theorist taking the various constructions to reflect the understanding or competence of the users of the language. After all, a linguistic theory is not a theory of what utterances people have made. A corpus, of course, can provide invaluable evidence for acquisition models, but here the corpus is treated as a record of the cognitive development of the child, not as direct data on the language itself.

Data on ambiguity, say, has clear import for how a


speaker-hearer will respond to and interpret a string, but it looks to have no import for the string itself, our conception of which can remain as it was without affecting our analyses or explanations. Furthermore, this conclusion does not entail that we cannot couch linguistic explanation in externalist terms, as if, indeed, the import were concerned with the external types; my claim is only that such explanation is not required and, if offered, is parasitic on an internalist or psychologistic explanation. Let us see this by considering yet another example. Consider: (4) Fred’s brother loves himself Here, the reflexive himself is jointly construed with Fred’s brother; a construal that relates the reflexive to Fred or brother is clearly excluded. Yet why shouldn’t the structure be ambiguous, or mean something different? The standard explanation is that reflexives require a c-commanding antecedent, and neither Fred nor brother in (4) c-commands himself. Here is an externalist explanation of this phenomenon in line with Devitt’s position: (EE) (i) S is competent in English and hence respects its structure rules. (ii) Fred’s brother loves himself is an English sentence in which himself is c-­commanded by the whole DP but not by either of its constituents. (iii) It is a rule of English that, in these circumstances, the reflexive must be bound by the whole DP. (iv) Therefore, S, because he respects the rules of English, gives a joint construal to himself and the whole DP. So, here we have a cognitive phenomenon of S’s unique understanding of (4) and an explanation of it that appeals to the external structure rules of English. Now, we should not say that this explanation is wrong; rather, (i) it is not required and (ii) the explanation only works on the back of an internalist one. On the first point, we can construe the explanation in an internalist manner without appeal to rules of English or external properties. (IE) (i) If S is competent in English, S’s interpretation of the marks himself is constrained to be jointly interpreted with the interpretation of other marks occurring in the inscription i.e., the interpretation is ‘reflexive’.

17. Katz made his distinction in defence of a Platonist position, where it is much easier to construe the import of the intuitive data as being about abstracta on analogy with the case of mathematical intuition; after all, the abstracta just are the essential linguistic properties pruned of any contingent excrescence (what Katz calls their 'cohesiveness'), not so for Devitt's concrete tokens. Even so, Katz's analogy does not hold up. A linguistic theory is meant to explain the data, why we have the intuitions we have, and not others, whereas a mathematical theory does not explain why we have the mathematical intuitions we have. It is the job of psychology, I take it, to explain our mathematical competence, regardless of whether Platonism is true or not.


(ii) The constraint is such that the interpretation of himself is dependent on the interpretation of a mark categorised by S as a c-commander of the first interpretation. (iii) To determine a c-commander, S must project the lexical interpretations into a hierarchical structure determined by the interpretations mapped onto the given marks. (iv) Based on the projection, the interpretations of Fred or brother do not c-­ command the reflexive interpretation; only the interpretation of Fred’s brother does. (v) Therefore, the reflexive is jointly construed with the DP interpretation. We have made no appeal to rules applying to external languages or external linguistic properties. We have appealed to abstract structures that S maps onto the marks for the purpose of interpretation, but the marks have no such properties themselves. So, the explanation doesn’t require an appeal to linguistic reality, but it does clearly require S to be in a complex of mental states that the grammar describes. On the second point, the (EE) explanation is essentially parasitic on the internalist explanation. (EE) simply posits the structure to be ‘out there’. But S must be sensitive to it such that she can get the right interpretation; otherwise, we have no explanation of the cognitive phenomenon. (EE) sweeps this point under the rug by saying that S is linguistically competent in English. Well, in the present case, what does that mean? It means that S can reliably (unthinkingly) map the right interpretation onto the inscription, but that explanation presupposes that S employs the linguistic technology of being a reflexive, being a c-commander, etc. under the appropriate constraints as (IE) describes. That is what it means to take seriously the explanations linguistics offers. The magic of respect is neither sufficient nor necessary for this explanatory task, for even if the structure were ‘out there’, it must also be such that S could survey it, as it were, to determine what c-commands what, so S must be credited with what (IE) offers. And now we see that the external structure rules and appeal to languages are just explanatory danglers. The crucial issue here is parsimony. The linguistic explananda are cognitive. One explanation of them is that external concreta have complex syntactic properties and our judgements on them are explained by some sensitivity or respect we have towards them. This position incurs at least two burdens: one is obliged to (i) explain how such properties are externally realised and (ii) characterise the internal equipment that allows our mind/brains to respect such putative structure. My internalist conception of the situation does without the external properties and so incurs no burden of accounting for their external existence or of how we get to respect them. The burdens the internalist shoulders are in fact shared by Devitt’s model in addition to the two just described. First, we need to account for ‘Saussurean arbitrariness’, how one sound/mark gets associated with a cluster of linguistic features. That is a problem for everyone and merely placing structure outside the mind goes no way to solving it, for the association remains arbitrary either way insofar as no acoustic type is inherently nominal or verbal, say. The association, however brought about, is a cognitive


effect: the very phenomenon of arbitrariness is that the external marks are not essentially suited to enter into any particular association, beyond general constraints on frequency band, etc., if we are talking about sound (mutatis mutandis for sign). One might imagine that resolving the problem of arbitrariness just is resolving the respect problem in Devitt’s sense. That thought would be mistaken, for the association does not render linguistic properties external, but merely renders the signs or vehicles of lexical items that realise such properties external. Secondly, we should like to explain how cognitive structure is neuronally realised. Again, that is also a desideratum for Devitt, for he does not imagine that the relevant internal mechanisms on his model are readily understandable at the neuronal level. Of course, one may here appeal to general cognitive structure (a ‘language of thought’) rather than specifically linguistic structure, but the structure remains intrinsically internal and so does not entail or presuppose the existence of external linguistic properties. So, in sum, external syntactic properties are surplus to explanatory requirements. They bring with them new problems and go no way to resolving or even making sense of current problems every theorist faces.
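To make the c-command relation that (IE) trades on concrete, here is a minimal sketch in Python; the tree geometry assumed for 'Fred's brother loves himself' is a deliberately simplified assumption of mine, not the author's analysis, but it suffices to show why only the whole subject DP can bind the reflexive.

```python
# A minimal sketch, assuming a simplified constituency tree; the relation follows
# the characterisation in footnote 13: an item c-commands its sisters and
# whatever its sisters dominate.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = field(default=None, repr=False)

    def add(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child

def dominates(x: Node, y: Node) -> bool:
    """x dominates y iff y sits somewhere below x in the tree."""
    return any(child is y or dominates(child, y) for child in x.children)

def c_commands(x: Node, y: Node) -> bool:
    """x c-commands y iff y is a sister of x or is dominated by a sister of x."""
    if x.parent is None:
        return False
    sisters = [n for n in x.parent.children if n is not x]
    return any(s is y or dominates(s, y) for s in sisters)

# Toy tree: [S [DP [DP Fred] ['s] [N brother]] [VP [V loves] [DP himself]]]
s = Node("S")
subject = s.add(Node("DP (Fred's brother)"))
fred = subject.add(Node("Fred"))
subject.add(Node("'s"))
brother = subject.add(Node("brother"))
vp = s.add(Node("VP"))
vp.add(Node("loves"))
himself = vp.add(Node("himself"))

for candidate in (subject, fred, brother):
    print(candidate.label, c_commands(candidate, himself))
# Only the whole DP c-commands 'himself', so only it can antecede the
# reflexive; neither 'Fred' nor 'brother' does, which is why (4) has just
# the one construal.
```

Nothing here requires the tree to be a property of the inscription; it is exactly the kind of structure that (IE) credits to the speaker-hearer.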

2.4.3  Invariance

The upshot of the preceding may be encapsulated by saying that generative theories target the invariance of cognition within the exercises of language capacity generally. The production and consumption of language can exploit a range of modalities or 'vehicles', such as acoustics, hand gestures, orthography, facial movements, and potentially other sets of properties. In itself, this does not signal trouble for Devitt's 'linguistic reality', for he is not so benighted as to identify linguistic properties with any first-order properties of concreta; the linguistic properties are intended to be second-order properties of the concreta ('high-level, functional'). This escape route, however, evades the important moral that the variability of vehicles makes evident. Any given set of properties that are utilised as linguistic vehicles must be sufficiently differentiated to support the differences between the linguistic properties, even though there is no equivalence relation between the two (e.g., just consider empty categories and phrase boundaries). If we consider the super-set of potential linguistic vehicles, then there appears to be no interesting generalisation to be had at all. In other words, there are no second-order non-linguistic properties that support the linguistic properties across the range of vehicles; the relevant properties only come into view from the perspective of the cognitive capacities of speaker-hearers. Thus, the language system is understood to be modality-independent, even though we look at certain modalities for evidence of the character of the independent language system, that is, of the ways in which it constrains our understanding of vehicles across all modalities; particular vehicles themselves are differences that don't make a


difference.18 In this light, to identify linguistic properties with properties of the external concreta, no matter how functional, is to conflate presentation of data with phenomena, as if photographic plates of starlight used in confirmation of general relativity meant that general relativity is a theory of a particular class of such plates (suitably idealised) as opposed to the general gravitational field of space-time. I have argued that Devitt’s distinctions do not apply to language as conceived by linguistics. They presuppose linguistic externalia and so they are not neutral, regardless of whether they apply to other systems or not. Furthermore, linguistic theory has no apparent need of the externalism that the distinctions enshrine: its explanations neither presuppose nor entail externalia. If all this is right, we have knocked out Devitt’s second premise. It is at best an indirection to construe a generative theory’s explanatory work as going via an external language, replete with the properties our theory posits. The explanations work by crediting the hypothesised structures to speaker-hearers alone. In this minimal sense, the theories’ posits are psychologically real, to the extent to which we accept any given explanation as being correct, independent of an adequate account of the human brain’s mechanisms at a less abstract level. Now let’s turn to Devitt’s fourth premise.

2.5  Questioning Premise (4): Interpreting a Grammar

Devitt's fourth premise claims that a grammar is best interpreted as a theory of the structure rules of linguistic expressions, not of linguistic competence. Remember, according to Devitt, 'expressions' are external, concrete tokens and 'competence' is a production/consumption processor. Devitt advances six reasons for this claim. I shall take each in turn and argue that none of them supports the premise.

2.5.1  The Intuitive Conception of Competence

Devitt (2006a: 31) writes:

[The] actual and possible idealised outputs, governed by a system of rules and fitting into a structure, are what we would normally call a language. Indeed wherever there is a linguistic competence there has to be such a language, for the language is what the competence produces: the language is what the speaker is competent in; it is definitive of the nature of the competence.

First, it is unclear why our common-sense notion of language (or 'what we would normally call language') is germane to linguistics, any more than our common-sense conception of matter is germane to physics, or our common-sense conception of life is germane to biology. Linguistics seeks to explain a certain class of phenomena; the discipline is not constrained to cleave to our intuitive conception of language in doing so.19 To be sure, it might turn out that the idea of an external language does have a role to play in linguistic explanation, but the claim that it does is a meta-theoretical hypothesis, which, as such, is neither plausible nor implausible in the absence of reflection on what the first-order theories explain. Secondly, similar remarks apply to Devitt's conception of competence. Devitt's idea of competence as the (idealised) production of sentence tokens may or may not be some kind of conceptual truth. The matter is academic, for 'competence' as coined by Chomsky (1965) is a technical notion, not our ordinary notion. As employed by linguists it designates an internal system that interfaces with further internal performance components; it is not a performance system itself; it has no products. If the term is misleading, then we are free to drop the notion and speak of the 'language faculty' or an 'I-language', neither of which has resonance for normal speakers.

18. This is one of the chief messages of Hauser et al. (2002). For neurophysiological findings on the modality-independence of the language faculty, see Pettito (2005).

2.5.2  Chomsky's Own Words

Devitt (2006a: 31) claims that Chomsky himself commends a version of externalism about grammar. He quotes the following:

The fundamental aim in the linguistic analysis of a language L is to separate the grammatical sequences which are sentences of L from the ungrammatical sequences which are not sentences of L and to study the structures of the grammatical sequences. (Chomsky 1957: 13)

Devitt is perfectly correct to think these remarks appear to support his conception.20 He neglects to mention, however, that this way of characterising the aim of linguistic analysis is unique to just the very beginning of Syntactic Structures. Two pages

19. Chomsky (1981: 7) writes: 'The shift of focus from language (an obscure and I believe ultimately unimportant notion) to grammar [I-language] is essential if we are to proceed towards assimilating the study of language to the natural sciences. It is a move from data collection and organization to the study of the real systems that actually exist (in the mind/brain) and that enter into an explanation of the phenomena we observe. Contrary to what is widely assumed, the notion "language" (however characterized) is of a higher order of abstraction and idealization than grammar, and correspondingly, the study of "language" introduces new and more difficult problems. One may ask whether there is any reason to try to clarify or define such a notion and whether any purpose is served in doing so. Perhaps so, but I am sceptical.' See Collins (2008c) for wide discussion of these themes.

20. This particular passage from Syntactic Structures has often been cited by defenders of externalism: e.g., Cummins and Harnish (1980: 18), Katz and Postal (1991: 521), and Postal (2004: 5, 174). Curiously, none of the critics seem to have been bothered to understand the remarks in their proper context (see below and Collins 2008c for lengthy discussion).

32

J. Collins

later, the goal of linguistic theory is said to be ‘an explanation for this fundamental aspect of linguistic behaviour [i.e., the cognitive projection from finite input to unbounded competence]’ and the explanatory burden of the theory falls squarely in that province (15). Moreover, the apparent externalist ‘aim’ of Syntactic Structures is nowhere expressed in the larger and older The Logical Structure of Linguistic Theory (1955–56/75), where the stated aim is to account for ‘a large store of knowledge … and a mass of feelings and understandings’ that ‘develops’ within each speaker-hearer and the projection from a finite exposure to unbounded competence (essentially, acquisition) (1955–56/75: 62–63). The aim of a linguistic theory is not merely to catalogue sequences, but to depict each sequence as structured in a manner that reflects our understanding and acquisition of language, i.e., the theory is to meet the traditional constraints of descriptive and explanatory adequacy. Further, the ‘aim’ Devitt quotes has been explicitly rejected by Chomsky ever since.21 Indeed, Chomsky (1963: 326) dubs Devitt’s kind of interpretation a ‘gross error’. In general, Syntactic Structures is a somewhat misleading book. It was gathered from lecture notes to engineers and leads off with a discussion of weak generative capacity (the strings a grammar can generate) as a means to frame the results from the comparison of finite-state machines and phrase structure grammar. In a sense, weak generative capacity is an externalist notion, but only because, as discussed above, the strings are treated as not possessing any structure at all, which can only be provided via the specification of the generating function. For this reason, Chomsky has never thought weak generative capacity to be central to linguistics. In short, if one is looking for textual evidence for Chomsky’s opinion on our present issue, Syntactic Structures is not the best place to look.22

21. See Chomsky (1955–56/75: 5, 53 n. 75; 1964: 53 n. 4; 1965: 60–62; 1980a: 123–127; 1995: 162 n. 1; 2000: 141 n. 21).

22. After quoting from Syntactic Structures, Devitt (2006a: 31) asks us to compare another quotation from Chomsky (1980a: 220), where Chomsky does indeed talk about a grammar being a theory of a language. Devitt, however, appears to misunderstand the point of the passage. Chomsky closes the relevant paragraph by saying: 'When we speak of the linguist's grammar as a "generative grammar," we mean only that it is sufficiently explicit to determine how sentences of the language are in fact characterised by the grammar.' Chomsky's point here is just to unpack the notion of 'generative' in terms of explicitness, i.e., the linguist's grammar has sufficient deductive structure to generate a set of phrase markers that encode the relevant properties that explain the speaker-hearer's unbounded competence. There is no conception here of a grammar being about an external language.

2.5.3  Symbols

Devitt's third reason has already been discussed above. Devitt (2006a: 31) writes:

Work on phrase structure, case theory, anaphora, and so on, talk of "nouns", "verb phrases", "c-command", and so on, all appear to be concerned, quite straightforwardly, with the properties of symbols of a language, symbols that are the outputs of a competence. This work and talk seems to be concerned with the properties of items like the very words on this page.

I have given reasons why this just can’t be true, for if it were true, linguistic theory would be denuded of its intended explanatory power. Attributing structure to an ink mark or a sound wave does not explain why, say, it might be two instead of three ways ambiguous. For that explanation, one has to credit the speaker-hearer with structure that conditions how she might respond and interpret strings, and such explanation does not turn on the external marks having any structure whatsoever beyond whatever structure they have anyhow. Let us, though, marshal some further considerations. First off, how things ‘seem’ to Devitt or anyone else is of no pressing interest. It might appear to someone that mathematics is about numerals, or that chemistry is about changes in the colour of paper strips. A theory is about the phenomena it explains, which are, more often than not, pre-theoretically opaque. Secondly, one might wonder: ‘How would linguistics appear if linguists were not interested in external concreta, but in the abstract mental structures speaker-hearers employ in their interpretation and production of such concreta, inter alia?’ Well, it would appear just as it does, surely. No one thinks we can open up someone’s head and see the linguistic structure there, but we can seek evidence for it on the basis of how speaker-hearers respond to and judge certain material, just as we infer unobservable structure from observables in every branch of science. So, when we are invited to contemplate a sequence of marks in a textbook or on a chalkboard, we are not being invited to reflect on the properties of the marks, but on how we would interpret the string, or respond to it. In other words, a cognitive reaction is elicited and the theory takes the character of that response as evidence for the structure that explains the reaction, viz., the constraints that shape it and which exclude many other logically possible responses. Thirdly, Devitt’s position is incoherent as it stands. On the one hand, the position is meant to be ‘straightforward’. On the other hand, all of the properties Devitt mentions that have been the concern of linguists are highly abstract and are not properties of ink marks or sound waves in any straightforward sense at all, in the way in which colour or length or pitch, say, might be. For sure, Devitt is happy to think that the relevant marks are idealised in some sense, so their colour or length do not count. One is left to wonder what is to count. Does linear order count? Presumably, it must do so, for otherwise the marks would be gibberish, uninterpretable, i.e., the very identity of a string depends upon its linearity. But, if so, then ‘c-command’ cannot be a property of the ink marks, for c-command only applies to hierarchical structure, and cannot be expressed in linear terms. Likewise, phrase structure can’t be a property of the marks, for it, again, applies to hierarchical structure and is defined in terms of containment and domination, not precedence. Likewise, the properties of verb phrases can’t be determined linearly, for their arguments can be

distributed throughout a linear structure.23 The situation for Devitt deteriorates still further when we introduce the full suite of so-called ‘empty categories’, which have no register in any concrete string at all. Indeed, case, which Devitt mentions, is hypothesised to hold for all nominals within a structure, whether it is morphologically marked or not. There is nothing ‘straightforward’ about such theoretical technology. Even the lowly word is abstract and not to be found on Devitt’s pages or any other page. A word is commonly defined as a cluster of features: syntactic, phonological, and semantic. Such features are not, in any obvious sense, properties of ink marks, and so words can’t be understood as ink marks either. It bears emphasis that I am not suggesting that it is impossible for linguistic theory to be construed quasi-nominalistically in the way Devitt desires. My present point is merely that it is far from obvious that it can be done and belief in the prospect is certainly not a meta-theoretical commitment of linguistics; quite the contrary. Imagine, however, that the kind of realism Devitt favours were the explicitly favoured meta-theory of linguists. The linguists would be under no particular obligation to spell out their position, no more than mathematicians, who tend to favour Platonism, are obliged to spell out their metaphysics. On the other hand, surely it is the job of the philosopher of linguistics precisely to spell out their meta-theory, much as Katz (1981) attempted to spell out his linguistic Platonism. Suffice it to say, nowhere does Devitt attempt any analysis of the properties he mentions to render them as high-level functional properties of external marks.

2.5.4  Intuitions and Aboutness

I have argued that the phenomena linguistic theory seeks to explain are cognitive. Devitt (2006a: 31), however, writes:

[T]he linguistic evidence adduced for a grammar bears directly on a theory of the language in my sense; evidence about which strings of words are grammatical; about the ambiguity of certain sentences; about statement forms and question forms; about the synonymy of sentences that are superficially different; about the difference between sentences that are superficially similar; and so on.

We have already seen that, notwithstanding first appearances, this is not so. Evidence of ambiguity, say, does not pertain to external marks in themselves, but first to the speaker-hearer who consumes or produces the marks as ambiguous. Devitt (2006a: 32) adds a twist to his position. In response to the charge that data are also gathered from speaker-hearer intuitions, he happily concedes that this is so, but ripostes that the linguist is interested in 'correct' or 'accurate' intuitions, and so the data still bear on linguistic reality, i.e., that which the intuitions are about. As a riposte to my argument, then, Devitt might contend that the speaker-hearers' understanding is properly evidential, but only if it is getting right what the linguist is primarily interested in, viz., linguistic reality, not the cognition of it.

Of late, there has been much discussion of the status of intuitions in linguistics. Happily, in order to tackle the thought that intuitions must be accurate to be evidential, we may eschew most of the disputes.24 First, Devitt (2006a: ch. 7) conceives of intuitions as 'theory laden' metalinguistic propositional attitudes, of the form, S judges/intuits P is G, where S is the informant, P is a description of a sentence, and G is a linguistic property (ambiguous, grammatical, interrogative, etc.). The thought is that much as we have intuitions about the properties of any other objects in our environment, so we have intuitions about the properties of sentences. Space precludes a thorough discussion of this conception of intuitions, but we may still immediately see the oddity of the position as an analysis of what linguists mean by intuitions.25 On this conception, the linguist can only gather intuitive data from speaker-hearers capable of wielding the appropriate linguistic concepts (ambiguous, grammatical, etc.), but this is far too restrictive a condition. Speaker-hearers need have no such concepts at all for them to find strings ambiguous or acceptable.26 It is the linguist, not the informant, who uses such concepts. This is obvious in the case where the informants are children, whose utterances are controlled in an artificial scenario, but the point holds generally. Besides, even if speaker-hearers were to possess the relevant concepts, their employment of them would be of no obvious interest to the linguist. The linguist is not interested in informants as amateur linguists. This is transparent when we are explicitly concerned with data on phenomena that have no common-sense label, such as the head of a phrase, or the interpretation of PRO, or reconstruction sites. Devitt (2006a: 98–103) is aware of all of this, but takes such considerations to signal a problem for the use

23  One can think of a phrase structure analysis as the description of how categorical information relates a set of lexical items to each other. The relations are not ones identifiable on the surface of the strings, although, of course, competent speaker-hearers are able to map phrase structure to and from linear strings. The relation between linearity and phrase structure is contested, but no one suggests that phrase structure just is a property of linear organisation, no matter how high-level or functional (see Kayne 1994; Nunes 2004).

24  See, for example, Schütze (1996) and Maynes and Gross (2013) for surveys of positions on the nature and status of intuitions. See Ludlow (2011) and Sprouse and Almeida (2013) for sound discussions of why much of the controversy is misplaced.

25  It is useful here to distinguish between 'linguistic hunches' and 'linguistic intuitions'. The former are suggestions from theorists themselves about the status of a construction, not the naïve view of an informant. Of course, to distinguish between these cases is not to suggest that the theorist will depart from the naïve informant in what they reckon to be acceptable or unacceptable; on the contrary, there is great concord (Sprouse and Almeida 2013). The point, rather, is that the theorist may have a hunch about the reason for a construction's unacceptability, say, in a way the informant may not.

26  Since Aspects (1965: 11–15), Chomsky has distinguished between grammaticality and acceptability. The former is a theoretical notion referring to what structures a given grammar generates. Speaker-hearers do not have grammaticality intuitions, but only acceptability ones, which a grammatical theory seeks to explain, in concert with other theories. Acceptability refers to what a speaker-hearer finds non-deviant, OK. It is a complex empirical matter, of course, to determine how acceptability bears on grammaticality in particular, as opposed to matters of semantics, pragmatics, lack of imagination, contextual priming, etc.

of intuitions as data, for the informants can have no intuitions, qua meta-linguistic judgements, about the phenomena since they lack the relevant concepts. Devitt's reasoning here is unfortunate, for he is right to assail the meta-linguistic conception of intuition as being of not much good in linguistics. He just goes wrong in thinking that such a conception is one the linguist adopts.27 Intuitions clearly bear on the relevant phenomena, whether the informants have the relevant concepts or not. All Devitt's reasoning in fact shows is that the kind of meta-linguistic reflection he has in mind is not the evidential basis of linguistics. For example, trivially, (5) provides evidence that the head of a subject DP cannot occur in an adjoined relative clause:

(5) a. The men the woman met loved themselves
    b. ∗The men the woman met loved herself

The head of a phrase is that lexical item that determines the category of the phrase in which it occurs, and so determines how that phrase as a whole can relate to other items in the structure, e.g., in terms of agreement. Here we see that the DP the men in (5a) agrees in number with the reflexive object of the verb, regardless of the presence of the singular the woman closer to the reflexive. In (5b), the men does not agree with the reflexive, and this produces unacceptability, showing that the woman of the relative clause cannot be the head of the phrase, even though it has the right morphology to agree with the reflexive. Devitt's error rests in his claim that for intuitions to be evidence for X, they must be about X (the concept of an X must be part of the intuitive content). Clearly, no linguist has ever employed such a restrictive conception of the evidential role of intuitions, for a host of parade examples, like the case of (5), would have been ruled out immediately. In general, evidential relations are not restricted by 'aboutness', and so there is no reason to expect anything different in linguistics. The observation of starlight bears on the curvature of space-time, but one's observations are not about space-time. The beaks of finches bear on genetic mutation rates, but observations of beaks are not about DNA. For Y to count as evidence for X, it suffices that X would explain the occurrence of Y, while not-X would not (or not do so as well, or not in combination with antecedent commitments, etc. – fill in one's favourite account of explanation). The same minimal principle applies in the case of linguistics; intuitive data do not usher in a novel notion of evidence that involves aboutness.

This leads us to our second point. Devitt contends that intuitions must be 'accurate' for them to count as evidence for the relevant linguistic hypotheses. This is not so in any straightforward sense. First, the relevant intuitions need not be propositionally articulate. They can be as minimal as 'Dunno', or 'Can't make sense of that', or 'What is that word doing

27  Devitt (2006a: 96) does provide evidence that something like the 'voice of competence' view is widely held, that intuitions are direct evidence on the nature of the language faculty, as if one can intuit that a construction is F, for some grammatical property F. However this evidence should be read, it does not militate for an orthodox meta-linguistic conception of intuitions in linguistics (see Collins 2008a; Ludlow 2011: ch. 3).
there?', etc. It is up to the theorist to determine just what the intuition is evidence for; it can't be read off the propositional structure (if any) of the judgement. This is clearly the case where the intuitions are concerned with truth conditions or what someone would say in a given situation. Here, we are eliciting semantic or discourse responses in order to determine the relevant syntactic constraints.28 In such cases, the informant is not at all accurate about what we are interested in; still, we seek to infer from what she does say to the real object of our inquiry. Again, it is not so much that Devitt does not appreciate these points (cp. Devitt 2006a: 99), but that he draws the wrong conclusion: that there is a problem with the linguist's dependence on intuitive data. Read aright, there is no problem at all, for the relevant intuitions are not accurate meta-linguistic judgements.

Secondly, intuitions can readily serve as evidence even where they are inaccurate (what Devitt (2006a: 227–228) himself calls 'performance errors'). For example, consider these two familiar cases:

(6) a. The horse raced past the barn fell
    b. The butter spread on the toast melted

Normal speakers find (6a) unacceptable, while (6b) is understood to be just fine. Theoretically, though, neither is syntactically deviant. There is a parsing problem with (6a), whereby we treat race as if it were the main verb and so are 'led down the garden path' not to expect a further verb at the end of the sentence. The diagnosis is that raced past the barn is a relative clause: What fell? The horse (who/that was) raced past the barn. There is no problem with (6b), for we do not take spread to be a main verb. Why? Well, part of the story is that The horse raced past the barn is ambiguous between a sentence and a DP with an adjoined relative clause. On the other hand, The butter spread on the toast is not ambiguous; it can only be a DP, for the butter is not a suitable subject of spread. The upshot is that here we have a case where informants are inaccurate about their competence, but we still retrieve robust evidence about their competence. Again, for intuitions to count as evidence for a grammar it suffices that we can inferentially trace back to our hypothesised grammatical principles such and such intuitions as opposed to so and so intuitions informants do reliably report. This inferential relation doesn't demand aboutness, or even accuracy of the intuitions.

28  For extensive use of such intuitive data from children, see, e.g., Crain and Thornton (1998) and Roeper (2007).

2.5.5  Non-intuitive Data

An apparent problem for Devitt's position is that data that bear upon linguistic theory can also come from what he would regard as kosher psychological research, such as language processing and acquisition. That is prima facie odd, if linguistic
theory is not about anything essentially cognitive. Devitt (2006a: 32), however, contends that

the psycholinguistic evidence about language comprehension and acquisition, offered to support the view that a grammar is psychologically real, bears directly on a theory of the language, in my sense… The right theory of a language must ascribe rules to the language that competent speakers of the language respect: the Respect Constraint.

The idea here is that evidence on processing (what Devitt calls competence) still serves as evidence for the language in his external sense, for the processing rules must respect the rules of the language itself.29 It is certainly true that much of psycholinguistic research is informed and constrained by syntactic theory, which in turn is answerable to the findings of the research. This relationship is exactly as one would predict from the psychologistic conception. Whether the relationship can also be finagled to support Devitt’s conception turns on how we are to understand respect, the supposed constraint that the mechanism that produces linguistic tokens respects the syntactic properties of those tokens. There is a problem. Familiarly, there are mismatches between the licence syntactic theory gives a structure and how we in fact are able to process or acquire competence with it. We saw an example of a parsing mismatch just above with the garden path sentence, and there are many others. Similarly, in the case of acquisition, children tend to regularise, interpret certain structure to be flat, pronounce medial copies, etc. In short, children make ‘errors’. Here is a question: Are these mismatches cases of respect? The standard generative position on such cases is that they reflect a difference between competence and performance. So, there is no real respect at all in Devitt’s sense, where an internal process respects the properties of its external products; there are internal interfaces, which are more or less noisy. Indeed, Chomsky (1991: 49) has gone so far as to suggest that language is ‘in general unusable’, by which he means that the language faculty is usable just to the extent that the interfacing components can interpret the structures it makes available to them, but the faculty itself is not designed to be so usable. In short, the generativist is interested in such mismatches between syntactic licence and processing/acquisition, and has a ready general explanation for it in terms of a competence/ performance interface (of course, this is just to signal the kind of explanation to be offered).

29  Often, externalist critics of Chomsky neglect to offer an account of the role of psychological evidence in linguistics. For example, Cummins and Harnish (1980), without denying the existence of something like a language faculty, suggest that Chomsky has somehow begged the question by presuming linguistics to be a branch of psychology. Chomsky (1980c: 43) correctly replies that if one is concerned with the truth of one's theories, 'as opposed to one or another way of axiomatizing some range of data', then one should seek all available evidence, including '"psychological constraints" deriving from other studies' (see note 7). Katz and Postal (1991: 526–527) defend Cummins and Harnish by suggesting that Chomsky begs the question again by presuming that the truth of a linguistic theory could only be a psychological matter. Not a bit of it. Chomsky's point is merely that Cummins and Harnish presume that a certain data source is somehow irrelevant – an unprincipled presumption shared by Katz and Postal.

It is difficult to see what Devitt could say here. On his model it is as if the subject has some failure of respect-recognition in being unable to perceive the external structure of the language, with children especially being subject to this deficit. Such an account is a non-starter, for relative to our general grasp of external structure, syntax is all equally abstract, whether the structures are usable or not. Devitt, of course, is free to say that processing respects syntactic structure, but it remains wholly opaque what that means apart from some magical coincidence between abstract properties of concreta and internal processing. Consider these standard cases:

(7) a. The boat the sailor the rat bit built sank
    b. Sailors sailors sailors fight fight fight30

Although perfectly interpretable, these structures are unparsable. To suggest that the relevant structure is out there as a property of the strings, but just undetectable, misdescribes the case. Even when we are able to identify the relevant structure (after a course in linguistics, say), the structure remains unparsable. Whether or not the external strings possess the relevant structure (center embedding), therefore, is not a feature of the explanans. The problem is that the structure relevant to the interpretation of the strings is not employable in such cases, whether or not the structure is externally realised. That is just a description of the phenomenon. Devitt (2006a: 227–228) is aware of such cases, of course; he classifies them as 'performance errors'. The notion of such an error is supposed to explain why a string can be both interpretable and unacceptable in terms of the interface between independent systems, e.g., the relevant interpretation imposes too great a demand upon working memory that is independent of the language faculty. Devitt, however, has no proper competence/performance distinction, i.e., his notion of competence just is a species of performance, production of strings. So, according to Devitt, the classification of (7) as 'errors' amounts to our being told that the system has linguistic reality wrong. But what we want to explain is why certain structures are interpretable but unacceptable, which is a psychological phenomenon, not a phenomenon that bears on our being right or wrong about anything. Even post-debriefing, the structures remain unparsable, so the problem to be explained is not how or even that speaker-hearers go wrong. Without a proper competence/performance distinction, it is difficult to see how we could provide any kind of principled explanation of mismatches. I shouldn't want to suggest that Devitt can tell no story here, but it remains opaque what it might be.

30  These are cases of center embedded relative clauses. Thus: (i) The boat [(that) the sailor [(that) the dog bit] built] sank.

2.5.6  Representation

Devitt's final consideration bears on an interpretation of Chomsky. Devitt (2006a: 34) writes:

Chomsky's assumption (on the natural interpretation) [is] that competence … is knowledge of the language, involving the representation of its rules… For, the language that would be thus known and represented would be the very same language that is the output of the competence. Chomsky assumes that competence consists in knowledge about the I-language. The point I am emphasizing is that this very I-language is, indeed must be at the appropriate level of abstraction, the output of that very competence.

The idea here is that knowledge of language requires an object, which is just what the competence produces: the I-language. Hence, Chomsky 'must' accept Devitt's distinctions (cp. Katz 1981: 81). Unfortunately, Devitt's interpretation is somewhat awry. Space precludes any proper discussion of Chomsky's actual views, so let me direct my points to Devitt's reasoning rather than to a full counter interpretation.31

First, Chomsky could not possibly mean by 'knowledge of language' what Devitt takes him to mean, at least not without falling into immediate contradiction. Chomsky rejects the idea of an external language, so whatever knowledge is, in the relevant sense, it is not an external relation between internal states and their products, like the relation between a blacksmith and the horseshoes he produces. For what it is worth, Chomsky thinks of 'knowledge' as an informal notion picking out the relevant states of the speaker-hearer, with no presupposition of an object of the knowledge, relative to which it is to be assessed.32 Again, my present point is simply that Chomsky would be flatly inconsistent were we to read him any other way.

Secondly, as already documented, pace Devitt's appeals to common sense, competence in Chomsky's sense does not produce external tokens. The competence is one aspect of an ensemble, which as a whole produces and consumes non-linguistic material in highly specific ways; it is these ways of interpretation that a theory of competence targets. So, again, there is no commitment to a set of products of competence on Chomsky's view.

Thirdly, an I-language is not the output of competence at any level of abstraction. An I-language is the competence, a state of the mind/brain. Chomsky makes this

31  For Chomsky's views on 'knowledge of language', see Collins (2004, 2008a, c).

32  For example:

[I]n English one uses the locutions "know a language," "knowledge of language," where other (even similar) linguistic systems use such terms as "have a language," "speak a language," etc. That may be one reason why it is commonly supposed (by English speakers) that some sort of cognitive relation holds between Jones and his language, which is somehow "external" to Jones; or that Jones has a "theory of his language," a theory that he "knows" or "partially knows." … One should not expect such concepts to play a role in systematic inquiry into the nature, use, and acquisition of language, and related matters, any more than one expects such informal notions as "heat" or "element" or "life" to survive beyond rudimentary stages of the natural sciences. (Chomsky and Stemmer 1999: 397)

clear every time he uses the notion, but let us just consider the introduction of the expression into the literature. Chomsky (1986: 22) asks us to consider the formulation 'H knows L', where L is a language:

[F]or H to know L is for H to have a certain I-language. The statements of a grammar are statements of the theory of mind about the I-language, hence statements about structures of the brain formulated at a certain level of abstraction from mechanisms…

UG is now construed as the theory of human I-languages, a system of conditions deriving from human biological endowment that identifies the I-languages that are humanly accessible under normal conditions (Chomsky 1986: 23)

Here, the apparent external relation ‘K(H, L)’ is analysed as H being in certain brain states abstractly characterised in terms of an I-language, where a grammar or theory is about such states so characterised, i.e., at a level of abstraction from mechanisms.33 So, an I-language is not the object of knowledge or even a product of the mind, but the state of the mind/brain that is picked out by the informal locution ‘H knows L’. Chomsky couldn’t mean anything else, for I-language is a term of art introduced to focus attention on internal states as the object of the theory rather than the putative products of the mind (cp. Chomsky 2001: 41–42). In short, Devitt’s misreading of Chomsky gives us no reason to think of linguistics as being implicitly committed to the meta-theory Devitt favours. This is hardly surprising, of course, for it is virtually beyond belief that Chomsky and everyone else in the generative tradition could have been confused about the most elementary meta-theoretical principle of their field, which developed out of a rejection of all forms of nominalism.

33  See Matthews (2007) for a general treatment of propositional attitude attribution consistent with this approach. Matthews suggests, after others, that the relational form of propositional attitude attributions is measure-theoretic, allowing, but not requiring, the mind/brain states so picked out to be monadic rather than relational, just as X weighs 3 kg has a relational form, even though the underlying magnitude picked out is a monadic property.

2.6  Concluding Remarks

The aim of the foregoing has not been to show that every non-cognitivist interpretation of linguistic theory must be mistaken. I should say, though, that, once read aright, the psychologism generative linguistics offers is almost banal: linguistic theory's posits are psychologically real because they serve to explain psychological phenomena, and only explain such phenomena – the explanations they furnish can only be parasitically construed as speaking of an external linguistic reality. Unfortunately, so much of the philosophical controversy that generative linguistics has occasioned is not due to any inherent difficulty of interpretation of the theories, as, say, we find in quantum mechanics. The disputes arise from philosophical presuppositions being imposed upon linguistics, from which perspective the theories can seem dubious or not properly supported, in need of a more intimate relation with 'reality'. The reality a theory speaks of, however, just is the phenomena it primitively explains.34

34  My greatest debt is to Michael Devitt, for literally hundreds of emails and conversations. I disagree with Michael, but I've learnt a lot from this engagement, which has been one of the highlights of my philosophical life. A nigh-on equal debt is to Georges Rey, who is my constant interlocutor on all things philosophical. I've also benefitted enormously from conversations with Bob Matthews, Frankie Egan, Paul Pietroski, Barry Smith, and Guy Longworth. Especial thanks, too, to Andrea, for organising the volume and very helpful comments.

References

Anderson, S., and D. Lightfoot. 2002. The language organ: Linguistics as cognitive physiology. Cambridge: Cambridge University Press.
Berwick, R., and A. Weinberg. 1984. The grammatical basis of linguistic performance: Language use and acquisition. Cambridge, MA: MIT Press.
Bever, T., J. Katz, and D.T. Langendoen, eds. 1976. An integrated theory of linguistic ability. Hassocks: The Harvester Press.
Born, M. 1953. Physical reality. The Philosophical Quarterly 3: 139–149.
Bresnan, J. 1978. A realistic transformational grammar. In Linguistic theory and psychological reality, ed. M. Halle, J. Bresnan, and G. Miller, 1–59. Cambridge, MA: MIT Press.
Bresnan, J., and R. Kaplan. 1982. The mental representation of grammatical relations. Cambridge, MA: MIT Press.
Chomsky, N. 1955–56/75. The logical structure of linguistic theory. New York: Plenum.
———. 1957. Syntactic structures. The Hague: Mouton.
———. 1959. Review of B.F. Skinner's Verbal behaviour. Language 35: 26–58.
———. 1963. Formal properties of grammars. In Readings in mathematical psychology, ed. R.D. Luce, R.R. Bush, and E. Galanter, vol. II, 323–417. New York: Wiley.
———. 1964. Current issues in linguistic theory. In The structure of language: Readings in the philosophy of language, ed. J. Fodor and J. Katz, 50–118. Englewood Cliffs: Prentice-Hall.
———. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
———. 1975. Reflections on language. London: Fontana.
———. 1980a. Rules and representations. New York: Columbia University Press.
———. 1980b. Reply to Harman. The Behavioral and Brain Sciences 3: 45–46.
———. 1980c. Reply to Cummins and Harnish. The Behavioral and Brain Sciences 3: 43–44.
———. 1981. On the representation of form and function. Linguistic Review 1: 3–40.
———. 1986. Knowledge of language: Its nature, origin and use. Westport: Praeger.
———. 1987. Reply to George. Mind & Language 2: 178–197.
———. 1988. Language and the problem of knowledge: The Managua lectures. Cambridge, MA: MIT Press.
———. 1991. Linguistics and cognitive science: Problems and mysteries. In The Chomskyan turn, ed. A. Kasher, 26–53. Oxford: Blackwell.
———. 1995. The minimalist program. Cambridge, MA: MIT Press.
———. 2000. Minimalist inquiries: The framework. In Step by step: Essays on minimalist syntax in honor of Howard Lasnik, ed. R. Martin, D. Michaels, and J. Uriagereka, 89–155. Cambridge, MA: MIT Press.
———. 2001. Derivation by phase. In Ken Hale: A life in language, ed. M. Kenstowicz, 1–52. Cambridge, MA: MIT Press.
———. 2004. Language and mind: Current thoughts on ancient problems. In Variation and universals in biolinguistics, ed. L. Jenkins, 379–406. Amsterdam: Elsevier.
———. 2012. The science of language: Interviews with James McGilvray. Cambridge: Cambridge University Press.
Chomsky, N., and B. Stemmer. 1999. An on-line interview with Noam Chomsky: On the nature of pragmatics and related issues. Brain and Language 68: 393–401.
Collins, J. 2004. Faculty disputes. Mind & Language 17: 300–333.
———. 2006. Between a rock and a hard place: A dialogue on the philosophy and methodology of generative linguistics. Croatian Journal of Philosophy 6: 471–505.
———. 2007a. Review of Michael Devitt's Ignorance of language. Mind 116: 416–423.
———. 2007b. Linguistic competence without knowledge. Philosophy Compass 2: 880–895.
———. 2008a. Knowledge of language redux. Croatian Journal of Philosophy 8: 3–44.
———. 2008b. A note on conventions and unvoiced syntax. Croatian Journal of Philosophy 8: 241–247.
———. 2008c. Chomsky: A guide for the perplexed. London: Continuum.
———. 2009. Perils of content. Croatian Journal of Philosophy 9: 259–289.
———. 2014. Representations without representa: Content and illusion in linguistic theory. In Semantics & beyond: Philosophical and linguistic inquiries, ed. P. Stalmaszczyk, 27–64. Berlin: de Gruyter.
Cowie, F. 1999. What's within? Nativism reconsidered. Oxford: Oxford University Press.
Crain, S., and R. Thornton. 1998. Investigations in universal grammar: A guide to experiments on the acquisition of syntax and semantics. Cambridge, MA: MIT Press.
Cummins, R., and R. Harnish. 1980. The language faculty and the interpretation of linguistics. The Behavioral and Brain Sciences 3: 18–19.
Devitt, M. 2006a. Ignorance of language. Oxford: Oxford University Press.
———. 2006b. Defending Ignorance of language: Responses to the Dubrovnik papers. Croatian Journal of Philosophy 6: 571–606.
———. 2007. Response to my critics. Meeting of the Pacific Division of the APA (April, 2007).
———. 2008a. Explanation and reality in linguistics. Croatian Journal of Philosophy 8: 203–232.
———. 2008b. A response to Collins' note on conventions and unvoiced syntax. Croatian Journal of Philosophy 8: 249–255.
Fodor, J. 1983. The modularity of mind. Cambridge, MA: MIT Press.
Fodor, J., T. Bever, and M. Garrett. 1974. The psychology of language: An introduction to psycholinguistics and generative grammar. New York: McGraw-Hill.
Fodor, J.D., J. Fodor, and M. Garrett. 1975. The psychological unreality of semantic representations. Linguistic Inquiry 6: 515–532.
Frege, G. 1884/1950. Foundations of arithmetic. Oxford: Blackwell.
George, A. 1989. How not to become confused about linguistics. In Reflections on Chomsky, ed. A. George, 90–110. Oxford: Blackwell.
Harman, G. 1980. Two quibbles about analyticity and psychological reality. The Behavioral and Brain Sciences 3: 21–22.
Hauser, M., N. Chomsky, and W.T. Fitch. 2002. The faculty of language: What is it, who has it, and how did it evolve? Science 298: 1569–1579.
———. 2005. The evolution of the language faculty: Clarifications and implications. Cognition 97: 179–210.
Higginbotham, J. 1991. Remarks on the metaphysics of linguistics. Linguistics and Philosophy 14: 555–566.
———. 2007. Response to Devitt. Meeting of the Pacific Division of the APA (April, 2007).
Jenkins, L. 2000. Biolinguistics: Exploring the biology of language. Cambridge: Cambridge University Press.
———, ed. 2004. Variation and universals in biolinguistics. Amsterdam: Elsevier.
Katz, J. 1981. Language and other abstract objects. Totowa: Rowman and Littlefield.
Katz, J., and P. Postal. 1991. Realism vs. conceptualism in linguistics. Linguistics and Philosophy 14: 515–554.
Kayne, R. 1994. The antisymmetry of syntax. Cambridge, MA: MIT Press.
Lange, M. 2002. Introduction to the philosophy of physics: Locality, fields, energy, and mass. Oxford: Blackwell.
Levelt, W.J.M. 1974. Formal grammars in linguistics and psycholinguistics. Vol. 3. The Hague: Mouton.
Lewis, D. 1975. Languages and language. In Minnesota studies in the philosophy of science, ed. K. Gunderson, vol. 7, 3–35. Minnesota: University of Minnesota Press.
Ludlow, P. 2011. The philosophy of generative linguistics. Oxford: Oxford University Press.
Matthews, R. 2006. Could competent speakers really be ignorant of their language? Croatian Journal of Philosophy 6: 457–504.
———. 2007. The measure of mind: Propositional attitudes and their attribution. Oxford: Oxford University Press.
Maynes, J., and S. Gross. 2013. Linguistic intuitions. Philosophy Compass 8: 714–730.
Miller, G., and N. Chomsky. 1963. Finitary models of language users. In Readings in mathematical psychology, ed. R. Luce, R. Bush, and E. Galanter, vol. 2, 419–491. New York: Wiley.
Nunes, J. 2004. Linearization of chains and sideward movement. Cambridge, MA: MIT Press.
Petitto, L.-A. 2005. How the brain begets language. In The Cambridge companion to language, ed. J. McGilvray, 84–101. Cambridge: Cambridge University Press.
Pietroski, P. 2008. Think of the children. Australasian Journal of Philosophy 86: 657–669.
Postal, P. 2004. Skeptical linguistic essays. Oxford: Oxford University Press.
Pritchett, B. 1992. Grammatical competence and parsing performance. Chicago: Chicago University Press.
Pylyshyn, Z. 1991. Rules and representations: Chomsky and representational realism. In The Chomskyan turn, ed. A. Kasher, 231–251. Oxford: Blackwell.
Quine, W.V.O. 1972. Methodological reflections on current linguistic theory. Synthese 21: 386–398.
Rey, G. 2006a. The intentional inexistence of language – but not cars. In Contemporary debates in cognitive science, ed. R. Stainton, 237–255. Oxford: Blackwell.
———. 2006b. Conventions, intuitions and linguistic inexistents: A reply to Devitt. Croatian Journal of Philosophy 6: 549–570.
Roeper, T. 2007. The prism of grammar: How child language illuminates humanism. Cambridge, MA: MIT Press.
Schütze, C. 1996. The empirical basis of linguistics: Grammaticality judgments and linguistic methodology. Chicago: Chicago University Press.
Smith, B. 2006. Why we still need knowledge of language. Croatian Journal of Philosophy 6: 431–456.
Soames, S. 1984. Linguistics and psychology. Linguistics and Philosophy 7: 155–179. References to the reprint in S. Soames, Philosophical papers, volume 1. Natural language: What it means and how we use it, 133–158. Princeton: Princeton University Press, 2009.
———. 1985. Semantics and psychology. In Philosophy of linguistics, ed. J. Katz, 204–226. Oxford: Oxford University Press. References to the reprint in S. Soames, Philosophical papers, volume 1. Natural language: What it means and how we use it, 159–181. Princeton: Princeton University Press, 2009.
Sprouse, J., and D. Almeida. 2013. The role of experimental syntax in an integrated cognitive science of language. In The Cambridge handbook of biolinguistics, ed. K. Grohmann and C. Boeckx, 181–202. Cambridge: Cambridge University Press.
Wiggins, D. 1997. Languages as social objects. Philosophy 72: 499–524.

Chapter 3

Priorities and Diversities in Language and Thought

Elisabeth Camp

Abstract  Philosophers have long debated the relative priority of thought and language, both at the deepest level, in asking what makes us distinctively human, and more superficially, in explaining why we find it so natural to communicate with words. The "linguistic turn" in analytic philosophy accorded pride of place to language in the order of investigation, but only because it treated language as a window onto thought, which it took to be fundamental in the order of explanation. The Chomskian linguistic program tips the balance further toward language, by construing the language faculty as an independent, distinctively human biological mechanism. In Ignorance of Language, Devitt attempts to swing the pendulum back toward the other extreme, by proposing that thought itself is fundamentally sentential, and that there is little or nothing for language to do beyond reflecting the structure and content of thought. I argue that both thought and language involve a greater diversity of function and form than either the Chomskian model or Devitt's antithesis acknowledge. Both thought and language are better seen as complex, mutually supporting suites of interacting abilities.

Keywords  Systematicity · Language of thought hypothesis · Non-sentential logic · Modularity · Maps · Diagrams · Discourse structure · Illocutionary-force-indicating devices · Williams Syndrome

This article grew out of an Author Meets Critics session at the 2007 Pacific APA; thanks to audiences there, and especially to Michael Devitt. I also presented some of this material at the 2007 Workshop in Philosophy of Linguistics in Dubrovnik, addressing work by Peter Ludlow; thanks to audiences there, and especially to Peter Ludlow, Brian Epstein, and Gurpreet Rattan. Additional thanks to Josh Armstrong and Paul Pietroski, and to Carolina Flores for editorial work and advice.

E. Camp (*)
Department of Philosophy, Rutgers University, New Brunswick, NJ, USA

© Springer Nature Switzerland AG 2020
A. Bianchi (ed.), Language and Reality from a Naturalistic Perspective, Philosophical Studies Series 142, https://doi.org/10.1007/978-3-030-47641-0_3


Which comes first, thought or language? Some sort of thought-first model has considerable intuitive pull. Indeed, although analytic philosophy has fixated on language since its inception, this interest has generally been driven by the assumption that language is important primarily or only because it affords our most direct, transparent window onto the structure and content of thought (Dummett 1994). In particular, while Frege and Russell often inveighed against the inconstancies and confusions of ordinary discourse, they proposed formal logics as an improved means for accomplishing the aim they took actual languages to achieve only imperfectly: of transparently reflecting the structure and content of thought. And much subsequent philosophical analysis has been driven by the assumption that ordinary language can itself be revealed to be in perfect logical-conceptual order, given a sufficiently ingenious mapping from surface to logical forms (Wittgenstein 1921/2001) and a sufficiently sophisticated understanding of the relation between words’ meanings and the uses to which speakers put them (Grice 1975). Thus, despite its intense focus on language as a topic of investigation, much of twentieth-century analytic philosophy embraced a more fundamental focus on thought at the level of explanation. The Chomskian linguistic program extended the “linguistic turn” to this more fundamental level at which philosophers still prioritized thought and concepts, instead treating language as an explanatory end in its own right. But it did this in part by understanding ‘language’ in a very particular way, which it takes to be more theoretically tractable and scientifically pertinent than either the ordinary or philosophical construals. Analytic philosophers and laypeople typically treat language as a public phenomenon: a system of conventions for using certain sounds to stand for certain ideas, which each of us accede to by “tacit consent” (Locke 1689) in order to make ourselves understood. By contrast, Chomskians accord primacy to the sets of sentences generated by an individual speaker’s linguistic knowledge, and inquire into the essential characteristics of the mechanism that generates them. More specifically, they investigate the ‘language faculty’, construed as an innate biological mechanism for generating syntactically complex representational forms which map conceptual meanings to sounds within an individual speaker’s idiolect. Any connection to public conventions for use in communication is for them secondary at best, and perhaps altogether illusory. It is easy to feel that something has been lost in the shift away from language conceived as a public system for expressing thoughts and toward language conceived as an individual system for generating formal structures. In Ignorance of Language, Michael Devitt aims not just to restore the balance, but to shift it entirely to the side of thought. While much of the book focuses on various negative arguments against what he takes to be current linguistic methodology, his positive argument for the priority of language  is fairly direct. He begins with the claim that “language expresses thought” (2006: 128), which he takes to be “relatively uncontroversial” in itself, but to entail the exuberantly controversial tenet that “[t]hought has a certain priority to language ontologically, explanatorily, temporally, and in theoretical interest” (276). 
He then argues that thought itself is sententially structured; and concludes that there is thus “little or nothing” for the language faculty to do beyond matching sounds to complex mental representations, in a way that can be

accomplished by “fairly brute-causal associationist processes.” All the wondrous complexity of contemporary linguistics really belongs to thought instead: “Humans are predisposed to learn languages that conform to the rules specified by UG because those rules are, largely if not entirely, innate structure rules of thought” (276). While Devitt’s conclusion is strong and surprising, the main premisses are widely accepted in some form. My own theoretical proclivities also lie on the thought-first side of the seesaw; and I share Devitt’s conception of language as a public, social construction. However, I see no reason to reject the existence of a substantive, biologically-­based language faculty. More importantly, I think humans employ a range of formats for thought, both naturally and by enculturation. And I think language does much more than express thought. Thus, I will argue that neither thought nor language can be assigned clear explanatory priority over the other. In particular, instead of either a single “language faculty” or a single set of “rules of thought,” it is more plausible to posit complex suites of distinct, interacting abilities that add up to make certain ways of talking and thinking very natural for us. I consider the function and format of thought and language in §3.1 and §3.2, respectively. In §3.3, I argue that the constraints imposed by Universal Grammar are more plausibly explained as originating from language, as Chomskians maintain, rather than thought, as Devitt proposes.

3.1  The Language of Thought and Diversities in Cognitive Format

The first step in Devitt's broadside for the priority of thought is establishing that thought itself has a sentential structure. To support this conclusion, he invokes the familiar Fodorian inference to the best explanation from systematicity and productivity in observed behaviors to the existence of a representational system with recurrent, systematically recombinable parts. Devitt forthrightly admits that the Language of Thought Hypothesis (LOTH) is "controversial" (142). However, it plays a central role in his overall argument for the priority thesis – and indeed, the specific role it plays lends it an additional degree of controversy. That is, for Devitt, as for Fodor and many others, the most direct and compelling source of evidence for LOTH is the systematicity and productivity of human speech. As we'll discuss in §3.2, the claim that human speech is indeed so highly systematic is contested by various philosophers of language and linguists. But even accepting that we do observe such systematicity, the direction of causal influence remains an open question: perhaps thought is systematic because and to the extent that language is. While this is a thesis on which most proponents of LOTH can remain neutral, in order for Devitt to establish that thought is language-like in a non-question-begging way, he needs independent evidence for the systematicity of thought that doesn't rely on language. Moreover, to establish his ultimate conclusion that language exhibits the particular features it does because thought possesses those features, he needs independent
evidence not just that thought is highly systematic, but that it has a specifically sentential structure. In this section, I argue that if we bracket off linguistic evidence about the format of thought, then the case that thought has distinctively sentential format becomes much weaker. The Language of Thought Hypothesis is amenable to at least two construals (Camp 2007). On the stronger construal, thought is claimed to possess a distinctively linguistic structure; on the weaker one, it is merely like language in being a compositional representational system. For Fodor’s central aim of defending computationalism against connectionism (Fodor and Pylyshyn 1988), the weaker construal suffices. Fodor himself consistently extends his arguments to nonhuman animals (e.g. 1987), and offhandedly assumes that pictorial representational models are compositional (2007). Like Fodor, Devitt recognizes that an appeal to representational complexity doesn’t entail the strong claim about  specifically sentential structure, because maps and other non-linguistic representations have a syntax that is “very different” from language (146). And like Fodor, Devitt appeals to the cognitive states of non-human animals as evidence about the nature of thought – in his case to establish thought’s temporal priority over, and contemporary independence from, language (131). So, the argument from systematicity alone does not justify an inference to sentential structure, especially in the current argumentative context; and Devitt acknowledges this. However, he argues that the need to explain the processes of thought does. “Formal logic,” he says, “gives us a very good idea of how thinking might proceed” (146–147); by contrast, we “have very little idea how thinking could proceed if thoughts were not language-like” (147). Devitt says very little about what he means by ‘formal logic’, but he appears to have something like a traditional predicate calculus in mind. Bermudez (2003: 111) makes the same claim more explicitly: “We understand inference in formal terms – in terms of rules that operate on representations in virtue of their structure. But we have no theory at all of formal inferential transitions between thoughts that do not have linguistic vehicles” (see also e.g. Rey 1995: 207). Although this assumption is common – and understandable, given the intimate historical connection between analytic theorizing about inference and the development of predicative logic – it’s not true that formal sentential logic provides our only model for “how thinking could proceed” in general. Recent successes with connectionist models of “deep learning” have challenged the computational orthodoxy (e.g. Schmidhuber 2015); while hierarchical Bayesian models have introduced probabilistic inference to computational methodology in ways that function very differently from traditional logics (Tenenbaum et al. 2011). So it is not obvious that systematic cognitive abilities must be implemented by a system of representational vehicles which are comprised of recurrent symbolic parts governed by fixed, formally-­specified rules. However, even assuming that they must be implemented by such a system, a diversity of representational formats can satisfy this criterion. First, maps – both those, like seating charts, that exploit a finite base of elements and principles of composition, and also those, like road atlases, that exploit potentially continuously varying shapes, colors, and textures – can be constructed and

interpreted by means of formal principles (Pratt 1993; Casati and Varzi 1999; MacEachren 2004). These principles depart substantively from those of language (Rescorla 2009; Camp 2007, 2018a). And they can be exploited to define rules for updating and integrating distinct maps within a larger cartographic system (or from inter-translatable systems), so long as they represent regions that are themselves related in spatially appropriate ways (e.g. that are at least partially contiguous). Given a definition of validity that is not specifically linguistic, we can assess such transformations for validity (Sloman 1978). Finally, there is substantive psychological and neurophysiological evidence that both people and other animals do process spatial information, including abstract information about spatial relationships, in a distinctively spatial way (Morgan et  al. 2011; Franconeri et  al. 2012; Marchette et al. 2017). Devitt’s second reason for rejecting the hypothesis that thought might be structured like a map rather than a set of sentences is that maps are expressively limited in comparison to language (146). In comparison to the invocation of constraints on explaining processes of thought, this argument is more compelling. While the expressive limitations of maps are often exaggerated – in particular, ordinary maps can be enriched to represent negation, tense, disjunction, and conditionals in various ways – it is true and important that maps cannot represent information that is not spatial. Most notably, they cannot represent abstract quantificational information (Camp 2007, 2018a). At the same time, though, there also exist diagrammatic systems, which are likewise formally defined and differ substantively from language (and from one another), and which have a much richer expressive range than maps (Shin 1994; Allwein and Barwise 1996). Moreover, some of these diagrammatic systems have robust, rigorous practical applications in science and mathematics (Tufte 1983; Giardino and Greenberg 2014). Indeed, De Toffoli (2017) argues that diagrams are useful in mathematical practice precisely because the process of using them – by manipulating constituent algebraic elements – constitutes a valid form of inferential ‘calculation’. Likewise, diagrams can be distinctively useful in tracking information about abstract relations in the real world. In particular, directed graphs or ‘Bayes nets’ offer a rigorously defined diagrammatic format for representing and manipulating causal information, one that is arguably more effective than sentential logics at least for certain purposes (Pearl 2000; Elwert 2013), and that has been argued to implement causal knowledge in children (Gopnik et  al. 2004) and possibly non-­ human animals (Camp and Shupe 2017). Thus, given this diversity of formally definable, practically relevant representational systems, there can be no in-principle argument that thought per se must be sentential. At the same time, a more modest version of the appeal to expressive power can be used to establish that at least some human thought does have a distinctively sentential structure (Camp 2015). Language is distinguished from other representational formats by its abstractness, in at least three respects. First, it employs a highly arbitrary semantic principle mapping basic elements to values. Second, it employs a highly neutral or general combinatorial principle (e.g. predication, functional application, or Merge), which itself has only minimal representational

significance. And third, its principles of construction and interpretation are defined entirely in terms of operations on the values of the basic elements, rather than on the vehicular elements themselves. Most diagrammatic systems are like language, and unlike most maps, in employing a highly arbitrary semantic principle, largely freeing them of significant constraints on the types of values their constituents can denote. In contrast to maps, some diagrammatic systems are  also like language in employing highly neutral combinatorial principles: for instance, Venn diagrams use spatial relations to represent set-theoretic relations among denoted entities. The relatively high abstractness and generality of those set-theoretic relations permits such diagrams to represent relations among a correspondingly wide range of entities. (By contrast, other diagrammatic systems employ principles with more robust significance, which impose commensurate expressive restrictions: for instance, because phylogenetic tree diagrams assign branching tree structures the significance of branching ancestry, they invariably represent the entities denoted by the nodes in a branching tree as related by ancestry and descent; Camp 2009a.) However, even the most general diagrammatic systems fail to be fully abstract along the third dimension, of vehicular implementation. That is, simply in virtue of being diagrams, their construction and interpretation rules exploit the spatial (or topological) structure of their representational vehicles. And this inevitably generates some expressive restrictions: for instance, even sophisticated Venn diagrams can only represent relations among sets that can be implemented with closed continuous figures in a single plane (Lemon and Pratt 1997). Thus, Devitt is right that language is distinctively expressively powerful, in virtue of its distinctively abstract semantic and combinatorial properties. Still, the class of complex relations that exceed the scope of diagrammatic representation is rather rarified, and so an advocate of LOTH might be nervous about resting their case for expressive generality on them. To bolster their case, they might point to the fact that ordinary human thought is highly intensional in order to suggest a more pervasive and relevant potential expressive restriction on non-sentential systems. Diagrammatic systems can represent at least some kinds of modality – for instance, Pearl (2009) argues that directed graphs are uniquely equipped to capture counterfactual causal inference. But most diagrammatic systems are extensional; and the best-developed and most general intensional logics are all extensions of the predicate calculus. Nonetheless, even if we grant that intensional relations, as well as certain extensional relations among sets, can only be expressed in language, this still falls well short of establishing that “the innate structure rules of thought” have a sentential syntax and semantics, in the way Devitt needs. First, like the basic argument for LOTH, inferring that the logic of intensionality must be predicative relies on an appeal to a lack of available alternatives that is vulnerable to subsequent counterexemplification. Second and more generally, even if a formal predicate calculus does constitute our most rigorous general model for “how thinking could proceed” when we analyze “thought” in terms of the prescriptive “laws” of thought, it is frustratingly obvious that much, even most actual human thinking fails to conform to this

model (Evans and Over 1996). The various species of intensionality have proven to be especially recalcitrant to systematic formal analysis. Given that, models of thought that appeal to schemas and other partly abstract, partly iconic modes of representation may hold more promise for capturing the distinctive contours of actual ordinary human cognition, including especially intensionality (Johnson-­ Laird 2005). Finally and most importantly, establishing that some of the contents that people sometimes think about can only, or most easily, be represented and manipulated sententially doesn’t establish that all thought takes that form. Devitt, like many advocates of LOTH, implicitly assumes that thought is governed by a single set of innate structure rules; but empirical evidence suggests that humans regularly and spontaneously employ multiple representational formats. Here, one might argue for the centrality of sententially-structured thought on the grounds that its expressive generality uniquely equips it to integrate thoughts encoded in distinct formats (Carruthers 2003). But this too is a substantive argument by exclusion, which proponents of modularity can resist in various ways (Rice 2011). More importantly, it would still not establish language as the exclusive format for thought, only as the privileged vehicle for integration when it occurs. And advocates of cognitive modularity often point to the pervasive failure of full substantive integration in human cognition in support of a multiplicity of representational forms and structures (Fiddick et al. 2000). Thus, we have multiple reasons to think that human cognition can, and does, take multiple forms. And while I think we do have good reasons to accept that a significant amount of human thought is indeed sententially structured (Camp 2015), it is very much an open possibility that this reflects the influence of language as a biologically-­endowed and overlearned communicative medium, rather than the other way around.

3.2 Language as Expressing Thought: Diversities in Linguistic Function

The central lesson of §3.1 was that we have good reasons to reject Devitt's claim that human thought in general takes a sentential form, akin to a predicate calculus. Suppose, though, that we do accept that assumption. Shifting from thought to language, the next big move in Devitt's argument for the priority of thought is the claim that language takes the form it does because it expresses thought, and in particular because the structure of language reflects the structure of thought. (As Dummett (1989: 197) puts it, "a fully explicit verbal expression is the only vehicle whose structure must reflect the structure of the thought.") In this section, I argue that while expressing thought is indeed one central thing that language does, it also has other important functions.

Devitt doesn’t offer much detail about what it means for language to express thoughts. ‘Thoughts’, for him, are “mental states with meanings” (142): “propositional attitudes, mental states like beliefs, desires, hopes, and wondering whethers” (125). ‘Expressing’ is a matter of “convey[ing] a ‘message’” by “uttering a sentence of the language to express a thought with the meaning that the sentence has in that language” (127), where that ‘meaning’ is determined by public conventions for use (132). So his overall picture is that language expresses thought by combining words whose conventional meanings match the concepts that constitute the propositional attitude expressed, in a structure that mirrors the structure of that propositional attitude. An initial, somewhat ancillary worry focuses on the role Devitt assigns to conventional meaning here. He needs to do this to establish his overall negative conclusion, that “the primary concern in linguistics should not be with idiolects but with linguistic expressions that share meanings in idiolects” (12). However, the move from the claim that language expresses thought to the conclusion that linguistic meaning is conventional is too quick. Even many theorists who embrace a conception of language as a communicative device and who accept that linguistics should study “shared meanings” reject the conventionality of meaning. In particular, where Devitt simply assumes that the conventional meaning of an uttered sentence “often” matches the thought that the speaker intends to express with it (132), ‘radical contextualists’ like Recanati (2004) argue that many if not all utterances involve significant context-local influences that are not triggered by elements within the sentences uttered; and they often conclude, with Davidson (1986), that any appeal to convention is an irrelevant chimera. I agree with contextualists that most utterances involve context-local influences on communicated meaning. But I also agree with Devitt that conventional meaning plays an important role in the theoretical explanation of linguistic communication (Camp 2016). However, establishing this latter conclusion requires closer attention to the dynamics of ordinary discourse than Devitt provides; and I am suspicious of the claim that language as such, shorn of pragmatic modulation and amplification, typically expresses complete thoughts that speakers would be willing to endorse, let alone care to communicate. Let’s put general worries about the existence and role of linguistic convention aside, though, and focus just on what conventions for direct and literal use might actually be like. Crucially, linguistic terms and constructions implement a variety of conventional functions, not all of which can be smoothly assimilated under the rubric of ‘expressing thought’. One key source of complexity centers around illocutionary force, which lies at the intersection of syntax, semantics, and pragmatics. Standard linguistic theories now reject the traditional ‘marker’ model, on which different sentence types conventionally mark distinct forces applied to a common propositional core. Instead, declarative sentences are standardly taken to denote propositions, while questions denote partitions of possible worlds and imperatives denote goals or properties that are indexed to the addressee (see e.g. Roberts 1996/2012, 2018). 
None of these denoted objects are themselves “thoughts,” in Devitt’s intuitive sense; rather, utterances of sentences of these three syntactic types conventionally function to undertake the speech acts of assertion, interrogation, and

direction, and those speech acts have the conventional effect of altering the discourse in a certain way, for instance by adding the denoted object to the common ground (Stalnaker 1978). There are obviously often intimate causal and normative connections between those speech acts and speakers' psychological attitudes, especially beliefs and intentions. But the claim that those acts function in their entirety only to express those attitudes is implausible. To take the most straightforward case, of assertion, the view that asserting is the expression of belief (e.g. Bach and Harnish 1979) is at a minimum incomplete, because it fails to distinguish assertion from other modes of linguistic belief-expression such as presupposition and implicature. More seriously, it also fails to allow for assertions that do not even purport to be grounded in belief, such as bald-faced lies (Sorensen 2007), 'selfless' assertions (Lackey 2007), and suppositions and other contributions for the sake of the current conversation (Stalnaker 1978). Thus, rather than functioning exclusively or ultimately to express psychological attitudes, it is more plausible that utterances of sentences of the relevant syntactic types have the operative conventional function of doing something in discourse, either by altering the structure and contents of the common ground (Stalnaker 1978; Roberts 1996/2012; Murray and Starr 2020), and/or by undertaking a public commitment to produce other, suitably related speech acts in appropriate circumstances (Brandom 1983; MacFarlane 2011), where this action may be linked to, but is not identical with, the expression of belief. Further, sentential mood is not the only morpho-syntactic element with the conventional function of indicating or modulating illocutionary force. Other "illocutionary-force-indicating devices" include performative verbs like 'I apologize'; appositive clauses like 'as I claim' (Searle and Vanderveken 1985; Green 2007); and adverbials like 'frankly' or 'admittedly' (Bach 1999). Related terms and constructions, such as 'while', 'but', 'therefore', 'actually', and 'all in all', function to regulate the structure of discourse, by indicating and modulating relations among utterances of distinct sentences so that they form a coherent whole (Asher and Lascarides 2003; Kehler 2004). (Indeed, Clark and Fox Tree 2002 argue that apparent disfluencies in spontaneous speech, like 'um' and 'uh', are conventional English words which function to implicate that the speaker is initiating a major or minor delay in speaking.) Discourse particles like 'Man,' (McCready 2008) and 'like' (Siegel 2002) function to intensify or hedge the semantic contents of their focal terms. Evidentials, like 'I heard', 'I saw' or 'as they told me', function to indicate the evidential status of the illocutionary act's core at-issue content (Murray 2014). And a wide range of performative terms function to regulate social dynamics, to display emotional attitudes, and to mark social affiliation: thus, expressives like 'damn' express the speaker's emotional state (Potts 2007); honorifics like French 'vous' implicate a social or attitudinal relation between speaker and addressee (McCready 2010); and slurs like 'kike' undertake a commitment to the appropriateness of a derogating attitude toward the target group (Camp 2013).
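As a gloss on the Stalnakerian machinery invoked a few sentences back, the following is a minimal LaTeX sketch of assertion as common-ground update and of a polar question as a partition of the context set; the notation (c for the context set, the double brackets for the proposition expressed) is a standard textbook convention used here purely for illustration, not a formalism drawn from Camp's or Stalnaker's text.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Minimal sketch: c is the context set (the worlds compatible with the
% common ground); [[phi]] is the proposition expressed by the uttered sentence.
% Accepting an assertion shrinks the context set by intersection:
\[
  c + \varphi \;=\; c \cap [\![\varphi]\!]
\]
% A polar question leaves c intact but partitions it into the live complete
% answers, which is one way of cashing out the claim that interrogatives
% denote partitions rather than propositions:
\[
  c \;\longmapsto\; \bigl\{\, c \cap [\![\varphi]\!],\; c \setminus [\![\varphi]\!] \,\bigr\}
\]
\end{document}
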
The point of mentioning all of these classes of terms and constructions is not that they are disconnected from psychology; on the contrary, they are some of the most nuanced linguistic tools we have for coordinating minds and behaviors. In a suitably

capacious sense of the words ‘express’ and ‘thought’, on which ‘expression’ is the outward showing of an inner state and ‘thought’ includes any kind of inner state (e.g. Green 2007), we can presumably identify psychological states correlated with each of these term-types  – although an analysis of their meanings as simply the expression of those states will often encounter challenges like those facing a purely expressive account of assertion, and the relation between term and state will often not be aptly characterized on the familiar model of Gricean non-natural meaning in the form of reflexive intention-recognition. Rather, the key point is that the conventional functions of many of these terms and constructions do not arise simply from the need to exteriorize inner thoughts, but rather in significant part from the need to manage distinctively social and communicative dynamics. As such, they are not functions we would expect to be manifested in a Mentalese prior to and independent of linguistic communication, which are then merely exteriorized by speech. Specifically, many of these terms and constructions function to provide higher-order comments on primary, first-order speech acts (Neale 1999), in a way that renders correlative mental states at least partly dependent on that lower-order speech act. More generally, the presence of such terms and constructions reflects the status of natural language as a deeply social construction. Pace Chomsky, language is not just a biological mechanism for constructing complex meaningful strings; but neither is it just the public avatar of a commonly instantiated but ultimately essentially individual Language of Thought. To make sense of, or even notice, these linguistic phenomena, we need to approach language on its own terms: as a shared tool for achieving various species of coordination beyond just representation. In addition to the type of meaning they have, these terms and constructions are also theoretically interesting because of the way they interact with the rest of the linguistic machinery. Specifically, their conventional contributions are typically rhetorically peripheral rather than ‘at-issue’ (Horn 2014), so that they are not the natural target of direct anaphoric agreement and denial, like ‘That’s true’ or ‘I disagree’. Many of them resist syntactic embedding under more complex constructions like negation and conditionalization. And when they do embed, they are typically interpreted as ‘projecting out’ of those constructions, so that the speaker is interpreted as undertaking a straightforward, unmodified commitment to their associated contribution even as the ‘core’ content is negated, conditionalized, etc. Thus, much as in the case of cognition, when we examine how language actually works, we find significant functional diversity, which is reflected in significant semantic and syntactic diversity. A parallel response here, as in the case of cognition, is to ‘go modular’, by segregating peripheral contributions from the ‘core’ compositional machinery (Potts 2005), leaving the latter free to be analyzed in ways that more closely approximate the traditional model of a predicate calculus. Such a segregationist model is formally attractive. But it cannot accommodate the fact that the resistance of such terms and constructions to embedding is in many cases merely a default status, which can be overridden by syntactic and pragmatic factors in particular cases. 
In particular, imperatives and interrogatives, expressives, and slurs can also sometimes  receive embedded interpretations, when their contribution is

rendered at-issue relative to the larger discourse structure (Siegel 2006; Simons et al. 2010; Camp 2018b). Given this, an empirically adequate linguistic theory needs not only to acknowledge and explain these 'peripheral' constructions in isolation, but also to analyze the familiar core logical machinery, including negation, disjunction and conditionalization, in a way that reflects the diversity of uses to which that machinery can be put in natural language, which, as we've seen, includes operating on non-representational semantic values. 'Dynamic' approaches to linguistic meaning, which analyze the meanings of words in terms of their compositional contributions to the 'context change potentials' or 'update instructions' associated with sentences in which they occur (Heim 1983; Groenendijk and Stokhof 1991; Veltman 1996), appear to be especially well-equipped to provide the requisite flexibility in a theoretically motivated way. They may even support a resuscitated version of the thesis that language expresses thought (Charlow 2015). But they are also likely to have radical consequences, both for the analysis of logical machinery in natural language and also potentially for our theoretical understanding of the cognitive states expressed. These are consequences that need to be articulated and assessed in detail. But they are consequences that many more traditional philosophers, including especially Devitt, are likely to want to resist.
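To make the talk of 'context change potentials' concrete, here is a minimal LaTeX sketch of an update system of the broad kind cited above (Heim 1983; Veltman 1996); the particular clauses are a textbook-style reconstruction offered for illustration, not a rendering of any specific proposal discussed in this chapter.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Minimal update-semantics sketch: an information state s is a set of worlds,
% and each sentence denotes a function from states to states rather than a
% static truth condition.
\begin{align*}
  s[p] &= \{\, w \in s : w(p) = 1 \,\} && \text{atomic sentences eliminate worlds}\\
  s[\varphi \wedge \psi] &= s[\varphi][\psi] && \text{conjunction as sequential update}\\
  s[\neg \varphi] &= s \setminus s[\varphi] && \text{negation as complement}\\
  s[\Diamond \varphi] &= \begin{cases} s & \text{if } s[\varphi] \neq \emptyset\\ \emptyset & \text{otherwise} \end{cases} && \text{epistemic `might' as a test}
\end{align*}
\end{document}
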

3.3 Universal Grammar and the Psychology of Language Processing

3.3.1 UG-Violating Strings

The final major step in Devitt's argument for the elimination of the language faculty, after establishing that thought has sentential form and that language expresses thought, is the claim that linguistic competence, and hence the language faculty, is no more than "the ability that matches token sounds and thoughts for meaning" (129). Once we shift the entirety of the explanatory burden onto cognition, the argument goes, and accept that language merely transduces the contents and structure of thoughts, there is "little or nothing" left for the language faculty to do; the little work that does remain can be performed by "fairly brute-causal associationist processes." In §3.1, I rejected the claim that thought itself is universally sentential; and in §3.2 I argued against a monolithic model of language as expressing "beliefs, desires, hopes, and wondering whethers." So we already have significant reasons to doubt that language universally functions to implement, in publicly observable form, a structure that is antecedently instantiated by a univocally structured mental state like belief. In this section, I assess the claim that the processes that govern distinctively linguistic processing are merely associationist, with all or most observed constraints on grammatical structure arising at the level of the thoughts expressed.

If UG really constituted the rules of thought itself, this would seem to entail that ordinary people are unable to generate or classify, let alone comprehend, UG-violating strings. However, it appears that we do regularly make sense of UG-violating strings, at a minimum in the course of correcting other speakers’ disfluencies. Devitt acknowledges that we can indeed make sense of UG-violating strings, but suggests that we do so, “not by carrying its syntax into our thought but by translating it into a thought with a syntax that is like a sentence in our language” (151) – where this process of ‘translation’ is presumably also achieved by “fairly brute-causal association.” However, empirical evidence does not support such an ‘associationist translation’ view of the interpretation of UG-violating strings. Typical humans can learn to construct and classify strings using both UG-conforming and UG-violating rules. Specifically, although they may have difficulty extrapolating UG-violating rules from unstructured data (Smith et  al. 1993), they can learn to deploy rules that violate UG because they utilize “rigid” linear distance between words, when those rules are stated explicitly (Musso et  al. 2003). At the same time, though, UG-conforming and -violating rules are not on a cognitive par. In particular, they are implemented in distinct neural regions; specifically, the regions within Broca’s area that are also activated during ordinary natural language processing are only activated when deploying artificial syntactic rules that conform to UG (Embick et al. 2000; Moro et al. 2001). Indeed, Musso et al. (2003) found that Broca’s area progressively disengaged as subjects learned the UG-violating grammar, without any other distinctive pattern of brain activity being manifested. The first, most straightforward implication of these findings for the current discussion is that the overall cognitive abilities of normal subjects can underwrite at least some UG-violating ‘thought’, in the sense of rule-governed classification, without any translation into or activation of UG-conforming structures. Second, however, the crucial mechanism that does process UG-conforming strings is not part of ‘thought’ in Devitt’s favored sense, of “mental states with meanings.” In particular, while some of the relevant experiments contrasted real and artificial rules for languages like Italian and Japanese, others contrasted rules for classifying meaningless symbols as ‘agreeing’ with respect to patterns involving color and size (Tettamanti et al. 2009). Thus, the distinctive quality that activates the neural areas especially associated with UG seems to be a purely abstract, structural one. More specifically, the crucial feature is whether the rule for ‘agreement’ among elements is “non-rigid”: concerning structural relations among features that are neutral with respect to position, in contrast to “rigid” rules about linear distance between elements. Further, these same neural areas also appear to subserve cognitive processing for other domains that involve the same sort of hierarchical structure, such as music (Patel 2003) and planning complex actions (Koechlin and Jubault 2006). Devitt might take this last fact, that these neural regions are activated for domains other than language, to support his claim that the relevant structures are processed at a level of ‘thought’ rather than language, and so that there is “little or nothing to the language faculty” after all. 
However, the sense in which this holds is at best terminological. Chomsky and colleagues take the “faculty of language” in the

relevant, "narrow" sense ('FLN') to be a mechanism that generates complex internal representations by recursion, and then "pairs sound and meaning" by interfacing with the "sensory-motor" and "conceptual-intentional" systems (Hauser et al. 2002: 1571). The fact that the core mechanism is also utilized for other cognitive purposes does not undermine the existence of a biologically innate, distinctively linguistic package of a hierarchical recursive syntax plus phonology and semantics. Further, if that core recursive mechanism is what generates the complex matching structures that are utilized by both the articulatory and conceptual systems, then this mechanism is what implements the mapping from sounds to meanings that Devitt himself calls the 'language faculty' – but in a way that is the very opposite of a "brute-causal associative process."

3.3.2 UG-Conforming Complexities

Classificatory tussling aside, and even ignoring all the varieties of pragmatic and 'peripheral' conventional aspects of meaning cited in §3.2, there remains much more to natural language than the pure recursive operation of predication or Merge. 'Universal Grammar' encompasses all the initial constraints and operations required to derive the full complexities of adult linguistic competence. Devitt is committed to the claim that these linguistic complexities, like the more obviously systematic operations of predication and functional application, are to be explained as manifestations of the "innate structure rules of thought," rather than as arising from language itself. We saw in §3.1 that there are good reasons to doubt that human thought innately has any one universal format. But even if we focus just on the sorts of thoughts that are most plausibly canonically expressed in language – "beliefs, desires, hopes, and wonderings whether" – it is still implausible that those aspects of UG that aren't directly derivable from a Merge-like core operation of hierarchical recursion are 'largely' derived from innate rules of thought alone. Rather, they are more plausibly generated by distinctive features of the structure of natural language. This is shown, first, by various types of constraints on well-formedness that are not plausibly motivated by anything about the thoughts expressed, but that also don't vary in a conventional way across languages. Ludlow (2009) invokes the case of filler-gap constructions to make this point, using the following minimal pair:

(1) Who(m) did John hear that Fred said that Bill hit?
(2) # Who(m) did John hear the story that Bill hit?

(1) is a perfectly well-formed question; and (2) is lexically and structurally highly similar to (1). But while the question that a speaker of (2) might be trying to ask is perfectly comprehensible – who was the subject of the Bill-hitting story that John heard  – the string itself is irredeemably ill-formed. Explaining the difference between (1) and (2), and the vast range of analogous minimal pairs, has motivated

linguists to posit highly complex unpronounced syntactic structures and transformation constraints that are specific to natural language; in the case of (1) and (2), these are so-called 'island constraints' on movement across phrases. Where island constraints are almost purely syntactic, other classic cases demonstrate syntactic constraints that are responsive to the meanings of their constituent elements, but again not in a way that is derivable from constraints on thought itself. NPIs or negative polarity items are expressions like 'any' and 'lift a finger' that can only appear in certain environments. In the following minimal pairs,

(3a) I've got some money.
(3b) # I've got any money.

(4a) # I don’t have some money. (4b) I don’t have any money.

the co-numbered pairs of sentences seem to express equivalent thoughts, but only one is well-formed. Negation, as in (4a) and (4b), is the most obvious NPI-licensing environment, and many licensers are like negation in being downward entailing (Ladusaw 1979). But some contexts which are not straightforwardly downward entailing, such as antecedents of conditionals, questions, and expressions of surprise, can also license NPIs; and some licensing appears to depend on more fully pragmatic factors (Krifka 1995; von Fintel 1999; Israel 2011). Finally, some constraints on well-formedness appear not to be motivated by any systematic property at all, syntactic, semantic or pragmatic. Johnson (2004) cites the contrast between 'put' and 'stow' as illustration:

(5a) John put his gear down.
(5b) *John stowed his gear down.

(6a) *John put his gear.
(6b) John stowed his gear.

Here again, the co-numbered pairs seem to express equivalent thoughts, but only one is well-formed. Devitt could resist the conclusion that these last two classes of cases reveal constraints on well-formedness that are not derivable from thought by insisting that in fact, all of the minimal pairs in (3) through (6) actually express distinct thoughts, and so that their different linguistic statuses are inherited from thought after all. However, going this route requires embracing a notion of 'thought' that is so fine-grained as to verge on equivalence to the inner voicing of sentences. This would stipulatively rule out the intuitively plausible and linguistically significant possibility that distinct sentence-types can express the same thought. More importantly, it would threaten to undermine any independent grip on what a 'thought' is, or else to suggest that Devitt's 'thoughts' are ultimately individuated by the public-language sentences that express them rather than the other way around. A second route to demonstrating the irreducibility of language to thought appeals not to constraints on well-formedness, but to grammatical rules that concern meaning in a way that is difficult to generate from Mentalese alone. Perhaps the clearest cases of this involve what Pietroski and Crain (2012) call "unambiguities": constraints on mappings from forms to meaning that can't be derived from either the bare sentential structure or from the thoughts themselves. Thus, the sentence

(7) John is eager to please

appears to have two implicit slots for pronoun assignment, and there appear to be at least two distinct, perfectly coherent thoughts that would result from filling in those slots:

(7a) JOHN1 IS EAGER THAT HE1 PLEASE US
(7b) JOHN1 IS EAGER THAT WE PLEASE HIM1.

But only (7a) is available as an interpretation of (7) (Pietroski and Crain 2012). Finally, in addition to it being unclear how to generate or even define all the constraints exhibited by natural languages from the hypothesis that they reflect the “innate structure rules of thought,” there is also empirical evidence that UG can govern linguistic production in the absence of commensurately complex thought. Advocates of linguistic modularity (e.g. Pinker 1999) often cite people with Williams Syndrome in this context, because they display differentially robust linguistic abilities – specifically, implicit grasp of syntactic principles like c-command, scope, and binding  – against a strongly impaired general cognitive background. Although this interpretation has been resisted by ‘neuroconstructivists’ (e.g. Thomas and Karmiloff-Smith 2005), it is increasingly well-established that adults with WS do process syntax and morphology using the same mechanisms as typical subjects (Brock 2007). Differences in their performance on grammatical tasks from normal subjects are more plausibly attributed to limitations in handling complexity – that is, to  limitations arising from extra-grammatical cognitive resources like working memory, in a way that parallels limitations exhibited by typically-developing children matched to WS adults for overall cognitive function (Musolino and Landau 2012). Devitt acknowledges that there are people who utter complex grammatical sentences despite being highly cognitively impaired. But he argues that in order for them to count against his priority claim, “we would need to establish both (a), that the savants [e.g. people with WS] cannot think thoughts with certain meanings, and (b), that sentences out of their mouths really have those meanings” (165). He argues against (a) by suggesting that such people might just be bad at reasoning with the thoughts they do have, and against (b) by saying that if (a) were true, then we would thereby  be forced to conclude that the sounds coming out of their mouths were “mere noise.” Against this latter claim, note first that even if the words uttered by people like those with Williams Syndrome were “mere noise,” this would still demonstrate the existence of a psychological mechanism for generating specifically grammatical complexity, independent from commensurately complex thought  – a mechanism, that is, very close to the sort of language faculty under dispute. But second, it is not plausible that people with WS are in fact just making “noise.” For instance, Musolino and Landau (2012) probed grammatical knowledge in people with WS by asking subjects to match lexically aligned but syntactically distinct sentences, such as

(8a) The cat who meows will not be given a fish or milk
(8b) The cat who does not meow will be given a fish or milk

to animated vignettes – a task that they were indeed able to perform, and that requires assigning truth-conditions. More generally, people with WS are theoretically notable because their utterances are typically not just syntactically well-formed but also semantically coherent, both internally and in relation to one another, and at least basically pragmatically appropriate. For instance, they are often good at spontaneously generating coherent and engaging narratives from pictures; Rossen et al. (1996: 367) cite the following spontaneous description by a 16-year-old Williams Syndrome subject, Crystal, about her future aspirations:

You are looking at a professional bookwriter. My books will be filled with drama, action, and excitement. And everyone will want to read them … I am going to write books, page after page, stack after stack. I'm going to start on Monday.

Meanwhile, this same patient "fails all Piagetian seriation and conservation tasks (milestones normally attained in the age range of 7 to 9 years); has reading, writing, and math skills comparable to those of a first- or second-grader, and requires a babysitter for supervision." It's hard to make sense of just what is going on in the mind of a person like this. But a flat-footed appeal to general "stupidity" and "failure in practical reasoning" (Devitt 2006: 165) won't be satisfying unless it can explain how such subjects do produce such complex, sophisticated, and specifically verbal behavior.

3.4 Priorities, Sufficiencies, and Speculations

Establishing the strong conclusion that there is little or nothing to the language faculty requires commensurately strong assumptions: first, that thought has a sentential format, specifically one that conforms to UG; second, that language expresses thought, directly and in virtue of conventions for use; and third, that the transparent expression of thought leaves no further work for psychological mechanisms distinctively associated with language beyond associating surface forms with complex, independently meaningful mental representations. These assumptions can seem highly plausible, even inevitable, when formulated within the context of a model of both thought and language as monolithic instances of a "rational calculus." This picture has held philosophers captive since before the founding of analytic philosophy. But when we look at how both thought and language work, we find that the actual contours of human cognition and natural languages are more complex, and less systematic, than the strong argument requires. The broad terms 'thought' and 'language' encompass a range of importantly diverse functions, each implemented by a suite of intimately interacting but at least partially dissociable abilities and mechanisms.

At the same time, those strong assumptions appear plausible, and have dominated analytic philosophy, for a reason. It is not merely wishful thinking that people are rational animals: humans really do, at least sometimes, engage in logical reasoning – though we also often blithely ignore or spectacularly fail to follow the laws of logic. Likewise, even if compositionality is better seen as a regulative methodological principle than a truistic observation about natural languages (Szabo 2012), it has still proven to be an enormously productive principle, with apparently recalcitrant constructions, such as epistemic modals, receiving compelling analysis given more sophisticated formal tools. More specifically, we have seen that both thought and language do manifest a common core that does approximate to the predicate calculus, not just in the general sense of being highly systematic but in the narrower one of employing a hierarchical recursive combinatorial principle. Moreover, this core appears to be implemented by a common neural mechanism, which plausibly plays a central role in making both distinctively human thought and talk possible. As I noted in §3.1, one crucial, much-discussed feature of distinctively human cognition is expressive generality: the ability to think about a wide range of contents of indefinite complexity. Hierarchical recursive syntactic structure plays a key role in underwriting expressive generality. But it does not suffice on its own. In addition, a representational system must also employ a semantic principle that is arbitrary enough to represent a wide range of types of values. More relevantly, its combinatorial principle must also be highly neutral, or else the system as a whole will only have the capacity to represent the sorts of values that can be meaningfully related by that principle. At least some primates appear to possess hierarchically-structured but domain-limited cognitive abilities. In particular, there is good behavioral evidence that baboons represent hierarchical, recursively structured relations of social dominance (Cheney and Seyfarth 2007); but they don't seem to think similarly complex thoughts about other domains. This suggests that they may employ something like a branching tree structure with a dedicated significance, much like a phylogenetic tree (Camp 2009a). Conversely, representational systems can also achieve a high degree of expressive generality without employing a branching tree structure, as in Venn diagrams. Given these dissociations, hierarchical recursive syntax should not be viewed as the necessary and sufficient essence of a distinctively powerful, and therefore distinctively human, capacity for thought. If we want to speculate about the origins of human thought and language, it is at least prima facie plausible that evolutionary pressures for a neutrally interpreted recursive tree structure stemmed as much from a need to communicate as from the need to represent hierarchically complex contents. Perhaps our pre-human ancestors possessed a dedicated module for social cognition plus a syntactically simple signaling system, much as baboons do. Syntactic complexity is only advantageous once the range of potential signals exceeds a certain threshold (Nowak et al. 2000); and in principle, any sort of combinatorial system could satisfy the need to generate an indefinitely large number of signals from a restricted base. But the communicative media plausibly most reliably accessible to those pre-human ancestors – sounds

and gestures – are saliently distinguished by having a uni-dimensional, specifically temporal structure. An operation that merges multiple branches into single nodes, which are themselves hierarchically ordered, permits the representation of complex contents in a linear order. Thus, the exaptation of the basic syntactic structure of the social dominance module, by means of abstracting away from the semantic significance of branching trees within that module, would permit communication of a wide range of contents in a single, readily available, and flexibly implementable medium. But even granting this highly speculative step, the bare potential to represent an indefinitely wide range of complex contents would be practically irrelevant without a robust actual ability to form, connect, and transform a wide range of representations from within any given context – that is, without a significant degree of stimulus-independence (Camp 2009b). Such active cognitive flexibility is a crucial ingredient in instrumental reasoning. But it too can be implemented in a variety of ways, including by imagistic simulation, and it is not restricted to humans (Camp and Shupe 2017). And here again, an argument can be made that the distinctively communicative use of language facilitated (and continues to facilitate) the development of imaginative flexibility, by giving thinkers a means to simulate what someone else, or they themselves, would say about a given problem or possibility (Carruthers 1998; McGeer and Pettit 2002). Again, this is highly speculative. But a synthetic view along these general lines requires far fewer strong assumptions and big leaps than either pure Chomskian structuralism or pure Devittian conceptualism. In lieu of the picture that has been implicitly embraced by many philosophers throughout the twentieth century, on which monolithic thought has sweeping priority over equally monolithic language, we should instead embrace a model on which both human cognition and natural language involve many distinct, potentially dissociable abilities functioning together in a way that is significantly but not entirely integrated. Both thought and talk do involve a systematic predicative core. But in neither case is there a clean division between this core and the rest of cognition or language. Nor is there good reason to privilege that core as what makes us distinctively human, whether by nature or by enculturation.

References

Allwein, G., and J. Barwise, eds. 1996. Logical reasoning with diagrams. Oxford: Oxford University Press. Asher, N., and A. Lascarides. 2003. Logics of conversation. Cambridge: Cambridge University Press. Bach, K. 1999. The myth of conventional implicature. Linguistics and Philosophy 22: 367–421. Bach, K., and R. Harnish. 1979. Linguistic communication and speech acts. Cambridge, MA: MIT Press. Bermudez, J.L. 2003. Thinking without words. Oxford: Oxford University Press. Brandom, R. 1983. Asserting. Noûs 17 (4): 637–650.

Brock, J. 2007. Language abilities in Williams syndrome: A critical review. Development and Psychopathology 19: 97–127. Camp, E. 2007. Thinking with maps. Philosophical Perspectives 21 (1): 145–182. ———. 2009a. A language of baboon thought? In The philosophy of animal minds, ed. R. Lurz, 108–127. Cambridge: Cambridge University Press. ———. 2009b. Putting thoughts to work: Concepts, systematicity, and stimulus-independence. Philosophy and Phenomenological Research 78 (2): 275–311. ———. 2013. Slurring perspectives. Analytic Philosophy 54 (3): 330–349. ———. 2015. Logical concepts and associative characterizations. In The conceptual mind: New directions in the study of concepts, ed. E.  Margolis and S.  Laurence, 591–621. Cambridge, MA: MIT Press. ———. 2016. Conventions’ revenge: Davidson, derangement, and dormativity. Inquiry 59 (1): 113–138. ———. 2018a. Why cartography is not propositional. In Non-propositional intentionality, ed. A. Grzankowski and M. Montague, 19–45. Oxford: Oxford University Press. ———. 2018b. Slurs as dual-act expressions. In Bad words, ed. D. Sosa, 29–59. Oxford: Oxford University Press. Camp, E., and E. Shupe. 2017. Instrumental reasoning in non-human animals. In The Routledge handbook of philosophy and animal minds, ed. J. Beck and K. Andrews, 100–108. London: Routledge. Carruthers, P. 1998. Thinking in language? Evolution and a modularist possibility. In Language and thought, ed. P. Carruthers and J. Boucher, 94–119. Cambridge: Cambridge University Press. ———. 2003. On Fodor’s problem. Mind and Language 18 (5): 502–523. Casati, R., and A.  Varzi. 1999. Parts and places: The structures of spatial representation. Cambridge, MA: MIT Press. Charlow, N. 2015. Prospects for an expressivist theory of meaning. Philosophers’ Imprint 15: 1–43. Cheney, D.L., and R.M.  Seyfarth. 2007. Baboon metaphysics: The evolution of a social mind. Chicago: University of Chicago Press. Clark, H.H., and J.E. Fox Tree. 2002. Using ‘uh’ and ‘um’ in spontaneous speaking. Cognition 84: 73–111. Davidson, D. 1986. A nice derangement of epitaphs. In Truth and interpretation: Perspectives on the philosophy of Donald Davidson, ed. E. Lepore, 433–446. New York: Blackwell. De Toffoli, S. 2017. ‘Chasing’ the diagram: The use of visualizations in algebraic reasoning. The Review of Symbolic Logic 10 (1): 158–186. Devitt, M. 2006. Ignorance of language. Oxford: Clarendon Press. Dummett, M. 1989. Language and communication. In Reflections on Chomsky, ed. A.  George, 192–212. Oxford: Oxford University Press. ———. 1994. Origins of analytical philosophy. Cambridge, MA: Harvard University Press. Elwert, F. 2013. Graphical causal models. In Handbook of causal analysis for social research, ed. S.L. Morgan, 245–273. New York: Springer. Embick, D., A. Marantz, Y. Miyashita, W. O’Neil, and K.L. Sakai. 2000. A syntactic specialization for Broca’s area. PNAS 97 (11): 6150–6154. Evans, J.St.B.T., and D.E. Over. 1996. Rationality and reasoning. Hove: Psychology Press. Fiddick, L., L. Cosmides, and J. Tooby. 2000. No interpretation without representation: The role of domain-specific representations and inferences in the Wason selection task. Cognition 77: 1–79. Fodor, J. 1987. Why there still has to be a language of thought. In J. Fodor, Psychosemantics: The problem of meaning in the philosophy of mind, 135–154. Cambridge, MA: MIT Press. ———. 2007. The revenge of the given. In Contemporary debates in philosophy of mind, ed. B.P. McLaughlin and J.D. Cohen, 105–116. Oxford: Blackwell. Fodor, J., and Z. Pylyshyn. 1988. 
Connectionism and the cognitive architecture of mind. Cognition 28: 3–71.

64

E. Camp

Franconeri, S.L., J.M.  Scimeca, J.C.  Roth, S.A.  Helseth, and L.E.  Kahn. 2012. Flexible visual processing of spatial relationships. Cognition 122: 210–227. Giardino, V., and G. Greenberg. 2014. Introduction: Varieties of iconicity. Review of Philosophical Psychology 6 (1): 1–25. Gopnik, A., C. Glymour, D.M. Sobel, L.E. Schulz, T. Kushnir, and D. Danks. 2004. A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review 111 (1): 3–32. Green, M.S. 2007. Self-expression. Oxford: Oxford University Press. Grice, H.P. 1975. Logic and conversation. In Syntax and semantics volume 3: Speech acts, ed. P. Cole and J.L. Morgan, 41–58. New York: Academic Press. Groenendijk, J., and M. Stokhof. 1991. Dynamic predicate logic. Linguistics and Philosophy 14 (1): 39–100. Hauser, M.D., N. Chomsky, and W.T. Fitch. 2002. The language faculty: What is it, who has it, and how did it evolve? Science 298: 1569–1579. Heim, I. 1983. File change semantics and the familiarity theory of definiteness. In Meaning, use and interpretation of language, ed. R. Bäuerle, C. Schwarze, and A. von Stechow, 164–189. Berlin: De Gruyter. Horn, L. 2014. Information structure and the landscape of (non-)at-issue meaning. In The Oxford handbook of information structure, ed. C.  Féry and S.  Ishihara, 108–128. Oxford: Oxford University Press. Israel, M. 2011. The Grammar of polarity: Pragmatics, sensitivity, and the logic of scales. Cambridge: Cambridge University Press. Johnson, K. 2004. On the systematicity of language and thought. The Journal of Philosophy 101 (3): 111–139. Johnson-Laird, P. 2005. Mental models and thought. In The Cambridge handbook of thinking and reasoning, ed. K.J.  Holyoak and R.G.  Morrison, 185–208. Cambridge: Cambridge University Press. Kehler, A. 2004. Discourse coherence. In Handbook of pragmatics, ed. L.R. Horn and G. Ward, 241–265. Oxford: Basil Blackwell. Koechlin, E., and T. Jubault. 2006. Broca’s area and the hierarchical organization of human behavior. Neuron 50: 963–974. Krifka, M. 1995. The semantics and pragmatics of polarity items. Linguistic Analysis 25: 209–257. Lackey, J. 2007. Norms of assertion. Noûs 41 (4): 594–626. Ladusaw, W.A. 1979. Polarity sensitivity as inherent scope relations. PhD dissertation, University of Texas, Austin. Lemon, O., and I.  Pratt. 1997. Spatial logic and the complexity of diagrammatic reasoning. Machine Graphics and Vision 6 (1): 89–108. Locke, J. 1689. An essay concerning human understanding. London: Thomas Bassett. Ludlow, P. 2009. Review of Devitt’s Ignorance of language. Philosophical Review 118 (3): 393–402. MacEachren, A. 2004. How maps work: Representation, visualization, and design. New  York: Guilford Press. MacFarlane, J. 2011. What is assertion? In Assertion, ed. J.  Brown and H.  Cappelen, 79–96. Oxford: Oxford University Press. Marchette, S., J. Ryan, and R. Epstein. 2017. Schematic representations of local environmental space guide goal-directed navigation. Cognition 158: 68–80. McCready, E. 2008. What man does. Linguistics and Philosophy 31 (6): 671–724. ———. 2010. Varieties of conventional implicature. Semantics and Pragmatics 3 (8): 1–57. McGeer, V., and P.  Pettit. 2002. The self-regulating mind. Language and Communication 22: 281–299. Morgan, L., S. MacEvoy, G. Aguirre, and R. Epstein. 2011. Distances between real-world locations are represented in the human hippocampus. The Journal of Neuroscience 31 (4): 1238–1245.

Moro, A., M. Tettamanti, D. Perani, C. Donati, S. Cappa, and F. Fazio. 2001. Syntax and the brain: Disentangling grammar by selective anomalies. NeuroImage 13: 110–118. Murray, S. 2014. Varieties of update. Semantics and Pragmatics 7 (2): 1–53. Murray, S., and W. Starr. 2020. The structure of communicative acts. Linguistics and Philosophy. https://doi.org/10.1007/s10988-019-09289-0. Musolino, J., and B. Landau. 2012. Genes, language, and the nature of scientific explanations: The case of Williams syndrome. Cognitive Neuropsychology 29 (1–2): 123–148. Musso, M., A. Moro, V. Glauche, M. Rijintjes, J. Reichenbach, C. Büchel, and C. Weiller. 2003. Broca’s area and the language instinct. Nature Neuroscience 6 (7): 774–781. Neale, S. 1999. Coloring and composition. In Philosophy and linguistics, ed. K. Murasugi and R. Stainton, 35–82. Boulder, CO: Westview Press. Nowak, M.A., J.B.  Plotkin, and V.A.  Jansen. 2000. The evolution of syntactic communication. Nature 404: 495–498. Patel, A.D. 2003. Language, music, syntax and the brain. Nature Neuroscience 6: 674–681. Pearl, J. 2000. Causality. Cambridge: Cambridge University Press. ———. 2009. Causal inference in statistics: An overview. Statistics Surveys 3: 96–146. Pietroski, P., and S.  Crain. 2012. The language faculty. In The Oxford handbook of philosophy of cognitive science, ed. E. Margolis, R. Samuels, and S.P. Stich, 361–381. Oxford: Oxford University Press. Pinker, S. 1999. Words and rules: The ingredients of language. New York: Basic Books. Potts, C. 2005. The logic of conventional implicature. Cambridge, MA: MIT Press. ———. 2007. The expressive dimension. Theoretical Linguistics 33 (2): 165–198. Pratt, I. 1993. Map semantics. In Spatial information theory: A theoretical basis for GIS lecture notes in computer science, ed. A.U. Frank and I. Campari, 77–91. Berlin: Springer. Recanati, F. 2004. Literal meaning. Cambridge: Cambridge University Press. Rescorla, M. 2009. Predication and cartographic representation. Synthese 169: 175–200. Rey, G. 1995. A not ‘merely empirical’ argument for the language of thought. Philosophical Perspectives 9: 201–222. Rice, C. 2011. Massive modularity, content integration, and language. Philosophy of Science 78 (5): 800–812. Roberts, C. 1996/2012. Information structure in discourse: Toward an integrated formal theory of pragmatics. Semantics and Pragmatics 5: 1–69. ———. 2018. Speech acts in discourse context. In New work on speech acts, ed. D.  Fogal, D. Harris, and M. Moss, 317–359. Oxford: Oxford University Press. Rossen, M.L., E.S. Klima, U. Bellugi, A. Bihrle, and W. Jones. 1996. Interaction between language and cognition: Evidence from Williams syndrome. In Language, learning, and behavior disorders: Developmental, biological, and clinical perspectives, ed. J.H. Beitchman, N. Cohen, M. Konstantareas, and R. Tannock, 367–392. New York: Cambridge University Press. Schmidhuber, J. 2015. Deep learning in neural networks: An overview. Neural Networks 61: 85–117. Searle, J., and D. Vanderveken. 1985. Foundations of illocutionary logic. Cambridge: Cambridge University Press. Shin, S. 1994. The logical status of diagrams. Cambridge: Cambridge University Press. Siegel, M. 2002. Like: The discourse particle and semantics. Journal of Semantics 19: 35–71. ———. 2006. Biscuit conditionals: Quantification over potential literal acts. Linguistics and Philosophy 29: 167–203. Simons, M., D. Beaver, J. Tonhauser, and C. Roberts. 2010. What projects and why. Proceedings of SALT 20: 309–327. Sloman, A. 1978. 
The computer revolution in philosophy: Philosophy, science and models of mind. Atlantic Highlands, NJ: Humanities Press.

Smith, N., I.-A. Tsimpli, and J. Ouhalla. 1993. Learning the impossible: The acquisition of possible and impossible languages by a polyglot savant. Lingua 91: 279–347. Sorensen, R. 2007. Bald-faced lies! Lying without the intent to deceive. Pacific Philosophical Quarterly 88 (2): 251–264. Stalnaker, R. 1978. Assertion. In Syntax and semantics volume 9: Pragmatics, ed. P. Cole, 315–332. New York: Academic Press. Szabo, Z. 2012. The case for compositionality. In The Oxford handbook of compositionality, ed. W. Hinzen, E. Machery, and M. Werning, 64–80. Oxford: Oxford University Press. Tenenbaum, J., C. Kemp, T. Griffiths, and N. Goodman. 2011. How to grow a mind: Statistics, structure, and abstraction. Science 331: 1279–1285. Tettamanti, M., I. Rotondi, D. Perani, G. Scotti, F. Fazio, S.F. Cappa, and A. Moro. 2009. Syntax without language: Neurobiological evidence for cross-domain syntactic computations. Cortex 45 (7): 825–838. Thomas, M.S.C., and A. Karmiloff-Smith. 2005. Can developmental disorders reveal the component parts of the language faculty? Language Learning and Development 1: 65–92. Tufte, E. 1983. The visual display of quantitative information. Connecticut: Graphics Press. Veltman, F. 1996. Defaults in update semantics. Journal of Philosophical Logic 25: 221–261. von Fintel, K. 1999. NPI-licensing, Strawson-entailment, and context-dependency. Journal of Semantics 16: 97–148. Wittgenstein, L. 1921/2001. Tractatus logico-philosophicus. Translated by D.  Pears and B. McGuinness. New York: Routledge.

Part II

Theory of Reference

Chapter 4

Theories of Reference: What Was the Question?

Panu Raatikainen

Abstract  The new theory of reference has won popularity. However, a number of noted philosophers have also attempted to reply to the critical arguments of Kripke and others, and aimed to vindicate the description theory of reference. Such responses are often based on ingenious novel kinds of descriptions, such as rigidified descriptions, causal descriptions, and metalinguistic descriptions. This prolonged debate raises doubt whether various parties really have any shared understanding of what the central question of the philosophical theory of reference is: what is the main question to which descriptivism and the causal-historical theory have presented competing answers. One aim of the paper is to clarify this issue. The most influential objections to the new theory of reference are critically reviewed. Special attention is also paid to certain important later advances in the new theory of reference, due to Devitt and others. Keywords  Reference · Meaning · Descriptivism · New theory of reference · Frege’s puzzles

4.1 Introduction

At the beginning of the 1970s, the philosophical community experienced a genuine revolution in the philosophy of language – one that had ramifications in many other areas of philosophy. Namely, Saul Kripke (1971, 1972) and Keith Donnellan (1970) famously attacked what was then the prevailing view on the meaning and reference of names, the description theory of reference (or, more briefly, "descriptivism").

Hilary Putnam (1970, 1973, 1975a, b) argued for related points in the case of kind terms (as did Kripke). The new theory of reference (in short: NTR), as the general view that emerged is often called, has risen to favor. However, a number of noted philosophers have also attempted to reply to the critical arguments of Kripke and others, aiming to vindicate descriptivism. Such responses are often based on ingenious, novel kinds of descriptions, such as rigidified descriptions, causal descriptions, and metalinguistic descriptions. Many seem to be confident that the critical arguments against descriptivism have been neutralized for good. This prolonged debate raises doubt as to whether various parties really have any shared understanding of what the central question of the philosophical theory of reference is: what is the main question to which descriptivism and NTR have presented competing answers? And more generally: exactly what are these theories supposed to be theories of? One aim of the present paper is to clarify this issue. The most influential objections to NTR are critically reviewed. In addition, the essential content of NTR is elucidated, and the most important developments (in the author’s opinion) of NTR are reviewed.

4.2  A Brief Look at the Development of NTR 4.2.1  Descriptivism and Its Critique Though the story should be familiar, let us review the main developments for later reference and in order to fix some terminology. Descriptivism is, according to the definition,1 the view that the meaning of a name is expressed,2 and the reference of the name determined, by the description (or cluster of descriptions) a language user analytically associates with it.3 Thus, with the name “Socrates”, for example, one might associate the description “the Greek philosopher who drank hemlock” and so 1  Before Donnellan and Kripke, there was no explicit school that would have identified itself as “descriptivists” (or advocates of “the description theory of reference”). Rather, Kripke, and to some extent also Donnellan, isolated and abstracted the idea from the rambling literature. Therefore, it was largely up to them to define the view they then critically scrutinized. Given the number of indignant reactions they received, it seems that they were not criticizing a straw man. 2  Let us sort out one misunderstanding I have sometimes met: the suggested interpretation is emphatically not that the relevant description itself – a linguistic entity – is the sense, or the meaning, of the name. That would obviously be quite an absurd view. Rather, the idea is that the meaning is some sort of abstract or mental entity – perhaps a combination of properties, attributes or concepts – that is expressed more transparently by the associated description, which the referent uniquely satisfies: the referent has exactly those properties (or most of them). 3  There is also a broader view than descriptivism that Devitt and Sterelny (1999: 62) call “the identification theory”: it allows that a speaker may not be able to describe the bearer, provided that he can recognize her; the speaker can, so to say, pick the bearer out in a lineup. This is an improvement, but does not help with names of temporally or spatially distant bearers, such as Cicero or

4  Theories of Reference: What Was the Question?

71

The idea is that such an associated description expresses more explicitly the meaning of a name for the speaker and determines which object is denoted by the name. It is very natural to interpret descriptivism also as a theory of understanding: to understand an expression is to know its meaning, and that meaning is expressed by the appropriate description or cluster of descriptions (cf. Devitt and Sterelny 1999: 46). Consequently, only by knowing the appropriate description (or cluster of descriptions) and its (their) association with the expression – that is, only by knowing the expression’s meaning – can the language user understand the expression and successfully use it in reference; otherwise not.

This view – descriptivism, that is – has its roots in certain remarks by Frege and Russell, although it is a bit unclear whether either of these historical figures actually advocated exactly the description theory of reference as later defined.5 Such exegetical questions are, however, orthogonal to the main issue, as there have subsequently been numerous philosophers who have advocated descriptivism, and in doing so have taken themselves to be followers of Frege or Russell.6

In the 1950s, the simple version of descriptivism was increasingly criticized, and – apparently inspired by certain remarks by the later Wittgenstein7 – John Searle (1958) and Peter Strawson (1959), for example, suggested that the meaning of a name is not expressed by any single definite description, but rather by a cluster of descriptions associated somehow more loosely with the name by the speaker. The idea was that a name refers to the entity that satisfies sufficiently many of these descriptions, though perhaps not all: there is room for some error. The view quickly became predominant.

5  Dummett (1973: 110), McDowell (1977), Burge (1979), Evans (1985), Noonan (2001), and Heck and May (2006), for example, have argued against interpreting Frege as a descriptivist. (Currie (1982: 170), on the other hand, defends the descriptivist interpretation.) However, the interpretation of Frege many of these critics favor is more or less the same as “the identification theory” (see note 3 above), which clearly does not save Frege’s view.
6  In Naming and Necessity, Kripke called the simple version of descriptivism “the Frege-Russell view.” Devitt and Sterelny (1999: 45) write, more cautiously, of the classical description theory as “derived from the works of Gottlob Frege and Bertrand Russell.” In his 2008 Schock prize lecture, Kripke reflects on the issue as follows: “[C]ertainly Frege, like Russell, had generally been understood in this way. This made it important for me to rebut the theory, whether historically it was Frege’s theory or not” (Kripke 2008: 208). Moreover, Kripke (1979: 271 n. 3) wrote: “In any event, the philosophical community has generally understood Fregean senses in terms of descriptions, and we deal with it under this usual understanding. For present purposes, this is more important than detailed historical issues.”
7  Although neither Searle nor Strawson explicitly mentioned Wittgenstein, it is plausible to assume that Philosophical Investigations, published in 1953, and the remarks on “Moses” in particular (§79), inspired this view: Kripke suggested the connection in (1980: 31). It is unclear, though, whether Wittgenstein really intended to present any sort of general theory of names, and what exactly his true aim was here; see Travis 1989 and Bridges 2010.

Searle was invited to write an entry on proper names for the most eminent encyclopedia of philosophy at the time (Searle 1967). This later, more sophisticated variant of the description theory is now commonly called “the cluster theory.”8 This was the received view in philosophy when Kripke and Donnellan came forward with their critiques.

To begin with, Kripke – already a leading figure in modal logic – presented several arguments against descriptivism based on modal considerations. First, he suggested that ordinary proper names are what he called “rigid designators” (i.e., they refer to the same entity in every possible world9), whereas customary descriptions are not; the referent of a typical description (such as “the Greek philosopher who drank hemlock”) varies from one possible world to another; therefore, names cannot be synonymous with such descriptions. Second, Kripke argued that descriptivism entails various “unwanted necessities”: for example, it is not analytically true or necessary that Socrates drank hemlock; it seems possible that events could have gone quite differently. Sometimes one further isolates as a distinct argument “the epistemological argument”: traditionally it has been thought that analytical truths can be known a priori; that Socrates drank hemlock is not, on the other hand, knowable a priori.

Third, in addition to the modal arguments, both Kripke and Donnellan presented, against both traditional and cluster versions of descriptivism, various “arguments from ignorance and error.” Their central idea was to underline the fact that users of language often have much less knowledge than descriptivism presupposes, or that even the few relevant beliefs they have about the bearer of a name may be false. All a person in the street might be able to say, when asked who Cicero was, is that he was some Roman, or perhaps he could associate with the name “Feynman” only a description such as “some physicist.” Such descriptions are often the most that a speaker can say about the bearer of a name. They are, however, much too loose and general to determine the referent of the name; a great number of other people also satisfy such descriptions. Thus people are often much more ignorant than descriptivism requires – and more fallible: a speaker may associate with the name “Einstein” the description “the inventor of the atomic bomb,” or with the name “Columbus” the description “the first European in America” (and no other non-trivial description). Yet the latter description actually picks out some unknown Viking who found America many centuries before Columbus; the former description may not uniquely pick out anyone (the invention of the atomic bomb was group work) or, at best, fit Robert Oppenheimer (who was leading the team), but certainly not Einstein. Nonetheless, it is plausible that even such ignorant and erring language users can successfully use such names to refer to their actual bearers: i.e., to Einstein and Columbus. (If such a sparse or mistaken description is all that a language user can provide, the cluster theory is in trouble just as much.) As Devitt has been fond of saying, descriptivism puts “far too large an epistemic burden” on language users.

8  My understanding of the situation in philosophy before the revolution has benefited from writings of and personal correspondence with John Burgess.
9  More exactly, the expression refers to the same entity in every possible world in which that entity exists.

As to kind terms, Putnam (1975a) presented his famous Twin Earth thought experiment, inviting us to imagine that there was a planet very much like Earth called “Twin Earth.” We could even assume that each one of us had a doppelgänger there and that languages similar to ours were spoken there. There was, however, one peculiar difference: the liquid called “water” on Twin Earth was not H2O, but a totally different liquid whose chemical formula was very long and complicated. We could abbreviate it as XYZ. It was assumed to be indistinguishable from water under normal circumstances: it tasted like water and quenched thirst like water; the lakes and seas of Twin Earth contained XYZ; it rained XYZ there; and so on. Putnam next asked us to roll back time to, say, 1750, when chemistry had not yet been developed on either Earth or Twin Earth. At that time, no one would have been able to differentiate between XYZ and H2O. Now Oscar, on Earth, and his doppelgänger associate, by stipulation, exactly the same qualitative description with “water.” However, Putnam posited that the extension of “water” on Earth was nonetheless H2O, and the extension of “water” on Twin Earth was nonetheless XYZ. Putnam’s argument can be viewed as a powerful argument from ignorance.

Kripke noted that description theories can, at least in principle, also be viewed as mere theories of reference and not as theories of meaning: as such, they only contend that the reference of an expression is determined by the description associated with it (Kripke 1980: 31–32). Kripke went on to argue that (because of the arguments from ignorance and error) at least the familiar forms of descriptivism (involving “famous deeds”) fail also if interpreted as mere theories of reference. However, as Devitt (1981: 13) states, “[d]escription theories are mostly offered as theories of the meaning of a name.” Kripke himself added that “some of the attractiveness of the theory [descriptivism] is lost if it isn’t supposed to give the meaning of the name”; this is because it is not clear it can still solve Frege’s puzzles (see below) (Kripke 1980: 33). All in all, it is descriptivism understood as a theory of meaning that is a well-motivated, natural and unified whole, as well as the main target of NTR.10

4.2.2  The Historical Chain Picture

Kripke also presented a brief sketch of an alternative positive account of reference, the historical chain picture of reference, or “the causal theory of reference”.11 Kripke’s picture falls into two parts: there is the initial introduction of a name, and the subsequent transmission of the name or “reference borrowing”.

10  Note that many of Kripke’s key critical arguments (the arguments from rigidity and from unwanted necessity and “the epistemological argument”) only make sense if descriptivism is understood as a theory of meaning. This also suggests that Kripke himself primarily thought of descriptivism in this way.
11  The label “causal theory” can mislead and has misled. Even competent philosophers repeatedly interpret the causal theory as claiming that the referent is whatever causes the particular utterance of the name. That is emphatically not the idea. The cause of my utterance of, say, “Aristotle” may be, for example, my friend’s question; it is typically not Aristotle himself.

Baptism  First, there is the introduction of a referring expression12 to the language, a baptism or a dubbing event in which the reference of the name is initially fixed. There, an object must obviously somehow be singled out for naming. According to Kripke, this can happen either with the help of an ostension (by pointing to it or exhibiting it) or of a description. Kripke even adds, “[t]he case of a baptism by ostension can perhaps be subsumed under the description concept also. Thus the primary applicability of the description theory is to cases of initial baptism” (Kripke 1980: 96 n. 42).

Devitt in particular wanted his theory of reference to be more thoroughly causal, also at the stage of introduction of names. He emphasized that, in a typical name introduction, those who are present perceive the naming ceremony and the object to be named; by virtue of being in an appropriate causal interaction with each other and the object in the event, they gain the ability to later refer to the object (Devitt 1981: 27). No definite descriptions are needed, and, even if one is involved in the baptism, the named entity may still fail to satisfy it (referential and not attributive use of a description, in Donnellan’s sense). However, even Devitt never denies that names can also be introduced with the help of descriptions, without any perceptual contact or direct causal connection (he simply calls such naming ceremonies “abnormal”). The baptism need not be purely causal or purely descriptive: it typically involves a categorial concept such as human, place, or animal, but the causal-perceptual part does most of the work. In any case, it was never part of NTR that baptism should always be purely causal. Thus Devitt and Sterelny concede that “the introducer of a name must use some general categorial term such as ‘animal’ or ‘material object’” (Devitt and Sterelny 1987: 65) and “[i]t seems then that our causal theory of names cannot be a ‘pure-causal theory’. It must be a ‘descriptive-causal’ theory” (ibid.). Allowing such descriptive elements does not compromise the essence of NTR, as the unspecific descriptive content in question alone is clearly insufficient to determine reference.

Even if a description is essentially used in a baptism, it is important to note that the description then normally is not and cannot be (i) a description in terms of “famous deeds” (e.g., “taught Plato,” “drank hemlock,” etc.) characteristic of pre-Kripkean descriptivism13 (for the baby has not yet done any of these things); or (ii) a metalinguistic or causal description (see below), such as “the thing to which ‘Titanic’ refers”, popular among some recent descriptivists (for the name does not yet refer to anything). Consequently, it does not provide the sort of description that well-developed forms of descriptivism typically utilize (cf. Burgess 2013: 28–29).

12  That is, introduction as the expression with this specific reference. It is obviously possible and even common that the (syntactically) same name has already been used in the community with a different reference.
13  In Searle’s words, what speakers “regard as essential and established facts” about the bearer.

Or, as Kripke himself put it:

Two things should be emphasized concerning the case of introducing a name via a description in an initial baptism. First, the description used is not synonymous with the name it introduces but rather fixes its reference. Here we differ from the usual description theorists. Second, most cases of initial baptism are far from those which originally inspired the description theory. Usually a baptizer is acquainted in some sense with the object he names and is able to name it ostensively. (Kripke 1980: 96 n. 42, my emphasis)

Reference Borrowing  The second, very important part of Kripke’s picture is the idea of “reference borrowing”. Other language users not present at the name-giving occasion acquire the name and the ability to refer with it from those in attendance at the baptism, still others from the former users, and so on. Later users of the expression need not know or be able to identify the referent. It is sufficient for successfully referring that they are part of an adequate “historical” or “causal” chain of language users which goes back to the first users. Speakers may also be largely ignorant of this chain or even from whom they got the name. Even if the expression was originally introduced in the short term by means of a description, that particular description is not usually transmitted with the expression. Nor is any other uniquely identifying description. Nevertheless, it appears that these later users can use the expression to refer successfully. Devitt (2006, 2008) attempted to develop Kripke’s sparse and sketchy remarks about reference borrowing into a somewhat more systematic theory: reference borrowing takes place when a name is used in a communication situation. Devitt grants, however, that a mere causal connection is not sufficient for the hearer to borrow the reference from the speaker. The borrowing has to be an intentional act. First, the borrower must have a sufficient level of linguistic sophistication (a rock or a worm cannot borrow a reference). Second, she must understand what is going on, e.g., that the string of sounds (or symbols) is being used as a proper name. Devitt emphasizes a distinction that is insufficiently clear in the literature on reference, a distinction between what is required at the initial time of borrowing the reference of a name and what is required at the later time of using the borrowed name. In NTR, the hearer borrowing the reference of a name from a speaker must, at the time of borrowing, intend to use it with the same reference as the speaker14: When the name is ‘passed from link to link’, the receiver of the name must, I think, intend when he learns it to use it with the same reference as the man from whom he heard it. If I hear the name ‘Napoleon’ and decide it would be a nice name for my pet aardvark, I do not satisfy this condition. (Kripke 1980: 96)

14  I am myself inclined, at least tentatively, to go even further: I do not think there has to be any specific intention to use that particular word with the same reference, even at the time of borrowing: perhaps all that is needed is the absence of an actual decision to begin using that name in a new way (like “Napoleon” for a pet), and a rough understanding of how proper names generally function. However, I should emphasize that this is my own personal view and not something that Kripke or Devitt, for example, would clearly state.

The idea is definitely not, as some have understood it, that the person who has borrowed the reference of a name must, at the time of later using it, intend to refer to the same object as the person from whom he borrowed the name. Both the initial borrowing and the later use are intentional actions, but, according to NTR, subsequent use need not involve any intention to defer to the earlier borrowing; it need not involve any “backward-looking” intention (Devitt 2006: 101–102). It may be that some philosophers’ use of the word “deferring” instead of “borrowing” has contributed to this confusion. (Searle (1983: 244), for example, seems to contribute to this confusion.) I believe that this is exactly where one crucial difference, often not sufficiently recognized, between descriptivism and NTR lies: the spirit of descriptivism seems to require that there must be some sort of description (perhaps one about borrowing) present every time the expression is used. NTR denies this. Although a typical name introduction event does involve a causal interaction with the bearer of the name, the causal chain is, in NTR, primarily a chain of communication between the earlier and later uses and users of the name, and mostly concerns borrowing. It does not require the bearer of the name to be a causal relatum. This is clear once we note that NTR has always permitted the introduction of a name through a description, even in absence of the (postulated) bearer. Even Devitt states that “[t]he central idea of a causal theory was that present uses of a name are causally linked to first uses” (Devitt 1981: 28, my emphasis). It is emphatically not part of NTR to require, as a common misinterpretation suggests, that there must necessarily and always be a (direct) causal relationship between the bearer and the introducers of the name. Or, if the label “causal theory of reference” is reserved for the thoroughly causal picture in which also baptism is causal (à la Devitt), then it was never a claim in NTR (or by Devitt) that the causal theory truthfully applies to all names (and other referring expressions).

4.2.3  The Varieties of Reference

Kripke did not even pretend to have presented a well-developed theory: he only briefly sketched what he called an alternative “picture.” “I want to present just a better picture than the picture presented by the received views” (Kripke 1980: 93). Others have attempted to develop a more systematic theory on this basis. Devitt’s book Designation (1981) in particular has been an important contribution to this effort.

Some critics, e.g., Unger (1983) and Searle (1983: 239), seem to suggest that there are refuting counterexamples to NTR. Such a criticism, though, assumes that the causal theory of reference is a general theory of how expressions refer. However, it was never claimed – by Kripke, Putnam, or Devitt, for example – that all expressions, or even all names, refer along the lines of the causal theory of reference. That some expressions really are, in a sense, descriptive was admitted from the

beginning: e.g., Kripke gave as an example “Jack the Ripper”,15 Putnam “vixen”, and Devitt and Sterelny “pediatrician”. Devitt (1981) even systematized this and generalized Donnellan’s distinction between referential and attributive uses of descriptions to apply to all sorts of expressions, including names. In this terminology, typical names are referential, but “Jack the Ripper,” for example, is attributive. More exactly, as we have seen, descriptions sometimes play an essential role in the introduction of an expression. Even in that case, reference borrowing does not require the description to be transmitted with the expression. Consequently, it is not necessary for a subsequent language user to know what the description is in order to successfully make a reference using the expression. In some cases, only the experts know the relevant description, and in other cases, the description is lost in history, and nobody presently knows it. In a few cases, such as “bachelor”, nearly all users are aware of the relevant description. But this seems to be more a sociological fact than a necessary requirement for successfully referring using the expression. My own view is that the reference of even this kind of expression can be borrowed without knowing the relevant description. In his introduction to his influential 1977 collection Meaning, Necessity, and Natural Kinds,16 Schwartz emphasized that one need not assume that all of language operates in just one way. “[T]he [causal] theory is correct about natural kind terms and the traditional theory is correct about nominal kind terms. It is only the belief in the universal application of one view that excludes the other” (Schwartz 1977: 41). I think we can easily distinguish more than two different categories of (referring) expressions: Before the heyday of NTR, early Putnam (1962) suggested, based on certain ideas of Quine, that some terms in science are what he called “law-cluster terms.” Namely, Quine (1960, 1963) argued that, even if a word is originally introduced into science by means of an explicit definition, definitions in science are “episodic”: that is, the status of the resulting equivalence of the new word and its “definiens” need not be an eternally privileged status, a necessary truth, or true by convention. Putnam went further by introducing his notion of “law-cluster concepts,” concepts that are implicated in a number of scientific laws. If any of these laws is treated as a necessary condition for the meaning, one is, Putnam submits, in trouble. Putnam’s proposal was not very different from the cluster version of descriptivism. Early and late in his career, Putnam did not claim that there were no analytic truths or that “vixen”, for example, could not be correctly analyzed as “female fox”. The point is, rather, that words such as “vixen” or “bachelor” are quite rare and special (in Putnam terminology, “one-criterion concepts”) and not representative, nor do they offer a good model for a general theory of meaning. Many other words have no such standing definitions, if the argument is sound.

15  Kripke adds, “But in many or most cases, I think the theses [descriptivism] are false” (Kripke 1980: 80). 16  In which, by the way, Schwartz also introduced the now-common label “new theory of reference.”

Now Putnam’s more mature NTR view implies that his earlier law-cluster account must be wrong with respect to natural kind terms (see especially Putnam 1975b: 281). This has apparently led many to think that Putnam abandoned the whole idea. But this is not the case: if we read Putnam’s later work (see, e.g., Putnam 1986; Putnam 1988: 8–11; cf. Putnam 1973: 206), it is clear that he still thought that the general idea of law-cluster concepts was valid; it holds true for some concepts in science (“momentum” may be an example), even if it does not for common natural kind concepts. And I, for one, think that these observations of Quine and Putnam are still worthwhile.

Though Putnam never formulated the issue this explicitly, I think we can look at his works and systematize the following picture; it is plausible that referring expressions (and terms in science in particular) fall into (at least) four different types17:

1. One-criterion words, e.g., “vixen,” “bachelor,” and “Jack the Ripper.” (I am inclined to think that “Vulcan” and perhaps also “phlogiston” belong to this category.)
2. Law-cluster terms: “momentum” may be an example.
3. Observable or manifest natural kind terms – e.g., “gold,” “water,” and “tiger”18 – and common proper names, e.g., “Aristotle,” “Mont Blanc,” and “Cologne”.
4. Observational terms,19 e.g., “yellow,” “liquid,” and “sour”.

Much confusion has resulted from the assumption that all referring expressions should refer in one and the same way, and that NTR in particular assumes that. But it is plausible that the causal theory of reference applies most smoothly to the third category (although I believe that the reference of any sort of term can be borrowed).

4.2.4  The Qua Problem

The causal theory of reference as applied to general terms has often been criticized as follows: Papineau (1979), Dupre (1981), Crane (1991), Segal (2000), and many others have complained that a sample will usually be a member of many kinds.20 For example, a particular tiger is simultaneously, say, an Indochinese tiger, a tiger, a feline, a mammal, and an animal, as well as a predator and a striped animal.

17  I once put this sort of division forward in a discussion with Devitt, and he more or less agreed.
18  These terms are first tentatively identified with the help of their observable properties; but it is part of the idea that their extension is determined by their “inner structure,” lineage, or something else, i.e., more theoretical traits that go beyond direct observation (see Sect. 4.2.4). More theoretical natural kind terms may, rather, belong to the first two categories. Devitt (1981), for example, explicitly mentions “observational natural kinds” in this connection. More recently, some philosophers of language have begun using the term “manifest natural kinds” (e.g. Soames).
19  I am well aware of the problems with the notion of “observational terms”, but I think we can use a rough and relative notion here in contrast to the other categories (1–3).
20  Wittgenstein’s critique of ostensive definitions in Philosophical Investigations can perhaps be viewed as a predecessor of this critical argument.

So how can a general term such as “tiger” be introduced? If it happens through an initial baptism in contact with a sample, as NTR seems to suggest, how can one rule out incorrect kinds of generalizations? This is the so-called qua problem (see Devitt and Sterelny 1999: 72–75).

In fact, however, it has long been recognized among advocates of NTR that, especially in the case of general terms, the introduction of a word must involve some descriptive content (see, e.g., Sterelny 1983; Devitt and Sterelny 1987, 1999; Stanford and Kitcher 2000). Recall that Devitt and Sterelny granted that some categorial description may be used even in the case of proper names, which may in part rule out the wrong sort of generalizations. Stanford and Kitcher (2000) in particular have substantially improved Putnam’s original account of the reference of natural kind terms. Roughly, in their approach, there is a whole range of samples (not only a single sample), a range of foils, and some associated properties involved in the introduction of a natural kind term. This shows how one can rule out the wrong kinds of generalization (at least many of them), and it also shows how an apparent natural kind term can fail to refer to anything. According to the approach of Stanford and Kitcher, term introducers make stabs in the dark: they see some observable properties that are regularly associated, and conjecture that some underlying property (or “inner structure”) figures as a common constituent of the total causes of each of the properties. This conjecture may be incorrect, in which case the term may fail to refer. But if it is correct, one can exclude incorrect generalizations and fix the reference in the intended way to the set of things that share that underlying property, belong to the same species, etc. In such a situation, it may remain indeterminate whether or not some borderline cases belong to the extension of a term (e.g., heavy water, often mentioned by the critics of NTR), and there may be some room for conventional choice, but this is not relevant to the fundamental issue. Superficially similar but internally radically different objects or substances (as XYZ in Putnam’s argument, for example, is stipulated to be) simply do not belong to the extension, and this is sufficient for the argument in favor of NTR.

4.2.5  Can Reference Never Change?

Soon after Kripke’s groundbreaking contribution, Evans (1973) raised a concern: the picture that Kripke had sketched, the initial simple version of the causal theory of reference, apparently entails that the reference of a name can never change: the initial dubbing or baptism fixes it for good. However, it appears that in reality the reference does sometimes change. For example (this is Evans’ example), apparently “Madagascar” was originally used as a name of part of the African mainland. Due to some confusion, it is now used to refer to a large island.

Perhaps the most important further refinement of the causal-historical theory of reference is Devitt’s idea of “multiple grounding.” Devitt has suggested that it is not only the initial dubbing or baptism that determines the reference: a name typically becomes multiply grounded in its bearer in other uses of the word relevantly similar to a dubbing. In other words, other uses involve the application of the word to the object in direct perceptual confrontation with it (see Devitt 1981: 57–58; Devitt and Sterelny 1999: 75–76).21 This more sophisticated framework allows reference change and makes it possible to explain it. Unger (1983) has devised some ingenious variations on Putnam’s Twin Earth thought experiment that seem to support contrary intuition.22 Many of them, and arguably the most puzzling ones, are based on some radical but unnoticed change in the environment. For example, imagine that all the water (i.e., H2O) on Earth was replaced overnight (say, secretly by aliens) with XYZ; what would the extension of “water” here on Earth be after, say, 100  years?23 The positive picture of Kripke, Putnam and others (that is, the causal theory of reference in its original form) seems to require that the extension would be only H2O. But intuitively, this does not seem at all clear: it seems that “water” would sooner or later switch its reference to XYZ (such scenarios are sometimes called “slow switching” cases in the literature). I contend, however, that Devitt’s improvement  – the idea of multiple grounding  – enables one to reply not only to Evans’ initial concern over reference change, but also to most of Unger’s much-cited alleged “counterexamples” to NTR. Critics of NTR, however, tend to ignore this important development. I think that Devitt’s idea has a further application that is not often recognized. Consider a name that is initially introduced via description, without any perceptual contact with the bearer. “Neptune” is a plausible candidate (or “the Boston Strangler”). The idea of multiple grounding makes it possible that, if we later come into perceptual contact with the bearer, the name becomes also non-descriptively grounded in it. In Devitt’s terminology, the status of the name can then change from attributive to referential (cf. Devitt 1981: 57). I think something like this may frequently happen with many terms in science.

21  In fact, Devitt first proposed this modification in 1974. Putnam (2001) comments on it approvingly. Also, Kripke (1980: 163) acknowledges the need for some such refinement, but does not explicitly show any awareness of Devitt’s specific suggestion.
22  Also Bach, for example, refers to them; see Bach 1987: 276–277; cf. Bach 1998.
23  This is my own example, not Unger’s, but I think that it captures fairly the basic idea of many of his cases.

4.3  What Was the Question?

4.3.1  The “Main Problem” of the Theory of Reference

But what is the essential problem at issue in the theory of reference? Descriptivism and the historical chain picture have been competing answers, but what exactly was the question? What should an adequate theory of reference be able to do? It is illuminating to take a look at what Searle, a leading figure (perhaps the leading figure) of the descriptivist camp before the critical attack of Kripke and others, has had to say in retrospect24:

You will not understand the descriptivist theories unless you understand the view they were originally opposed to. At the time I wrote ‘Proper names’ in 1955 there were three standard views of names in the philosophical literature: Mill’s view that names have no connotation at all but simply a denotation, Frege’s view that the meaning of a name is given by a single associated definite description, and what might be called the standard logic textbook view that the meaning of a name “N” is simply “called N”. (Searle 1983: 242, my emphasis)

Searle continues:

Now the first and third of these views seem to be obviously inadequate. If the problem of a theory of proper names is to answer the question, “In virtue of what does the speaker in the utterance of a name succeed in referring to a particular object?”, then Mill’s account is simply a refusal to answer the question … But the third answer is also defective. (Ibid.)

In his 1967 encyclopedia entry, Searle had expressed the issue with respect to a particular name “Aristotle”: “The original set of statements about Aristotle [what speakers regard as essential and established facts about him] constitute the descriptive backing of the name in virtue of which and only in virtue of which we can teach and use the name” (Searle 1967). And in his 1971 “Introduction,” Searle discusses what he calls Frege’s “most important single discovery”: “in addition to the name and the object it refers to, viz. its reference, there is a third element, its sense (or as we might prefer to say in English: the meaning or descriptive content) of the name in virtue of which and only in virtue of which it refers to its reference” (Searle 1971: 2, my emphasis). Searle thus clearly thinks that a central task of a theory of reference is to explain in virtue of what an expression refers to the entity it actually refers to. These passages also evidently show that, at least to Searle, the question in descriptivism was essentially about the meaning of a name, and not just about the fixation of reference – and that he equated Fregean sense, descriptive content, and linguistic meaning.25 Finally, “the descriptive backing” for Searle is clearly supposed to be non-trivial, and a language user may well fail to have one and consequently fail to refer with the name.

24  Note that this was written some time after the critique by Kripke, Donnellan and others was published.
25  This is clear, if not in “Proper names” (Searle 1958), at least in Searle’s 1967 encyclopedia entry.

In the opposite NTR camp, Devitt has expressed the same general idea: “The main problem in giving the semantics of proper names is that of explaining the nature of the link between name and object in virtue of which the former designates the latter” (Devitt 1974).26 And similar formulations have been common in the literature. Thus Marga Reimer, in her entry “Reference” in The Stanford Encyclopedia of Philosophy, states that of the three central questions concerning reference, the first is: “What is the mechanism of reference? In other words, in virtue of what does a word (of the referring sort) attach to a particular object/individual?”27 Accordingly, William Lycan, in his recent survey of the theories of reference (Lycan 2006), proposes that the two central questions of the theory of names are:

1. The Referring Question: In virtue of what does a proper name designate or refer to its bearer?
2. The Meaning Question: What and how does a name mean or signify? What does it contribute to the meaning of a sentence in which it occurs?

I contend that we are justified in concluding that the fundamental question of the theory of reference is the following.

Main Question  In virtue of what does a referring expression refer to whatever it in fact refers to?

Consequently, it is reasonable to require that any satisfactory theory of reference should at least answer this question. Indeed, the pre-Kripkean versions of the description theory of reference (both in its simple and its more sophisticated cluster forms) do exactly that: according to these versions, a name refers to the entity it refers to because that entity satisfies the description (or the majority of the descriptions in the cluster) that the language user associates with the name. The historical chain picture, or the causal theory, in turn answers the question with the suggestion that a name refers to its bearer because, roughly, the user of the name stands in an appropriate causal-historical relation to the first uses of the name.

26  Devitt expresses the idea in virtually the same words in Designation (Devitt 1981: 6). In his much later encyclopedia entry, he writes: “The central question about reference is: In virtue of what does a term have its reference? Answering this requires a theory that explains the term’s relation to its referent” (Devitt 1998, my emphasis).
27  The second is, “What is the relation between reference and meaning?”, and the third, “What is the relation between reference and truth?”.

4.3.2  The Millian View and Frege’s Puzzles

It has been common in both camps, the descriptivists and the advocates of NTR,28 to take as their point of departure the so-called Millian view of meaning, or “the direct reference theory” (DRT), and its alleged problems (this view is commonly ascribed to John Stuart Mill – hence the name; but the exegetical issue is again less important).29 By this, one means the simple view according to which (at least in the case of proper names) the meaning of a name is simply its referent – the entity it denotes. In the case of general terms, the analogous view – that the meaning of a general term is just the set of entities that it applies to – has sometimes been called “extensionalism” (cf. Braun 2006). Today, it is widely thought that such views encounter enormous difficulties in so-called “Frege’s puzzles”.30

One of them concerns identity statements. Let us consider an example that derives from Frege, the following pair of sentences:

Hesperus is Hesperus.
Hesperus is Phosphorus.

It is now well known that the two names “Hesperus” and “Phosphorus” denote in fact the same heavenly body, namely the planet Venus. However, this was not known in ancient times. Rather, the names were used as if they referred to different heavenly bodies: “Phosphorus” to a bright star visible in the morning, and “Hesperus” to a bright star visible in the evening. Now if the Millian view were right, it should follow31 that the above two sentences would also have the same meaning. Nevertheless, the first of them is trivial and knowable a priori, whereas knowing the latter requires substantial empirical knowledge. It does not appear to be analytically true. Consequently, it is not plausible that the sentences have the same meaning. An analogous problem can be presented, in the case of general terms, for extensionalism, e.g., with the predicates “renate” (creature with a kidney) and “cordate” (creature with a heart) – presumably (at least, so the argument assumes) these have the same extension, but it is difficult to maintain that they have the same meaning.

28  Accordingly, Searle begins his encyclopedia entry (Searle 1967) as well as his 1971 “Introduction” in this way. Kripke also discusses it at the beginning of his first lecture in Naming and Necessity (1980: 26–27); also Devitt begins Designation (1981: 3–6) by reflecting on the Millian view and its apparent problems. Braun (2006) begins his handbook chapter similarly, and both Reimer (2009) and Lycan (2008: ch. 3) motivate descriptivism in this same way.
29  It is now quite popular to assume that the second camp, those who favor NTR, advocate the Millian view. Although some do, this assumption is generally the result of confusion, as we shall see.
30  Whether or not the actual, historical Frege intended his “senses” to be linguistic meanings (or a central aspect of meaning, or something close), his arguments work beautifully with respect to linguistic meaning and have consequently become – thus interpreted – standard in the philosophical theory of meaning (see, e.g., Searle 1967, 1971; Devitt and Sterelny 1987, 1999; Braun 2006; Lycan 2008). However, the problem of empty names was in reality more central to Russell than to Frege. Indeed, four different puzzles are often mentioned, but I must be brief and condense here.
31  That is, under some plausible assumption of compositionality or the principle of substitutivity of synonyms.

The Millian view also runs into trouble with names without a referent such as “Father Christmas” or “Vulcan”.32 It seems to entail that such names have no meaning at all. Apparently, however, sentences containing such names – and accordingly the names themselves – are perfectly meaningful. Consider:

Vulcan is a planet.
Vulcan does not exist.

It seems therefore plausible – pace Millianism – that a name can have meaning even if it does not refer to anything real. The latter sentence, at least, even seems true.

Consequently, descriptivism, put forward as a more plausible alternative to the Millian view of meaning, proposes that there must be more to the meaning of a name than the referent, the entity named – namely the descriptive content of the name (what the associated description, or the cluster of descriptions, expresses). Indeed, descriptivism has been standardly motivated by referring to Frege’s puzzles. It has been frequently considered to be a major virtue of descriptivism, as opposed to the Millian view, that it can so neatly solve the puzzles. This is clearly the case, for example, in Searle’s above-mentioned encyclopedia entry (Searle 1967). Also, another leading contemporary descriptivist, Bach, writes, “Avoiding these puzzles is the theoretical motivation underlying any description theory of names” (Bach 1987: 134). Further, Katz (1990), still another prominent descriptivist, also motivates descriptivism with Frege’s puzzles. Chalmers (2002), too, presents his “broadly Fregean account of meaning” by starting with Frege’s puzzles.33

Thus it also appears reasonable to require that any well-motivated form of descriptivism should at least be able to deal with Frege’s puzzles.34 And if so, descriptivism must also be understood as a theory of meaning (and not merely as a theory of reference-fixing) in the first place.

32  Vulcan was the alleged directly unobservable planet scientists once postulated, orbiting between Mercury and the Sun, and causing the deviations in Mercury’s orbit. However, it turned out that there exists no such thing.
33  Chalmers is not a flag-carrying descriptivist, but, rather, distances himself from descriptivism. Nevertheless, he suggests that a meaning of a description can “approximate” the meaning of the original expression (see, e.g., Chalmers 2002: 149, 160). As will become evident, I do not believe that is true.
34  Everett (2005) makes essentially the same point.

4.3.3  Shared Meanings

The view that (conventional linguistic) meanings are and must be intersubjectively shareable and public has been widely advocated in the analytic tradition of philosophy.35 Frege already held that meaning (or “sense”) is, in general, shareable36: mankind has, according to him, “a common store” of meanings (“thoughts”, as he called the senses of sentences). Such a meaning can be expressed in different languages, and is objective. The meaning (sense) of an expression or a sentence is what one grasps in understanding it: it is grasped by everyone sufficiently familiar with the language to which it belongs. (Frege 1892a, b) The latter idea is reflected, e.g., in Carnap: “If we understand the language, then we can grasp the sense of the expressions.” (Carnap 1947: 119). This sort of view of understanding, which quite literally identifies understanding with knowledge of meaning, has been labelled by Miller “the epistemic conception of understanding”.37 This picture and the shareability of meaning play an essential role in the philosophy of Dummett and his disciples, for example. Dummett, too, has been another important recent advocate of descriptivism.38 Indeed, it has been quite common, from Frege onwards, to think that successful communication requires shared meanings. Furthermore, the idea that whatever meaning is, it is what an expression and its translation share (i.e. they have “the same meaning”) has been highly influential in the philosophy of language, especially after Quine. All such considerations point towards a largely public, community-wide meaning.

In contrast, the idea of radical meaning variance was famously suggested, in the context of the philosophy of science, by Kuhn and Feyerabend in the 1960s: their proposal was that the meaning of an expression occurring in a scientific theory (or in a system of beliefs) changes when the theory is modified or replaced by another theory in which that expression also occurs.39 This led to their notorious thesis of incommensurability. It was soon recognized that this idea entails conclusions that are highly implausible and even inconsistent with its own starting points (see, e.g., Shapere 1964, 1966; Achinstein 1968).

35  Russell, at least in his “The philosophy of logical atomism” (1918), has been a notable exception: to him, language is essentially private. This view became quite unpopular due to the criticism of the later Wittgenstein.
36  There may be, for Frege, some exceptions in the case of indexicals. Moreover, Frege grants that different speakers may attach different senses to a name. But for Frege, this was more an unhappy shortcoming of natural language, something that would be eliminated in the ideal logical language. Furthermore, such a difference of senses amounted to, for Frege, speaking really different languages. Be that as it may, one should not one-sidedly exaggerate this aspect of Frege’s views on sense at the expense of just how central the objectivity and the shareability of meaning were for him (see, e.g., May 2006; Kremer 2010).
37  “Call the psychological thesis that a speaker’s understanding of a sentence consists in knowledge of its meaning the epistemic conception of understanding” (Miller 2006: 994).
38  Or, strictly speaking, of the more general “identification theory”; see notes 3 and 5.
39  Their “contextual theory of meaning” can be viewed as a version of descriptivism.

No doubt the meanings of expressions sometimes change,40 but it is not reasonable to postulate this massive and extreme meaning variance. It is therefore plausible to assume that meanings (in the relevant sense) are shared and, by and large, stable, so that two persons can believe and state quite different things about the same subject matter and contradict each other but still mean the same by their relevant words. To put the point more logically, it should be possible for two sets of statements, beliefs or theories, even if quite different, to stand in logical relations to each other: in particular, to be inconsistent with each other. However, this requires that the meanings of the relevant expressions be the same. In other words, we often want to have a genuine disagreement and not merely an equivocation or a case of talking past each other. In fact, Frege realized this. According to him, if meanings were subjective, a common science would not be possible: “It would be impossible for something one man said to contradict what another man said, because the two would not express the same thought at all, but each his own” (Frege 1914).

Meanings do occasionally change. And some degree of variation of meaning in a larger linguistic community is a fact of life. However, one should not exaggerate the frequency with which this occurs. I contend that it is rational to follow here a maxim that Putnam (1965: 130) has playfully called “Occam’s eraser” (the idea goes back to Ziff):

(OE)  Differences of meaning are not to be postulated without necessity.

Descriptivism in particular, when interpreted as a theory of understanding in accordance with the epistemic conception of understanding, presupposes that meanings are public and shared: the idea is that learning and understanding an expression requires the correct descriptive content to be associated with the expression (the one that other competent speakers already associate with it); not just any arbitrary subjective association suffices. In his 1967 encyclopedia entry, Searle described the situation as follows. What speakers (already competent with the expression) consider as essential and established facts about the bearer constitutes the descriptive backing of the name, and some indefinite subset or the disjunction of these descriptions is analytically tied to or logically connected with the name. We teach and learn to use the word with that descriptive backing. It is natural to understand all this as meaning that not just any subjectively associated description would do. It is also worth noting that Searle, in “Proper names”, the classic paper of the modern description theory, criticized the more traditional simple versions of descriptivism, among other things, on the ground that they would entail that a name “would have different meanings for different people” (Searle 1958: 169). He clearly assumes that this is a defect and that a satisfactory theory should provide a stable meaning shared by different speakers of the community.

40  Indeed, Devitt’s idea of multiple grounding (see above) provides one account of how this could happen.

For his part, Strawson (1969) reflects, for example, on how children learn to master the meaning rules of the language through conditioning and training by adult members of the community. The adults teach “the same, the common language” and, as a consequence of this, it is a natural fact that language and linguistic meaning are public.

Emphasizing the public and shared nature of meanings does not, obviously, mean that it would not be possible to identify and scrutinize different, more subjective notions of “meaning”, such as “coloring” and “shading” (Frege), “expressive meaning” (Carnap), “speaker’s meaning” (Grice), “conception” (Burge) or whatever else. However, the fact remains that the sort of meaning that has been the main focus in the honorable analytic tradition is the public and stable one. And that is what we can plausibly assume is at stake in the debate surrounding descriptivism as well. Consequently, it is also natural to demand that a satisfactory theory of meaning and reference should provide socially shared public meanings – that it would not entail that the meaning of a referring expression varies wildly from one person to another, even inside a particular linguistic community. Otherwise, it is unclear whether the theory is even a contribution to the same debate.41

Interim Conclusion  We can summarize the reflections of the above few subsections as follows. It is plausible and fair to require that any satisfactory theory of reference and meaning – and any well-motivated version of descriptivism in particular – at least (i) is able to answer the main question (“In virtue of what does a referring expression refer to whatever it in fact refers to?”); (ii) is able to accommodate Frege’s puzzles; and (iii) provides relatively stable and public meanings.

4.3.4  Meaning, Understanding, and Manifestability

Let us briefly digress to a different topic: quite independently of descriptivism, the view that meanings must be not only shared but also exhaustively manifested in observable (non-linguistic and linguistic) behavior has been quite influential in modern philosophy of language. This has been a common theme in the otherwise different views of such towering figures as Dummett, Quine and Davidson – and their many followers.

It is easiest to begin with Dummett, because there is apparently less controversy over what his view is, and he is most closely connected to the debates we have already discussed: Dummett was a devoted Fregean and advocated a broadly descriptivist interpretation of “sense”.42

41  Chalmers, for example, recently granted that in his two-dimensional “broadly Fregean” approach, the senses, or intensions, “do not play the ‘public meaning’ role” (Chalmers 2012: 249). Consequently, I think it is somewhat misleading to even put that theory forward as a statement in the mainstream debate on meaning and reference (as Chalmers has repeatedly done). Chalmers, though, justifies all this with vague gestures towards Frege: that he allowed his senses to vary between different speakers as well (251). See, however, note 36 and the discussion of Frege above.

However, he gives his view an important twist not found in Frege: he adds that learning language and communication require meanings to be fully manifested in observable behavior. This led Dummett to his semantic antirealism: the view that some sentences, although they have determinate meaning, are neither true nor false (Dummett 1978, 1993).

Quine is famous for his thesis that translation, and accordingly meanings, are indeterminate. There has been less agreement, however, on what exactly his key reasons or premises are for this view. My own interpretation (defended in some detail in Raatikainen 2005) is that the thesis is essentially grounded in his “linguistic behaviorism”,43 a view not that different from Dummett’s manifestability requirement. Language learning, according to Quine, turns on intersubjectively observable features of human behavior and its environing circumstances, “there being no innate language and no telepathy.” Finally, even if the admirers of Davidson tend to emphasize the differences between Davidson and Quine, it is hardly controversial that Davidson took for granted the linguistic behaviorism he learned from Quine: “Meaning is entirely determined by an observable behavior, even readily observable behavior” (Davidson 1990: 314). To be sure, Davidson then went on to develop his own original program, but that general view formed the basis for it.

The founding fathers of NTR did not much comment on the issue of manifestability,44 and even in later literature, the relationship between manifestability and NTR is rarely touched upon. Although the tension between NTR and Dummett’s views had not gone unnoticed – it had been observed by several philosophers, including Dummett himself45 – I believe it was Devitt (1983) above all who made this clash current. The connection with Quine had been suggested in the literature in passing a few times, but was properly treated only in my own article (Raatikainen 2005). (I am not aware of any discussion of NTR and Davidson in this respect; but I believe that what I said about Quine largely applies.)

Briefly: Consider, for example, once again Putnam’s Twin Earth story and the year 1750. One simply could not determine, on the basis of observable linguistic behavior of language users in 1750, whether our “water” and “water” on Twin Earth had the same meaning or not. The manifestable uses of the two linguistic communities would be indistinguishable – as would be any explicit verbalizable knowledge of meaning.

42  More precisely, Dummett advocated for “the identification theory” (see note 3), but, as was noted above, in a great many cases this makes no difference.
43  This view is somewhat different from and more specific than the traditional all-encompassing behaviorism and is not obviously refuted by “the standard objections to behaviorism” (as I argue in Raatikainen 2005; see especially note 47).
44  Putnam (1975c) did comment on Quine’s indeterminacy thesis, but apparently failed to see the possibility that NTR would undermine it. Furthermore, he presented more or less the relevant argument as a critique of Sellars in Putnam 1974, but seemingly did not see its significance for Quine’s thesis.
45  See Millar (1977), McGinn (1982), Currie and Eggenberger (1983), and Dummett (1974); cf. Raatikainen (2010).

Nevertheless, under the standard assumption that meaning determines extension – that is, if the extensions of two expressions differ, their meanings cannot be the same – it is the case that our “water” and “water” on Twin Earth do differ in meaning. This thought experiment and its kin undermine the assumption of the manifestability of meaning and the whole equation of competence in a language with knowledge of its meanings, i.e. the epistemic view of understanding. All this is perfectly natural from the NTR point of view, according to which reference is determined by a historical chain largely opaque to a language user (often even to the entire community within a fixed time period). Inasmuch as difference in reference is sufficient for difference in meaning (a widely shared assumption), there can be differences in meaning that are not detectable on the basis of observable behavior. Hence linguistic behaviorism is false.

4.4  New Forms of Descriptivism

Let us now revisit various new forms of descriptivism which have been proposed as responses to the critical arguments of Kripke and others.

4.4.1  Rigidified Descriptions

As Plantinga (1974) first noted, Kripke’s argument based on rigid designation can be circumvented if, instead of simple descriptions, one focuses on descriptions that have been “rigidified.” For example, one may associate with the name “Socrates” a description of the following type:

The philosopher who actually (in the actual world) drank hemlock.

Such rigidified descriptions, just as proper names, refer to the same entity in every possible world. There is no question that this modification to descriptivism can bypass Kripke’s argument from rigid designators. Furthermore, it is altogether possible that the philosopher who actually drank hemlock might not have drunk hemlock. In other words, this version of descriptivism does not lead to unwanted necessities either. These facts have led many eager defenders of descriptivism to think that rigidified descriptions can save the description theory of reference.

This is, however, premature; the situation is not quite that bright for descriptivism. To begin with, it is intuitively implausible that a simple sentence such as “Socrates is snub-nosed” would have the actual world in its entirety as part of the subject matter of what is said by the sentence (cf. Fitch 2004: 48). Besides, in the context of

possible world semantics,46 descriptivism based on rigidified descriptions faces difficulties. Namely, there are problems with it in the context of Frege’s puzzle concerning identity statements: names with intuitively different meaning but with the same referent now refer to the same entity in all possible worlds, and consequently have – in the possible-worlds framework – the same intension (which is the technical explication of the notion of sense or meaning in this context); but the names were presupposed to have different meanings.47 (Soames (2001) also presents more complicated technical critique of rigidified descriptions.) Furthermore, the epistemological argument has not disappeared. If it was not analytically true and a priori knowable that Socrates drank hemlock, neither is it analytically true and knowable a priori that Socrates is the philosopher who in the actual world drank hemlock. Finally, the weighty arguments from ignorance and error have not been circumvented. If the best description a speaker can provide for, say, “Feynman” is “some physicist,” or a speaker associates with “Einstein” the description “the inventor of the atomic bomb,” rigidifying such insufficient or false descriptions does not provide a way out. In sum, rigidified descriptions cannot save descriptivism.
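To put the intension worry just raised schematically (the notation below is my own illustration, not part of the original text, and it assumes the standard treatment of an intension as a function from possible worlds to extensions): since rigid expressions receive constant intensions, co-referring rigid names, and any rigidified description whose actual satisfier is the same object, collapse into one and the same intension:

int(“Hesperus”) = λw. Venus = int(“Phosphorus”)
int(“the F in the actual world w@”) = λw. (the x that is F in w@) = λw. Venus,  provided that Venus is the actual F.

So if meaning is identified with intension, the two names (and their rigidified descriptions) come out as synonymous after all, and Frege’s puzzle about “Hesperus is Phosphorus” simply reappears.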

46  Personally, I think that possible world semantics has, in the philosophical theory of meaning, rather limited value. But some recent descriptivists attribute to it a highly central role. The following observation has some bite against such descriptivists.
47  I have borrowed this observation from Cumming 2016. (The first version appeared in 2008.) I do not know whether it originated with Cumming or whether it has an earlier history, nor do I know who exactly to credit for it.

4.4.2  Causal Descriptivism

A popular recent version of the description theory of reference is the so-called “causal descriptivism” favored by David Lewis (1984), Fred Kroon (1987), and Frank Jackson (1998), for example. Its ingenious idea is the suggestion that associated with a name “N” is a description, roughly, of the form “the entity which stands in the appropriate causal-historical relation (which accords with the causal theory of reference) to the name.” Slightly more exactly, according to this theory, speakers associate with a name “N” a description of the form The entity standing in relation R to my current use of the name “N”, and this description determines the reference of “N”. The relation R here is drawn from the rival non-descriptivist (e.g. the causal-historical chain picture) theory of reference. Its popularity notwithstanding, causal descriptivism has a few serious problems. Devitt and Sterelny (1999: 61) summarize them accurately. First, it is psychologically implausible: it requires that every competent speaker must possess a theory of reference – the absolutely correct and complete theory of reference – and it is doubtful that anyone possesses such a theory. Second, the descriptions it provides are parasitic and redundant: if it is true, it admits that a name stands in a causal-historical relationship, R, to its bearer; R alone is sufficient to explain reference, and further description involving R is redundant.48 Finally, if we are interested in the question of the meaning of proper names and not only in the fixation of reference, as we should be, causal descriptivism is quite problematic (see Sect. 4.4.4 below).

4.4.3  Nominal Descriptivism or Metalinguistic Descriptivism

A third popular new version of the description theories is so-called “nominal descriptivism” or “metalinguistic descriptivism”.49 It has been advocated as a response to the arguments of Kripke and others, e.g., by Searle (1983), Bach (1987), and Katz (1990, 1994), and apparently Chalmers (2002) has built it into his more general “two-dimensional” theory.50 This approach attempts to circumvent the powerful arguments from ignorance and error with the suggestion that surely even the ignorant and erring language users can associate with a name “N” a description of the form: The thing to which “N” refers. But it is now important to note that this theory does not even begin to answer the main question: In virtue of what does an expression refer to whatever it refers to? The theory already presupposes the reference relation, and cannot explain it. Searle should be credited, though, for being at least to some extent aware of this problem.51 So he adds that ignorant speakers can use descriptions of this sort, but there must be other speakers who know some more substantial descriptions to whom the ignorant can “defer” (Searle 1983: 243). Now one problem with this move is that it entails that the name has a different meaning for the ignorant speakers and for the more knowledgeable ones in a single linguistic community – a sort of consequence which Searle has criticized in other contexts, and which is, in any case, ad hoc and implausible.

48  Cf. Kripke 1980: 70: “Obviously if the only descriptive senses of names we can think of are of the form ‘the man called such and such’, ‘the man called “Walter Scott”’, ‘the man called “Socrates”’, then whatever this relation of calling is is really what determines the reference and not any description like ‘the man called “Socrates”’”. Though Kripke’s target here is more nominal or metalinguistic descriptivism (see below), his point seems to be more or less the same.
49  Sometimes, however, it is counted as a version of causal descriptivism.
50  Or should Chalmers’ view be classified as a version of causal descriptivism? I am not sure. In any case, his formulation is, “The person called ‘N’ by those from whom I acquired the name.”
51  Recall Searle’s above-cited statement that “the standard logic textbook view” – which is simply metalinguistic descriptivism – is “obviously inadequate”.

Nominal or metalinguistic descriptivism also seems to be in conflict with the spirit of descriptivism. Let us recall what Strawson, another key figure of modern descriptivism, said: “[I]t is no good using a name for a particular unless one knows who or what is referred to by the use of the name. A name is worthless without a backing of descriptions which can be produced on demand to explain the application” (Strawson 1959: 20). Also: “One cannot significantly use a name to refer to someone or something unless one knows who or what it is that one is referring to by that name. One must, in other words, be prepared to substitute a description for the name” (181). Searle also seemed to share the same spirit (in the passage we already cited): “The original set of statements about Aristotle [what speakers regard as essential and established facts about him] constitute the descriptive backing of the name in virtue of which and only in virtue of which we can teach and use the name” (Searle 1967). In other words, descriptivism, in its original pre-Kripkean form, required a speaker to have some non-trivial identifying knowledge of the bearer of the name – otherwise she fails to successfully use the name to refer to anything. Metalinguistic descriptivism, on the other hand, completely trivializes the issue: a speaker can always, almost trivially, provide a metalinguistic description. From the perspective of the more traditional descriptivism, metalinguistic descriptivism results in what might be called “miraculous competence”. Imagine a monolingual English-speaker, Jack, who does not understand a word of French. But assume then that Jack associates, with every French proper name “N” he faces, a description of the form “the thing to which ‘N’ refers” and, analogously, with every French predicate “P” a description of the form “the entities which are in the extension of ‘P.’” According to metalinguistic descriptivism, Jack should now be also able to successfully refer with all these names and predicates of French. This seems to be totally contrary to the original spirit of the description theory of reference.52 Finally, the descriptions that metalinguistic descriptivism provides are even more parasitic and redundant (see above) than those of causal descriptivism.

52  From the perspective of NTR this may not be unacceptable. My point here is simply to underline just how different this is from the spirit of traditional, pre-Kripkean descriptivism.

4.4.4  A Theory of Meaning?

I take it that the gist of descriptivism has always been – in contradistinction to the Millian view – that the meaning of a name is something more than the referent and that the associated description expresses this meaning. Nevertheless, both causal and metalinguistic descriptivism are in fact quite implausible as theories of meaning (and do not even “approximate” meaning, as Chalmers suggests). Not only can co-referential names be non-synonymous (Frege’s puzzle of identity), but presumably distinct expressions can be synonymous, that is, share the same meaning. For example, apparently the proper names “Köln” (in German) and “Cologne” (in English) have the same meaning.53 But the descriptions the thing to which “Köln” refers and the thing to which “Cologne” refers attach different descriptive content, or meanings, to them.54 Causal descriptivism has, mutatis mutandis, similar problems. Consider the Swedish sentence Jultomten är vänlig and the English sentence with the same meaning, Father Christmas is kind. However, both causal descriptivism and metalinguistic descriptivism ascribe different meanings to Jultomten and “Father Christmas”; consequently, the entire sentences should also have different meanings. This consequence is unnatural and highly implausible.

There are similar problems with general terms. “Woodchuck” and “groundhog” are synonymous. However, their descriptions,55 the entities which are in the extension of “woodchuck” and the entities which are in the extension of “groundhog,” attach different descriptive content (i.e. meaning) to them. Causal descriptivism has the same problem. At worst, some versions56 of causal and metalinguistic descriptivism imply that even distinct utterances of a single name cannot have the same meaning.

53  As we have noted, Frege explicitly contended that different expressions may well have the same sense: “The same sense has different expressions in different languages or even in the same language” (Frege 1892a: 159). Furthermore, many philosophers take the apparent fact that distinct expressions may be synonymous, that is, have the same meaning, as part of the basic data that any philosophical theory of meaning should be able to explain; see e.g. Lycan 2008: 65–66, 78. Also Chalmers (2002: 139), for example, explicitly grants this possibility.
54  Such a line of argument is by no means original with me. I picked it up from Putnam (1988: 27). It now seems to me that Kripke (1979: 274 n. 12) is making essentially the same point. I developed the idea already in Raatikainen 2006. Everett (2005) makes more or less the same observation. The general idea of such translation arguments goes back to Church’s critique of Carnap (Church 1950).
55  Metalinguistic descriptivism is typically presented explicitly only for singular names; but I assume that if it is supposed to work for general terms too (and recall that there are also Frege’s puzzles for them to be dealt with), it uses descriptions like the ones given here (details are irrelevant for the general point here).
56  Often such versions use, more exactly, descriptions such as “the entity standing in relation R to my current use of the name ‘N’” or “the entity called ‘N’ by my interlocutors”.

A quite obvious countermove explicitly advocated by Kroon and Jackson is to simply deny that causal or metalinguistic descriptivism is even intended to be a theory of meaning – that it is merely a theory of reference fixation.57 But it is important to note that, if this line is taken, then these versions of descriptivism cannot even begin to deal with Frege’s puzzles, e.g., explain the difference in meaning of co-referential but non-synonymous names. Yet, that has always been a major motivation for descriptivism (see Sect. 4.3.2 above). Kroon notes this and is ready to bite the bullet. Jackson apparently never even mentions the puzzles. One may feel, with some justice, that too much has now been given up. Katz in turn contends that, at least in the realm of proper names, two distinct names can never be synonymous. I, for one, cannot help feeling that such a line of response is intolerably ad hoc and ignores many actual cases that appear, at least prima facie, to provide counterexamples. Moreover, Katz abandons the key Fregean assumption that meaning determines reference. And this is a defense of the description theory of reference? Such watered-down versions of descriptivism may be a bit more defensible, but they are impotent in addressing the questions the description theory of reference has standardly been developed to provide answers to.

57  Jackson nevertheless says (1998: 206) that names are abbreviated descriptions; it is difficult indeed to understand what this is supposed to mean, if not that names are synonymous with descriptions.

4.4.5  Substantial and Trivial Versions of Descriptivism

It appears to be a common assumption that NTR was aimed at refuting descriptivism in any possible form. Some philosophers then argue that Kripke and others in fact fail to demonstrate this general conclusion: devices such as metalinguistic or causal descriptions are presented as ingenious and effective responses to the critical arguments of Kripke and others. However, if one bothers to look at what Kripke and Donnellan actually said in their seminal texts, one sees that they did not declare a complete and unconditional victory. If one reads only these confident contemporary descriptivists, it would be difficult to guess that, in fact, Kripke and Donnellan were well aware of the possibility of something like causal descriptivism or metalinguistic descriptivism. Kripke asked whether descriptivism could be rescued:

[T]here is a sense in which a description theory must be trivially true if any theory of the reference of names, spelled out in terms independent of the notion of reference, is available. For if such a theory gives conditions under which an object is to be the referent of a name, then it of course uniquely satisfies these conditions. (Kripke 1980: 88 n. 38, my emphasis)

Kripke called such a theory “a trivial fulfilment” of descriptivism (162). In the “addenda” to Naming and Necessity (160–162), he also explicitly discussed metalinguistic descriptivism. Kripke noted that “the resulting description would hardly be one of the type which occurs to a speaker when he is asked such a question as, ‘Who is Napoleon?’, as the description theorists intended” (162, my emphasis). In the footnote quoted above, Kripke wrote: “however, the arguments I have given show that the description must be one of a completely different sort from that supposed by Frege, Russell, Searle, Strawson and other advocates of the description theory” (my emphasis).

For his part, Donnellan noted that it is necessary to add some qualifications to “the principle of identifying descriptions” (his label for descriptivism), namely to require that the descriptions provided are “non-question-begging” if descriptivism is supposed to be an interesting view at all. This is because

there are certain descriptions that a user of the name (providing he can articulate them) could always provide and which would always denote the referent of the name (providing there is one). No argument could be devised to show that the referent of a name need not be denoted by these descriptions. At the same time anyone who subscribes to the principle of identifying descriptions would hardly have these descriptions in mind or want to rely on them in defence of the principle. (Donnellan 1970: 365, my emphasis)

As examples of such “question-begging” descriptions, Donnellan mentions “the entity I had in mind” and “the entity I referred to.” He contends that if descriptions such as these are included in the “backing descriptions”, descriptivism “would become uninteresting.” Donnellan then points out that Strawson, for example, explicitly excludes descriptions such as these. In Searle (1958, 1967), there are also passages that suggest Searle would, likewise, not at the time have accepted such “question-begging” descriptions, either. In a similar spirit, Putnam (1970) wrote:

In the traditional view, the meaning of, say ‘lemon’, is given by specifying a conjunction of properties.… In one sense, this is trivially correct. If we are allowed to invent unanalyzable properties ad hoc, then we can find a single property – not even a conjunction – the possession of which is a necessary and sufficient condition for being a lemon, or being gold, or whatever. Namely, we just postulate the property of being a lemon, or the property of being gold, or whatever may be needed. If we require that the properties P1, P2, …, Pn not be of this ad hoc character, the situation is very different. (Putnam 1970: 140, my emphasis)

Consequently, it is odd that so many philosophers present causal or metalinguistic descriptivism as conclusive responses to Kripke and others, as the latter were aware of the possibility of such trivial and ad hoc descriptions from the beginning. The argument was always specifically about the well-motivated and non-trivial types of descriptivism that were popular in the literature then. It was never denied that some ad hoc and trivial variants of descriptivism could possibly circumvent the critical arguments. Therefore, it is hardly a great philosophical achievement to now put forward such versions of descriptivism.

Interim Conclusions  It seems fair to conclude that many of the recent defenses of descriptivism amount to moving the goalposts. These new forms of descriptivism

–– are often impotent in answering the main question (i.e., in explaining in virtue of what a name refers to a specific entity).
–– often provide only parasitic and redundant descriptions (see Sect. 4.4.2 above).
–– often fail to provide a socially shared intersubjective meaning, but make meaning differ wildly between different speakers.
–– often cannot solve Frege’s puzzles, which was always a key motivation of descriptivism.

4.5  Kind Terms

The literature on general terms has largely focused on Putnam’s Twin Earth thought experiment and “water.” Jackson (1998) and Chalmers (2002), for example, suggest in response that the meaning of “water” can, after all, be captured (at least approximately) with a description. If we abbreviate “the clear drinkable liquid that fills the lakes and rivers, falls from the sky in rain (etc.)” as “the watery stuff”, they propose that a description such as “the watery stuff I am acquainted with” or “the watery stuff found around here” would be sufficient. It picks out H2O, but would have referred to XYZ if it had been introduced by someone on Twin Earth, or if the watery stuff on Earth had been XYZ. Whatever the details, this example has some importance for Jackson and Chalmers: it is supposed to provide a paradigm of the foundations of conceptual analysis or the Fregean sense of a kind term.

Although “water” served its purpose in Putnam’s science fiction, it is in many ways an atypical example and can be quite misleading: Almost three-quarters of Earth is covered by water; our bodies are mostly made up of water; and clean drinking water is fundamental to our survival. Therefore, we are all enormously familiar with water. We are in touch with water on a daily basis. Not so with many other natural kinds. By focusing on an atypical and extreme example, Jackson and Chalmers smooth away the relevance of reference borrowing and the phenomenon of ignorance and error so central to NTR.

Consider, instead, the following scenario: Imagine uneducated peasants somewhere in northern Europe some time ago (perhaps even in the Middle Ages). Assume they have picked up some kind words from their priest, who has been reading the Bible to them. They have heard, in the Song of Songs, about gazelles and leopards, cedars, firs and fig trees, spikenard and saffron, myrrh and aloes, pomegranates, sapphires, and alabaster. However, it may be quite unclear to them what these things really are – even whether they are trees, flowers, mammals, predators, metal, or something else.58

58  There is a problem with this example: as it is presented, it involves translation from Hebrew to English, and the early translations of the Bible were quite inadequate. A cleaner example would be one with a Jewish community hearing all this in the original Hebrew. I only wanted to present the example (including the words) in English for the reader’s convenience.

Nevertheless, if the historical chain picture is at least roughly correct, even such ignorant people can borrow the reference, have the word, and use it to refer successfully. However, there is not much descriptive content in their minds about these kinds to go on – nothing like “the watery stuff” description. For contingent reasons, we all know a great deal about water (even before we know its chemical formula is H2O), but knowing that much is neither typical nor necessary for successful reference with a kind term. Moreover, it seems that reference-determining meanings must be something quite different from what is expressed by descriptions in the style of “watery stuff”.

4.6  Back to the Millian View?

One noteworthy, more recent development in the theory of meaning has been the revival of the Millian view. Several philosophers have concluded not only that the critical arguments against descriptivism are decisive, but also that they leave no choice but to return to the direct reference theory (DRT): the meaning of a proper name is, after all, simply the object denoted.59 (Accordingly, many of those who are sympathetic towards descriptivism seem to assume that the only alternative to descriptivism is DRT – that these are the only possible options.) Kaplan (1989a, b) has been an important background figure in this development. He defined the notion of “directly referential,” but it is not entirely clear to what view exactly he wanted to commit himself, at least as far as proper names are concerned.60 Be that as it may, the full-blown DRT emerged with Almog (1984, 1985), Salmon (1986), Wettstein (1986), and Soames (1987), for example (see also, e.g., Braun 1993, 1998, 2001; Soames 2001).

In contrast, of the central figures of NTR, neither Kripke nor Putnam61 nor Devitt has subscribed to DRT. Kripke does later say that his own view is closer in various respects to Mill’s view than to the descriptivist tradition (see Kripke 1973/2013: 11; Kripke 1979: 239) – even that “a Millian line should be maintained as far as is feasible” (1979: 248, my emphasis). And Kripke does endorse the substitutivity of co-referential names in the contexts of alethic modalities.62 However, he does not advocate universal substitutivity of names (e.g. in belief contexts), nor does he regard “Hesperus is Phosphorus” and “Hesperus is Hesperus” as interchangeable (see Kripke 1980: 20).63

59  Martí (1995) distinguishes between two different ideas in DRT: that the meaning (or “semantic value”) of a name is its referent, and the idea of Russellian singular propositions (in which the referent itself is a constituent of the proposition expressed). I shall focus here only on the former, less technical idea.
60  I think much the same can be said about Salmon’s early work (1981), in which he states, e.g., “it is neither helpful nor illuminating to see the central issue [with DRT] as a question whether proper names have sense” (11).
61  In Putnam’s case, I content myself with referring to Putnam (2001).
62  That is, in sentence contexts involving metaphysical necessity or possibility.
63  Kripke, however, raises some doubts as to whether the real historical Mill endorsed the latter, either.
64  “Note that my view is not the genuinely preposterous view that the meaning of a name is a particular token causal link … and so is not open to Salmon’s ‘argument from subjectivity’” (Devitt 2012: 73 n. 18).

Devitt has been a particularly important force on this issue. He has argued (Devitt 1989) that the revival of DRT is based on a problematic background assumption, a false dichotomy (he calls it “semantic presupposition”; abbreviated SP):

(SP) The meaning of a name is either descriptive or else it is the name’s referent.

Many seem to think that SP is true by definition or trivial. Devitt, however, questions this assumption. As a third alternative – he later called it “a shocking idea” – Devitt (1974, 1981, 1989, 2001, 2015) proposed that referring expressions do have, over and above the object denoted, a sort of sense – a way the referent is presented – but that this sense is not descriptive. Rather, he suggests that the causal-historical chain relevant for the name itself, which is often opaque to the particular language user, can play (at least in many respects) the role of the sense, or the meaning. Different types of causal-historical chains underlie “Hesperus” and “Phosphorus”, for example, and this explains their different roles in reasoning and communication, and justifies the conclusion that they have distinct meanings.

What is the motivation for ascribing meanings to some strings of symbols (or of sounds), in any case? One plausible reason is to serve as an explanation of human behavior. Assume, for example, that Jason is told that Bob Dylan is in the room. This may trigger certain behaviors in him: for example, he might try to find Dylan and shake his hand. If Jason is told instead that Robert Zimmerman is in the room (and Jason does not know that Robert Zimmerman is in fact Bob Dylan), he may well react very differently. This supports the hypothesis that there is a difference of meaning in “Bob Dylan” and “Robert Zimmerman.”

One might now object that this suggestion would make meaning, or sense, much too fine-grained: does it not entail that any pair of different tokens of the same name, not to mention distinct but intuitively synonymous expressions, would always have different meanings? Not necessarily. First, Devitt’s proposal concerns types and not tokens.64 Second, Devitt grants that similarity between causal-historical chains is a matter of degree (1981: 154–155). I contend, for my part, that in practice, it is, to some extent, a matter of conventional stipulation how similar we require the chains to be for two expressions to be considered synonymous. The question is a bit tricky, especially with proper names: for example, “Germany” and “Deutschland” or “Finland” and “Suomi” are now commonly considered as synonymous pairs, although they probably have distinct origins. The idea of multiple grounding can perhaps be used to partly explain such cases. In other cases, like that of Saint Peter, the historical chains of Aramaic “Kepa” and Greek “Petros” (both meaning “rock”) presumably overlap, whereas his
original name “Shimon” has a more distinct chain. Be that as it may, however, if the (multiple) groundings and the subsequent causal-historical chains of two co-referential expressions are largely distinct, this may be a sufficient reason to consider the two expressions as non-synonymous.

Given how natural the idea of looking at the causal-historical chains is (once we have recognized them, at any rate), I find it puzzling how little explicit attention Devitt’s suggestion has received in the literature. Most philosophers have not given serious consideration to this alternative. Many do not seem to even be aware of the proposal. From the DRT camp, Salmon notes it but quickly dismisses it as “ill conceived if not downright desperate … wildly bizarre … a confusion, on the order of a category mistake” (1986: 70–71). However, allow me to make an interesting historical observation: apparently very few have noted that in the original 1972 article version of “Naming and necessity”, Kripke himself, though he perhaps did not unambiguously endorse the idea, did at least briefly mention it:

Hartry Field has proposed that, for some of the purposes of Frege’s theory, his notion of sense should be replaced by the chain which determines reference. (Kripke 1972: 346 n. 22)

This passage is, however, omitted from the reprinted 1980 book version.65 As it happens, Kripke also mentioned the idea in passing in “A puzzle about belief”:

It has been suggested that the chain of communication … might thereby itself be called a ‘sense’. Perhaps so. (Kripke 1979: 248)

Furthermore, as Devitt (2015) notes, recently Kaplan, of all people – to many he is the father of contemporary DRT – has endorsed an idea very similar to this:

This might be an appropriate place to raise the question whether these arguments show that proper names are not Millian. If Millian means that different names of the same individual never differ semantically, I do not think that names are Millian, because I take the way the bearer is represented, even if nondescriptive, to belong to semantic theory. However, Mill himself claimed only that names had denotation but no connotation. Connotation was, for Mill, descriptive meaning that determines denotation. Mill believed that predicates and natural-kind terms had such connotations. So, if by Millian we mean that names do not have Millian connotations, then I do regard names as Millian since the way the bearer is represented is nondescriptive. As we have learned, it is important to separate how the individual is represented from the mechanism that determines what individual is represented. This is a distinction that the notion of a referential use of a definite description presupposes. (Kaplan 2012: 167 n. 22)

I believe that this idea of Devitt’s – whatever its limits – deserves broader interest. Though probably nothing can play all the roles that the traditional notion of meaning or Frege’s notion of sense was purported to play, it seems that if we accept the idea of the causal-historical chain of communication, it can play at least some of those roles. At any rate, Devitt’s proposal handles Frege’s puzzles quite nicely (see Devitt 1989). DRT, on the other hand, has made little indisputable progress with them.66

65  At the 2013 Buenos Aires workshop (where both Devitt and I were present), Kripke explained that he had deleted the note simply because someone had informed him that he should have credited the idea to Devitt and not to Field.

References Achinstein, P. 1968. Concepts of science: A philosophical analysis. London: The Johns Hopkins University Press. Almog, J. 1984. Semantic anthropology. Midwest Studies in Philosophy 9: 478–489. ———. 1985. Form and content. Noûs 19: 603–616. Bach, K. 1987. Thought and reference. Oxford: Oxford University Press. ———. 1998. Content: Wide and narrow. In Routledge encyclopedia of philosophy, ed. E. Craig. London: Routledge. Braun, D. 1993. Empty names. Noûs 27: 449–469. ———. 1998. Understanding belief reports. Philosophical Review 107: 555–595. ———. 2001. Russellianism and explanation. In Philosophical perspectives, 15: Metaphysics, ed. J. Tomberlin, 253–289. Atascadero: Ridgeview Publishing Company. ———. 2006. Names and natural kind terms. In The Oxford handbook of philosophy of language, ed. E. Lepore and B.C. Smith, 490–515. Oxford: Oxford University Press. Bridges, J. 2010. Wittgenstein vs contextualism. In Wittgenstein’s Philosophical investigations: A critical guide, ed. A.M. Ahmed, 109–128. Cambridge: Cambridge University Press. Burge, T. 1979. Sinning against Frege. Philosophical Review 88: 398–432. Burgess, J.P. 2013. Kripke: Puzzles and mysteries. Cambridge: Polity Press. Carnap, R. 1947. Meaning and necessity: A study in semantics and modal logic. Chicago: The University of Chicago Press. Chalmers, D. 2002. On sense and intension. Philosophical Perspectives 16: 135–182. ———. 2012. Constructing the world. Oxford: Oxford University Press. Church, A. 1950. On Carnap’s analysis of statements of assertion and belief. Analysis 10 (5): 97–99. Crane, T. 1991. All the difference in the world. The Philosophical Quarterly 41: 1–25. Cumming, S. 2016. Names. In The Stanford encyclopedia of philosophy, Fall 2016 edition, ed. E.N. Zalta. https://plato.stanford.edu/archives/fall2016/entries/names/. Currie, G. 1982. Frege: An introduction to his philosophy. Brighton: Harvester Press. Currie, G., and P. Eggenberger. 1983. Knowledge of meaning. Noûs 17: 267–279. Davidson, D. 1990. The structure and content of truth. Journal of Philosophy 87: 279–328. Devitt, M. 1974. Singular terms. Journal of Philosophy 71: 183–205. ———. 1981. Designation. New York: Columbia University Press. ———. 1983. Dummett’s anti-realism. Journal of Philosophy 80: 73–99. ———. 1989. Against direct reference. Midwest Studies in Philosophy 14: 206–240. ———. 1998. Reference. In Routledge encyclopedia of philosophy, ed. E.  Craig. London: Routledge. ———. 2001. A shocking idea about meaning. Revue Internationale de Philosophie 4 (218): 471–494.

66  Earlier versions (of parts) of this paper have been presented in London, Stirling, Florence, Hamburg, Helsinki and Turku. I would like to thank those who participated in the discussions on these occasions. I have learned about these topics more from Michael Devitt than from anyone else – first from his writings, and later also from our innumerable discussions and from our correspondence. With this little piece, I want to congratulate my good friend Michael on the occasion of his eightieth birthday and, most of all, to celebrate his momentous life’s work in philosophy.

———. 2006. Responses to the Rijeka papers. Croatian Journal of Philosophy 6: 97–112. ———. 2008. Reference borrowing: A response to Dunja Jutronic. Croatian Journal of Philosophy 8: 361–366. ———. 2012. Still against direct reference. In Prospects for meaning, ed. R.  Schantz, 61–84. Berlin: Walter de Gruyter. ———. 2015. Should proper names still seem so problematic? In On reference, ed. A. Bianchi, 108–143. Oxford: Oxford University Press. Devitt, M., and K. Sterelny. 1987. Language and reality. Oxford: Basil Blackwell. ———. 1999. Language and reality. 2nd ed. Oxford: Blackwell. Donnellan, K. 1970. Proper names and identifying descriptions. Synthese 21: 335–358. Reprinted in Semantics of natural language, ed. D.  Davidson and G.  Harman, 356–379. Dordrecht: Reidel, 1972. Dummett, M. 1973. Frege: Philosophy of language. London: Duckworth. ———. 1974. The social character of meaning. Published in Dummett 1978: 420-430. ———. 1978. Truth and other enigmas. London: Duckworth. ———. 1993. The seas of language. Oxford: Oxford University Press. Dupre, J. 1981. Natural kinds and biological taxa. Philosophical Review 90: 66–90. Evans, G. 1973. The causal theory of names. Proceedings of the Aristotelian Society 47 (Suppl): 187–208. ———. 1985. Understanding demonstratives. Reprinted in G. Evans, Collected papers, 291–321. Oxford: Clarendon Press. Everett, A. 2005. Recent defenses of descriptivism. Mind & Language 20: 103–139. Fitch, G.W. 2004. Saul Kripke. Bucks: Acumen. Frege, G. 1892a. On sense and meaning. Reprinted in G. Frege, Collected papers on mathematics, logic, and philosophy. Edited by B. McGuinness, 157–177. Oxford: Basil Blackwell, 1984. ———. 1892b. On concept and object. Reprinted in G. Frege, Collected papers on mathematics, logic, and philosophy. Edited by B. McGuinness, 182–194. Oxford: Basil Blackwell, 1984. ———. 1914. Letter to Jourdain. Reprinted in G. Frege, Philosophical and mathematical correspondence. Edited by B. McGuinness, 78–80. Oxford: Blackwell, 1980. Heck, R., and R.  May. 2006. Frege’s contribution to philosophy of language. In The Oxford handbook of philosophy of language, ed. E.  Lepore and B.C.  Smith, 3–39. Oxford: Oxford University Press. Jackson, F. 1998. Reference and description revisited. Philosophical Perspectives 12: 201–218. Kaplan, D. 1989a. Demonstratives: An essay on the semantics, logic, metaphysics, and epistemology of demonstratives and other indexicals. In Themes from Kaplan, ed. J. Almog, J. Perry, and H. Wettstein, 481–563. New York: Oxford University Press. ———. 1989b. Afterthoughts. In Themes from Kaplan, ed. J. Almog, J. Perry, and H. Wettstein, 565–614. New York: Oxford University Press. ———. 2012. An idea of Donnellan. In Having in mind: The philosophy of Keith Donnellan, ed. J. Almog and P. Leonardi, 122–175. New York: Oxford University Press. Katz, J.J. 1990. Has the description theory of names been refuted? In Meaning and method: Essays in honour of Hilary Putnam, ed. G. Boolos, 31–61. Cambridge: Cambridge University Press. ———. 1994. Names without bearers. Philosophical Review 103: 1–39. Kremer, M. 2010. Sense and reference: The origins and development of the distinction. In The Cambridge companion to Frege, ed. T.  Ricketts and M.  Potter, 220–292. Cambridge: Cambridge University Press. Kripke, S. 1971. Identity and necessity. In Identity and individuation, ed. M.K. Munitz, 135–164. New York: New York University Press. ———. 1972. Naming and necessity. In Semantics of natural language, ed. D.  Davidson and G. Harman, 253–355. 
Dordrecht: Reidel. ———. 1973/2013. Reference and existence: The John Locke lectures. Oxford: Oxford University Press.

———. 1979. A puzzle about belief. In Meaning and use, ed. A.  Margalit, 239–283. Dordrecht: Reidel. ———. 1980. Naming and necessity. Reprint of Kripke 1972 with a new introduction. Cambridge, MA: Harvard University Press. ———. 2008. Frege’s theory of sense and reference: Some exegetical notes. Theoria 74: 181–218. Kroon, F. 1987. Causal descriptivism. Australasian Journal of Philosophy 65: 1–17. Lewis, D. 1984. Putnam’s paradox. Australasian Journal of Philosophy 62: 221–236. Lycan, W. 2006. Names. In The Blackwell guide to the philosophy of language, ed. M. Devitt and R. Hanley, 255–273. Oxford: Blackwell. ———. 2008. Philosophy of language: A contemporary introduction. 2nd ed. Oxon: Routledge. Martí, G. 1995. The essence of genuine reference. Journal of Philosophical Logic 24: 275–289. May, R. 2006. The invariance of sense. Journal of Philosophy 102: 111–144. McDowell, J. 1977. On the sense and reference of a proper name. Mind 86: 159–185. McGinn, C. 1982. The structure of content. In Thought and content, ed. A. Woodfield, 207–258. Oxford: Oxford University Press. Millar, A. 1977. Truth and understanding. Mind 86: 405–416. Miller, A. 2006. Realism and antirealism. In The Oxford handbook of philosophy of language, ed. E. Lepore and B.C. Smith, 983–1005. Oxford: Oxford University Press. Noonan, H.W. 2001. Frege: A critical introduction. Cambridge: Polity Press. Papineau, D. 1979. Theory and meaning. Oxford: Clarendon Press. Plantinga, A. 1974. The nature of necessity. Oxford: Oxford University Press. Putnam, H. 1962. The analytic and the synthetic. In Scientific explanation, space, and time, Minnesota studies in the philosophy of science, vol. 3, ed. H. Feigl and G. Maxwell, 358–397. Minneapolis: University of Minnesota Press. Reprinted in H.  Putnam, Mind, language and reality: Philosophical papers, vol. 2, 33–69. Cambridge: Cambridge University Press. ———. 1965. How not to talk about meaning. In Boston studies in the philosophy of science, vol. 2, ed. R.S. Cohen, and M.R. Wartofsky, 205–222. New York: Humanities Press. Reprinted in H. Putnam, Mind, language and reality: Philosophical papers, vol. 2, 117–131. Cambridge: Cambridge University Press. ———. 1970. Is semantics possible? In Language, belief and metaphysics, ed. H.  Kiefer and M.  Munitz. Albany: SUNY Press. Reprinted in H.  Putnam, Mind, language and reality: Philosophical papers, vol. 2, 139–152. Cambridge: Cambridge University Press. ———. 1973. Explanation and reference. In Conceptual change, ed. G. Pearce and P. Maynard, 199–221. Dordrecht: Reidel. Reprinted in H. Putnam, Mind, language and reality: Philosophical papers, vol. 2, 196–214. Cambridge: Cambridge University Press. ———. 1974. Comment on Wilfrid Sellars. Synthese 27: 445–455. ———. 1975a. The meaning of ‘meaning’. In Language, mind, and knowledge, Minnesota studies in the philosophy of science, vol. 7, ed. K. Gunderson, 131–193. Minneapolis: University of Minnesota Press. Reprinted in H. Putnam, Mind, language and reality: Philosophical papers, vol. 2, 215–271. Cambridge: Cambridge University Press. ———. 1975b. Language and reality. In H. Putnam, Mind, language and reality: Philosophical papers, vol. 2, 272–290. Cambridge: Cambridge University Press. ———. 1975c. The refutation of conventionalism. In Semantics and meaning, ed. M.  Munitz. New York: New York University Press. Reprinted in H. Putnam, Mind, language and reality: Philosophical papers, vol. 2, 153–191. Cambridge: Cambridge University Press. ———. 1986. Meaning holism. In The philosophy of W.V. Quine, ed. 
L.E. Hahn and P.A. Schilpp, 405–431. La Salle: Open Court. ———. 1988. Representation and reality. Cambridge, MA: MIT Press. ———. 2001. Reply to Michael Devitt. Revue Internationale de Philosophie 4 (218): 495–502. Quine, W.V. 1960. Carnap and logical truth. Synthese 12: 350–374. ———. 1963. Necessary truth. Reprinted in W.V. Quine, The ways of paradox and other essays. Revised and enlarged edition, 68–76. Cambridge: Harvard University Press, 1976.

Raatikainen, P. 2005. On how to avoid the indeterminacy of translation? The Southern Journal of Philosophy 43: 395–414. ———. 2006. Against causal descriptivism. Mind & Society 5 (1): 78–84. ———. 2010. The semantic realism/anti-realism dispute and knowledge of meanings. The Baltic International Yearbook of Cognition, Logic and Communication 5: 1–13. Reimer, M. 2009. Reference. In The Stanford encyclopedia of philosophy, Summer 2009 edition, ed. E.N. Zalta. http://plato.stanford.edu/archives/sum2009/entries/reference/. Russell, B. 1918. The philosophy of logical atomism. In Logic and knowledge, ed. R.C. Marsh, 177–281. London: Allen and Unwin, 1956. Salmon, N. 1981. Reference and essence. Princeton: Princeton University Press. ———. 1986. Frege’s puzzle. Cambridge, MA: MIT Press. Schwartz, S.P. 1977. Introduction. In Naming, necessity, and natural kinds, ed. S.P.  Schwartz, 13–41. Ithaca/London: Cornell University Press. Searle, J. 1958. Proper names. Mind 67: 166–173. ———. 1967. Proper names and descriptions. In Encyclopedia of philosophy, ed. P. Edwards, vol. 6, 487–491. New York: Macmillan. ———. 1971. Introduction. In The philosophy of language, ed. J. Searle, 1–12. Oxford: Oxford University Press. ———. 1983. Intentionality: An essay in the philosophy of mind. Cambridge: Cambridge University Press. Segal, G. 2000. A slim book about narrow content. Cambridge, MA: MIT Press. Shapere, D. 1964. The structure of scientific revolutions. Philosophical Review 73: 383–394. ———. 1966. Meaning and scientific change. In Mind and cosmos: Essays in contemporary science and philosophy, ed. R. Colodny, 41–85. Pittsburgh: University of Pittsburgh Press. Soames, S. 1987. Direct reference, propositional attitudes, and semantic content. Philosophical Topics 15: 47–87. ———. 2001. Beyond rigidity: The unfinished semantic agenda of Naming and necessity. Oxford/ New York: Oxford University Press. Stanford, P.K., and P. Kitcher. 2000. Refining the causal theory of reference for natural kind terms. Philosophical Studies 97: 97–127. Sterelny, K. 1983. Natural kind terms. Pacific Philosophical Quarterly 64: 110–125. Strawson, P.F. 1959. Individuals. London: Routledge. ———. 1969. Meaning and truth. Reprinted in P.F. Strawson, Logico-linguistic papers, 170–189. London: Methuen 1971. Travis, C. 1989. The uses of sense: Wittgenstein’s philosophy of language. Oxford: Clarendon Press. Unger, P. 1983. The causal theory of reference. Philosophical Studies 43: 1–45. Wettstein, H. 1986. Has semantics rested on a mistake? Journal of Philosophy 83: 185–209.

Chapter 5

Multiple Grounding

François Recanati

Abstract  Devitt’s theory of multiple grounding sheds light on phenomena like reference change and confusion. It rests on the notion of partial reference, borrowed from Field’s work on referential indeterminacy. Devitt’s ideas on these topics can be accommodated within the mental file framework, which is compatible with Devitt’s causal account of reference. In that framework, the role of proper names is to coordinate the mental files of all of those who are involved in the name-using practice.

Keywords  Reference borrowing · Reference change · Confusion · Grounding · Anaphora · Proper names · Coordination · Coreference de jure · Mental files

5.1  Devitt vs Kripke

In his work since the early seventies Michael Devitt has elaborated Kripke’s nondescriptivist picture of reference. Reference, for Devitt, is based on causal relations to things in the environment. Reference thus understood (what Devitt calls ‘designation’1) has to be distinguished from the start from denotation, which is based on satisfaction (of concepts by objects) rather than on causal relations. The distinction between reference and denotation is explicitly drawn by Donnellan (1966). Devitt’s way of drawing the distinction is slightly different, but they share the basic idea that reference is fundamentally relational while denotation is satisfactional (to use Kent Bach’s catchy formulation2).

1  Devitt uses ‘refer’ in a generic way, for general terms as well as for singular terms. He uses ‘designation’ for singular reference. In this chapter I follow the standard usage rather than Devitt’s own usage.
2  ‘Since the object of a descriptive thought is determined satisfactionally, the fact that the thought is of that object does not require any connection between thought and object. However, the object of a de re thought is determined relationally. For something to be the object of a de re thought, it must stand in a certain kind of relation to that very thought’ (Bach 1987: 12).

For Devitt an expression (token) refers to an object in virtue of a causal relation between the token and the object. In the case of proper names, to which Devitt devoted his Harvard dissertation (Devitt 1972), the relation can be split into two component relations:

the link between name and object has two parts – causal network and initial link to object. (Devitt 1981a: 41)

Names, Devitt says, are ‘basically anaphoric: reference borrowing is of the essence of their role’ (Devitt 1981a: 45).3 In the causal history of a particular use of a name are other uses of the name, from which it inherits its reference. This is similar to anaphora: just as the pronoun ‘he’ inherits the reference of its singular antecedent in ‘I’ve just read Aristotle. He is a great philosopher’ or in the dialogue ‘Have you read Aristotle? Yes, he is a great philosopher’, the name ‘Aristotle’ in these sentences inherits its reference from past uses to which that use is causally related (as per the Geach-Kripke-Donnellan picture).4 So the causal relation between a name token and its reference involves (i) the quasi-anaphoric relation between the token and the other tokens in the communicative chain (or network),5 and (ii) the grounding relation between the chain and some external object.

According to both Kripke and Devitt, some of the tokens in a chain bear the responsibility for grounding the entire chain.6 (The tokens that don’t bear any responsibility are parasitic on the others; this explains how it is possible for an ignorant speaker to refer to Aristotle, even though he or she knows virtually nothing of him.) For Kripke the token which initiates the chain at the so-called ‘dubbing’ stage bears the responsibility for grounding the chain it initiates. It is, as it were, the antecedent which fixes the reference for all subsequent uses of the name: they inherit the reference determined by the initial dubbing. Figure 5.1 summarizes the Kripke picture. Token t3 refers to object o in virtue of its anaphoric connections to other tokens (t1 and t2) to which it is causally related within a chain which itself bears (via t1) the grounding relation to o. The chain and all the tokens belonging to it, including t3, refer to the object the chain is grounded in (via the token t1 which bears the responsibility for grounding).

Fig. 5.1  The Kripke picture (communicative chain of tokens t1, t2, t3, grounded in the object via t1)

3  Likewise, Taylor claims that proper names are essentially devices of coreference. Their role is to build, and exploit, ‘chains of explicit coreference’, participation in which guarantees the sharing of subject matter with other participants. ‘[W]hat it is to intend to use an expression as a name’, he says, ‘is to use that expression with the intention of either launching or continuing a chain of explicit co-reference’ (Taylor 2003: 10). When the same name is used twice, coreference is linguistically guaranteed: ‘Tokens of the same name … are guaranteed to co-refer, if they refer at all’ (Taylor 2003: 14).
4  See Geach 1969, Kripke 1980, Donnellan 2012.
5  ‘[U]nderlying a person’s use of a name may be many designating-chains involving multiple reference-borrowings and, ultimately, multiple groundings in the object: there may be a causal network of designating-chains underlying her use’ (Devitt 2015: 117).
6  ‘Grounding’ is Devitt’s term, not Kripke’s. Kripke speaks of reference-fixing. (By ‘grounding’, Devitt specifically means a causal-perceptual fixing of reference, while Kripke makes room for descriptive modes of reference-fixing.)

Devitt substantially modifies the Kripke picture by not restricting the responsibility for grounding to the initial step: the chain can be multiply grounded. Grounding involves associating a token of the name with ‘a mental representation of [the] object brought about by an act of perception’ (Devitt 1981a: 133), i.e. a ‘demonstrative representation’. That occurs not only at the initial stage, when the name is introduced and the object dubbed, but also at later stages when the sort of perceptual contact with the object which characterizes the initial step recurs:

What is it … that grounds the name in a certain object? It is the causal-perceptual link between the first users of the name and the object named. What made it the case that this particular object got named in such a situation was its unique place in the causal nexus in the grounding situation. It is important to note that this sort of situation will typically arise many times in the history of an object after it has been initially named: names are typically multiply grounded in their bearers. These other situations are ones where the name is used as a result of a direct perceptual confrontation with its bearer. (Devitt 2015: 114)

Fig. 5.2  The Devitt picture (communicative chain of tokens t1, t2, t3, grounded in the object via t1 and again via t3)

In Fig. 5.2, illustrating Devitt’s position, perceptual grounding occurs not only at the initial step, t1, but also later, at t3. In such cases, Devitt says, the chain’s grounding in the object, initiated at t1, is reinforced at t3.

5.2  Reference Change

Without multiple grounding, Devitt points out, one cannot account for reference change in so-called ‘Madagascar’ cases (Evans 1973). Devitt’s theory of multiple grounding can thus be construed as a response to the challenge raised by Evans for Kripke, even though the theory predates Evans’s challenge.7 That is how Devitt himself presents it:

Multiple grounding is very important: it enables a causal theory to explain reference change and various mistakes and misunderstandings. Causal theories of reference for names, or indeed for any terms, leave themselves open to easily produced counterexamples if they make the initial grounding at a naming ceremony (or equivalent) bear the entire burden of linking a network to an object. (Devitt 1981a: 57)

In the Kripke picture a name token inherits its reference from its ancestors, just as an anaphoric expression inherits its reference from its antecedent. In the Devitt picture, a name token does not merely inherit its reference from its ancestors; it may contribute some grounding of its own (through the demonstrative representation it is associated with). As we have seen, each use of a name in the presence of its bearer reinforces the name’s grounding in the object. Things can go wrong, however: instead of reinforcing the chain’s grounding in the object, a later grounding event may damage it by unwittingly bringing a second object into the picture. In other words, there may be referential divergence between the earlier and the later groundings, as in Fig. 5.3. Instead of reinforcing the link to object o1, the new grounding provided at t3 anchors the chain to another object, o2, and therefore weakens the link to o1 by providing an alternative referent for the chain. This is what accounts for the possibility of reference change.8 What fixes the reference of a name, Devitt says, is the pattern of groundings which underlies it, and that pattern evolves as time passes (Devitt 2015: 122). The link to object o1 may gradually weaken, and the link to object o2 gradually strengthen, until o2 wins and becomes the new referent for the name. That’s arguably what happened in the ‘Madagascar’ case. ‘Madagascar’ was first the name of (a certain part of) Africa’s mainland; Marco Polo mistakenly took the term to be used (by the natives) to refer to the island off the coast. As more and more people followed Marco Polo’s usage, the name changed its reference and now refers to the island.

Fig. 5.3  Referential divergence (communicative chain t1, t2, t3; grounded in o1 via t1 and in o2 via the later grounding at t3)

7  As Devitt points out (Devitt 2015: 122 n. 29), his earliest discussions (1972, 1974) used the idea of multiple groundings to address the problem of confusion (see Sect. 5.3 below). He did not use it to address the problem of designation change until Designation (Devitt 1981a), after Evans had pressed the problem.
8  As Berger says (speaking of ‘focusing’ where Devitt talks of ‘grounding’), a term ‘can undergo an unintended reference change at a particular stage in its reference transmission only if at that stage the term’s reference is transmitted by a genuine focusing on a new referent’ (Berger 2002: 18–19).

On the Kripke picture the initial grounding fixes the reference for the chain, and every token in the chain inherits that reference. Referential divergence within a chain is impossible. The referential divergence introduced at t3 would have to be described as the launching of a new chain (a new name). Not so on the Devitt picture, where referential divergence is allowed and gradual change is possible.

5.3  Confusion

In cases of referential divergence such as that illustrated in Fig. 5.3, the token that initiates the divergence (t3) is referentially anchored to two distinct objects, o1 and o2. Via the anaphoric link to the other tokens in the chain, it is anchored to o1, while it is anchored to o2 via the new demonstrative link it itself carries. Cases of multiple anchoring like this are cases of confusion. Confusion is a theoretically important phenomenon, the study of which Devitt has pioneered. Devitt borrows Field’s notion of partial reference to handle confusion cases: t3 partially refers to both o1 and o2, but does not fully refer to either of them. Field proposes to dispense with the notion of full reference altogether, and to derive truth-conditions for utterances involving partially referring tokens by using supervaluation techniques (Field 1973). But, as Field acknowledges, we can also take the notion of partial reference as basic and define full reference in terms of it. The following definition suggests itself:

A token t fully refers to x just in case (i) t partially refers to x, and (ii) for every y, if t partially refers to y, then y = x.
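Put schematically (a minimal rendering of the definition just given; the predicate names PartRef and FullRef are introduced here only for readability and are not Field’s or Devitt’s notation):

\[ \mathrm{FullRef}(t, x) \;\leftrightarrow\; \mathrm{PartRef}(t, x) \,\wedge\, \forall y\,\bigl(\mathrm{PartRef}(t, y) \rightarrow y = x\bigr) \]

Read as a biconditional, full reference is simply partial reference to a unique object.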

Let us illustrate this with an example, due to Lawlor (and discussed at length in Recanati 2016): Wally says of Udo, ‘He needs a haircut’, and Zach, thinking to agree, but looking at another person, says, ‘he sure does’. (Lawlor 2010: 489)

The pronoun in Wally’s mouth refers to Udo. Zach’s pronoun is meant to be anaphoric on Wally’s and to corefer with it, but at the same time it bears a demonstrative link to the person Zach is looking at, whom he wrongly takes to be the person Wally was referring to. Zach is confused: he is tracking two distinct objects at the same time (namely the person he sees and the person Wally initially referred to). The Field-Devitt notion of partial reference comes in handy here. Zach’s pronoun partially refers to Udo (via the anaphoric link) and to the person Zach is looking at (via the demonstrative link); therefore it fails to fully refer. The pronoun only partially refers (to both Udo and the person Zach is looking at).

Can Zach’s utterance be evaluated as true or false simpliciter? Because there is failure of (full) reference, it seems that we cannot straightforwardly evaluate Zach’s statement as true or false. Lack of determinate reference seems to go together with lack of determinate truth-value. According to Field’s supervaluationist account, however, that is not necessarily the case: if both Udo and the person Zach is looking at need a haircut, then the utterance will come out as true simpliciter, in Field’s framework. Be that as it may, if we suppose that Udo does not actually need a haircut, although the person Zach is looking at does, then the right thing to say is obvious: in this case at least, Zach’s utterance is partially true and partially false, but it fails to be true or false simpliciter.
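The supervaluationist treatment just described can be sketched as follows (a rough schematic rendering, not Field’s own formulation; a ‘sharpening’ s is here taken to be an admissible assignment of exactly one of its partial referents to each partially referring token):

\[ \mathrm{True}(u) \leftrightarrow \forall s\, \mathrm{True}_s(u), \qquad \mathrm{False}(u) \leftrightarrow \forall s\, \mathrm{False}_s(u) \]

where True_s(u) says that u comes out true once its tokens are taken to refer as s dictates. In the Lawlor case there are two sharpenings, one assigning Udo and one assigning the man Zach is looking at; if the two disagree about who needs a haircut, the utterance comes out neither true nor false simpliciter, but only partially true and partially false.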

5.4  Degrees of Designation

Devitt proposes to go further and to ‘refine the notions of partial designation and partial truth into notions of degrees of designation and degrees of truth’:

Instead of saying merely that ‘a’ partially designates b, I say that it ‘designates b to degree n’, or that it ‘n-designates b’. (Devitt 1981a: 147)

The degree to which a token refers to an object depends upon ‘the relative importance of groundings in that object in the causal explanation of the token’ (Devitt 1981a: 148). When a token is anchored to distinct objects through two links, say an anaphoric and a demonstrative link as in this example, it will often be the case that one link, and the grounding it leads to, plays a more central role in the causal explanation of the token than the other; this will be revealed in our judgments of truth or falsity. The graded notions of partial reference and partial truth which Devitt proposes can help us capture these fine-grained differences. Note that the ‘question under discussion’ is likely to play a key role in determining which link matters more, when several causal links stand in conflict with each other. In the Lawlor example the anaphoric link matters more, arguably, because it is Wally’s utterance which fixes the question under discussion. Owing to that factor, Zach will be understood as unwittingly saying something false of Udo (assuming Udo does not need a haircut), just as the speaker in Kaplan’s famous example unwittingly says something false of Spiro Agnew.9 (Some may be tempted to say that Zach only ‘speaker-referred’ to the man he was looking at, while he semantically referred to Udo through his anaphoric use of the pronoun. That position seems to me hard to justify. It seems more accurate to say that, while there was speaker’s reference to both Udo and the demonstrated person, the question under discussion gives precedence to the anaphoric link in evaluating the utterance for truth or falsity.)

I have just mentioned the speaker reference/semantic reference distinction. Kripke’s famous ‘raking the leaves’ example provides another illustration of confusion and partial reference.

9  ‘Suppose that without turning and looking I point to the place on my wall which has long been occupied by a picture of Rudolf Carnap and I say: [That] is a picture of one of the greatest philosophers of the twentieth century. But unbeknownst to me, someone has replaced my picture of Carnap with one of Spiro Agnew.... I have said of a picture of Spiro Agnew that it pictures one of the greatest philosophers of the twentieth century’ (Kaplan 1978: 239).


Two people see Smith in the distance and mistake him for Jones. They have a brief colloquy: ‘What is Jones doing?’ ‘Raking the leaves’. ‘Jones’, in the common language of both, is a name of Jones; it never names Smith. Yet, in some sense, on this occasion, clearly both participants in the dialogue have referred to Smith. (Kripke 1977: 263)

Kripke says that Jones is the semantic reference of the name ‘Jones’, while Smith is the speaker’s reference. Devitt rightly points out that there is speaker reference to both Smith and Jones in this case: the speaker partially refers to Smith (via the demonstrative link) and partially refers to Jones (via the quasi-anaphoric link to the network of uses underlying the name ‘Jones’). [The speaker] did not straightforwardly mean Smith, as Kripke claims, but neither did he straightforwardly mean Jones.… The speaker is confusing two people. As a result, we have no clear intuition that he meant one and not the other.… It may be objected that Kripke’s intuitions about (2) [‘Jones is raking the leaves’] are supported by the fact that the speaker would agree that he ‘referred to’ that man (pointing to Smith). But, of course, he would also agree that he ‘referred to’ Jones…. In virtue of what does a speaker mean Smith or Jones? What would make either person ‘the object of thought’? I suggest answers in terms of causal chains of a certain sort; I call them ‘d-chains’, short for ‘designating chains’. Consider a straightforward paradigmatic use of ‘Jones’ in Jones’ absence. We would say that the speaker ‘meant’, ‘intended to refer to’, etc., Jones. In virtue of what? Underlying his use of the name is a causal network stretching back through other people’s uses and ultimately ‘grounded in’ Jones in a face-to-face perceptual situation. This underlying network is made up of d-chains. The reason that Jones seems to have something to do with the speaker’s meaning in uttering (2) is that a network of d-chains grounded in Jones underlies that utterance too. That is why he used the name ‘Jones’. The reason that Smith also seems to have something to do with his meaning is that this situation is a perceptual one of just the sort to ground a network in Smith. D-chain networks are grounded in their objects not only at a baptism; they are multiply grounded. Confusions like the present one lead to a network being grounded in more than one object. Because there are d-chains to both Jones and Smith, I would say that neither was the speaker’s referent but each was his partial referent. (Devitt 1981b: 514–515)

Again, the question under discussion may play a role in determining the degree to which the token of ‘Jones’ partially refers to Jones and the degree to which it partially refers to Smith. Let’s change Kripke’s example a bit and suppose that the confusion originates with the second speaker. The first speaker says: ‘I haven’t seen Jones today. Do you know what he is doing?’ Then the other speaker responds, while pointing to Smith in the distance: ‘He is raking the leaves’. The second speaker partially refers to Jones and partially to Smith, but the reference to Jones counts more since it addresses the question under discussion. As a result, the second speaker’s utterance, ‘He is raking the leaves’, will presumably be evaluated as false (unless Jones happens to be raking the leaves somewhere at the same moment) to a greater extent than it will be evaluated as true.10

10  Potential effects of the question under discussion on truth-value judgements in cases of presupposition failure have been discussed by Strawson and others. See von Fintel 2004: 275ff and the references therein.


5.5  Semantic Coordination

I have mentioned two types of case in which the reference of a singular term (token) depends, at least in part, upon the references of other tokens. First, there is the case of so-called ‘anaphoric chains’ (Chastain 1975), where one term (e.g. a pronoun) inherits its reference from an antecedent. Second, there is the case of proper names, which are ‘quasi-anaphoric’ in the sense that (as Devitt puts it) ‘reference borrowing is of the essence of their role’ (Devitt 1981a: 45). In both cases the anaphoric or quasi-anaphoric link forces coreference between the singular term (name or pronoun) and the singular terms it is linked to in the chain or network. Coreference, in these cases, is more or less mandatory. It is de jure, not de facto.

According to several authors, the notion of coreference de jure has wider application and is not restricted to anaphora and proper names. They point out that, in general, use of the same word by the interlocutors triggers a presupposition of coreference, just as use of the same proper name does. Thus Schroeter writes:

When you hear someone use the term ‘water’ in a normal English sentence, you naturally presume that the other person must be referring to the very same kind of stuff that you yourself pick out with that term. (Schroeter 2012: 178)

And Prosser: A shared word involves a kind of common commitment that locks together the reference of each token. The fact that different speakers recognise the words of others as tokens of their own words is already sufficient to bring about this locking of reference.... This de jure locking of reference comes about precisely because different speakers trade on the identity of reference between tokens produced by different speakers. (Prosser 2018: 9–10)

Fiengo and May hold that, in a syntactic sense, a pronoun and its antecedent count as the same expression, and this suggests equating the two phenomena: coreference de jure and recurrence of expression (Fiengo and May 1994, 1996). I do not think this is right, however, even if we buy Fiengo’s and May’s point about anaphora. I think coreference de jure is a matter of semantic coordination (Fine 2007), which is an even more general phenomenon than recurrence. Using the same expression again is a way of achieving semantic coordination, but there are other ways. The next example, involving turn-taking and ‘I’/‘you’ alternation, is a case of coordination without recurrence. Imagine the following dialogue:

Lauben: ‘I have been wounded’
Leo Peter: ‘You have been wounded, really?’

The coreferential indexicals used by Lauben in speaking about himself (‘I’) and by Leo Peter in speaking to Lauben (‘you’) are clearly distinct. They are not ‘the same expression’. Yet they are not merely, i.e. de facto, coreferential: their coreference is presupposed (Prosser 2018). Leo Peter takes it for granted that the person talking to him (and self-ascribing the property of having been wounded) is the person he is now addressing in his response. Lauben likewise takes it for granted that the person


Leo Peter is addressing is himself. The presuppositional status of these ‘discourse-internal identities’ has been emphasized by Perry 1980 and Spencer 2006. Since Lauben and Leo Peter both unreflectively assume that Leo Peter’s use of ‘you’ corefers with Lauben’s use of ‘I’, they engage in what Prosser calls ‘transparent communication’, based on a shared presupposition of coreference. This is similar to what happens in a case of anaphora or in a case of name sharing. All these cases display ‘coreference de jure’.

Within a framework such as Devitt’s, the Lauben case can be explained by saying that Lauben’s reference to himself figures prominently in the causal explanation of Leo Peter’s subsequent reference to Lauben by means of ‘you’. That is what accounts for the coordination of the two tokens. On this account Leo Peter’s use of ‘you’ is anchored to Lauben twice: it is anchored to Lauben via Lauben’s own use of ‘I’, which is grounded in Lauben and to which Peter’s ‘you’ is coordinated; but it is also anchored to Lauben directly since Peter is talking to him in face-to-face conversation (and using ‘you’ as one normally does to refer to one’s interlocutor): Peter’s use of ‘you’ is directly grounded in his perception of his interlocutor. This double anchoring of Peter’s use of ‘you’ in Lauben is very similar to the cases we discussed before; and it is easy to check that it can give rise to confusions of the same type. Confusion will arise if things go wrong and the presupposition of coreference turns out to be false, i.e. if the person Leo Peter addresses when he says ‘You have been wounded, really?’ turns out not to be the person who actually said ‘I have been wounded’. Imagine that Leo Peter actually misheard Lauben’s utterance as coming from the mouth of Elwood Fritchley, and uttered ‘You have been wounded, really?’ in addressing Fritchley. (To flesh out the example, imagine also that Lauben did not notice, and thought Leo Peter was addressing him.) In such a case of confusion, Peter’s use of ‘you’ fails to (fully) refer because it simultaneously tracks two distinct persons: the person who said ‘I have been wounded’ (Lauben) and the person Peter is addressing (Elwood Fritchley). It partially refers to both. That is similar to the case in which the pronoun used by Wally and the pronoun used by Zach turn out to track distinct individuals despite the presupposition of coreference carried by the anaphoric link between them.

5.6  Coreference De Jure

From an internal or phenomenological point of view, coreference de jure between two singular terms (tokens) t1 and t2 is characterized by ‘the subjective appearance of obvious, incontrovertible and epistemically basic sameness of subject matter’ (Schroeter 2012: §1) and, correlatively, by the subject’s disposition to ‘trade upon identity’ (Campbell 1987), i.e. to go through the following type of inference:

Trading on identity (TI)
t1 is F
t2 is G
Therefore, something is both F and G

It is easy to check that trading on identity is licensed when the same name occurs in both premisses, as in (1) below, or when the singular term in the second premiss is anaphoric on the singular term in the first premiss, as in (2).

(1)
Cicero is F
Cicero is G
Therefore, someone is both F and G

(2)
Cicero_i is F
He_i is G
Therefore, someone is both F and G

This is in contrast to cases in which an additional identity premiss is needed to reach the conclusion, as in (3).

(3)
Cicero is F
Tully is G
Cicero = Tully
Therefore, someone is both F and G

Coreference de jure also has a truth-conditional aspect, which justifies its name. It is generally characterized as follows: two tokens t1 and t2 that are coreferential de jure are bound to corefer if they refer at all. In a case of anaphora, for example, there are two options: it may be that the ‘antecedent’ fails to refer, in which case the other term will fail to refer too; but if the antecedent refers, then the other term will refer to the same thing. Likewise for two tokens of the same proper name (belonging to the same network): the name may be empty, but if it isn’t, the two tokens are bound to corefer. We can generalize this by saying that in any instance of coreference de jure between two singular terms, if either of the terms refers, then the two terms corefer. The problem with that characterization is that it is refuted by the cases of confusion we have discussed. In these examples of confusion one term (fully) refers, while the other one fails to (fully) refer. In Lawlor’s example, Wally (fully) refers to Udo when he says ‘He needs a haircut’. The confusion is entirely on Zach’s side. Zach is confused and partly refers to two distinct persons; therefore he fails to (fully) refer to anyone when he says ‘He sure does’. This shows that the proper characterization of coreference de jure can’t be that t1 and t2 are bound to corefer if either refers. In this example t1 does (fully) refer, but t2 doesn’t. In Recanati 2016 I proposed a weaker characterization: in cases of coreference de jure, t1 and t2 are bound to corefer if they both refer. In all of the counterexamples to the stronger characterization, t2 fails to (fully) refer; in such cases the weaker


characterization is trivially satisfied since the antecedent of the conditional is false (it is not the case that both t1 and t2 refer). In Devitt’s framework we can talk of partial and full coreference. Two terms fully corefer just in case they fully refer to the same thing. Two terms partially corefer just in case they partially refer to the same thing. In the examples of confusion we have discussed, t2 fails to (fully) refer, while t1 (fully) refers, so t1 and t2 do not fully corefer. Still, t1 and t2 partially corefer. For example, both Wally and Zach partially refer to Udo; and Lauben and Leo Peter (in the ‘Fritchley’ variant) both partially refer to Lauben.11 In Devitt’s framework, therefore, we can characterize the truth-conditional aspect of coreference de jure in a way that minimally departs from the standard characterization: two terms t1 and t2 that are coreferential de jure are bound to partially corefer if they partially refer at all.
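For comparison, the three characterizations can be put schematically as follows (again an ad hoc sketch, with ‘R’ for full reference and ‘PR’ for partial reference, and with t1 and t2 assumed to be coreferential de jure):

\[ \textit{Strong:}\quad \exists x\,\mathrm{R}(t_1,x) \vee \exists x\,\mathrm{R}(t_2,x) \;\rightarrow\; \exists x\,(\mathrm{R}(t_1,x) \wedge \mathrm{R}(t_2,x)) \]
\[ \textit{Weaker:}\quad \exists x\,\mathrm{R}(t_1,x) \wedge \exists y\,\mathrm{R}(t_2,y) \;\rightarrow\; \exists x\,(\mathrm{R}(t_1,x) \wedge \mathrm{R}(t_2,x)) \]
\[ \textit{Partial:}\quad \exists x\,\mathrm{PR}(t_1,x) \vee \exists x\,\mathrm{PR}(t_2,x) \;\rightarrow\; \exists x\,(\mathrm{PR}(t_1,x) \wedge \mathrm{PR}(t_2,x)) \]

The confusion cases are counterexamples to the first schema, but not to the second or the third.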

5.7  Mental Files

A mental file is a mental representation which, in the normal course of events, is causally related to what it is about via ‘acquaintance relations’ or, better, ‘epistemically rewarding relations’ (ER relations): relations which make it possible for the subject to gain information from the object. The relation to an object one currently perceives is a paradigmatic ER relation, but more indirect relations established through testimony and communicative chains also count. The role of a mental file is to store the information one gains in virtue of standing in the relevant ER relation to the object (Recanati 2012, 2016). Files are typed by the type of ER relation they exploit. So we distinguish demonstrative files (the sort of file which, according to Devitt, ultimately grounds a d-chain) from memory files, recognition files etc. Certain files are based on several ER relations (or a composite ER relation) and are governed by the presupposition that these relations converge on the same object. There are also encyclopedia entries, which are opportunistic and exploit any ER relation available without imposing any such relation in particular. Perry calls them ‘detached files’ – they are the sort of file one normally associates with a proper name. The file story is fully compatible with Devitt’s framework, since Devitt himself appeals to mental representations as a key component of the d-chains which mediate between the object referred to and the linguistic token.12 On the mental file story,

11  Admittedly, only Wally and Lauben achieve full reference. Zach and Leo Peter (in the ‘Fritchley’ variant) only partially refer, and this is what prevents full coreference from obtaining between Zach and Wally and between Lauben and Peter.
12  ‘Fully compatible’ may be a little too strong, in view of the following difference between Devitt’s framework and the mental file framework. ER relations per se are not, or not necessarily, causal relations; but they make information flow possible, and information flow is a matter of causal relations between the object thought about and the thinking subject. In the normal course of events, ER relations and the causal relations of information flow go hand in hand, and relate the subject to the


reference (by a linguistic expression) is always mediated by a mental file associated with the expression token by the language user. It is mental files which ultimately refer, and they refer in virtue of the ER relations they are based on. Files can be multiply anchored if they are based on several ER relations of which it is presupposed that they converge on the same object. All the cases of confusion I have described are cases in which such a presupposition is in place and turns out to be false. I felt the need to introduce the notion of a mental file in order to make progress in characterizing the role of proper names understood as ‘devices of coreference’ (Taylor 2003). We can assume that each individual user associates a given proper name with a mental file of his own about the reference of the name. When a name is used purely deferentially (as when one picks up a name overheard in a conversation), the individual mental file the language user associates with the name is a deferential file: a file based on a specific ER relation, that of being party to a proper name using practice (Recanati 1997, 2000, 2001). Being party to a proper name using practice (through acquiring the name from someone else) is an epistemically rewarding relation: one is in a position to gain information about the referent of the name through testimony (by attending to the name when it is used, or by using it oneself to elicit information from others). Let us call that ER relation, made available by the mere sharing of words, the ‘deferential relation’. The deferential relation ‘broaden[s] the horizons of thought’, as Kaplan puts it (Kaplan 1989: 603). It makes it possible to think and talk about objects and properties one is not acquainted with: My dog, being color-blind, cannot entertain the thought that I am wearing a red shirt. But my color-blind colleague can entertain even the thought that Aristotle wore a red shirt. (Kaplan 1989: 604)

However, a deferential file, based on the deferential relation (and no other ER relation), is only a stage in the development of a full-fledged encyclopedia entry based on as many ER relations as happen to be available in context. It is that sort of file that is normally associated with a proper name. On this picture, corresponding to the network of uses of a proper name, there is what Perry calls an ‘intersubjective file network’ (Perry 2012: 200–204), constituted by the files associated with the name by the users in the network. Kamp calls it an ‘intersubjective causal network of entity representations’ (Kamp 2015: 309).13 The Devitt-Taylor view that proper names are essentially devices of coreference can

same object. Things can go wrong, however, and in that case what determines the reference is the ER relation, not the causal relation. (The examples I have in mind are cross-wiring cases, where the subject gains proprioceptive information concerning someone else’s body, or introspective information concerning someone else’s mind, and thinks of himself through a self-file based on the ER relation of identity.) For Devitt, however, what fixes reference has got to be the causal relation. How important that difference is, I don’t know; in any case it can be ignored for the purposes of this chapter.
13  Perry 2012 emphasizes the relation of ‘coco-reference’ (his name for coreference de jure) which ties together the files in the network.


now be cashed out as follows: The role of proper names is precisely to coordinate the mental files of all of those who are involved in the name-using practice.

5.8  Coordination via Proper Names

Through a proper name, the linguistic community arguably interconnects the individual files in the minds of name users, thus making information transfer between the files possible through testimony and chains of communication using the name. Kamp speaks of ‘the causal coordination of labelled entity representations that are privy to the members of the community’ (Kamp 2015: 298). The files thus interconnected can be viewed as a global, distributed file in which the community pools information about the referent. The pooling idea should be understood in the light of Putnam’s ‘division of linguistic labor’ (Putnam 1975: 227–229). The reference of a name is fixed at the community level: it is the reference of the distributed file. The distributed file itself shouldn’t be seen as the static juxtaposition of individual mental files, but as a public file managed by the community as a whole. The community filters out information tentatively contributed to the distributed file by screening testimony and correcting tentative individual contributions when they do not fit.14 In this way the community pools information from the interconnected individual files so as to build a coherent body of information about the reference of the distributed file.

What fixes the reference of the distributed file is what Devitt calls the process of grounding. Grounding always proceeds through individual mental files in the minds of the language users. Each user is responsible for grounding, via his/her file based on various ER relations, the distributed file of the community. The reference of the distributed file is a function of the references of all the (non-parasitic) individual mental files associated with the name by its users, and more precisely of all the partial references determined by the ER relations on which individual files are based. Mental files refer through ER relations, and by their reference contribute to determining the reference of the distributed file to which they belong. So distributed files are multiply grounded.

From what I have said, it follows that the individual files and the distributed file are referentially interdependent (in a non-circular manner). The reference of the individual encyclopedia entry associated with a proper name in the user’s mind depends, inter alia, upon the deferential relation which partially anchors it to the reference of the distributed file. If the individual file is based only on that deferential relation (as in the case of deferential files), it does not contribute to grounding but inherits the reference of the distributed file, thereby making it possible for a language user with no knowledge of an object to refer to it both in speech and thought.

14  For example, if I tell you that Napoleon died a few years ago, you will act as a gate-keeper and do your best to prevent that piece of alleged testimony from entering the public file associated with the name ‘Napoleon’.


Such a parasitic use takes advantage of the fact that the reference of a name is fixed by the distributed file. In the other direction, however, the reference of the distributed file itself depends on the references of the individual files in the network (unless they are purely deferential files). The reference of the individual file and that of the distributed file are interdependent, but they can diverge.15 The reference of the distributed file and that of the individual file diverge in all cases in which (i) the deferential ER relation which contributes to determining the reference of the individual file and makes it dependent upon the reference of the distributed file counts as less important in context than some more direct, e.g. perceptual, relation to the reference, and (ii) the more direct relation targets an object which turns out to be distinct from the reference of the distributed file. This situation is illustrated by Kripke’s ‘raking the leaves’ example: the ‘semantic reference’ of the name ‘Jones’ is the reference of the distributed file, while the ‘speaker’s reference’ is the reference of the individual files in the minds of the language users. In Kripke’s example, as we have seen, the individual files in the minds of the language users are confused files partly referring to Smith and partly to Jones (Devitt 2015: 118–121). But the distributed file unambiguously refers to Jones. The particular mistake made by these particular users hardly affects the reference of the distributed file.

5.9  Conclusion

Devitt’s theory of multiple grounding is important because of the light it sheds on phenomena like reference change and confusion, and on the metasemantics of coordination. The notion of partial reference Devitt borrows from Field’s work on referential indeterminacy is particularly promising. All of these ideas can easily be accommodated within the mental file framework, which is, by and large, compatible with Devitt’s causal account of reference.16 In that framework, the role of proper names is to coordinate the mental files of all of those who are involved in the name-using practice.

15  When there is referential divergence between the distributed file and the individual file associated with the name by a particular user of the name, the use of the name is deemed incorrect by the community (as part of its corrective policy with respect to distributed files). In Kripke’s example, the use of the name ‘Jones’ will be judged incorrect by any member of the community apprised of the fact that the man raking the leaves is not Jones. The two users themselves, when apprised of the facts, will recognize that their use is incorrect.
16  Devitt himself toyed with the mental file idea in ‘Against Direct Reference’ (Devitt 1989), and again in Coming to our Senses (Devitt 1996).

References

Bach, K. 1987. Thought and reference. Oxford: Clarendon Press.
Berger, A. 2002. Terms and truth: Reference direct and anaphoric. Cambridge, MA: MIT Press.


Campbell, J. 1987. Is sense transparent? Proceedings of the Aristotelian Society 88: 273–292.
Chastain, C. 1975. Reference and context. In Language, mind, and knowledge, ed. K. Gunderson, 194–269. Minneapolis: University of Minnesota Press.
Devitt, M. 1972. The semantics of proper names: A causal theory. Harvard PhD dissertation.
———. 1974. Singular terms. Journal of Philosophy 71: 183–205.
———. 1981a. Designation. New York: Columbia University Press.
———. 1981b. Donnellan’s distinction. Midwest Studies in Philosophy 6: 511–524.
———. 1989. Against direct reference. Midwest Studies in Philosophy 13: 206–240.
———. 1996. Coming to our senses. Cambridge: Cambridge University Press.
———. 2015. Should proper names still seem so problematic? In On reference, ed. A. Bianchi, 108–143. Oxford: Oxford University Press.
Donnellan, K. 1966. Reference and definite descriptions. Philosophical Review 75: 281–304.
———. 2012. Essays on reference, language, and mind. Oxford: Oxford University Press.
Evans, G. 1973. The causal theory of names. Proceedings of the Aristotelian Society 47: 187–208.
Field, H. 1973. Theory change and the indeterminacy of reference. Journal of Philosophy 70: 462–481.
Fiengo, R., and R. May. 1994. Indices and identity. Cambridge, MA: MIT Press/Bradford Books.
———. 1996. Anaphora and identity. In Handbook of contemporary semantic theory, ed. S. Lappin, 117–144. Oxford: Blackwell.
Fine, K. 2007. Semantic relationism. Oxford: Blackwell.
von Fintel, K. 2004. Would you believe it? The king of France is back! (Presuppositions and truth-value intuitions). In Descriptions and beyond, ed. M. Reimer and A. Bezuidenhout, 269–296. Oxford: Oxford University Press.
Geach, P. 1969. The perils of Pauline. Reprinted in P. Geach, Logic matters, 153–165. Oxford: Blackwell, 1972.
Kamp, H. 2015. Using proper names as intermediaries between labelled entity representations. Erkenntnis 80: 263–312.
Kaplan, D. 1978. Dthat. Syntax and Semantics 9: 221–243.
———. 1989. Afterthoughts. In Themes from Kaplan, ed. J. Almog, H. Wettstein, and J. Perry, 565–614. New York: Oxford University Press.
Kripke, S. 1977. Speaker’s reference and semantic reference. Midwest Studies in Philosophy 2: 255–276.
———. 1980. Naming and necessity. Oxford: Blackwell.
Lawlor, K. 2010. Varieties of coreference. Philosophy and Phenomenological Research 81: 485–495.
Perry, J. 1980. A problem about continued belief. Pacific Philosophical Quarterly 61: 317–332.
———. 2012. Reference and reflexivity. 2nd ed. Stanford: CSLI.
Prosser, S. 2018. Shared modes of presentation. Mind and Language. https://doi.org/10.1111/mila.12219.
Putnam, H. 1975. The meaning of ‘meaning’. In H. Putnam, Mind, language and reality: Philosophical papers, vol. 2, 215–271. Cambridge: Cambridge University Press.
Recanati, F. 1997. Can we believe what we do not understand? Mind and Language 12: 84–100.
———. 2000. Oratio obliqua, oratio recta. Cambridge, MA: MIT Press/Bradford Books.
———. 2001. Modes of presentation: Perceptual vs deferential. In Building on Frege: New essays on sense, content, and concept, ed. R. Stuhlmann-Laeisz, U. Nortmann, and A. Newen, 197–208. Stanford: CSLI.
———. 2012. Mental files. Oxford: Oxford University Press.
———. 2016. Mental files in flux. Oxford: Oxford University Press.
Schroeter, L. 2012. Bootstrapping our way to samesaying. Synthese 189: 177–197.
Spencer, C. 2006. Keeping track of objects in conversation. In Two-dimensional semantics, ed. M. Garcia-Carpintero and J. Macia, 258–271. Oxford: Clarendon Press.
Taylor, K. 2003. Reference and the rational mind. Stanford: CSLI.

Chapter 6

Reference and Causal Chains
Andrea Bianchi

Abstract  Around 1970, both Keith Donnellan and Saul Kripke produced powerful arguments against description theories of proper names. They also offered sketches of positive accounts of proper name reference, highlighting the crucial role played by historical facts that might be unknown to the speaker. Building on these sketches, in the following years Michael Devitt elaborated his well-known causal theory of proper names. As I have argued elsewhere, however, contrary to what is commonly assumed, Donnellan’s and Kripke’s sketches point in two rather different directions, by appealing to historical or causal facts of different sorts. In this paper, I shall discuss and criticize Devitt’s causal theory, which confuses things, I shall argue, by mixing, so to speak, Donnellan’s and Kripke’s sketches.

Keywords  Proper names · Causal theory of proper names · Reference · Causal chains · Speaker’s reference · Semantic reference · Michael Devitt · Keith Donnellan · Saul Kripke

As any reader knows and this volume witnesses, Michael Devitt made outstanding contributions to twentieth- and twenty-first-century philosophy in many different fields, from philosophy of language to metaphysics, and from epistemology to metaphilosophy and philosophy of science (especially philosophy of biology and philosophy of linguistics). But if asked to pick out one single contribution of his, I am certain that most philosophers would point to his causal theory of proper names. Actually, Devitt worked on the elaboration and refinement of his causal theory during his entire career as a philosopher, from its earliest stages to very recent times. Indeed, this was the subject of his 1972 Ph.D. dissertation at Harvard University, The Semantics of Proper Names: A Causal Theory, his first article, “Singular Terms,” published by the Journal of Philosophy in 1974, his first book, Designation,


published in 1981, and dozens of further publications since then, among which an important recent one is the article “Should Proper Names Still Seem So Problematic?” (2015). At the origin of all this there is, as Devitt recalls in the preface to Designation, a series of lectures given by Saul Kripke at Harvard in 1967, which Devitt had the chance to attend and which anticipated many of the themes later developed in Naming and Necessity. Devitt was quick in realizing how disruptive the content of those lectures was. Moreover, unlike many of those who first reacted to Kripke’s work, he was not overimpressed by modal issues. Unlike all those who wrongly took Kripke’s fundamental claim about proper names to be that they are rigid designators, Devitt immediately understood that the notion of rigid designation, as characterized in Lecture I of Naming and Necessity, was nothing but a useful instrument for arguing against what Kripke called the “description theory of proper names.” But Devitt saw clearly that, first, much more powerful arguments against it  – the argument from ignorance and the argument from error  – are offered in Lecture II;1 and, second, that what was really revolutionary in Kripke’s work was his positive answer to the question concerning reference determination – the chain of communication picture sketched in the final part of Lecture II. As Devitt wrote at the very beginning of his first article, “[t]he main problem in giving the semantics of proper names is that of explaining the nature of the link between name and object in virtue of which the former designates the latter” (1974: 183; see also 1976: 406 and 1981a: 6). And Kripke had not only shown that the description theory is “mistaken not merely in details but in fundamentals” (Devitt 1974: 183) but also “indicated where the truth of the matter lies, namely in a ‘causal theory’ of proper names” (183–184). Here is the famous passage where Kripke introduced the key idea: Someone, let’s say, a baby, is born; his parents call him by a certain name. They talk about him to their friends. Other people meet him. Through various sorts of talk the name is spread from link to link as if by a chain. A speaker who is on the far end of this chain, who has heard about, say Richard Feynman, in the market place or elsewhere, may be referring to Richard Feynman even though he can’t remember from whom he first heard of Feynman or from whom he ever heard of Feynman. He knows that Feynman was a famous physicist. A certain passage of communication reaching ultimately to the man himself does reach the speaker. He then is referring to Feynman even though he can’t identify him uniquely. He doesn’t know what a Feynman diagram is, he doesn’t know what the Feynman theory of pair production and annihilation is. Not only that: he’d have trouble distinguishing between Gell-Mann and Feynman. So he doesn’t have to know these things, but, instead, a chain of communication going back to Feynman himself has been established, by virtue of his membership in a community which passed the name on from link to link, not by a ceremony that he makes in private in his study: ‘By “Feynman” I shall mean the man who did such and such and such and such’. (Kripke 1972: 298–299 (1980: 91–92))

1  In this context, it is perhaps worth noting that in the later “A Puzzle about Belief,” Kripke himself wrote that the argument from ignorance is “the clearest objection” (1979: 246) to the description theory.


Unfortunately, however, Kripke did not develop the idea to produce a full-blown theory, but limited himself to some sketchy remarks, offering a rough picture, “a picture which,” as he himself said, “if more details were to be filled in, might be refined so as to give more exact conditions for reference to take place” (Kripke 1972: 300–301 (1980: 94)).2 Since, however, “Kripke 1972 leaves a great deal of work to be done on the theory but it establishes the theory’s plausibility adequately enough to justify this investigation” (Devitt 1976: 417 n. 2), Devitt himself sat down to fill in the details. The result, of course, is his own causal theory. To introduce the theory and explain its origin, Devitt writes the following in the preface of Designation: There are two steps in my causal theory of proper names: a causal theory of reference borrowing and a causal theory of grounding. The theory of reference borrowing explains how those of us who have never grounded a name in its bearer can get the benefit of the groundings of others. The theory of grounding explains how, ultimately, names are linked to their objects. In 1967 I attended a series of lectures by Kripke at Harvard, parts of which later became the paper “Naming and Necessity” (1972). From those lectures I took the idea of a causal theory of reference borrowing. (Devitt 1981a: x)

And, significantly for what I want to say in this paper, he immediately adds, “Donnellan has a similar idea,” mentioning Keith Donnellan’s two seminal papers on proper names, “Proper Names and Identifying Descriptions” (1970) and “Speaking of Nothing” (1974). I shall come back to this in a moment. Before doing so, however, let me say that I am truly sympathetic to the spirit of Devitt’s proposal. He must be given credit for having clearly identified the central questions reference theorists should try to answer – rigidity, for example, is only a distraction – and where they should seek an answer; and for having looked at all this from a naturalistic perspective, a perspective that one does not find in either Kripke or Donnellan. The result of all this was genuine progress in philosophy of language: we now better understand the relation between language and reality. Having said this, however, I shall devote the rest of my paper to expressing and motivating my dissatisfaction at how Devitt has filled in some of the details. In doing so, I shall claim, he lost track of what I take to be a basic – perhaps the basic – insight of Kripke’s. Before proceeding, though, let me make some further preliminary remarks to avoid misunderstanding. First, Devitt’s causal theory is meant to cover not only proper names but also referentially used definite descriptions, demonstratives, and natural kind terms. On the one hand, I believe that a causal theory can be extended even further, to cover common nouns other than natural kind terms (e.g., artifactual kind  terms), and possibly adjectives and verbs as well. On the other hand, I am skeptical about Devitt’s extension of it to compound expressions such as definite descriptions. My only focus in this paper, however, will be proper names: I shall 2  For a discussion of the reasons why Kripke abstained from refining his picture “so as to give more exact conditions for reference to take place” and thus from offering a theory, see Bianchi 2015: 94–95.

124

A. Bianchi

express and motivate my dissatisfaction at how Devitt has filled in some of the details in order to transform Kripke’s chain of communication picture of proper names into a causal theory of proper names. Second, Devitt’s chosen word for the causal relationship connecting a term and an object is “designation,” whereas he uses “reference” as an umbrella term for both designation and the non-causal relation he calls “denotation.” Accordingly, in his terminology both the name “Michael Devitt” and the attributive description “the philosopher celebrated in this volume” refer to him, but only the name designates him. Instead, as is now more common, in what follows I shall use “reference” only for the causal relationship. Accordingly, I say that the name “Michael Devitt,” but not the attributive description “the philosopher celebrated in this volume,” refers to Devitt. Well, then. Let us grant that reference is a causal relationship. More specifically, a proper name token refers to an object that is at the origin of a certain causal chain. But which causal relationship is reference? More specifically, which causal chain has the referent of the proper name token at its origin? In fact, a proper name token bears (different) causal relationships to many different objects; as a consequence, it is at the end of many different causal chains, at whose origins there are many different objects.3 Think, for example, of the causal relationship between a proper name token and the person who produced it – the speaker: obviously we do not want to say that a proper name token refers to the person who produced it only because this person is at the origin of a particular causal chain ending in the token – if I utter the name “Michael Devitt,” I am at the origin of a causal chain ending in the token, but the token does not refer to me. More to the point, think of Kripke’s Smith-Jones case:4 there, the token of the name “Jones” is causally related both to Jones, via a causal chain that goes back to a baptism or something like that, and to Smith, because the production of the token was prompted by the speaker’s seeing him raking the leaves. Who, then, does the token refer to, Jones or Smith?5 To deal with this and other cases, we certainly need to say more about the causal relationships and causal chains involved in reference. Having said this, however, our ‘causal’ questions (Which causal relationship is reference? Which causal chain has the referent of the proper name token at its 3  Because of this, one should be careful, when offering a causal theory of proper names, not to talk of the cause, or the origin, of a proper name token. Although certainly aware of this (“[o]bjects can be involved in the causal explanation of a name in various ways without being the object the name designates” (Devitt 1981a: 177), so the aim is “to distinguish (in nonsemantic terms) the semantically significant d[esignating]-chains from other causal connections between singular terms and the world” (129)), in his first writings even Devitt sometimes slips up, for example when he says that “we look to the cause of the utterance to determine reference” (1974: 197; see also p. 193 and 1976: 413). 4  “Two people see Smith in the distance and mistake him for Jones. They have a brief colloquy: ‘What is Jones doing?’ ‘Raking the leaves.’ ‘Jones,’ in the common language of both, is a name of Jones; it never names Smith. Yet, in some sense, on this occasion, clearly both participants in the dialogue have referred to Smith” (Kripke 1977: 263). 
5  For an interesting discussion of the many causal chains involved in this and other cases, see Almog et al. 2015: 368–374.

6  Reference and Causal Chains

125

origin?) might seem not to raise difficult problems. Didn’t those who inspired Devitt’s causal theory, Kripke and Donnellan, give clear answers  – Kripke even called the relevant causal chain a “chain of communication” – so that the remaining task is only the tedious one of filling in all the details? Not so, I claim. The fact is that Kripke and Donnellan, contrary to what Devitt seemed to think at the time he wrote “Singular Terms” and Designation, gave very different answers to our questions. Or so I argued in a long paper that some years ago I wrote together with Alessandro Bonanini, devoted to reconstructing Donnellan’s “historical explanation theory” of proper names as it emerges in the two papers Devitt himself mentions in the preface of Designation and to contrast it with Kripke’s chain of communication picture. Undoubtedly, Kripke and Donnellan were fighting against the same enemies, so to speak – Donnellan’s attack on what he calls the “principle of identifying descriptions” converges with that of Kripke’s on the “description theory.” And some of Donnellan’s arguments parallel Kripke’s most powerful ones, those from ignorance and from error.6 Moreover, on the positive side, they both took causal reality (or, as Donnellan preferred to say, history) to be the key to proper name reference. Nonetheless, they looked at causal realities (histories) of different sorts, so to speak. As we know, Kripke insisted on the causal relationship a given proper name token bears to preceding tokens of the same name (the chain of communication). Donnellan, instead, placed the explanatory burden on the mind of the speaker: what determines the reference of a proper name token is the cognitive status of the person who produced it. Indeed, Bonanini and I summarized Donnellan’s account in the following way: “A token of a proper name N produced by a speaker S at the time T refers to an individual X if and only if S’s having X in mind is appropriately involved in the explanation of S’s production of the token” (Bianchi and Bonanini 2014: 188 n. 27).7 Note that in this account neither preceding tokens of the name N nor a baptism are mentioned at all. I shall not spend much time on this here, but an interesting way to highlight the difference between Kripke’s and Donnellan’s accounts is in terms of reference borrowing. In fact, while reference borrowing is obviously fundamental for Kripke, there is no place for this alleged phenomenon in Donnellan’s account. If Donnellan’s historical explanation theory is true, in fact, when we use a proper name we simply do not borrow reference. On the contrary, we always fix it anew. We can have an object in mind in various ways – for example, because someone talked to us about it (in this sense, there can be something like a having-in-mind borrowing) – but, once we have that object in mind, we can refer to it by whatever

6  See Donnellan 1970: 342–343 and Kripke 1972: 291–292 (1980: 81) for the argument from ignorance; Donnellan 1970: 347–349 and Kripke 1972: 294–295 (1980: 83–85) for the argument from error. 7  Following Donnellan’s inclination, our original formulation took reference to be a relation between speakers and individuals: “In using a proper name N at the time T a speaker S refers to an individual X if and only if S’s having X in mind is appropriately involved in the explanation of S’s use of N at T” (188). The formulation I am giving here in terms of tokens, which I have chosen for ease of exposition, is equivalent.

126

A. Bianchi

name we want: the token we produce refers to the object we have in mind, no matter how that object was baptised and what any preceding tokens of the same name referred to. Summing up, contrary to what Devitt seemed to think at the time when he first elaborated his causal theory, it seems indisputable that Kripke and Donnellan had different views on proper name reference  – they gave different answers to our ‘causal’ questions. Although both Kripke’s and Donnellan’s answers ‘converge’ on the same referent in the vast majority of cases – they both account for Devitt’s being the referent of all the tokens I produced in writing this paper of the name “(Michael) Devitt” (but note that the description theory would also account for this)  – they appeal to different sets of considerations. And in certain cases, for example Kripke’s Smith-Jones case, Donnellan’s “Aston-Martin” case, and, possibly, Gareth Evans’ “Madagascar” case, they even diverge as to what or whom the referent of a proper name token is.8 As of recently, under the influence of Joseph Almog, various philosophers, related in one way or another with UCLA, where Donnellan taught for many years, have rediscovered and developed Donnellan’s answers, determining something like a Donnellan Renaissance in the theory of reference.9 Here is, for example, what Almog writes: Donnellan’s idea … unifies the cases of proper names, demonstrative [sic] and definite descriptions. What is at stake for Donnellan is not so much the morphology of the specific expression used but the underlying cognitive relation between the cognizer and the cognized object. In the party, I have in mind a given object, Sir Alfred, before any linguistic activity. I can now use a whole spectrum of expressions to get at what I am already cognitively bound to. I may say “he”; I may say “she” (if Sir Alfred is in female attire); I may say “Sir Alfred,” using his correct name, or “George,” using a false name some prankster tossed to me while earlier pointing out to me Sir Alfred; and I may use a whole variety of descriptions, for example “the theologian speaking loudly about his Catholic faith,” or “an eloquent but slightly tipsy theologian standing to the right of Margaret,” even if Sir Alfred is no theologian, et cetera. Through and through, the one underlying fact is that I am wired to this man by an information link from him to me  – to my cognitive system  – and the expression(s) I am about to use ride back on that wire, externalizing the cognitive contact already made. (Almog 2012: 181)

Now, let us ask where Devitt stands concerning this. We said that he filled in the details needed to transform a causal picture into a causal theory. But which causal picture did Devitt transform into a theory, Kripke’s or Donnellan’s? In other words, what are Devitt’s answers to our two questions? My dissatisfaction with Devitt’s 8  See Bianchi and Bonanini 2014: 200. Capuano 2018 tries to defend a Donnellanian treatment of these cases. Wulfemeyer 2017a uses the “Madagascar” case to argue in favor of Donnellan’s answers to our ‘causal’ questions. As is well known, the “Madagascar” case was introduced by Evans to argue against what he called the Causal Theory of Names (1973: 11). 9  See in particular Almog 2012, 2014: ch. 3; Capuano 2012a, 2012b, 2018; Pepp 2009, 2012; Almog et  al. 2015; Wulfemeyer 2017a. For criticisms, see Martí 2015 and my “Reference and Language,” forthcoming. Pepp 2019 defends Donnellan’s answers and argues against Martí’s views and my own attempt (Bianchi 2015) to develop Kripke’s answers, filling in some of the details needed to transform it into a full-blown theory.

6  Reference and Causal Chains

127

way of filling in the details is due to the fact that it does not make it that clear what his answers are. Or, rather, that it confuses things, by mixing, so to speak, Donnellan’s and Kripke’s answers. Why am I saying this? Up to now, I have only mentioned a passage from the preface to Designation where Devitt assimilates Kripke’s and Donnellan’s ideas. But this might only indicate that he was mistaken about the content of Donnellan’s proposal.10 In fact, in his formative years Devitt was exposed to Kripke’s research much more than to Donnellan’s. Thus, it is natural to think that when he started to work autonomously on these issues he decided to develop Kripke’s answers – the chain of communication picture  – into a causal theory, and that he mistakenly thought that Donnellan’s answers were similar if not identical. The fact that an alleged cornerstone of Devitt’s causal theory is the notion of reference borrowing, which clearly sits well with Kripke’s picture, gives further support to this. What’s more, here is what Devitt writes in the first section of his 1974 article: The central idea of the causal theory of proper names is that our present uses of a name, say ‘Aristotle’, designate the famous Greek philosopher Aristotle, not in virtue of the various things we (rightly) believe true of him, but in virtue of a causal network stretching back from our uses to the first uses of the name to designate Aristotle. Our present uses of a name borrow their reference from earlier uses. It is this social mechanism that enables us all to designate the same thing by a name. This central idea makes our present uses of a name causally dependent on earlier uses of it. (Devitt 1974: 184; see also 1976: 409, 1981a: 25, and 2015: 110)

This is, of course, the idea backing Kripke’s chain of communication picture, and I am more than ready to subscribe to it. Indeed, the appeal to a social mechanism of this sort seems to me a – possibly the – basic insight of Kripke’s on proper name reference. Unfortunately, however, no more mention of this or other social mechanisms occurs in Devitt’s article. Instead, in the fourth section a different notion is called upon to develop his causal theory. Quite surprisingly, Devitt writes, only a few pages after the above passage, the following: “Which object does a name designate? It is natural to say that it designates the object the speaker had in mind or meant” (1974: 188; see also 1976: 406 and 1981a: 32). Indeed, “[w]e can say roughly … that a name token designates an object if and only if the speaker had the object in mind (meant the object) in uttering the token” (1974: 189). This is, according to Devitt, “an insight of description-theorists” (1976: 406; see also 1974: 188 and 1981a: 32) that should be kept even after Kripke’s “decisive refutation of description-theories” (1976: 407) of proper names. Where description-theorists  As a matter of fact, Devitt continued to be mistaken about this (as many others are – for some examples, see Bianchi and Bonanini 2014: 176 n. 2). Here is what he wrote many years later in an encyclopedic entry on reference: “Kripke and Donnellan followed their criticism of description theories of names with an alternative view. This became known as the ‘causal’ ‘historical’ theory, although Kripke and Donnellan regarded their view as more of a ‘picture’ than a theory” (1998: 157–158). And he continued: “The basic idea of this theory is that a name designates whatever is causally linked to it in an appropriate way” (158). What Bonanini and I have argued is precisely that Kripke and Donnellan had very different ideas on what the “appropriate” causal link is, hence that they did not offer one single picture, but two rather different ones.

10

128

A. Bianchi

went wrong is in their understanding of having-in-mind in terms of (identifying) knowledge. On the contrary, we must understand it in causal terms. Indeed, Devitt claims that “one has an object in mind in virtue of a causal connection between one’s state of mind and the object” (1974: 188; see also 1976: 409–410 and 1981a: 33), and he adds that “[o]ne can ‘borrow’ the ability to have something in mind” (1974: 191), because “[t]here can be a causal link of the required kind even though the speaker has had no direct experience of the object: it will be a causal connection running through others back to speakers who did experience the object” (ibid.; see also 1981a: 38). In fact, Devitt goes as far as to offer a “rough” causal “analysis of having an object in mind in using a name (meaning an object by a name)” (1974: 189; see also 1976: 410). Now, this is almost exactly the view that, according to Bonanini’s and my reconstruction, Donnellan had. Note, for example, how similar the following passage, concerning the first use of a name “acquired not at a naming ceremony but through use” (1974: 199) is to the one by Almog quoted above: In virtue of what does such a first use designate the object? Our answer is along familiar lines. The speaker had the object in mind. He had it in mind in virtue of a causal connection. This connection might have led him to use a certain description had he been searching for a description to designate it, or a certain demonstrative had he been searching for a demonstrative, but did lead him to use a certain name when he was searching for an apt name for it. Part of what he intended was to bestow the name (provisionally, perhaps) on the object. (Devitt 1974: 199; 1981a: 58–59)

And, emphatically, this is not the view that Kripke had. In fact, there is no mention at all of having-in-mind, or related cognitive states or events, in the second lecture of Naming and Necessity. All that Kripke writes there is compatible with the claim that a proper name token can refer to an object the speaker does not have in mind. I am not saying that Kripke believes this can indeed be the case, and, as for myself, I believe that, under a certain construal of the notion of having in mind, this cannot be the case: we always have in mind what the proper name token we produce refers to. But, certainly, having in mind does not play any deep explanatory role in Kripke's chain of communication picture. Thus, is it really this picture that Devitt developed into a full-blown causal theory?

It might be objected that Devitt has always made it clear that the appeal to having in mind is only an "intuitively appealing start" (2015: 111) for, or a "stepping stone" (1974: 202) to, his causal theory.11 In fact, he may rightly claim that he has offered what, in his 1974 and 1976 articles, he called an "analysis," and later on an "explanation" (1981a: 33, 225), or, "better, an explication" (2015: 111), of having in mind, and that the final, 'official', formulation of his causal theory does not mention having in mind at all: "the notion does not feature in the theory" (1981a: 138). However, while this is true, it seems to me to be beside the point. The fact remains that his explanation of proper name reference, like Donnellan's, is given in terms of the mental or cognitive state of the speaker. In a nutshell, according to Devitt a proper name token refers to whatever the thought that immediately caused its production is about, where the explanation of the thought's aboutness is also given in causal terms. This gives us Devitt's answers to our two 'causal' questions. But these answers are Donnellan's answers, not Kripke's. Indeed, even Devitt's causal explanation of a thought's aboutness is reminiscent of the one offered by Donnellan and elaborated by the neo-Donnellanians.12 In a footnote of his 2015 article, Devitt shows some awareness of this:

In a highly UCLA-centric recent volume in honor of Donnellan, called "Having in Mind" …, Joseph Almog proposes a causal explanation of having in mind (2012: 177, 180–2) that has similarities to my old explanation, as Bianchi indicates (2012: 89 n. 7). Almog attributes the explanation to Donnellan…. Donnellan's talk of "having in mind" is … a "metaphor" that needs development …. One might well think that a causal explanation is the natural development. It certainly seemed so to me and that was why I made it. Having made it, however, we should see this folk talk of "having in mind" as but "a stepping stone" to a causal theory of designation. (Devitt 2015: 111 n. 4)

11  This objection is hinted at in Martí 2015: 80 n. 7. Devitt (2015: 111 n. 5) agrees. As I explain in the text, I disagree.

Given all this, it seems fair to me to conclude that Devitt's causal theory shouldn't be seen as a development of Kripke's chain of communication picture. It should, rather, be considered as a development of the alternative causal picture offered by Donnellan, Donnellan's historical explanation theory.

There is, however, an important complication that we need to discuss. In Designation, in fact, Devitt introduces in his theory the distinction between speaker's reference and semantic reference (in his terminology, speaker-designation and conventional-designation). And his final, 'official', formulation of the theory contains two separate clauses for speaker's reference and semantic reference. Now, certainly this move was not inspired by Donnellan, as Donnellan did not make any such distinction, and his historical explanation theory is meant to be a theory of the only kind of reference he was ready to recognize. What's more, the distinction was famously used by Kripke to argue against Donnellan, and almost certainly Devitt took it from Kripke's 1977 article. Shouldn't we reconsider, then, the conclusion I drew a moment ago? Not so, I claim. It must be admitted that Devitt's acknowledging a distinction between speaker's reference and semantic reference allows him to avoid some of the extreme consequences that can be drawn from Donnellan's account, for example that, at least from a semantic point of view, there are no languages.13 But this does not make Devitt's theory any more Kripkean, or so it seems to me.

12  For the neo-Donnellanian account of (singular) thought, see especially Capuano 2015 and Wulfemeyer 2017b.

13  In fact, Bonanini and I ended our article by claiming that Donnellan's historical explanation theory can be seen as anticipating some radical theses later defended by Donald Davidson (1986, 1994):

we believe that according to Donnellan there are no languages at all, at least from a semantic point of view. What there are, in the end, are just uses of expressions, aimed at communication. There are present uses, and there are past uses. Before using an expression in order to communicate something, it is certainly helpful to consider preceding uses of it – if they succeeded in communicating what we want to communicate, they may succeed again. But, as we have seen in the case of proper names, past uses do not determine the semantic properties of the expression at all. In order to communicate, anything goes, if it may reasonably succeed. (Bianchi and Bonanini 2014: 201)

I criticize this aspect of Donnellan's theory in my forthcoming "Reference and Language."

First, consider Devitt's final, 'official', formulation of his theory of speaker's reference: "Speaker-Designation: A designational name token speaker-designates an object if and only if all the designating-chains underlying the token are grounded in the object" (Devitt 2015: 125). The formulation is rather technical, but if we look at how the technical terms are introduced by Devitt, we soon realize that the explanation of speaker's reference is very similar to Donnellan's explanation of reference. Designating chains are introduced by Devitt in Designation in this way:

"underlying" a name token is a "causal chain" "accessible to" the person who produced the token. That chain, like the ability that partly constitutes it, is "grounded in" the object the name designates…. I shall call such a causal chain a … "designating-chain." (Devitt 1981a: 29)

And they are thus characterized: "D[esignating]-chains consist of three different kinds of link: groundings which link the chain to an object, abilities to designate, and communication situations in which abilities are passed on or reinforced (reference borrowings)" (Devitt 1981a: 64; 2015: 110). Here, it is important not to be misled by the word "designating" in "designating-chains." In fact, designating-chains underlying a proper name token do not necessarily originate in a baptism or something like that. For example, in the Smith-Jones case already mentioned (see note 4), there is, according to Devitt, a designating-chain underlying the "Jones" tokens produced in that particular situation by the two speakers that originates in their perception of Smith (although there is another one originating in Jones' baptism).14 But, then, Devitt's 'official' formulation of his causal theory of speaker's reference does not differ much from his 1974 account of reference discussed above, as also the following comment on an example in his 2015 article shows: "The token designated that person in virtue of being immediately caused by a thought that is grounded in that person by a designating-chain" (Devitt 2015: 111). And we have already seen that this provides an articulation of Donnellan's answers to our two causal questions. In fact, Devitt's causal theory of speaker's reference has one of the extreme consequences of Donnellan's historical explanation theory: once we have a thought about an object, we can express the former and (speaker-)refer to the latter by whatever name we want. The token we then produce (speaker-)refers to the object the thought is about, no matter how that object was baptised and what any preceding tokens of the same name referred to:

A person can, of course, speaker-designate an object by a name without there being any convention of so doing. All that is required is that a token of the name have underlying it a designating-chain grounded in the object. So I could now speaker-designate Aristotle with any old name simply on the strength of the link to Aristotle that is constitutive of my ability to designate him by 'Aristotle.' (Devitt 2015: 120)

14  As a consequence, Devitt (1981b: 515; 2015: 120) claims that those tokens partially speaker-refer to Smith and partially speaker-refer to Jones. As Antonio Capuano pointed out to me, Kripke (1977: 274 n. 28), as well, contemplates this possibility.

Second, consider Devitt's final, 'official', formulation of his theory of semantic reference: "Conventional-Designation: A designational name token conventionally designates an object if and only if the speaker, in producing the token, is participating in a convention of speaker-designating that object, and no other object, with name tokens of that type" (Devitt 2015: 126). Although certainly non-Donnellanian, this theory does not appear to be Kripkean, either. Or, at least, it does not resemble Kripke's chain of communication picture – it does not appeal to any chain of tokens or uses of proper names. Perhaps, if participating in a convention can be explained in causal terms, as Devitt claims it can, the theory is still causal, but it does not seem to me to provide anything like Kripke's answer to our first causal question (Which causal relationship is reference?). Insofar as some causal chains are appealed to in it, they are those by which speaker's reference is accounted for, which, as I argued, are reminiscent of Donnellan's rather than of Kripke's.

In fact, a striking aspect of Devitt's theory of semantic reference is that it explains it in terms of speaker's reference. This should immediately remind us of Grice's general project of grounding sentence or word meaning on utterer's meaning: "The meaning (in general) of a sign needs to be explained in terms of what users of the sign do (or should) mean by it on particular occasions" (Grice 1957: 217). Indeed, Devitt may be seen – and I believe he would agree on this – as pursuing Grice's project,15 although certainly he does not want to take the further step of explaining utterer's meaning (speaker's reference) in terms of communicative (referring) intentions. Now, doesn't Kripke's appeal to the distinction between speaker's reference and semantic reference show that he too was pursuing, or at least endorsing, Grice's general project? If this were so, then at least in this respect Devitt's theory could be seen as a development of Kripke's ideas on reference. But, unfortunately, this is not so. In his 1977 article, after presenting the already mentioned Smith-Jones case, Kripke asks how we can account for it. Here is his answer:

Suppose a speaker takes it that a certain object a fulfills the conditions for being the semantic referent of a designator, "d." Then, wishing to say something about a, he uses "d" to speak about a; say, he says "ϕ(d)." Then, he said, of a, on that occasion, that it ϕ'd; in the appropriate Gricean sense ..., he meant that a ϕ'd. This is true even if a is not really the semantic referent of "d." If it is not, then that a ϕ's is included in what he meant (on that occasion), but not in the meaning of his words (on that occasion). (Kripke 1977: 263–264)

From this, Kripke arrives at his characterization of speaker's reference:

we may tentatively define the speaker's referent of a designator to be that object which the speaker wishes to talk about, on a given occasion, and believes fulfills the conditions for being the semantic referent of the designator. He uses the designator with the intention of making an assertion about the object in question (which may not really be the semantic referent, if the speaker's belief that it fulfills the appropriate semantic conditions is in error). The speaker's referent is the thing the speaker referred to by the designator, though it may not be the referent of the designator, in his idiolect. (Kripke 1977: 264)

15  "We seem to need notions of speaker meaning that enable us to explain conventional meaning. It seems that conventional meaning must be built up in some way from common speaker meanings" (Devitt 1981b: 519). See also Devitt 1981a: sect. 3.3.

So, it seems that according to Kripke, for there to be speaker's reference, there has to be, (1), a speaker's use of a designator to assert something (but, I assume, any other illocutionary act would do as well), backed by, (2), his or her wish to talk about a particular object, and, (3), his or her belief about that particular object that it is the semantic referent of the designator. More precisely, a speaker-refers to an individual b by using a designator c if and only if, (1), a wishes to talk about b, and, (2), a believes of b that it is the semantic referent of c, and, (3), a produces a token of c in the course of accomplishing an illocutionary act.

From this, it should immediately be clear that according to Kripke speaker's reference cannot be used to account for semantic reference. According to his definition, in fact, one cannot speaker-refer to b by using c if he or she does not believe of b that it is the semantic referent of c, namely if he or she does not have the concept of semantic reference. But, then, speaker's reference obviously presupposes semantic reference: the second clause in Kripke's definition rules out the possibility of explaining the latter notion in terms of the former.

May Kripke's notion of speaker's reference play some other significant explanatory role? Surely, what one speaker-refers to on a given occasion has some bearing on how he or she acts in the situation he or she is in. So, the notion might help to account for speaker's behavior. In fact, it is at least arguable that what explains people's behavior is their beliefs and desires (see e.g. Fodor 1987: ch. 1), and Kripke's notion of speaker's reference is couched in terms of beliefs and desires ("wishes"). There is no doubt that the beliefs and desires involved in a's speaker-referring to b explain part of a's behavior (especially, a's behavior concerning b). For this reason, the notion may certainly play some explanatory role in psychology. However, this role is simply inherited from the notions by which it has been defined. So, no deep theoretical gain seems to have been achieved by introducing the notion.

Does this mean that Kripke's distinction is of no use, contrary to what Devitt, as well as many other philosophers of language, thinks? I do not believe so. But I believe, and I believe Kripke believes, that its use is mainly negative. It may dialectically help to convince people that we do not have to posit unexpected semantic relations ("ambiguities") to explain some intuitions we may have. Indeed, by using some expressions we may sometimes want to refer to things which those expressions do not semantically refer to. (A case in point could be, of course, that of referentially used definite descriptions.) Obviously, none of this shows that Grice's project cannot be pursued in the case of proper names. Unlike Kripke's, Devitt's Donnellanian explanation of speaker's reference does not appeal to semantic reference, hence there is no problem in using his notion of speaker's reference to account for semantic reference. But this shows, once again, how different Devitt's and Kripke's ideas on reference are.

Note, also, that in Kripke's definition of speaker's reference there is no mention of causal chains: causal chains – chains of communication – are used by him to account for semantic reference. Therefore, Kripke's chain of communication picture and Devitt's causal theory not only appeal to different types of causal chains (chains of communication vs. Donnellanian chains), but make them play a different role. In Kripke's picture, chains of communication explain semantic reference, and speaker's reference is then explained in terms of certain propositional attitudes the speaker holds concerning it. In Devitt's theory, on the contrary, Donnellanian chains explain speaker's reference, and semantic reference is then explained in terms of certain conventions concerning it. Again, Devitt's theory does not appear to be a development of Kripke's picture.16

Let me conclude. In his "Afterthoughts," David Kaplan introduced what I take to be an important distinction, the one between subjectivist and consumerist semantics. In Kaplan's opinion, traditional semantics, such as Gottlob Frege's and Bertrand Russell's, are subjectivist: they are characterized by the thesis that "[w]hen we speak, we assign meanings to our words," since "the words themselves do not have meanings" (Kaplan 1989: 600). Hence, according to them, "like Humpty Dumpty, everyone runs their own language" (ibid.). For example, in order to refer to, say, Aristotle, speakers must attach a meaning somehow available to them to the expression they use (a proper name like "Aristotle," for example). If they are not able to do so, they will not be able to refer to him. To subjectivist semantics, Kaplan opposes the view that "we are, for the most part, language consumers" (602). According to consumerist semantics, in fact, "[w]ords come to us prepackaged with a semantic value." Hence, "[i]f we are to use those words, the words we have received, the words of our linguistic community, then we must defer to their meaning" (ibid.). In order to refer to Aristotle, for example, speakers only have to use an expression whose meaning allows them to do this (a proper name like "Aristotle," for example): when they acquired the expression, by hearing or reading it, they acquired a means to refer to him.17

Now, Kaplan sensibly claims that consumerist semantics goes hand in hand with Kripke's "historical chain picture of the reference of names," as the latter offers "an alternative explanation of how a name in local use can be connected with a remote referent, an explanation that does not require that the mechanism of reference is already in the head of the local user" (Kaplan 1989: 602–603). Indeed, if this picture is correct, in most of our name uses we consume, in Kaplan's sense, names that others have created. As normal speakers, we do not play any semantic role.18 And, of course, what makes this possible is none other than reference borrowing (understood à la Kripke).

16  For more on Kripke's distinction, the Gricean project, and Devitt's perspective with regard to them, see Bianchi 2019.

17  On this issue, see also Hinchliff 2012.

18  Kaplan jokingly writes that "[i]n our culture, the role of language creators is largely reserved to parents, scientists, and headline writers for Variety; it is by no means the typical use of language as the subjectivist semanticists believe" (602). As a matter of fact, things are not so simple, because of the phenomenon of inadvertent creation exemplified by the "Madagascar" case (see Bianchi 2015: 104–106 for a discussion).

On the contrary, Donnellan’s historical explanation theory leads to a radically subjectivist semantics, or so Bonanini and I have argued. Indeed, if that theory is true, every speaker plays a role in determining the semantic properties of the proper name tokens he or she produces – he or she fixes their reference, even when he or she fixes it in accordance with preceding tokens of the same name. Now, what about Devitt’s semantics? Certainly it is not as subjectivist as Donnellan’s, since it does not identify semantic reference with speaker’s reference. But it is not as consumerist as Kripke’s, either, since it explains semantic reference in terms of speaker’s reference.19 Because of this, it seems to me to miss some of the power and radicality of Kripke’s chain of communication picture, according to which language is social through and through, and speakers can only use it to refer to things because some of its expressions semantically refer to these things.20

19  Another aspect of Devitt's causal theory that is relevant in this context is that, according to it, proper names can be grounded in objects not only at the moment of their introduction, but on many later occasions. As Devitt writes, in fact, "Nana is involved in the causal network for her name at more points than its beginning at her naming ceremony; the network is multiply grounded in her" (1974: 198; 1981a: 56). These later groundings are semantically relevant: "[d]ubbings and other first uses do not bear all the burden of linking a name to the world" (2015: 114). Thus, speakers who produce a token of a name already in use do play, at least sometimes, a semantic role, according to Devitt. This, again, seems to militate against considering Devitt's semantics as fully consumerist (as I take Kripke's to be).

20  I presented drafts of this paper at the Barcelona Language and Reality: Themes From Michael Devitt workshop and at the Dubrovnik Philosophy of Linguistics and Language course, both of which took place in September 2018. I am grateful to all those who intervened on those occasions. I would also like to thank Antonio Capuano and Michael Devitt for their comments. Notwithstanding the disagreement expressed in it, I hope that the paper made it clear how great my intellectual debt to the latter is.

References

Almog, J. 2012. Referential uses and the foundations of direct reference. In Having in mind: The philosophy of Keith Donnellan, ed. J. Almog and P. Leonardi, 176–184. Oxford: Oxford University Press.
———. 2014. Referential mechanics: Direct reference and the foundations of semantics. Oxford: Oxford University Press.
Almog, J., P. Nichols, and J. Pepp. 2015. A unified treatment of (pro-)nominals in ordinary English. In On reference, ed. A. Bianchi, 350–383. Oxford: Oxford University Press.
Bianchi, A. 2012. Two ways of being a (direct) referentialist. In Having in mind: The philosophy of Keith Donnellan, ed. J. Almog and P. Leonardi, 79–92. Oxford: Oxford University Press.
———. 2015. Repetition and reference. In On reference, ed. A. Bianchi, 93–107. Oxford: Oxford University Press.
———. 2019. Speaker's reference, semantic reference, and the Gricean project. Some notes from a non-believer. Croatian Journal of Philosophy 19: 423–448.
Bianchi, A., and A. Bonanini. 2014. Is there room for reference borrowing in Donnellan's historical explanation theory? Linguistics and Philosophy 37: 175–203.
Capuano, A. 2012a. The ground zero of semantics. In Having in mind: The philosophy of Keith Donnellan, ed. J. Almog and P. Leonardi, 7–29. Oxford: Oxford University Press.
———. 2012b. From having in mind to direct reference. In Reference and referring, ed. W.P. Kabasenche, M. O'Rourke, and M.H. Slater, 189–208. Cambridge, MA: MIT Press.
———. 2015. Thinking about an individual. In On reference, ed. A. Bianchi, 147–172. Oxford: Oxford University Press.
———. 2018. In defense of Donnellan on proper names. Erkenntnis. https://doi.org/10.1007/s10670-018-0077-6.
Davidson, D. 1986. A nice derangement of epitaphs. In Philosophical grounds of rationality: Intentions, categories, ends, ed. R.E. Grandy and R. Warner, 157–174. Oxford: Oxford University Press. Reprinted in D. Davidson, Truth, language, and history, 89–107. Oxford: Clarendon Press, 2005.
———. 1994. The social aspect of language. In The philosophy of Michael Dummett, ed. B. McGuinness and G. Oliveri, 1–16. Dordrecht: Kluwer. Reprinted in D. Davidson, Truth, language, and history, 109–125. Oxford: Clarendon Press, 2005.
Devitt, M. 1974. Singular terms. Journal of Philosophy 71: 183–205.
———. 1976. Semantics and the ambiguity of proper names. Monist 59: 404–423.
———. 1981a. Designation. New York: Columbia University Press.
———. 1981b. Donnellan's distinction. Midwest Studies in Philosophy 6: 511–524.
———. 1998. Reference. In Routledge encyclopedia of philosophy, ed. E. Craig, vol. 8, 153–164. London: Routledge.
———. 2015. Should proper names still seem so problematic? In On reference, ed. A. Bianchi, 108–143. Oxford: Oxford University Press.
Donnellan, K.S. 1970. Proper names and identifying descriptions. Synthese 21: 335–358.
———. 1974. Speaking of nothing. Philosophical Review 83: 3–31.
Evans, G. 1973. The causal theory of names. Aristotelian Society, Supplementary Volumes 47: 187–208. Reprinted in G. Evans, Collected papers, 1–24. Oxford: Clarendon Press, 1985 (page numbers given relate to this volume).
Fodor, J.A. 1987. Psychosemantics: The problem of meaning in the philosophy of mind. Cambridge, MA: MIT Press.
Grice, P. 1957. Meaning. Philosophical Review 66: 377–388. Reprinted in P. Grice, Studies in the way of words, 213–223. Cambridge, MA: Harvard University Press, 1989 (page numbers given relate to this volume).
Hinchliff, M. 2012. Has the theory of reference rested on a mistake? In Reference and referring, ed. W.P. Kabasenche, M. O'Rourke, and M.H. Slater, 235–252. Cambridge, MA: MIT Press.
Kaplan, D. 1989. Afterthoughts. In Themes from Kaplan, ed. J. Almog, J. Perry, and H. Wettstein, 565–614. Oxford: Oxford University Press.
Kripke, S. 1972. Naming and necessity. In Semantics of natural language, ed. D. Davidson and G. Harman, 253–355, 763–769. Dordrecht: Reidel.
———. 1977. Speaker's reference and semantic reference. Midwest Studies in Philosophy 2: 255–276.
———. 1979. A puzzle about belief. In Meaning and use, ed. A. Margalit, 239–283. Dordrecht: Reidel.
———. 1980. Naming and necessity. Reprint with a new preface of Kripke 1972. Oxford: Blackwell.
Martí, G. 2015. Reference without cognition. In On reference, ed. A. Bianchi, 77–92. Oxford: Oxford University Press.
Pepp, J. 2009. Semantic reference not by convention? Abstracta 5: 116–125.
———. 2012. Locating semantic reference. UCLA Ph.D. dissertation.
———. 2019. What determines the reference of names? What determines the objects of thought. Erkenntnis 84: 741–759.
Wulfemeyer, J. 2017a. Reference-shifting on a causal-historical account. Southwest Philosophy Review 33: 133–142.
———. 2017b. Bound cognition. Journal of Philosophical Research 42: 1–26.

Chapter 7

The Qua-Problem for Names (Dismissed)

Marga Reimer

Abstract  The primary focus of this paper is the so-called "qua-problem" for names, a problem which I argue is spurious and thus apt for dissolving rather than solving. This pseudo problem can be conceptualized (following Devitt and Sterelny's Language and Reality) as involving a pair of questions which appear to put pressure on the causal theorist to introduce a descriptive element into her theory of reference. One question concerns how a name can be grounded in a whole object when only a (spatio-temporal) part of the object is perceived; the other question asks for an explanation of failed groundings in cases where the speaker is very wrong about the perceived object they have attempted to name. I deny that causal theorists need to make any concessions to descriptivism in order to adequately address these concerns. In response to the first question, I appeal to a default, psychologically motivated, practice of naming only whole objects; in response to the second question, I suggest, through a series of thought experiments, that reference does not in fact fail even in cases where the speaker is radically mistaken about the perceived object they are attempting to name. After responding to a trio of objections Devitt has made to the proposed dissolution of the qua-problem for names, I compare the case of names to the far more complex case of natural kind terms, where (I suggest) there may indeed be a genuine qua-problem, even if one not amenable to the particular solution proposed by Devitt and Sterelny.

Keywords  Qua-problem · Causal theory · Proper names · Reference grounding · Descriptivism · Natural kind terms


7.1  Introduction

My aim in this paper is two-fold: first, to argue that the qua-problem for names is a spurious problem and is thus to be dismissed or dissolved rather than tackled or solved; second, to indicate some of the consequences of this "dismissive" response. I will begin with a brief presentation of the causal theory of reference for names, as the qua-problem is thought to affect causal (vs. description) theories of names. I will then defend my near exclusive focus on the qua-problem for names. This is important as the qua-problem is also thought to affect natural kind terms – and with a particular vengeance. I will then present Devitt and Sterelny's (1987) locus classicus: a clear, concise and vividly illustrated characterization of the putative problem.1 After developing a variant of the "dismissive" response to the qua-problem that Devitt and Sterelny reject, I present and critique their own tentatively proposed solution to that problem. Objections to this critique are anticipated and responses provided. In the penultimate section of the paper, I turn to the qua-problem for natural kind terms. I discuss the reference fixing of natural kind terms, primarily to highlight the contrast with names, where reference fixing appears to be considerably less problematic. In the final section of the paper, I do two things: first, I discuss why Devitt and Sterelny think there is a qua-problem for names when there appears to be none; second, I indicate the consequences of my proposed dismissal of the qua-problem (for names) for the prospect of a causal theory of names. This involves a brief comparison with natural kind terms, which (in contrast to names) really do face a qua-problem – one to which a "hybrid" (causal-descriptive) account of reference might conceivably provide a solution.

1  For another nice, if less vivid, characterization of the qua-problem (that encompasses both names and natural kind terms), see Miller 1992: 426.

7.2  Brief Background: The Causal Theory of Reference

The causal theory of reference (for names) is usefully viewed as having two components: (i) an account of reference fixing, wherein a name is initially "grounded" in its bearer, and (ii) an account of reference borrowing, which explains how those not present at a name's grounding are able to acquire competence in the use of that name. As these two accounts are so familiar to contemporary philosophers of language, I will eschew generalizations regarding causal chains and the like and attempt instead to elucidate the relevant phenomena by way of a "real life" (vs. merely hypothetical or otherwise artificial) illustration, which will resonate with anyone familiar with Kripke's causal theory of reference who has ever named a pet.

One morning back in June of 2014, a neighbor knocked frantically on my front door, informing me that her husband had just seen a dead cat and two (live) kittens in my backyard. As we (my husband, my daughter and I) owned no cats at the time but lived near a large feral colony, I quickly surmised that one of our huskies had killed a feral mother cat. I immediately went to the backyard to retrieve the orphaned kittens. One kitten reminded me of a childhood pet named "Miezekatze"; the other kitten reminded me of another childhood pet (also a cat) named "Scatty." Minutes after retrieving the two kittens, I started calling the orange tabby "Mieze" and the black longhair "Scatty." They were thereby named "Mieze" and "Scatty." It was that simple. There was no formal dubbing ceremony; nor was there an informal ceremony where I said to myself (something like): This one is to be called "Mieze" and this one is to be called "Scatty." The "fixings" themselves were causal in nature insofar as they involved my perception of the two kittens: the kittens were the cause of my perceptual kitten-like experiences.2

Upon bringing the two kittens into the house, I said to my husband (something like): This one is "Mieze" and this one is "Scatty." My initial grounding (or "fixing") of the names was thereby grounded further. My husband then began to refer to the kittens as "Mieze" and "Scatty" respectively, as did my daughter, my sister, my veterinarian, her assistants and our next-door neighbors, all of whom met the kittens within days of their rescue. Their uses of the names amounted to further groundings of those names in the perceived kittens. When my husband and I, my daughter, my sister, my veterinarian, her assistants and my cat-loving neighbors discussed Mieze or Scatty (by name) with others not familiar with the cats, the latter's ability to subsequently use the names in question to talk about Mieze and Scatty involved a kind of reference "borrowing." Those users "borrowed" the reference from the speakers who grounded the two names in the first place; they were able to do so in virtue of perceptual (and hence causal) contact with the grounders' vocalized uses of those names. As these reference-borrowers used the names "Scatty" and "Mieze" to talk about the kittens (eventually cats) with other members of the linguistic community, the causal-historical chain of communication associated with those names grew beyond, possibly far beyond, what I currently imagine it to be. Indeed, if anyone ever reads and then discusses this paper in any detail, new causal-historical chains of communication associated with written tokens of my cats' names will be generated in virtue of the process of reference borrowing.

2  As Andrea Bianchi has pointed out to me, it might be questioned whether this initial perceptual contact is sufficient to make the groundings in question unambiguously "causal".

7.3  Why Focus on Names?

Although I talk about the qua-problem for natural kind terms in the penultimate section of the present paper, my focus will be almost entirely on the qua-problem for names. This focus requires some justification given that the qua-problem for natural kind terms is considerably more complex, and therefore more challenging, than the qua-problem for names. There are at least three points to be made on behalf of my comparative neglect of the qua-problem for natural kind terms.

First, while virtually all of us have fixed the reference of names, few if any of us (ordinary folk, including philosophers and even natural scientists) have fixed the reference of natural kind terms. The near universal experience of having named a child, a pet, a doll, an imaginary friend,3 an automobile, or a boat will make the various observations and arguments regarding the reference fixing of names particularly vivid and hence (I assume) not especially difficult to understand or evaluate.

Second, as a "mere" philosopher of language, I am simply not in a position to discuss, in an appropriately informed manner, either natural kind terms or their reference fixing; I do not have the sort of background in the natural sciences necessary for such discussion. Although it is certainly true that natural kind terms were introduced into the language long before the advent of contemporary natural science, that doesn't mean that the process by which such terms are currently introduced is something I have the expertise to discuss; I am neither a natural scientist nor a philosopher of science. However, as a natural language speaker and philosopher of language, I have just the sort of background appropriate for critical reflection on the reference fixing of ordinary names like "Mieze" and "Scatty."

Third, and finally, I believe (contra Devitt and Sterelny and other causal theorists of reference4) that names and natural kind terms are importantly different, and I intend to argue that the qua-problem for names is not so much a problem that admits of a satisfactory solution as really no problem at all. In contrast, I believe (as indicated below in Sect. 7.9) that there is an inherently messy and perhaps ultimately intractable qua-problem associated with the reference fixing of natural kind terms. (This is reflected in Devitt and Sterelny's own comparatively complex and tentative discussion of the qua-problem for natural kind terms.)

3  It is my belief (the justification of which is irrelevant here) that some such "imaginary friends" are in fact perceptual rather than imaginary in nature, being hypnagogic or hypnopompic hallucinations. Such hallucinations are experienced by many very young and mentally healthy children. The relevance of this point is that it suggests that such "friends" can in fact be named. In this connection, see example (v) in Sect. 7.7 of the present paper, which involves naming a hallucination, and doing so successfully.

4  Including, most notably, Saul Kripke (1980).

7.4  The Qua-Problem for Names

The best known and, to my mind, most intuitive and vividly illustrated version of the qua-problem for names can be found in Devitt and Sterelny's (1987) Language and Reality.5 There, the two philosophers present the qua-problem as a serious problem for any causal account of names and as a potentially fatal problem for any purely causal (vs. causal-descriptive) account of such terms.6 As presented by Devitt and Sterelny, the qua-problem for names can be thought of as a pair of problems (formulated as questions) which are related insofar as a solution to the one will yield a solution to the other. Because of my wish to avoid any possibility of misleading the reader as to the views of Devitt and Sterelny, I will quote the relevant passages in their (near) entirety rather than attempt a paraphrase or summary. Devitt and Sterelny ask us to:

(1) Think … of a grounding of 'Nana'. The name was grounded in Nana [the grounder's pet cat] in virtue of perceptual contact with her. But that contact is not with all of Nana, either temporally or spatially. Temporally, the contact in any one grounding is only with her for a brief period of her life, with a "time-slice" of Nana … Spatially, the contact is only with an undetached part of her, perhaps a relatively small part of her face (she may be peering around a corner). In virtue of what was the grounding in the whole of Nana not in a time-slice or undetached part of her? (Devitt and Sterelny 1987: 64, emphasis added)

(2) Think next of a situation where the would-be grounder is very wrong about what he is perceiving. It is not a cat but a mongoose, a robot, a bush, a shadow, or an illusion … At some point in this sequence, the grounder's error becomes so great that the attempted grounding fails, and hence uses of the name arising out of the attempt fail of reference. Yet there will always be some cause of the perceptual experience. In virtue of what is the name not grounded in that cause? (Devitt and Sterelny 1987: 64, emphasis added)

Any adequate solution to the qua-problem for names, as conceptualized by Devitt and Sterelny, will simultaneously (and satisfactorily) answer the questions posed at the end of each of the foregoing passages.

5  However, in "Should proper names still seem so problematic?", Devitt writes: "One thing we look for is a solution to what became known as 'the qua problem': In virtue of what is a certain object the focus of perception rather than a spatial or temporal part of the object? I have struggled mightily with this problem … but I now wonder whether this was a mistake: perhaps the problem is more for psychology than philosophy" (2015: 115n).

6  However, both Sterelny (1983) and Miller (1992) believe that a purely causal account might well avoid the qua-problem. (Interestingly, Sterelny's paper was written before Language and Reality.)

7.5  In Defense of a "Dismissive" Response

Before providing their own response to the question posed at the end of passage (1), Devitt and Sterelny write:

The question [In virtue of what was the grounding in the whole of Nana not in a time-slice or undetached part of her?] is not to be airily dismissed on the assumption that names do, as a matter of fact, always designate "whole objects". Even if this were so, it would surely be possible to name temporal or spatial parts of objects. So there must be something about our practice which makes it the case that our names designate whole objects. (Devitt and Sterelny 1987: 64)

Devitt and Sterelny go on to provide some helpful factual (vs. merely hypothetical) examples of groundings in spatial and temporal parts of objects:

Sickeningly coy examples are to be found in Lady Chatterley's Lover. Think also of 'Sydney' which is the name of part of Australia. Temporal examples do not leap to mind so readily. 'The Terror' naming a part of the French Revolution is one example. And we might name a tadpole without thereby naming the frog it turns into. (Devitt and Sterelny 1987: 64)

Following Devitt and Sterelny, let us focus on the naming of Nana, the grounder's pet cat. The question here is:

In virtue of what was the grounding in the whole of Nana not in a time-slice or undetached part of her?

This problem can arguably be dismissed (pace Devitt and Sterelny) by appeal to a psychologically motivated default practice of naming whole objects rather than parts thereof. To be competent in the use of names, on such a view, is to be disposed to abide by this practice, barring special circumstances (detailed below). Thus, if a young child were to start naming not only their dolls or toy soldiers but also parts thereof, it would be natural to suspect that the child did not know how names are in fact used: to designate whole objects rather than parts thereof.7

7  Alternatively, one might suspect that the child was, for whatever reasons, conceptualizing the parts in question as wholes unto themselves and thus as apt for naming.

The existence of a default practice of naming whole objects is nicely captured in ordinary everyday definitions of "name" like the following, culled from a popular online dictionary:

A word or set of words by which a person, animal, place, or thing is known, addressed, or referred to.

There is, unsurprisingly, no mention of the metaphysician's spatial or temporal "parts" in such definitions. The bearers of names are just what we ordinarily take them to be: persons, animals, places, and other sorts of "things" – in other words, whole objects, rather than parts thereof.

The existence of a default practice of naming (only) whole objects would be easy enough to explain: such a practice has a clear psychological motivation. As natural language speakers, we have a strong practical interest in thinking about, talking about and (in some cases) beckoning or otherwise addressing, whole objects (notably "persons, animals, places, or things") that are especially significant to us. We naturally attend to such objects, some of which (i.e., our own children) are entities whose well-being is, quite literally, essential to the survival of our own genes and indirectly to the survival of the species as a whole. Being able to think about, communicate about and (in some cases) beckon or otherwise address these important "objects" would be highly advantageous to us. Having devices to facilitate such mental processes and social activities would thus be an enormous convenience. Hence, the existence of names: devices for thinking about, communicating about, and (in some cases) beckoning or otherwise addressing, whole objects that are of particular significance to us.

While it certainly appears possible (as Devitt and Sterelny's apt examples show) to name temporal or spatial parts of whole objects, the naming of such parts is perhaps best regarded as a special kind of case involving a special kind of mental state. Such a state might involve a pretense to the effect that the parts in question are actually wholes and thus suitable objects for naming. Such "name-ready" parts would presumably be of special significance to the would-be grounder; they would thus be "things unto themselves" and so apt for naming. This is surely what is going on in D.H. Lawrence's Lady Chatterley's Lover, where "Lady Jane" and "John Thomas" are the main characters' nicknames for their genitalia, "objects" to which they both attach great significance. Devitt and Sterelny's other examples (involving Sydney, The Terror, and a tadpole named without thereby naming the frog it turns into) could easily be accommodated on such a view insofar as the would-be grounders would see the "targeted" parts as locales, events, and creatures unto themselves and thus apt for naming. Indeed, this is obvious in the case of "Sydney" and "The Terror," the bearers of which are surely objects "unto themselves," even if "merely" parts of more expansive or extensive objects. Sydney, after all, is a whole city, while the Terror is a whole reign (or period) of time. This at least suggests that the "wholeness" of an object is relative to the perspective of the would-be grounder.

Suppose we can accommodate the naming of parts in this way, with parts conceptualized as wholes by would-be grounders. Perhaps we can then regard all groundings of names as involving the practice of naming only whole objects, whether real or "pretend." In that case, such a practice would no longer really be a "default" practice; it would be an impossible-to-fault practice. It would be impossible, psychologically speaking, to name a "mere" part of a whole insofar as naming that part would require conceptualizing it as a whole: as a thing unto itself and so apt for naming. This idea is arguably captured in what it means to be competent in the grounding of names: to possess such competence is to know (implicitly) that names are always to be grounded in wholes. Thus, consider a child who names her tadpole "Taddy," only to name the emergent frog "Froggy." How could she possibly effect the naming of the tadpole and the emergent frog unless she thought of the "two" as whole objects and as thus apt for naming? Similarly for Sydney and The Terror. The two parts would be conceptualized by the would-be grounders as whole objects with parts of their own: Sydney as a city comprised of its own spatial parts and the Terror as a 10-year period comprised of its own temporal parts.

On this sort of picture, names always name whole objects and never parts – relative to the perspective of the grounder. For in order to name a part of an object (such as an amphibian, a country, or a revolution) the grounder must first conceptualize that part as a whole with its own spatial and/or temporal parts. Thus, what outwardly look to be parts (whether of amphibians, countries, or revolutions) are actually wholes, at least from the perspective of the would-be grounder.

As to explaining how "Taddy" was grounded in a temporal part of a particular amphibian, we can say that the part in question was conceptualized by the grounder as a whole (as a whole tadpole) with its own parts, at the time of the grounding. We can say this while acknowledging that the name was grounded in a whole that was part of a different whole: a whole amphibian. As to explaining how "The Terror" was grounded in a part of the French Revolution, we can say that it was also grounded in a whole (a whole reign) that was part of a different whole: a whole revolution, the French Revolution.

In this way, we emerge with a variant of the "dismissive" response to the qua-problem rejected by Devitt and Sterelny: names do indeed always designate whole objects. But it's only a variant of that view as it claims, in effect, that it is the nature of names to refer to what are whole objects from the perspective of the would-be grounder. Thus, it is no mere "matter of fact" that names refer to such things; they have no "choice" insofar as they are genuine names, as such expressions are, essentially, devices for referring to (what are conceptualized as) whole objects.

It might be thought that this sort of picture leads immediately to a variant of the qua-problem. Returning to the case of Nana, the question now becomes:

What makes it the case that the name is grounded in the whole cat rather than (e.g.) the cat's whole face?

But this pseudo problem is easily dissolved; one need only invoke the default practice of naming whole objects, barring special circumstances. The cat’s whole face is “merely” a part with respect to the whole cat and, in the absence of any pretense that the cat’s face is also a whole object and thus amenable to naming, “Nana” gets grounded in the whole cat.

7.6  Devitt and Sterelny's (Tentatively) Proposed Solution

Devitt and Sterelny have a very different take on the qua-problem. As they explain:

It seems that the grounder must, at some level, "think of" the cause of his experience under some general categorial term like 'animal' or 'material object'. It is because he does so that the grounding is in Nana and not in a temporal or spatial part of her. (Devitt and Sterelny 1987: 64–65)

There are two points to be made here. First, if the proposed "dismissive" approach to the qua-problem is right, then the assumption that the grounder thinks of the cause of their experience under some general descriptive category is superfluous. The grounder need only conform to the default practice of naming only whole objects (whether real or "pretend") in order for the name's bearer to be secured. Because there are no "pretend" whole objects in the offing (face, paws, tails, etc.), the cat gets secured as the name's referent. Second, the psychological plausibility of what Devitt and Sterelny are suggesting is questionable; just think of the two-year-old naming her kitty "Nana." At two years of age, she might not have, at any level, "general categorial terms like 'animal' or 'material object'."8 Yet she might well have, at some level, the concept thing: something suitably designated, not only by a name, but also by a non-descriptive demonstrative like "that" accompanied, perhaps, by a pointing gesture.

8  For a similar point, see Miller 1992.

Devitt and Sterelny go on to claim that their response to the question

In virtue of what was the grounding in the whole of Nana not in a time-slice or undetached part of her?

provides a response to the question concerning failed groundings

In virtue of what is the name not grounded in [the] cause [of the perceptual experience]?

As they explain:

The grounding will fail if the cause of the perceptual experience does not fit the general categorial terms used to conceptualize it.

However, I don't think that the question (italicized above) is a worrisome one, because the grounding will arguably succeed rather than fail in cases of the sort Devitt and Sterelny have in mind. Such cases therefore fail to motivate a descriptive, reference-constraining mental state. This can be shown with a series of thought experiments involving putative cases of failed reference groundings.

7.7  Failed Grounding?

Let's imagine a case where an adult speaker perceives something (x) that she mistakenly believes to be a cat. She begins to call x "Mieze," thereby taking herself to have so-named x. Following the five examples provided by Devitt and Sterelny, suppose that x is not a cat but something quite different, possibly radically different. Suppose, in particular, that x is a(n):

(i) mongoose;
(ii) robot;
(iii) bush;
(iv) shadow;
(v) illusion.

I would claim that, in all such cases, it is quite possible that "Mieze" has been successfully grounded in x. Let's consider these five cases in turn, beginning with the first.

(i) The speaker sees a particular mongoose (or meerkat) regularly but at a distance and mistakes it for a small cat. She names the creature "Mieze" and continues to refer to it by that name even after she discovers that it is not a cat. She says such things as, "I can't believe that Mieze is actually a meerkat!"


(ii) The speaker sees and hears her three-year-old granddaughter's new FurReal toy: Lulu's Walkin' Kitties Sugar Paws Pet.9 It is so life-like that she mistakes it for the real thing. She doesn't know the robot-cat's name and so names it herself after her childhood pet Mieze. She thinks about the robot-cat as "Mieze" and wonders if her granddaughter might inadvertently hurt the poor kitty. Even after she learns that Mieze is just a robotic (if startlingly realistic) toy, she continues to refer to it as "Mieze," as does her granddaughter, who much prefers that name to the impossible "Lulu's Walkin' Kitties Sugar Paws Pet."

9  This is the actual name of the Hasbro toy in question.

(iii) While walking through the neighborhood one evening, the speaker sees what she doesn't realize is actually a miniature topiary; it's shaped just like a cat and so in the evening light that's what she mistakes it for. She sees the "cat" sitting placidly in the same spot every evening, just to the right of the home owner's doorstep. She decides on a whim to name the lonesome kitty; she calls it "Mieze." A lover of cats, she sometimes finds herself wondering how Mieze is doing. He (she suspects Mieze is a male) disappears one day and she becomes concerned about him. She figures Mieze was scared by the noise from all the landscaping that was done at her neighbor's house the other day. She frets over his well-being. Later, when she finds out that Mieze was just a topiary cat, she says to herself, "Thank God, Mieze was nothing but a bush!"

(iv) The speaker is a Tucsonan who uses her laptop outdoors on a regular basis. She is a lover of cats and although she has only one (a black longhair named "Scatty" after a childhood pet), she often thinks of getting a second cat. One afternoon, the woman sees a shadow at around 4:30 and its shape is quite striking – it looks just like the shadow of a cat, and so that's what she takes it to be. She names the shadow-cat "Mieze" after another childhood pet, only to later discover that the shadow, which appeared the next day at around the same time, was of an oddly leafed branch on her Chinaberry tree. Being in a silly mood, she greets her shadow-cat by its name: "Hi Mieze, I guess it must be around 4:30 now." She continues to playfully greet the shadow-cat by his name ("Mieze") whenever she sees it.

(v) It's been one of those days and the speaker decides to help herself to a rather hefty portion of her husband's bourbon. She is not used to drinking such large quantities of liquor and as a result has some extraordinarily vivid Alice in Wonderland-like dreams involving, of all things, a very large orange tabby cat. The dream-cat was so intensely real that the speaker (who's been researching the paranormal) becomes convinced that the cat was more than just a dream. It reminds her so much of her childhood pet Mieze that she decides to name the hallucinatory cat "Mieze." After she and her husband celebrate their 20th wedding anniversary by consuming two entire bottles of champagne, she says to him, with a seriousness that he finds alarming, "I wonder if I will see Mieze again tonight." A couple of weeks later, after some somber (and sober) reflection, she says to her husband, "I guess Mieze was just an incredibly vivid hallucination."

In these cases, it would be intuitive to say that the speaker has indeed successfully named the mongoose, robot, bush, shadow, and dream image "Mieze." The intuitiveness of this is highlighted by the fact that the speaker continues to refer to the entities in question, and with utter sincerity, as "Mieze" even after she discovers her error. Indeed, others who know of her error might defer to her usage, successfully referring to x via the name "Mieze." Importantly, the speaker's utterances are truth-evaluable, suggesting referential success, not referential failure. This would appear to be at odds with Devitt and Sterelny's view, as they say, "at some point in this sequence, the grounder's error becomes so great that the attempted grounding fails, and hence uses of the name arising out of the attempt fail of reference" (Devitt and Sterelny 1987: 64). But I think that the reasonably sane and linguistically competent speaker's post-grounding language (and the deference of others to that language) suggests otherwise. It also comports with the proposed view that, at the time of the grounding, there is no reason to suppose that the speaker has before (or beneath) her mind some constraining descriptive concept (or phrase) like animal or material object. All she arguably needs to know is how to name something – and that's something that even a two-year-old can do, and do with incredible ease.

Importantly, however, in all five cases the speaker can effectively retract the grounding by saying (or thinking), "There's no Mieze after all; what I thought was a cat was something else." (This would make sense especially if the name-grounder were a speaker of German, as "Miezekatze" is German for pussycat.) Thus, if the grounding doesn't succeed, it fails retroactively only because the would-be grounder regards the grounding as a failure. But again, she might well regard the grounding as a success – and do so regardless of whether she is aware, at some level, of any reference-constraining descriptive concept(s) at the time of the initial grounding.

7.8  A Trio of Objections

Before moving on to the grounding of natural kind terms, I would like to consider and respond to three thought-provoking objections to the ideas proposed thus far. All have been put forth by Michael Devitt (p.c.).

7.8.1  Missing the Point

While there may indeed be a psychologically motivated practice of naming whole objects which explains how we come to be naming them, that is beside the point. For the issue is not about the cause of our naming whole objects but about what it is to name them. The point can be brought out (according to Devitt) by way of a thought experiment:

Imagine different beings in a very different environment and hence with a psychological motivation not to name whole objects. How would what they do in naming differ from what we do? There must be processes in them that differ from those in us. How do they differ?

It is difficult to imagine beings whose physical environment creates a motivation not to name whole objects, as that would amount to a motivation not to acquire a convenient means for thinking about or talking about such objects. However, differences in terms of social environment might conceivably lead to some such motivation. Thus, a very strong psychological motivation not to name whole objects might be created by the threat of annihilation upon any attempt to do so. So imagine a world where such a threat is regularly carried out. Would the inhabitants of such a world not name whole objects? If they valued their existence, they would surely refrain from naming such objects publicly. But insofar as their existence and flourishing depended on their ability to think about (if not communicate about) whole objects, they would surely name such objects privately (that is, silently to themselves) – or perhaps publicly in a private setting. As for their openly public naming practices, I imagine they might name parts of whole objects upon conceptualizing them as wholes, much as we do. But so far as I can tell, the internal “processes” associated with the public naming practices of the imagined beings wouldn’t differ substantially from our own. Thus, successfully naming parts of wholes (perhaps parts of themselves and/or their fellow beings) would require thinking of them as things unto themselves. Without such an assumption, it would be difficult to conduct Devitt’s thought-experiment. For it’s simply not clear that it would be possible to name something without first conceptualizing it as a whole, albeit as a whole that is part of some greater (more expansive or extensive) whole.

7.8.2  A Different Kind of Problem but a Problem Nonetheless

Devitt and Sterelny’s original solution to the qua-problem (discussed here) involves the idea that the grounder think of the cause of their pre-naming perceptual experience under some general categorial term(s) like “animal” or “material object.” Devitt now thinks that this solution is “far too intellectualized,” leaving the qua-problem in need of an alternative solution. As he explains (p.c.):

Kids are born into the world with innate dispositions to take words sometimes to name individual whole objects, sometimes kinds of such objects … But the unanswered question is: What are the natures of these innate dispositions? What constitutes its being a disposition to do that and not something else? … We still need to know the underlying mechanisms of our semantic dispositions.

Interestingly, this sort of picture seems to comport well with the proposed “dismissive” approach to the qua-problem. After all, an obvious way of accounting for the disposition (shared by children and adults alike) to interpret names as names of whole objects rather than as names of parts thereof is to suppose that such dispositions are “natural” – or (in other words) innate.

They are thus part of our natural “semantic constitution” to which we conform barring special circumstances of the sort described above. However, Devitt clearly thinks that the qua-problem is a genuine problem that remains not only unanswered but also unanswerable – at least by a philosopher. As he explains (p.c.):

the reason I’ve ceased to worry about the qua-problem is that [the] mechanisms [underlying our innate semantic dispositions] seem to me to be psychological matters, not philosophical ones, about which we humans as yet know little.

I don’t quite see this. Let’s grant that the “mechanisms” behind our semantic naming dispositions are in fact psychological in nature, perhaps having to do with the adaptive value of being able to think and communicate about salient or otherwise significant phenomena: phenomena that are important to our survival and flourishing, both as individuals and as a species. How might discovering these mechanisms solve the qua-problem? How might such a discovery explain how “Nana” gets grounded in Nana and not in any of her spatial or temporal parts? It’s the semantic disposition and not the psychological mechanisms behind that disposition that appears to be relevant to explaining how “Nana” gets grounded in Nana rather than (e.g.) her face, which may be the only part of her perceived by the grounder when the grounding of her name takes place. I have been suggesting that answering Devitt’s question about the “mechanisms” underlying our “innate semantic dispositions” involves appeal to a practice that effectively defines what is involved in knowing how to use a name: knowing that names are devices used to refer to whole objects – whether “real” whole objects or “pretend” whole objects, such as the tadpole that eventually becomes a frog or the (partial) mountain range whose status as a whole object is publicized with its naming.

7.8.3  Empty Names Left Unexplained

In order to account for the phenomenon of empty names, failures in reference grounding need to be acknowledged. It must be acknowledged, more specifically, that an attempted grounding will fail if the cause of the grounder’s perceptual experience does not fit the general categorial terms used to conceptualize it. The reference failure attending the attempted grounding of “Vulcan,” whose would-be bearer was conceptualized by its would-be grounder as a planet, is arguably a case in point.
I would begin by acknowledging that failures of reference grounding do occur and that “Vulcan” arguably illustrates the phenomenon. One might of course deny the latter point on any number of grounds including, for example, that “Vulcan” refers to an abstract artifact (Salmon 1998). But let’s suppose, for argument’s sake, that the name “Vulcan” is in fact empty, Le Verrier’s attempted grounding notwithstanding. This is easily accommodated on the proposed view. One need only acknowledge that some names are “disguised” (“abbreviated” or “truncated”) descriptions, that “Vulcan” was one of these, and that the disguised description for which that name stands, perhaps something like “the planetary cause of Mercury’s orbital perturbations,” lacks a denotation.10

10  Today the name “Vulcan” might abbreviate an importantly different description, such as “the mythical planet between Mercury and the Sun.”

Suppose, however, that (contrary to fact) Le Verrier perceived a particular star on several occasions, which he mistook for a planet causing perturbations in Mercury’s orbit. Suppose further that, whenever Le Verrier perceived this particular star, he referred to it as “Vulcan.” The question is: Would the name “Vulcan” have thereby been grounded in the observed star? Insofar as intuitions on this matter vary, they arguably vary in accordance with whether one understands the name as an abbreviated description or as a directly referential term. Insofar as one regards the attempted grounding as a failure, they likely view the name as a disguised description (e.g., “the planetary cause of Mercury’s orbital perturbations”) – a description without a denotation. Insofar as one views the attempted grounding (in a particular star) as a success, they likely view the name as a directly referential term associated, perhaps, with semantically inert descriptive content. As to which it is, disguised description or directly referential term, that presumably hinges on the would-be grounder’s intentions – which may well be too indeterminate to yield any fact of the matter.
The important point is that the proposed view is consistent with the fact that “Vulcan” has no referent, Le Verrier’s attempted grounding of that name notwithstanding. For it seems plausible that Le Verrier never attempted to ground the name in anything he actually perceived but instead perceived something (perturbations in Mercury’s orbit) and then coined the (descriptive) name “Vulcan” to name the presumed planetary cause of what he perceived. But there was no such cause and so the name emerged as empty. Moreover, if Le Verrier had attempted to ground the name in something he actually perceived, such as a particular star, the grounding would perhaps have succeeded, in which case “Vulcan” would refer to that very star.

7.9  Referring to felis catus

Although the present paper is focused on the qua-problem for names, something should be said about the qua-problem for natural kind terms, if only to gain a better understanding of the qua-problem for names. According to Devitt and Sterelny, the qua-problem for natural kind terms is considerably more complex than that for names. As they explain:

Something must pick out the sample qua member of a natural kind. That something must be the mental state of the grounder. The grounder must … “think of” the sample as a member of a natural kind, and intend to apply the term to the sample as such a member. (Devitt and Sterelny 1987: 73)

However, as they go on to point out:

The term is applied to the sample not only qua member of a natural kind but also qua member of one kind. Any particular sample of a natural kind is likely to be a sample of many natural kinds; for example, the sample is not only an echidna, but also a monotreme, a mammal, a vertebrate, and so on. In virtue of what is the grounding in it qua one member of a natural kind and not another? (Devitt and Sterelny 1987: 73)

In other words:

As a result of groundings, a term refers to all objects having the same underlying nature as the objects in the sample. But which underlying nature? The samples share many. What makes the nature responsible for the sample being an echidna the one relevant to reference rather than the nature responsible for it being a mammal … ? (Devitt and Sterelny 1987: 73)

As with their solution to the qua-problem for names, their solution to this problem involves going beyond purely causal considerations. Here’s what they tentatively propose:

the grounder of a natural kind term associates, consciously or unconsciously, with that term, first, some description that in effect classifies the term as a natural kind term; second, some descriptions that determine which nature of the sample is relevant to the reference of the term. (Devitt and Sterelny 1987: 74)

The chief difficulty with Devitt and Sterelny’s proposal can be seen by considering a well-known and initially powerful objection to any sort of description (or hybrid) account of the reference of natural kind terms. It is an objection first proposed by Putnam (1962) and later adopted by Kripke (1980). Suppose, for example, that common house cats are actually Martian robots. Does this mean that we are not actually referring to those creatures that we have been calling “cats” (or some historical equivalent) for at least nine millennia? Does it mean that “cat” is, in effect, an “empty” natural kind term? It would certainly seem so, if the reference grounding content involves “some description that in effect classifies the term as a natural kind term.” (See Miller 1992 for a similar concern.) But intuitively that’s just wrong; surely we have been referring to the hypothesized robocats all along – we just mistakenly thought they were animals of a certain kind, not robots.11 In that case, however, Devitt and Sterelny’s view would lose some of its credibility.
The question would then become: Do the hypothetical Martian robocats undermine the possibility of any hybrid (causal-descriptive) account of the reference grounding of natural kind terms? Not obviously. Just suppose that what we call “cats” are indeed robocats. “Cat,” or one of its historical predecessors, was perhaps grounded with the assistance of descriptive mental content pertaining to the outward appearance and overt behavior of Putnam’s hypothetical robocats. Provided that neither the characteristic appearance nor behavior of the common house cat were to dramatically change, we would probably continue to call such creatures “cats” – even knowing of their true (robotic) nature. But we might well deny that they were animals or members of felis catus, a spurious species on the hypothesis in question. This is consistent with the idea that any descriptive content associated with the original grounding of the natural kind term “cat” would have involved the characteristic appearance and behavior of those creatures currently classified by taxonomists as members of the species felis catus.

11  Devitt appears to have expressed some doubt on this point. See Miller 1992: 427 n. 6.

Perhaps the descriptive reference fixing mental content would contain demonstrative expressions, as in

the kind of thing that looks and acts like that,

where the demonstrative pronoun “picks out” the perceived sample (pseudo) feline. If this is right, then a complete account of the semantics of natural kind terms would require an account of the semantics of (perceptual) demonstratives.
It seems unlikely (pace Devitt and Sterelny) that the grounding of natural kind terms involves the sort of descriptive content that would preclude grounding in non-natural kinds, such as robocats. With respect to the term “cat,” the grounding more likely involved content related to the familiar appearance and behavior that characterizes the common house cat as much today as it did 9000 years ago. After all, why was the term introduced in the first place? No doubt to refer to creatures that had a characteristic “look” and that behaved in certain characteristic ways. Such reference would have been reinforced by a community-wide interest in thinking about and talking about such intriguing and easily domesticated creatures. Nine thousand years ago, just as today, one might have wished to acquire a cat, not because it was an animal, or a member of a (yet to be classified) species, but because of its characteristically “agreeable” appearance and (perhaps more importantly) its characteristic behavior, which includes preying on grain-consuming rodents. It would thus make more sense that the natural kind term “cat” (or its millennia-old equivalent) was used to refer to creatures valued for their characteristic appearance and behavior, not their membership within the animal kingdom or, more narrowly, within the species felis catus.
However, as pointed out to me by Devitt (p.c.), developmental psychologists have shown that three- to four-year-olds take the nature of skunks, for example, not to be constituted by outward appearance. They apparently take the nature to be underlying, as Kripke does. This seems intuitively plausible, as it is easy to imagine three- or four-year-olds speculating that their neighbor’s “stinky” cat (perhaps named “Stinky”) is really a skunk in disguise, reflecting an appreciation of the appearance/reality distinction.12 Perhaps, then, instead of claiming that the reference of a natural kind term is fixed in accordance with mental content like

the kind of thing that looks and acts like that,

we should qualify that content with a tacitly understood “naturally” proviso, thereby emerging with something like

the kind of thing that naturally looks and acts like that,

where the demonstrative refers to a normal-looking house cat engaged in normal house cat behavior.

12  These ideas are echoed in Andrea Bianchi’s (p.c.) question: Suppose we find something that looks and acts like that, but has a very different nature. Would we be inclined to call it a cat?

We could then account for hypothetical “skunk-cats” of the sort young children might speculate about. For if Stinky the cat were really a skunk in disguise, he would not be the kind of thing that naturally looks and acts like that, where the demonstrative refers to “typical” cat-like appearance and behavior.
The main point that emerges from the foregoing reflections is that Putnam-style arguments directed against descriptive (or hybrid) accounts of the groundings of observable natural kind terms like “cat” are not clearly decisive (pace Miller 1992). Thus, while natural kind terms may well face a genuine and challenging qua-problem, perhaps a satisfactory “hybrid” solution is possible, even if not the one suggested by Devitt and Sterelny.

7.10  The Curious Origins of an Apocryphal Problem

Although I don’t think that there is a genuine qua-problem for names, I do think there is one for natural kind terms, and a challenging one at that. But what would lead Devitt and Sterelny to think there is a qua-problem for names if there isn’t? I can think of at least three possible reasons. First, they are puzzled by the fact that names are successfully grounded in whole objects even though only parts of such wholes are perceived by the grounder at the time of the attempted grounding. Second, in cases where the speaker is radically mistaken about the nature of the would-be reference, Devitt and Sterelny believe, mistakenly if not unreasonably, that the grounding fails, something which they find puzzling. Third, Devitt and Sterelny believe that names and natural kind terms are semantically of a piece, and so might naturally assume that if one faces a qua-problem, then so does the other.13
In contrast, I have suggested that what we have is a “problem” apt more for dissolving (or dismissing) than solving. Appreciating this involves recognition of a default naming practice of naming (only) whole objects, as whole objects are what we, as speakers, are concerned with thinking about and talking about and (in some cases) addressing and beckoning. So, although it is perhaps only the (spatio-temporal) part that is perceived, it is the whole in which the name is successfully grounded. Moreover, while failed groundings certainly do need to be explained, such failures are perhaps more unusual than Devitt and Sterelny imagine. When they do occur, they can be explained, in at least some cases, by regarding the names in question as abbreviated descriptions without denotations rather than as directly referential terms without references. Finally, the common assumption that names and natural kind terms are semantically of a piece is questionable. Aside from some intuitive considerations (names, in contrast to natural kind terms, do not appear to connote), cases of failed reference grounding are, as we have seen, hard to come by in the case of names but arguably less so in the case of natural kind terms (think of “witch” and “phlogiston”).

13  Miller (1992) believes that he may have solved this problem via a purely causal account of the reference-grounding of natural kind terms.

Besides, the grounding of natural kind terms these days arguably requires the relevant sort of scientific expertise, though nothing remotely similar is required for the grounding of names.
Devitt and Sterelny are openly torn with regard to their proposed solution to the qua-problem for names. For it forces them to accept a theory of reference for names that is not purely causal; to explain successful reference grounding in whole objects as well as apparent grounding failures, a descriptive element must (they think) be introduced. This is seen as potentially problematic, as a complete theory of names would then have to include a theory of descriptions, or at least of sortals. But if I am right, the (spurious) qua-problem does not motivate any such inclusion. However, although this is presumably welcome news for the causal theorist of reference for names, problems remain for the theorist who also wants their causal theory of natural kind terms to be equally free of any descriptive elements. Unfortunately, this may simply not be possible.14 For as their own detailed discussion of the qua-problem for natural kind terms reveals, the situation is far messier than that involving names. This alone is reason to question the idea that the qua-problem faced by causal theories of natural kind terms has a counterpart in causal theories of names.

14  But see Miller 1992 for a contrary view.

References

Devitt, M. 2015. Should proper names still seem so problematic? In On reference, ed. A. Bianchi, 108–143. Oxford: Oxford University Press.
Devitt, M., and K. Sterelny. 1987. Language and reality: An introduction to the philosophy of language. Cambridge, MA: MIT Press.
Kripke, S. 1980. Naming and necessity. Cambridge, MA: Harvard University Press.
Miller, R.B. 1992. A purely causal solution to one of the qua problems. Australasian Journal of Philosophy 70 (4): 425–434.
Putnam, H. 1962. It ain’t necessarily so. Journal of Philosophy 59 (22): 658–671.
Salmon, N. 1998. Nonexistence. Noûs 32 (3): 277–319.
Sterelny, K. 1983. Natural kind terms. Pacific Philosophical Quarterly 64: 110–125.

Chapter 8

Language from a Naturalistic Perspective

Frank Jackson

Abstract  Why is it good to have a language? Many reasons, but one reason above all others: a shared language is a wonderful way of transmitting information. We will see how this simple, ‘Moorean’ observation tells us what to say about reference for proper names, two-dimensionalism, and the internalism–externalism debate. Throughout, the discussion will presume that we language users are natural parts of the natural world.

Keywords  Information · Naturalism · Proper names · Reference · Two-dimensionalism

8.1  What to Expect

Sometimes it is good to take a step back from local battles and see how things look from a wider perspective that allows one to make sense of the more particular issues. The wider perspective that informs this essay is a combination of two ideas. The first is that we are complex arrangements of the kinds of items studied in the natural sciences, broadly conceived, and our properties and interactions with our surroundings are one and all explicable from this perspective, or, I should say, explicable in principle from this perspective. To illustrate. It is common knowledge that we are able to recognise faces. This ability raises two research questions. One is, what’s the commonality that unites the faces we recognise as being faces we’ve seen before? The other is, what’s the commonality in the brain, caused by the commonality in the faces, which serves to carry the information about a face being or not being one we have seen before? What I mean by explicable in principle is that, regardless of whether or not these two research questions have in fact been answered at the time of writing, there exist answers giveable in the terms of the natural sciences. We can call this first idea naturalism.

The second idea is that, first and foremost, we should think of words and sentences as ways of passing information between speakers that share a language. One way to find out whether or not it is raining outside is to listen for the sound of rain on the roof or to stick one’s head out the window and look; another is to ask someone who has just come in and attend to the words that come out of her mouth. In what follows, I discuss in turn what to say about proper names, two-dimensionalism, and the internalism–externalism debate, against this background. These three topics have been much discussed and are highly contentious. I won’t be reviewing the ins and outs, though I will make the odd passing reference to the literature. I hope to convince you that a surprising amount can be said about each issue drawing on the perspective outlined – drawing on, that is, naturalism and the informational role of language.
How does what I do here relate to Michael Devitt’s concerns? Part of the answer is that our three topics are ones that have engaged him over the years. The other part of the answer is an agreement over methodology. In a number of places – three examples are Devitt (1989, 1996, 1997) – he urges that semantic theorising should be driven by an overall conception of what language is for, and that there has been insufficient attention to this question. We have, he urges, been overly concerned with swapping intuitions and insufficiently concerned with constructing our theories with an eye to the role language plays (see especially the opening paragraphs of Devitt 1997). And what is that role? Any answer to this question must, in his view, take account of the way utterances are “guides to reality”. I am more sympathetic to intuition swapping than is Devitt – in my view, often the intuitions philosophers appeal to are nothing more mysterious than beliefs coming from the exercising of recognitional capacities.1 However, I am in full agreement with his emphasis on keeping in mind the role language plays, and his view about this role. I think my talk – above and below – about information is another way of getting at what Devitt has in mind when he talks of guides to reality, and theorising about language with its informational role front and center will inform much of what I say below.
Where we may differ is in our understanding of naturalism, although I think of the difference as one at the margins. The subtitle to Devitt (1996) is “A naturalistic program for semantic localism”, and his discussion of what language is for and the implications of placing this question front and center is framed in terms of a naturalism very much in accord with the brief remarks about naturalism I made earlier. The possible difference is over how to think about information. We agree that an utterance like “The gas tank is nearly empty” is a guide to reality (gives information). I hold (unoriginally) that the way to think about this is in terms of possibilities. The gas tank may be full; it may be half full; it may be nearly empty; etc. Hearing “The gas tank is nearly empty”, together with trusting the producer of the sentence, allows one to eliminate all the possibilities except the third. That’s how utterances deliver information, are guides to reality. They narrow down the open possibilities. (I say more about this way of thinking about information below.) Discussions with Devitt make me think that he may regard trafficking in possibilities as a departure from naturalism.

1  See, e.g., Jackson (2011).

As against this, I think that the sciences that traffic in possible worlds and possibilities more generally – probability theory, statistics, information theory, decision theory – are sufficiently well founded to allow us to traffic in possibilities without losing our naturalistic credentials.
I now turn to our three topics, starting with proper names.

8.2  Proper Names

The epistemic importance of known, distinguishing marks is a commonplace. X-rays are good for diagnosing broken bones because (i) an X-ray of a broken bone differs from an X-ray of a bone that is not broken, and (ii) the difference is detectable.
To avoid misunderstandings, I should emphasize that in saying this, I am not taking a position on a controversy about the individuation of perceptual experiences and the evidential import of this. Suppose that two white cups, C1 and C2, are perceptually indistinguishable. Suppose further that, here and now, I am in fact seeing C1. Would my perceptual experience have been different had I been seeing C2 instead? Some say no. The experience would have been indistinguishable from the experience I in fact have, and we type perceptual experiences by indistinguishability. Some say yes. We type perceptual experiences by veridicality conditions, and in one case whether or not the experience is veridical depends on whether or not C1 is white, whereas in the other it depends on whether or not C2 is white. The second, externalist kind of position makes possible a view according to which the epistemic value of a perceptual experience fails to supervene on indistinguishability. It opens the way to a view according to which, in the case where I am in fact seeing C1 and it looks white, I have better evidence for C1’s being white than I do for C2’s being white. The rights and wrongs here are contentious.2 I mention the matter only to highlight that, for our purposes, it does not matter which way one should jump. Suppose I know that one of C1 and C2 is radioactive and am an externalist in the just explained sense. All the same, I will most certainly want to put a distinguishing mark on the radioactive cup precisely in order to ensure that there is a perceptible difference between looking at C1 and looking at C2. We should (and do) allow that distinguishing marks are epistemically valuable, independently of where we stand on the externalism issue just bruited (and externalism more broadly). The externalism issue may well affect exactly how we characterize their importance, but that they are epistemically important is a given.
If known, distinguishing marks are epistemically important, it makes sense that we should, on occasion, create them. And of course we do (and I did, in the example of the radioactive cup above). People, cities, streets, etc. get assigned names, and a consequence of that process is that they acquire known, differentiating marks.

2  For discussion and references, see, e.g., Schellenberg (2013).

The streets in my neighborhood differ, one from another, in many ways, but the way of most help to those finding their way to my house is its street sign. It differs in a known way from the signs adjacent to other streets in my neighborhood. Another example is the way names on drivers’ licenses help traffic police differentiate one offender from another.
Once the names are assigned, we are in a position to use them to pass on information about the items named. The names are bricks in information preserving causal chains. When someone called “Hillary Clinton” does something, as often as not it gets recorded in a sentence (in a newspaper, an e-mail, a blog, etc.) containing the words “Hillary Clinton”. The sentence is, let’s suppose, “Hillary Clinton won the debate”. Someone reading this sentence who thinks that the person who won the debate is likely to become President then produces another sentence containing “Hillary Clinton”, say, “Hillary Clinton will become President”, in order to pass on this new piece of information, and so it goes.3 Another example is the way the name on one’s driver’s license allows the transmitting of information about one’s driving history. The process of using names to transmit information about individual objects is too familiar to need more laboring from me.
I hope the above sounds like a sketch of a causal theory of reference for proper names. What’s its connection with the theory put forward by Saul Kripke (1980)? Some – likely including Devitt – will insist that what I have sketched is a version of causal descriptivism about proper names (which indeed it is), whereas Kripke’s theory is a causal theory of reference properly speaking, where the “properly speaking” signals that it is not to be understood as any sort of description theory. But what’s the important difference between the two?
Sometimes the point of difference is put in terms of what is and isn’t pretty much common knowledge. Causal descriptivism is committed to the relevant causal and naming facts being pretty much common knowledge, whereas the causal theory proper holds that the naming and causal facts that secure the reference of a name are not common knowledge. What’s more, that is often seen as a point in favor of the causal theory. Although supporters of the causal theory must of course allow that some people are aware of the relevant facts – namely, they themselves – they urge that speakers of the language at large are not. This is, however, hard to believe. People write books and produce television shows on whether or not Helen of Troy or Jericho existed, and whether or not Shakespeare wrote the famous plays. These books and television shows report the results of historical research directed at the causal origins of the names in question and whether or not the origins are of the right kind to allow us to affirm that Jericho and Helen of Troy really existed, and that Shakespeare wrote, for example, Macbeth. The research that goes into these shows and books is carried out by historians, not philosophers of language. The shows and books get reviewed and assessed by people who are not philosophers, and the reviews are read and understood by the educated public at large.

3  As it turned out, information in the wide sense that encompasses misinformation. More on this below.

How could all this be possible if which facts are relevant to answering the questions were known only to a select group of philosophers?
I have just argued that, as a matter of fact, the relevant facts (about naming procedures and information preserving causal chains containing names) are widely known, but it is worth adding that the relevant facts had better be widely known. Only that way can proper names play their important informational role. Let’s spell this out. Consider the word “flat”. A description theory of reference is true for “flat” in the following sense: the word applies to something just if it has a certain property; this is how the word serves to describe things in our world.4 What’s important for us here is that if only philosophers knew this, the word would not be of much use to the general public. The valuable informational role of the word “flat” depends on the fact that it is common knowledge that it applies to something just if that something has the property of being flat. Now the same goes for the naming procedures and information preserving causal chains containing proper names that we have lately been discussing. Not only are they, we argued, in fact common knowledge, they need to be in order for proper names to play the important informational roles they do in fact play for competent speakers in general. In this regard, proper names and words like “flat” are very different from terms like “inertial frame” and “progeria”. For words like these, we (or many of us) need tutorials from experts and help from dictionaries.
Sometimes the point of difference between causal descriptivism and the causal theory is put in terms of whether or not there is a commitment to a metalinguistic element in claims about how things are that we make using sentences like “Mary Smith works in this building”. Causal descriptivism – or causal descriptivism of the kind outlined above – is committed to there being a meta-linguistic element in such claims. Part of the information being passed on using the sentence concerns someone’s being named in a certain way.5 By way of contrast, it is argued that the causal theory – properly understood – has no such commitment. This might be thought a mark against causal descriptivism and in favor of the causal theory. The thought would be (often is) that when I produce the sentence “Mary Smith works in this building”, I am not giving information about words. I am giving information about Mary Smith herself. But in fact I am giving inter alia meta-linguistic information, and in fact information in part about the name “Mary Smith”. Here’s a way to see this.

4  Calling this a description theory can mislead, because the crucial point is that the thing has to have a certain property. However, this feature of the word is responsible for the word being a good word for describing. Of course, there is the question as to how “flat” (in English) gets to be such that it applies to something just if that thing has the property. A popular answer – to which I and many are broadly sympathetic, an answer that goes back to Locke at least – is that we have beliefs to the effect that things are flat, and we speakers of English have adopted the convention of using “flat” to pass on this belief. Incidentally, some seem to mean by a description theory for “W” the view that when we use “W” we consult a description that is in some sense before our minds. This is not what is meant here.
5  Of course, the name that appears in some current sentence need not be the name originally assigned. The “brick” that carries information may evolve over time and often changes when sentences are translated.

Imagine someone asks you to find out if the sentence “There is something red and round in this room” is true. You might have recourse to information about words. For example, you might utter that very sentence to someone you are confident knows the answer and go by what she says. You would then be using information about words to get the answer. But you don’t have to do this. You could simply have a good look around the room and see if there is or is not something red and round in it. By contrast, it is impossible to show that “Mary Smith works in this building” is true without having recourse to information about words, and in particular the words “Mary Smith”. You need to find them on an office door, or in the phone directory of the building, or to note the response of someone who works in the building to a question containing “Mary Smith” – as it might be, the question “Does Mary Smith work in this building?”. Absent any information about the words “Mary Smith”, you may find out that a woman who works on logic works in the building, that a woman who comes to work on a bike each day unless it is raining works in this building, and so on and so forth. But none of this tells you that the sentence “Mary Smith works in this building” is true. For that you need someone uttering, as it might be, the sentence “I am Mary Smith”, at a welcome party for new occupiers of the building in question, combined with your knowledge of who likely attends welcome parties.

8.3  Two-Dimensionalism

How do we collect information about how things are around us, information that we can then pass on with words, or gestures, or whatever? A good part of the answer is by having perceptual experiences, as we noted near the beginning. What has this to do with two-dimensionalism? The answer is that a very plausible claim about the nature of the information that comes from having a perceptual experience, when combined with what I will call linguistic modesty, supports a two-dimensional account of the semantics of a great many of the sentences we produce. I will spell this out shortly. But first I need to say something about the term “information”.
I am going to follow the not uncommon practice of using it to cover both information proper and misinformation, and in fact that has been implicit from the beginning, and was explicit in the example about passing on information about Hillary Clinton. On this usage, the sentence “The Earth is warming” gives information about the Earth whether or not it is in fact warming (though in the paragraphs below, I assume that the Earth is in fact warming). Now I can explain how we will understand information in this inclusive sense, and what I am about to say relates to the possible difference with Devitt I mentioned near the beginning.
What did we learn when we learnt that the Earth is warming? Something about the nature of the Earth we inhabit. It is warming. One way the Earth might be is that it is not warming. Another way is that it is warming. The research that tells us that the Earth is warming tells us that it is the second way that in fact obtains. This is how it tells us that the Earth is in a certain category, that of planets that are warming.

Why didn’t I say that what we learnt is that a certain proposition is true? The reason was to highlight what is of most interest to us. What is of most interest to us is the nature of the Earth we inhabit. The reason we are so engaged with the debate over global warming is its implications for how things on Earth will unfold, and that’s a question about the nature of Earth. It is, that is, a question about the category to which Earth belongs. Or consider what happens when we come across a sign reading “The water is safe to drink”, on a camping trip. We may well come to believe that a certain proposition is true, but what’s most important is what we learn about the nature of how things are in front of us. There are many ways things in front of us might be. The sign gives us the good news that the way things in front of us are is that a certain body of water is safe to drink.
The above is a plea for thinking of information in terms of partitions among possibilities, a way of thinking that is pretty much standard in statistics and probability theory, but is, I urge, a bit of folk wisdom all the same. The folk know that there are two relevant ways the water might be: poisonous versus safe to drink, and that the job of that sign is to reassure them that it is the second possibility that’s realized. If we partition the possibilities into those where the water is poisonous and those where it is safe to drink, the sign tells us that how the water actually is belongs to the second. Again, the folk who play poker know about possibilities and that reducing the possibilities for one’s opponent’s hand is getting (valuable) information about their hand. Again, those who buy shares are only too aware that there are many possibilities. They go to a good broker in order to find out which possibilities they can rule out, a process that failed some of us in 2008. Again, why are maps so useful? You are unsure whether to turn left or right to get to the airport. A look at a map saves you from missing your plane by telling you to turn left, and it does so by eliminating turning right as the way to get to the airport. Isn’t this common knowledge concerning how maps give information?
If the right way to think about information is in terms of the narrowing of possibilities, there is an attractively simple way of explaining how we get information from coming across words and sentences we understand. The sentences effect the required reduction. Take the camping trip example. You are wondering if the water is safe to drink. Coming across the sentence “The water is safe to drink” gives you the needed information by virtue of the fact that you know how things have to be for that sentence to be true. Thus – provided you are confident that the sentence is true – out of the possible ways things might be, you know the actual way is one where the water is safe to drink. That’s how the sentence gets to restrict the possibilities open for you. This is in effect what we said when urging earlier that a description theory of reference is true for the word “flat”. That word delivers information to those who understand it by virtue of their knowing how something has to be for the word to apply to it. This means that when they hear the word being applied to something by someone they trust, the open possibilities for that something’s nature are reduced to those where it is flat. In the example just given, the possibilities concern how some given object is: flat or not flat.
In the camping trip example, the possibilities concern how things are in the region in front of where you are. In other cases, the possibilities are for whole ways a world might be.

An example of this kind taken from astronomy is the debate between the steady state theory and the big bang theory. This was a debate about the overall nature of the world we occupy (in the sense of “world” that astronomers and the folk use “universe” for). And when we acquired the information that the big bang theory is true, we learnt that the actual world, our world, is one of the worlds where things started with a big bang. We acquired information to the effect that our world is located among the worlds that started with a big bang.
What is the right way to think of the possibilities in connection with the information that comes from perceptual experiences? The information that comes from a perceptual experience has a very distinctive character – as many have noted, which is not to say that they will agree with how I say things in what follows. The information concerns how things are relative to you yourself, but without giving any information about how you are over and above that things are that way relative to you. When you see something as being in front of you and moving, you get the information that you are one of those with something in front of them that is moving at the time of having the experience, but that’s the extent of it. If Mary Smith is having a perception at noon, 12 June 2000, as of an object in front of her as moving, the information she receives from having the experience will be information, properly speaking, just if there is an object moving in front of Mary Smith at noon, 12 June 2000. But she won’t get, from the experience itself, that information. She will get information that she’s one of those with a moving object in front of them, but not which one of them she is, or when it is except that it is whenever she’s having the experience, or where it’s happening except that it is wherever she is.
This means that, given the inclusive sense in which we are using “information”, we are using “tells” in a correspondingly inclusive sense, one on which, e.g., when something looks to be moving, what the experience tells one is that it is moving even if in fact it’s an illusion. The same goes for “teaches” etc.


Indeed, the survival value of the experience depends on the experience telling you something about how things are vis-à-vis you yourself. However, the experience does not “say” who you are. It says that you are one of those with a car approaching them, but not which one of them you are. This means that the information that your experience delivers to you is that you are one or other of the persons in the world you inhabit that has a car approaching them. Or to say this in terms of functions: your experience determines a function that goes from a person in a world to truth just if the person in the world has a car approaching them. Or to say it in terms of centered worlds: the information is a set of centered worlds (not a set of worlds) where the center in each world is a person with a car approaching them in that world.
What’s this got to do with two-dimensionalism? Two-dimensionalism is a thesis about language, and the foregoing remarks concern the information that comes from perceptual experience. But consider the sentence you as an English speaker might naturally use to report the information in the car example, namely, “There is a car approaching me”. In producing this sentence, you are making public the information being delivered by the experience. You are not, however, adding to it. That’s what I had in mind when I spoke above of linguistic modesty. Linguistic modesty says that the information the sentence makes public is one and the same as the information delivered by the experience, and is, accordingly, given by the set of centered worlds where the center in each world is a person with a car approaching them in that world. Well, not quite. There’s the informational role of the sentence token, the utterance itself – more on this shortly.
I am not disputing the common view that the sentence expresses a proposition that is a function of who produces the sentence, and where and when it is produced. Produced by Mary Smith, at 17:30, 25 December 2000, in Times Square, it expresses the proposition that a car is approaching Mary Smith at 17:30, 25 December 2000, in Times Square, and is true precisely when a car is approaching Mary Smith at 17:30, 25 December 2000, in Times Square. But that’s not the information being delivered by the sentence. Information isn’t that easy to get. Being in Times Square does not in itself give Mary Smith the information that she is in Times Square (though if she’s lost, she may wish that it did). The information being delivered by the sentence is one and the same as that delivered by the experience that typically justifies its production, as you would expect (with the heralded proviso that the token sentence has informational value, more on this shortly). But what’s that information? The information captured by a set of centered worlds, or, if you like, by a function that goes from a person in a world to truth just if the person in the world has a car approaching them at the time of having the experience. The upshot is that we need to acknowledge that an important part of the story about a good number of sentences – obviously, our car example is one among a host of like examples – is that they are associated with a set of centered worlds. And that’s what two-dimensionalism says. The “second” dimension is the set of centered worlds – the dimension we need to add to the set of worlds where a car is approaching Mary Smith at 17:30, 25 December 2000, in Times Square, in order to account for the informational role of the sentence.
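Schematically, and with merely illustrative notation (the symbols below are mine, and a center is treated, for simplicity, as a person together with a time within a world), the two dimensions in the Mary Smith case might be set out like this:

\[
P \;=\; \{\, w : \text{in } w \text{, a car is approaching Mary Smith at 17:30, 25 December 2000, in Times Square} \,\}
\]
\[
C \;=\; \{\, \langle w, x, t \rangle : \text{in } w \text{, a car is approaching } x \text{ at } t \,\}
\]

P is the proposition the token expresses; C is the centered content that, on the account being sketched, captures the information the token delivers. P locates a world among worlds; C additionally locates a subject and a time within a world, which is why P alone cannot capture what the experience, or the sentence reporting it, tells the subject about how things stand relative to her.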

I know some will insist that there are serious problems with taking the information provided by a token of “There is a car approaching me” to be given by a set of centered worlds, the key idea behind the case for two-dimensionalism just outlined. Let’s now look at three of them.7
Suppose I, FJ, utter “There is a car approaching me”, and you, addressing me, utter “There is a car approaching you”, and that both sentence tokens are produced at 17:30, 25 December 2000, in Times Square. The objection is that the information provided by the two utterances is the same. We secure this, runs the objection, by treating the information provided by both utterances as the set of worlds where FJ has a car approaching him at 17:30, 25 December 2000 in Times Square. But this is to abandon the centered worlds approach. Indeed, it is to embrace the kind of “making information too easy to get” approach that I rejected earlier.
My reply is that it isn’t true that the two utterances provide the same information. The utterance of “There is a car approaching me” provides the information that the producer of the sentence has a car approaching them; the utterance of “There is a car approaching you” provides the information that the person addressed has a car approaching them. Of course, if the person addressed in the second utterance is the person producing the first utterance, the two bits of information will stand or fall together, but that’s another question.
The second worry needs a little stage setting. If the centered worlds account of the information provided by “There is a car approaching me” is correct, then plausibly the very same set of centered worlds is the content of the belief that there is a car approaching me – in the sense of content tied to how a belief represents things to be. If I believe that there is a car approaching me, what I believe is that I am one of those with a car approaching them in whatever world it is that I am in. Now contrast two cases. In one, I believe early on that there is a car approaching me, but later believe that this is no longer the case (perhaps I see the car change direction). In the other, I believe early on that there is a car approaching me but come to believe that this was never the case (it was “a trick of the light”). In the first, I believe that things have changed; in the second, I believe that I was wrong. What’s the difference over time in the contents – in, that is, what’s believed at the two times – on the centered worlds approach? It might seem, in each case, that the change over time is a change from, early on, the content being a set of centered worlds with a car approaching its center to, later on, the content being a set of centered worlds without a car approaching its center. But this would mean that the change in content was the same. Something’s gone wrong. The two cases are importantly different.
My reply is that there is a token reflexive element in the information provided by perceptual experiences and, correspondingly, in the contents of beliefs based on them. When that’s incorporated into the centered worlds account, the difference between the two cases becomes apparent. In this respect, beliefs and experiences are akin to windsocks. A windsock’s being configured thus and so gives information about the direction of the wind relative to the windsock itself at the time the configuration is thus and so.

7  Stalnaker (2008). Others have worries of the same general kind; see, e.g., Holton (2015).

This is the sense in which the information provided has a token reflexive element. The same is true for the contents of beliefs about approaching cars. When I believe that there is a car approaching me here and now, I believe that the approaching is happening at the time of having the belief, and when I believe that there was a car approaching me, I believe that the approaching precedes the believing. Let’s now incorporate this into the centered worlds account of our two cases. In the first case – the one where what I believe changes because I believe that things have changed (the car is no longer approaching me) – the content of the early belief is a set of centered worlds with a car approaching its center at the time of having the belief, and the content of the later belief is a set of centered worlds without a car approaching its center at the time of having the belief. In the second case – the one where I change my mind about the car’s ever having been approaching me – the content of the early belief is a set of centered worlds with a car approaching the center at the time of having the belief, and the content of the later belief is a set of centered worlds without the car approaching the center at any earlier time than that of having the belief. The two cases come out differently, as they should.
The third worry concerns whether thinking of information in terms of centered worlds can explain how we transfer information using words. If I utter “The big bang hypothesis is true”, you will learn something about the kind of world you occupy (assuming you have some sort of understanding of my words and trust me to speak truly). For you know how the world I occupy has to be for my words to be true. You also know that you occupy the same world as I do (without of course knowing exactly which world it is – is it one of those where Brexit turns out to be a disaster or one of those where the UK confounds the critics, or …). You thereby learn something about the kind of world you occupy. However, this little story is in terms of the worlds at which “The big bang hypothesis is true” is true – that is, it is a story for sentences whose informational content is the set of worlds at which they are true. Is there a plausible story to tell for sentences whose informational content is instead a set of centered worlds? That’s the worry.
My reply is that there is a plausible story to tell for sentences whose content is a set of centered worlds. It is slightly more complicated, the complication coming from the fact that it needs to take into account the way token sentences give information about centers – something we promised to address earlier.8
It is a bit of folk wisdom that hearing “I have a beard” gives information about how things are relative to the token sentence, and more especially relative to the subject who produces the sentence token. There is in principle nothing more mysterious here than the role of a token of “when” in response to a request to say “when” while pouring a drink. For sentences like these, type and token work together to deliver information. And sentences with centered content are precisely examples of ones where type and token work together.

8  What follows is an improved, I trust, telling of the account in Jackson (2010). I am indebted to many for convincing me of the need for improvement, including some dissenters. A debt to someone I take to be, broadly speaking, an ally is to Weber (2013).


token work together. The type gives the set of centered worlds; the token gives the location of the center. Let me spell it out for our sample sentence, “There’s a car approaching me”. Understanding the sentence type tells a hearer that a token of the type expresses a sentence whose content is a set of centered worlds whose centers are persons with a car approaching them at the time of having the belief in whatever world it is that they inhabit. Trusting the producer of the sentence means that the hearer takes it that the producer of the sentence is in fact a person with a car approaching them at the time of producing the sentence, in whatever world it is that the producer inhabits. The hearer knows that they are in the same world as the producer. What the token adds is a way of finding the person with the car approaching them: look for the producer of the token. The hearer can then acquire information about how things are relative to they themselves, and if the so-located producer is close to them, the hearer may well acquire information that causes them to take evasive action.
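To fix ideas, the machinery of the last few paragraphs can be set out a little more formally. What follows is only one convenient regimentation, with the notation my own rather than anything the argument requires: it models a centered world as a world–individual–time triple, and nothing hangs on this particular way of writing things down.

\[
C(\text{``There is a car approaching me''}) \;=\; \{\, \langle w, x, t \rangle : \text{a car is approaching } x \text{ at } t \text{ in } w \,\}
\]
\[
\text{later belief, ``things have changed'':} \quad \{\, \langle w, x, t \rangle : \text{no car is approaching } x \text{ at } t \text{ in } w \,\}
\]
\[
\text{later belief, ``I was never right'':} \quad \{\, \langle w, x, t \rangle : \text{no car was approaching } x \text{ at any } t' < t \text{ in } w \,\}
\]

The sentence type fixes the set; a token produced by a speaker at a time supplies the centre, directing the hearer where to look: to the producer of the token at the time of its production.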

8.4  A Second Application of Linguistic Modesty

Suppose I am looking at a chair that is made of wood. This state of affairs – my looking at a chair that is made of wood – is an example of the contingent a posteriori. It will be neither necessary nor a priori that I am looking at a chair made of wood. Consequently, “The chair I am looking at is made of wood”, when produced by me, will be a contingent, a posteriori truth. But what about “The actual chair I am looking at is made of wood”? If we grant the popular view that composition is an essential property of an object and combine this with the fact that “the actual chair I am looking at” is a rigid designator, it follows that “The actual chair I am looking at is made of wood” is necessarily true.9 But the sentence is not a priori true. What, in that case, should we say about the state of affairs of the actual chair I am looking at being made of wood? Some say that it is an example of a state of affairs (world state, possible way things might be, etc.) that is metaphysically necessary (it could not fail to be the case) while being neither epistemically nor conceptually necessary (its being the case is a posteriori). Linguistic modesty says that this is the wrong way to go. One does not create new states of affairs by adopting a certain way of talking. Although I may choose to express matters using the sentence “The actual chair I am looking at is made of wood” instead of “The chair I am looking at is made of wood”, that does not mean that I have somehow created a new state of affairs. After all, if someone heard me utter “The actual chair I am looking at is made of wood”, all they would need to do to establish that I had spoken truly would be to ascertain that the chair I am looking at is made of wood, a state of affairs that obtains as a contingent matter of fact, as we note above.

9  What about worlds where the chair I am looking at does not exist? Is the sentence true at those worlds? We could finesse this question by conducting our discussion using “Anything that is the actual chair I am looking at is made of wood”.


Something similar applies to water’s being H2O. When chemists discovered that water is H2O, what they discovered was that the dominant potable liquid at room temperature on Earth that fills the rivers etc. is H2O. There was no “extra” experiment carried out to show that the liquid in question was water (and what would such an experiment look like?). The state of affairs that was found to obtain was, accordingly, one that obtained as a contingent a posteriori matter, for it is a contingent a posteriori matter that the dominant potable etc. liquid on Earth is H2O. However, many hold that the sentence “Water = H2O” is a necessary a posteriori truth. Suppose they are right.10 Linguistic modesty says that it would be a mistake to infer from this that there is a state of affairs that is metaphysically necessary while being neither epistemically nor conceptually necessary. We should, I think, be sensitive to the important distinction between the world and ways the world might be, on the one hand, and our talk about the world and the ways it might be, on the other. The fact that we can create sentences that are necessary a posteriori truths does not, in itself, tell us that we should grant that there are states of affairs that are necessary a posteriori.11 Of course, an obvious question to ask at this point is how a sentence can (i) be a necessary a posteriori truth, (ii) be, in some good sense, about the way things are, without (iii) admitting ways things are, states of affairs, world states etc. that are metaphysically necessary but not epistemically or conceptually necessary. That’s a topic for another time.12

10  And maybe the sentence should be read as “Any water = H2O”.
11  I hope I have always been alert to the difference between words and the world they are about, but I am sure I am indebted to reading Devitt (1984).
12  But see one of the two-dimensionalist treatments of the necessary a posteriori in, for example, Jackson (1994) or Chalmers (1996). The basic idea goes back at least to Tichý (1983).

8.5  Twin Earth for Two-Dimensionalists

By two-dimensionalists, I mean those swayed by the argument of the previous section but one. I mean, that is, those who insist that we need centered worlds to capture the information delivered by a whole range of sentences – or, better, need centered worlds plus the role of tokens in giving information about the centers. By Twin Earth, I mean the famous example in its our-world-bound version, in Putnam (1973). Despite its fame, it is perhaps sensible to have before us a quick recap of that version. In this version, Twin Earth is a planet in the same world (universe) as Earth; it is not a planet imagined to be in another possible world. Superficially Twin Earth is very like Earth. In particular, its inhabitants speak a language that has the same vocabulary as English. What’s more, they appear to use these words in much the same circumstances as we use them. They use “sky” for something above them that looks blue; “ocean” for a salt-tasting liquid that surrounds things they use “land mass” for; “water” for stuff that falls as something they


call “rain” from things they use “cloud” for, stuff which they have to ingest every so often to survive; and so on. However, at a deeper level, there is a big difference. The kind they use “water” for is XYZ, whereas the kind we use “water” for is H2O. This difference is at a deeper level in the sense that it takes serious science to establish it.13 The question Twin Earth raises is, does our word “water” refer to XYZ on Twin Earth, or would we have to invent a new word, say, “twater”, if we wanted to refer to XYZ. Many insist that our word “water” does not refer to XYZ. If we say that there is some water on Twin Earth, we can only be speaking truly if there’s some H2O somewhere or other on Twin Earth – the stuff they call “water”, the stuff which is in fact XYZ, does not count. It is not always entirely clear what the argument for this conclusion is. Is the claim that the conclusion drawn is intuitively self-evident? That’s hard to believe given that many do not find it obvious. Many urge that were we to discover Twin Earth, we should and would say that we have discovered that water comes in two forms, in somewhat the way jade does. Others urge that we would be undecided and would need to make a semantic decision.14 But surely, be all this as it may, this much is clear. It could, as a matter of fact, be the way we use the word “water”. Suppose then that we in fact use “water” in such a way that H2O counts as water and XYZ does not count as water. Now how we use words is in part a matter of semantic decision. In some cases, the decisions are explicit and made for some stated reason. This is how it often is for technical and semi-technical vocabulary. In the case of the vast majority of words, however, the decisions are implicit, evolve over time and are subject to negotiation, and it can be unclear when and how they were made. What isn’t unclear is one important rationale for making them: when we seek to share information, it helps to use words in ways that deliver relevant information. This means that for anyone sensitive to the need for centered worlds to capture the informational value of a good many words and sentences, there is an obvious account of what would rationalize a semantic decision to use “water” in such a way that in our mouths it refers to H2O but not to XYZ: use it for a way things are relative to we users of the word. Before the rise of modern chemistry we knew that there was a kind that fell from the sky, filled the rivers, was a clear potable liquid at room temperature, and all that. We wanted a word for this kind. It made sense to have a word for this kind, the kind we ourselves come across as we move through the world. The word “water” was then given the job of picking out this kind. As XYZ is not this kind – not the kind we Earthians come across as we move through the world – it does not count as water. What I have just offered is a little story about how it might be the case that “water” behaves as true believers in Twin Earth hold that it in fact behaves. There is a semantic decision that would make it the case that “water” behaves as they are convinced that it does in fact behave, a semantic decision that would have much to

13  I footnote the usual caveat. Maybe water’s being H2O is “philosophers’ science”, but the point could be made with less controversial examples.
14  Lewis (1994: 423–425).


recommend it when we bear in mind the role of a word like “water” in giving information about how things are, and why it would be good, in particular, to have a word that gives information about how things are relative to we Earthians. It seems to me that true believers in Twin Earth have made exactly this semantic decision – implicitly.
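The rationale can be given a compact centered-worlds statement. Again, this is just one way of regimenting the story, with the notation my own:

\[
\mathrm{ext}(\text{``water''}, \langle w, x, t \rangle) \;=\; \text{the kind that plays the watery role (falls from the sky, fills the rivers, is potable, and so on) in } x\text{'s environment in } w
\]

Evaluated at our centre, that kind is H2O; XYZ is not the kind we Earthians come across as we move through the world, and so does not fall in the extension of “water” in our mouths – which is just the semantic decision attributed above to true believers in Twin Earth.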

8.6  The Internalism-Externalism Debate

Thinking in terms of information, and especially in terms of how complex physical structures like you and me collect and transmit information about our surroundings, allows us to note important distinctions among ways of being an externalist about language, and equally about the mental states like belief and perception that underpin so many of our informational uses of language. One way of being an externalist is to insist, with commonsense, that our words and thoughts concern, by and large, how things are outside of us. This is how they give information about the kind of world we occupy, information whose correctness depends on the kind of world we in fact occupy. At a number of places, Devitt says that the folk conception of content is wide for essentially this reason.15 We should all be externalists in this sense. Another way of being an externalist is to insist that I and my duplicate from the skin in utter sentences and have beliefs with different truth-conditions.16 I and my twin, when subjected to any given stimulus, will produce exactly the same words. This follows from our identity in internal nature and consequent identity in functional profile. We are, in this regard, like duplicate Geiger counters: their identity in internal nature means that for any given surrounding level of radiation, they produce exactly the same clicks, and the explanation of how they get to produce those clicks is the same for each. However, it does not follow from this that the truth-conditions of our utterances and beliefs are the same. This is because, as we highlight earlier in this essay, so much of what we say and believe concerns how things are relative to ourselves. The sentence “There is something moving in front of me” gives information about how things are in front of me; ditto for the same sentence in my twin’s mouth. But the conditions under which my utterance is true concern how things are in the region in front of me, whereas the conditions under which my twin’s utterance is true concern how things are in front of him. Obviously, the same points apply to my beliefs and the beliefs of my twin. For instance, the belief I express with the words “Something is moving in front of me” differs in the conditions under which it is true from the belief my twin expresses using those words.

15  As he says, it is an undoubted fact that “folk theory is ‘wide’” (Devitt 1985: 218).
16  Were I a brain in a vat, or someone who has contracted out some of their “brain-mind work” to the cloud, we’d have to phrase the issue differently, of course.


Well, not quite, and this is something we learn from the discussion of two-dimensionalism above. It depends on how we understand what it is to differ in truth conditions. Within any given possible world containing me and my twin, what’s needed for my utterance and belief to be true differs from what’s needed for his utterance and belief to be true. In my case, it is the nature of the region in front of me that is crucial; in his, the nature of the (different) region in front of him. However, what’s required of where I am located for my utterance of “Something is moving in front of me” to be true is the same as what’s required of his location for his utterance of “Something is moving in front of me” to be true. In each case, we need to be at a location with something moving in front of us. And the same goes for the beliefs we express using “Something is moving in front of me”. For our beliefs to be true, we each need to be at a location with something moving in front of us. Or, to say it in terms of functions (as we do above for the example of having an experience as of a car approaching one): for each of us, there is a function that goes from a person in a world to truth just if there is something moving in front of that person in that world. In that sense, the utterances and the beliefs have the same truth conditions. The function is the same for each of us. Here’s a user-friendly way of saying it. Both I and my twin say and believe the same thing about ourselves when we use the sentence in question, namely that we are one of those with something moving in front of us, and what we say and believe have the same truth conditions in the following sense: our sayings and beliefs are true just if we belong to the class of persons with something moving in front of them. What’s the upshot for externalism specified in terms of difference in truth conditions? In one sense, I and my twin produce sentences and have beliefs with different truth conditions, in another sense we don’t.17
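In the functional terms just used, and with the caveat that the notation is mine rather than anything the argument requires:

\[
f(\langle w, x \rangle) = \text{True} \quad \text{iff something is moving in front of } x \text{ in } w
\]

One and the same function f evaluates my utterance and belief and my twin’s; within any given world what differs is only the argument at which it gets evaluated – my centre or his – and that is why our truth conditions are the same in one sense and different in another.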

17  Thanks to the many who have discussed these issues with me over the years, with special thanks to David Braddon-Mitchell, David Chalmers, David Lewis, Alex Sandgren and Daniel Stoljar.

References

Chalmers, D.J. 1996. The conscious mind. New York: Oxford University Press.
Devitt, M. 1984. Realism and truth. Oxford: Basil Blackwell.
———. 1985. Critical notice of The varieties of reference by Gareth Evans. Australasian Journal of Philosophy 63: 216–232.
———. 1989. The revival of ‘Fido’-Fido. In Cause, mind, and reality, ed. J. Heil, 73–94. Dordrecht: Reidel.
———. 1996. Coming to our senses: A naturalistic program for semantic localism. Cambridge: Cambridge University Press.
———. 1997. Précis of Coming to our senses: A naturalistic program for semantic localism. Philosophical Issues 8: 325–349.
Holton, R. 2015. Primitive self-ascription: Lewis on the de se. In A companion to David Lewis, ed. B. Loewer and J. Schaffer, 399–410. Malden: Wiley–Blackwell.
Jackson, F. 1994. Armchair metaphysics. In Meaning in mind, ed. M. Michael and J. O’Leary-Hawthorne, 23–42. Dordrecht: Kluwer Academic Publishers.
———. 2010. Language, names, and information. Oxford: Wiley-Blackwell.



———. 2011. On Gettier holdouts. Mind and Language 26: 468–481.
Kripke, S. 1980. Naming and necessity. Oxford: Blackwell.
Lewis, D. 1994. Reduction of mind. In A companion to the philosophy of mind, ed. S. Guttenplan, 412–431. Oxford: Blackwell.
Putnam, H. 1973. Meaning and reference. Journal of Philosophy 70: 699–711.
Schellenberg, S. 2013. Experience and evidence. Mind 122: 699–747.
Stalnaker, R. 2008. Our knowledge of the internal world. Oxford: Oxford University Press.
Tichý, P. 1983. Kripke on necessity a posteriori. Philosophical Studies 43: 225–241.
Weber, C. 2013. Centered communication. Philosophical Studies 166: 205–223.

Chapter 9

Michael Devitt, Cultural Evolution and the Division of Linguistic Labour

Kim Sterelny

Abstract  There is a general consensus that there is a division of linguistic labour and that it is important in explaining the expressive power of human language: our ability to talk about phenomena beyond the reach of our own experience. But there is disagreement between Michael Devitt and defenders of causal description theories as to how that division is sustained in a linguistic community. Causal description theorists argue that we have indirect ways of specifying the referential targets of our names and terms; Devitt (on behalf of causal theories) argues that the doxastic prerequisites for referential competence are much more minimal. It is unclear how to resolve this debate, as appeals to intuitions about particular cases have little evidential weight. This paper explores a way forward, by seeing the division of linguistic labour as a special case of cumulative cultural learning. There is to hand a rich (though highly contested) literature on the cognitive prerequisites of cumulative cultural learning; one aim of the paper is to connect these two literatures. The more substantive aim is to distinguish between the cognitive demands of the vertical and the horizontal transmission of referential competence (that is, intergenerational versus within-generational transmission of that competence) and to suggest that while Devitt’s minimalism is a plausible view of the requirements of vertical transmission (for these are environmentally scaffolded in various ways), something closer to causal descriptivism is more plausible for the horizontal cases.

Keywords  Division of linguistic labour · Evolution of language · Causal theories of reference · Causal descriptive theories of reference · Cumulative cultural learning · Reference borrowing

K. Sterelny (*)
School of Philosophy, Research School of the Social Sciences, Australian National University, Acton, Canberra, ACT, Australia
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
A. Bianchi (ed.), Language and Reality from a Naturalistic Perspective, Philosophical Studies Series 142, https://doi.org/10.1007/978-3-030-47641-0_9


9.1  The Division of Linguistic Labour

The aim of this chapter is to connect two literatures. One is a literature central to Michael Devitt’s work on the philosophy of language: how to incorporate the division of linguistic labour into a theory of reference. The classic papers of Putnam and Kripke in the 60s and 70s showed that we are not referentially self-reliant: I can participate in conversations about Maxwell without understanding his distinctive contributions to physics, or without knowing identifying details about his personal history; indeed, even if I have confused Maxwell’s contributions with those of Faraday. Likewise, as Putnam made vivid, we can all refer to many of the kinds in our natural environment that we could not identify or specify (Putnam 1975; Kripke 1980). At a suitably coarse level of description, there is no controversy about how this is possible. I can talk about Maxwell and Faraday, about Buckminsterfullerene or LIPs (= Large Igneous Provinces) because I am embedded in a linguistic community, one with considerable historical depth, and that community includes agents whose connections to Maxwell, Faraday, Buckminsterfullerene, LIPs and the like are more intimate, more direct, than my own. I am linked, networked, more or less directly to those agents on (for example) the Buckminsterfullerene coalface. Hence I have the ability to use that term in language and thought to (for example) ask questions about it, pass on opinions about its industrial uses, or look it up on Google. At a less coarse level of description, there is serious debate about just what those at the coalface need to know or do to be able to successfully launch a new term into the linguistic community, and debate about the nature of the links that connect the inexpert to the more expert, once that term has been launched. The issue has been how much information needs to flow down the links for a novice to acquire the capacity to use the term to pick out the kind, and to act themselves as a link, connecting further speakers into the network of the term’s users. Even here there is a consensus that the thought experiments of Kripke and Putnam (about, for example, Kripke’s Gödel/Schmidt example) showed that speakers did not have to have knowledge connecting term and referent of the most obvious kind. They did not, for example, need to know the most famous deeds of the famous, or the uniquely identifying, kind-making characteristics of natural kinds. Devitt has staked out a minimalist position with regard to both questions1: the experts who “ground” a term on a kind (or an individual) must have (or have had) perceptual interaction with the kind, in which some information about the referential target has been picked up. But they do not need to be able to specify the kind; they do not need to know a uniquely identifying description, or anything that comes close to that. Moreover, the network can grow from link to link just through linguistic perception. I listen to someone talking about Buckminsterfullerene, and so long as I understand the sentence well enough to parse it, and realise that the speaker is talking about a kind of stuff (and so long as I remember the term), I can

1  In Devitt 1981; restated in the two versions of Devitt and Sterelny 1999, and sundry papers.



then talk about that kind of stuff myself. In doing so, I then provide the perceptual opportunity for others to pick up that very capacity. An alternative view is that both launching a term and acquiring it through conversational interaction with others using it are more informationally demanding. Users of the term have to be able to specify the stuff, even though, in most cases, that specification is indirect: I know what Buckminsterfullerene is, because I know that Buckminsterfullerene is the stuff carbon chemists call “Buckminsterfullerene”. I know Gödel is the mathematician other mathematicians call “Gödel”. For a clean and crisp formulation of this view, see Jackson 2010. As Jackson sees it, the division of linguistic labour is allied to and depends on a division of epistemic labour. For in the division of linguistic labour, we depend on experts, and we know that we depend on experts.2 In a shocking betrayal, in Sterelny 2016, I argued for a Jacksonian view of the informational competences of users of a term. The idea was that the flexibility of semantic competence was hard to reconcile with Devitt’s automatic, quasi-reflexlike picture of the introduction and transmission of a term from one speaker to the next. A minor expression of that flexibility is our ability to co-opt a name (or a term) for a new use, as in the making of metaphors or the coining of nicknames; as in using “Trump” as a nickname for a particularly greedy dog. This is a very minor instance of a most remarkable and important capacity: we can expand our lexicon, at will, at need, on the fly. Our capacity to expand our lexicon contrasts sharply with other aspects of language, which are apparently fully automated and which seem to change mostly by unnoticed, accidental variation and contagion: phonology, morphology, syntax. No speaker of a language coins a new phoneme (or a new morpheme like a tense marker, or a new syntactic form like a new way of forming wh-questions) on the spot, and smoothly integrates it into their language. I can coin a new name – for the recently arrived dog – and smoothly integrate it into my language. If it is apt, it is likely to catch on and become part of the local dialect. This strongly suggests that agents represent the referential, semantic properties of words – we notice them – in ways that they do not represent their phonological or morphological properties. Moreover, this is a central aspect of their semantic competence, not a peripheral one. So I bought into something like Jackson’s causal descriptivism. In using a referential term, speakers are aware of, and buy into, the causal networks that link their uses of that term to its bearer. Our capacity to use names to integrate information about spatiotemporally distant places and things into a single set of linked representations connected by a shared name or term depends not just on the existence of these sociolinguistic networks. It depends as well on our recognition of their existence, and on our intention to use a name in a further extension of the network. Thus in using “Trump” as a name for the next-door neighbour’s dog, I am displaying my awareness of the network, while not intending to extend it. Play shows some awareness of what we are playing with. I am not sure exactly what a competent user of “Trump” knows, in virtue of being a competent speaker.

2  This knowledge may well be tacit, but if so it’s tacit in a shallow way, relatively easily prompted; it is not information sealed off in an encapsulated cognitive subsystem.


Certainly they need nothing like an explicit commitment to any semantic theory. Nonetheless, referential competence is not reflexlike, nor completely isolated from reflective understanding. Devitt was not convinced (personal communication). I aim to push the issue further in this paper through the lens of an analogous debate in the literature of an adjacent but directly relevant field: cultural learning (or social learning; I use these terms interchangeably) and cultural evolution. There is an ongoing debate in the field of cultural evolution about the cognitive demands of cultural learning. Many animals learn socially. Human social learning is distinctive. In contrast to all or almost all other animals, humans depend on cumulative cultural learning. Generation N + 1 come to possess the informational resources of generation N, with reliability that is high enough for cumulative improvement, as generation N + 1 adds to its cognitive capital as an endowment for N + 2, and so on. To some extent these resources are reconstructed each generation, just as the biological adaptations of an organism are reconstructed each generation, as a result of an information flow from parent to offspring. To some extent these resources persist in ways analogous to the persistence of the physical infrastructure one generation inherits from its predecessor. Indeed, the informational legacy of a generation is in part embodied in persisting material culture. But likewise institutions, customs, practices, patterns of interaction persist, as humans have overlapping as distinct from discrete generations. Children do not have to remake their culture each generation, as they do have to remake their perceptual systems and other morphological adaptations each generation.3 They join rather than reconstruct a culture, and in general, they do not need to have a full and accurate representation of their culture in order to become part of it. Language is a special case of cultural transmission. In particular, it is a special case of cumulative cultural transmission: of the preservation and incremental improvement of a culture’s stock of cognitive capital. The lexicon of a language, at the very least, is a cumulative, multi-generation achievement. Even staunch defenders of Chomsky’s nativism accept that. That is obviously true of English with its enormous lexicon, but it is also true of many small culture languages, which often have rich technical vocabularies for local flora and fauna, and for local technologies and techniques. So too is the formation and spread of a sociolinguistic network that supports the use of a newly coined term; that too adds to the culture’s toolkit. Theorists of cultural evolution distinguish between horizontal transmission (cultural learning within a generation) and vertical transmission, across a generation; cultural evolution is much accelerated by the presence of horizontal transmission, which spreads an innovation through a population much more rapidly than could be done through vertical transmission plus selection alone. As the division of linguistic labour has mostly been discussed, it is a special case of horizontal transmission; I shall consider vertical transmission in the final two sections and discuss important

3  This is an important difference between cumulative cultural evolution and cumulative biological information. Almost all of the biological structures of a multi-celled organism do have to be rebuilt from the beginning by each generation: a fact that makes inherently plausible the idea that gene flow from parents to offspring is a set of instructions for rebuilding an organism.


differences between those two modes of transmission. In those sections, I embed the issue about the requirements of the division of linguistic labour within these broader questions about cultural transmission, but first let’s turn to these evidential issues about reference as Devitt has canvassed them.
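The claim above that horizontal transmission spreads an innovation through a population far more rapidly than vertical transmission plus selection can be made vivid with a toy simulation. The sketch below is purely illustrative and is not drawn from any of the models cited in this chapter; the population size, the fitness advantage, and the copying rule are arbitrary choices, and Python is used only to keep the sketch short.

import random

def spread(pop_size=200, generations=60, advantage=1.5,
           horizontal_rounds=0, initial_innovators=10, seed=1):
    """Toy model of the spread of one beneficial cultural variant.

    Vertical step: each member of the next generation copies a member of the
    current one, chosen with probability proportional to fitness (innovators
    have fitness `advantage`, everyone else fitness 1.0).
    Horizontal step: in each of `horizontal_rounds` rounds, every
    non-innovator looks at one randomly chosen peer and adopts the innovation
    if that peer has it.
    Returns the first generation at which more than 95% of the population has
    the innovation, or None if that never happens within `generations`.
    """
    random.seed(seed)
    pop = [True] * initial_innovators + [False] * (pop_size - initial_innovators)
    for gen in range(1, generations + 1):
        # vertical transmission with selection (fitness-weighted resampling)
        weights = [advantage if has else 1.0 for has in pop]
        pop = random.choices(pop, weights=weights, k=pop_size)
        # horizontal transmission: within-generation copying from peers
        for _ in range(horizontal_rounds):
            snapshot = list(pop)
            for i in range(pop_size):
                if not pop[i] and snapshot[random.randrange(pop_size)]:
                    pop[i] = True
        if sum(pop) > 0.95 * pop_size:
            return gen
    return None

print("vertical transmission plus selection only:", spread(horizontal_rounds=0))
print("one round of horizontal copying per generation:", spread(horizontal_rounds=1))

On typical runs, the population with even one round of within-generation copying reaches near-fixation in a small fraction of the generations needed by the vertical-only population, which is the asymmetry the text trades on.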

9.2  Intuition and Evidence

Debates in the theory of reference have mostly been conducted by appeal to intuitions about actual and possible cases. Most strikingly, in Kripke’s and Putnam’s original arguments for causal theories of reference, Kripke described a scenario – Gödel as a mathematical imposter stealing the work of Schmidt – and Putnam described a world phenomenologically indistinguishable from ours in which the water-like substance was not H2O. They then asked: who or what in those circumstances does “Gödel” or “water” refer to? These debates have not been entirely driven by particular intuitions about particular cases. For instance, just about everyone agrees that any theory of the semantics of names and terms must be consilient with a compositional semantics of the more complex expressions of which names and terms are a part. Their semantics must contribute in a systematic way to the semantics of more complex structures (especially puzzling ones involving belief sentences and the like). Thus Fodor rejected prototype theories of word meaning on the grounds that they failed this compositionality requirement. But intuitions about particular cases play a central role, including intuitions about compositional cases: for example intuitions about whether sentences of the form “Harry believes that X is in India” all have the same truth conditions when X is replaced by an empty name. Devitt has worried long and hard about the evidential role of intuitions in a theory of reference (in what follows, I rely particularly on Devitt 2015). On the one hand, he points out that agents’ opinions about whether a name (say) refers to a particular person are at best indirect evidence about the referential properties of an utterance. In his view, direct evidence is evidence derived from the corpus of language in use, evidence about utterances in their context, though as he notes, it is very difficult to test theories of reference against such evidence. However, he does not entirely discount intuition-based evidence. Intuitions, according to him, derive from (implicit) theories of semantic phenomena: implicit theories in lay speakers, when prompted by experimental philosophy, somewhat more explicit when the intuitions are those of philosophical experts, generated in their professional practice. In his view, when the intuitions are about actual cases (or hypothetical cases which are very similar to actual cases), ordinary speaker intuitions are likely to be truth-tracking. As the hypothetical situations diverge sharply from routine ones, our caution about reliance on intuition should increase; especially about the intuitions of lay speakers, but also those of the trained philosophical mind. I suspect we should be even more cautious of intuitive judgement than this. Like Devitt, I think intuitive judgement reflects the implicit information and misinformation of the agent: they are judgments when the agent lacks full introspective access


to the bases of their judgment (Mercier and Sperber 2017). Unlike Devitt, I doubt whether it is helpful to characterise agents’ representations of a domain as their theory of that domain. In many domains of skilled practice, those representational structures are likely to be varied in format and only partially integrated4: not a single coherent body of information represented in a language of thought. I will explain my scepticism about semantic intuitions through a contrast with a good case, where we can and do rely on intuitions. For in cases where an agent has genuine expertise, their immediate judgements are often highly reliable. Consider practical skills like stone knapping. Contemporary knappers are often trained archaeologists with a theoretical understanding of stone types and fracture dynamics. But there are plenty of skilled amateurs who are not, and yet still make rapid, accurate and somewhat introspectively opaque judgements about raw material selection, choice of hammerstone (and whether to use a hammerstone rather than a wood or bone hammer); force and angle of strike (for the skill involved, see Hiscock 2014). In some sense, using language is a skill too (language is certainly a social tool), but there are characteristics of artisan expertise which (a) are important to the reliability of their intuitive judgements, and (b) do not seem to be features of the use of language. Thus:
(i) To some extent, knapping skill is wholly implicit know-how (“muscle memory”), but it is supported by and to a degree integrated with publically shared, explicit information: knapping lore. This is rich: it includes a technical vocabulary for stone types, different stone working tools, different techniques (hard vs. soft hammers; pressure flaking vs. impact flaking, and so forth). This lore is not a recent invention: see Stout 2002 for an ethnographic description of this lore in a still-living stone tool tradition in New Guinea.
(ii) On the basis of this partial representation of their own skill, an expert knapper (and this is true of artisan skills in general) can demonstrate (not just display)5 elements of their skill to novices: segmenting it into constituents, slowing it down, repeating, exaggerating difficult elements. They can diagnose and correct errors in their own productions and those of others. They can improve by targeted practice, and can organise a sequence of practices to improve their own skills or those of others.
(iii) There are clear, unambiguous, publically observable success and failure conditions, and so genuine experts can be identified.
(iv) In virtue of (iii), we can often tell whether an intuitive judgement was right or not. So we, and the experts themselves, can calibrate and improve the reliability of these different intuitive judgements.
Much of this is true of the rhetorical features of language: how to present your ideas clearly, convincingly, appealingly, memorably. Speech writing is an artisan craft analogous to knapping. But consider how little of this is true of the semantic

4  So for example, in an agent’s normative life, there are often inconsistencies between their reflexive and their considered opinion; I take this to be evidence of partial integration (Sterelny 2010).
5  For the importance of this distinction, see Csibra and Gergely 2011.


features of language. Agents may display their referential competence, but no-one demonstrates how to use a name or a kind term. The technical vocabulary of folk semantics is impoverished compared to those of craft skills. Success conditions are not unambiguous, public, observable. How can we tell whether Max – who has confused Aristotle’s main ideas with those of Plato – is referring to Aristotle when he talks about Aristotle’s theory of forms? In short, the reasons we have to trust intuitions seem largely absent when we consider agents’ judgements about reference. I doubt whether philosophers’ intuitions have much more weight. Certainly, philosophers have thought hard and explicitly about semantics in general and reference in particular. But, first, that helps only if their explicit thoughts are largely true, and that is partly what is at issue. Moreover, even if their explicit thoughts were largely true, that helps only if those thoughts are integrated into the cognitive processes that drive their judgements about particular cases. That is not trivial: pathologists have excellent theories about diseases and their causes, but they need, and must work long and hard to acquire, the capacity to apply their theoretical understanding to particular cases. The fact that defenders of description theories took Kripke’s Gödel/Schmidt example to be a prima facie problem for their views suggests that their theoretical understanding of reference did not drive their judgements about particular cases. If so, why take philosophers’ intuitions more seriously than those of lay speakers? It is true that philosophers are more practiced: they spend a lot of time and thought thinking about problematic hypothetical cases. But practice only leads to greater skill when coupled to feedback about success and failure: practice at the nets would not improve a leg spinner if he or she had no information about the trajectory of the ball after release. Philosophers’ practice in thinking about referential examples is not coupled to any correcting feedback loop. Devitt himself has long argued for the importance of moving beyond judgements about particular cases, but one of his positive ideas – appealing to either spontaneous or experimentally induced corpus data – is not very convincing (Devitt 2015). The problem is that in contrast to phonological and morphological features of language, the referential properties (if any) of utterances in a corpus are very far from being observationally scrutable. In one of his exploratory papers, Devitt reports an experimental attempt to overcome this problem by probing what subjects say about the beliefs of others, when those others have been depicted as having false beliefs about their referential targets. As he himself discusses, the procedure was subject to experimental artefacts. But even if it had not been, it is very similar to eliciting intuitions.6 Whether speaker A counts speaker B as having beliefs about Shane Warne is only indirect evidence about whether speaker B actually has beliefs about Shane Warne. In this paper, I shall try another idea. Perhaps information about the cognitive demands of cumulative cultural learning in general can give us some insight into the particular case of what an agent must understand in order to be part of reference

6  Nor would it have discriminated between causal descriptive theories and causal theories of reference.


borrowing networks, to be part of the division of linguistic labour. For the division of linguistic labour is a form of that learning. Indeed, it is a particularly difficult form of cultural learning. Even when we restrict ourselves to the demands on learning the lexicon of a language: (i) there is a large stock of items to learn; (ii) to a good approximation, each is arbitrary; (iii) learning is error-intolerant: small differences between a model and attempts to reproduce it often matter; (iv) utterances are ephemeral. They are like actions rather than products. A tool sticks around; a novice can pick it up, turn it over, come back for a second look. So language is a difficult learning target; the only saving grace is that novices are exposed to lots of language. Importantly, this is no more than a project of exploration, just as Devitt’s recent work on testing semantic theories is an exploration. For there is no received view on the cognitive prerequisites of cumulative cultural learning in general, and nor is it likely that a single account will fit every case. It is also true that the leading frameworks of cultural evolution do not focus on the establishment of sociolinguistic networks that support the division of linguistic labour.

9.3  Two Conceptions of Cultural Evolution

One of the central debates in the field of cultural evolution focusses on the cognitive character of social learning, and on the relationship between fidelity and the cognitive demands on cultural learning. On one very influential view associated with Robert Boyd, Pete Richerson and their co-workers, social learning is relatively undemanding. That view reflects a long tradition of evolutionary modelling of cultural learning, in which asocial learning is seen as accurate but expensive, whereas social learning is much cheaper (avoiding the costs of trial and error), but less accurate, both because of noise in the model-novice linkage, and because information becomes outdated. So on the “Californian” view, I can learn the customs, social norms and techniques deployed by members of my social group in a semi-automatic way. Social learning is, often, copying without understanding. A novice does not need to understand a technique (say: how to retouch a stone tool) in order to learn it; still less does a novice need to understand the social impact of a custom of hospitality in order to imbibe that same norm. Indeed, there is evidence from the automatic imitation literature that agents tend to match their gestures, posture, conversational style to their social partners without even realising that they are imitating (Heyes 2011). The cases on which the Californians focus are not as automatic as these: the agents are typically conceived of as intentionally learning. Moreover, that learning might well be guided by an intelligent, strategic choice of models. However, on this view, a girl, in learning from her mother and her older female relatives the patterns of dress and deportment typical in her community, need not understand that she is learning the norms for gender role in her community. In learning, say, how to process flax leaves to make cloth, she need not understand why each stage in the process is necessary in the staged transition from soaking the flax, drying the leaves, beating them to separate the strands, and weaving the fabric. The idea


is that social learning is a copying process through which a novice can copy adaptive behaviour from an expert without understanding the route through which that behaviour has its functional outcome. Cultural learning is cheap; it has low cognitive demands and is often noisy. But with enough redundancy, and if it is linked to feedback processes which reward success, it can drive cumulative improvement in a culture over time, even if the causal bases of success are not understood.7 There is an alternative view, mostly associated with Dan Sperber and his colleagues, which sees cultural learning as more cognitively demanding, rejecting the view that (typically) it is rightly conceived of as a form of imitation or copying.8 Human cultures generate a dense cloud of public representation; humans signal not just in language, but in their dress, their physical interaction, in the artefacts they buy, in the dwellings they inhabit. Almost all of these representations are ephemeral: everyone ignores nearly all of the public information to which they are exposed. Some, however, is picked up. But the process is highly selective, and what agents do with the representations that they do pick up depends on what they already know or believe, on their intentions, and on how their mind is tuned. We all have cognitive biases which make some potential social inputs salient and easily absorbed; others salient but difficult to master; others not salient at all. So, crucially, when agents do take note of social inputs, that process is transformative; those public inputs have to be re-coded to become part of the mental life of the social receiver. It is possible that the receiver, having made this information (or mis-information) her own, will re-express it in a form that is publically similar to her own inputs. But that is just one possibility. What comes in may not be re-expressed at all. If it is re-expressed, that re-expression may be transformed through synthesis with other social inputs and/or the internal cognitive biases of the agent. High fidelity social traditions exist, but they are special cases, not the default expectation from social learning, and their existence requires special explanation (Sperber 1996, Claidiere et al. 2014, Morin 2016; see Sterelny 2017 for a more detailed critical comparison of the two frameworks). Obviously this general characterisation of cultural learning resonates with Jackson’s reflective conception of the sociolinguistic networks that sustain the division of linguistic labour. In an important critical review of these two approaches, Celia Heyes develops a line of argument that is broadly supportive of the Parisian view9: cumulative cultural

7  For recent overviews of this conception of cultural evolution, see Boyd 2016 and Henrich 2016.
8  Michael Tomasello contrasts with the Parisians in taking cumulative culture to be the core phenomenon to be explained in giving an account of human cultural learning. But he is in agreement with the Parisians in thinking that cumulative culture depends on very distinctive, high level features of cognition. Fidelity depends on our most sophisticated cognitive capacities, not our more routine and widely shared ones. Fidelity depends on imitation, on sophisticated theory of mind (joint and collective intentions), and on the capacity to represent the structure of joint actions in an agent neutral way (in his terminology, a “bird’s eye” representation of the organisation of collective action). For a brief overview, see Tomasello 2016. For a full elaboration of his views, see Tomasello 1999 and 2014.
9  Though she also points out that this whole debate has been conducted largely in the conceptual framework of folk psychology, and suggests reframing the debate as one about the relative importance of type one versus type two cognitive processes: type one processes are fast, automatic, relatively opaque to introspection, parallel; type two processes are slow, top-down, somewhat transparent to introspection, serial (Kahneman 2011).


transmission is mediated by understanding and reflection rather than by relatively automatic, subdoxastic imitation (Heyes 2018). She argues that the cognitively simple “copying” model of transmission can indeed support high fidelity transmission in a single learning episode. But she points out that cumulative cultural transmission requires iterated, high fidelity transmission, as one cultural variant becomes established through both horizontal and vertical flow as the typical variant for the community; thus being broadly available for improvement, an improvement which must itself be transmitted with high fidelity through the group. Heyes points to aspects of the Californian picture that tend to undermine iterated fidelity10: most importantly, any tendency to learn from multiple models will tend to edit out the improved version, which is necessarily rare early in the process. The most obvious way an improved version of the standard trait can spread is through direct or indirect recognition that it is an improvement, and that relies on understanding. More generally, Heyes argues that iterated fidelity requires what she calls “metacognitive rules”: the novice assesses her own and the model’s respective capacities, and commits to (and will invest in) accurate and detailed matching of the target’s competences. An example is “Copy the boat design of the boat owner with the largest fleet”, a rule likely to lead to cumulative improvement if fleet size is correlated with seaworthiness. However, Heyes’ case for the connection between metacognitive rules and iterated fidelity is based on a small set of cases, and even if it fits those cases, it is not easy to see how it might apply to the transmission of a referring term through a sociolinguistic network. Likewise, lexical transmission is not a focal example of the Californians or the Parisians. In contrast, Dan Dennett’s conception of cultural evolution does focus on words and on language (Dennett 2017). Dennett situates his memetic view of word transmission within a general conception of human cultural evolution; a picture in which cultural change in human life becomes less clearly an evolutionary process over time. Adopting some machinery from Godfrey-Smith 2009, he identifies three dimensions in which the mechanisms that drive cultural change vary over time. (i) Understanding. To what extent does the transmission of a cultural variant depend on the receivers’ understanding that variant and its function? Applying this to language, do you need to understand that a word is a word or what it means in order to learn it socially? In Dennett’s jargon: can you have competence without comprehension?11 (ii) Search. Is the search for solutions (and hence the means through which new variations are added to the pool) directed by individual intelligence and insight (as Pinker 2010 argues) or is it quasi-random trial and error?

10  Joseph Henrich and colleagues have developed models which they argue avoid the need to posit high fidelity transmission to explain cumulative culture (Henrich and Boyd 2002). But it is not clear whether the assumptions about learning on which these models rely are realistic; and not at all clear how they could apply to the transmission of a term through a reference-borrowing network.
11  His paradigm of competence without comprehension is termite mound building: no-one suggests that speaker use of language is quite so lacking comprehension as that.


(iii) Organisation. Is culture and change essentially bottom up, the aggregate result of individual decision making and choice within populations, or is it organised and structured, so that variation is both generated and constrained top down? Dennett’s core general thesis is that earlier in hominin evolution, cultural change was genuinely Darwinian. Transmission did not depend on comprehension. Search was trial and error rather than directed. Human groups lacked institutional structures that could drive change from the top, and so changes were generated by the aggregated effects of individual decisions, not much structured or constrained by institutions. The process of cultural change gradually de-Darwinised, in all three of these dimensions. However, in his view, that is less true of language than of many other aspects of culture. On Dennett’s view, the core features of language evolved through Darwinian cultural evolution (with some gene-culture coevolution), and languages still change this way. For the most part, the lexicon of a language is not the result of top-down processes or directed search. It is mostly the result of Darwin’s “unconscious artificial selection”. Terms catch on and are taken up because they usefully fill lexical gaps, or divide semantic space in ways that are locally useful. Others fail to take off or fade away as the world changes. But lexical change is piecemeal, local, unplanned. Words are mostly added and retained by bottom up individual agent choices, not top-down deliberate design. Selection is unconscious: words are added that happen to fit public needs and which are easily processed and recalled by the cognitive mechanisms agents happen to have. Moreover, Dennett suggests that sociolinguistic networks which support the division of linguistic labour can build competence without comprehension: he suggests that there may be (or have been) languages without a word for words, or without other metalinguistic apparatus. Even if that is not true of recent languages, surely early protolanguage had no word for words, and he suggests that there is no reason to assume that such speakers had a concept of word. An agent can learn to recognise individual bird species without a general concept of bird. Likewise, the argument runs, an agent could recognise (and reproduce) word types, without the concept of a word. I think we can put a bit more muscle behind this suggestion. In Sterelny 2016, I argued that the division of linguistic labour was a relatively recently evolved feature of language: the forager societies of the sapiens expansion out of Africa of about 80 kya12 in all probability had fairly limited forms of the division of labour and of cognitive specialisation, and so to the extent that there were reference-borrowing networks that linked term-users to term grounders, they were probably quite short. Indeed, it is worth noting that while the division of linguistic labour is central to the expressive power of large-world languages like English, that central role might be a relatively recent feature of English. Even now, even in English, as Devitt pointed out in Designation, it plays a peripheral role in the life of many names and many kind terms. 
To recycle a homely example from Devitt’s early work, his friends, colleagues, family and (especially) his students acquired “Nana” as the name for his cat

12  Given the broad organisational similarities of all human languages, it is typically assumed that whether language evolved by genetic evolution, cultural evolution, or gene-culture coevolution, the main stages of its evolution took place before modern humans expanded out of Africa.



mostly from some form of social transmission: by hearing Devitt and his family talking about their new cat, and picking up the fact that she was called “Nana”. But most of those who knew “Nana” knew Nana. In sociolinguistic networks of the kind this example exemplifies, most users on occasion use the name in direct interaction with the bearer, and can be assumed to be able to recognise the bearer. The flexibility of use that I took to support Jackson’s view that the division of linguistic labour depended on downstream users recognising the networks of which they are a part may be a relatively recent feature of language, perhaps most marked in large world languages. That said, on this view the languages (or protolanguages) without a word for word, or other ways of explicitly thinking about words and reference, will also be languages without much (or with very minimal) division of linguistic labour. Word acquisition will be via ostensive introduction, not long transmission chains. One important respect in which Dennett’s discussion of the development of sociolinguistic networks varies from that of Devitt and Jackson is that Dennett discusses early language acquisition extensively. Devitt and Jackson primarily focus on horizontal transmission. In their discussion of reference borrowing, they focus on adult to adult transmission (or at least transmission within a community of competent language users). Dennett has a very helpful discussion of infant word acquisition, in which he emphasises the absolutely central role of phonology. It establishes a cognitive anchor or niche, as an infant hears a word as a socially significant sound, and perhaps after a few repetitions of hearing it, attempts to reproduce it. What follows is a good deal of phonological shaping and feedback from care-givers prior to, but overlapping, the beginnings of semantic feedback. In early stages of language formation, this feedback involves ostensive contact with the bearer of the term and/or depictions of bearers of the term. Most first words are nouns (Hurford 2016: 104–105). In the establishment of an early vocabulary, there is not much comprehension. There is no real reason to suppose that the infant has a concept of a word. One problem with this minimalist view of the vertical transmission of the lexicon is that words have complex and subtle properties. Very importantly, they are phonologically structured; composed from phonemes, and this is critical both to their memorability (the receiver already knows the alphabet of chunked sounds so just needs to encode a novel sequence) and for words to be re-identified in the mouths of different speakers and in different contexts. A continuous and messy speech stream is converted into a digitalised phonological representation. Moreover, listeners need to be sensitive to context, as the identification of a particular mark or sound as a letter or phoneme is context sensitive.13 Secondly, as well as having phonological (or inscriptional) structure, they have morphological, syntactic and semantic features. You might not need the concept of a name, but you have not picked up “Zorro” unless you have picked it up, and re-used it, as a name. This is a version of the argument that Sperber and his colleagues have made repeatedly: social learning is a transformative process. For the token of “Zorro” to propagate, the receiver needs to bring a lot to the table: the phoneme inventory; morphological analysis; cognitive

13 For a striking case, see Dennett 2017: 201.


mechanisms that mark new lexical items for their syntactic character. If Jackson is right, they need some direct or indirect way of identifying Zorro. Talk of copying a model makes the process of acquiring a new term sound more input-driven than it really is. The difference between “rat” and “bat” is input driven, but the receiver needs a lot to hand to hear an input as “rat”.

9.4 Is Vertical Transmission Different from Horizontal Transmission?

In the discussion to date, the talk of the cognitive demands on cultural evolution has been imprecise. In particular, we need to distinguish (i) the cognitive and computational complexity of the cognitive mechanisms required for a particular kind of social learning from (ii) the person-level question of the doxastic demands of particular forms of social learning. Jackson and Devitt differ on a doxastic question. What does a novice need to know in order to acquire from another the capacity to refer to (for example) Buckminsterfullerene with “Buckminsterfullerene”? Devitt and Dennett might be right to think that the doxastic demands are minimal. The agent does not need to know much about Buckminsterfullerene or “Buckminsterfullerene”. There can be transmission without comprehension (the person-level question). Even so, it is possible that rich and cognitively complex subdoxastic capacities are needed. The agent has to be somehow sensitive to the subtle properties of “Zorro” noted above. But that sensitivity need not be a matter of knowledge or belief.
That option is suggested by recent work by Gergely Csibra and György Gergely on “natural pedagogy” as an evolutionary adaptation (Csibra and Gergely 2006, 2009, 2011). They argue that the distinctive features of human culture depend on a specific set of subpersonal cognitive adaptations of infants and toddlers, and on appropriate adult responses to these pre-programmed infant expectations. Infants are perceptually and cognitively organised to look for teachers. In turn, adults do, indeed, often teach. On their view, these mechanisms are in play in very young children, well before language acquisition, and hence are available to support lexical learning. They argue that young children are equipped with automatic but quite sophisticated learning strategies: strategies premised on their being programmed to recognise signals of a teaching interaction, and on their being programmed with default expectations of what they will learn in those interactions. Their framework was initially discussed in the context of physical skill transmission, but it expands very naturally to word and concept acquisition; indeed, some such expansions have already been discussed in the literature, both for concept acquisition (infants’ capacity to understand a sample as an exemplar of a kind (Csibra and Shamsudheen 2017)) and, for somewhat older children, for word acquisition tasks (Butler and Markman 2012, 2014, 2016). The infant complex consists of:


1. A sensitivity to ostensive cues: initially gaze direction and infant-directed speech (which has a distinctive cadence, and which appears to be culturally universal, as “motherese”). This then expands to sensitivity to one’s own name and to the various other, more culturally varied means by which adults address their communicative intentions to a target.
2. Once an infant registers that she is the target of a communicative intention, she is then cued to sensitivity to deixis: to the adult’s referential targeting of some aspect of her environment; again initially gaze direction is most important, but infants then become tuned into pointing, shifts in body orientation, linguistic cues (in older toddlers).
3. A central claim in this picture is that the infant, on identifying the target, treats the target generically, as a typical exemplar of the kind of which it is a member, and the infant is primed to acquire information that she takes to be novel, generic, and part of the common ground of the culture (the child will assume others know what she has just learned).14

For example, there is intriguing experimental data that suggests that infants encode the generic properties of the target object – size, shape, or colour – rather than its idiosyncratic properties; in this case, its location, even though the infant has seen the object by having her attention directed to that location (Csibra and Shamsudheen 2017). Early lexical acquisition shows similar defaults. If a new term – “blicket” in a famous set of experiments – is introduced in a context in which there is also a novel object (or action), young children have a strong default assumption that the novel term maps onto the novel object, and that this instance of the object is typical of its kind (Butler and Markman 2012, 2014). Of course, as philosophers well know, if an item is a typical exemplar of any kind, it will be of many, and the model has little to say about this referential ambiguity except noting that young children have a shape bias; their (perhaps learned) default is that members of a kind will be shape-similar (in which case a toy car will be treated as an exemplar of cars, not vehicles). Infants are not, however, confined to concepts defined by shape: at 14 months they can recognise a functional term like “chaser” even when the chasing agents vary greatly in appearance; even when the agent being chased in one scenario is very similar in appearance to the agent chasing in another.15
If something like this framework is right, then Devitt’s minimalism might well be the right view of vertical transmission of the referential lexicon, from one generation to the next. Vertical transmission (at least in its earlier stages) depends on complex capacities, but is not reflective; it depends on system one thinking. I conjecture that young children have very little awareness of what they are doing when they

14 Of course this is only a default, a bias, for otherwise children could not learn individual names.
15 In a searching critical analysis of this model, Heyes is sceptical of some aspects of the picture, especially the idea that ostensive sensitivity and referential sensitivity are evolved adaptations of pedagogy. But the case that young children are in fact sensitive both to ostensive cues and the referential focus of interacting adults is pretty good; the machinery comes on line early, even if it is not innate, and even if its effects are not solely pedagogical (Heyes 2016).


acquire a new term. But adults have capacities that those just entering the linguistic community lack: they can pick up new terms in less informationally transparent contexts than those structured by a pedagogical interaction. They are much less dependent on ostension in acquiring new terms. They can innovate: by coining new terms, by deliberately using existing terms with a new referent (as in my “Trump” example), and by recognising ambiguity or polysemy and thus refining the use of an existing term (for a recent example about the term “planet”, see Brusse 2016). Deliberate teaching, too, presumably requires some explicit, person-level representation of what a term applies to.
Moreover, there is a plausible account of why adults (and doubtless older children) have some partial reflective awareness of the properties of their own linguistic tools: an awareness that gives them an on-the-fly adaptability that younger children lack. It is connected to selection for some epistemic caution in managing horizontal cultural transmission. While our use of language is not transparent to ourselves, in some respects it is not wholly opaque. While folk semantics is rather impoverished compared to the elaborated technical vocabularies and lore associated with many skills, it is much more elaborated than folk phonetics, folk morphology, folk syntax. These barely exist as explicit bodies of lay knowledge. Over the last decade or more, Sperber (often in collaboration with Hugo Mercier) has argued that our metarepresentational capacities have evolved in part as a response to the division of epistemic labour, making the reliability of others’ views a matter of great importance, and in part because so much of human life requires collaboration and collective action.16 We need to think about truth and reference because we need to assess the reliability of other agents and what they say, for agents are often reliable in some contexts and with respect to some domains, but not others. We need to persuade others of the merits of our plans, and to assess (and often resist) their attempts at persuasion. Learning from others as a more-or-less automatic habit – one view of the nature of human cultural transmission – would be too risky, leaving us too exposed to others’ errors and to manipulation. As the ancestral forms of language became richer and the division of linguistic labour emerged, making indirect assessments of reliability more critical, language-like communicative capacities coevolved with a very partial explicit awareness of some aspects of the emerging system, in ways that increased both its flexibility at a time and over time, and that managed the risks of trading ignorance for error.17

16 See Sperber 2000 and 2001, Sperber et al. 2010, and Mercier and Sperber 2017. I discuss this line of thought in Sterelny 2018, but while I am sceptical of Sperber’s cognitive science framework, I am broadly on-side with the evolutionary hypothesis.
17 In writing this paper, I would like to thank Michael for his friendship, philosophical comradeship and support over many years, and I would also like to thank the Australian Research Council for its generous support for my work on the evolution of human cognition and society.


References

Boyd, R. 2016. A different kind of animal: How culture made humans exceptionally adaptable and cooperative. Princeton: Princeton University Press.
Brusse, C. 2016. Planets, pluralism, and conceptual lineage. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 53: 93–106.
Butler, L., and E. Markman. 2012. Preschoolers use intentional and pedagogical cues to guide inductive inferences and exploration. Child Development 83 (4): 1416–1428.
———. 2014. Preschoolers use pedagogical cues to guide radical reorganization of category knowledge. Cognition 130: 116–127.
———. 2016. Navigating pedagogy: Children’s developing capacities for learning from pedagogical interactions. Cognitive Development 38: 27–35.
Claidiere, N., T. Scott-Phillips, and D. Sperber. 2014. How Darwinian is cultural evolution? Philosophical Transactions of the Royal Society Series B 369 (1642): 20130368.
Csibra, G., and G. Gergely. 2006. Social learning and social cognition: The case for pedagogy. In Processes of change in brain and cognitive development, ed. M.H. Johnson and Y. Munakata, 249–274. Oxford: Oxford University Press.
———. 2009. Natural pedagogy. Trends in Cognitive Science 13 (4): 148–153.
———. 2011. Natural pedagogy as evolutionary adaptation. Philosophical Transactions of the Royal Society Series B 366 (1567): 1149–1157.
Csibra, G., and R. Shamsudheen. 2017. Nonverbal generics: Human infants interpret objects as symbols of object kinds. Annual Review of Psychology 66: 689–710.
Dennett, D.C. 2017. From bacteria to Bach and back: The evolution of minds. New York: W.W. Norton.
Devitt, M. 1981. Designation. New York: Columbia University Press.
———. 2015. Testing theories of reference. In Advances in experimental philosophy of language, ed. J. Haukioja, 31–63. London: Bloomsbury Academic.
Devitt, M., and K. Sterelny. 1999. Language and reality: An introduction to the philosophy of language. Cambridge, MA: MIT Press.
Godfrey-Smith, P. 2009. Darwinian populations and natural selection. Oxford: Oxford University Press.
Henrich, J. 2016. The secret of our success: How culture is driving human evolution, domesticating our species and making us smarter. Princeton: Princeton University Press.
Henrich, J., and R. Boyd. 2002. On modelling cognition and culture: Why cultural evolution does not require replication of representations. Culture and Cognition 2: 87–112.
Heyes, C. 2011. Automatic imitation. Psychological Bulletin 137 (3): 463–483.
———. 2016. Born pupils? Natural pedagogy and cultural pedagogy. Perspectives on Psychological Science 11 (2): 280–295.
———. 2018. Enquire within: Cultural evolution and cognitive science. Philosophical Transactions of the Royal Society Series B 373 (1743): 20170051.
Hiscock, P. 2014. Learning in lithic landscapes: A reconsideration of the hominid ‘tool-using’ niche. Biological Theory 9 (1): 27–41.
Hurford, J. 2016. The origins of language: A slim guide. Oxford: Oxford University Press.
Jackson, F. 2010. Language, names and information. Oxford: Wiley-Blackwell.
Kahneman, D. 2011. Thinking, fast and slow. London: Macmillan.
Kripke, S. 1980. Naming and necessity. Cambridge: Harvard University Press.
Mercier, H., and D. Sperber. 2017. The enigma of reason: A new theory of human understanding. London: Allen Lane.
Morin, O. 2016. How traditions live and die. Oxford: Oxford University Press.
Pinker, S. 2010. The cognitive niche: Coevolution of intelligence, sociality, and language. Proceedings of the National Academy of Sciences 107: 8993–8999.
Putnam, H. 1975. Mind, language and reality: Philosophical papers, Volume 2. Cambridge: Cambridge University Press.


Sperber, D. 1996. Explaining culture: A naturalistic approach. Oxford: Blackwell.
———. 2000. Metarepresentations in an evolutionary perspective. In Metarepresentations: A multidisciplinary perspective, ed. D. Sperber, 117–137. Oxford: Oxford University Press.
———. 2001. An evolutionary perspective on testimony and argumentation. Philosophical Topics 29: 401–413.
Sperber, D., F. Clément, C. Heintz, O. Mascaro, H. Mercier, G. Origgi, and D. Wilson. 2010. Epistemic vigilance. Mind and Language 25 (4): 359–393.
Sterelny, K. 2010. Moral nativism: A sceptical response. Mind and Language 25 (3): 279–297.
———. 2016. Deacon’s challenge: From calls to words. Topoi 35 (1): 271–282.
———. 2017. Cultural evolution in California and Paris. Studies in History and Philosophy of Science Part C 62: 42–50.
———. 2018. Why reason? Hugo Mercier’s and Dan Sperber’s The enigma of reason: A new theory of human understanding. Mind and Language 33 (5): 502–512.
Stout, D. 2002. Skill and cognition in stone tool production: An ethnographic case study from Irian Jaya. Current Anthropology 43 (5): 693–722.
Tomasello, M. 1999. The cultural origins of human cognition. Cambridge: Harvard University Press.
———. 2014. A natural history of human thinking. Cambridge: Harvard University Press.
———. 2016. The ontogeny of cultural learning. Current Opinion in Psychology 8: 1–4.

Part III

Theory of Meaning

Chapter 10

Still for Direct Reference

David Braun

Abstract Michael Devitt argues against direct reference, and in favor of an alternative theory of meaning, in “Against Direct Reference,” “Still against Direct Reference,” and Coming to Our Senses. This paper criticizes Devitt’s theory of meaning and defends direct reference against his most important objection. Devitt’s initial theory of meaning entails, contrary to direct reference, that substitution of co-referring names can fail to preserve truth-value. However, Devitt recognizes that his initial theory does not deal well with Kripke’s puzzle and other hard cases. Devitt then modifies his theory to handle those cases. But his modified theory, when fully developed along the lines that Devitt indicates, entails that proper names can be freely substituted in attitude ascriptions, just as direct reference says. Devitt’s strongest objection to direct reference claims (roughly) that if it were true, then attitude ascriptions would be incapable of explaining behavior. But Devitt’s objection rests on false assumptions about explanation.

Keywords Michael Devitt · Semantics · Meaning · Proper names · Direct reference · Modes of referring · Attitude ascriptions · Belief ascriptions · Explanation

I am honored to be included in a volume on Michael Devitt’s many important contributions to philosophy. I am also delighted to have an opportunity to celebrate the work of a friend. Michael and I first became acquainted when he wrote a referee report on one of my earliest papers (Braun 1993). Despite his serious disagreements with it, Michael recommended that it be published, and signed his report. Michael’s generous act began twenty-five years of philosophical exchanges between us. He has always been a fair, thoughtful, creative, energetic, provocative, and amusing interlocutor. I am tremendously grateful to him for our conversations and friendship.


As Michael knows, I have often argued in favor of the theory of direct reference.1 Michael has frequently argued against it. He presents some of his arguments in “Against Direct Reference” (1989), “Still against Direct Reference” (2012), and “Should Proper Names Still Seem So Problematic?” (2015). He goes into greater detail in his book Coming to Our Senses (1996). These works also present Michael’s arguments in favor of his alternative theory of meaning. I am still for direct reference, despite Michael’s efforts to turn me, and every other philosopher, against it. I here hope to persuade Michael and his readers that they should join me in accepting it. My main strategy is to start with Devitt’s theory of meaning and derive a version of direct reference from it. More precisely, I take a close look at how Devitt deals with some of the puzzle cases that help motivate direct reference, such as Kripke’s cases of Pierre and Peter. I show that when Devitt’s remarks about these cases are developed a bit further than he does, we end up with a Devittian theory that looks much like direct reference. I argue that this version of direct reference satisfies all of the methodological constraints on semantic theorizing that Devitt advocates. I end this paper by replying to Devitt’s most important criticisms of direct reference. I pay most attention to his criticism that says (very roughly) that attitude ascriptions would fail to explain behavior, if direct reference were true.

10.1 Direct-Reference Theory

10.1.1 Direct Reference, Semantic Content, Millianism, and Russellian Propositions

The theory of direct reference is mostly a theory about one kind of meaning, which direct-reference theorists typically call content or semantic content. The semantic content of an expression, according to direct-reference theorists, is determined by linguistic convention. So, semantic content is a kind of conventional meaning.2

1 See Braun 1993, 1998, 2000, 2001a, 2001b, 2006, and 2015. I will often follow Devitt in using the term ‘direct reference’ to refer to the theory of direct reference, rather than the phenomenon of direct reference. Though the theory of direct reference comes from Kaplan (1989), Devitt tends to use the term for a somewhat different semantic theory, about which Kaplan has periodically shown some ambivalence, namely the theory that I call ‘Millian Russellianism’ below. I will follow Devitt in using ‘direct reference’ for the latter theory.
2 A qualification: I can safely say that direct-reference theory’s semantic contents are conventional meanings only because I am here ignoring context-sensitive expressions. If an expression is context-sensitive, then its Kaplanian (1989) character is determined by linguistic convention, whereas its semantic content, in a context, is determined by both its character and features of that context, and so is not determined by conventional meaning alone. But if an expression is context-insensitive, then it has the same semantic content in all contexts, and so we can safely speak of its semantic content, full stop, without mentioning context. Moreover, its single semantic content is completely determined by linguistic convention, and so we can say that its (unvarying) semantic content is a conventional meaning.


Direct-reference theorists say that the semantic content of a proper name is its referent. This part of direct-reference theory is often called Millianism. Direct-reference theorists differ among themselves about the semantic contents of verbs, adjectives, common nouns, and various predicative phrases containing them (Soames 2002; Salmon 2003; Braun 2015). I shall assume here that the semantic contents of these expressions are properties and relations. For example, the semantic content of both ‘spy’ and ‘is a spy’ is the property of being a spy.
Direct-reference theorists believe that the semantic contents of declarative sentences are propositions. Sentences semantically express their propositional semantic contents. Propositions have constituents. If S is a declarative sentence, then the constituents of the proposition that S semantically expresses are (roughly) the semantic contents of the words in S. Structured propositions of this sort are commonly known as Russellian propositions. (Thus my favorite term for the theory that Devitt criticizes is ‘Millian Russellianism’.) Direct-reference theorists often use n-tuples to represent propositions and their constituent structures (though they typically do not identify propositions with n-tuples). Thus, the proposition that (1) semantically expresses might be represented by the n-tuple (1p).

(1) Ortcutt is a spy.
(1p) ⟨Ortcutt, the property of being a spy⟩

Proposition (1p) has Ortcutt himself as a constituent. A proposition that has an individual as a constituent is a singular proposition.

10.1.2 Direct Reference, Attitude Ascriptions, and Shakespearean Attitude Ascriptions

Direct-reference theorists hold that ‘that’-clauses refer to the semantic contents of the sentences inside them. This same proposition is also the semantic content of the ‘that’-clause.3 The semantic contents of typical attitude verbs, such as ‘believe’ and ‘say’, are binary relations that can hold between agents and propositions. So the semantic content of (2) can be represented by (2p).

(2) Ralph believes that Ortcutt is a spy.
(2p) ⟨Ralph, the belief relation, ⟨Ortcutt, the property of being a spy⟩⟩

3  Strictly speaking, the semantic content of a ‘that’-clause should include the semantic content of the complementizer ‘that’ (Salmon 1986), but I ignore this here.


Direct reference entails that substitution of co-referring proper names in attitude ascriptions preserves the proposition semantically expressed, and so preserves truth-value. Suppose that ‘Ortcutt is Bernard’ is true. Then (3) expresses the same proposition as (2), and is also true.

(3) Ralph believes that Bernard is a spy.

Let’s follow Geach (1962) and Kripke (2011) in saying that an expression is Shakespearean iff substitution of co-referring proper names in that expression preserves the extension of that expression. (Geach uses the term ‘Shakespearean’ after the line “a rose/ By any other name, would smell as sweet.”) If an expression is ambiguous, then we will speak of an expression’s being Shakespearean relative to a disambiguation or reading. Direct-reference theory entails that attitude ascriptions and ‘that’-clauses are Shakespearean, on all of their readings.

10.1.3 Direct Reference, Definite Descriptions, and Scope Ambiguity

Devitt’s criticisms of direct reference often mention attitude ascriptions that contain definite descriptions, so I will say a bit about direct reference’s theory of definite descriptions. I will skip many details, as they can become rather complicated.
Most direct-reference theorists hold that attitude ascriptions that contain definite descriptions in their ‘that’-clauses are scope-ambiguous. For example, (4) is scope-ambiguous. The narrow-scope reading of (4) is roughly conveyed by (4n1) and (4n2). The wide-scope reading of (4) is roughly conveyed by (4w1) and (4w2).

(4) Ralph believes that the man in the brown hat is a spy.
(4n1) Ralph believes that: the man in the brown hat is a spy.
(4n2) Ralph believes that: (The x: x is a man in the brown hat) x is a spy.
(4w1) The man in the brown hat is such that: Ralph believes that: he is a spy.
(4w2) (The x: x is a man in the brown hat) Ralph believes that: x is a spy.

Sentence (4) semantically expresses different propositions on these two readings. The narrow-scope reading of (4), given by (4n1) and (4n2), attributes to Ralph belief in a proposition in which the property of being a man in the brown hat is a constituent (or figures importantly in the proposition in some other way – I am ignoring subtleties). This narrow-scope reading is true if Ralph sincerely utters ‘The man in the brown hat is a spy’. It can be true though Ralph is unable to name, or point out, any person who is a spy. Imagine, for example, that Ralph believes, on general grounds, that there is exactly one brown hat, and exactly one man wears that brown hat, and anyone who wears that brown hat is a spy. Suppose this leads him to utter ‘The man in the brown hat is a spy’. Then (4) is true on its narrow-scope reading.


By contrast, the wide-scope reading of (4), conveyed by (4w1) and (4w2), does not attribute to Ralph belief in a proposition in which the property of wearing a brown hat figures. (4) on its wide-scope reading could be true though Ralph has no opinion about whether any spy wears a brown hat. Imagine, for example, that Ralph is familiar with Ortcutt, and has heard someone he trusts say ‘Ortcutt is a spy’, and Ralph is now disposed to utter that sentence sincerely as well. Imagine that Ralph has no opinion about whether Ortcutt, or anyone else, wears a brown hat. Suppose that Ortcutt is the (one and only) man who wears the (one and only) brown hat. Then (4) is true on its wide-scope reading, but false on its narrow-scope reading. The wide-scope reading of (4) attributes to Ralph belief in a singular proposition. It specifies that singular proposition by uniquely describing the relevant individual in the proposition (Ortcutt), but it may describe that individual in a way that Ralph would not recognize.4 So, the truth-conditions, and semantic content, of (4) on its wide-scope reading are different from the truth-conditions, and semantic content, of (4) on its narrow-scope reading, according to typical direct-reference theorists.

10.2 Devitt’s Methodology and Initial Theory of Meaning

I shall now present Devitt’s theory of meaning. I rely mostly on Coming to Our Senses (1996). The undated page-number references that occur below refer to Coming. I shall also occasionally rely on “Still against Direct Reference” (Devitt 2012).

4  Some direct-reference theorists hold that (2) is also scope-ambiguous. On one reading, ‘Ortcutt’ takes wide scope over ‘believes’, and on the other reading, it takes narrow scope. The narrow scope reading might be paraphrased with (2n), and the alleged wide scope reading might be paraphrased with (2w1) and symbolized with (2w2).

(2n) Ralph believes that: Ortcutt is a spy.
(2w1) Ortcutt is such that Ralph believes that he is a spy.
(2w2) λx[Ralph believes that: x is a spy] Ortcutt.

(2n), on the one hand, and (2w1)/(2w2), on the other, differ in semantic content, because they differ in structure. But these semantic contents are logically and necessarily equivalent, so the alleged ambiguity makes no truth-conditional difference to (2). Thus, I ignore the possibility of this ambiguity here. However, this scope-ambiguity may make a difference to the truth-conditions of more complex sentences, such as ‘Alice believes that Ralph believes that Ortcutt is a spy’. See Salmon 1989, 2006.


10.2.1 Devitt’s Methodology

Devitt notes that philosophers radically disagree with each other about the nature of meanings and the subject matter of semantics. Some philosophers, for instance, think that meanings are truth-conditions, others that they are verification conditions, others that they are inferential roles, and so on. All of these philosophers seem to rely almost exclusively on seemingly linguistic intuitions to argue for their favored views. Devitt proposes that semanticists begin again, with less reliance on intuition, and more justification for their methodology (48–54).
Devitt says that the basic task of semantics is to explain the nature of meanings (54). He admits that it is initially difficult to say anything uncontroversial about meanings (55), but he thinks semanticists should avoid ad hoc assumptions about their natures, and should find phenomena that are sufficiently fundamental and worthy of attention (55). Devitt thinks that when we are trying to discover the nature of meanings, we should adopt the same procedure that we do in other areas of research. For example, when we wish to discover the nature of echidnas, we examine paradigm examples of echidnas and try to discover what they have in common (78–79). We should adopt a similar method in semantics.
Ordinary speakers (of English) seem to use ‘that’-clauses to ascribe, and speak about, meanings (55–56). So, Devitt adopts the working assumption that ‘that’-clauses of ordinary attitude ascriptions specify meanings (55–56). He assumes that these ‘that’-clauses specify properties of utterances and thoughts (56–57). (Devitt uses ‘thought’ as a term for token events of thinking.) He notes that ordinary people successfully use attitude ascriptions to explain and predict behavior, and to exploit other thinkers’ thoughts as guides to reality. So, plausibly, the properties that ‘that’-clauses specify have a role in explaining behavior, and in aiding humans to use other humans as guides to reality. Devitt says that the properties of utterances and thoughts that can play these roles are worthy of our attention. Thus Devitt proposes that meanings are the properties of thoughts and utterances that are (a) of a sort ascribed by ‘that’-clauses in attitude ascriptions and (b) play a (significant) role in explaining behavior and allowing those who ascribe meanings to use other persons’ thoughts and utterances as guides to the way the world is (56–57).
The phrase ‘of a sort’ in clause (a) above is important. Perhaps the properties that ordinary speakers ascribe in ‘that’-clauses can explain behavior and be used as guides to reality; but perhaps there are other properties, of roughly the same sort, whose ascription would work better for these purposes. In that case, those other properties are the meanings that we should ascribe for those purposes, and are the ones that Devitt says it is worthwhile for semanticists to investigate (62, 70–71). However, the properties that ordinary ‘that’-clauses do, in fact, ascribe are at least somewhat useful for these purposes, hence worthy of at least initial investigation, and so are putative meanings.
Devitt admits that other semanticists may focus on properties of utterances and thoughts different from those on which he focuses. Those properties might be worthy of investigation, and so might reasonably be said to be semantic. But he says that their worthiness of study needs to be argued (67).


10.2.2 Devitt’s Initial Theory of Meaning

Devitt presents his theory of meaning in stages. He first presents an initial theory that reflects judgments about attitude ascriptions in rather ordinary cases. Later, he revises his theory in light of more difficult cases, such as Kripke’s Pierre and Peter. I present Devitt’s initial theory immediately below.
Devitt begins by looking at uncontroversial cases in which ‘that’-clauses are used to describe thoughts and utterances. By doing so, he hopes to discover the properties that those ‘that’-clauses attribute to thoughts and utterances. He argues that ‘that’-clauses (in fact) ascribe representational properties to thoughts and utterances (136–139). Devitt takes the representational properties of sentences to be properties that determine their truth conditions. He takes the representational properties of words to be properties that determine their references (136). The fact that an organism’s token thoughts have such representational properties helps explain the organism’s world-oriented behavior (137). Thus, these representational properties help explain the behavior of organisms (138). Those who are aware of these properties of other thinkers’ thoughts can then use those other thinkers’ thoughts as guides to reality. So, plausibly, these representational properties are meanings ascribed by ‘that’-clauses.
Devitt claims that attitude ascription (2) is ambiguous and has at least two readings (141–142).

(2) Ralph believes that Ortcutt is a spy.

On the opaque reading of (2), substituting a co-referring definite description for ‘Ortcutt’ can lead from truth to falsehood or vice versa. Imagine that Ralph sees exactly one man in a brown hat, lurking about a government facility in a suspicious way. Ralph assents to ‘The man in the brown hat is a spy’. This man is identical with Ortcutt. Nevertheless, Ralph may dissent from ‘Ortcutt is a spy’. If so, then (2) is false on its opaque reading, while (4) is true on its opaque reading.

(4) Ralph believes that the man in the brown hat is a spy.

On the second disambiguation of (2), its transparent reading, substitution of a co-referring name for a definite description, or vice versa, preserves truth-value. On the transparent readings of (2) and (4), they have the same truth-value.
Devitt similarly thinks that, on the opaque reading of (2), substitution of co-referring proper names can change truth-value. For example, suppose that Ortcutt is identical with Bernard, and suppose Ralph assents to ‘Ortcutt is a spy’ but dissents from ‘Bernard is a spy’. Then (2) on its opaque reading is true, while (3), on its opaque reading, is false.

(3) Ralph believes that Bernard is a spy.


Devitt hypothesizes that on the transparent reading of (2), the occurrence of ‘Ortcutt’ attributes to a thought the property of having a part that refers to Ortcutt. On the opaque reading, it attributes to a thought the property of having a part that refers to Ortcutt under a certain mode of referring to Ortcutt, namely “the mode exemplified by the name ‘Ortcutt’ in [(2)] itself” (142). Devitt calls this mode the ‘Ortcutt’ mode. Devitt thinks these referential properties are identical with properties of standing at the ends of certain sorts of causal chains originating in dubbings of Ortcutt. But Devitt mostly brackets his causal theory of reference when discussing his theory of meaning. I shall also mostly ignore Devitt’s claim that referential properties are certain causal properties. Under the transparent reading of (2), its full ‘that’-clause refers to the property of being a thing such that it is true iff it has a part with the property of referring to Ortcutt, and a part with the property of referring to the property of being a spy, and the referent of the first part has the referent of the second part.5 Moreover, Devitt takes believing to be a relation between an agent and a thought (a token mental event with a sentence-like structure). So, (2) is true on its transparent reading iff Ralph has a token mental event with the preceding property (142). Under the opaque disambiguation of (2), its ‘that’-clause refers to the property of being a thing such that it is true iff it has a part with the property of referring to Ortcutt under the ‘Ortcutt’ mode and a part that refers to the property of being a spy, and the referent of the first part has the referent of the second part. So, (2) is true under this disambiguation iff Ralph has a token mental event with this second property (142). Devitt thinks that (2) may have a third reading, a rapport-transparent reading (144–145). But he sometimes seems unsure of whether this alleged third reading is genuine (145, 147–148). I shall ignore it in what follows. Though Devitt uses the term ‘attitude ascription’, he denies that they ascribe attitudes to propositions, and he denies that the ‘that’-clauses that appear in attitude ascriptions refer to propositions. He does so because he denies the existence of propositions (sect. 4.12, especially pp. 210–215). Devitt says that attitude ascriptions can be used as guides to reality, under both their transparent and opaque readings (152, 154). For example, if Alice believes that Ralph tends to have true thoughts, and she believes that (2) is true under either its opaque or transparent reading, then Alice can infer that Ortcutt is a spy. Devitt claims that attitude ascriptions, on their opaque readings, explain behavior (151, 153). Consider the following example (which is mine, not Devitt’s). Suppose that Ralph handcuffs Ortcutt, and we wish to explain why he does. Suppose we use ascriptions (5) and (6). (5) Ralph wants Ortcutt to be tried for espionage. (6)  Ralph believes that if he handcuffs Ortcutt, then Ortcutt will be tried for espionage. 5  I am generalizing Devitt’s account of singular terms in ‘that’-clauses to a similar view of general terms in ‘that’-clauses. I think he would not object to my extension: see his pp. 149–150. Nothing I say below will turn on the details of Devitt’s views about general terms.


On Devitt’s view, if (5) and (6) are true on their opaque readings, then they together explain why Ralph handcuffed Ortcutt. On their opaque readings, (5) and (6) are true if and only if (roughly speaking) Ralph is disposed to assent to ‘I want Ortcutt to be tried for espionage’ and ‘If I handcuff Ortcutt, then Ortcutt will be tried for espionage’. Agents who are disposed to utter these sentences do tend to handcuff Ortcutt. Devitt thinks that (5) and (6) explain Ralph’s behavior because there is a generalization that covers the explanation (151). Sentence (7) below may be one such generalization, when all of the ascriptions it contains are given their opaque readings (151).

(7) For all x, if x wants Ortcutt to be tried for espionage, and x believes that if x handcuffs Ortcutt, then Ortcutt will be tried for espionage, then, ceteris paribus, x handcuffs Ortcutt.

Such generalizations support, or underwrite, explanations of behavior, when the attitude ascriptions in them are construed opaquely (151). By ‘underwrite’, Devitt seems to mean that an explanandum sentence, such as ‘Ralph handcuffs Ortcutt’ or ‘Ceteris paribus, Ralph handcuffs Ortcutt’ follows deductively, or probabilistically, from the relevant attitude ascriptions and generalization.6
Devitt thinks that attitude ascriptions on their transparent readings can also explain behavior, if the agents also satisfy appropriate attitude ascriptions on their opaque readings (153). As Devitt puts it, “transparent [ascriptions] serve as explanations on the presupposition that they follow from good opaque ones” (153, italics in the original).
Devitt is primarily interested in the meanings of thoughts, because he thinks that semantics should describe the properties of thoughts that enable the ascription of thoughts to explain behavior. But he thinks that agents’ utterances, and the meanings of those utterances, are our primary means of discovering the meanings of their thoughts (2012: 64).

6 There are two issues about (6) and (7) that I shall mention in this note, but not discuss at length. First, some might hold that (6) is true only if the sentence containing ‘he’ that occurs embedded in its attitude ascriptions refers to a “first-person (de se) thought.” Second (and more important), the occurrence of the quantifier ‘for all x’ in (7) binds occurrences of variables both inside and outside the ‘that’-clauses of the attitude ascriptions in (7). The values of those variables appear to be objects, not properties or meanings, and so their values appear not to be Devittian meanings. Therefore, at first glance, the attitude ascriptions in (7) must be construed transparently (more accurately, Shakespeareanly). Devitt says nothing about how the attitude ascriptions in such generalizations could have opaque (non-Shakespearean) readings. Kaplan (1968) presents a Fregean view of quantification into opaque attitude ascriptions. Devitt refers to Kaplan’s paper (143 n. 4), but does not discuss his theory. Perhaps Devitt would be willing to adopt it. Of course, quantifying-in presents no problems for direct reference, and that counts as a point in its favor.


Devitt distinguishes between the conventional meanings of utterances and tokens, on the one hand, and the speaker-­meanings that those utterances and tokens have, on the other (157). He says that “the meaning of a token utterance (normally) depends heavily on the conventional meaning of its type” (199). He maintains that hearers normally rely on the conventional meanings of expressions to understand utterances (tokens) of them (198). He adds that speakers choose to utter certain expressions, rather than others, because (roughly speaking) they rely on the conventional meanings of those utterances to enable their hearers to grasp what they (the speakers) mean by uttering them (198–199). This gives hearers a way of discovering the thought-meanings of the speaker. Thus, the main task for semantics is to describe and explain the natures of the meanings that utterances have largely by convention. (Devitt’s remarks on conventional meaning in Coming are fairly scattered. See pp.  59, 65, 154–158, 196–206, 225–226, 233–240. He says more about conventional meaning in 2012: 64–65.) There is much more to Devitt’s initial theory of meaning than what I have described here. His remarks about demonstratives are particularly intriguing. Devitt’s arguments against direct-reference theory are, to a certain point, traditional. He relies on the Identity Problem, which says that “a=a” can differ in meaning from “a=b” when the latter is true, which is contrary to direct reference. He also relies on the Opacity Problem, which claims that “N believes that a=a” can be true while “N believes that a=b” is false when “a=b” is true, contrary to direct reference. But Devitt argues for the crucial claims of these objections by relying on claims about meaning and explanation, which is rather untraditional. I will say much more about those arguments later.

10.3 Some Potential Hindrances to Dialogue Between Devitt and Direct-Reference Theorists

Now that I have presented direct-reference theory and Devitt’s theory, we can see that there is some danger that Devitt and the direct-reference theorists may “talk past each other.” They may fail to notice that their theories focus on different phenomena. They may fail to recognize that some of their disagreements are superficial. They may fail to recognize that some of their substantive disagreements are obscured by differences in vocabulary.

10.3.1 Direct Reference and Devitt on Propositions

The disagreement between Devitt and direct-reference theorists over propositions and ‘that’-clauses is superficial. Devitt accepts the existence of properties (and relations, presumably). Consequently, it is easy to construct a theory of belief as an attitude towards propositions that is consistent with Devitt’s semantic methodology and ontological commitments. This neo-Devittian theory says that propositions are


properties, namely the representational properties that Devitt says that attitude ascriptions attribute to thoughts. (Recall that, on Devitt’s view, thoughts are token mental events.) On this neo-Devittian view, ‘that’-clauses refer to these properties, and attitude ascriptions report certain thought-involving relations to them. For example, ‘that Ortcutt is a spy’ refers, on its opaque reading, to the property of being a thought (a token mental event) that is true iff that thought has a part with the property of referring to Ortcutt under the ‘Ortcutt’ mode, and a part that refers to the property of being a spy, and the referent exemplifies the property. (See Sect. 10.2.2 above.) The belief ascription ‘Ralph believes that Ortcutt is a spy’, on its opaque reading, is true iff Ralph stands in the belief relation to that representational property. Standing in the belief relation to that representational property requires an agent to have a thought (a token mental event) that instantiates that representational property in a certain way. For example, Ralph stands in the belief relation to the representational property to which ‘that Ortcutt is a spy’ refers (on either reading) iff he has a thought that (a) has that representational property and (b) has an appropriate “belief-ish” functional role in Ralph’s mind. The above neo-Devittian theory is consistent with Devitt’s methodology and ontology. It also strongly resembles theories of propositions and attitudes proposed by philosophers who explicitly accept the existence of propositions, such as Hanks (2015) and Soames (2015). So, the disagreement between Devitt and direct-reference theorists over propositions is, at best, superficial. I shall ignore it from here on.

10.3.2 Direct Reference and Devitt on Conventional Meaning

The next difference between Devitt and direct-reference theorists is much more important, and has much more potential to produce confusions in dialogues between them. Direct-reference theorists want to describe the conventional meanings of linguistic expressions. Direct-reference theorists admit that there are various kinds of non-conventional meanings, including speaker-meanings. They can also admit that thoughts and utterances have all sorts of properties that some philosophers might claim are non-conventional meanings, for good reasons or bad. But direct-reference theorists do not intend their theory to describe non-conventional meanings, and so their theory is consistent with a huge range of theories of non-conventional meanings. For example (to take an extreme case), suppose some tokens of the proper name ‘Paderewski’ produced by Kripke’s Peter have the property of being caused by tokens of a mental name that has a certain causal role within Peter’s mind. Some philosophers may wish to say that this latter causal-role property is a meaning of the token of ‘Paderewski’ that Peter produces. Direct-reference theorists are free to agree. They should disagree with causal-role theorists only if those causal-role theorists claim that this causal-role property is a conventional meaning of the relevant token.


Devitt, by contrast, is not primarily interested in conventional meaning. He is instead primarily interested in the meanings that are useful to attribute to thoughts in order to explain behavior and use human beings as guides to reality, whatever those meanings might be. He is interested in conventional meaning only insofar as it is a means to discovering the meanings of thoughts that are useful for these purposes. Devitt initially hypothesizes that attitude ascriptions explain behavior because of the conventional meanings they attribute to thoughts. But he wants his semantic theories to describe the meaning-properties of thoughts that, in fact, explain behavior, whether or not those meaning-properties are conventional meanings of linguistic expressions. In fact, Devitt several times says that the meanings needed to explain behavior (namely, properties of referring to a particular object under a particular mode) may be too fine-grained to be conventional meanings (176, 228–240).
Therefore, we need to be careful when comparing Devitt’s theory with direct reference. Suppose a follower of Devitt claims that speakers use attitude ascriptions to attribute certain meaning-properties to thoughts, and suppose that this Devittian points out that these meanings are not mentioned by direct-reference theory. This Devittian thereby raises an objection to direct reference only if she also claims that attitude ascriptions attribute those Devittian meanings to thoughts as a matter of conventional meaning.
Thus, Devitt’s theory and direct-reference theory have somewhat different subject matters, and may not engage each other as directly as one might initially think. But their subject matters do overlap. On the one hand, Devitt admits the existence of conventional meanings, and that conventional meanings differ from speaker-meanings and other non-conventional meanings. He thinks that he needs to describe the conventional meanings of expressions so as to explain how ordinary speakers understand utterances and can use attitude ascriptions to explain behavior (59, 65, 154–158, 196–206, 225–226, 233–240; 2012: 64–65). On the other hand, some direct-reference theorists think that attitude ascriptions do explain behavior, and do so in virtue of their conventional meanings, and so they think that their theory of conventional meaning should be consistent with that fact (Braun 2001a).
Devitt might be inclined to say that conventional meaning is not sufficiently fundamental to warrant much attention. (There’s a hint of this view on p. 65. But it seems inconsistent with Devitt 2012: 64–65.) A direct-reference theorist could reasonably reply that the study of conventional meaning is an important part of the study of meaning more generally (that is, semantics, intentionality, representation, etc.). Devitt seems to admit this.
Devitt claims that direct-reference theorists have no principled basis for their delineation of their subject matter. In a discussion of Nathan Salmon’s (1986) views, Devitt says the following:

He [the direct-reference theorist] must provide a principled basis for his distinction between the one referential property of a token that is alleged to constitute its content – its property of referring to x – and all its other referential properties – the modes of referring to x. In responding to such demands, a theorist needs an account of our semantic purposes in ascribing meanings or contents. (183)


A direct-reference theorist could reasonably reply that she is interested in the conventional meanings of linguistic expressions. She could say that conventional meanings are interesting in and of themselves. She could also say that one of our purposes in uttering attitude ascriptions is to ascribe to agents certain mental relations that they bear to these conventional meanings. In any case, describing these conventional meanings is, at the very least, an important part of a larger semantic theory (or theory of intentionality) that seeks to describe the sorts of features of utterances and thoughts in which Devitt is interested.
I believe that the study of direct-reference-style conventional meaning can be given a more Devittian justification, roughly as follows. Attitude ascriptions explain behavior in virtue of having the conventional meanings that direct-reference theory says they have. So direct-reference theory’s putative meanings are genuine (Devitt-style) meanings. I will say more about this later.

10.3.3 Devitt’s Notions of Opacity and Transparency, and Being Shakespearean

Devitt’s use of the terms ‘opaque’ and ‘transparent’ may further confuse the dialogue between him and direct-reference theorists. Devitt says (or implies) that direct-reference theory entails that all attitude ascriptions are transparent (176, 184). But direct reference does not entail that all attitude ascriptions are transparent, in Devitt’s sense of ‘transparent’. Let me explain.
Devitt introduces the notion of opaque readings of attitude ascriptions by using an example. Imagine that Ralph assents to ‘The man in the brown hat is a spy’ but dissents from ‘Ortcutt is a spy’, and suppose that Ortcutt is the man in the brown hat. Then, says Devitt, (2) is false on its opaque reading, but true on its transparent reading.

(2) Ralph believes that Ortcutt is a spy.

Devitt says that “the ambiguity of [(2)] seems to demonstrate nicely the truth and falsity in both a ‘Fido’-Fido theory and a Fregean theory” (142). (Devitt uses the term “‘Fido’-Fido theory” for direct-reference theory.) Devitt’s subsequent discussion seemingly suggests that direct-reference theorists cannot admit that there is a reading of (2) that is false in this situation.
But typical direct-reference theorists agree that there is a reading of (2) that is false in the situation that Devitt describes (assuming that Ralph does not assent to ‘Bernard is a spy’, or to ‘He is a spy’ in a context in which ‘he’ refers to Ortcutt, or to some other sentence that expresses the singular proposition that Ortcutt is a spy). In fact, they would say that (2) is false on all of its readings in the situation that Devitt describes. Moreover, direct-reference theorists would agree that (4) below is


true in the above situation, on the reading in which ‘the man in the brown hat’ takes narrow scope. (4) Ralph believes that the man in the brown hat is a spy. In that sense, direct-reference theorists admit that substitution of a definite description for a proper name in (2) can result in a change in truth-value. As far as I can tell, direct reference entails that (4) has an opaque reading, in Devitt’s sense of ‘opaque’. (Unfortunately, Devitt does not define the terms ‘opaque’ and ‘transparent’. The relevant definitions may have to be rather complicated, to take into account the fact that the same sentence can have several readings, due to scope ambiguities.) There is, nevertheless, a serious disagreement between Devitt’s theory and direct-reference theory concerning proper names. Devitt holds that there are readings of attitude ascriptions on which substitution of co-referring proper names can result in a change in truth-value. The relevant readings are those on which the ascription’s ‘that’-clause refers to a property of referring to an object under a certain mode, for example, referring to Ortcutt under the ‘Ortcutt’ mode. (See 171–186, especially 175, first sentence of last paragraph, 176, last full paragraph, 177, last full paragraph, and 2012: 79). But direct-reference theory entails that substitution of co-referring proper names always preserves truth-value. In short, the crucial disagreement between Devitt and direct-reference theorists does not concern whether attitude ascriptions are opaque, in Devitt’s sense, but rather whether they are Shakespearean. Direct-reference theorists say that they are. Devitt says that they are not. In what follows, I focus on this issue, though I shall sometimes need to mention opacity and transparency to describe aspects of Devitt’s views.

10.4 Devitt’s Revised Theory of Opaque Attitude Ascriptions

Devitt presents his initial theory in sects. 4.1–4.9 and 4.11–4.13 of Coming. But in Sect. 4.14, he begins a new section titled “Developing the Program,” in which he makes significant revisions to his initial view. Many of these revisions are inspired by his consideration of puzzle cases, such as Kripke’s (2011) cases of Pierre and London, and Peter and Paderewski. Devitt’s revisions entail that his opaque ascriptions are “less opaque” than they appeared to be on his initial theory. I shall argue that if we develop his program yet further, in ways suggested by Devitt’s own revisions, we end up with opaque ascriptions that allow for substitution of all co-referring proper names. That is, attitude ascriptions turn out to be Shakespearean.


10.4.1 Monolingual Non-English Speakers

Before turning to the puzzle cases that Devitt discusses, I shall consider a simpler case that Devitt does not discuss. This simpler case raises a problem for Devitt’s initial view that is similar to the problem that the puzzle cases raise. Considering this simpler case will be a good “warmup” for Devitt’s discussion of Kripke’s puzzles, and for Devitt’s consequent revisions.
Devitt’s initial view implies that (2) is true on its opaque reading only if Ralph has a mental token that refers to Ortcutt under the ‘Ortcutt’ mode.

(2) Ralph believes that Ortcutt is a spy.

Ralph has such a mental token name only if he uses the public name ‘Ortcutt’. (More exactly, perhaps, he must at least have heard the public name used to refer to Ortcutt. I omit qualifications like this one below.) Generalizing on this example, we can infer that ascription (8) below is true, on its opaque reading, only if Marie has a mental token that refers to London under the ‘London’ mode, and her having such a mental token requires that she use the name ‘London’.

(8) Marie believes that London is pretty.

But suppose that Marie is a monolingual French speaker who understands and sincerely assents to the French sentence ‘Londres est jolie’, but has never heard the English name ‘London’. Then Devitt’s initial view entails that (8) is false on its opaque reading. This consequence of Devitt’s initial view is troubling, for ordinary speakers would be inclined to use (8) to describe Marie, if they knew that she sincerely assented to ‘Londres est jolie’.
A Devittian might reply that we need to distinguish the opaque and transparent readings of (8). He might say that (8) is false on its opaque reading, but true on its transparent reading. But there is a reason to think that (8) has an opaque reading on which it is true, at least if we understand ‘opaque’ in the way that Devitt does. The alleged transparent reading of (8) allows substitution of co-referring definite descriptions for proper names. So, (8) and (9) are both true, on their (alleged) transparent readings.

(9) Marie believes that the capital of the United Kingdom is pretty.

Yet an ordinary speaker who sincerely utters (8) may think that (9) is false, because Marie rejects the French translation of ‘The capital of the United Kingdom is pretty’ (and the French translation of ‘London is the capital of the United Kingdom’). Such an ordinary speaker’s judgment is evidence that he uses (8) while having in mind an opaque reading (in Devitt’s sense).
We will soon see that Devitt’s revised theory agrees that (8) has an opaque reading on which it is true in the above situation, and this reading is a conventional

208

D. Braun

reading of (8). In fact, Devitt’s revised theory strongly suggests that there is no conventional reading of (8) on which it is false in the above situation. Or so I shall argue.

10.4.2 Bilingual Speakers and Devitt's Revised View

Devitt does not explicitly consider any cases of monolingual non-English speakers, such as Marie. But he does consider a bilingual case, namely Kripke's famous case of Pierre. Devitt's reflections on this case lead him to revise his initial view of opaque attitude ascriptions that contain proper names. The revised view entails that sentence (8) above is true, on its conventional opaque reading, contrary to Devitt's initial view.

Let's remind ourselves of Kripke's Pierre. Pierre assents to 'Londres est jolie' in French and to 'London is not pretty' in English. Devitt holds that Pierre has (at least) two mental tokens that refer to London, both of which have the meaning-property of referring to London. But these tokens differ in another kind of meaning: the first has the meaning-property of referring to London under the 'London' mode while the second has the meaning-property of referring to London under the 'Londres' mode. Devitt's initial theory of attitude ascriptions entails that (10) is false on its opaque reading, because Pierre does not assent to 'London is pretty'.

(10) Pierre believes that London is pretty.

Devitt, however, disagrees with this consequence of his initial view. He thinks that (10) is true on its conventional opaque reading. Quoting Devitt now,

The interesting thing about this case is that our opaque ascriptions of meanings using one of these names do not fit what I have claimed in that they do not normally distinguish between the two names: 'Pierre believes that London is pretty' could be true on the strength of Pierre being prepared to affirm "Londres is pretty" or "Londres est jolie." (233, footnote omitted)

Devitt therefore revises his view about the meaning that the clause 'that London is pretty' attributes to thought tokens when it appears in standard opaque attitude ascriptions, on one of their conventional meanings:

So, in a t-clause, each name ascribes in a standard way a meaning involving the "disjunctive" mode of 'London' or 'Londres' [footnote in Devitt's text], rather than a finer-grained meaning involving one of the disjuncts. And my recent claim about our ordinary ascriptions involving names needs to be modified accordingly. [Devitt's footnote: Or indeed the "translation" of 'London' in any other language. I shall ignore this, taking 'Londres' to represent all those translations.] (233)

On Devitt’s revised view, (10) is true on its standard, conventional opaque reading, because Pierre has a mental token that refers to London under the “disjunctive”

10  Still for Direct Reference

209

‘London’ or ‘Londres’ mode, and the ‘that’-clause in (10), on one of the conventional meanings of the ascription, ascribes a meaning to Pierre’s thoughts that involves this disjunctive mode. Similarly, (11) ascribes to Pierre a mental token with a meaning that involves this disjunctive mode, and is true because Pierre assents to ‘London is not pretty’. (11) Pierre believes that London is not pretty. Devitt’s readers may wonder why he says that (10) is true on its opaque reading. Why does he not say that (10) is false on its opaque reading, but true on its transparent reading? Devitt does not explicitly give his reasons, but I suspect that they are much like those I presented in Marie’s case above. An ordinary speaker might say that if Pierre sincerely utters ‘Londres est jolie’, then (10) is true, but such an ordinary speaker might simultaneously reject (12), if Pierre rejects the translation of its embedded sentence into French. (12) Pierre believes that the capital of the United Kingdom is pretty. If (10) is true on a reading, and (12) is false on a parallel reading, then the relevant reading of (12) is opaque, at least as Devitt thinks of opacity. The immediately preceding quote from Devitt contains an important footnote, which I reproduced above. The footnote implies that, strictly speaking, the ‘that’clause ‘that London is pretty’ ascribes a meaning involving not merely the disjunctive ‘London’ or ‘Londres’ mode, but a more general mode, which I shall call a translational mode. It is the property of referring to London under the mode of a translation of ‘London’. This mode involves every translation of ‘London’ into any language other than English. So, strictly speaking, the revised view entails that if (as Google Translate says) ‘Lontoo’ is a translation of ‘London’ into Finnish, and ‘Lontoo on kaunis’ is the translation of ‘London is pretty’ into Finnish, and Pierre is a Finnish speaker who sincerely utters ‘Lontoo on kaunis’, then (10) is true on its conventional opaque reading. I will discuss Devitt’s notion of translation, and his general translation mode, at length later. Devitt says that standard opaque belief ascriptions containing ‘London’, such as (11), ascribe, on their conventional meanings, a meaning that involves the disjunctive mode (really, the translational mode). But Devitt still maintains that his more specific, fine-grained meanings (such as referring to London under the ‘London’ mode) are genuine meanings, and it is possible to ascribe one of the more specific meanings. But to do so, one must use a complex, non-standard ascription, such as (13) below (237). (13) Pierre believes that London, qua city that he is living in, is not pretty. Thus Devitt’s revised view seemingly implies that the simple ascription (11) does not have a conventional meaning, on either its transparent or opaque reading, on which it ascribes the fine-grained meaning of referring only under the ‘London’ mode.


In short, Devitt's revised theory entails that a standard attitude ascription, on the conventional meaning of its opaque reading, ascribes only the very general mode of referring to a particular object under a translation of the name that appears in the 'that'-clause. I shall argue below that the opaque readings that involve the general translational mode allow for substitution of co-referring names salva veritate. Thus, Devitt's revised view entails that the conventional meanings of standard attitude ascriptions, on their opaque readings, are Shakespearean – just as direct reference says.
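It may help to set the initial and revised truth conditions side by side. The following schematic restatement is mine, not Devitt's; 'Believes', 'Refers', and 'Trans' are my shorthand for having a belief token, referring under a mode, and the set of modes associated with a name and its translations:

\[
\begin{aligned}
\text{Initial view: } & \text{(10) is true (opaque) only if } \exists t\,[\mathrm{Believes}(\mathrm{Pierre},t) \wedge \mathrm{Refers}(t,\mathrm{London},m_{\text{'London'}})].\\
\text{Revised view: } & \text{(10) is true (opaque) only if } \exists t\,\exists m\,[\mathrm{Believes}(\mathrm{Pierre},t) \wedge m \in \mathrm{Trans}(\text{'London'}) \wedge \mathrm{Refers}(t,\mathrm{London},m)].
\end{aligned}
\]

On this gloss, the only change is the quantification over modes in Trans('London'); everything in the next section turns on how that set is delimited.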

10.5 From Devitt's Revised Theory to Shakespearean Attitude Ascriptions

10.5.1 Devitt's Revised Theory and Substitution of 'Bernard' for 'Ortcutt'

A reasonable Devittian theory should extend Devitt's view about attitude ascriptions that contain 'London' to attitude ascriptions that contain 'Ortcutt'. So, Devitt's revised view seemingly entails (or should entail) that belief ascriptions that contain 'that Ortcutt is a spy', on their conventional opaque meaning, ascribe a meaning that involves the mode of referring to Ortcutt under either 'Ortcutt' or a translation of 'Ortcutt'. So, (2) below ascribes a meaning that involves this mode.

(2) Ralph believes that Ortcutt is a spy.

Now suppose (2) is true. And assume 'Bernard' is a translation of 'Ortcutt'. Then Devitt's revised view entails that (3) is also true, on its conventional opaque reading.

(3) Ralph believes that Bernard is a spy.

Suppose that 'Ralph believes that Bernard is not a spy' is also true on its conventional, opaque reading. If 'Bernard' is a translation of 'Ortcutt', then Devitt's revised theory entails that 'Ralph believes that Ortcutt is not a spy' is also true on its conventional, opaque reading. If 'Bernard' and 'Ortcutt' translate each other, then Devitt's revised theory entails that they can always be substituted for each other in attitude sentences, on the conventional opaque readings of all sentences involved. Obviously, substitution of co-referring names preserves truth-value on any transparent readings of these sentences. Generalizing, we can conclude that, on Devitt's revised view, all attitude ascriptions are Shakespearean, on all of their conventional meanings and readings.

Devitt, of course, rejects the view that all attitude ascriptions are Shakespearean, on all of their conventional meanings (171–178, 181–182). But is his doing so consistent with his revised theory? Is his doing so consistent with what he says about
‘London’ and ‘Londres’? Devitt can resist the Shakespearean conclusion only if ‘Londres’ is a translation of ‘London’, but ‘Bernard’ is not a translation of ‘Ortcutt’ (and ‘Ortcutt’ is not a translation of ‘Bernard’). Is there any reasonable translation relation that would have this consequence? Let’s consider some candidates.

10.5.2 A Translation Relation that Hinges on Co-reference

One obvious translation relation for proper names says that two names translate each other iff they co-refer. So, 'Londres' translates 'London', and vice versa, because they co-refer. Substitute this translation relation into Devitt's revised view. Then 'that London is pretty' attributes (on its conventional opaque reading) a meaning that involves the following mode: referring to London under the mode of some proper name that refers to London. On this version of Devitt's revised view, 'Pierre believes that London is pretty' is true on its opaque reading if Pierre assents to 'Londres est jolie'.

However, this version of Devitt's view also entails that 'Ralph believes that Bernard is a spy' attributes a meaning that involves the following mode: referring to Bernard under the mode of some proper name that refers to Bernard. So, 'Ralph believes that Bernard is a spy' is true on its conventional opaque reading if Ralph assents to 'Ortcutt is a spy'. Similarly, 'Ralph believes that Ortcutt is not a spy' is true as long as Ralph assents to 'Bernard is not a spy'. Generalizing, we can conclude that all attitude ascriptions are Shakespearean, on their conventional meanings.
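Put schematically (my notation, and only a gloss on the proposal just described), the co-reference relation is:

\[
\mathrm{Trans}(N,N') \iff \exists x\,[\mathrm{Refers}(N,x) \wedge \mathrm{Refers}(N',x)],
\]

so the translational mode ascribed by a 'that'-clause containing a name of x collapses into: referring to x under the mode of some proper name of x. Any co-referring name satisfies that condition, which is why substitution cannot affect truth-value on this version of the view.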

10.5.3 Well-Established, Frequently Used Translation

Perhaps Devitt can avoid the Shakespearean conclusion by stipulating that by 'translation' his theory means well-established, frequently used translation. By 'well-established, frequently used translation', I mean one that has actually been frequently employed by actual translators, to the point that the relevant translation is well-established. For example, bilingual speakers often use 'Londres' as a translation of 'London' into French. This practice is standard, frequent, and well-established.

If we substitute this specification of translation relations into Devitt's revised view, we get the result that the clause 'that London is pretty' attributes a meaning that involves the following mode: referring to London under the mode of some name that is a well-established, frequently used translation of 'London' in English. This translation relation would allow (10) to be true on its conventional opaque reading, under Devitt's revised theory. But if 'Bernard' is not a well-established, frequently used translation of 'Ortcutt', then it will prevent (3) from being true on its opaque reading. And it is reasonable to think that 'Bernard' is not a well-established, frequently used translation of 'Ortcutt'. After all, no translators have actually used 'Bernard' to translate 'Ortcutt' (let us suppose).


However, combining the "well-established, frequently used" notion of translation with Devitt's revised theory results in some unacceptable consequences. Consider sentence (14).

(14) Some ancient observer of the evening sky, who spoke a language that has never been translated into modern English, believed that Venus appeared in the evening sky.

There surely were ancient observers of the evening sky who spoke languages currently unknown to English speakers and who noticed that Venus appears in the evening sky. So (14) is true, on one of its conventional meanings. And it should be true on a reading that does not allow free substitution of co-referring definite descriptions, for (15) is false, on the conventional meaning of the reading in which the definite description in it takes narrow scope with respect to 'believed'. (I assume that no ancient observer of the evening sky had a Copernican view of the planets.)

(15) Some ancient observer of the evening sky, who spoke a language that has never been translated into a modern European language, believed that the planet orbiting the Sun between Mercury and the Earth appeared in the evening sky.

So, (14) is true on a reading that resists substitution of co-referring definite descriptions, and so it is true on an opaque reading, by Devitt's standards for opacity. But if 'that Venus appeared in the evening sky' refers to a Devittian mode that involves the mode of referring to Venus via some name whose well-established, frequently used translation is 'Venus', then (14) is false. So, the notion of translation that figures in Devitt's theory of conventional meaning should not be restricted to well-established, frequently used translation relations.

10.5.4 Distinct-Language-Only Translation Relations

Let's try again. We are, on Devitt's behalf, looking for a translation relation that allows 'Londres' to translate 'London' (and vice versa), but which does not allow 'Ortcutt' to translate 'Bernard' (or vice versa). One striking difference between the two pairs of names is that 'London' and 'Londres' are names in different languages (English and French), whereas 'Bernard' and 'Ortcutt' are names in the same language (English). So, perhaps what a Devittian theory needs is a translation relation that can hold between names of different languages, but cannot hold between names of the same language. Notice that this proposal is consistent with Devitt's footnote, where he says "Or indeed the translation of 'London' in any other language" (my italics).

So, consider the following translation relation: name N in language L translates name N* in language L* iff L and L* are distinct, and there is an x such that N refers to x in L and N* refers to x in L*. Then let's plug this translation theory into Devitt's revised theory of the conventional meaning that 'that'-clauses attribute, on their opaque readings. The result is that 'that London is pretty' attributes a meaning that involves the following: referring to London under either (a) the 'London' mode or (b) the mode of some name that (i) refers to London in some language but (ii) is not a name of London in English. Generalizing, let N be a proper name of English that refers to x. Then the clause "that … N …" in English attributes a meaning that involves referring to x under either (a) the N mode or (b) the mode of some name that refers to x but is not a name of x in English. A complement clause in a language L other than English (e.g., French or Finnish) would (usually) use names that refer to x but are names in L. Let us call this translation relation the distinct-language-only translation relation.

If we plug this translation relation into Devitt's revised theory, we get the result that attitude ascriptions that contain 'that Ortcutt is a spy' and 'that Bernard is a spy', on their conventional opaque meanings, attribute different meanings to mental tokens, because 'Ortcutt' and 'Bernard' do not translate each other, on this new translation relation. So, (2) and (3) can differ in truth-value, and belief ascriptions are not Shakespearean.

But there are several serious issues with this proposal. One is that it is inconsistent with some remarks that Devitt makes about Kripke's puzzle. Devitt says that the moral of Pierre's story can be derived from cases in which the relevant names are "closely related" but in the same language, in the way that 'Ruth Marcus' and 'Ruth Barcan' are (233). Devitt doesn't explain what he means by 'closely related', but it seems to have little to do with belonging, or not belonging, to the same language.7

7 The proposal is also rather un-Devittian in another respect. In note 80, p. 233, Devitt says "I assume that there are no good semantic objections to this mixing of languages [in 'Londres is pretty']. 'Londres is pretty' has speaker meanings that are as unobjectionable as any. Its participation in the conventions of both French and English may be offensive to the Académie française but nevertheless yields straightforward conventional meanings."

Another drawback to this distinct-language-only translation relation is that it is not transitive. If N1 and N2 are names of x in a single language L1, and N3 is a name of x in a different language L2, then N1 translates into N3, and N3 translates into N2, but N1 does not translate into N2. And obviously the relation is not reflexive.
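The shape of the proposal, and the failure of transitivity just noted, can be displayed compactly (again my notation, not the text's; L(N) stands for the language of name N):

\[
\mathrm{Trans}_{\mathrm{DL}}(N,N^{*}) \iff L(N) \neq L(N^{*}) \wedge \exists x\,[\mathrm{Refers}(N,x) \wedge \mathrm{Refers}(N^{*},x)].
\]

If N1 and N2 are names of x in L1, and N3 is a name of x in L2 distinct from L1, then Trans_DL(N1, N3) and Trans_DL(N3, N2) hold while Trans_DL(N1, N2) fails; and Trans_DL(N, N) never holds.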

A final issue with the distinct-language-only translation relation is that a revised Devittian theory that incorporates it fails to avoid an objection that Devitt thinks his revised theory does avoid. Let me explain. When Devitt presents his revised view (the one that uses translational modes), he says in a footnote, "A modification along the lines in the text is necessary, of course, to handle Church's objection (1950) to Carnap" (233 n. 81). Carnap (1947) held that (roughly speaking) the complement clause of a belief ascription refers to the sentence embedded inside it. Church (1950) pointed out that, on Carnap's theory, the complement clause of a belief ascription in English refers to a different sentence than does the complement clause of a standard translation of that belief ascription into another language. For example, on Carnap's view, the complement clause of 'Pierre believes that London is pretty' refers to a different sentence than the complement clause of 'Pierre croit que Londres est jolie'. But if these complement clauses refer to different things, then they differ in meaning, and so the attitude ascriptions differ in meaning. That seems wrong. Moreover, if the attitude ascriptions differ in meaning, then perhaps it is possible for them to differ in truth-value. That also seems wrong.

Devitt's initial theory has a similar problem. On that initial theory, the 'that'-clause of the ascription 'Pierre believes that London is pretty', on the ascription's conventional opaque reading, attributes to Pierre's thought a property that involves the property of referring to London under the 'London' mode. Also, on that original theory, the 'that'-clause of the ascription 'Pierre croit que Londres est jolie', on its conventional opaque reading, attributes to Pierre's thought a property that involves the property of referring to London under the 'Londres' mode. So, the 'that'-clauses refer to different properties. So, the ascriptions have different conventional meanings, on their respective opaque readings. So, it seems that they could, in principle, differ in truth-value. In fact, they do on the original theory: the French ascription is true, on its conventional opaque reading, whereas the English one is false.

In the note from Devitt that I quoted above, Devitt says that he needs to modify his original view in order to avoid a Churchian objection to his view. He may have had the previous objection in mind. He also implies that reference to a translational mode is needed to allow his view to escape the Churchian objections. But Devitt's revised theory has a similar Churchian problem, if it incorporates the latest distinct-language-only translation relation that I described above. The English ascription 'Pierre believes that London is pretty' attributes a meaning-property that concerns the name 'London' and translation from non-English languages, whereas 'Pierre croit que Londres est jolie' attributes a meaning-property that concerns the name 'Londres' and translation from non-French languages. These are different properties. So, the 'that'-clauses refer to different properties. So, the ascriptions differ in meaning, on their conventional opaque readings. So, the two ascriptions could differ in truth-value, on their conventional opaque readings. So, the revised theory that uses the "cross-linguistic-only" translation relation is just as vulnerable to a Churchian objection as Devitt's initial theory.

I conclude that the only reasonable translation relation that Devitt's revised view can use is one that allows co-reference of two proper names, whether in the same language or different languages, to be both necessary and sufficient for them to translate each other. But then 'Bernard' translates 'Ortcutt' and vice versa, and Devitt's revised theory entails that (2) is true iff (3) is. In short, Devitt's revised theory entails that attitude ascriptions are Shakespearean, on the conventional meanings of their opaque readings.

I have gone into many technical details here. This should not obscure the following simple points. Ordinary speakers translate names from other languages into their own language. They use those translated names in 'that'-clauses when ascribing attitudes to those foreign-language speakers. The resulting attitude ascriptions seem to be true.
These apparent facts are compatible with direct-reference theory, but raise difficulties for many other theories, including Devitt’s initial and revised theories.


10.5.5 How a Devittian Might Resist the Above Argument

Devitt's revised view relies on the notion of a translational mode. I have argued that the best (and only plausible) translation relation that works for his revised theory has the consequence that attitude ascriptions are Shakespearean, on the conventional meanings of their opaque readings. But Devitt claims that each token has several meanings, and consequently, there are several notions of synonymy and translation (242–243). A Devittian could resist my argument by claiming that there are several translation relations that are relevant to attitude ascriptions, on their conventional meanings. A Devittian could invoke these translation relations in one of two ways.

First, a Devittian could claim that each standard attitude ascription has more than one conventional opaque meaning. Each distinct opaque reading has the attitude ascription's 'that'-clause referring to a distinct translational mode that involves a distinct translation relation. For example, on one reading (or disambiguation) of an ascription that contains 'that London is pretty', the ascription attributes to a thought the property of referring to London under the 'London' mode only. Equivalently, the 'that'-clause refers to the property of referring to London under some translation_{'London'} of 'London', where translation_{'London'} is a translation relation that holds only between 'London' and itself. On another disambiguation, the relevant ascription attributes to a thought the property of referring under the 'London' or 'Londres' mode, or equivalently to a mode that involves a translation_{'London' or 'Londres'} relation that holds only between 'London', itself, and 'Londres'. Other translation modes may allow translation of 'London' into 'Lontoo' (the Finnish name), but not 'Londres'. And so on. On some of these conventional opaque readings, substitution of some co-referring proper names can fail to preserve truth-value (whereas substitution of others may preserve truth-value). So, attitude ascriptions have conventional opaque readings that are not Shakespearean. Call this the multiple-opaque-ambiguity response.

A second option for the Devittian is to claim that the attitude ascriptions are unambiguous, but context-sensitive. Compare with 'he': it is unambiguous, but context-sensitive, having different referents and contents in different contexts. A Devittian could similarly say that in different contexts an attitude ascription's 'that'-clause refers to different translational modes. The relevant translational mode depends on the attributer's intentions. In some contexts, the relevant translational mode has the effect of making the attitude ascription Shakespearean. In other contexts, the relevant translational mode is more restrictive, and substitution of co-referring proper names can fail. Such a view would resemble Mark Richard's (1990) view on attitude ascriptions. On Richard's view, 'that'-clauses in attitude ascriptions are used to translate the mental representations of believers. In each context, the relevant translation relation obeys certain restrictions. The restrictions vary from context to context, depending on the speaker's intentions. The result is failure of substitutivity of co-referring proper names in some contexts. Call the Devittian response that appeals to different translation relations in different contexts the context-sensitivity response.
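On one natural way of spelling out the first response (this is my gloss, on the assumption that each restricted relation is generated by a set of co-referring names), the family of relations looks like this:

\[
\mathrm{Trans}_{S}(N,N') \iff N \in S \wedge N' \in S, \qquad S \text{ a set of co-referring names,}
\]

with, for example, S = {'London'}, S = {'London', 'Londres'}, or S = {'London', 'Lontoo'}. The multiple-opaque-ambiguity response posits one conventional opaque reading per choice of S; the context-sensitivity response instead lets context fix S.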


There are serious issues with both replies. Consider first the multiple-opaque-ambiguity response. This response leads quite quickly to the hypothesis that attitude ascriptions are hugely ambiguous, for there are, or could be, non-trivial translations for 'London' and 'London is pretty' into a huge number of languages. (For example, according to Google Translate, the Albanian translation for 'London is pretty' is 'Londra është e bukur'.) The multiple-opaque-ambiguity response implies that, for each such language, there is a distinct conventional opaque reading for the attitude ascription 'Pierre believes that London is pretty', referring to a distinct translation relation. But each such reading is conventional only if there is a distinct linguistic convention for each. There are (surely) not that many linguistic conventions associated with 'that London is pretty'. Devitt seemingly agrees. As he puts the point, "An ascription surely does not have indefinitely many conventions" (207).

Consider next the context-sensitivity response. Devitt rejects this response, because he thinks that a theory that appeals to context-sensitivity raises issues about whether hearers understand typical utterances of attitude ascriptions (197–208). He has such worries about Richard's theory and other "hidden indexical" theories. Devitt also seemingly endorses Soames's criticisms of Richard's view (2012: 73 n. 17). I agree with Soames's criticisms, as well.

10.5.6 What the Above Argument Does, and Does Not, Show About Devitt's Revised Theory

The above argument shows that only certain translation relations are plausible candidates for determining the translational modes to which a 'that'-clause refers, in virtue of its conventional meaning. On those translation relations, Devitt's revised theory entails that attitude ascriptions are Shakespearean, on their opaque conventional readings. The argument does not show that proper names do not have meanings that are more fine-grained than the translational-mode meaning. For example, the argument does not show that 'London' does not have the fine-grained meaning of referring to London under the 'London' mode only. What it does show is that these fine-grained meanings are not attributed to thoughts by standard attitude ascriptions on the conventional meanings of opaque readings.

Devitt himself seemingly concludes that standard attitude ascriptions, on their conventional meanings, and under their opaque readings, do not, as a matter of their conventional meaning, ascribe the previous fine-grained meanings to thoughts. He says that this information about fine-grained meanings might be pragmatically conveyed by utterances of standard attitude ascriptions. He also says (237) that speakers could perhaps convey information about fine-grained meanings by uttering non-standard ascriptions such as (13).

(13) Pierre believes that London, qua city that he is living in, is not pretty.


Devitt is not clear about whether he thinks that (13) conveys information about fine-grained meanings in virtue of (13)'s conventional meaning. I think it unlikely that (13) does. But Devitt could introduce an extension of English, with some stipulated conventional meanings, on which the conventional meaning of (13), on an opaque reading, does convey information about fine-grained meanings, in virtue of (13)'s conventional meaning on that reading. Of course, this extension of English would not be ordinary English. Direct-reference theory does not address the semantics of such technical extensions of English.

10.5.7 Reflections on Direct-Reference Theory and the Preceding Argument that Devitt's Revised Theory Implies Shakespearean Attitude Ascriptions

Let's review a bit. Devitt initially emphasizes the evidence against attitude ascriptions' being Shakespearean. As he points out, ordinary speakers often utter a sentence of the form "A believes that N1 is F" while they resist uttering "A believes that N2 is F", even though N1 and N2 are co-referring proper names. Often, they will judge that the first ascription is true while the second is false. Devitt's initial theory fits well with this evidence. But there is also evidence in favor of Shakespeareanism that is less obvious, but no less real. For example, ordinary practices of attributing attitudes to foreign-language speakers fit much better with Shakespearean theories. Devitt begins to notice this when he considers Kripke's Pierre and Peter cases, but he does not notice all of the implications of those cases for his revised view.8

8 More evidence in favor of direct reference comes from the ways in which speakers use simple demonstratives in attitude ascriptions. Devitt discusses some of this evidence on pp. 220–223.

10.6 Replies to Devitt's Arguments Against Direct-Reference Theory

We have arrived at a Devittian view that resembles direct reference, because it entails that attitude ascriptions are Shakespearean, on their conventional meanings. But Devitt has argued against direct-reference theory, and particularly against its claim that attitude ascriptions are Shakespearean. I turn now to rebutting those arguments.



10.6.1 The Identity Problem and the Opacity Problem for Direct Reference

Devitt's best-developed arguments against direct reference are the Identity Problem and the Opacity Problem (171–178). Devitt's versions of both of these arguments ultimately rely on his claim that attributing certain fine-grained meanings to utterances and thoughts is important to explaining behavior.

The Identity Problem, in outline form, says that if "a=b" is true, then direct-reference theory implies that it does not differ in meaning from "a=a". But the former sentence does differ in meaning from the latter. Therefore, direct reference is incorrect. Taking a particular example: 'Twain is Clemens' is true, and direct-reference theory implies that it does not differ in meaning from 'Twain is Twain'. But they do differ in meaning. So, direct-reference theory is incorrect.

The key premise in the latter argument is the claim that 'Twain is Clemens' differs in meaning from 'Twain is Twain'. Nearly all ordinary speakers would agree with it. (I am assuming that the opinions of the undergraduate students in my beginning philosophy of language classes reflect ordinary opinion.) But many philosophers have thought that the claim should be supported by something more than a raw appeal to intuition about meaning. Devitt agrees (172–173). Frege claimed that "a=b" can differ in cognitive significance from "a=a." One could similarly claim that 'Twain is Clemens' differs in cognitive significance from 'Twain is Twain'. More specifically, one could claim that the first sentence is informative but the second is not. One could then claim that if this is so, then the two sentences differ in meaning. But Devitt rejects the last premise in this argument. He thinks that differences in informativeness are epistemological, and may not indicate differences in meaning (172–173).

Devitt thinks that the best support for the crucial premise (that 'Twain is Clemens' differs in meaning from 'Twain is Twain') follows from his claims about semantic methodology (173–174). If Alice is disposed to sincerely utter both 'Twain is Twain' and 'Twain is Clemens', then ordinary speakers are willing to utter both 'Alice believes that Twain is Twain' and 'Alice believes that Twain is Clemens'. They are willing to utter the first ascription, but not the second, if Alice is disposed to sincerely utter 'Twain is Twain' but is not disposed to utter 'Twain is Clemens'. But ordinary speakers use 'that'-clauses in attitude ascriptions to attribute putative meanings to thoughts. So, ordinary speakers' usage indicates that the thoughts that underlie speakers' dispositions to utter 'Twain is Twain' and 'Twain is Clemens' have different putative meanings. Furthermore, people who have thoughts with these distinct putative meanings tend to behave differently, for instance, in which sentences they utter. So, attributing these putative meanings to thoughts seemingly enables ordinary speakers to explain behavior in ways that they could not otherwise. Genuine meanings are properties that aid in explanation of behavior. So, the properties attributed to thoughts by the 'that'-clauses 'that Twain is Twain' and 'that Twain is Clemens' are genuine meanings, and are distinct meanings. So, the sentences embedded inside these 'that'-clauses differ in meaning. So 'Twain is Twain' and 'Twain is Clemens' differ in meaning.

Devitt's version of the Opacity Problem is closely linked to his version of the Identity Problem. It entails (175–176) that the sentences 'Alice believes that Twain is Twain' and 'Alice believes that Twain is Clemens' can differ in truth-value. So, the two ascriptions differ in meaning. But direct-reference theory entails that they have the same meaning. So, direct-reference theory is incorrect. Devitt argues for the first premise (that the two ascriptions can differ in truth-value) by appealing again to the behavioral differences between those who are willing to utter only 'Twain is Twain' and those who are willing to utter both that sentence and 'Twain is Clemens'.

10.6.2 A Direct-Reference Reply to Devitt's Identity and Opacity Problems

Devitt argues that 'Twain is Twain' differs in meaning from 'Twain is Clemens'. But Devitt thinks that many meanings are non-conventional. Devitt's arguments may establish a difference in (Devittian) meaning between 'Twain is Twain' and 'Twain is Clemens', yet fail to establish a difference in the conventional meanings of those sentences and the names in them. Devitt himself seems to concede this possibility below.

The cognitive difference between names reflected in a competent person's failure to accept an identity is sufficient for a difference of meaning. The difference need not be in the conventional meanings of the names; it may be only in fine-grained thought meanings needed to explain the person's behavior. (176, my italics)

But direct-reference theory is a theory about conventional meaning. So, the above Devittian arguments from Identity and Opacity do not show direct-reference theory is false unless they also show either (a) that 'Twain is Twain' and 'Twain is Clemens' differ in conventional meaning or (b) that 'Alice believes that Twain is Twain' and 'Alice believes that Twain is Clemens' differ in conventional meaning. But Devitt's arguments do not show this. We could, of course, add an additional premise to Devitt's arguments to get the desired conclusion that the two sentences differ in conventional meaning. For instance, we could add the premise that if 'Twain is Twain' and 'Twain is Clemens' are used differently in acts of explaining by ordinary speakers, then they differ in conventional meaning. But Devitt has not argued for this additional premise. Direct-reference theorists need not concede it.

The attitude ascriptions do appear to differ in their "explanatory power" (in what they can explain) and Devitt could point to this to support his claim that they differ in conventional meanings. But direct-reference theorists can give alternative accounts for the apparent difference in explanatory power between the two ascriptions. They can concede that speakers who utter 'Alice believes that Twain is Clemens' and 'Alice believes that Twain is Twain' may thereby sometimes pragmatically convey different information about those Devitt-style meanings, and this information may be relevant to explanation. But this is consistent with their holding that these ascriptions, and the names in their 'that'-clauses, have the same conventional meanings.

Soames (2002) argues that a speaker who utters an attitude ascription may thereby assert a meaning (a proposition) that is descriptively richer than the conventional meaning of the ascription. For instance, such a speaker who utters 'Alice believes that Twain is Clemens' may assert that Alice believes that the author of Huckleberry Finn who is identical with Twain is the person who was at Betty's party who is identical with Clemens. The additional information is not part of the ascription's conventional meaning because the additional descriptive information varies from utterance to utterance of the ascription. The additional information asserted may help explain Alice's behavior.

Salmon (1986) holds that a speaker who utters an attitude ascription may conversationally implicate a meaning (a proposition) that concerns the way in which the believer believes the conventional meaning of the 'that'-clause. For example, a speaker who utters 'Alice believes that Twain is Clemens' may thereby conversationally implicate that there is a token thought that disposes Alice to utter 'Twain is Clemens', and one of the meanings of that thought is the same as the singular proposition that Twain is Clemens. This latter information is not part of the conventional meaning of the attitude ascription.

There is another manner in which speakers' utterances of attitude ascriptions may, in a certain attenuated sense, convey more explanatory information than the conventional meanings of the words they utter: utterances of those ascriptions may differ in natural meaning. (Hints of this explanation appear in Braun 1998. Stronger hints of it appear in Braun 2006.) Recall Grice's (1989) distinction between natural and non-natural meaning. Non-natural meaning occurs because of intelligent agents' thoughts and intentions. For example, 'smoke' means smoke because of agents' thoughts and intentions. Natural meaning, however, does not require agents' thoughts and intentions. For example, smoke naturally means fire because there is a natural causal connection between the two that does not require any thoughts or intentions. As a result, someone who is aware of the natural causal connection can probabilistically infer from the presence of smoke that there is fire nearby. Similar points hold for utterances. For example, utterances of 'Twain is Clemens' are nearly always caused by tongue movements. So, utterances of 'Twain is Clemens' by a speaker naturally mean that the speaker has a tongue. This kind of natural meaning is comparable to smoke's meaning fire. As a result, if a person is meeting Alice for the first time, and he hears her utter that sentence, and he is aware that most vocal utterances are caused by tongue movements, then he can infer (with probability, but not certainty) that Alice has a tongue. This is so, even though her sentence does not mean (in any sense) that she has a tongue, and even though she did not speaker-mean (e.g., assert or implicate) that she has a tongue.


Now suppose that utterances of ‘Twain is Clemens’ are characteristically caused by thoughts that have certain Devittian referential-mode properties, such as having a part that refers to Twain under the ‘Twain’ mode and having a part that refers to Clemens under the ‘Clemens’ mode. Then Alice’s utterance of ‘Twain is Clemens’ naturally means (naturally conveys the information) that she has a thought with such properties. This is so even though the sentence does not conventionally have that meaning, and even though Alice does not assert, implicate, or otherwise speaker-mean that information. Suppose also that Alice’s utterances of ‘Twain is Twain’ are characteristically not (immediately) caused by thoughts with those properties. Then her utterances of ‘Twain is Twain’ do not naturally mean those properties. Suppose further that speakers have some (imperfect) tendency to accept and utter ‘Alice believes that Twain is Clemens’ only if Alice is disposed to utter ‘Twain is Clemens’. Then their utterances of that attitude ascription naturally convey the information that Alice (probably) has a thought with the properties I described in the preceding paragraph. This is so even if none of these properties are conventional meanings of the sentence ‘Alice believes that Twain is Clemens’. It is so even if they neither assert, nor implicate, nor in any other way speaker-mean, this information. Suppose that someone hears an utterance of ‘Alice believes that Twain is Clemens’, and infers that Alice is disposed to utter ‘Twain is Clemens’. Since the utterance of the attitude ascription naturally conveys the information that Alice has the latter disposition, the hearer’s inference is likely to be correct. None of this holds for utterances of ‘Twain is Twain’ and ‘Alice believes that Twain is Twain’. But these differences between ‘Twain is Clemens’ and ‘Twain is Twain’, and between ‘Alice believes that Twain is Clemens’ and ‘Alice believes that Twain is Twain’, need not be due to differences in conventional meaning between the corresponding sentences, nor due to differences between what speakers assert or implicate or otherwise speaker-mean when they utter those sentences. They may be due to a difference in mere natural meaning.

10.7 More on Direct-Reference Theory and Explanation of Behavior

I argued in the previous section that, even if the belief ascriptions 'Alice believes that Twain is Twain' and 'Alice believes that Twain is Clemens' have the same conventional meaning (as direct reference says), utterances of those ascriptions can differ in the explanatory information they (naturally or non-naturally) convey. I shall now argue that even if direct reference is true, attitude ascriptions can explain behavior purely in virtue of their conventional meanings alone (without the addition of non-conventionally conveyed meanings). I begin my argument by considering explanations that do not involve attitude ascriptions.


10.7.1 Truth-Conditions for 'Because' Sentences

Many linguistic explanations of events have the form ┌P because Q┐. Sentence (16) gives a plausible sufficient (but not necessary) condition for the truth of sentences of this sort.9

(16) A sentence of the form ┌P because Q┐ is true if: there is an event e1 whose occurrence is metaphysically sufficient for P to be true, and an event e2 whose occurrence is metaphysically sufficient for Q to be true, and both e1 and e2 occur (and so P and Q are true), and e2 is a cause of e1.

For example, consider (17).

(17) Ortcutt fell because Ortcutt slipped on a banana peel.

According to (16), if there is an event whose occurrence is sufficient for the truth of 'Ortcutt slipped on a banana peel', and an event whose occurrence is sufficient for the truth of 'Ortcutt fell', and the first event is a cause of the second, then (17) is true. So, (17) is true if Ortcutt is the subject of a slipping-on-a-banana-peel event, and also the subject of a falling event, and the former event is a cause of the latter event.

If (16) is true, then (17) can be true even if Ortcutt is the subject of two slipping events, only one of which causes him to fall. Suppose, for example, that Ortcutt slips twice, once by stepping on a banana peel with his left foot, and again by stepping on a banana peel with his right foot. And suppose that only the second slipping causes him to fall. Then (16) entails that (17) is true in such a situation. (Devitt's remarks about a structurally similar case (Devitt 1997: 120–121) presented by Richard (1997: 93) suggest that he might disagree.)

(16) entails that if Ortcutt is identical with Bernard, then (17) and (18) are true under exactly the same conditions.

(18) Ortcutt fell because Bernard slipped on a banana peel.

If Ortcutt is identical with Bernard, then any event whose occurrence is sufficient for the truth of 'Ortcutt slipped on a banana peel' is also sufficient for the truth of 'Bernard slipped on a banana peel'. So, according to (16), if Ortcutt is Bernard, then (18) is true as long as an Ortcutt-slipping-on-a-banana-peel event is a cause of an Ortcutt-falling event. Of course, someone who is unsure whether 'Ortcutt is Bernard' is true, or thinks it is false, may think that (17) is true but be agnostic about (18) or think it is false. But such a person would be mistaken.

9 (16) does not give a necessary condition for ┌P because Q┐. The sentence 'This planar figure is a square because it is an equilateral four-sided figure with four right-angled vertices' can be true in a context, even though its right-hand sentence does not describe a cause of an event described by its left-hand sentence. A similar point holds for 'Alice should give to UNICEF because UNICEF helps people in need'.
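Condition (16) can be compressed into a schema (my shorthand: 'Occ(e)' for e occurs, 'Suff(e, P)' for e's occurrence being metaphysically sufficient for the truth of P, and 'Cause(e2, e1)' for e2 being a cause of e1):

\[
\exists e_{1}\,\exists e_{2}\,[\mathrm{Occ}(e_{1}) \wedge \mathrm{Occ}(e_{2}) \wedge \mathrm{Suff}(e_{1},P) \wedge \mathrm{Suff}(e_{2},Q) \wedge \mathrm{Cause}(e_{2},e_{1})] \;\Rightarrow\; \ulcorner P \text{ because } Q \urcorner \text{ is true.}
\]

Since Suff(e, P) depends only on which event occurs, not on which co-referring name P uses for its subject, the schema makes it easy to see why (17) and (18) must agree in truth-value if Ortcutt is Bernard.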

10.7.2 'Because' Sentences and Explanation

Suppose that (17) is true, and suppose it is true in the way that (16) describes. That is, suppose there occurs an event that is sufficient for the truth of 'Ortcutt fell' and another event sufficient for the truth of 'Ortcutt slipped on a banana peel', and the latter event causes the former. Does (17) then explain Ortcutt's falling? It's hard to resist the conclusion that it does. Furthermore, it's hard to resist the conclusion that the sentence in (17) that appears after the word 'because', namely 'Ortcutt slipped on a banana peel', also explains Ortcutt's fall. Generalizing: if a sentence of the form ┌P because Q┐ is true in the way described by (16), then both it and sentence Q explain an event described by P.

An old-fashioned advocate of a Deductive-Nomological theory of explanation (a DN theory, for short) might object. (For an introduction to DN theories of explanation, see Woodward 2017.) According to some old-fashioned DN theories, an explanation of an event is a linguistic argument. A linguistic argument is an explanation of an event iff (a) its premises describe events that caused the event being explained, (b) its premises include laws or generalizations, (c) its conclusion describes the explained event, and (d) the conclusion follows deductively or probabilistically from the previous premises. So, the sentence 'Ortcutt slipped on a banana peel' by itself does not explain Ortcutt's fall.

This DN theory is far too restrictive. (I think Devitt would agree.) Ordinary people and scientists very rarely present arguments of the above sort (with generalizations, and so on) when they attempt to explain events. The preceding old-fashioned DN theory entails that virtually none of these speakers ever successfully state explanations of events. A slightly less restrictive DN theory distinguishes between complete or ideal explanations of events, on the one hand, and incomplete, partial, or non-ideal explanations of events, on the other hand. An ideal explanation of an event satisfies the previous requirements for an explanation. An incomplete explanation need not mention the relevant generalizations. It need only describe a nomologically sufficient condition for the explanandum event. But even this more liberal view is too restrictive. Ordinary people and scientists rarely describe nomologically sufficient conditions for events. For example, the sentence 'Ortcutt slipped on a banana peel' does not state a nomologically sufficient condition for Ortcutt's fall. So, this new theory would entail that speakers rarely, if ever, formulate even incomplete explanations of events.

Peter Railton (1981) and David Lewis (1986) have endorsed less restrictive theories of explanation that seem much more plausible. According to Railton, a sentence explains an event iff it provides some information contained in an ideal DN
explanation of the event, where an ideal DN explanation is a sequence of propositions that describes the entire causal history of the event and the nomological generalizations involved in that history. According to Lewis, a sentence explains an event iff it provides some information about the causes of that event. Sentence (17) does provide information about the causes of Ortcutt’s fall. It also provides some information that is contained in an ideal DN explanation of the event, because an ideal DN explanation would describe an event that is sufficient for the truth of ‘Ortcutt slipped on a banana peel’. So, both Railton’s and Lewis’s theory entail that (17) explains Ortcutt’s fall. Both of their theories allow that two distinct sentences can explain the same event, even if they differ in the amount of explanatory information they provide. For example, consider a longer version of (17) that mentions not only Ortcutt’s slip, but also the state of his vestibular system. Railton’s and Lewis’s theories entail that both (17) and this longer sentence explain Ortcutt’s fall, even though the longer sentence provides more explanatory information about Ortcutt’s fall than (17). Much more could be said about explanation and generalizations that is relevant to our concerns here. But I cannot take the space to do so. (I have said more in Braun 2001a, b.)

10.7.3 Explanations and Identity

If (17) explains Ortcutt's fall, and Ortcutt is identical with Bernard, then (18) also explains Ortcutt's fall. Those who dissent from 'Ortcutt is Bernard' may judge that (17) explains Ortcutt's fall and that (18) does not. But they would be mistaken.

Some might be tempted to say that (18) is an explanation of Ortcutt's fall for some people but not for others, depending on whether they assent to 'Ortcutt is Bernard'. Following this line of thought would lead us to a notion of explanation that is relativized to agents' epistemic or doxastic conditions. But I am interested here in a more metaphysical notion of explanation. I am concerned with the following question: Does (18) provide enough information to explain Ortcutt's falling, regardless of whether anyone is aware of that information? (I think Devitt is also concerned with this metaphysical notion of explanation when he discusses semantics and explanation.)

10.7.4 Direct Reference and True 'Because' Sentences

Let's return now to direct-reference theory and the question of whether Shakespearean attitude ascriptions can explain behavior. Imagine that Betty is disposed to assent sincerely to 'Twain smokes'. Then according to direct-reference theory, (19) is true.


(19) Betty believes that Twain smokes. Now suppose that Betty utters ‘Twain smokes’, and consider the following ‘because’ sentence. (20) Betty uttered ‘Twain smokes’ because Betty believes that Twain smokes. Assume furthermore that direct-reference theory is true and (19) is Shakespearean, and so the belief ascription in (20) is Shakespearean. Is (20) true in the situation described above? And does (20) explain Betty’s behavior, in this scenario? If (16) is a correct account of ‘because’ sentences, then (20) is true in this scenario, for the following reasons. (a) There is an utterance-event whose occurrence is sufficient for the truth of ‘Betty uttered “Twain smokes”’. (b) There is an event, namely a token thought (a “thinking”), one of whose contents (or meanings) is (the singular proposition) that Twain smokes. (c) The occurrence of this thought is sufficient for the truth of ‘Betty believes that Twain smokes’, on its conventional meaning. (d) The thought is a cause of the utterance. (20) is true in this scenario even if Betty has another thought, that is also a thought that Twain smokes, but which does not cause her to utter ‘Twain smokes’. Suppose, for example, that Betty is disposed to assent to ‘Clemens smokes’, and suppose the thought that disposes her to assent to ‘Clemens smokes’ does not dispose, or cause, her to utter ‘Twain smokes’, because she dissents from ‘Twain is Clemens’. However, direct reference says that the content of this ‘Clemens’ thought is the singular proposition that Twain/Clemens smokes. So, the occurrence of this ‘Clemens’ thought is sufficient, according to direct reference, for the truth of ‘Betty believes that Twain smokes’. So, Betty has a thought, one of whose contents is that Twain smokes, and which causes her to utter ‘Twain smokes’; and she has another thought, one of whose contents is that Twain smokes, but which does not cause her to utter ‘Twain smokes’. But since there is an event that makes ‘Betty believes that Twain smokes’ true, and that causes Betty to utter ‘Twain smokes’, (20) is true, if the theory of ‘because’ sentences in (16) is correct. Compare this conclusion with my earlier remarks about the scenario in which Ortcutt slips twice on banana peels, but only one of those slips causes him to fall. (Devitt’s remarks about a similar case presented by Richard 1997 suggest that he would disagree with my claim that (20) is true in the above scenario. See Devitt 1997: 120–121. I think this hypothetical Devittian opinion about (20) is incorrect.) According to direct-reference theory, the attitude ascription in (20) is Shakespearean, on its conventional meaning. Substituting ‘Clemens’ for ‘Twain’ in the attitude ascription in (20) gives us (21). (21) Betty uttered ‘Twain smokes’ because Betty believes that Clemens smokes. (21) is also true in the above scenario, if direct reference is true and my previous analysis of ‘because’ sentences is correct. Betty has a thought, one of whose
contents (or meanings) is (the singular proposition) that Twain smokes. If direct reference is correct, then the occurrence of this event is sufficient for the truth of both 'Betty believes that Twain smokes' and 'Betty believes that Clemens smokes'. Furthermore, this token thought causes Betty to utter 'Twain smokes'. Therefore, (21) is true. (Compare these remarks about (21) with my previous remarks about (18).)

Of course, someone could think that (20) is true and yet think that (21) is false. This would be a mistake, according to direct reference, which could come about in two interestingly different ways. Suppose that Carla, like Betty, is inclined to dissent from 'Twain is Clemens'. Suppose that Carla believes that Betty is inclined to utter 'Twain smokes', but also incorrectly believes that Betty is inclined to dissent from, or suspend judgment on, 'Clemens smokes'. Then Carla may think that 'Betty believes that Twain smokes' is true and 'Betty believes that Clemens smokes' is false, and so think that (20) is true and (21) is false. Carla would be making a mistake. She could make this sort of mistake as long as she is inclined to dissent from 'Twain is Clemens', even if she is thoroughly convinced that all attitude ascriptions are Shakespearean.

Another sort of mistake can afflict those who (implicitly) doubt that all attitude ascriptions are Shakespearean and (implicitly) reject direct reference. Suppose that Diana is disposed to assent to the identity 'Twain is Clemens'. But she (like many ordinary speakers) thinks that it is possible for 'Betty believes that Twain smokes' to be true though 'Betty believes that Clemens smokes' is false. Then Diana may think that the former is true and the latter is false, and so she may affirm (20) and deny (21). Or she may think 'Betty believes that Clemens smokes' is true, but that Betty's belief that Clemens smokes did not cause her to utter 'Twain smokes', and so she may affirm (20) and deny (21). If direct-reference theory is true, then Diana is making a mistake about attitude ascriptions. But direct-reference theorists have offered reasonable explanations of how she could make such a mistake.

I conclude that (20) is true in the above scenario, even assuming that direct-reference theory is true. Let's now turn to the question of whether (20), and the attitude ascription that appears in (20), explain Betty's utterance in the above scenario, assuming that direct reference is true.

10.7.5  Direct Reference and Explanation Given everything I have said above, it is hard to resist the conclusion that, if (20) is true, then it, and the attitude ascription inside it, explain Betty’s utterance. Moreover, this conclusion follows from Railton’s and Lewis’s theories of explanation. If (20) is true, and my analysis of ‘because’ sentences is correct, then the attitude ascription in (20) both describes a cause of Betty’s utterance, and gives some information contained in an ideal DN explanation of her utterance. But a sentence explains an

event if it gives some information about a cause of that event (Lewis), or provides some information contained in an ideal DN explanation of that event (Railton). So, (20), and the attitude ascription in it, explain Betty’s utterance. Parallel morals could be drawn from variations on Betty’s case. Consider an alternative scenario in which Betty assents to ‘Twain smokes’ but (unlike the above scenario) she dissents from both ‘Clemens smokes’ and ‘Twain is Clemens’. Nevertheless, both (20) and (21) are true in this alternative scenario, if my analysis of ‘because’-sentences is correct, and even if direct-reference theory is true. Furthermore, both (20) and (21), and the attitude ascriptions that appear in them, explain Betty’s utterance of ‘Twain smokes’. I conclude that attitude ascriptions that are Shakespearean on their conventional meanings can explain behavior. So, if direct-reference theory is true, then attitude ascriptions, on their conventional meanings, explain behavior.
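For readers who want the inference laid out baldly, here is a schematic rendering of the sufficiency claim relied on above; the notation is mine, not Braun’s, and it is only a sketch of how the Lewis and Railton criteria are being used:

\[
\big[\,\mathrm{Info}_{\mathrm{cause}}(s,e)\ \lor\ \mathrm{Info}_{\mathrm{DN}}(s,e)\,\big]\ \Rightarrow\ \mathrm{Explains}(s,e)
\]

Here $s$ is a sentence, $e$ an event, $\mathrm{Info}_{\mathrm{cause}}(s,e)$ says that $s$ gives some information about the causal history of $e$ (Lewis), and $\mathrm{Info}_{\mathrm{DN}}(s,e)$ says that $s$ gives some information contained in an ideal DN explanation of $e$ (Railton). Since the ascription in (20) describes a cause of Betty’s utterance, it satisfies the left disjunct, and by the reasoning just given it satisfies the right disjunct as well, so either criterion yields the conclusion.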

10.8  Devitt’s Reply

I discussed arguments-from-explanation against direct reference in Braun 2001a. I claimed that some of these arguments resembled arguments that Devitt gave against direct reference in Coming. I criticized those arguments, and I argued (in much the way I did above) that attitude ascriptions would explain behavior even if direct reference were true. Devitt has replied as follows.

But Braun has missed the main point of my argument. That point is not that if DR were true, the ascriptions could not explain behavior. The point is that ascribing non-Russellian propositions involving causal modes of reference does, as a matter of fact, explain behavior. This point is quite compatible with the claim that ascribing a singular Russellian proposition can contribute to the explanation in the way Braun illustrates. Indeed, in effect, I explicitly endorse that claim (1996: 153). But ascribing modes of reference provides a more complete explanation. As Braun states nicely: “Clearly, the way in which an agent believes or desires a [singular] proposition can make a difference to that agent’s behavior” (p. 257). So that way, involving in the case in question a name’s causal mode of reference, is the meaning. There is no principled basis for Braun’s view that the referent is theoretically interesting in a way that the mode of reference is not. (Devitt 2012: 78, Devitt’s italics)

I will now respond to Devitt’s reply. Devitt says in the above passage that attitude ascriptions that do not ascribe modes of reference can contribute to the explanation of behavior. He says that he, in effect, explicitly made this point on p. 153 of Coming. But strictly speaking, what Devitt says in that passage is that transparent attitude ascriptions can explain behavior. He also adds the following important qualification: “The transparent [ascriptions] explain behavior on the presupposition that they follow from good opaque ones” (153, Devitt’s italics). This qualification has important consequences for direct reference, if we interpret it so that it directly engages with direct reference. As

I mentioned before, Devitt’s use of ‘opaque’ and ‘transparent’ makes the dialectic with direct reference difficult. So let us take Devitt to be saying here that Shakespearean attitude ascriptions explain behavior only if they follow from true non-Shakespearean attitude ascriptions. Now if direct reference were true, then all attitude ascriptions would be Shakespearean. So, there would be no non-Shakespearean attitude ascriptions for Shakespearean attitude ascriptions to follow from. So, Devitt’s claims about explanation on p. 153 of Coming seem to entail that if direct reference were true, then attitude ascriptions would not explain behavior.10 I criticized arguments that make that sort of claim in my 2001a. One more point of clarification: Devitt says that my arguments illustrate how (direct-reference-style) attitude ascriptions that do not ascribe modes of reference can contribute to the explanation of behavior. He says that my work illustrates how this is the case. But Devitt’s description of my work is misleading. When Devitt says that attitude ascriptions that do not ascribe modes of reference can contribute to the explanation of behavior, he implies that they merely contribute, and so do not, by themselves, (really) explain behavior. But I argued that direct-reference-style attitude ascriptions really can, all by themselves, explain behavior (full stop, no qualifications). Devitt’s reply seemingly offers a new argument-from-explanation against direct reference. (More cautiously: his reply offers an argument that I did not detect in Coming, and so did not discuss in my 2001a.) The seemingly new argument appears in the following passage. But ascribing modes of reference provides a more complete explanation. As Braun states nicely: “Clearly, the way in which an agent believes or desires a [singular] proposition can make a difference to that agent’s behavior” (p. 257). So that way, involving in the case in question a name’s causal mode of reference, is the meaning. (Devitt 2012: 78, Devitt’s italics)

10  I am here assuming that when Devitt speaks of one attitude ascription following from another, he is speaking of one attitude ascription in English following from another attitude ascription in English. If Devitt is instead speaking of one ascription following from another in merely possible languages, then his remarks imply that Shakespearean attitude ascriptions of ordinary English explain behavior whenever non-Shakespearean ascriptions in a (merely possible) language do. Suppose there are merely possible languages that contain both direct-reference-style Shakespearean ascriptions and non-Shakespearean ascriptions that ascribe modes of reference. In such a language, every true Shakespearean ascription follows from (is necessitated by) some true non-Shakespearean ascription. But if direct reference is true, then attitude ascriptions of ordinary English are synonymous with the Shakespearean attitude ascriptions of the possible language we are considering. But if this is so, then ordinary English attitude ascriptions always explain behavior, even if direct reference is true of ordinary English.

(22a–f) is my initial attempt to make Devitt’s argument more explicit.

(22a) The modes of reference involved in an agent’s thought can make a difference to how that agent behaves.
(22b) If (22a), then attitude ascriptions that ascribe modes of reference provide a more complete explanation of an agent’s behavior than do (direct-reference-style) attitude ascriptions that do not ascribe modes of reference.
(22c) So, attitude ascriptions that ascribe modes of reference provide a more complete explanation of an agent’s behavior than do (direct-reference-style) attitude ascriptions that do not ascribe modes of reference.
(22d) If (22c), then modes of reference are the meanings of thoughts.
(22e) If modes of reference are the meanings of thoughts, then direct reference is false.
(22f) So, direct reference is false.

Let’s examine this argument line by line.11 I take line (22a) to be true. As Devitt rightly points out, I more or less assert what (22a) says in the sentence that Devitt quotes from my 2001a.

Line (22b) uses the expression “more complete explanation.” I am not entirely sure what Devitt means by that phrase. But I suspect that when he speaks of one explanation being more complete than another, he assumes that some explanations are complete while others are not. I also suspect that the complete explanations he has in mind are classic DN explanations. Perhaps Devitt thinks that if direct reference were true, then no putative explanation of behavior that contained only direct-reference attitude ascriptions could constitute a complete, classic DN explanation of that behavior. Any such putative explanation would be, at best, partial. He may think that, by contrast, if his theory of attitude ascriptions were true, then complete, classic DN explanations of behavior could be formulated using ordinary English attitude ascriptions. In that sense, explanations that ascribe modes are more complete than explanations that do not.
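For reference, the bare logical form of (22) can be displayed as follows. The schematic letters are my own shorthand, not Braun’s or Devitt’s; the point is only that the argument is valid by repeated modus ponens, so the action is all in the premises just discussed:

\[
\begin{aligned}
&A && \text{(22a)}\\
&A \rightarrow C && \text{(22b)}\\
&C && \text{(22c), from (22a) and (22b)}\\
&C \rightarrow D && \text{(22d)}\\
&D \rightarrow \neg\mathrm{DR} && \text{(22e)}\\
&\neg\mathrm{DR} && \text{(22f), from (22c), (22d), and (22e)}
\end{aligned}
\]

Here $A$ abbreviates the claim that modes of reference can make a behavioral difference, $C$ the comparative claim about more complete explanations, $D$ the claim that modes of reference are the meanings of thoughts, and $\mathrm{DR}$ direct reference.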

11  Devitt presents a related argument, in a semi-formal format, on p. 75 of Devitt 2012.
1. A name in a t[hat]-clause of an apparently opaque attitude ascription conveys information about a mode of referring to the name’s bearer.
2. A name’s mode of referring to its bearer is causal not descriptive.
3. Apparently opaque attitude ascriptions explain behavior in virtue of what they convey.
4. So, the causal mode [explains behavior and hence] is the name’s meaning.
My explication of the argument that Devitt gives on p. 78 provides some justification for line (1) above, based on his remarks about explanation on p. 78. My explication also extends the earlier argument so that it explicitly draws a conclusion about direct reference. I think this is consistent with Devitt’s intentions. But even if it is not, the argument I present in the main text is Devittian in spirit and worth considering seriously.

I have already given reasons to reject DN theories of explanation. But let’s suppose, for the sake of argument, that DN theories are correct. On that assumption, it seems that, even if direct reference is true, it is possible to formulate classic DN arguments that (a) contain ordinary attitude ascriptions and that (b) DN theories would count as complete explanations of behavior. Consider, for instance, argument (23).

(23a) Ralph wants Ortcutt to be tried for espionage.
(23b) Ralph believes that if he handcuffs Ortcutt, then Ortcutt will be tried for espionage.
(23c) For all x, if x wants Ortcutt to be tried for espionage, and x believes that [if x handcuffs Ortcutt, then Ortcutt will be tried for espionage], then, ceteris paribus, x handcuffs Ortcutt.
(23d) So, ceteris paribus, Ralph handcuffs Ortcutt.

Argument (23) has the form of a complete DN explanation of Ralph’s behavior, even if direct reference is true. Perhaps, however, Devitt thinks that if direct reference were true, then there would be too many exceptions to generalization (23c) for it to be true. But if generalization (23c) were false, then (23) would fail to be a complete explanation of Ralph’s behavior. (Page 153 of Coming contains remarks that suggest such a view.) Devitt might think that if his own theory were true, then there would be fewer exceptions to (23c), on its opaque (non-Shakespearean) reading, and so (23c) would be true, on its opaque reading. Hence, argument (23) would be a complete DN explanation if his theory were true (when the attitude ascriptions are construed opaquely or non-Shakespeareanly), whereas it would be (at best) an incomplete and partial DN explanation, if direct reference were true.

I have, however, argued that ceteris paribus psychological generalizations, such as (23c), would be true even if direct reference were true (see Braun 2000). But rather than repeat those arguments here, while continuing to assume a DN theory of explanation, I shall try to respond to argument (22) while bypassing issues about psychological generalizations and DN explanations. I will do so by recasting argument (22) in such a way that it does not rely on the notion of a complete explanation or the notion of a more complete explanation.12 My recast version of the argument will instead rely on the notion of one explanation’s providing more explanatory information than another. This is a notion that can be explicated without assuming a DN theory of explanation. It should also be acceptable to Devitt. I shall present my proposed revisions to (22) bit by bit.
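As a quick gloss on why (23) fits the DN mold, here is its covering-law skeleton. The symbols are my own abbreviations, not Braun’s, and the ceteris paribus rider is simply carried along unanalyzed:

\[
\begin{aligned}
&\forall x\,\big[(Wx \land Bx) \rightarrow_{\mathrm{cp}} Hx\big] && \text{(23c), the covering generalization}\\
&Wr,\quad Br && \text{(23a), (23b), the particular conditions}\\
&\therefore\ Hr\ \ (\text{ceteris paribus}) && \text{(23d), the explanandum}
\end{aligned}
\]

Here $r$ is Ralph, $Wx$ says that $x$ wants Ortcutt to be tried for espionage, $Bx$ that $x$ believes that if $x$ handcuffs Ortcutt then Ortcutt will be tried for espionage, and $Hx$ that $x$ handcuffs Ortcutt. Whether this counts as a complete DN explanation then turns entirely on whether (23c) is true, which is just where Braun locates the dispute with Devitt above.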

12  I mention ideal explanations in the article that Devitt quotes. But my notion of an ideal explanation is Railton’s, which is very different from the ideal explanations of classic DN theory. See Sect. 10.7.2 above: an ideal Railtonian explanation of an event is a sequence of propositions that describes the entire causal history of that event, along with all of the covering laws involved in that causal history. No human being has ever provided an ideal Railtonian explanation. Railton’s notion of an ideal explanation is also the only notion of a complete explanation that one can derive from his theory.

(22a) does not need any revision. (22b) does, for the claim that attitude ascriptions that ascribed modes of reference would provide more complete explanations of behavior than Shakespearean attitude ascriptions is problematic. But there is little doubt that attitude ascriptions that ascribed modes of reference would provide more explanatory information than Shakespearean attitude ascriptions. That is, such attitude ascriptions would provide more information about the causes of an agent’s behavior than direct-reference-style ascriptions. Suppose, for example, that Betty utters ‘Twain smokes’, and consider the following technical attitude ascription (which is not an attitude ascription of ordinary English): ‘Betty has a thought (i) one of whose meanings is the singular proposition that Twain smokes, and (ii) one of whose parts refers to Twain under the “Twain” mode’. This technical ascription provides some information about the causes of Betty’s uttering ‘Twain smokes’. So, it provides some explanatory information about her behavior, and therefore explains her behavior. (I discussed such ascriptions in Braun 2001a: 267–268, and pointed out that, by my lights, they explain behavior.) This technical ascription ascribes both a singular propositional content, and a mode of reference, to a thought. Modes of reference can make a difference to how an agent behaves. So, this technical ascription provides more explanatory information than an ascription that merely describes the singular propositional content of Betty’s thought (as ordinary ascriptions would if direct reference were true). So, although I dispute (22b) itself, I accept a modified version of (22b), namely (22b*).

(22b*) If (22a), then attitude ascriptions that ascribe modes of reference provide more explanatory information about an agent’s behavior than do (direct-reference-style) attitude ascriptions that do not ascribe modes of reference.

I believe that this premise serves Devitt’s argumentative purposes as well as (22b), but without relying on DN theories of explanation and the problematic notion of one explanation’s being more complete than another. Now that we have modified (22b), we need modified versions of (22c) and (22d) as well, so as to preserve validity of argument (22).

(22c*) So, attitude ascriptions that ascribe modes of reference provide more explanatory information about an agent’s behavior than do direct-reference-style attitude ascriptions that do not ascribe modes of reference.
(22d*) If (22c*), then modes of reference are the meanings of thoughts.

Let’s now consider (22d*) more closely. Notice that it, and the original (22d), speak of the meaning of a thought. I formulated (22d) and (22d*) in this way because Devitt writes “So that way, involving in the case in question a name’s causal mode of reference, is the meaning” (my italics). But unfortunately, (22d) and (22d*) are inconsistent with Devitt’s views about meaning. Devitt holds that every thought has many meanings. For example, every thought concerning Twain has a part that has a purely referential meaning (namely, referring to Twain) and also a meaning that

involves a mode of reference (e.g., referring to Twain under the ‘Twain’ mode). So, on Devitt’s view there is no such thing as the meaning of a thought.

Can (22d*) be revised so that it is consistent with Devitt’s views and yet helps him to argue against direct reference? Direct reference is a theory of conventional meaning, particularly the conventional meanings of attitude ascriptions. So, Devitt’s purposes require him to draw a conclusion concerning the meanings that attitude ascriptions ascribe to thoughts in virtue of the conventional meanings of those ascriptions. So, (22d**) would suit his argumentative purposes.

(22d**) If (22c*), then modes of reference are among the meanings that attitude ascriptions ascribe to thoughts in virtue of the conventional meanings of those ascriptions.

To complete the modified argument, we also need a revised version of (22e). Adding it, we obtain the modified version of argument (22) below.

(22a) The modes of reference involved in an agent’s thought can make a difference to how that agent behaves.
(22b*) If (22a), then attitude ascriptions that ascribe modes of reference provide more explanatory information about an agent’s behavior than do (direct-reference-style) attitude ascriptions that do not ascribe modes of reference.
(22c*) So, attitude ascriptions that ascribe modes of reference provide more explanatory information about an agent’s behavior than do (direct-reference-style) attitude ascriptions that do not ascribe modes of reference.
(22d**) If (22c*), then modes of reference are among the meanings that attitude ascriptions ascribe to thoughts in virtue of the conventional meanings of those ascriptions.
(22e*) If modes of reference are among the meanings that attitude ascriptions ascribe to thoughts in virtue of the conventional meanings of those ascriptions, then direct reference is false.
(22f) So, direct reference is false.

I deny (22d**). That is, I admit that attitude ascriptions that ascribe modes of reference provide more explanatory information about an agent’s behavior than do (direct-reference-style) attitude ascriptions that do not. But I deny that modes of reference are among the meanings that attitude ascriptions ascribe to thoughts in virtue of the conventional meanings of those ascriptions. I maintain that ordinary attitude ascriptions never provide this information about modes of reference, in virtue of their conventional meanings. Of course, utterances of ordinary attitude ascriptions can pragmatically convey information about modes of reference. And non-standard, technical attitude ascriptions can provide information about modes of reference, on their conventional meanings. But these facts about pragmatics and

non-standard technical ascriptions should not lead us to conclude that ordinary ascriptions also ascribe modes of reference, in virtue of their conventional meanings.

Consider an analogous case. The sentence ‘Ortcutt fell because Ortcutt slipped on a banana peel’ provides some explanatory information about Ortcutt’s fall. But it fails to describe some events and states that make a causal difference to Ortcutt’s fall, such as the coefficient of friction of the sidewalk beneath the banana peel that he stepped on, the positions of Ortcutt’s limbs as he slipped, and the state of Ortcutt’s vestibular system at that time. A more technical sentence that also provided this latter information would provide more explanatory information about his fall than one that does not. And perhaps, given the right (unusual) contextual setup, an utterance of the ordinary sentence ‘Ortcutt fell because Ortcutt slipped on a banana peel’ could pragmatically convey information about the sidewalk’s coefficient of friction, Ortcutt’s vestibular system, and so on. But, of course, none of this should lead us to conclude that the ordinary sentence provides this extra explanatory information in virtue of one of its conventional meanings.

A second analogy: Consider the technical sentence ‘Betty uttered “Twain smokes” because (a) she believed that Twain smokes and (b) neural events n1 and n2 occurred in her’. Suppose that neural events n1 and n2 are not identical with her Twain-thought, but are among the causes of her utterance. Suppose, for instance, that her Twain-thought caused n1 and n2, and those events causally contributed to the way in which her tongue moved so as to help produce her utterance. If so, then the complex, technical ascription that mentions n1 and n2 provides more explanatory information about Betty’s utterance than a sentence that does not mention them. But we cannot reasonably conclude, from this point alone, that the ordinary attitude ascription ‘Betty believed that Twain smokes’ provides information about n1 and n2, in virtue of its conventional meaning.

A parallel point holds for attitude ascriptions and modes of reference. Technical attitude ascriptions that explicitly ascribed modes of reference would provide more explanatory information about behavior than direct-reference-style Shakespearean ascriptions. But this does not tell us whether (or not) ordinary attitude ascriptions, on any of their conventional meanings, provide this information about modes of reference. I claim they do not. Ordinary attitude ascriptions, on their conventional meanings, provide exactly the information that direct reference says they do. I conclude that (22d**) of the modified version of Devitt’s (seeming) argument against direct reference is false.

Devitt’s response to me ends by saying that I have no principled basis for thinking “that the referent is theoretically interesting in a way that the mode of reference is not.” I admit that modes of reference are theoretically interesting, in the following ways: modes of reference are real properties, some parts of thinking-events have those properties, they are causally relevant to behavior, and mentioning them (e.g., in technical ascriptions) can add explanatory information to explanations of behavior. But I also maintain that there is a principled and theoretically interesting distinction between conventional and non-conventional meaning. (Devitt agrees, as far as I can tell.) We saw some evidence, in Sect. 10.5 above, for thinking that ordinary

attitude ascriptions, on their conventional meanings, do not ascribe modes of reference. If this is right, then referents are theoretically interesting in one way that modes of reference are not: attitude ascriptions ascribe referents to parts of thoughts, in virtue of those ascriptions’ conventional meanings, but do not ascribe modes of reference to parts of thoughts, in virtue of those ascriptions’ conventional meanings.

10.9  Conclusion

Direct reference is a theory of conventional meaning. There is prima facie evidence against it, coming largely from speakers’ resistance to substituting co-referring names in attitude ascriptions. But there is also prima facie evidence in favor of it, coming partly from speakers’ use of proper names to describe the attitudes of agents who speak languages foreign to them. (There is also favorable evidence from the use of indexicals in attitude ascriptions, which I have not discussed here, and favorable evidence from quantification into attitude ascriptions, which I briefly mentioned in note 6 above.) So, the evidential situation is complex. But direct-reference theorists have supplemented their theories of conventional meaning with theories of assertion, implicature, and explicable error. They have argued, plausibly in my view, that their packages of theories explain the full range of evidence. Devitt’s initial theory does not explain the full range of evidence. When his initial theory is revised to explain the full range of evidence, in ways suggested by his own remarks, we end up with a version of Devitt’s theory that strongly resembles direct reference. Moreover, Devitt’s arguments against direct reference do not withstand close scrutiny. That is why I am still for direct reference, and why I think Devitt and his readers should be for it as well.13

References

Braun, D. 1993. Empty names. Noûs 27: 449–469.
———. 1998. Understanding belief reports. Philosophical Review 107: 555–595.
———. 2000. Russellianism and psychological generalizations. Noûs 34: 203–236.
———. 2001a. Russellianism and explanation. Philosophical Perspectives 15: 253–289.
———. 2001b. Russellianism and prediction. Philosophical Studies 105: 59–105.
———. 2006. Illogical, but rational. Noûs 40: 376–379.

13  Thanks to Andrea Bianchi for creating and editing this volume, and for inviting me to contribute to it. Thanks to Panu Raatikainen for help with translating from English to Finnish. Thanks, finally, to Michael Devitt for his comments on this chapter.

———. 2015. Wondering about witches. In Fictional objects, ed. S. Brock and A. Everett, 71–113. New York: Oxford University Press.
Carnap, R. 1947. Meaning and necessity. Chicago: University of Chicago Press.
Church, A. 1950. On Carnap’s analysis of statements of assertion and belief. Analysis 10: 97–99.
Devitt, M. 1989. Against direct reference. Midwest Studies in Philosophy 14: 206–240.
———. 1996. Coming to our senses: A naturalistic program for semantic localism. Cambridge: Cambridge University Press.
———. 1997. Meanings and psychology: A response to Mark Richard. Noûs 31: 115–131.
———. 2012. Still against direct reference. In Prospects for meaning, ed. R. Schantz, 61–84. Berlin/Boston: DeGruyter.
———. 2015. Should proper names still seem so problematic? In On reference, ed. A. Bianchi, 108–143. Oxford: Oxford University Press.
Geach, P. 1962. Reference and generality. Ithaca: Cornell University Press.
Grice, P. 1989. Meaning. In P. Grice, Studies in the way of words, 213–223. Cambridge, MA: Harvard University Press. First published in 1958 in Philosophical Review 66: 377–388.
Hanks, P. 2015. Propositional content. New York: Oxford University Press.
Kaplan, D. 1968. Quantifying in. Synthese 19: 178–214.
———. 1989. Demonstratives. In Themes from Kaplan, ed. J. Almog, J. Perry, and H. Wettstein, 481–563. New York: Oxford University Press.
Kripke, S. 2011. A puzzle about belief. In S. Kripke, Philosophical troubles: Collected papers, vol. 1, 125–161. New York: Oxford University Press. First published in Meaning and use, ed. A. Margalit, 239–283. Dordrecht: Reidel, 1979.
Lewis, D. 1986. Causal explanation. In D. Lewis, Philosophical papers, vol. II, 214–240. Oxford: Oxford University Press.
Railton, P. 1981. Probability, explanation, and information. Synthese 48: 233–256.
Richard, M. 1990. Propositional attitudes: An essay on thoughts and how we ascribe them. New York: Cambridge University Press.
———. 1997. What does commonsense psychology tell us about meaning? Noûs 31: 87–114.
Salmon, N. 1986. Frege’s puzzle. Cambridge, MA: MIT Press.
———. 1989. Illogical belief. Philosophical Perspectives 3: 243–285. Reprinted in N. Salmon, Content, cognition, and communication: Philosophical papers, volume 2, 193–223. New York: Oxford University Press.
———. 2003. Naming, necessity, and beyond. Mind 107: 729–749.
———. 2006. The resilience of illogical belief. Noûs 40: 369–375. Reprinted in N. Salmon, Content, cognition, and communication: Philosophical papers, volume 2, 224–229. New York: Oxford University Press.
Soames, S. 2002. Beyond rigidity: The unfinished semantic agenda of Naming and necessity. New York: Oxford University Press.
———. 2015. Rethinking language, mind, and meaning. Princeton: Princeton University Press.
Woodward, J. 2017. Scientific explanation. In The Stanford encyclopedia of philosophy, ed. E.N. Zalta. https://plato.stanford.edu/archives/spr2017/entries/scientific-explanation/.

Chapter 11

Naming and Non-necessity

Nathan Salmon

Abstract  Kripke’s examples of allegedly contingent a priori sentences include ‘Stick S is exactly one meter long’, where the reference of ‘meter’ is fixed by the description ‘the length of stick S’. In response to skepticism concerning apriority Kripke replaced the meter sentence with a more sophisticated variant, arguing that the modified example is more immune to such skepticism. The case for apriority is examined. A distinction is drawn between apriority and a broader notion, “qua-priority,” of a truth whose epistemic justification is dependent on no experience other than that required to justify belief of the deliverances of pure semantics. It is argued that Kripke’s examples are neither a priori nor qua-priori.

Keywords  Contingent a priori · Jack the Ripper · Saul Kripke · Meter · Neptune

I am grateful to the participants in my seminars over the years on the topics of the present essay. I owe thanks also to Teresa Robertson for her comments.

11.1   The Examples

Saul Kripke’s Naming and Necessity (N&N) stands as one of the greatest philosophical works of the twentieth century. Perhaps the most startling claim Kripke makes in N&N is that certain sentences that are (semantically) true as a consequence of the way a name’s reference was fixed by description are metaphysically contingent yet knowable a priori. Kripke does not mean by this that it is a contingent meta-truth, one knowable a priori, that the sentences in question are true. To say that a given sentence is necessary, or contingent, is to attribute something modal not about the semantic fact that the sentence is true but about the very proposition expressed by the sentence as its semantic content. A true sentence is necessary (i.e., semantically necessary) insofar as the truth it semantically expresses is itself metaphysically necessary, and is (semantically) contingent insofar as the truth it semantically expresses is metaphysically contingent. Kripke eschews propositions in N&N

and elsewhere throughout his philosophical work. Consequently, he might prefer to say that a true sentence is necessary insofar as it is true with respect to all metaphysically possible worlds, contingent insofar as it is true but not necessary. A true sentence is (semantically) a priori insofar as the truth it semantically expresses is knowable with epistemic justification that is independent of experience, and is (semantically) a posteriori insofar as the truth it semantically expresses is knowable only with epistemic justification that is dependent on experience.

The phrase ‘dependent on experience’ is to be understood in a very particular way. The paradigm of a knowable fact whose epistemic justification is independent of experience is one described by a mathematical theorem, like the fact that 17 + 23 = 40. Perhaps some experience is inevitably involved in acquiring the concept of the sum of a pair of numbers. If so, that experience is irrelevant to the epistemic justification. Also, calculating the sum of 17 and 23 involves some experience, at least typically. Perhaps any human means of arriving at the sum essentially involves experience of one sort or another. But if the calculation is performed correctly, the experience that accompanies the calculation plays no justificatory role. Rather, the justification lies entirely in the calculation, i.e., in the proof itself, which is a deduction ultimately from intuitive “first truths.”1 This makes the equation ‘17 + 23 = 40’ a priori. By contrast, if someone is locked in a room with an elephant, the subject’s belief that an elephant is uncomfortably nearby is justified, at least in part, by visual experience of the elephant. The belief cannot be justified instead by something like a mathematical proof (a deduction from intuitive first truths). The sentence ‘An elephant is nearby’ is thus a posteriori. The sentence ‘I am having a visual experience as of an elephant nearby’, which also is not subject to mathematical-like proof, is also a posteriori. (Consider what would go into a reductio argument for the visual experience itself.) In some cases, the justification that is dependent on experience is precisely the absence of a relevant experience. Thus if someone is locked in a room without any elephants, and is not intoxicated or hallucinating, the fact described by ‘I am not having a visual experience as of an elephant standing in front of me’ should be classified as a posteriori. The relevant justification in this case is not any mathematical-like proof. It is instead the lack of visual elephant impressions. In this sense, it is dependent on experience.

Kripke’s examples arise from the introduction of a name or term into the language (or idiolect) through fixing its reference, i.e., through stipulating its designatum, by means of a definite description. He focuses on three examples: (i) Suppose that Le Verrier fixed the reference of ‘Neptune’ by means of (the French for) a description of the form ‘the planet causing such-and-such perturbations in the orbit of Uranus’;2 (ii) suppose that the measurement term ‘meter’ – more accurately, the length term ‘one meter’ – had its reference fixed by means of a description ‘the

1  Here by ‘intuitive’ I mean knowledge that comes from a non-sensory cognitive faculty, like the mathematical faculty (assuming there is one) through which mathematicians gain knowledge of the Peano axioms for arithmetic.
2  To make this as pure a case as possible we suppose that Le Verrier uttered French words to the following effect: Let ‘Neptune’ be a name for the planet causing such-and-such perturbations in



length of stick S at time t0’;3 and (iii) suppose that the police investigating the Whitechapel murders fixed the reference of the name ‘Jack the Ripper’ by means of the description of the form ‘the person who committed such-and-such murders, or most of them anyway’. Three further examples of the same alleged phenomenon have also been widely discussed in the relevant literature: (iv) Suppose that the reference of ‘Shorty’ is fixed by the description ‘the world’s shortest spy’ (David Kaplan and Robert Sleigh); (v) suppose that the reference of ‘Newman-1’ is fixed by the description ‘the first child to be born in the 22nd Century’ (Kaplan); and (vi) suppose the reference of ‘Julius’ is fixed by the description ‘the inventor of the zipper’ (Gareth Evans).

Kripke says about (i) and (ii) that the following sentences are (semantically) contingent yet a priori for the reference fixer (at the time of fixing), the a-priority being a consequence of how reference is fixed:

(1) Stick S, if it exists at t0, is exactly one meter long at t0 (Kripke 1980: 54–57).
(2) If there is a unique planet causing perturbations in the orbit of Uranus, then Neptune is causing perturbations in the orbit of Uranus (Kripke 1980: 79 n. 33).

Though Kripke does not make the analogous claim in connection with (iii), what he says about (i) taken together with his remarks about (iii) (Kripke 1980: 79, 94) arguably commit him to the thesis that the following sentence is likewise contingent a priori:

(3) If anyone singlehandedly committed such-and-such murders, then Jack the Ripper did.4

About (1) Kripke says, “The case of fixing the reference of ‘one meter’ is a very clear example in which someone, just because he fixed the reference in this way, can in some sense know a priori that the length of this stick is a meter without regarding it as a necessary truth” (1980: 63; cf. pp. 14–15). It is an indication of Kripke’s genius that he also explicitly recognizes that his view that (1) is a priori is, if not mistaken, then at least quite implausible. He writes:

the orbit of Uranus, if exactly one planet is causing those perturbations; and let ‘Neptune’ name nothing otherwise. Similar expansions should be supplied for the other examples.
3  More accurately still, we suppose that the designation of the measurement term ‘meter’ is fixed by means of the description ‘the function that assigns to any real number n, the length that is exactly n times the length of S at t0’. This simultaneously fixes the reference of ‘one meter’, ‘two meters’, ‘0.5 meters’, ‘17 meters’, etc.
4  Kripke does not discuss the analogs of (1) and (2) for any of (iii)–(vi). One might assume that he would deem the analogs also contingent a priori, but one does so at the risk of misinterpretation. The cases of (iii) and (vi) are highly analogous to (i), all three of which invoke verbs of causation (‘cause’, ‘murder’, ‘invent’). By contrast, (iv) and (v) invoke grammatical superlatives (‘shortest’, ‘first’) while (ii) stands apart from all the rest. In work published subsequent to N&N, Kripke raises considerations that count heavily against extending the mechanism to superlative cases to generate purported contingent a priori truths through stipulating the designatum of a name by description. See Kripke 2011.



If someone fixes a meter as ‘the length of stick S at t0’, then in some sense he knows a priori that the length of stick S at t0 is one meter, even though he uses this statement to express a contingent truth. But, merely by fixing a system of measurement, has he thereby learned some (contingent) information about the world, some new fact that he did not know before? It seems plausible that in some sense he did not, even though it is undeniably a contingent fact that S is one meter long. So there may be a case for reformulating the thesis that everything a priori is necessary so as to save it from this type of counterexample. (Kripke 1980: 63n)

For many of us it has seemed that no reformulation is needed. Even as formulated above, the thesis that everything a priori is necessary is already immune to Kripke’s alleged counterexamples.5 However, Kripke goes on to say, Since I will not attempt such a reformulation, I shall consistently use the term ‘a priori’ in the text so as to make statements whose truth follows from a reference-fixing ‘definition’ a priori. (Kripke 1980: 63-64n) 5  There is another kind of sentence for which the thesis is vulnerable, e.g., ‘If Kripke is actually a philosopher, then Kripke is a philosopher’ and ‘If Kripke is a plumber, then Kripke is actually a plumber’. Each of these conditionals, although evidently (semantically) a priori, is false with respect to possible worlds in which Kripke is a plumber instead of a philosopher. To the best of my knowledge, examples like these were first noted by Kaplan in 1971 or 1973 (see Kaplan 1979: 95); and later in his 1977 masterpiece (see Kaplan 1989: 539 n. 65). Cf. Salmon 1981: 77–78; 1986: 141–142, 180 n. 19; and 1987a. The a-priority of the examples depends on our being de re connected to the actual world in conceiving it metaphysically as this possible world [the only world that is realized], or this possible world [the only world that obtains]. Notice that conceiving a world in this manner is a way of knowing what world is in question. A case can be made that ‘Saul Kripke is actually a philosopher’ is itself (semantically) a priori, provided that it is possible to be de re connected to the actual world by conceiving of it compositionally, rather than metaphysically, as the only possible world in which: p, p′, p″, …, including sufficiently many propositions to pin down the actual world. Notice, however, that this is arguably not a way of knowing what world is in question (viz., the one that is realized/obtains). Furthermore, a compositional conception of the actual world is not a possibility for knowers with finite or otherwise reasonably limited comprehension (including all humans). Given that the first example mentioned in the preceding paragraph is also a priori, it appears to follow that their consequence ‘Kripke is a philosopher’ is a priori as well, provided it is possible to be de re connected to the actual world in conceiving it compositionally. It does not actually follow, however, since if there are such different ways of being de re connected to the actual world, a-priority need not be closed under logical consequence. Cf. Soames 2007: 261–263. Soames uses the label ‘indexical’ for the conception of the actual world that accompanies ‘actually’, and labels the potential alternative, compositional conception ‘non-indexical’. I believe that the relevant distinction is not correctly drawn in these terms. While the modal adverb ‘actually’ (in the relevant sense) is indeed indexical, the metaphysical conception of the actual world that accompanies it is no less descriptive than is a compositional conception. (To suppose that the actual world can be demonstrated seems to presuppose a David-Lewis-like misconception of possible worlds as universes, as opposed to abstract maximal scenarios or states of the universe. Perhaps one can gesture toward, or otherwise demonstrate, the universe, but how would one demonstrate the maximal scenario that obtains or the total way that things are, in order to single it out from all the other maximal scenarios or total ways for things to be?)



We shall return to the question of exactly how Kripke uses the term ‘a priori’.

11.2   A Purported Proof

Kripke has persuaded the angels and all right-minded philosophers that each of (1)–(3) is indeed contingent.6 The alleged a-priority, on the other hand, remains a sticking point. Early on, following trenchant observations made by Alvin Plantinga, Keith Donnellan pointed out that even if the manner in which the reference was fixed produces the result that it is knowable a priori by the reference fixer that the phrase ‘one meter’ designates the length of S at t0 (if S exists) so that (1) is true, or that pseudonym ‘Jack the Ripper’ designates the person who committed the Whitechapel murders (if any person did so singlehandedly) so that (3) is true, or that the name ‘Neptune’ designates the planet that is perturbing the orbit of Uranus (if any single planet is) so that (2) is true, it does not straightforwardly follow that any of (1)–(3) is itself a priori. Analogously, it is a contingent a posteriori fact that ‘2 + 3 = 5’ is true, but it does not follow that the equation itself is either contingent or a posteriori. Contrary to N&N, it is evidently a posteriori that the length of S at t0 is one meter, and a posteriori that Neptune causes perturbations.7 For as Donnellan argued, the knowledge that S (if it exists) is exactly one meter long at t0, is de re knowledge of the length, one meter, that S is exactly that long, no longer and no shorter, at t0; and the knowledge that Jack the Ripper (if he existed) was a murderer is de re knowledge of the Whitechapel Murderer that he was a murderer; and the knowledge that Neptune (if it exists) perturbs Uranus is de re knowledge of the eighth planet that it perturbs Uranus. Each of these pieces of de re knowledge is quite real, but each is evidently also quite a posteriori.

What is Kripke’s rationale in N&N for his view that (1) and (2) are a priori for the reference fixer? In the preface to N&N Kripke describes how he hit upon the idea:

I imagined a hypothetical formal language in which a rigid designator ‘a’ is introduced with the ceremony, ‘Let “a” (rigidly) denote the unique object that actually has property F, when talking about any situation, actual or counterfactual.’ It seemed clear that if a speaker did introduce a designator into a language that way, then in virtue of his very linguistic act, he would be in a position to say ‘I know that Fa’, but nevertheless ‘Fa’ would express a contingent truth (provided that F is not an essential property of the unique object that possesses it). (Kripke 1980: 14)

Kripke thinks the a-priority of (1) and (2) is a result or product of the manner in which the reference fixer fixed the reference of the crucial term. His thought appears to be that the reference fixer first recognizes a priori that the sentence is true, and then in a purely a priori manner transitions his/her way from the truth of the sentence to the content itself. To illustrate Kripke’s apparent strategy, let us suppose

6  With the exception of Michael Devitt. See Devitt 2015: 136–137.
7  Plantinga 1974: 8-9n; Levin 1975: 152n; Donnellan 1979. I provide an argument similar to Donnellan’s in Salmon 1986: 141–142, and in Salmon 1987b.



that the reference fixer introduces instead of the measurement term ‘meter’ an invented proper name, ‘OneMeter’, by stipulating

RF: Let ‘OneMeter’ be a proper name of the length of stick S at t0, if stick S exists (and has exactly one length) at t0; otherwise let the name ‘OneMeter’ designate nothing.

Our objective is to establish a priori – in effect, to prove – the proposition semantically expressed by the following sentence:

M: If stick S exists (and has exactly one length) at t0, then the length of stick S at t0 = OneMeter.

To that end we may suppose the reference fixer constructs the following purported proof, in which each line is taken to represent the proposition expressed by the sentence occurring on that line:

1. If stick S exists (and has exactly one length) at t0, then the length of stick S at t0 = the length of stick S at t0. (logic)
2. Line 1 is true iff if stick S exists (and has exactly one length) at t0, then the length of stick S at t0 = the length of stick S at t0. (semantics)
3. Line 1 is true. (1, 2, propositional logic)
4. If stick S exists (and has exactly one length) at t0, then ‘OneMeter’ designates the length of stick S at t0. (stipulation of RF)
5. If (a) line 1 is true, and (b) if stick S exists (and has exactly one length) at t0, then ‘OneMeter’ designates the length of stick S at t0, then (c) M is true. (semantics)
6. M is true. (3, 4, 5, propositional logic)
7. M is true iff if stick S exists (and has exactly one length) at t0, then the length of stick S at t0 = OneMeter. (semantics)
8. If stick S exists (and has exactly one length) at t0, then the length of stick S at t0 = OneMeter. (6, 7, propositional logic)

That M is true is a trivial and nearly immediate consequence of the reference fixing. The basic strategy is to prove the proposition expressed by M (line 8) on the basis of the truth of M (line 6) and the familiar Tarski-semantics equivalence (line 7). Each of lines 1, 2, 4, 5, and 7 of the proof is put forward as semantically a priori. The inferences to lines 3, 6, and 8 are each logically valid. Hence each preserves semantic a-priority. Thus, if each of lines 1, 2, 4, 5, and 7 is indeed semantically a priori, then M is as well.

The most significant problem with this attempt to establish the a-priority of M (there are several problems) is that line 7 is semantically a posteriori. It is a common misconception that the T-sentences (i.e., the instances of the Tarski T-schema) for a natural language are analytic and therefore a priori – e.g., ‘‘Snow is white’ is true in English iff snow is white’. (Cf. the so-called redundancy or disquotational theory of truth.) That they are in fact synthetic and a posteriori is proved



by the Church-Langford translation test.8 Consider the French translation of the classic T-sentence:

‘Snow is white’ est vrai en anglais si et seulement si la neige est blanche.

This French sentence contains exactly the same information, no more and no less, as that contained in the original T-sentence: a non-linguistic necessary and sufficient condition for the truth in English of ‘Snow is white’. But an ideally competent French speaker with no understanding of English does not know this information and cannot learn it except by means of experience – any more than an ideally competent English speaker with no understanding of French can know a priori that ‘La neige est blanche’ is true in French iff snow is white. One’s understanding of a natural language, even of one’s mother tongue, is invariably a posteriori. For exactly similar reasons, lines 2, 3, and 5 are, like line 7, semantically a posteriori.
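To summarize the diagnosis schematically (the notation is mine, not Salmon’s), the reference fixer’s strategy is in effect the following:

\[
\begin{aligned}
&\text{(i) } \mathrm{True}(\ulcorner M\urcorner) && \text{(line 6, a near-immediate consequence of RF)}\\
&\text{(ii) } \mathrm{True}(\ulcorner M\urcorner)\leftrightarrow M && \text{(line 7, the T-sentence for } M\text{)}\\
&\text{(iii) } M && \text{from (i) and (ii) by modus ponens}
\end{aligned}
\]

The inference is valid and so would preserve a-priority; but on the argument just given, premise (ii), like any T-sentence of a natural language, is synthetic and a posteriori, so the purported proof does not show that M is knowable a priori.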

11.3   Quasi-a-priority

We should probably conclude from the preceding considerations that Kripke means something different by his use of the term ‘a priori’. First a bit of taxonomy.9 Information concerning how a name came to name whom or what it does – whether the designation was fixed by description, for example, or instead by ostension and then passed from one speaker to the next – is not genuinely semantic, as such. It is pre-semantic. Information concerning whom or what a name names, by contrast, is typically (not invariably) purely semantic, depending on its pre-semantics. Thus the fact that ‘Walter Scott’ designates Walter Scott (in English) is purely semantic. Also purely semantic is the fact that ‘the sole author of Waverley’ designates whoever singlehandedly wrote Waverley. The resultant fact that ‘the sole author of Waverley’ designates Walter Scott is partly semantic, partly non-semantic, since it is a result of, and dependent upon, both the purely semantic fact that ‘the sole author of Waverley’ designates the sole author of Waverley and the altogether non-semantic, historic fact that Scott singlehandedly wrote Waverley. Analogously, the fact that ‘Snow is white’ is true (in English) iff snow is white is purely semantic, whereas the fact that ‘Snow is white’ is true is only partly semantic, being dependent on the non-semantic fact that snow is white. (A fact is said to be semantic if it is at least partly semantic; a fact is non-semantic if and only if it is not even partly semantic.)

A name whose designation was fixed by description reverses the usual order of things. Line 4 of the reference fixer’s proof expresses a truth of pure semantics, although it does not identify what length ‘OneMeter’ designates (if S exists and has exactly one length at t0), whereas the

244

N. Salmon

semantic fact that ‘OneMeter’ designates the particular length that it does (if S exists and has exactly one length at t0), is dependent on the non-semantic fact that S is exactly that long at t0, and hence partly non-semantic. I submit that what Kripke has in mind by his use of the term ‘a priori’ is a truth that is knowable independently of any experience beyond that on which knowledge of purely semantic (and/or purely pre-semantic) information about the language in question depends (even insofar as such knowledge is a posteriori in the traditional sense). This is a broader category; it includes, but extends beyond, knowledge whose epistemic justification is altogether independent of experience, i.e., a priori knowledge in the traditional sense. I shall say that a truth is quasi-a-priori – for short, qua-priori – if it fits this broader notion, i.e., if it is either a priori or else an a priori consequence of pure semantics (together with pre-semantics). I shall say that a truth is quasi-a-posteriori – for short, qua-posteriori – if it is knowable but not qua-priori, i.e., if its epistemic justification is dependent on some experience or other beyond that on which knowledge of purely semantic (and/or pre-semantic) facts depends. Thus, for example, the fact described by the French sentence displayed above is qua-priori, whereas the simpler fact that ‘Snow is white’ is indeed true in English, although partly semantic, is qua-posteriori. Of course, one can infer the latter fact from the former taken in conjunction with the fact that snow is white, but the truth that snow is white is qua-posteriori. We shall also say that a true sentence is (semantically) qua-priori if the truth it semantically expresses is qua-priori, and that it is (semantically) qua-posteriori if the semantically expressed truth is qua-posteriori. Never mind whether M is a priori. (It is not, but never mind that.) Is M qua-priori? The proof displayed above makes a forceful case that M is indeed qua-priori. Each of the lines 1, 2, 4, 5, and 7, even if not a priori, is at least arguably qua-priori; furthermore each of the inferences to lines 3, 6, and 8, being logically valid, preserves qua-priority. Something along these lines captures, or at least comes close to capturing, the rationale for Kripke’s claim in N&N that (1) and (2) are to be classified as “a priori”, i.e. (as I interpret him), as qua-priori. Line 7 remains the major stumbling block. To be sure, since the biconditional sentence occurring at line 7 is a T-sentence, the meta-meta-truth that the sentence is meta-true is indeed qua-priori. But this does not entail concerning the meta-truth that the biconditional expresses that it is itself qua-priori. Indeed, that meta-truth is in fact qua-posteriori. Knowledge of that information is de re; it is knowledge concerning the length in question – one meter, i.e., 39.3701 inches – that M is true iff stick S (if it exists and has exactly one length at t0) is exactly that long at t0. Since it is qua-priori that M is true (the left-hand-side proposition), any knowledge of the biconditional meta-truth yields (by modus ponens) de re knowledge that is entirely non-semantic (extra-linguistic), viz., knowledge of the particular length in question that stick S (if it exists and has exactly one length at t0) is exactly that long at t0. A moment’s reflection confirms that this bit of de re knowledge is qua-posteriori, making the biconditional qua-posteriori as well. 
(If the biconditional and its left-hand side were both qua-priori, then the right-hand side would also be qua-priori.) In previous work I wrote the following:

11  Naming and Non-necessity

245

it would seem that no matter what stipulations one makes, one cannot know without resorting to experience such things as that S, if it exists, has precisely such-and-such particular length at t0. It would seem that one must at least look at S’s length, or be told that it is precisely that long, etc. (Salmon 1987b: 198)

Exactly similarly, no matter what reference-fixing stipulations one makes, one cannot know of the particular length, one meter = 39.3701 inches, just on the basis of purely semantic (and/or purely pre-semantic) information that S (if it exists, etc.) is exactly that long at t0. Suppose that the reference fixer has heard of S but has never seen it. Suppose further that the reference fixer utters RF using the description ‘the length of stick S at t0’ attributively rather than referentially, in Donnellan’s (1966) sense (“the present length of S, whatever that is”), but under a wildly mistaken impression of S’s length (e.g., that S is about an inch long, or roughly the length of a football field). In such circumstances the reference fixer clearly does not know of the actual length, one meter, that S is exactly that long, even though the reference fixer does know, qua-priori, that M is true. In N&N Kripke supposed that the reference fixer utters RF while looking at S directly in front of him/her. This is a very special kind of case; in a significant sense, it is not a case of genuinely fixing reference by description. In such a case, the reference fixer uses the description referentially rather than attributively. Here the reference fixer does indeed know of the actual length, one meter, that S is exactly that long. But, to use Russell’s terminology, the reference fixer knows this by acquaintance rather than by description.10 The reference fixer’s de re knowledge concerning the length is based on visual contact with the stick and its length. Hence, while it is true that the reference fixer in the envisaged example knows de re concerning one meter that S (if it exists and has a unique length) is exactly that long, it is not qua-priori knowledge.

11.4  Kripke’s Revised Case Post-N&N Kripke shifted ground with respect to his claim that (1) is a priori. In his unpublished 1986 Exxon Distinguished Lectures at the University of Notre Dame, “Rigid Designation and the Contingent A Priori: The Meter Stick Revisited,” he has replaced (1) with a variant along the lines of the following:

 Cf. Kripke, “Rigid Designation and the Contingent A Priori: The Meter Stick Revisited,” unpublished transcription of the 1986 Exxon Distinguished Lectures at the University of Notre Dame, at pp. 35–40 (the close of lecture 2). Kripke there admits that the case of stick S and ‘meter’, as envisaged in N&N, is not a “pure case” of fixing the reference of a de jure rigid designator by description (lecture 1, p. 3). He says that the definite description ‘the length of S at t0’ is not a reference-fixing description, and is instead an “acquaintance-guiding description”. Although I did not know the content of Kripke’s Notre Dame lecture series when I wrote “How to Measure the Standard Metre” (Salmon 1987b), to some extent the latter can serve as a sort of reply. (I explicitly mentioned the lecture series, at p. 204 n. 11.)

10

246

N. Salmon

(1′) If I am presently having a visual experience as of a stick before me of roughly a yard in length, and I am not under any perceptual illusion, then the stick presently before me is presently exactly one meter long. The idea behind this variant is that by forming a conditional and “putting into the antecedent” the experiences that would justify belief of the consequent, those experiences play no justificatory role with regard to the conditional itself (pp. 57, 62–63). The reference fixer may then come to know the consequent proposition (which is itself a posteriori) by attending to his/her experiential knowledge and performing modus ponens on the conditional. (Here we might suppose a Cartesian epistemology, whereby a subject infers the external world from internal experiences and conditionals linking the two.) It thereby seems more plausible that (1′), in contrast to (1), is genuinely a priori. Even in the case of (1′), the justification for the reference fixer’s belief of the relevant proposition is arguably dependent upon experience. As a prelude to arguing for this, we first note that (1′) is logically equivalent to a three-way disjunction: (1″) Either I am not presently having a visual experience as of a stick before me of roughly a yard in length, or I am under some perceptual illusion, or else the stick presently before me is presently exactly one meter long. Now suppose that the reference fixer is blindfolded and informed that stick S, which the subject has heard of but has never seen and has no opinion as to its length, sits a couple of feet in front. At t0, still blindfolded, the reference fixer utters RF. Can the reference fixer now know the disjunctive fact described by (1″) independently of experience (other than experience that justifies knowledge of purely semantic and purely pre-semantic facts)? The reference fixer of course knows the first disjunct of (1″). The justification for the reference fixer’s belief of this proposition is the lack of visual experience as of a stick, and hence is dependent on experience. But that is irrelevant. The relevant question is this: At t0, while still blindfolded, can the reference fixer’s belief of the full disjunctive fact be epistemically justified without appeal to the absence of visual experience and instead entirely by means of intuitive or conceptual connections among the disjuncts themselves? More to the point, concerning the particular length one meter = 39.3701 inches, can the reference fixer, while blindfolded at t0, know de re, but independently of experience, that if he/she is having a non-illusory visual experience as of a stick of roughly a yard in length, then the stick in question is presently exactly that long? One difficulty is that while blindfolded, it is difficult for the reference fixer to get a cognitive grip on the particular length in question, in order to form any belief at all about it. This difficulty is surmountable. Suppose that although the reference fixer has never seen S, he/she happens to be very familiar with a different stick S′ that, by sheer coincidence, is exactly the same length as S. Suppose the reference fixer even knows that S′ is 39.3700787 inches long, and thus knows precisely how long S′ is. The reference fixer can then get hold of the relevant length by thinking of it

demonstratively as that length [the length of S′].11 Thinking of the length in this way at t0, while blindfolded, can the reference fixer know independently of experience that either he/she is not having a visual experience as of a stick of roughly a yard in length, or he/she is under some perceptual illusion, or else the stick presently before him/her is presently exactly that long [the length of S′]? It is evident that the answer is ‘No’. Indeed, it would be quite irrational for the blindfolded reference fixer even to believe (1″) except on the basis of the absence of visual experience. Were he/she to believe the full disjunctive proposition independently of the lack of visual experience, the belief would be only so much guesswork, lucky that the length the reference fixer has in mind happens to coincide with that of S. Furthermore, it is no better for the reference fixer to think of the relevant length as the length of the stick presently before me, thereby eliminating dumb luck in favor of logical certainty. Pending experiential contact with S, thinking of the length by description in this way is not a way of getting connected to the length and does not enable one to form de re beliefs about it. Of course, the reference fixer can always learn the relevant proposition by simply removing the blindfold and opening his/her eyes. But looking at the stick is not a way for the reference fixer to gain non-experiential knowledge of (1′). As soon as the reference fixer looks at S, and thinks of its length demonstratively as that length [the length of the stick presently before me], the visual experience is itself an essential part of the epistemic justification for his/her belief of (1′). Otherwise removing the blindfold would be entirely unnecessary for epistemic justification. Our conclusion is that (1′) is, like (1)–(3), both a posteriori and qua-posteriori.
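The logical scaffolding used in this section can be displayed compactly. Abbreviating "I am presently having a visual experience as of a stick before me of roughly a yard in length" as A, "I am under some perceptual illusion" as I, and "the stick presently before me is presently exactly one meter long" as C, we have, schematically:

\[
(1')\colon\ (A \land \lnot I) \rightarrow C \qquad\Longleftrightarrow\qquad (1'')\colon\ \lnot A \lor I \lor C
\]

Kripke's revised strategy is that (1′) be known without experiential justification, and that C then be inferred by modus ponens from (1′) together with experiential knowledge of A ∧ ¬I. The argument above grants the equivalence and the modus ponens step; what it denies is that the blindfolded reference fixer can know the disjunction (1″) de re, of the relevant length, without relying on experience.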

11  In terms of Kaplan's indexical operator ‘dthat’, the reference fixer can think of the length in question by means of the semantic character of the indexical expression ‘dthat[the length of S′]’. See Kaplan 1989, especially pp. 518–522.

References

Church, A. 1950. On Carnap's analysis of statements of assertion and belief. Analysis 10: 97–99.
Devitt, M. 2015. Should proper names still seem so problematic? In On reference, ed. A. Bianchi, 108–143. Oxford: Oxford University Press.
Donnellan, K. 1966. Reference and definite descriptions. Philosophical Review 75 (3): 281–304.
———. 1979. The contingent a priori and rigid designators. In Contemporary perspectives in the philosophy of language, ed. P. French, T. Uehling, and H. Wettstein, 45–60. Minneapolis: University of Minnesota Press.
Kaplan, D. 1979. On the logic of demonstratives. Journal of Philosophical Logic 8 (1): 81–98.
———. 1989. Demonstratives. In Themes from Kaplan, ed. J. Almog, J. Perry, and H. Wettstein, 481–563. New York: Oxford University Press.
Kripke, S. 1980. Naming and necessity. Cambridge, MA: Harvard University Press.
———. 1986. Rigid designation and the contingent a priori: The meter stick revisited. (Unpublished transcription.)
———. 2011. Unrestricted exportation and some morals for the philosophy of language. In S. Kripke, Philosophical troubles, 322–350. New York: Oxford University Press.


Levin, M.E. 1975. Kripke's argument against the identity thesis. Journal of Philosophy 72 (6): 149–167.
Plantinga, A. 1974. The nature of necessity. New York: Oxford University Press.
Salmon, N. 1981. Reference and essence. 2nd ed. Amherst: Prometheus, 2005.
———. 1986. Frege's puzzle. 2nd ed. Atascadero: Ridgeview, 1991.
———. 1987a. Existence. In Philosophical perspectives, 1: Metaphysics, ed. J. Tomberlin, 49–108. Atascadero: Ridgeview. Reprinted in N. Salmon, Metaphysics, mathematics, and meaning, 9–49. New York: Oxford University Press, 2005.
———. 1987b. How to measure the standard metre. Proceedings of the Aristotelian Society, New Series 88: 193–217. Reprinted in N. Salmon, Content, cognition, and communication, 141–158. New York: Oxford University Press, 2007. Also in On sense and direct reference, ed. M. Davidson, 962–980. Boston: McGraw-Hill, 2007.
———. 1993. Analyticity and apriority. In Philosophical perspectives, 7: Language and logic, ed. J. Tomberlin, 125–133. Atascadero: Ridgeview. Reprinted in N. Salmon, Content, cognition, and communication, 183–190. New York: Oxford University Press, 2007.
———. 2001. The very possibility of language: A sermon on the consequences of missing Church. In Logic, meaning and computation: Essays in memory of Alonzo Church, ed. C.A. Anderson and M. Zeleny, 573–595. Dordrecht: Kluwer.
Soames, S. 2007. Actually. Proceedings of the Aristotelian Society, Supplementary Volumes 81: 251–277.

Chapter 12

Against Rigidity for General Terms

Stephen P. Schwartz

Abstract  The difficulties with extending Kripke's notion of rigidity from singular terms to general terms are well known. The most serious are overgeneralization and the related problem of trivialization. Proposed solutions fall into two camps. One proposes restricting rigidity to terms that apply essentially. The most detailed version of this is rigid application as elaborated and defended by Michael Devitt. The other solution is to happily accept that all common nouns and unstructured general terms, as well as some structured ones, are rigid, thus embracing the overgeneralization charge. This approach has been defended by Genoveva Martí and José Martínez-Fernández, Joseph LaPorte, Nathan Salmon, Arthur Sullivan, and Michael Johnson, among others. Neither of the proposed solutions is satisfactory: the one view offers no systematic way of classifying general terms as rigid, the other includes too many terms as rigid. Overgeneralization remains a significant problem. The notion of rigidity cannot be satisfactorily extended to general terms, and there is no reason why we should regret this.

Keywords  Natural kind terms · Nominal kind terms · Rigid designation · Rigid application · General terms · Paradigm terms · Rigid essentialism · Rigid expressionism

12.1  Introduction

The difficulties with extending the notion of rigidity from proper names to general terms are well known. My view is that these difficulties cannot be overcome without arcane complexities and confusions. Thus the rigid/non-rigid distinction cannot be usefully applied to general terms. There are other approaches that offer ways to successfully do the work that rigidity was thought to do.



In the following, I will use "general term" and the distinction between general terms and singular terms in Mill's classic sense. Note that Mill uses "general name" where we today would use the phrase "general term":

The distinction, therefore, between general names, and individual or singular names, is fundamental; and may be considered as the first grand division of names. A general name is … a name which is capable of being truly affirmed, in the same sense, of each of an indefinite number of things. An individual or singular name is a name which is only capable of being truly affirmed, in the same sense, of one thing. (Mill 1974: 28)1

1  Mill also mentions abstract names of attributes. "But when only one attribute, neither variable in degree nor in kind, is designated by the name; as visibleness; tangibleness; equality; squareness; milkwhiteness; then the name can hardly be considered general; for though it denotes an attribute of many different objects, the attribute itself is always conceived as one, not many. To avoid needless logomachies, the best course would probably be to consider these names as neither general nor individual, and to place them in a class apart" (Mill 1974: 30).

Thus general terms will include common nouns, property designators such as "green," and other predicate and property terms. There is no need for lengthy discussion here of the difficulties with treating general terms as rigid. They have been widely aired in the literature.2 Briefly, the extension of e.g. the general term "tiger" varies from possible world to possible world. Thus if "tiger" is held to designate the items in its extension, then it is non-rigid and indeed almost all common nouns would turn out to be non-rigid. A few might still be rigid such as "prime number" but even the natural kind terms such as "water," "gold," "tiger" that Kripke held to be rigid would turn out to be non-rigid. As a result philosophers have looked for more nuanced approaches to the question of the rigidity of general terms. These nuanced approaches to general term rigidity have run into difficulties of their own as I will detail.

2  See Schwartz 2002 and Glüer and Pagin 2012 for more extensive explanations of the difficulties.

The two main ways to solve the problems of extending rigidity to general terms are what I will call in the interests of simplicity "rigid essentialism" and "rigid expressionism." (I will elaborate and criticize these views in more detail in what follows. I am just sketching out the territory here in an introductory manner.) Rigid essentialism is the view that Kripke's notion of rigidity for natural kind terms can be captured by noting that most natural kind terms apply essentially. The idea is that a tiger is essentially a tiger whereas a lawyer is not essentially a lawyer, thus "tiger" applies essentially whereas "lawyer" does not. This approach highlights the distinction between natural kind terms and artifact and social kind terms (what I will call, again in the interests of simplicity, "nominal kind terms"). The claim is that natural kind terms apply essentially whereas nominal kind terms typically do not. According to its proponents, rigid essentialism also has the virtue of defeating descriptivism about natural kind terms and related terms. Since the descriptions associated with the terms do not apply essentially, the descriptions cannot be equivalent to the terms.

Rigid expressionism on the other hand is more popular than rigid essentialism. Rigid expressionism is the view that rigid general terms designate or express the
same kind or property in every possible world.3 The kind or property then replaces the extension as the object that is rigidly designated. The kind or property expressed or designated by "tiger" remains the same from world to world – the species Panthera tigris or the property "being a tiger" or tigerhood or some such thing depending on the particular version of the view. The proponents of rigid expressionism are happy to accept that all common nouns are rigid – both "tiger" and "lawyer" come out rigid because "tiger" expresses the same animal kind or species in every possible world whereas "lawyer" expresses the same occupational kind or profession in every possible world. This can be summed up as the claim that at least unstructured general terms (basically single word general nouns like "tiger," "gold," "refrigerator," "lawyer") are proper names of kinds or properties and thus rigidity for familiar proper names carries over neatly to them. According to this view all common nouns including all the simple nominal kind terms are rigid. Proponents of rigid expressionism, while accepting "overgeneralization," which they see as no flaw, avoid trivialization by noting that definite descriptions of kinds or properties can be non-rigid when properly construed. So for example whereas "blue" rigidly expresses the color property blue, "x's favorite color" denotes blue in the actual world but some other color in a different possible world.

Alas, neither rigid essentialism nor rigid expressionism succeeds in offering a useful version of rigidity for general terms, as I will argue. Rigid essentialism offers no systematic or plausible distinction between natural kind terms and nominal kind terms, nor is it useful in defeating descriptivism, and thus loses its point. Rigid expressionism, on the other hand, erroneously runs together as rigid terms that are fundamentally different, thus ignoring vital semantic distinctions. Contrary to rigid expressionism, "tiger" and "lawyer" are not both rigid. I argue instead that the rigid/non-rigid distinction does not usefully apply to any general terms (common nouns, property designators, predicate terms).

3  I use "rigid expression" instead of "rigid designation." The distinction may be purely terminological. Both Salmon 2005 and Johnson 2011 use the phrase "rigid expression of a property" but other proponents are happy to stick with "designation." Orlando in her paper on the topic calls rigid essentialism "the essentialist conception" and rigid expressionism "the identity of designation conception" (Orlando 2014: 51). She favors a version of the latter.
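For orientation, the two rival notions just sketched can be rendered schematically; these are rough formulations of the definitions quoted in the next two sections (Devitt 2005: 146; Johnson unpublished: 5), not the proponents' own notation:

\[
\text{``}F\text{'' is a rigid applier} \iff \forall x\,\forall w\,\big(F \text{ applies to } x \text{ at } w \rightarrow \forall w'\,(x \text{ exists at } w' \rightarrow F \text{ applies to } x \text{ at } w')\big)
\]
\[
\text{``}G\text{'' is a rigid expresser} \iff \forall w\,\big(\text{the property } G \text{ expresses at } w = \text{the property } G \text{ expresses at the actual world}\big)
\]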

12.2  Against Rigid Essentialism

Rigid essentialism has been most carefully and fully articulated by Michael Devitt (see Devitt 2005, 2009). Devitt argues that natural kind terms are mostly what he calls "rigid appliers" whereas artifact and social terms are not, although there are exceptions on both sides of the aisle. "[A] general term 'F' is a rigid applier iff it is such that if it applies to an object in any possible world, then it applies to that object in every possible world in which the object exists. Similarly for a mass term" (Devitt 2005: 146). Devitt claims that "rigid application can do much the same theoretical
work for kind terms as rigid designation does for singular terms, the work of refuting description theories” (Devitt 2005: 159). Unfortunately Devitt’s theory depends for its plausibility on a limited diet of examples. If we “cherry pick” terms, Devitt’s rigid application theory looks like a winner but when we go beyond the obvious examples difficulties mount. Apparently if an animal is a tiger in the actual world, then it is a tiger in every possible world in which it exists. On the other hand, it is indisputable that no lawyer or pediatrician is essentially a lawyer or pediatrician. But once we look beyond these handpicked examples it becomes clear that the view won’t work to make any interesting or systematic distinction between terms.4 Many natural kind terms do not apply essentially. For example, “frog” is surely a natural kind term, but a frog is not essentially a frog (see Schwartz 2002: 274–275). Any frog in this world sadly never gets beyond its tadpole stage in some other possible world. It would be a stretch too far to defend the rigid application of “frog” by claiming that a tadpole is a frog. Surely that is not the way the term “frog” is used in ordinary language, and the rigid/non-rigid distinction is meant by Kripke to apply to ordinary language natural kind terms and to rely on our ordinary linguistic intuitions. “Frog” is just the sort of term that according to Kripke ought to be non-descriptional and rigid. Notice that the situation with “frog” will arise anywhere there are nouns relating to phase changes. “Ice” is a natural kind term, but a collection of molecules that are ice in this world will not be ice in some other possible world. Thus that collection of molecules that constitutes some ice cube is not essentially ice. Indeed nothing is essentially ice. On the other hand, many artifact kind terms seem to apply essentially. I do not see how my refrigerator could be anything but a refrigerator in any other possible world. Even if it is put to some other use and never used to cool it is still a refrigerator but a “repurposed” one. Thus my conclusion is that Devitt’s notion of rigid application cannot be used to systematically sort general terms into rigid-like natural kind terms versus non-rigid nominal kind terms. More recently Devitt has responded to my claim that artifact kind terms are rigid appliers. He argues that in fact “table” is not a rigid applier5: Consider this story of a certain wooden object. It was designed to be a table, made in a certain factory, and then sold and used as a table. The term ‘table’ clearly applies to it. But it might not have done. Suppose that this very object had been made in the same way from the same materials in the same factory but designed not to be a table but rather a light shade for a very modern building. And that was how it was used for its entire existence. Then the term ‘table’ would never have applied to it. So, ‘table’ is not a rigid applier. (Devitt 2009: 246)

4  Criticisms of Devitt's rigid application theory can be found in many places. I particularly recommend LaPorte 2013: 110–116 and Rubin 2013.

5  Kripke thought otherwise: "if the very block of wood from which the table was made had instead been made into a vase, the table never would have existed. So (roughly) being a table seems to be an essential property of the table" (Kripke 1980: 115 n. 57).

Again Devitt seems to be cherry-picking his examples. I suppose that "table" might not be a rigid applier – tables can be rather simple things, so perhaps their essences
are correspondingly limited – but what about "TV set" or "refrigerator"? My TV set was made in a factory along with many other TV sets, with a production routine, intentions and so on. I do not see how it could be produced and yet not be the very TV set that it is. My point is that this very thing in front of me in my den – it could not have failed to be a TV set. Presumably I could point to other individual objects and make the analogous claim. Could my car have failed to be a car? I do not see how. I suppose that all the parts of my Subaru Forester could have been somehow shipped to Borneo and assembled in the shade of a tree by pious villagers as a religious exercise. But still I'd say they've assembled a car. If this point can be extended, then the terms "TV set," "car," "refrigerator," etc. are at least in some uses rigid appliers.

Devitt is willing to grant that "frog" is a natural kind term and is not a rigid applier simpliciter. Instead he now suggests that "frog" is a mature-rigid applier. "A general term 'F' is a mature-rigid applier iff it is such that if it applies to an organism in any possible world, then it applies to that organism in every possible world in which the organism exists and develops to maturity" (Devitt 2009: 248). This strikes me as adding complication to complication in an ad hoc attempt to save a theory. Instead of picking away at this new complication, consider a general argument to show that rigid application, including mature-rigid application, is not going to help us understand the semantics of general terms since it rests on metaphysical assumptions that are more obscure than what the theory is trying to clarify. Devitt notes the metaphysical foundations of his theory: "Clearly, if 'F' is a rigid applier then any individual F must be essentially F. So the view that there are any such 'F's entails a fairly robust metaphysical thesis. Still, that thesis has been popular from ancient times to the present and I think that it is plausible" (Devitt 2005: 146). Popular and plausible or not, it is contentious. Suppose there are metaphysically possible worlds where some mature eagles turn into mature tigers. In other worlds those same eagles don't ever turn into tigers. We don't know so much about what's possible and what's not to absolutely rule that out. "Tiger" would then not be a rigid applier or mature-rigid applier. Devitt has the burden of denying this story. No doubt he welcomes this burden, but why should he even have it? I do not see how any such remote possibility or impossibility (if it is impossible) has any relevance to our understanding of the term "tiger." "Tiger" is still non-descriptional and we can establish that by the modal and epistemic arguments of Kripke without peering into deep metaphysical waters.6

6  See also Inan 2008. Inan makes the same point more succinctly: "we should not expect to have to enter into a deep metaphysical debate concerning essentialism, to decide whether a term such as 'tiger' is rigid or not" (Inan 2008: 215).

On a related front, Devitt is dubious about the usefulness of the notion of a natural kind term. He points out correctly that "natural kind term" is vague. (But then so also is almost every general term outside of mathematics.)

[H]owever we tidy up our account of natural kinds, it is hard to see how 'natural kind term' could come out as a theoretically significant description in semantics. Thus, 'plastic' is not likely to be classified as a natural kind term and yet it is surely semantically just like the paradigmatic natural kind term 'gold': the two terms seem equally nondescriptive; and if
there is any acceptable sense in which ‘gold’ is rigid then surely ‘plastic’ will be rigid in that sense too. Furthermore, the biological term ‘predator’ must be counted as natural and yet it seems descriptive and nonrigid. And what could be the principled basis for counting terms from the social sciences like ‘unemployed’ and ‘nation’ as not natural? Yet they are surely descriptive and nonrigid. (Devitt 2009: 245)

I don’t think we should be so quick to chuck the category “natural kind term” but I agree that we should not put too much emphasis on the word “natural” in the phrase “natural kind.” Many of our familiar animals and plants are artifacts in that they are the result of intentional human activity. Likewise we have artificially produced new elements. I would accept that “plastic” is a natural kind term, but I’m not willing to accept “predator” or “unemployed.” For me the distinction between natural kinds and nominal kinds rests on Mill’s classical, and still useful, distinction between kinds whose nature is inexhaustible and those which rest on one or only a few criteria. Alas, Mill is verbose but this passage is worth quoting in full. But if we contemplate any one of the classes so formed, such as the class animal or plant, or the class sulphur or phosphorus, or the class white or red, and consider in what particulars the individuals included in the class differ from those which do not come within it, we find a very remarkable diversity in this respect between some classes and others. There are some classes, the things contained in which differ from other things only in certain particulars which may be numbered, while others differ in more than can be numbered, more even than we need ever expect to know. Some classes have little or nothing in common to characterize them by, except precisely what is connoted by the name: white things, for example, are not distinguished by any common properties except whiteness; or if they are, it is only by such as are in some way dependent on, or connected with, whiteness. But a hundred generations have not exhausted the common properties of animals or of plants, of sulphur or of phosphorus; nor do we suppose them to be exhaustible, but proceed to new observations and experiments, in the full confidence of discovering new properties which were by no means implied in those we previously knew. While, if any one were to propose for investigation the common properties of all things which are of the same color, the same shape, or the same specific gravity, the absurdity would be palpable. (Mill 1974: 122)

I see no harm in calling the inexhaustible classes "natural kinds"7 and the exhaustible ones "nominal kinds" as long as we realize that natural kinds do not need to be natural. Our paradigm cases of inexhaustible classes are, after all, natural natural-kinds like tiger, gold, water. Another nice feature of Mill's distinction is that being a natural kind is a matter of degree. Some classes may be less "inexhaustible" than others. (The inexhaustibility does not have to be actually infinite. It could just be indeterminately large.) In any case, no version of rigid essentialism is going to shed any light on this fundamental distinction between natural kinds and nominal kinds or the terms that we use to talk about them, or the notion of rigidity applied to general terms.

7  Mill calls them "real Kinds" with capital "K."


12.3  Against Rigid Expressionism

Rigid expressionism put bluntly is the view that unstructured general terms are proper names of kinds even when used for predication.8 Thus rigid expressionists hold that the kinds/properties/universals exist and our general terms name them and we use the terms as rigid names of the kinds just as we use proper names as rigid names of concrete individuals. The label "rigid expression" is due to Nathan Salmon (Salmon 2005) but the view in one form or another has a large number of adherents. (McGinn 1982, Donnellan 1983, LaPorte 2000 and especially 2013, Salmon 2005, Sullivan 2007, López de Sa 2008, Martí and Martínez-Fernández 2010, Johnson 2011 and unpublished, Orlando 2014 all hold versions of rigid expressionism.) The idea is that general terms designate their extensions but express the property named. A general term is rigid, according to this view, because the same property is expressed by the general term at every possible world. "The view I endorse is Rigid Expression. That a general term G is rigid iff it is a rigid expresser – that is, iff if it expresses the property P relative to the actual world, then it expresses P relative to all worlds. Fellow travelers … though it should be noted … endorse Rigid Expression's close cousin Rigid Designation, or don't make the distinction" (Johnson unpublished: 5). The rigid/non-rigid distinction is maintained, and thus trivialization is avoided, in various ways. The typical move is to claim that certain definite descriptions of properties or kinds are non-rigid. Thus "blue" would be a rigid expresser of the color blue, but "the color of Democratic leaning states" would be non-rigid because in other possible worlds it might be some other color.

8  Sometimes kind terms are used as singular terms in subject position and then something is predicated of the kind. E.g. "Gold is a metal." This is a statement about the kind gold. I am mostly concerned with kind terms in predicate position where a property is being predicated of an object. E.g. "My wedding ring is gold." If the proponents of rigid expressionism do not mean their view to apply to kind terms used in predicate position, then there is nothing much to their view, although I share Mill's concern that names of kinds and properties are not straightforward singular terms in any case.

Like rigid essentialism, rigid expressionism seems very plausible and appealing at first sight. It's unfortunate that it cannot be worked out in any illuminating way. Rigid expressionism has many advantages besides being initially plausible and appealing. (1) Rigid expressionism supports semantic uniformity – the idea that syntactically similar items should be semantically similar, thus all common nouns are uniformly held to be rigid; (2) rigid expressionism conforms with Kripke's claim that natural kind terms are like proper names; and (3) rigid expressionism gives a clear explanation of the a posteriori necessity of identity claims involving kinds, such as "water is H2O." Both terms are rigid. Although "H2O" is structured, it is considered to be rigid. Rigid expressionism also has significant difficulties. (1) Added complexity. We need a two-tiered system of reference for rigid expressionism to work so that a general term can designate its extension in a world but also express (or designate) a kind or property. (See Linsky 1984 and 2006 for the most popular version of such a two-tiered system.) (2) Kripke gave no indication that his
considerations about natural kind terms would extend to nominal kind terms and Putnam only briefly suggested that words like “pediatrician” could develop a natural kind sense. (3) General terms are syntactically unlike names in that they most frequently occur in predicate position, they are used to classify, are quantifiable, none of which are true of proper names.9 Furthermore, the semantic uniformity is limited, since some structured general terms (e.g. “H2O”) are supposed to be rigid, others are non-rigid. I want to note these difficulties (and advantages) but focus on what I take to be the most serious issue that rigid expressionism faces – the overgeneralization problem.10 The original worry is that all unstructured (and many other) general terms will turn out to be rigid in which case the rigid/non-rigid distinction loses its point. Proponents of rigid expressionism address the overgeneralization problem by happily accepting it. For example, Martí and Martínez-Fernández welcome extending rigidity to all common nouns. “Simple, name-like general terms display a similar semantic behavior and so they belong in the same semantic category, different from the category of complex general terms, precisely for the same reasons that proper names and singular terms with descriptive content belong in different categories…. The fact that all simple general terms are characterized as rigid is, we think, not an undesirable overgeneralization of rigidity. It’s just the way things should be” (Martí and Martínez-Fernández 2010: 56). And “[w]hen it comes to semantic function, we think that terms such as ‘water’ or ‘tiger’ (terms for natural kinds) behave in exactly the same way as ‘bachelor’, ‘philosopher’, ‘computer’ or ‘pencil’, pretty much for the same reason that we think that all proper names are rigid, independently of whether they name a person or a robot” (Martí and Martínez-Fernández 2011: 288). Martí and Martínez-Fernández and others maintain the distinction between the rigid and non-rigid general terms by pointing out that kinds can be designated by non-­ rigid definite descriptions. For example, LaPorte (2000) argues that “honeybee” is a rigid designator of an insect kind whereas “the insect most widely farmed for honey” is non-rigid. In this way we get a distinction among kind or property designators that parallels the distinction that Kripke pointed out for singular terms. The simple proper name “Aristotle” is rigid whereas “the most famous student of Plato” is non-rigid. The simple color designator “white” is rigid whereas “the color of Antarctica” is non-rigid. (This is LaPorte’s 2013 prominently used example.) In my view none of this works smoothly nor is it worth the trouble of trying to make it work. Semantic uniformity is certainly a simplifying idea and the analogy of general terms with proper names is equally appealing but both are overdrawn with respect to our common nouns. I claim that there are significant and fundamental differences between typical natural kind terms like “gold,” “tiger,” and “water,”

9  Although recently leading philosophers and linguists (Fara 2015, Bach 2015) have argued that proper names are predicates, which would probably do more harm than good to rigid expressionism. In any case, we have the confusing situation of semanticists arguing that (unstructured) predicates are proper names while others are prominently claiming that proper names are predicates. 10  This is a problem that I discussed in previous publications (see in particular Schwartz 2002).


and typical nominal kind terms like “refrigerator,” “pediatrician,” and “screwdriver.”11 It would be so congenial and so consistent with the classic publications of Kripke to be able to establish that the former are rigid and the latter are non-rigid. Alas, that is not to be and nobody any longer thinks that it can be.12 Thus the overgeneralizing theories of Martí and Martínez-Fernández and others are indeed viciously overgeneralizing. I do not deny that semanticists can work out and perhaps have successfully worked out clever (and complex) ways of enforcing a distinction between rigid and non-rigid property designators that treat all common nouns as rigid, but they are not worth the price. They feel ad hoc and constructed purely to salvage the distinction. We are better off recognizing and utilizing diversity where we intuitively see it. The common nouns that have a natural kind sense and function like proper names as Kripke pointed out stand on one side, whereas the artifact kind terms and social kind terms typically stand on another. The two kinds of terms are introduced into the language in differing ways, function to generate necessary truths in differing ways, and have differing modal profiles.13 We do not need rigidity to describe these differences as I will argue. Furthermore, there is a large difference between general sentences such as “Water is H2O,” “Gold is a metal” and “All tigers are animals” on the one hand and “All pediatricians are doctors” and “All screwdrivers are tools” on the other. Sentences like “All tigers are animals” are synthetic, necessary (if true), and a posteriori as was established by Kripke and Putnam in the early 1970s. This was certainly an exciting and revolutionary insight that went directly against the orthodoxy of Anglo/American linguistic philosophy which viewed such sentences as analytic and a priori or synthetic and a posteriori.14 Similarly “All pediatricians are doctors” and “All screwdrivers are tools” were considered to be analytic, a priori, and necessarily true by definition or convention. The pre-Kripke orthodoxy was wrong about “All tigers are animals” but right about “All pediatricians are doctors” etc. “All pediatricians are doctors” is in some relevant sense (which I will shortly try to sharpen) analytic and known a priori. Understanding the language is sufficient to know the truth of it. On the other hand, being completely conversant with the terms in “All tigers are animals” is not sufficient to know its truth, and in fact we may all be wrong about that. Paraphrasing a claim of Putnam’s, all the tigers might be robots

11  I think I can count Devitt as an ally at least this far. Recall that he claims that "predator," "unemployed," and "nation" are descriptive and non-rigid in contrast to e.g. "tiger" and "gold" which would be rigid according to him.
12  Actually I should say "very few." Orlando (2014) argues that it can be. I think she's on the right track except for holding that natural kind terms are rigid. Also of course Devitt seeks to maintain the distinction along these lines.
13  And of course there are intermediate, ambiguous, and hybrid cases. E.g. "vixen" means "female fox" and "screwdriver" is also the name of a mixed drink made with vodka.
14  I believe that the general view was that "All tigers are animals" and "Gold is a metal" were analytic.


sent by aliens to spy on us.15 Given the recent spate of fake news such a claim may not be that far-fetched. Could all the pediatricians be non-human aliens sent to spy on us? Of course! But they better be validly practicing medicine for children or they’re not pediatricians. (See Schwartz 1978 for extensive discussion of this example.) One reason that philosophers thought that e.g. “All tigers are animals” is analytic is that it is not refutable by isolated counterexample. And of course neither is “All pediatricians are doctors.” They have that in common and for a similar reason. We would not count a single individual non-animal tiger-like creature, say a really good Disney robot, as a tiger or animal. Likewise we would not count a non-doctor as a pediatrician but only as a phony pediatrician or someone posing as a pediatrician. But the key difference is that “All tigers are animals” is falsifiable in the sense that our whole empirical theory of tigers may be mistaken. (I’m using “falsifiable” and “may be” in the purely epistemic sense.) But our whole theory of pediatricians cannot be mistaken, because we don’t have such a theory. We have a conventional definition and that’s it. As Mill pointed out in his System of Logic (see the lengthy quote above) there are kinds, the natural kinds or real Kinds, that are inexhaustible in terms of our research. A hundred generations of researchers, as Mill puts it, would not be enough to exhaust the nature of tigers or gold. Thus falsifiability of our current firmly held beliefs remains a possibility. But with other kinds like pediatrician and screwdriver everything is on the surface. There is no depth for us to research except as they may be included in other natural kinds. They are based on one or a few superficial characteristics. There is a certain artificiality or conventionality about these nominal kinds that is absent with natural kinds. Nominal kind terms will not support a posteriori necessary propositions because the kinds are not inexhaustible. They are exhausted by the surface criteria. And that is a fundamental difference, in my opinion. “All pediatricians are doctors” is not epistemically falsifiable.16 My basic argument for the semantic distinction between natural kind terms and nominal kind terms is however a different and, I think, a simple one. In the early days of the Kripke/Putnam Revolution I often met resistance to their picture of the semantics of natural kind terms. One effective way of dispelling this resistance was to point out “Look, general terms surely could function the way Kripke describes natural kind terms. They could be like proper names, etc. Introduced in a baptism, reference fixed by descriptions that are not part of the meaning, etc. And thus

15  I am using "may" and "might" in the epistemic sense which is relevant to analyticity. If a claim is epistemically falsifiable, it isn't analytic.
16  As though we could wake up one day and read on the Google Newsfeed "Amazing new discovery! Pediatricians aren't doctors after all, they're actually plumbers." I suppose we can imagine arcane stories where we would be inclined to say "Pediatricians aren't doctors as we thought all along" but likewise we can imagine arcane stories, and maybe ones that aren't so arcane, where we would be inclined to say "Shakespeare wasn't Shakespeare." This hardly refutes Kripke's theory of proper names or the standard logic of identity, nor do arcane stories refute the analyticity or a prioricity of "All pediatricians are doctors."


support necessary a posteriori general propositions. And one can stipulatively introduce such terms easily via a formal baptism of sorts. That there so easily could be such terms suggests strongly, I insisted, that there are such terms in our common language.” Worked every time! (Well, almost every time.) (See Schwartz 1979 where I spell this out in some detail.) I still agree with all this as long as it doesn’t include rigidity. But now the argument can be run the other way. As Donnellan points out and then discusses in detail proper names can be introduced as rigid designators or stipulatively to abbreviate descriptions. “For we should not, of course, suppose that names cannot be introduced as abbreviations; it is obvious that we can do that if we want to” (Donnellan 1977: 14). Kripke in Naming and Necessity specifically acknowledges that we can do this: The picture which leads to the cluster-of-descriptions theory is something like this: One is isolated in a room; the entire community of other speakers, everything else, could disappear; and one determines the reference for himself by saying – ‘by “Gödel” I shall mean the man, whoever he is, who proved the incompleteness of arithmetic’. Now you can do this if you want to. There’s nothing really preventing it. You can just stick to that determination. If that’s what you do, then if Schmidt discovered the incompleteness of arithmetic you do refer to him when you say ‘Gödel did such and such’. (Kripke 1980: 91)

Names introduced this way are not rigid, of course: Suppose the reference of a name is given by a description or a cluster of descriptions. If the name means the same as that description or cluster of descriptions, it will not be a rigid designator. … If we used that as a definition, the name ‘Aristotle’ is to mean ‘the greatest man who studied with Plato’. Then of course in some other possible world that man might not have studied with Plato and some other man would have been Aristotle. (Kripke 1980: 57)

Now surely the same thing is true of general terms. A simple one word general term can be introduced stipulatively as an abbreviation for a description. By “description” here I do not mean “definite description” but rather a list of necessary and sufficient conditions for the application of the term (or possibly a cluster à la Wittgenstein/Searle) – i.e. a definition. In fact this is done frequently in philosophy. Semanticists commonly stipulate what they mean by “intension,” “extension,” “predicate,” “semantics,” and so on. Johnson stipulated what he means by “rigid expresser.” Nelson Goodman famously stipulated a meaning for his made up term “grue”: “grue” “applies to all things examined before t just in case they are green but to other things just in case they are blue” (Goodman 1965: 74). Goodman calls this a definition in the same place. Devitt stipulatively defines the term “rigid applier.” We must, then, grant that general terms can be introduced stipulatively to be abbreviations for descriptions. Further, these general terms are terms in necessarily true sentences. For example, “All green things examined before t are grue,” etc. Goodman could not get his argument started and his famous paradox to enlighten (or mystify) philosophers if we were not able to know such things and know them on the basis of his definition. Likewise “All rigid-appliers are general terms.” We do not need theories of rigidity to tell us why these sentences are necessarily true. Nothing deep at all is involved. It’s rather trivial. They are true by definition, i.e. analytically true.
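To make the definitional route concrete, Goodman's stipulation can be set out as an explicit abbreviation (a schematic rendering of the definition just quoted, writing ExaminedBefore_t(x) for "x is examined before t"):

\[
\mathrm{Grue}(x) \;\equiv_{\mathrm{def}}\; \big(\mathrm{ExaminedBefore}_t(x) \land \mathrm{Green}(x)\big) \;\lor\; \big(\lnot\mathrm{ExaminedBefore}_t(x) \land \mathrm{Blue}(x)\big)
\]

It follows immediately from the definition that every green thing examined before t satisfies the first disjunct and so is grue; "All green things examined before t are grue" is therefore knowable on the basis of the stipulation alone, which is all the necessity in question requires.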


These are not isolated examples. General terms are introduced by stipulative definition in every technical and academic field. It's a very useful device.17 Given that we can introduce a general term and stipulate that it is to be an abbreviation for a descriptive necessary and sufficient condition, it would be astonishing if we did not do this in less technical areas, and indeed in the common language. Of course, in the common language such terms will be subject to the usual vagueness and ambiguity. (But so are the supposed rigid natural kind terms.) In my favorite pastime, boating, we have thousands of stipulatively defined terms. A sloop is a sailboat with one mast and a mainsail and jib. A yawl is a sailboat with two masts, the mizzen mast behind the steering post; a ketch has two masts, the mizzen in front of the steering post, etc. Virtually all of the general terms for the parts of a sailboat – "mast," "sheet," "rudder," "boom," "sail," "stay," "shroud," etc. – as well as many other terms such as "starboard" and "port," were introduced stipulatively or in a similar way and can be seen as abbreviations for descriptions. It would be so laborious to say "cable that supports the mast either port or starboard." Instead we say "shroud." Of course, these terms were not introduced as explicitly as "grue" and "rigid applier" but that should not make any difference if they are understood to function, as I think they are, as abbreviations for general descriptions. And likewise these terms are terms in necessary truths such as "All sloops are sailboats." Again we do not need a theory of rigidity to explain the necessity of that statement. It is analytic – just like "All pediatricians are doctors" and "All screwdrivers are tools." Does this mean that every competent speaker of English knows that all sloops are sailboats? I'm sure they don't. There are many sub- and sub-sub-languages as part of English. People who are conversant with the terms "sloop" and "sailboat" know a priori that all sloops are sailboats.

17  In his book Rigid Designation and Theoretical Identities LaPorte argues that even if "bachelor" and "soda pop" are respectively abbreviations for "eligible unmarried male" and "sweet, carbonated beverage" they would still be rigid and non-descriptional. I find his argument for this claim to be compressed and obscure. He writes: "But here again, it is not at all clear that the apparent descriptive status of the relevant expressions is genuine" (LaPorte 2013: 54). The problem according to LaPorte is that the terms used in the description are not descriptive. "Sweet," "carbonated," and "beverage" he states "simply refer … to their respective properties" (LaPorte 2013: 54). I of course question just this and especially the use of "simply" here. "Sweet" may well be a natural kind term, but "carbonated" and "beverage" strike me as artifact terms. In any case, suppose all the definitions ultimately reduce to undefinable natural kind terms like "sweet." This does not mean that common nouns defined by those terms are not descriptional. I do not see how the ultimate status of the terms in the definiens makes any difference as long as the definiens supplies a necessary and sufficient condition (or cluster condition). We should follow Donnellan and recognize that certainly we can introduce terms that fit the classical description theory. It would be astonishing if the classical description theory was an impossibility – incoherent. So I would hold that if e.g. "bachelor" is an abbreviation for "eligible unmarried male" and "soda pop" for "sweet, carbonated beverage" then the apparent descriptive status of the relevant expressions is genuine.

To repeat, I am not claiming anything deep here. In particular I am not claiming that such acts of stipulation create what Boghossian (1996) calls "metaphysical analyticity" and which he claims is impossible. I am not involved in deep disputes about whether the rules of logic and the truths of mathematics are true by
convention. If “All green things examined before t are grue” is analytic and knowable a priori on the basis of Goodman’s stipulative definition, as I claim it is, this relies on an entire background of the rules of logic, grammar, philosophy, and our linguistic and social traditions.18 All that has to be in place. It is not a case of meaning conventions creating truths ex nihilo. I am just relying on the rather obvious point expressed in the online Stanford Encyclopedia of Philosophy article on the analytic/synthetic distinction: Indeed, these cases of “deep” natural kinds contrast dramatically with cases of more superficial kinds like “bachelor,” whose nature is pretty much exhausted by the linguistics of the matter. Again, unlike the case of polio and its symptoms, the reason that gender and marriage status are the best way to tell whether someone is a bachelor is that that’s just what “bachelor” means. (Rey 2018)

Or as Putnam puts it in an early article: There they are, the analytic statements: unverifiable in any practical sense, unrefutable in any practical sense, yet we do seem to have them. This must always seem a mystery to one who does not realize the significance of the fact that in any rational way of life there must be certain arbitrary elements. They are “true by virtue of the rules of language”; they are “true by stipulation”; they are “true by implicit convention.” (Putnam 1975a: 68)

My only point is that we do not need or want rigidity to explain the necessity of such truths as “All sloops are sailboats,” “All bachelors are unmarried males,” “All pediatricians are doctors.” And indeed as I will indicate, we do not need rigidity to explain the necessity of any necessary truths involving general terms. Of course such terms as “sloop” or “bachelor” were not introduced by explicit stipulations, as far as we know, but rather by implicit convention as Putnam indicates. Ironically it is Quine who shows the way that implicit stipulation or convention can give rise to analytic necessary truths in natural language. Philosophers like to claim that Quine demonstrated that truth by convention is an impossibly paradoxical notion in his famous article on that subject. I do not think that is quite accurate, nor do I think that Quine even claimed there to refute the notion of truth by convention beyond the areas of formal logic and mathematics. Let us hear from Quine: It may be held that we can adopt conventions through behavior, without first announcing them in words; and that we can return and formulate our conventions verbally afterward, if we choose, when a full language is at our disposal. It may be held that the verbal formulation of conventions is no more a prerequisite of the adoption of the conventions than the writing of a grammar is a prerequisite of speech; that explicit exposition of conventions is merely one of many important uses of a completed language. So conceived, the conventions no longer involve us in vicious regress. Inference from general conventions is no longer demanded initially, but remains to the subsequent sophisticated stage where we frame general statements of the conventions and show how various specific conventional truths, used all along, fit into the general conventions as thus formulated. (Quine 1966: 98)

18  See Devitt and Sterelny 1999: 101–104 for a helpful discussion of the logical underpinnings of analytic truths.


Quine’s next sentence is quite remarkable: “It must be conceded that this account accords well with what we actually do” (Quine 1966: 98). Precisely! Granted, Quine goes on to raise questions about the value of appealing to such implicit conventions, but nothing that a friend of analyticity couldn’t live with.19 Despite Quine’s later resistance to the idea, I would say that the “various specific conventional truths” he mentions are analytic. There are other ways to indicate the difference between typical natural kind terms and typical nominal kind terms. For example, consider Putnam’s famous Twin Earth thought experiment (Putnam 1975b). (This can also be conducted on our actual earth as well. See the Internet site sugartwin.com.) Water-twin is not water, tiger-twins are not tigers, gold-twin is not gold (and on earth we also have fool’s gold and Sugartwin). But could there be a screwdriver-twin that is not a screwdriver? We go to Twin Earth and find they use the implements that are as similar to our screwdrivers as water-twin is to our water – they look, feel, and are used exactly like our screwdrivers, etc. But then they are screwdrivers. I think the same thing would hold of sloop-twins, pediatrician-twins, refrigerator-twins, etc. In fact they aren’t “twins.” They are the things themselves. We would happily apply all of our occupation and artifact terms to the occupation- and artifact-twins on Twin Earth. But not water-twin  – that’s not water, period. Of course, the Twin Earth thought experiment is only of limited usefulness. E.g. what are we to say of e.g. blue? Suppose the things that look blue to us on Twin Earth reflect a different wave-length of light than blue things on earth. What would we judge? That they are not really blue, but really blue-twin? Do we have grue on Twin Earth or only grue-twin? I’m not sure and I’m not inclined to pursue this further. But it seems to me that there is certainly a striking Twin-Earth difference between water, tigers, and gold and the terms for them on the one hand, and pediatricians, screwdrivers, sloops and the terms for them on the other and such a difference should not be swept under the rug and ignored by semanticists. Does any of this demonstrate that e.g. “bachelor,” “sloop,” “grue,” “rigid applier,” “screwdriver,” i.e. what I’ve been calling nominal kind terms are not rigid or that some cobbled up theory of their rigidity cannot explain the necessity of e.g. “All sloops are sailboats”? Absolutely not! But it does argue against any reason for supposing the there is any point or use for so doing. The complexity and confusion are not worth it. We have a simpler story: the classic story that “All sloops are sailboats” is necessarily true and a priori because it follows from the (implicitly stipulated) definition of “sloop.” It is analytic. It is one of the “various specific conventional truths” that Quine discusses. Are there problems with this story? Of course. But there are even greater problems and complexities with the rigidity story. Analyticity fell into disrepute because of the attacks on the notion by Quine and others based on behaviorism. Behaviorists didn’t like meanings. But behaviorism has fallen into disrepute and meanings are back in vogue. Now meanings can again

19  And of course ordinary natural kind terms are not introduced by a formal baptism, but rather by an informal implicit convention or stipulation "adopted through behavior" as Quine puts it.


be used to explain the epistemic and modal profiles of nominal kind terms. At the same time we can leave intact the insights of Kripke and Putnam into the differing epistemic and modal profiles of natural kind terms. There are fundamental differences in the semantic and modal profiles of natural kind terms and nominal kind terms and running them together as Martí and Martínez-Fernández, LaPorte, and others do is mistaken. As Fara puts it: “uniformity is not advantageous per se. Uniformity is advantageous just to the extent that it provides for theoretical simplicity along with empirical adequacy” (Fara 2015: 109). Quite so. Treating all simple general terms, common nouns, single-word property and predicate designators as rigid fails on both counts. It is not theoretically simple and it is not empirically adequate.20

12.4  Conclusion

So we are still faced with the overgeneralization problem. Surely Kripke is right that natural kind and related terms like the color terms are very much like standard proper names, while as I have argued most artifact and social terms are not. To repeat, it would be so nice if we could characterize the natural kind terms as rigid and the nominal kind terms as non-rigid. Alas, no one has found a way to distinguish the natural kind terms, and most have decided the project isn’t so important after all. I disagree for the reasons given above. But since there is no viable way of characterizing the natural kind terms as rigid and the nominal kind terms as non-rigid, the project of applying rigidity to general terms at all is doomed. It is doomed to capitulate to the overgeneralization problem. Every unstructured general term would end up rigid. This is false semantic uniformity. My conclusion is that we should give up trying to make general terms rigid or non-rigid. The distinction, so useful with singular terms, does not usefully apply to general terms.

What can replace rigidity and non-rigidity in the characterization of general terms? One very promising approach is elaborated by Christian Nimtz (2017), although his approach sweeps up natural kind terms in the broader category of what he calls “paradigm terms.” Nimtz’s argument is too complex to fully expound here, but the basic idea is that paradigm terms are terms whose application conditions are relationally determined, object involving, and actuality dependent. Paradigm terms

20  I think some of the motivation for and the infatuation with semantic uniformity and the extension of rigidity to nominal kind terms comes from confusing rigidity with consistency of meaning. As I pointed out elsewhere (Schwartz 2002), words do not change their meanings when we talk about other possible worlds. Meanings are consistent when we talk about counterfactual situations. When we say “If Hillary had not used a personal email system when she was secretary of state, she would not have lost the election,” “Hillary” means Hillary, “had” means had, “personal” means personal, “email” means email, “system” means system and so on. LaPorte explicitly acknowledges this (LaPorte 2013: 36–37) and claims to distinguish rigidity from consistency of meaning. Johnson, on the other hand, exaggerating only slightly, claims that everything is rigid (Johnson 2011). I do not see how this can be anything other than mere meaning consistency.


are semantically tied to paradigms which are actual objects. Any such terms apply to anything bearing a specific equivalence relation (varying from term to term) to the actual objects serving as their paradigms. This determines their extensions in all possible worlds. So far this is nicely reminiscent of Putnam’s work on natural kind terms, but Nimtz dispenses with the categories of natural kind term and rigid general term in his semantics, thus improving on previous approaches. According to Nimtz the standard natural kind terms (as well as many other terms) are paradigm terms but their status as natural kind terms plays no role in their epistemic and modal profiles. Their status as paradigm terms explains the necessity and a posteriority of theoretical identifications involving them. Nimtz argues, cogently in my opinion, that rigidity plays no role in establishing these results, and indeed plays no role at all in the semantics of paradigm terms or any other general terms. (He allows that paradigm terms used as singular terms to designate the kinds themselves are rigid, but he considers this a simple consequence of their paradigm semantics.) Nimtz on the basis of his paradigm term theory provides arguments establishing that kind-term identifications like “Something is a Brontosaurus iff it is an Apatosaurus” (another favorite example of LaPorte’s) or “Something is gold iff it is 79Au” are necessary, if true. Any such theoretical identifications comprising paradigm terms are necessary, if true, irrespective of whether the paradigm terms involved are natural kind terms. His analysis makes do entirely without the notion of rigidity. On the other hand, terms that are stipulatively introduced via descriptions, either explicitly or implicitly, are not paradigm terms and thus have a different semantic profile. Goodman could hardly have introduced “grue” by a paradigm. That was just the point of “grue”! Before t all examined green things are grue. Likewise I do not think that terms like “screwdriver,” “sloop,” “pediatrician,” etc. are terms whose application conditions are relationally determined, object involving, and actuality dependent although actual examples are often useful in helping people to understand the terms. E.g. Goodman notes that before t every examined emerald is green and it is grue.21 But his introduction of “grue” is not paradigmatically tied to actual emeralds. On the other hand, our term “emerald” is. “Emerald” is a paradigm term, “grue” is not and this explains their differing semantic profiles. Nimtz’s theory is not entirely trouble-free. I assume that “green” is a paradigm term and this helps explain how it differs from “grue.” But now Nimtz might have difficulties with Devitt’s famous “qua problem” (see Devitt and Sterelny 1999: 79–81). When we point to an object and say “That’s green” how do we know that we are paradigming (if I may) color and not some other feature or kind? Despite some such possible roadblocks, I think that an approach featuring a concept like Nimtz’s paradigm terms is much simpler, clearer, and more comprehensible than the rigid/non-rigid story, and Nimtz’s story is empirically adequate in that it gives a clear, comprehensible distinction between terms like “tiger,” “gold,” and “water” on the one hand and terms like “pediatrician,” “grue,” and “screwdriver” on the other.

21  Recall that “All emeralds are green” is true, whereas “All emeralds are grue” is false.


And it explains why the former give rise to a posteriori necessary truths, whereas it allows that the latter are the subjects of analytic truths. In our studies of general terms, I urge that we travel a road more like Nimtz’s paradigm semantics, rather than the thorny path of rigidity.22

22  I would like to thank Michael Devitt, Andrea Bianchi, Michael Gardner, Joseph LaPorte, and Christian Nimtz for helpful comments on earlier versions of this paper.

References

Bach, K. 2015. The predicate view of proper names. Philosophy Compass 10 (11): 772–784.
Boghossian, P. 1996. Analyticity reconsidered. Noûs 30 (3): 360–391.
Devitt, M. 2005. Rigid application. Philosophical Studies 125: 139–165.
———. 2009. Buenos Aires symposium on rigidity: Responses. Análisis Filosófico 29: 239–251.
Devitt, M., and K. Sterelny. 1999. Language and reality. 2nd ed. Cambridge, MA: The MIT Press.
Donnellan, K. 1977. The contingent a priori and rigid designators. Midwest Studies in Philosophy 2: 12–27.
———. 1983. Kripke and Putnam on natural kind terms. In Knowledge and mind, ed. C. Ginet and S. Shoemaker, 84–104. Oxford: Oxford University Press.
Fara, D.G. 2015. Names are predicates. Philosophical Review 124 (1): 59–117.
Glüer, K., and P. Pagin. 2012. General terms and relational modality. Noûs 46 (1): 159–199.
Goodman, N. 1965. Fact, fiction, and forecast. 2nd ed. Indianapolis: The Bobbs-Merrill Company, Inc.
Inan, I. 2008. Rigid general terms and essential predicates. Philosophical Studies 140: 213–228.
Johnson, M. 2011. Harlequin semantics. Ph.D. dissertation, Rutgers, The State University of New Jersey.
———. unpublished. Rigidity for the common noun. http://michaeljohnsonphilosophy.com/wpcontent/uploads/2013/03/Rigidity-Reborn.pdf. Accessed 25 May 2017.
Kripke, S.A. 1980. Naming and necessity. Cambridge: Harvard University Press.
LaPorte, J. 2000. Rigidity and kind. Philosophical Studies 97: 293–316.
———. 2013. Rigid designation and theoretical identities. Oxford: Oxford University Press.
Linsky, B. 1984. General terms as designators. Pacific Philosophical Quarterly 65: 259–276.
———. 2006. General terms as rigid designators. Philosophical Studies 128: 655–667.
López de Sa, D. 2008. Rigidity for predicates and the trivialization problem. Philosophers’ Imprint 8 (1): 1–13.
Martí, G., and J. Martínez-Fernández. 2010. General terms as designators: A defense of the view. In The semantics and metaphysics of natural kinds, ed. H. Beebee and N. Sabbarton-Leary, 46–63. New York: Routledge.
———. 2011. General terms, rigidity and the trivialization problem. Synthese 181: 277–293.
McGinn, C. 1982. Rigid designation and semantic value. The Philosophical Quarterly 32: 97–115.
Mill, J.S. 1974. The collected works of John Stuart Mill, Volume VII – A system of logic ratiocinative and inductive Part I (1843). Toronto: University of Toronto Press.
Nimtz, C. 2017. Paradigm terms: The necessity of kind term identifications generalized. Australasian Journal of Philosophy 95 (1): 124–140.
Orlando, E. 2014. General terms and rigidity: Another solution to the trivialization problem. Manuscrito 37 (1): 51–84.
Putnam, H. 1975a. The analytic and synthetic. In H. Putnam, Philosophical papers: Vol. 2: Mind, language, and reality, 33–69. Cambridge: Cambridge University Press.



———. 1975b. The meaning of ‘meaning’. In Minnesota studies in the philosophy of science: Vol. VII: Language, mind, and knowledge, ed. K. Gunderson, 131–193. Minneapolis: University of Minnesota Press.
Quine, W.V. 1966. Truth by convention (1935). In W.V. Quine, The ways of paradox, 70–99. New York: Random House.
Rey, G. 2018. The analytic/synthetic distinction. In The Stanford encyclopedia of philosophy, ed. E.N. Zalta, Fall 2018 ed. Stanford: Metaphysics Research Lab. https://plato.stanford.edu/archives/fall2018/entries/analytic-synthetic/.
Rubin, M. 2013. Are chemical kind terms rigid appliers? Erkenntnis 78: 1303–1316.
Salmon, N. 2005. Are general terms rigid? Linguistics and Philosophy 28: 117–134.
Schwartz, S.P. 1978. Putnam on artifacts. The Philosophical Review 87 (4): 566–574.
———. 1979. Natural kind terms. Cognition 7: 301–315.
———. 2002. Kinds, general terms, and rigidity: A reply to LaPorte. Philosophical Studies 109: 265–277.
Sullivan, A. 2007. Rigid designation and semantic structure. Philosophers’ Imprint 7 (6): 1–22.

Chapter 13

Devitt and the Case for Narrow Meaning

William G. Lycan

Abstract  In the late 1970s, Jerry Fodor, Hilary Putnam and Stephen Stich argued that the intentional content of most mental states is “wide,” i.e., does not supervene on the physical makeup of the subject’s head at a time. But many (including Fodor himself) have since argued that underlying the ordinary wide contents there must also be distinct, narrow ones. In “A Narrow Representational Theory of the Mind,” Michael Devitt defends the claim that the laws of mental processes as investigated by cognitive psychology should advert only to the narrow properties of representations, though some of those properties will be meaning properties specified by conceptual role, and that the scientifically appropriate boundary for explaining the behavior of an organism is its skin. But in Coming to Our Senses he repudiates that doctrine, because he has come to accept that widely characterized behavior is a more appropriate explanandum for psychology than behavior narrowly characterized. This paper argues that there are narrow meanings, on the model of Kaplan’s notion of “character,” but (agreeing with Devitt) we have seen no reason to believe that there are narrow contents underlying ordinary wide ones.

Keywords  Mental representation · Wide content · Narrow content · Cognitive psychology · Psychological laws · Psychological explanation · Conceptual role · Functional properties · Kaplanian character

I first met Michael at Tufts in 1974, when I was visiting there and he came to give a paper. I believe the next I saw him was at Kingsford Smith airport in 1978, where he and David Armstrong greeted my wife and me as we began our first sojourn in Australia. I will always be grateful to Michael for arranging my first visiting appointment at Sydney University.


In the 5 or 10 years following, he came to the USA fairly often, and our families spent several Christmases together, in Columbus, in Connecticut, and in Chapel Hill. Michael and I always gave each other bottles of single-malt Scotch, and opened them early to assist in preparations for Christmas morning. A few days later he and I would head off to the Eastern Division meetings of the APA. On one of those occasions we had an early flight from Raleigh-Durham to the convention city, and I had booked a taxi for a very early hour, possibly 4:30 a.m. No one who knows Michael will be surprised to hear that when we got into the taxi we were already in the middle of a spirited philosophical debate. It continued, pretty loudly, for 30–40 minutes until we reached the airport. The driver had remained silent, but when I paid him, he said: “What all were you guys talkin’ about back there? … On second thought, why do I give a s∗∗t?” That discussion may have been about technicalities regarding direct reference. This paper is about narrow mental content, and in particular Devitt’s distinctive and valuable contribution to that debate, “A Narrow Representational Theory of the Mind” (1989), though as we shall see, in Coming to Our Senses (1996) he repudiated the view defended there.

13.1  Narrow Content

In the beginning, all content was narrow. The beginning had a pretty good run, lasting from time immemorial until the early 1970s, when Hilary Putnam discovered Twin Earth. Putnam (1975) originally focused just on the meanings of people’s linguistic utterances, arguing that two perfect twins, an Earthling and that person’s molecular duplicate on Twin Earth in parallel physical surroundings, can utter identical sentences that nonetheless have different meanings. Stich (1978) and Fodor (1980) extended this argument to cover the contents of people’s propositional attitudes, so far as those contents comprised concepts of certain sorts, such as natural-kind concepts. Stich added several different sorts of example; ordinary indexicals provided a humble one that should have been noticed much sooner. Stich then argued that Fodor had underestimated the range of the wide (“what Fodor sees as a bit I see as the tip of an iceberg” (1980: 97)).

Narrow content by definition is necessarily shared by molecular duplicates. “Content” means traditional intentional content, referential and/or propositional as measured by sets of possible worlds. Mental states containing indexical elements, such as my seeing a teapot to my left or your hoping for peace on earth or Dave’s wanting to be famous, are unequivocally wide, because our respective Twins differ in their contents so measured. What I see is a teapot to Bill Lycan’s left; what Twin Bill sees is a similar teapot, though a different one, to his left. You want peace on (our) planet Earth, but what your twin wants is peace on the planet s/he inhabits, which is an entirely different planet despite its uncanny molecular similarity to Earth. Dave wants him(self), Dave, to be famous, while Twin Dave wants him(self),


Twin Dave, to be famous and could not care less about Dave, of whom he has never heard; and what Dave wants and what Twin Dave wants are quite different singular propositions, true in different sets of worlds.1

That far, the discussion had confined itself to standard cognitive and conative propositional attitudes. Block (1990, 1996) widened the wide further by granting a sense in which perceptual contents are wide – in which, for example, a same-colored object might look red to me but blue to my molecular twin. Yet Block insisted that there is another and perhaps more important sense in which the perceptual contents are narrow, in which the object must look the same color to any molecular duplicate of mine. (He called those narrow properties “qualia,” in a special neologistic sense of that word.) I have disputed the existence of “qualia” in Block’s sense (Lycan 1996: ch. 6; 2001), but that issue is not our concern here.

13.2  Syntactic Psychology

Despite his title, Devitt’s (1989) focus is not on everyday philosophy of mind and mental states but on the nature of laws in scientific psychology, and specifically on the “Revisionist Line” (371). According to one version of the latter, “[c]ognitive psychology must explain the interaction of thoughts with each other and the world by laws that advert only to formal or syntactic properties, not to truth-conditional ones.… [P]sychology must be ‘narrow’” (ibid.). “Narrow” is well understood, but “formal” and “syntactic” were still being used impressionistically in the 1980s, and Devitt takes some pains to isolate the concept that is driving the Revisionist Line: The formal/syntactic properties that a symbol has in his sense are properties it has in virtue of its role in a particular system of symbols, and more specifically,

[t]hey are functional, structural, or relational properties…. [N]othing outside the system has any bearing on these properties. In particular, meaning is irrelevant to them…. In sum, a symbol has its syntactic properties and relations solely in virtue of its relations to others in a system of symbols; it has them solely in virtue of the system’s internal relations. (374–375, italics original)

(Devitt gives and elaborates the examples of chess pieces as such, and types such as “variable,” “one-place predicate,” “conjunction” and “conditional antecedent” in the predicate calculus.) The Revisionist thesis, then, is

Syntactic psychology: The laws of mental processes advert only to the syntactic properties of representations. (376)2

1  As we shall see, Devitt (1996) starts with a general concept of “semantic role,” and develops a notion of mental “meaning” from it. “Narrow meaning,” as he will prefer to call it, is rightly broader than narrow content as just defined. 2  Coming: “Psychological laws should advert to properties of tokens that are only syntactic” (1996: 264).


Devitt considers each of two arguments for syntactic psychology. The first is from the computer analogy. As is uncontroversial, computational processes are defined syntactically; so to reject syntactic psychology would be to abandon the analogy. To this, Devitt responds by granting that “processes from thoughts to thoughts,” such as inference, should be understood syntactically, but he reminds us that there are at least two other sorts of mental process, “from sensory inputs to thoughts” and “from thoughts to behavioural outputs,” and we have been given no reason to believe that those are merely syntactic. [C]omputers do not have transducers anything like those of humans and do not produce behavior in anything like the way humans do…. Any interpretation a computer’s symbols have, we give them. However, it is plausible to suppose that a human’s symbols have a particular interpretation in virtue of their perceptual causes, whatever we theorists may do or think about them. Furthermore, it is because of those links to sensory input that a symbol has its distinctive role in causing action. (1989: 376–377, italics original)

The second argument is from “methodological solipsism and psychological autonomy” (377): In psychology, we are concerned to explain why, given stimuli at her sense organs, a person evinced certain behavior. Only something that is entirely supervenient on what is inside her skin  – on her intrinsic internal physical states, particularly her brain  – could play the required explanatory role between peripheral input and output. Environmental causes of her stimuli and effects of her behavior are beside the psychological point. The person and all her physical, even functional, duplicates must be psychologically the same, whatever their environments…. All of this counts against the Folk view that a thought’s property of having a wide meaning, or truth conditions, is relevant to psychology.

If sound, Devitt says, this argument establishes that truth-conditional properties are irrelevant to psychology.3 But (obviously) that does not entail syntactic psychology; we would need the further premise that no nonsyntactic properties other than truth-conditional ones are relevant. Devitt is going to defend a “functional(conceptual-)role” semantics for psychology, that is narrow but not purely syntactic in the sense he has defined. According to that view, “the meaning (or content) that must be ascribed for the purposes of psychology can be abstracted from wide truth-conditional meaning” (378). We are to “abstract from outside links” to obtain a “proto-truth-conditional” meaning; “[i]t is the inner, functional-role part of wide meaning” (379, italics original). Narrow word meanings are “functions [in the mathematical sense] taking external causes of peripheral stimuli as arguments to yield wide (referential) meanings as values” (379–380), which are narrow (shared by molecular duplicates) but not syntactic.4

3  And he does not contest the argument up till that point.
4  Notice that two different notions are expressed in these last two sentences, that do not necessarily coincide: internal functional roles, of the sort that would be codified in a Lewisian (1972) Ramsey sentence, and mathematical functions from external causes to wide referential meanings. “Function” as between those two is a pun, not that Devitt is confused as between the two meanings of the word.


Take mental natural-kind terms such as “echidna” and “platypus,” and suppose for a moment that they are purely referential, without descriptive content. The difference between them is syntactic: they are distinct symbols in the representational system. However,

what makes something an ‘echidna’ token [in particular], and not a ‘platypus’ token, is that it is linked to echidna-ish stimuli. That is what gives the term the narrow meaning of ‘echidna’ in particular. (380)

Nothing changes if “echidna” and “platypus” are not purely referential but have some descriptive content. Their descriptive content may be syntactic, a matter of inferential relations to other mental terms, but “[i]f the buck is to stop, the explanation [of some terms’ meanings] must ultimately be of terms relying for their reference, fully or partly, on direct causal links to external reality” (ibid.).5

13.3  Narrow Psychology

Devitt then returns to the issue of psychological laws, and spends the rest of his paper defending “narrow psychology: The laws of mental processes advert only to the narrow semantic properties of representations” (381), i.e., as well as the representations’ syntactic properties.6 (But for whatever reason he simply does not consider narrow content, i.e., narrow truth-conditional meaning.) First he argues that the laws must (indeed) appeal to narrow semantic properties in addition to syntactic ones; then he argues separately that nothing wide is required.

In support of the necessity claim: Laws for processes from sensory inputs to thoughts have to explain the fact that a particular sensory stimulus affects some thoughts and not others; for example, it may lead to certain beliefs being formed and others dropped, but leave the vast majority of beliefs unchanged. Suppose that the stimulus is the sight of Ron riding and that the only effect of this is the formation of the belief ‘Fa’…. Why should the stimulus lead to ‘Fa’ rather than [to ‘Fb’, ‘Ga’, ‘Gb’, etc.] …, each of which is also a one-place predication and different from the others? A stimulus has a distinctive role in thought formation…. Syntax alone cannot explain that distinctive role. (1989: 382)

We are forced to appeal to lawlike links between thoughts and sensory stimuli. So too for processes from thoughts to behavioral outputs. Why does a given thought lead to certain behavior rather than quite different behavior? “[W]e can hope to find laws about the role of thoughts containing the [relevant] term in bringing about certain sorts of behavior” (383). Such laws will have to connect the

5  Devitt does not consider Fodor’s (1980) argument from opacity. To expound and criticize that elusive argument would have taken a separate paper. 6  Coming: “Psychological laws should advert to properties of tokens that are only narrow semantic” (1996: 275). Devitt adds, “This should be read as a commitment to laws that advert to properties that are not syntactic, for example, to narrow word meanings.”


thoughts to motor outputs leading to action on or involving certain external objects rather than others. “Suppose that the behavior [in question] is the opening of a gate in the path of the mounted Ron, and that the belief that led to this is ‘Fa’” (ibid.); how could that causal chain be explained without appeal to some psychological properties that somehow involve Ron and the gate? Of course, the rhetorical questions here are welcomed and enthusiastically answered by those of Devitt’s opponents who believe that psychological laws will have to make mention of ordinary wide semantic properties. The agent opened the gate for Ron because her/his belief was referentially about that particular gate and about Ron, rather than about something else entirely. So Devitt must now defend his sufficiency claim, that narrow semantic properties will do and everyday wide ones are not needed. He begins that task by revisiting the argument from methodological solipsism and psychological autonomy. Though it failed to establish syntactic psychology, it has some force in support of narrow psychology. Now, by the mid-1980s the argument had taken several slightly different forms, and one of those had been shown by Burge (1986) to be invalid. Quoting his formulation: [E]vents in the external world causally affect the mental states of a subject only by affecting the subject’s bodily surfaces; … nothing (not excluding mental events) causally affects behavior except by affecting (causing or being a causal antecedent of causes of) local states of the subject’s body. (15)

Burge had pointed out, and Devitt agrees, that narrow psychology does not follow, because those principles themselves show nothing about how the local states are to be individuated, and if they are individuated relationally, they themselves are wide. But Devitt thinks the argument is still sound in spirit, because there is a “strong conviction that a scientifically appropriate boundary for explaining the behavior of an organism is its skin” (1989: 388).7 In any case, Devitt now argues that although folk belief ascription is wide and commonsense “explanation” can take different forms for different purposes, even the folk recognize that strict explanation by laws would be narrow. He cites the case of demonstrative beliefs (1989: 390). Raelene sees a man approaching and, thinking “That man has a knife and means me no good,” reaches for her gun. In a similar situation, Gail sees a man, thinks the same, and does the same. If the folk have any law in mind, it is unlikely that the law is wide, so far as the demonstrative is concerned…. [T]he folk would regard [Gail’s] belief as the same as Raelene’s for the

7  Fodor (1987) went on to repair the defect by arguing independently that for scientific purposes, states and events must be individuated by their causal powers. Devitt addresses this in a footnote (1989: 395–396 n. 29), but here again he is not convinced; he glimpses the objection that he will address at the end of the paper (see below), and that in Coming will disabuse him of the argument entirely. Wilson (1995: ch. 2) demolishes the argument from causal powers on its own terms. Still another version of the argument is the plain appeal to Twin Earth: If I and Twin Bill are molecular duplicates and so exactly alike in our heads, then necessarily (ignoring quantum randomness) we will behave in exactly the same way. That version too will succumb to the later objection just mentioned.


purpose of [causal-]psychological explanation…. They would think that even the lowest level laws that cover Raelene’s belief will also cover Gail’s; the different references of ‘that’ are psychologically irrelevant. (ibid.)

Granted, the folk may not be so narrow-minded for beliefs involving proper names, but Devitt argues that in a case of two functionally identical8 English philosophers’ knowing little about different Australian philosophers each named ‘Bruce’, the psychological laws should and will predict that Jeremy and Nigel will make similar remarks about “Bruce” at parties, and will behave similarly when finally meeting their respective Ozzie philosophers.9 The point can be extended even to the natural-kind terms that motivated Twin Earth. And this is all very plausible.

13.4  Defending Narrow Psychology

But now Devitt confronts each of two objections that will be made by the “wide”-favoring opponent. The first is that since natural language meanings are themselves truth-conditional, narrow meanings are hard to specify directly. Devitt sidesteps that (392) by pointing out that assuming the narrow meanings do exist, we can refer to them as such, if only by a notational device such as attaching an asterisk to the attitude verb (“Raelene believes∗ that …”). And we know something about how to abstract away from “the links outside the skin” that determine external-world reference. Fair enough.

But the second objection is the big one (Baker 1986, 1995, and Burge again): What is “behavior”? Narrow psychology could explain mere bodily movement, and what is most conspicuously in common between my behavior and that of my Twin is bodily movement. But (this is my own way of making the point) suppose I believe

8  Of course no two human beings are ever functionally identical in reality, but Devitt points out (391) that all we need is functional similarity in the respects relevant to the situation in question. 9  Notoriously, in episode 22 (1970) of “Monty Python’s Flying Circus,” the Pythons scabrously lampooned Australian philosophers, and in that sketch pretended (very humorously, I admit) that every Ozzie philosopher is named Bruce. When I first arrived at Sydney University I had looked forward to learning what motivated that particular trope, but to my disappointment the real Australian philosophers did not know either, as ‘Bruce’ was not a conspicuously common name. It must be remembered, though, that in the sketch, the new member of the department at the University of Woolloomooloo, “a chap from pommie land,” was named Michael, though with a made-up surname.

Fourth Bruce: Michael Baldwin – this is Bruce. Michael Baldwin – this is Bruce. Michael Baldwin – this is Bruce.
First Bruce: Is your name not Bruce, then?
Michael: No, it’s Michael.
Second Bruce: That’s going to cause a little confusion.
Third Bruce: Mind if we call you ‘Bruce’ to keep it clear?


that I am underpaid,10 and accordingly Twin Bill believes that he is underpaid. If you were able to watch us as on two TV screens showing the same program, you would see us moving exactly in parallel, each heading downstairs to the department chair’s office, banging on the door, and shouting “I want a raise and I want it now!” We are in the same narrow functional states, even though our ordinary wide beliefs are about different people separated by light-years, and we behave in exactly the same way. But in another obvious sense, we are not behaving in the same way. I do not bang on Twin Don Baxter’s door on Twin Earth, nor does Twin Bill bang on Don Baxter’s here on our planet, and would not even if such things were made feasible by an interstellar travel agent. Don is not Twin Bill’s department chair and has no influence on Twin Bill’s salary. He and Twin Don are entirely different people.

More generally, psychology of the sort that interests us “attempts to explain actions, which are behaviors intentionally described” (1989: 392, italics original). Burge’s examples: “she picked up the apple, pointed to the square block, tracked the moving ball, smiled at the familiar face, took the money instead of the risk” (1986: 11). Those actions themselves are individuated widely. And it seems that the intentional mental states that make them actions as opposed to mere bodily motion would themselves have to be widely individuated if they are to be connected explanatorily to the actions. Devitt responds:

To bring behavioral outputs under psychological laws, the outputs must be seen as goal-directed and as actions. However, we can see them in this way whilst abstracting from the particular contexts that are referred to in intentional descriptions and that are effected by actions. For example, the saga of Raelene may end with an action that we would ordinarily describe as ‘her shooting that man’. But for strictly psychological purposes, it does not matter that it was that particular man that she shot. Our language does not have a way of setting aside that fact, but we can easily introduce one using ‘∗’ again…. And, once again, there is no harm in not being so strict: in giving a description that makes the behavior open to a hybrid explanation, part psychological and part sociological. (1989: 393)

That completes Devitt’s (1989) case for narrow meanings. His picture is an appealing one. What makes it particularly appealing for me is that, on one natural interpretation harking back to White (1982), it takes its cue from the simple indexical cases and extends Kaplan’s (1979) notion of “character” to the other referring expressions, proper names and natural kind terms. Character is a type of meaning and an indispensable one, and there is no mystery about sameness of indexical character despite difference of truth-condition. Being a function from contexts to semantic contents, it is the type of meaning that an indexical has independently of context. And that is what is wanted – not necessarily a narrow content (i.e., propositional content), but a narrow meaning that captures what is common to the beliefs of molecular twins, exhibiting the sense in which they have the same belief despite the beliefs’ differing in truth-condition. Raelene believes that she herself is endangered by a man approaching her, and Gail believes that she herself is endangered by a man approaching her.
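To fix ideas, the Kaplanian picture invoked here can be put schematically; this is only a rough sketch of the standard two-stage apparatus, with ‘char’ and ‘agent’ as merely illustrative labels, not a formulation Kaplan or Devitt is committed to:

    char_S : contexts → contents
    char_[‘I am in danger’](c) = the proposition that a is in danger, where a is the agent of c
                               = λw [a is in danger in w]

Fed Raelene’s context, that character yields the singular proposition that Raelene is in danger; fed Gail’s context, the proposition that Gail is in danger. Same character, different contents, which is just the combination of narrow sameness and wide difference described above.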

10  Which belief, I’m bound to say, would be ludicrously false.


13.5  Objections to the 1989 Picture

That said, I shall consider some objections to Devitt’s picture, starting with my own and then turning to the ones he himself makes in Coming.

Objection 1: Devitt insists that unlike pure thought-to-thought processes, input links (from sensory inputs to thoughts) and output links (from thoughts to behavioral outputs) cannot be viewed syntactically. But taken at face value, that’s wrong. Consider vision, for simplicity. The proximal stimuli are rod and cone activations, which send signals up the optic nerves to more central visual processing in LGN and V1. But we can ignore the referential connotation of “signals” and give purely abstract descriptions of the photoreceptors and patterns obtaining among their firings; likewise for the visual processing that results in perceptual belief. (And that is very much in keeping with functionalism.)

Devitt will say, and rightly: That only puts off his original objection to syntactic psychology, and he can reiterate his defense of narrow meaning: We still have to explain the fact that a particular sensory stimulus affects some thoughts and not others, leaving the vast majority of beliefs unchanged. So retinal etc. patterns must themselves be associated with external objects. If the stimulus is again the sight of Ron riding and its only effect is the formation of the belief ‘Fa’, syntax alone cannot explain that.

But then, Objection 2: What epistemologically relates retinal patterns to external objects? And likewise for motor outputs? Devitt speaks of “echidna-ish” stimuli, but what would be an “echidna-ish” retinal pattern? One produced by staring an echidna full in the face? A side view? From above or from below? A sight of (what one has evidence is) an echidna just disappearing around a corner? There is no obvious retinal pattern that would be common to all those contacts. Devitt gives one more concrete example (1989: 387): the narrow meaning of ‘fire’ might be tied to the sight of flames and smell of smoke. Flames might produce a predictable abstract pattern on the retina, and smoke might have a distinctive effect on the olfactory epithelium (though for those facts to be predictively relevant, the cognitive psychologist would have to know them).

In any case the objection is considerably clearer and more forceful for peripheral outputs, which for our purposes are motor instructions to muscles. Such instructions collectively produce particular bodily motions, but as Devitt has acknowledged, bodily motions can constitute very different actions depending on context. Suppose in an even higher-tech society than our present one, we performed most actions by pushing buttons, as in an elaborated airplane cockpit. The motor outputs alone would give hardly a clue as to what an agent was doing, unless the psychologist had a detailed knowledge of the massive keyboard and its functions. Of course, the output half of Objection 2 is closely related to the issue of widely individuated behavior, to which Devitt will return. None of the foregoing points would come as news to him now.


13.6  Abandoning Narrow Psychology

As previously noted, in Coming to Our Senses seven years later (1996: ch. 5), Devitt abandons narrow psychology – oddly, without announcing that reversal of position. He confines his meta-comment to a preludial footnote:

I shall draw on my 1989a [“A Narrow Representational Theory …”] and [1991] in places (particularly in sections 5.4–5.7), but I am now much less sympathetic to Revisionism. In those papers I was impressed by Burge’s argument for the status quo (1986), but insufficiently so. I was led astray by my conflation of the two views of narrow meanings: as functions from contexts to wide meanings and as functional roles. (255 n. 10)

Let’s see what he goes on to argue that impugns narrow psychology. Separating the two views he has just mentioned, he addresses each on its own, and argues that neither will do. So, Objection 3: Neither conception of narrow meaning is adequate for purposes of scientific psychology. Beginning with the first view of narrow meanings, as Kaplanian character functions, Devitt points out that a theory of wide referential or truth-conditional meaning would have to explain “the way in which the mind must take a fact about the external context as the function’s argument to yield a wide meaning as its value” (286, italics original), i.e. (?), the way in which the function is computed. (The reason I’ve queried my “i.e.” is that I don’t understand Devitt’s allusion to “the mind” and what it “must take.” It is the theorist whose task is to explain the way in which the external context combines with internal properties of a mental token to yield the token’s wide meaning. The subject’s mind would take a fact about the context and consider it as the function’s argument only if, as is rare, the subject happened to be thinking about such things.) That is, though Devitt does not use the term, what we would need is a psychosemantics in Fodor’s (1987) sense, a (preferably naturalistic) theory of what gives a mental token the external reference or other truth-conditional meaning it has. “So, if we had a theory of [a particular token’s] wide meaning, we would know all that we needed to about its narrow meaning” (1996: 286, italics original). I take it a “theory of” a particular token’s wide meaning would be an application of our preferred psychosemantics to the token, based on whatever that psychosemantics specifies as the token’s own relevant properties. On the character function view, Devitt says, “narrow meanings may be ‘coarse grained’ in that there is not much to them and ‘promiscuous’ in that they can yield any of a vast range of wide meanings as values by changing the relevant external context as argument” (287–288). When Oscar has a belief about water, the corresponding beliefs of various Twin Oscars on other near-twin planets (that is, the beliefs sharing the same character function) might be about milk or about gin or what have you, though the Twins’ beliefs might be far from true. (Or, if readjustments were made to the narrow meanings of other mental tokens, the beliefs might be brought “back toward the truth.”) Right; so far so good, and no surprises. At this point (288) Devitt considers what we might take to be a sample psychosemantics, “IT,” that he has outlined in the previous chapter and that will be familiar


to anyone who has followed Devitt’s well-known work on linguistic reference (notably Devitt 1981). (And a reader of Coming must understand that as regards reference and meaning, Devitt treats thought and language almost interchangeably.11) IT applies just to “designational” or referential as opposed to attributive mental singular terms, and is “historical-causal.” The meaning of such a term would be the property of referring to its object by a certain type of “d-chain,” a historical-causal chain grounded in perceptions of its object and continued by reference borrowings of the Kripkean sort. By far the greatest part of the work of determining reference is accomplished by these external machinations, which of course may be historically very complicated.12 So far this is grist to the psychosemanticist’s mill; so what is Devitt’s point?

In sum, according to IT, the narrow meaning of a name token places only a trivial constraint on what the token refers to in a context …. As a result, given the right context, … any object at all can be the referent of a token with a certain narrow meaning. (1996: 289)

And “[i]f the narrow meanings of words like ‘water’ and ‘Reagan’ really were as coarse grained and promiscuous as I have been suggesting, how could they serve the needs of psychological explanation?” (291).

Very well; but in response, one small point and one larger point. The small point is that a mental name may have at least a little more of an internal role than is suggested by IT. It has the syntax of a name. And it comes with an ontological category, as Devitt and Sterelny (1987) nearly concede in confessing their “qua-problem” (63–65, 72–75). To take Evans’ (1973) famous example: “Anir,” actually the name of a son of King Arthur, was thought by some to name Arthur’s burial place. One who said “Anir was doubtless a green and pleasant spot” would simply be misreferring. Moreover, there may be more detailed descriptive constraints, as is claimed by Rosenberg (1994) on the basis of apparent counterexamples to Devitt (1981). Any such descriptive constraint will take the form of inferential connections, which will be part of the name’s narrow functional role.

My larger rebuttal is that even if mental proper names are purely referential expressions, of course they do not by themselves contribute much to psychological explanation, nor would they be expected to. They occur syntactically impacted in longer constructions, paradigmatically whole sentences. Devitt grants (288) that pronouns such as “this” and “she” are coarse-grained and promiscuous, but no one would

11  This can lead to confusion, as it does on, e.g., p. 288. On pp. 157–158 he notes a few assumptions he makes regarding the relation between thought content and linguistic meaning: (1) thoughts as specified by “that”-clauses and the corresponding verbal utterances have the same meanings. (2) Thought meanings should be given “a certain explanatory priority” (italics original). (3) À la Grice, a linguistic utterance’s conventional meaning “is explained in terms of regularities in speaker meanings.” [Very strong disagreement from me on that one.] Nonetheless (4) conventional linguistic meanings play a role in the determination of particular thought meanings.
12  Just how complicated is brilliantly illustrated by the main examples in Rosenberg (1994), particularly that of Gracie and “Barbara Cartwright” (135ff). Rosenberg intends some of his cases as counterexamples to Devitt’s (1981) analysis.


suppose that they contributed nothing to narrow explanations of behavior such as those that make Raelene and Gail behavior buddies. Raelene thinks something she would express as, “That man has a knife and means to hurt me with it; I’ll draw my gun on him and make him cry and bleat for his mother,” and Gail thinks something she would express using the exact same sentence. Had Gail switched a few pronouns and thought “He’ll draw his gun on me and make me cry and bleat for his mother,” her behavior would have been quite different, and certainly have diverged from Raelene’s.

13.7  Against the Functional-Role View of Narrow Meanings

What, then, of the other, functional-role view of narrow meanings? Devitt takes functional roles to be in themselves nonrepresentational, because for him “functional” means only “causal …, including dispositional” (292); there is function neither in the mathematical nor in the teleological sense. And in particular, the inputs and outputs are considered just as physically realized causes, not as indicators or signals. (Devitt reasonably asks, which subset of causal relations would it be that constitutes a mental token’s narrow meaning?13 Here the psychosemanticist could help only by offering a functional-role psychosemantics or by apparently collapsing the present option back into the character function view.)

Devitt returns (1996: 294) to the issue of widely individuated actions/behavior. A narrow functionalist psychology would explain bodily motions, and predict the same motions for any molecular duplicate, but it could not explain intentional behavior as such. Of course some actions are characterized in success terms: “picked up the apple,” “took the money”; and obviously a narrow psychology could not on its own explain success. That is no objection. But the success verbs have non-success counterparts, such as “tried to pick up the apple” and “reached for the money.” And in any case other action descriptions are non-success to begin with: “hunted lions,” “pointed at the block,” “smiled at the face.” We do not see how such behavior could be explained solely by reference to narrow functional properties; the relation between the narrow meanings and the action “would be mysterious” (296). (I repeat that for Devitt “functional” incorporates no teleology.)14,15

Devitt adds an argument in support of that pessimism: Its premise is that intentional action “is partly constituted by the intentional state that immediately causes it” (296), such as a desire or an intention in the other sense, as in intending to do

13  Block (1986), a functional-role theorist, acknowledges this as a significant problem.
14  What of a proper functionalism instead of a pallid causal theory? There are teleosemantics all ready to work, such as Millikan’s (1984, 1989). But teleological properties are wide. At least, the most promising theories of teleology are either backward-looking or forward-looking in time.
15  Devitt also suggests (1996: 295–296) that if a narrow property were to predict proper intentional objects of behavior, it would be as near as matters to a Kaplanian character, and so return us to our first view of narrow meanings.


something. The action’s explanation would naturally invoke the state’s content; and that content is itself wide. Doubtless these arguments could be resisted by the defender of the functional-role conception, but I find them convincing, so if I buy into narrow meaning I will go back to the character-function view.

13.8  Explaining “Wide” Behavior

Finally, Devitt revisits the argument from methodological solipsism, and now rejects it more decisively than he had in “A Narrow Representational Theory …” It is now actually harder to reject, because its conclusion is weaker: just narrow psychology rather than syntactic psychology. He rescinds his “strong conviction that a scientifically appropriate boundary for explaining the behavior of an organism is its skin” (1989: 388). Rather,

we have found nothing to support … [that] conviction …. It is possible to draw boundaries anywhere and to look for explanations of the characteristics and proximal behavior of the bounded entity or system in terms of what goes on within the boundary. The view that the appropriate boundary for explaining the behavior of an organism is its skin cannot be taken for granted. (1996: 303)

He cites the causal powers of chess pieces in the context of the game. “What matters to the causal powers of a chess piece are its external relations” (302). He concedes (308) that Kaplanian character functions would suffice to explain “proto-intentional narrow behavior,” i.e., behavior narrowly individuated. For the reasons he has given earlier, the character functions would not suffice to explain wide, ordinarily individuated behavior. And that’s it: “The whole issue of narrow psychology comes down to this issue about the sort of behavior that psychology should explain” (ibid., italics original).

And on that “whole issue,” Devitt makes four quick arguments: (1) Widely individuated behavior, ordinary action, still needs explaining, and no good argument has been given that it should not be explained within psychology (309). (2) For that matter, nothing has shown that we have any great interest in explaining narrowly individuated behavior, in addition to the ordinary wide behavior we do want explained (ibid.). (3) “[A]ny description of narrow behavior is parasitic on a description of wide behavior” (ibid.); to apply a narrow psychological law to it we would have to “subtract” our knowledge of the context from an ordinary action description. (4) “[E]xplaining wide behavior is the status quo and we should prefer the status quo in the absence of decisive reasons against it” (310). Upshot: Even though narrow behavior can in principle be explained by psychology, wide behavior is the “more appropriate” (308) explanandum.


13.9  The Problem of Psychosemantics

I agree with (1), and join Devitt in rejecting narrow psychology as stated, but I think he underestimates the job of characterizing the external connections scientifically. For science, it is not enough just to observe commonsensically that it is Snavely whom Raelene sees approaching her and it is Snidely whom Gail sees approaching her. The external connections include, not only the man’s long-distance effect on our subject’s sensory receptors, but the relations she bears to him in virtue of which her thoughts and intentions are about him – in short, the operative psychosemantics. Remember Fodor’s (1980: 249) point that since such a theory … would have to define its generalizations over mental states on the one hand and environmental entities on the other, it will need, in particular, some canonical way of referring to the latter. Well, which way? …. [A] naturalistic psychology would attempt to specify environmental objects in a vocabulary such that environment/organism relations are law-instantiating when so described. But here’s the depressing consequence again: we have no access to such a vocabulary prior to the elaboration (completion?) of the nonpsychological sciences.

Remember also that the principles of psychosemantics itself are philosophy, not science. And they remain unsettled to say the least (Lycan 2006a). I remain agnostic in regard to Devitt’s quick argument (2), and would need to hear more in defense of (3). As regards (4), I agree that if forced to choose, I myself would rather have explanations of wide behavior than narrow explanations of bodily movements, but I cannot speak for psychological science. In any case, I believe there are narrow meanings of the character-function sort, and Devitt has done nothing to disabuse me of that view; indeed he agrees. What use should or will be made of them in psychology remains an open question.

13.10  Against Two Further Candidates for Narrow Meaning

But are there narrow meanings, or even possibly narrow contents, other than the ones each of us has granted? I will close by commenting briefly on each of two positive answers.

The first is that given by Jackson (1994, 1998) and Chalmers (1996).16 For each of several reasons, it is felt that mental natural-kind terms, at least, must have meanings over and above their referent kinds as supposed by Kripke and Putnam; and it is attractive to suppose that the kind concepts do not differ as between Earth and Twin Earth. Jackson and Chalmers offer a development of Kaplan’s “character” idea, which both Devitt and I should count as a mark in their favor, since it suggests narrow meanings of a now familiar sort. Following Davies and Humberstone (1980), they propose that in addition to its ordinary intension, a kind concept has an

16  And put to evil uses, but let us draw a veil.


“A-intension” (Jackson) or “primary intension” (Chalmers), which is what we get when “we are considering, for each world w, what the term applies to in w, given or under the supposition that w is the actual world, our world” (Jackson 1998: 48). This would be what Twins’ “water” concepts have in common despite their disparate referents. At any world at which the mental term we spell “water” is tokened, it will refer to whatever substance “plays the watery role” there (50), which role is given by standard Kripkean this-worldly reference-fixers for the term (49). So, even for us in the actual world, Jackson maintains, “water” has a kind of nonrigid intension or meaning along with its normal, rigid referential content. Thus, “water”’s nonrigid meaning for us is: stuff that plays the watery or waterish role.

My main problem with all this (Lycan 2006b) is that I do not believe that concepts corresponding to English kind terms have A-intensions. To generate an A-intension, one needs a transworld “role,” as in “plays the watery role.” Such roles are supposed to be constituted by reference-fixing descriptions, and are the same across the relevant worlds. That requires a distinctive and stable set of reference-fixers. But the latter pretty clearly does not obtain. Reference-fixers are rarely enshrined in the public language; they are private to individual speakers at particular times.17 Which may be fine for some purposes, but it breaks the connection with Kaplanian characters; and more argument would be needed for positing these special mental intensions, i.e., arguments better than those already rebutted by Devitt, me and others.

The second defense of narrow meaning, indeed of narrow content, is offered by Terry Horgan and co-authors, beginning with Horgan (2000) and Horgan and Tienson (2002), and it has burgeoned into what Horgan calls the “Phenomenal Intentionality Research Program” (e.g., Horgan and Graham 2012; Kriegel 2007, 2013a, b). It was inspired in part by an idea of Brian Loar’s (1987, 2003), but their approach is different and their claims are much more ambitious. They defend an internal type of intentionality that (Horgan and Tienson say) is not only determined by phenomenology but is constituted by it (520, 524); and they make it clear that they mean content as opposed to Kaplanian character. Finally, they contend that their internal intentionality is “the fundamental kind of intentionality: the narrow, phenomenal kind that is a prerequisite for wide content and wide truth conditions” (529). Though their main concern is with the phenomenal and qualitative content of perception, and that takes us back to the question of narrow perceptual contents which I set aside in Sect. 13.1 above, they argue that their case extrapolates to perceptual beliefs and hence to more general beliefs based on those. But (I have argued in Lycan 2008) the most their arguments show is that (in Loar’s phrase) there is a kind of “purporting to refer” that is determined by phenomenology. I have no quarrel with that. But it shows at best that phenomenal duplicates share Kaplanian

17  Subsequently, Jackson and Chalmers themselves (2001) backed off the idea that A-intensions correspond to public linguistic meanings or types of meaning analogous to Kaplanian characters. They did kick A-intensions upstairs and diffusely so, into individual minds at particular times.


characters, not that they share contents in any more referential or truth-­ conditional sense. More recently, Horgan and Graham (2012), Kriegel (2013b) and others have taken the sad state of psychosemantics as a powerful motivation for their “Research Program,” arguing that intentional reference is to be taken as primitive and conceding that this leads to some form of mind-body dualism. Though abhorring the dualism, I agree it is a significant motivation and can no longer be discounted.18 I hope that Devitt and I now agree on two main points: there are narrow mental meanings on the model of Kaplanian character; but as yet, we have seen no convincing reason to believe that there must be narrow contents underlying ordinary wide ones.

References Baker, L.R. 1986. Just what do we have in mind? In Midwest studies in philosophy, 10: Studies in the philosophy of mind, ed. P.A. French, T.E. Uehling, and H.K. Wettstein, 25–48. Minneapolis: University of Minnesota Press. ———. 1995. Explaining attitudes: A practical approach to the mind. Cambridge: Cambridge University Press. Block, N.J. 1986. Advertisement for a semantics for psychology. In Midwest studies in philosophy, 10: Studies in the philosophy of mind, ed. P.A. French, T.E. Uehling, and H.K. Wettstein, 615–678. Minneapolis: University of Minnesota Press. ———. 1990. Inverted earth. In Philosophical perspectives, 4: Action theory and philosophy of mind, ed. J.E. Tomberlin, 53–79. Atascadero: Ridgeview Publishing. ———. 1996. Mental paint and mental latex. In Philosophical issues, 7: Perception, ed. E. Villanueva, 19–49. Atascadero: Ridgeview Publishing. Burge, T. 1986. Individualism and psychology. Philosophical Review 95: 3–45. Chalmers, D. 1996. The conscious mind. Oxford: Oxford University Press. Davies, M., and L. Humberstone. 1980. Two notions of necessity. Philosophical Studies 38: 1–30. Devitt, M. 1981. Designation. New York: Columbia University Press. ———. 1989. A narrow representational theory of the mind. In Representation: Readings in the philosophy of psychological representation, ed. S.  Silvers, 369–402. Dordrecht: Kluwer Academic Publishers. Reprinted in Mind and cognition, ed. W.G.  Lycan, 371–398. Oxford: Basil Blackwell, 1991. Page references are to this edition. ———. 1991. Why Fodor can’t have it both ways. In Meaning in mind: Fodor and his critics, ed. B. Loewer and G. Rey. Oxford: Basil Blackwell.  Pautz (2013) offers a good critical survey of the “Research Program.” Nicholas Georgalis (2006, 2015) similarly defends a notion, “minimal content,” that is available only from the first-person perspective and is more fundamental than (indeed an absolute prerequisite for) ordinary wide content. It is determined by the subject’s conceptions and intentions, and differs from Horgan’s phenomenal intentionality (Georgalis says) by not individuating contents according to phenomenology in the “what it’s like” sense. My problem with Georgalis’ view is that according to it, only mental states of which the subject is aware can be intentional at all; states of which the subject is unaware do not have even derived intentionality. Georgalis not only accepts that consequence but insists that it is an important truth. But if, like me, you think that being aware of a mental state you’re in and being unaware of it is typically just a superficial matter of attention, you will not be persuaded that that difference creates such an ontological gulf. 18


———. 1996. Coming to our senses. Cambridge: Cambridge University Press. Referred to in the text as Coming. Devitt, M., and K.  Sterelny. 1987. Language and reality: An introduction to the philosophy of language. Oxford: Basil Blackwell. Evans, G. 1973. The causal theory of names. Aristotelian Society Supplementary Volume 47: 187–208. Fodor, J.A. 1980. Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences 3: 63–73. Reprinted in J.A. Fodor, Representations: Philosophical essays on the foundations of cognitive science, 225–253 and 330–331. Cambridge, MA: Bradford Books/MIT Press, 1981. Page references are to this edition. ———. 1987. Psychosemantics. Cambridge, MA: Bradford Books/MIT Press. Georgalis, N. 2006. The primacy of the subjective: Foundations for a unified theory of mind and language. Cambridge, MA: MIT Press. ———. 2015. Mind, language, and subjectivity: Minimal content and the theory of thought. New York: Routledge. Horgan, T. 2000, June. Narrow content and the phenomenology of intentionality. Presidential Address to the Society for Philosophy and Psychology. New York City. Horgan, T., and G. Graham. 2012. Phenomenal intentionality and content determinacy. In Prospects for meaning, ed. R. Schantz, 321–344. Berlin: De Gruyter. Horgan, T., and J. Tienson. 2002. The intentionality of phenomenology and the phenomenology of intentionality. In Philosophy of mind: Classical and contemporary readings, ed. D. Chalmers, 520–533. Oxford: Oxford University Press. Jackson, F. 1994. Armchair metaphysics. In Philosophy in mind, ed. J. O’Leary-Hawthorne and M. Michael, 23–42. Dordrecht: Kluwer Academic Publishing. ———. 1998. From metaphysics to ethics: A defence of conceptual analysis. Oxford: Oxford University Press. Jackson, F., and D. Chalmers. 2001. Conceptual analysis and reductive explanation. Philosophical Review 110: 315–360. Kaplan, D. 1979. On the logic of demonstratives. In Contemporary perspectives in the philosophy of language, ed. P.A. French, T.E. Uehling, and H.K. Wettstein, 401–412. Minneapolis: University of Minnesota Press. Kriegel, U. 2007. Intentional inexistence and phenomenal intentionality. In Philosophical perspectives, 21: Philosophy of mind, ed. J. Hawthorne, vol. 21, 307–340. Atascadero: Ridgeview Publishing. ———., ed. 2013a. Phenomenal intentionality. Oxford: Oxford University Press. ———. 2013b. The phenomenal intentionality research program. In Phenomenal intentionality, ed. U. Kriegel, 1–26. Oxford: Oxford University Press. Lewis, D.K. 1972. Psychophysical and theoretical identifications. Australasian Journal of Philosophy 50: 249–258. Loar, B. 1987. Subjective intentionality. Philosophical Topics 15: 89–124. ———. 2003. Phenomenal intentionality as the basis of mental content. In Reflections and replies: Essays on the philosophy of Tyler Burge, ed. M. Hahn and B. Ramberg, 229–257. Cambridge, MA: Bradford Books/MIT Press. Lycan, W.G. 1996. Consciousness and experience. Cambridge, MA: Bradford Books/MIT Press. ———. 2001. The case for phenomenal externalism. In Philosophical perspectives, 15: Metaphysics, ed. J.E. Tomberlin, 17–35. Atascadero: Ridgeview Publishing. ———. 2006a, December. Consumer semantics to the rescue. Presented in a symposium in honor of Distinguished Woman Philosopher Award recipient Ruth Garrett Millikan, Society of Women Philosophers. ———. 2006b. The meaning of ‘water’: An unsolved problem. In Philosophical issues, 16: Philosophy of language, ed. E. Sosa and E. Villanueva, 184–199. 
Oxford: Basil Blackwell. ———. 2008. Phenomenal intentionalities. American Philosophical Quarterly 45: 233–252.


Millikan, R.G. 1984. Language, thought, and other biological categories. Cambridge, MA: Bradford Books/MIT Press. ———. 1989. Biosemantics. Journal of Philosophy 86: 281–297. Pautz, A. 2013. Does phenomenology ground mental content? In Phenomenal intentionality, ed. U. Kriegel, 194–234. Oxford: Oxford University Press. Putnam, H. 1975. The meaning of “meaning”. In Minnesota studies in the philosophy of science, 7: Language, mind and knowledge, ed. K. Gunderson, 131–193. Minneapolis: University of Minnesota Press. Rosenberg, J.F. 1994. Beyond formalism: Naming and necessity for human beings. Philadelphia: Temple University Press. Stich, S.P. 1978. Autonomous psychology and the belief-desire thesis. The Monist 61: 573–591. ———. 1980. Paying the price for methodological solipsism. Behavioral and Brain Sciences 3: 97–98. White, S. 1982. Partial character and the language of thought. Pacific Philosophical Quarterly 63: 347–365. Wilson, R.A. 1995. Cartesian psychology and physical minds. Cambridge: Cambridge University Press.

Chapter 14

Languages and Idiolects
Paul Horwich

Abstract  The main theses to be elaborated and supported in what follows are, roughly speaking: (i) that, besides communal languages (such as Swahili, Hindi, Mandarin, and Arabic), we should acknowledge the existence of idiolects – the different, idiosyncratic, personal versions of languages that are deployed by the individual members of linguistic communities; (ii) that the meaning of a word in a communal language is constructed from the somewhat different meanings it has within the different idiolects of the speakers of that language; and (iii) that the word’s meaning in a given person’s idiolect is grounded in the particular implicitly followed rule for its use that explains the particular collection of sentences containing the word that this person accepts. Keywords  Communal language · Deference · Michael Devitt · Idiolect · Saul Kripke · Meaning · Reference · Rule · Technical terms

14.1  Introduction

The problems to be addressed in this paper concern the relationship between the meanings of words in communal languages (e.g. the English meaning of “arthritis”) and the meanings of those very words in the various idiolects of particular members of linguistic communities (e.g. what Sid personally means by “arthritis”). Is it even possible that a word-type has one meaning at the group level, yet numerous different meanings at the individual level? Should we perhaps resolve the apparent tension here by distinguishing two kinds of meaning? If we do that, then would ‘communal meaning’ be somehow a construction out of ‘individual meanings’? For example, might we identify the communal meaning of everyday words (like “Mars”, “blue”, and “true”) with the idiolectal meanings of them that are roughly shared amongst most members of the community, and, in the case of technical terms (such as “arthritis”, “felon”, and “carburetor”), with the idiolectal meanings that are given to them


by most of the relevant experts – i.e. those in the community to whom most of the rest of us tend to defer (e.g. the doctors, lawyers, and auto-mechanics)? I’ve been inclined to say ‘yes’ to all these questions. But Michael Devitt – who has also devoted considerable attention to them – has argued, in stark contrast, that the answer is ‘no’ in every case. In addition he’s maintained that his answers demand and vindicate a perspective in which: (i) the relation of reference that an indefinable word bears to the object or property for which it stands should be analyzed in causal terms; and (ii) the word’s meaning is engendered by the particular kind of reference-constituting causal chain that links the word to its referent. And he has criticized my sketchy attempts (in Meaning, 1998, and Reflections on Meaning, 2005) to show, on the contrary:
• that affirmative answers to the above questions are along the right lines;
• that this position depends on rejecting his (i) and (ii);
• and that it calls instead for identifying a word’s meaning with how the word is used, and for explaining the word’s reference in non-causal, purely deflationary terms.
The perspective on languages and idiolects to be elaborated in the present paper is a development of those earlier ideas of mine – one that focuses on responses to Devitt’s various forceful objections to them.

14.2  To Be Defended

This view is composed of the following theses (subject to refinement in light of the criticisms to be considered in Sect. 14.3):

(1) A given person (at a given time) has somewhat idiosyncratic ways of pronouncing verbal sounds, somewhat idiosyncratic ways of understanding them, and somewhat idiosyncratic ways of combining them into complex expressions (including sentences). These personal ways of pronouncing, using, and combining words constitute an idiolect.1

(2) Any group of people whose interactions deploy similar idiolects constitutes a linguistic community – the members speak a common language.2

1  To the extent (which appears to be considerable) that people think in a mental version of their overt languages, a person’s idiolect should be taken to include the mental correlates of her overt words, together with their meanings. These are the same as the overt-word-meanings; but those sounds (and corresponding inscriptions) meaning what they do is explained by the more fundamental fact that her mental terms have those meanings. To the extent, however, that everyone deploys the same language of thought, whose words and meanings are innate and universal, then there can be no distinction between a person’s mentalese idiolect and communal mentalese. 2  The relevant respects of similarity are outlined in thesis (5) below.


(3) Each word’s sound-type (defined broadly enough to include idiolectal variations) has a constant communal meaning. That type-meaning – which, together with the context of an utterance containing the word, determines the word’s contribution to what is said – does not vary from one speaker to another.

(4) A person’s meaning what she does (in her idiolect) by a given word, w, plays a role in provoking all the different instances of her accepting (or rejecting) particular sentences containing w, and also accounts for the causal import of their acceptance (or rejection). Therefore, the underlying fact that constitutes her meaning what she does must be something that explains those causal relations. Arguably, that fact will take the form: S’s overall use of w results from S’s implicitly following the rule, R(w) (where R(w) concerns the acceptance of certain specified sentences or inferences containing w). The particular rule of this form that engenders the idiolectal meaning of a given word is the one that meets this explanatory requirement. But it can be discovered only by a dauntingly complex, holistic, empirical investigation.3

(5) As was suggested in Sect. 14.1, there’s a distinction (albeit rather vague) between ordinary words and technical terms. In the former case, there’s a basic rule for the word’s use, which many of the speakers who often use it implicitly follow (and the majority follow at least approximately). For example, the rule for “blue” is plausibly (and very roughly): “Predicate ‘blue’ of observed things if and only if they are blue!”. And the rule for “true” is plausibly (and very roughly): “Accept instances of the schema, ‘⟨p⟩ is true ↔ p’!”. In the latter case (of a technical term), the members of some small subset of the population – acknowledged experts in the relevant area – nearly all implicitly follow a certain rule for its use, whereas the non-experts rarely follow it. For example, the expert basic rule for “arthritis” will dictate accepting something like, “Arthritis is a disease of the joints”; and the expert basic rule for “tachyon” will dictate accepting something like, “Tachyons (if there are such things) travel backwards in time”.4

(6) In both cases (but more often for technical terms) there will be people who, despite using the word with its communal meaning, lack a full understanding of it. The extent of their misunderstanding is a matter of how much difference

3  For what explains someone’s acceptance of a given sentence containing w can be discovered only relative to assumptions about the many other relevant explanatory factors, besides how w’s meaning is constituted – assumptions that include, for example, the way the meanings of all the other words in the sentence are constituted. See my Meaning (1998) and Reflections on Meaning (2005) for details. And see my “Obligations of Meaning” (forthcoming) for why I take an account of idiolectal meaning in terms of rule-following to be somewhat preferable to one in terms of dispositions (which I advocated in those earlier works).

4  It’s worth noting that many technical terms are derived from everyday words. In such a case the ordinary word retains its ordinary communal meaning, but is given an additional one by the experts, a technical meaning; so the word becomes ambiguous. And, at the idiolectal level too, both the experts and any non-experts who venture into technical territory will give the word-sound two meanings.


there is between the basic rule for its use that the individual implicitly follows and the basic rule that the majority (or the majority of experts) tend to implicitly (and roughly) follow. But such deficiencies – no matter how great – don’t count against attributions of the communal meaning. Individuals are credited with that communal meaning simply by virtue of their membership in the community.

(7) Claims of the form, “S’s word w means F”, made by person, A, are multiply ambiguous. That’s because an attribution of meaning to one of S’s words may aim to specify what it means either in S’s idiolect, or in one or another of S’s communities (e.g. S’s village, province, country, etc.). Similarly, since any such claim amounts to a translation of the word into a language of the attributor, the specified meaning might be articulated either in A’s idiolect or else in one or another of A’s communal languages.

(8) The upshot is that the meaning of a word in a communal language is grounded in the meanings it has within the various idiolects of individual members of the community – in the way that was indicated in theses (5) and (6). And the latter are grounded in the various implicitly followed rules that explain the different collections of sentences containing the word that the different individuals accept.

14.3  Objections and Replies

Let me now consider various criticisms of this picture. Many of them have been ventured by Devitt. But let me begin with one with which he doesn’t sympathize.

• Objection A: There’s no good reason to think that the meaning of a word is anything more than its referent – the out-in-the-world entity that the word stands for. So we should take the only meaning of “Mars” to be Mars, the only meaning of “red” to be redness, and so on. Unlike these familiar kinds of thing, the Fregean, finer-grained ‘meanings’ (= senses; connotations; modes of presentation of, and determiners of, referents) are undesirably obscure and explanatorily unnecessary. So it’s best to do without them (Salmon 1986; Soames 2002).

A first concern about this fairly popular Millian/Russellian view is that it ignores or rejects Frege’s famous and compelling argument to the contrary. One can, like the ancient Greeks, perfectly well accept “Hesperus = Hesperus”, whilst rejecting “Hesperus = Phosphorus”. So there must be some sense in which “Hesperus” and “Phosphorus” don’t have the same ‘meaning’, even though they do stand for the same thing. And the same point applies to any pair of terms whose co-referentiality is non-trivial.

In the second place, the extreme referentialist is stuck with having to suppose that words lacking referents are meaningless (even when this lack has not been recognized). But surely “Vulcan”, which was introduced to be the name of a conjectured planet that turned out not to exist, had a ‘meaning’ of some sort. And the same goes for “Beelzebub”, “Santa Claus”, “Atlantis”, “phlogiston”, and “ghost”.


Bolstering these two arguments is the above-mentioned idea – see thesis (4) – that someone’s overall use of a word is explained, in part, by what she means by it. In particular, a difference in idiolectal meaning between “Hesperus” and “Phosphorus” is needed to explain how the circumstances in which S would accept “Hesperus looks blue” aren’t the same as the circumstances in which she would accept “Phosphorus looks blue”. And what S means by “God” is needed, even if the term has no referent, to explain why she accepts the theological doctrines that she does. Finally, there’s not much merit in the complaint that any notion of meaning, insofar as it goes beyond reference, is intolerably obscure. In particular, there’s nothing especially mysterious in the idea, advanced here, that what a person means by a given word is constituted by the basic rule for its use that she implicitly follows.5 • Objection B: In the case of certain words – namely, names – it’s especially clear that there can be nothing to their meanings over and above the kinds of causal relation they bear to their referents. That’s because, as Kripke (1980) taught us, there’s typically no  communally shared description of the particular thing to which a name (e.g. “Socrates”) refers; no piece of information about it that all users of the name possess; thus no way of using the word that could serve as it’s fine-grained meaning, as the determiner of its common referent. Instead (and again from Kripke): (i) the referent of a name, “N”, is initially fixed by someone who, focusing on a certain thing, says (or thinks) “Let that be called ‘N’”; (ii) other people hear the name in conversation with the introducer of it, and thereby qualify, when they proceed to use it, as referring to the same thing as the introducer; (iii) yet others pick up the name from those who are already deploying it, similarly preserving it’s referent; and (iv) this explains how so many people use the name “Socrates” (for example) to refer to the same particular guy, even people who know nothing distinctive about him  – indeed, who are radically mistaken about what he did, where and when he lived, and what he looked like (Devitt 2011: 200–203). An initial concern with this picture is that it doesn’t distinguish between, on the one hand, ostensive names (“Let’s call this puppy, ‘Pooch’”) and, on the other hand, descriptive names (“Let ‘Ripper’ be our name for whoever committed those

5  This is not to deny that it’s controversial, even amongst those philosophers who countenance something like Fregean meanings (= senses), what sort of fact about a person’s words constitutes their having the meanings they do. Nor is it to deny that there’s disagreement over the content of the particular answer I’m now proposing, i.e. over what it is to implicitly follow a rule. This is philosophy, after all! My own position on the latter issue, following Wittgenstein, is that S implicitly follows the rule, “To conform with regularity R”, just in case S is disposed to conform to R, and is prone to immediately correct his failures to conform. (See my response below to Objection F). In earlier work I have taken the basis for a word’s idiolectal meaning to be merely the basic acceptance-disposition governing its overall use. So the account offered here – in which this basis is instead said to be a matter of implicit rule following (analyzed as that disposition plus a proneness to self-correction) – is a slight modification of my earlier view. However, it has no bearing at all on the main project of this paper – to show how communal meanings are grounded in idiolectal meanings.


murders”) and theoretical names (such as the numerals, ‘0’, ‘1’, ‘2’, etc., whose referents are fixed by their roles in counting and calculating). The above causal story looks like a pretty good account of ostensive names, but not so good an account of other names.
In addition, it fails to distinguish idiolects from languages. Each individual uses a name in a somewhat distinctive way: she accepts certain sentences containing it – sentences that some other community-members reject and that yet others have never considered. And it’s natural to suppose that what she means in her idiolect consists in the basic rule she implicitly follows that helps to explain this distinctive usage. As for the name’s communal meaning, this might well be grounded in the basic acceptance rule implicitly followed by the relevant experts – in the case of ostensive names, by those who are or were acquainted with the referent. And their basic rule (implicitly followed) is to accept “That’s N” when and only when they are confronted by N. Thus it’s not the case that Kripke’s devastating critique of the Frege-Russell view (that the communal-referent of each name is fixed by a definite description), together with his ‘contagion’ picture of communal-reference inheritance, provide reasons for concluding that names have no non-referential, use-theoretic meaning. After all, the meaning of “N” in S’s idiolect may, consistently with Kripke’s arguments, be provided by an idiosyncratic description, or some other idiosyncratic acceptance rule. And the contagion model applies just as plausibly to communal meaning as it does to communal reference. So the former’s existence is not jeopardized by the absence of any commonly followed rule for the name’s use.
But could it be that S’s merely hearing the name, “N”, from a fellow community member will, by itself, guarantee that, when she proceeds to use it, “N” will have the very communal meaning and referent with which her source deployed the word? I’m inclined to say ‘yes’. Even if S explicitly decides that “N”’s meaning and referent in her idiolect will diverge from its communal meaning and reference, the communal semantic features will nonetheless be correctly attributable to her. Granted, this opinion is rather heterodox. Many have been tempted to say that, in addition to hearing the word that’s new to her, S must have the intention to mean what her source meant. But, as Devitt (2011: 202) has rightly emphasized, that would be to over-intellectualize our natural unreflective use of language. His own formulation of the ‘extra’ that’s needed for S’s uses of “N” to inherit her source’s reference is that “the borrower must process the input provided by the situation [of her hearing the word that’s new to her] in whatever way is appropriate for gaining, or reinforcing, an ability to use the name to designate its referent” (Devitt 2011: 202). But this amounts to saying that only when certain unspecified conditions are satisfied will a speaker’s term refer to what the person she heard it from was referring to. And this is so uninformative that we might as well impose no extra condition at all.6
6  In an early brief discussion of these issues (1998: 86) I made the mistake of suggesting that in order for a non-expert to use a technical term with its communal meaning she must defer to what the experts say with the help of that term. And as Devitt and Sterelny rightly pointed out (see Devitt and Sterelny 1999: 3 and Devitt 2002: 118–119), this is highly implausible. For many non-experts will never meet an expert, or will not recognize that they are in the presence of an expert, or will be too stubborn or overconfident to defer. But they might nonetheless mean (communally) what the experts mean, providing they heard the word from someone who heard the word from someone who heard the word from … word from someone who’s an expert. So my present proposal involves no such claim about deference. The only role now given to this notion is in roughly explaining the notion of the ‘acknowledged experts’ as ‘those to whose opinions there is a tendency to defer’.


• Objection C: The causal picture of reference-determination can be elaborated in a natural way so as to incorporate a level of meaning that’s more fine-grained than reference, but without in the slightest backing away from Kripke’s conclusion that such meanings can’t be constituted cognitively – e.g. by the name’s association with some description, or information, or use-rule. For we can easily extend the Kripkean idea that (typically) a certain name refers to a certain object in virtue of a causal link between them. It suffices to suppose that the meaning of the name consists in the mode (or type) of reference-determining causal link involved. From this point of view, although “Hesperus” and “Phosphorus” each stand in one or another particular reference-determining causal relation to the same thing, they aren’t synonyms since those relations belong to different types. So, in order to be able to explain why it was an interesting discovery that Hesperus is Phosphorus, there’s no need to invoke ‘different rules of use’. For we need nothing beyond the causal resources that were already needed to account for reference (Devitt 2015: sect. 6.3).

An obvious defect with this idea (as I’ve articulated it so far) is that it’s much too unspecific. For until we’re told what the relevant distinctions in mode of causal chain are, we can’t assess whether the account will deliver the result that primitive terms have different meanings when (and only when) they intuitively do. Now Devitt was of course always perfectly aware of this need, and makes an appealingly simple attempt to address it. His suggestion is that the reference-determining causal chains leading from a certain thing (or property) to someone’s tokening of a word be divided into modes on the basis of the word-type to which the token belongs. Since this trivially implies that different name-types for the same thing are related to that thing by different modes of reference-determining causal chain, it delivers many desirable results – e.g. that “Hesperus” and “Phosphorus” have different meanings, so “Hesperus = Phosphorus” (unlike “Hesperus = Hesperus”) makes a non-trivial claim.

However, one might well baulk at the idea that different primitive terms never have (indeed cannot have) the same meaning. What about “London” and “Londres”? What about “water” and “acqua”? In addition, it’s hard to see why a single name for something couldn’t be ambiguous. Consider Kripke’s “Paderewski” example.7 Many people didn’t realize that Paderewski the politician was the same person as Paderewski the concert-pianist. So arguably the name was ambiguous.
7  See Kripke (1979).


Granted, a Devittian might respond to this second problem by distinguishing between those causal chains from the guy that passed through conversations about music, and those that passed through conversations about politics, and might supplement the initial ‘typing of causal chains’ proposal by saying that this sort of difference is also constitutive of a difference in meaning. But it’s doubtful that this proposal could be satisfactorily generalized without leaning on the idea that a word’s particular meaning derives from its role in articulating particular information about its bearer – the very idea that the Kripke-Devitt conception was supposed to be adamantly rejecting.

Finally, a puzzling feature of Devitt’s causal theory of meaning is that, on the one hand, he fully endorses the idea that the meaning of a word should explain the word’s use (Devitt 1996 and 2011); but, on the other hand, he doesn’t address the obvious difficulty of seeing how this could be done if what constitutes that meaning is, in some cases, merely a long causal chain that begins with the dubbing of an object and continues with many instances of conversational reference-inheritance. In the first place, that alleged meaning-constitutor seems quite incapable of contributing much, if anything, to the explanation of any specific use of the word.8 And, in the second place, it’s quite unclear what the facts are that need to be explained. For there is surely no such thing as the sentences that the community as a whole accepts (or rejects). And at the individual level, the accepted (and rejected) sentences vary enormously from one person to another.

• Objection D: Kripke’s account of reference is extremely compelling. Surely its insights must be respected in any decent account of meaning. In particular it must be clear how an alleged meaning-constituting property determines the corresponding Kripkean reference-constituting property. A point in favor of Devitt’s thoroughgoing causal proposal is that it meets this condition. Indeed it’s tailor-made to do so. But the view of meaning as founded on acceptance rules does not meet it.

In fact it’s not so hard to see how the picture of meaning advocated here accommodates the central Kripkean insights. In both perspectives the communal reference of an ostensive name is fixed by those acquainted with its bearer, who are disposed to think “That’s N” in the presence of N. In both cases, there’s room for names whose referents are fixed in other ways – e.g. via description (e.g. stipulated acceptance of “N = the F”). And in both perspectives the rest of the community inherits these referents by virtue of the interactions that define community membership.

8  Devitt (1996: ch. 5) argues that some of the things that we feel should be explained by what S’s words mean can be well explained only if those meanings are externalistic (i.e. if they are constituted not merely by S’s intrinsic states, but also by her relations to the outside world). But this gives us no reason to think that S’s acceptance of such-and-such sentences is amongst those things that can only be explained in that way. Nor does it relieve the difficulty of seeing how the particular externalistic form of Devitt’s word-meanings – namely, types of causal-chain between words and their referents – could conceivably have the particular role in explaining sentence-acceptance that word-meanings clearly have.


Granted, there are some differences, but they are relatively superficial. One is over the idea, advanced here, that the initial pattern of usage that fixes “N”’s referent constitutes idiolectal meanings, shared amongst experts, which in turn constitute the word’s communal meaning. Kripke doesn’t endorse this view. But nor does he reject it; and it is not clear why he should want to. Another difference is over the idea that the relation, ‘x refers to y’, is constituted by some relation of the form, ‘x bears causal relation R to y’ – where R involves the elements of acquaintance followed by contagion that Kripke emphasizes. That idea – according to which reference is a substantive relation – is rejected here in favor of a deflationary alternative, whereby reference is taken to be explained via instances of the disquotational schema:

My “N” (as I understand it) refers to x ↔ N = x
The merits of this alternative are that (i) it applies equally well to names that are not introduced on the basis of acquaintance, but rather via description or via theory (as in the above-mentioned case of the numerals); and (ii) it accommodates the obvious parallels between reference, satisfaction, and truth, given the independent plausibility of deflationism about those other two semantic notions.9 • Objection E: Clearly there are languages in the everyday sense – languages such as English, Hungarian, and Dutch. But there’s no good reason to think that so-­ called idiolectal languages exist. In fact there are several good reasons for countenancing such things. One is that we are able to accommodate the familiar practice of describing a fully-fledged member of our linguistic community as “not fully grasping” the meaning of a certain word, or as “meaning something somewhat different from its English meaning”. A second consideration is that the acknowledgment of idiolects is nothing more than an extension of the series of precisifications that we already recognize: language (e.g. English), dialect (e.g. American English), sub-dialect (e.g. Texan American English), sub-sub-dialect (e.g. Southern Texan American English). A third count in favor of postulating idiolects is that it’s the level of language at which it becomes plausible to suppose that the constitutor of a word’s meaning is the common core of explanations of all the facts as to which sentences containing that word are accepted, and which are rejected. Fourthly, the notion of ‘idiolect’ is also explanatorily valuable in helping to provide an account of communal languages (at least, that’s what the present paper is attempting to show). Finally, it’s worth emphasizing that Chomsky’s conception of an ‘I-language’ – i.e. an individual’s mental representations of her idiolect  – has proved to be of

9  For elaboration of this view of nominal reference see ch. 5 (“Reference”) of my Meaning.


considerable explanatory value, to the extent that it’s the central theoretical notion within the currently leading scientific approach to linguistic phenomena. • Objection F: For the above reasons one might hope that the existence of idiolects can be vindicated. But the awkward fact is that Kripke (interpreting ­Wittgenstein’s critique of “private languages”) has established that they are conceptually impossible.10 Kripke’s argument (Kripke 1982) is based on two ideas. The first of these is that meaning a given thing by a word is a matter of implicitly following a certain rule for its use – and we can happily go along with this assumption. His second premise is that implicitly following the rule, “To conform with regularity R”, goes substantially beyond merely conforming with R (or with having a disposition to conform with R) since it immediately implies that one ought to conform. And this too seems reasonable enough. However, according to Kripke, these points require that there be a community of people deploying the same meanings and rules – people who are in a position to observe and resist one another, and criticize failures to conform. And this is where one might well disagree with him. For the needed ‘potential for correction’ could come from the speaker herself. It could be a matter of self-correction. As Wittgenstein put the point in his Philosophical Investigations: Or a rule is employed neither in the teaching nor of the game itself; nor is it set down in a list of rules. One learns the game by watching how others play. But we say that it is played according to such-and-such rules because an observer can read these rules off from the practice of the game – like a natural law governing the play. – But how does the observer distinguish in this case between players’ mistakes and correct play? – There are characteristic signs of it in the players’ behavior. Think of the behavior characteristic of correcting a slip of the tongue. It would be possible to recognize that someone was doing so even without knowing his language. (Wittgenstein 1951: par. 54, my emphasis).11

10  Although Devitt acknowledges the existence of phenomena that might well be called “individualistic languages” or “idiolects”, they are crucially different from the phenomena whose theoretical importance I have been advocating using those terms. For Devitt, S’s “idiolect” consists primarily in her vocabulary-items together with the meanings they possess – where the latter are their communal meanings. So any differences between the so-called “idiolects” of two members of a linguistic community can derive only from the fact that each person uses words that the other doesn’t use. He does not recognize a kind of meaning (which I’m calling “idiolectal meaning”) that’s distinct from communal meaning, that varies from one individual to another, and that’s the causal basis of each individual’s distinctive overall usage of words. So he doesn’t acknowledge idiolects in my sense of the term. I believe that this explains his puzzlement (Devitt 2011: 206–208) over how I could have written both that the members of a community don’t mean the same thing and that they do mean the same thing. His failure to see that this “incoherence” is dissolved by the distinction between my idiolects and ordinary languages suggests that he thinks of the former as completely beyond the pale.
11  Along roughly these Wittgensteinian lines, I suggested (in footnote 5 above) that person S implicitly follows the rule, “To conform with regularity R”, if and only if S’s activity is governed by the disposition to conform with R, and there is some tendency for S to correct instances of his non-conformity (i.e. to react against his initial inclinations).


Readers of Kripke might puzzle over how this apparent endorsement of individualistic implicit rule-following (hence individualistic meaning) could be reconciled with Wittgenstein’s rejection of ‘private language’. But the simple answer is that Kripke should not have identified individualistic languages (= idiolects) with private languages. For the latter are defined by Wittgenstein as consisting of words that no-one else could ever understand because they refer to the speaker’s qualia (for example the peculiar, private ‘what-it’s-like’, varying from one person to another, to have what we all call a ‘looks red’ experience). And Wittgenstein’s objection to the existence of these alleged mental phenomena – his so-called ‘private language argument’ – is largely an attempt to expose the confusions underlying our temptation to postulate them. But idiolects need not (and should not) contain any terms purporting to refer to qualia.12

• Objection G: The proposal articulated in theses (1) through (8) – and elaborated in response to the subsequent objections – is deplorably imprecise. We aren’t told exactly how similar idiolects must be to one another in order to jointly comprise a communal language. We aren’t told where the boundary lies between technical terms and everyday words. We aren’t told what, for present purposes, is going to count as an expert in relation to a given word. We aren’t told how to measure the degree of difference between two rules of use. And so on.

No doubt my theses are imprecise in all these ways and more. But how much of a defect is that? Vagueness is inescapable after all, yet substantive and interesting claims can nonetheless be made. Moreover, even if the account presented here would be somewhat improved by the elimination of some of its present imprecision – which remains to be seen – it’s quite possible that, even as it stands, the view is cogent, correct, and an illuminating step in the right direction. Commenting on his own rather broad-brush account of reference, Kripke (1980: 94) says, “[w]hat I’m trying to present is a better picture – a picture which, if more details were filled in, might be refined so as to give more exact conditions for reference to take place”. And I feel roughly that way about the above-sketched view of how word-meanings are constituted.13

References Devitt, M. 1996. Coming to our senses. Cambridge: Cambridge University Press. ———. 2002. Meaning and use. Philosophy and Phenomenological Research 65: 106–121. ———. 2011. Deference and the use theory. ProtoSociology 27: 196–211.

12  For further discussion see ch. 5 (“Kripke’s Wittgenstein”) of my Wittgenstein’s Metaphilosophy (2012).
13  My thanks to Michael Devitt for the stimulus of his great contributions to the philosophy of language, for the extraordinary clarity with which he always expresses himself, and for the incisive, constructive, generous comments he gave me on the penultimate draft of the present paper.


———. 2015. Should proper names still seem so problematic? In On reference, ed. A. Bianchi, 108–143. Oxford: Oxford University Press. Devitt, M., and K.  Sterelny. 1999. Language and reality: An introduction to the philosophy of language, 2nd ed. (1st ed. 1987). Oxford: Blackwell Publishers. Horwich, P. 1998. Meaning. Oxford: Oxford University Press. ———. 2005. Reflections on meaning. Oxford: Oxford University Press. ———. 2012. Wittgenstein’s metaphilosophy. Oxford: Oxford University Press. ———. forthcoming. Obligations of meaning. In Meaning, decision, and norms: Themes from the work of Allan Gibbard, ed. W. Dunaway and D. Plunkett. Ann Arbor: Maize Books (University of Michigan). Kripke, S. 1979. A puzzle about belief. In Meaning and use, ed. A.  Margalit, 239–283. Dordrecht: Reidel. ———. 1980. Naming and necessity. Cambridge, MA: Harvard University Press. ———. 1982. Wittgenstein on rules and private language. Cambridge, MA: Harvard University Press. Salmon, N. 1986. Frege’s puzzle. Cambridge, MA: MIT Press. Soames, S. 2002. Beyond rigidity: The unfinished semantic agenda of Naming and necessity. New York: Oxford University Press. Wittgenstein, L. 1951. Philosophical investigations. Oxford: Blackwell.

Part IV

Methodology

Chapter 15

Explanation First! The Priority of Scientific Over “Commonsense” Metaphysics
Georges Rey

Abstract  I argue that Devitt should replace his maxim, “Metaphysics first!,” with “Explanation first!,” since scientific explanation provides the only basis to decide which metaphysical claims deserve any priority. This seems to me particularly important to urge against Devitt, given his odd inclusion of Moorean “commonsense” in his understanding both of “realism” and of Quine’s “holistic” epistemology, an inclusion that I suspect is responsible for his surprisingly unscientific views of many topics he discusses, specifically (what I’ll address here) secondary properties, linguistics, and the possibility of a priori knowledge. Keywords  A priori · Explanation · Realism · Color · Linguistics · Noam Chomsky · Confirmation holism · W.V. Quine

15.1  Introduction

Michael Devitt (1996, 2010a) insists on “Putting metaphysics first!,” reasonably stressing the general priority of metaphysical considerations over semantic and epistemological ones. I’m not sure it’s a good idea (and I doubt Devitt at the end of the day thinks it is) to insist on anything always coming “first” in our claims about anything – didn’t Quine rightly caution us against any hopes for a “first philosophy”?1 – but I think Devitt (2010a: 2) is right to think that, at least as things

1  As, for the record, did Russell:

There is not any superfine brand of knowledge, obtainable by the philosopher, which can give us a standpoint from which to criticize the whole of the knowledge of daily life. The most that can be done is to examine and purify our common knowledge by an internal scrutiny, assuming the canons by which it has been obtained, and applying them with more care and precision. (1914: 73–74)
Indeed, the several pages there sound so like Quine, that one wonders whether Quine simply forgot that he’d read them.


have stood for the last century or so, our physical, chemical and biological theories of the world are more certain than our theories of semantics or epistemology. But why is this so? Why should we be so impressed with these “natural” sciences? What picks out the ones we should take seriously? Should anything a natural scientist says take precedence? I submit that what deserves priority are those scientific claims that offer terrific explanations of, well, virtually all the phenomena they describe, e.g., large tracts of physics, cosmology, geology and biology. Most of epistemology, psychology and the social sciences pale by comparison. But why should we take explanation so seriously? Well, for one thing, it’s good explanations that seem to provide the best reasons for thinking the world is one way rather than another; for another, although we certainly don’t have yet a good theory of explanation, we do seem to have an excellent sense of one: on the whole – once one controls for obvious confounding factors such as attention, specific agenda or difficult or technical cases – human beings seem extremely reliable at judging which of two explanations is the better; at any rate, I suspect we’re more reliable about explanations than we are about metaphysics considered without them.2

Consequently, I’d like to suggest replacing Devitt’s maxim with what seems to me a more fundamental one: Explanation first! I think it’s our reliable sense of explanation that underlies his and many other people’s considered judgment that, at least for the time being, the “metaphysics” of much of the natural sciences ought to enjoy a priority over most other claims, metaphysical, semantic, epistemic or otherwise. This, at any rate, is what has always struck me as what was most plausible about Quine’s (1969) “epistemology naturalized”: epistemology was a chapter of natural science, and its claims should be assessed not from a standpoint outside of science, but from how well it can be integrated in it.

Now, I don’t expect Devitt to seriously disagree with any of this. At some level, I suspect he thinks what he calls “metaphysics” simply includes good explanation. He certainly wouldn’t for a moment be endorsing putting first what many traditional and recent philosophers have regarded as “metaphysics,” e.g., the study of “being qua being,” claims about universals, tropes, “substances,” arbitrary mereological sums of objects, and the like.3 Rather, like Quine, he has in mind the metaphysics of the explanations provided by good empirical scientific theories. But is explanation really an entirely metaphysical matter, independent of human interests and cognitive structure? On the one hand there is the fact that some

2  Consider the cogency of most people’s often detailed reasonings about detective stories or following trials, or just ordinary occurrences (when they have no confounding interests). See Baum et al. (2008) and Pacer and Lombrozo (2017) for some experimental evidence in the case of children.
3  See for example, the majority of the discussion and readings under the “metaphysics” rubric in the Stanford Encyclopedia of Philosophy, not to mention what’s proffered under it in standard bookstores.


explanations are correct, and others not, independently of what anyone thinks; but, on the other, there are the various criteria Quine (1960/2013: 17–19) and Quine and Ullian (1970/78: ch. 5) advance for good explanation, e.g., conservativism, simplicity, refutability: it is by no means obvious that such desiderata on explanation can be spelled out independently of the cognitive system for which something might serve as an explanation, that it is a purely metaphysical and not also partly a psychoepistemological category.4 The issue is sufficiently difficult and obscure that I don’t want to assume a stand on it at the outset, and so I shall regard my insistence on the priority of explanation over metaphysics as a potentially substantive alternative to Devitt’s motto (I’ll provide other reasons to do so as I proceed). In any case, my complaint is not so much with what Devitt might agree to if pressed, but rather a related trend in his work, as in Quine’s work before him, that seems to me to betray metaphysical and epistemological biases that don’t sufficiently heed deeper scientific issues that may well undermine them. In Quine, these were manifested in his rigid “extensionalism” about ontology and semantics, which ushered in his notorious behaviorism, and scepticism about mind, meaning and modality. In Devitt, I tentatively want to suggest it is an uncritical, essentially Moorean “commonsense” understanding of realism and of Quine’s “holistic” epistemology, which, I argue, badly biases his views of many topics he discusses, specifically (what I’ll discuss here) secondary properties, linguistics, and the possibility of a priori knowledge. I develop criticisms of these views, first, in passing with regard to common sense, and a secondary property like color (Sect. 15.2), but then, turning to the issues that Devitt and I have disputed in earlier articles, I’ll consider linguistics (Sect. 15.3), and finally his views of epistemology and the a priori (Sect. 15.4). But all of this will be on behalf of stressing what seems to me to be the priority of scientific explanation over commonsense, and the importance of a related distinction unnoticed in (among others) Devitt and Quine, between what I call an “explanatory” as opposed to a “working” epistemology (Sect. 15.4.1). This latter also tends to be more a matter of commonsense than of science, and is, I submit, what concerns Devitt and Quine, which I think they conflate with the former, leading them to underestimate the serious difficulties of the relevant explanatory projects.

15.2  Scientific vs. Commonsense Realism

The biases of Devitt that most worry me can be seen in his very characterization of his “realism”:
Tokens of most commonsense, and scientific, physical types objectively exist independently of the mental. (2010a: 33; see also Devitt 1984/97: 23)

4  For some idea of the complexities one might face in trying to sort the issue out, see Woodward (2017).


Realism is an important issue, and Devitt (1984/97: §§7–14, 2010a: §§4–5) has admirably defended at least the scientific sort against a daunting array of opponents. But why on earth should he burden it with the claims of “commonsense” (or “common nonsense” as Nelson Goodman used to like to call it)? Science has its arguments from best explanations; but what argues particularly for commonsense, which is only occasionally interested in serious explanation? Devitt has a “Moorean” reply (after G.E. Moore 1925/59): From an early age we come to believe that such objects as stones, cats and trees exist. Furthermore, we believe these objects exist even when we are not perceiving them, and that they do not depend for their existence on our opinions or on anything mental. This Realism about ordinary objects is confirmed day by day in our experience. It is central to our whole way of viewing the world, the very core of common sense. A Moorean point is appropriate: Realism is much more firmly established than the epistemological theses [regarding under-­ determination] that are thought to undermine it. (Devitt 2010a: 62, emphasis mine; see also p. 104)5

In his (1984/97: 18), he does, of course, recognize that “[w]e certainly do not want to say that all common-sense and scientific physical entities exist,” and so exceptions should be made for “flying saucers and phlogiston.” He adds, however: it is not enough to say that only some common-sense and scientific entities exist. The realism that is worth fighting for … is committed to the existence of most of those entities. (1984/97: 18, emphasis original; see also pp. 23, 73)

It’s hard not to wonder who’s counting, and how. It’s striking that Devitt’s constantly repeated examples are of “stones, cats and trees” (1984/97: 17, 81, 300; 2010a: 57, 62), and seldom of any of the more problematic cases that scientists have often discussed at length.6 Think of people’s uncritical thoughts about rainbows (what and where are they?), waves (what exactly is moving?), Euclidean figures, numbers, secondary and other response-dependent properties (red, funny, sexy), not to mention the enormous array of religious and superstitious entities (souls, ghosts, gods and demons) in which the majority of the world’s population seem to believe and which surely Devitt wouldn’t want to defend for a nano. I don’t want to rush to any conclusions about the consequences of quantum mechanics for our ordinary thought, but I see no reason whatsoever to rule out a priori the possibility that it might show that almost all of our commonsense beliefs in a determinate world are only approximately true, just “true enough” for us (and/or our ancestors) to have

5  The influence of Moore on behalf of realism pervades much of Devitt’s (2010a): in addition to this passage, see also pp. 109–114, 122–123, 136, 184–185, 316–320. All of this would be innocuous enough did Moore display any interest in science over commonsense, against which I wager he’d deploy his same appeal to commonsense that Devitt endorses. 6  He does discuss what some people might regard as problematically human-dependent entities, such as artifacts, tools and social phenomena (1984/97: §13.5). Here he’s entirely right to claim that mere human dependence for their identification isn’t enough to render something unreal. But the cases typically raised by scientists are ones in which the issue is the stability of the identification, which hammers enjoy, but which, as we shall see shortly, e.g., secondary properties do not.


gotten by, the commonsense phenomena  – even “stones, cats and trees”  – themselves not being part of any explanatorily serious inventory of the world.7 In any case, although Devitt reasonably argues that traditional, Cartesian sceptical arguments don’t undermine commonsense, he devotes surprisingly little attention to how serious scientific explanations often do. He’s certainly not entitled to claim that “[t]here is no plausible worked-out alternative to Realism” (1984/97: 73). As we shall see in the cases to which we will now turn, scientific realists can and often do explain away many commonsense posits as specific psychological illusions based upon the very sciences they are understanding realistically! It’s hard to see why Devitt should include any claims about commonsense in putting “metaphysics first.” Why not just leave commonsense entirely out of it?

15.2.1 Secondary Properties: Color

An example of Devitt’s odd partiality to commonsense is his treatment of a paradigmatic secondary property that has been much researched: color. Since the Galilean revolution, many have thought that color is not a genuine property of objects. For a recent example, in his landmark textbook, Vision Science, the well-known vision scientist Stephen Palmer writes:

Neither objects nor lights are actually “colored” in anything like the way we experience them. Rather, color is a psychological property of our visual experiences when we look at objects and lights, not a physical property of those objects or lights. The colors we see are based on physical properties of objects and lights that cause us to see them as colored, to be sure, but these physical properties are different in important ways from the colors we perceive. (Palmer 1999: 95)

In response to such an eliminativist view of color, Devitt recommends a “Lockean” objectivist, dispositional view:

According to my account, an object is red in virtue of its power to have a certain effect on (normal) humans…. Let C be any condition. We can define redness-for-S-in-C. We can be objectivist about this for any C, just as for any S. Once again, we need not apologize for our special interest in one condition, the normal condition of human life. (1984/97: 250–251)8

7 One might wonder if any commonsense entities, such as tables, could ever be regarded as real, given that tables qua tables don’t enter into any serious science. In my (forthcoming: §9.2) I argue that there are two conditions under which it’s reasonable to take a commonsense term realistically: (i) if there’s enough stability in its usage across people and time, and (ii) if all the central properties connected to the usage of the term can be preserved under the assumption it applies to something that can be delineated by a serious explanatory science. “Table” and “chair” seem to me to satisfy both these conditions: tables are just hunks of matter occupying a certain space-time position that, modulo vague boundaries, are stably picked out by the relevant uses of “table.” By contrast, rainbows, Euclidean figures, and (as I will shortly argue) secondary properties are either unstable or have features that aren’t satisfied in the physical world.

However, given the complexities of color vision, it’s extremely unlikely there is any stable “normal condition of human life” that would serve the purpose. The problems of the strategy are nicely set out by C.L. Hardin (1988/93, 2008) in several rich discussions of both the philosophy and the scientific explanations of color vision. After summarizing much of the relevant evidence, he sums up the case against such a “normal observer standard condition” view:

Among other things it supposes that there is an observer – perhaps a statistically constructible one – whose visual system can reasonably serve as the basis for making the required classification. In particular, it presupposes that all normal observers will locate their unique hues at approximately the same place in the spectrum, and, given a set of standard color chips under the same conditions, will agree on approximately the same chips as exemplifying those unique hues. This is by no means the case. In fact, the differences are large enough to be shocking. (Hardin 2008: 148)

Hardin goes on to describe careful experiments by Kuehni (2001) on 40 color chips (arranged in a circle), which showed that:

If the results for the four unique hue ranges are taken together, there fails to be consensus on 26 out of the 40 chips composing the hue circle. That is to say, 65% of the hue circle is in dispute! (Hardin 2008: 149)

This invites the observation Russell made in the opening pages of his Problems of Philosophy, where, after similarly considering the varying conditions other than “normal” ones under which inconsistent color attributions are made, he concludes:

But the other colours which appear under other conditions have just as good a right to be considered real; and therefore, to avoid favouritism, we are compelled to deny that, in itself, the table has any one particular colour. (Russell 1912/59: 10)9

8 Devitt does go on in the passage to acknowledge that there are potential problems with respect to his account of color, citing Hardin (1988/93), but thinks the problems depend upon subjective features being regarded as essential to colors. Since he concedes this is a matter of dispute, he concludes that “[t]he right theory of colour would be largely Lockean, but partly eliminativist” (1984/97: 251). What he fails to recognize is how Hardin’s discussion – and the discussion of most vision theorists – completely undermines any explanatory significance whatsoever to “normal” conditions of human perception, and so any viability at all to the Lockean strategy.
9 Russell didn’t consider there a Lockean, dispositional proposal, but his point is made even sharper in this regard by Kuehni’s data (see Mound 2019: §6.2, for recent general acceptance of the point). It’s curious that in his extended discussion of color (2010a: 122–136) Devitt nowhere considers this problem of the diversity of color responses. Perhaps he thinks it could be handled by simply relativizing it to more specific populations and conditions, just as he is willing to relativize it to species (1984/97: 250). It would be interesting to see him try to spell out what seem to be all the relevant parameters: cone sensitivities, nature of the ambient light, surrounding context, angle, prior exposure, expectations, etc., and in such a way that also supported counterfactuals. Given some of the continuities of these parameters, what hope is there that the list of conditions “C” would be finite? I should stress that I am not for a moment endorsing the global response dependency of properties – the view that all properties are response-dependent – that Devitt is reasonably opposing in those pages.

The burden is on the color realist to argue that there is any non-arbitrary basis for selecting some specific conditions and some specific reactions of some specific people as the “realistic” ones – just as a believer in absolute motion needs some non-arbitrary basis for selecting one frame of reference as somehow correct. There is no explanatory basis for realism either about color or absolute motion: what invariances there are are in us, not the world (see John Collins’ contribution to this volume).10 So far as I can see, Devitt’s only real motivation for insisting on color realism is simply a wistful attachment to commonsense11:

There is no denying the sense of loss which the Realist would feel if the objects he believed in did lack secondary properties. (1984/97: 82)

One can understand this sense of loss on first learning of the complexities of color vision, say, in one’s first philosophy or psychology class. But it’s hard to see why this should persist on reflection. What is really lost? We can perfectly well engage in our ordinary talk of saying that roses are red, the sky blue and the rainbow enchanting, even if from a (usually quite un-ordinary) scientific point of view all these phenomena involve all manner of odd illusion.12 My point here is not to go to the wall denying the reality of all these things. I only want to insist upon the methodological point that it is scientific explanation, not commonsense, that should be the arbiter of the issue. Devitt’s treatment of secondary properties may not be crucial to his views. But I think it does presage his more extended treatment of the theoretically more complex domain of linguistics, and the entities he thinks it ought to concern. Unlike the case of color, these are relatively unique discussions that he and I have had that deserve to be pursued in their own right.

10 In their defence of color realism, Byrne and Hilbert (2003) presume there is some such basis, but don’t provide one, or (to my mind) provide any reason to believe one exists.
11 Devitt (1984/97: 81) does make a passing reference to what he takes to be a role for real colors in biology. But he presents no argument that what’s relevant there are actual colors. On the face of it, one would expect the biologist’s concern would be precisely the same as the vision theorists’, with animals’ sensitivities, and what colors they represent. What role would colors themselves play apart from those sensitivities?
12 In this regard it’s worth noting recent proposals of Chomsky (2000) and Pietroski (2005, 2010), according to which what the human language faculty makes available to our cognitive and other “performance” systems are various phonological, syntactic and only rudimentary semantic material, which only the conceptual system uses to create the full semantic phenomena of reference and truth. The proposals allow for essentially polysemous expressions of natural language to be interpreted one way for the purposes of science, quite another for purposes of ordinary talk, as in the cases of rainbows and the sky. See also Sperber and Wilson (1995), Carston (2016), and my (2009, 2014b, forthcoming: ch. 10) for related discussion. I commend these approaches to Devitt as a strategy for painlessly liberating metaphysics from the burdens of commonsense and ordinary talk.

15.3 Language

15.3.1 Ontology

Devitt seems to presume a similar dispositional proposal could be made in the case of language, in particular, of what can be grouped together as “standard linguistic entities” (“SLE”s), such as words, sentences, syllables, morphemes, phonemes:

I assume that hearing a sound as a certain phoneme is largely innate and so, to that extent, it is just like seeing a certain Kanizsa figure as a triangle. But to that extent it is also just like seeing a certain object as red. And whereas the Kanizsa figure is not really a triangle, the object is really red. Or so thinks the neo-Lockean about secondary qualities. So we need an argument that phonemes are like illusions not these secondary qualities. (2008a: 224)

But, in fact, precisely the same kinds of considerations Hardin raised about color can be brought to bear on Devitt’s hopes to identify SLEs dispositionally: there’s simply far too much variability in people’s perceptions and executions of their intentions in highly varying contexts to pin SLEs to any particular arrangement. There are plenty of people whose speech plenty of people can’t understand, even when they speak “the same language,” and there are endless differences merely in pronunciation between people due to, e.g., age, gender, anatomy, speech impediments, personality, social class, and, even within a single person at a certain stage of life, differences in social style (auctioneers, sports announcers), auditory manner (singing, whispering, bellowing), emotional intensity and relative inebriation. And, of course, people’s abilities to “correct” for all this variability themselves vary widely, with the consequence that one would be as hard put indeed to specify a “normal” listener, delivery and social context that would suffice for all speakers of “the same language,” dialect or even idiolect, as one would be to specify a normal perceiver and normal circumstances in the case of color. Should linguistics be grounded by, e.g., BBC newscasters, L’Académie Française, or how sufficiently well-educated lawyers intone in court? Phonologists and phoneticians do not await such analyses, any more than vision theorists await a dispositional analysis of worldly color.13 To paraphrase Max Weinreich’s famous quip, a dispositionally defined phonology would seem at best an idiolect with a gun-boat – and an obsessive philosopher at the helm! This much, at any rate, seems to be a common view among many linguists, and is what famously leads Chomsky to regard linguistics as an essentially psychological investigation.14

13 Again, as with color, the present point is not really to deny the existence of SLEs, but just to call attention to their lack of explanatory significance. Devitt (2006a: 189) does wonder how communication without real SLEs would be possible; but here the same kind of explanation is available as with so much human life: reliably shared illusions can suffice, as they happily do in, e.g., our talk about “rainbows” in “the sky,” or what turns out to be illusory, Kanizsa lettering – of “language”! – on many billboards. See my (2006) for discussion.

Along with insisting on the reality of SLEs, Devitt (2003, 2006a, b, 2008a, b) rejects this psychological conception, arguing that linguistics should be concerned with a “linguistic reality” (LR) that is “prior” to psychology:

(LR) Linguistics has something worthwhile to study apart from psychological reality of speakers: it can study a linguistic reality. This reality is in fact being studied by linguists in grammar construction. The study of this linguistic reality has a certain priority over the study of psychological reality. A grammar is about linguistic reality not the language faculty. Linguistics is not part of psychology. (Devitt 2006a: 40; see also p. 274)

But this, of course, presumes there is a substantial theory of language largely independent of psychology.15 Now, such a view did underlie much early work in linguistics, and is clearly advocated by Quine (1953c, 1960/2013), but, for at least much of the linguistic and psychological community since, it has been forcefully rebutted by Chomsky (1957, 1965, 1968/2006, 1980). At any rate, I submit that there’s no theory of a largely non-psychological “linguistic reality” that remotely begins to rival in depth and detail what Chomsky and now some 60 years of research have provided on behalf of the psychological view. Indeed, it’s striking that Devitt doesn’t challenge the explanatory power of a generative grammar, only the interpretation of it as psychology. But can a Chomskyan theory of grammar be separated from its psychological focus? On the face of it, it’s odd that Devitt should think Chomsky is or should be concerned with a non-psychological linguistic reality, since data about actual utterances or speech corpora seldom figure in standard Chomskyan discussions. And this is exactly to be expected. As Chomsky (1965) stresses, his concern is not with actual speech performance, but with the specific competence (technically, the I-language) that underlies it, which may or may not be manifested in actual speech corpora. Indeed, it is more likely indicated by what speakers find they don’t and “can’t” say, rather than what they do (cf. Pietroski 2005: §2.1). So what might be the argument for Devitt’s contrary view? It’s hard to resist the impression that it’s yet another instance of his commitment to the Moorean commonsense that seemed to be driving his views about color.

14 Thus, Chomsky and the phonologist Morris Halle write in their influential The Sound Pattern of English:

We take for granted, then, that phonetic representations describe a perceptual reality. Our problem is to provide an explanation for this fact…. Notice, however, that there is nothing to suggest that these phonetic representations also describe a physical or acoustic reality in any detail. (Chomsky and Halle 1968: 24–25)

The relevant external acoustic phenomena don’t display the segmentations into phrases, words and syllables that we take our utterances essentially to possess. Thus, SLEs display neither the stability nor the preservation of central properties in the acoustics that I argued in note 7 were needed to maintain at least an external realism about them.
15 Note that, unlike Platonists, Devitt doesn’t think language is causally independent of minds (2006a: 26). And I say he thinks language is “largely independent” of psychology, since he does think that regarding something as language requires regarding it as the result of human intentional action (see his 2006a: 230–231).

It crops up in his (re-)definition of “competence” in terms of dispositions to behavior (2006a: 128), and his focus on the expressive, communicative and conventional aspects of language (2006a: 29–30, 134; 2008a: 214, 218). These are all familiar enough commonsense conceptions, but ones that Chomskyans argue are misleading from a serious explanatory point of view. There is not space to pursue all these issues here,16 but fortunately Devitt made his commonsense bias most evident in a recent email that he was happy for me to quote. He produced the following argument for (LR):

(1) Any theory is a theory of x’s iff it quantifies over x’s and if the singular terms in applications of the theory refer to x’s.
(2) A grammar quantifies over nouns, verbs, pronouns, prepositions, anaphors, and the like, and the singular terms in applications of a grammar refer to such items.
(3) Nouns, verbs, pronouns, prepositions, anaphors, and the like are linguistic expressions/symbols (which, given what has preceded, are understood as entities produced by minds but external to minds).
Therefore:
(4) A grammar is a theory of linguistic expressions/symbols. (Devitt email to author, 10 Dec 2015)

Premise (1) is a version of Quine’s (1953a) “criterion of ontological commitment,” which we will accept for the sake of the present discussion. It is the conjunction of premises (2) and (3) that is under contention, since, although it’s a paradigm of commonsense, it seems actually to be false, at least for a Chomskyan linguistics.17 On behalf of his premises, Devitt (2003, 2006a) claims that “a great deal of the work that linguists do, day by day, in syntax and phonology is studying a language in the nominalistic sense” (2006a: 31), i.e., external expressions in an external language. And he is certainly right that, on the surface,

[w]ork on phrase structure, case theory, anaphora, and so on, talk of “nouns,” “verb phrases”, “c-command”, and so on, all appear to be concerned, quite straightforwardly, with the properties of symbols of a language, symbols that are the outputs of a competence. This work and talk seems to be concerned with the properties of items like the very words on this page. (2006a: 31, bold emphasis mine)

Any standard elementary linguistics textbook certainly appears to be referring to and quantifying over types of SLEs, tokens of which seem to appear on its pages as objects of discussion. Devitt takes such appearances to involve a commitment to the reality of nouns and verb phrases, per (3), as the external output of competence. Let’s allow what Quine (1953a) reasonably claimed, that a theory is committed to the entities it quantifies over, the values of its singular terms.18
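By way of illustration only (the regimentation, and the predicates “Sentence,” “VP” and “PartOf,” are my own schematic inventions, not Devitt’s or Quine’s own formalizations), a typical textbook claim such as “every sentence contains a verb phrase” might be regimented as:

\forall x\,\bigl(\mathrm{Sentence}(x) \rightarrow \exists y\,(\mathrm{VP}(y) \wedge \mathrm{PartOf}(y, x))\bigr)

Read at face value, the claim is true only if there are values for the bound variables x and y, i.e., only if there are sentences and verb phrases; that is the sort of commitment premises (1) and (2) trade on, and the question pursued below is whether Chomskyan theory should be read at that face value.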

16 I do so in my (2012) and in my (forthcoming).
17 Devitt expresses (2) and (3) in his (2008a: 211).
18 In line with my motto “Explanation first!,” I actually prefer to think of the criterion as one applied not purely formally, but in terms of what entities are posited by the theory that do genuine explanatory work. A discussion of this difference is, however, not at issue in the present discussion.

But Devitt knows his Quine well enough to know that theorists can’t always be taken at their workaday word. Thus, clearly for a Quinean, in considering the commitments of a Chomskyan theory, we shouldn’t go merely by what is said in elementary textbooks, but, rather, by what sort of entities are performing the genuine explanatory work the theory is being invoked to perform.19

15.3.2 Linguistic Explanation

So what explanatory work is being performed? As we’ve noted, Chomskyan linguists rarely if ever appeal to actual corpora data about the use of external speech. For all they care, some people may take vows of silence and never speak at all; some people may constantly be interrupting themselves, seldom producing complete sentences, and others might not bother to use the resources of recursion and complex embeddings.20 To make a long, familiar story short, Chomskyans want to explain a wide array of striking phenomena about human language for which they quite reasonably claim no non-psychological account is remotely plausible.

19 Collins (2008) raises essentially the same point, to which Devitt (2008b: 250) replied merely by quoting an introductory linguistics textbook, adding,

simply taking grammars to mean what they say, without any revision or reinterpretation, they give empirical explanations of linguistic expressions.

And maybe they sometimes do. But these are emphatically not the explanations Chomskyans intend to provide, as innumerable of Chomsky’s explicit statements make clear (see, e.g., his 1955/75: 5–7, 105; 1965: 4–5, 1968/2006: 25, 1980: 129–130). Note that it’s one thing to argue with the ontological commitments of a theory, as Quineans often do, quite another to reject its repeated, detailed, empirically motivated explanatory aims. Could Chomskyans really be so wrong about what it is they’re trying to explain? Devitt (2006a: 31) does cite what sound like Chomsky’s (LR) formulations of his task in his (1957, 1965: 9 and 1980: 222). However, the (1957) formulation was only for the limited purpose of mere lecture notes for a course, and both the (1965) and (1980) passages he quotes consist of (misleadingly) brief ways of expositing merely the competence/performance distinction, and are belied by ample other passages in the same volumes (see Chomsky 1965: 30ff, 1980: 189ff). Chomsky (2000: 159–160) does further complicate the issue by appearing to reject talk of “representations of.” However, he’s there concerned with the “of” relating representations to external objects. I use “representation” in what I take to be a traditional sense whereby there can be a representation of an x even though there is no x, as in the case of Zeus, phlogiston, and the highest prime; see my (2003, 2012, and forthcoming: ch. 8) for extended discussion.
20 Claims about non-recursive spoken language have been made by Everett (2012) with regard to the Pirahã, to which Chomsky rightly replied, “Fine; can they learn Portuguese?” But see Smith and Allott (2016: 188–195) for detailed criticism of Everett’s claims.
21 Chomsky (2013: 41) calls them, less mnemonically, “fine thoughts” – as in perfectly “fine thought, but it has to be expressed by some circumlocution.” Linguistic texts are bursting with examples. If one remembers nothing else about Chomsky, one should remember at least a few of these examples: a great deal of Chomsky’s approach can be readily re-constructed merely by reflecting upon how they could possibly be explained. They serve not only as striking evidence for the innateness of grammatical rules (although some of them may involve learned parameter settings), but, just as importantly, as crucial evidence of the psychological focus of a Chomskyan theory.

My favorite examples in this regard are what I call “WhyNots,”21 or (would-be) sentences that native speakers find oddly unacceptable, despite it being perfectly clear to them what thought such a non-sentence would express, but utterly unclear why one can’t acceptably utter them. Here are three representative sets of examples (“∗” is used to indicate unacceptability due to ungrammaticality):

(i) Island constraints. wh- words can’t be extracted from underlined places in certain phrases:
(1) ∗Who did the picture of ______ impress Mary? (cp. The picture of Bob impressed Mary.)
(2) ∗Who did you go to the movies with Bill and ______ last Thursday?

(ii) Negative Polarity Items (“NPIs”). Certain terms, such as “any” or “in weeks,” can only appear in non-factive or “negative” contexts (indicated by underlines):
(3) Do you have any wool? vs. ∗I have any wool.
(4) I doubt I have any wool. vs. ∗I know I have any wool.
(5) I haven’t seen him in weeks. vs. ∗I have seen him in weeks.

(iii) Parasitic gaps. One or more unpronounced elements, one of which is OK (as in (6)) only so long as the other one is there; if one is deleted, the other must be as well, which is why (7) is bad (underlinings indicate the “gaps” from which “which” was moved):
(6) Which articles did you file ______ without reading ______?
(7) ∗I filed Bill’s articles without reading ______.

It is virtually impossible to imagine a non-psychological explanation of these phenomena. They obviously aren’t the results of external laws about acoustic phenomena themselves, along the lines of laws of the speed of sound or Doppler effects. Perhaps Devitt thinks they emerged as the result of customs and conventions, along the lines of people stopping at red lights, passing on the right, or keeping a certain physical distance from each other in conversations (see note 22 below). After all, convention does arguably play a role in mapping meanings to atomic morphemes, and in the setting of parameters for a grammar, e.g., whether it is SVO or SOV. But if language were largely a conventional vehicle for conveying meaning in social communication, it would be quite astonishing that one couldn’t utter these and the multitude of other WhyNots to express the thoughts that – once one gets past the ungrammaticality – it would be easy to guess were being expressed. The WhyNots aren’t genuine options for a speaker of a natural language: they are seldom if ever uttered sufficiently for a custom or convention against them to emerge with the speed, stability and universality with which children respect them; and it is very doubtful that a convention allowing them could actually be adopted even if people tried.


A particularly striking example of the psychological conception at work concerns the specification of the rules for island constraints (see examples in (i) above). A recent proposal for at least some of them is to posit a process of “multiple spell-out,” whereby certain phrases are “finished” and shipped off to the phonological system before further rules can apply to them. This explanation is an alternative to the initial view that regarded such islands as due to constraints on certain syntactic configurations, as in Chomsky (1986), or, alternatively, to memory constraints, as in some recent proposals discussed by Phillips (2013) and others in Sprouse and Hornstein (2013). Whatever the correct account, the crucial point is that the specification of the grammar requires sorting out claims about the character both of certain processes and the various psychological structures over which they are defined. A theory of a “linguistic reality” conceptually independent of psychology would be like a theory of “nutritional reality” (based, say, on dining customs) independent of the rest of physiology. Even if one were to try to find laws, rules or principles of nutrition or grammars as external realities, one would have in the end to resort to the very internalistic theorizing that Devitt abjures in order even to demarcate the target subject matter. In any case, Devitt provides no serious alternative explanations. So far as I have read, he doesn’t present anything like a serious theory of external “Linguistic Reality” remotely comparable in richness and power to an internalist generative grammar. He often suggests that human language should be understood on analogy with the famous communicative dances of the bees (e.g., 2006a: 29–30, 134; 2008a: 214). But the bees’ dances are studied as external natural indication relations, which rarely obtain for natural languages (any genuine correlations of utterances with reality are at best a tiny fraction of language usage). More plausibly, he turns to some form of conventionalism, as when he endorses David Lewis’ (1969) “platitude that language is ruled by convention” (Devitt 2008a: 218).22

22 Devitt does have a go at trying to show how it is possible for conventions to, e.g., yield unvoiced elements, such as PRO (the subject of, e.g., infinitives, as in Bob tried PRO to swim). Devitt writes:

Consider the string “Bob tried to swim.” The idea is, roughly, that each word in the string has a syntactic property by convention (e.g., “Bob” is a noun). Put the words with those syntactic properties together in that order and the whole has certain further syntactic properties largely by convention; these further properties “emerge” by convention from the combination. The most familiar of these properties is that the string is a sentence. A more striking discovery is that it has a “PRO” after the main [finite] verb even though PRO has no acoustic realization. There is no mystery here. (2008a: 217–218)

But it’s hard to take this hand wave seriously. The more one focuses on the utterly non-obvious relevant rules, the elaborately hierarchical tree-structures, and the plenitude of multiple occurrences of unarticulated elements, the harder it is to see how the rules could possibly just “emerge” from kids just “combining” words. I’m reminded of Louise Antony’s (2002) nice rejoinder to Hubert Dreyfus’ similar rejection of representationalist theories of skills. She quotes Monty Python’s “How to Do It” advice: How to play the flute. (Picking up a flute) Well here we are. You blow there and you move your fingers up and down here. (https://www.youtube.com/watch?v=tNfGyIW7aHM) Devitt (2008b: 253) is later slightly more cautious, retreating to agnosticism about whether the rules on PRO are conventional or innate. But what about all the other WhyNots and the rules needed to explain them?

But, as we noted, aside from the mapping of morphemes to phonemes, and (perhaps) parameter settings, convention seems to be hopeless in explaining, e.g., the WhyNots, or the speed, universality and odd inflexibilities of language acquisition. Chomskyan linguistics is perforce about psychology because that’s simply where the relevant law-like regularities lie. Pace Devitt (2008b: 251), it is phenomena such as the WhyNots that “warrant the robust psychological assumption that there are mental states with the properties the grammar ascribes to VPs.” Given that Chomskyans regard the explanatory point of their work to be psychological, it’s not surprising therefore that they don’t take themselves to be discussing what they regard as an ill-defined set of external SLEs themselves. Rather, they are concerned with an internal computational explanation defined over mental representations of them – just as vision theorists are concerned not with real colors, but with apparent, presumably merely represented ones.23 Actual external SLEs, and any “linguistic reality” independent of that computational psychology, are largely irrelevant. Devitt has perhaps been quite understandably misled by the fact that Chomskyans don’t always make this plain, any more than vision theorists do. For starters, they often talk, perfectly commonsensically, of sentences uttered or written on a page, and of what one can say in “English” or “Mandarin” – but, as they also frequently point out (e.g., Chomsky 1986: 22), this latter talk is simply short for the “I(nternal)-language” that happens to be approximately shared by certain speakers. More confusingly, it’s also true that the principles are often expressed as though they were true of certain objects. Consider, as a simple example, one of the standard constraints on negative polarity items (NPIs; see examples in (ii) above):

(NPI-C) An NPI must be c-commanded by a licensor.24

This is really just a clear and succinct way of pointing to what are in fact constraints on the computations of representations of NPIs. The relevant principle should literally be stated along the following lines25:

(NPI-C′): A representation of an NPI must represent it as at a node in a tree in which it is c-commanded by something represented as a licensor.
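Since the constraint is defined over represented tree structure, it may help to see it stated operationally. What follows is a minimal, purely illustrative sketch of my own (the toy Node class, the labels, and the functions attach, dominates, c_commands and npi_licensed are all my inventions, not anything drawn from Devitt or from the linguistics literature), showing how a represented NPI’s being c-commanded by a represented licensor can be checked over such a tree:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    label: str                            # e.g. "S", "VP", "Neg:not", "NPI:in-weeks"
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = None

def attach(parent: Node, *kids: Node) -> Node:
    # Build the represented tree by recording parent/child links.
    for k in kids:
        k.parent = parent
        parent.children.append(k)
    return parent

def dominates(a: Node, b: Node) -> bool:
    # a dominates b iff b is among a's descendants.
    return any(c is b or dominates(c, b) for c in a.children)

def c_commands(a: Node, b: Node) -> bool:
    # a c-commands b iff b is a sister of a, or a descendant of one of a's
    # sisters (the "sister and any of her descendants" relation of note 24).
    if a.parent is None:
        return False
    return any(s is b or dominates(s, b)
               for s in a.parent.children if s is not a)

def npi_licensed(npi: Node, licensors: List[Node]) -> bool:
    # (NPI-C'): the represented NPI must be c-commanded by a represented licensor.
    return any(c_commands(lic, npi) for lic in licensors)

# Toy representation of "I haven't seen him in weeks" (example (5) above):
npi = Node("NPI:in-weeks")
neg = Node("Neg:not")
vp = attach(Node("VP"), Node("V:seen"), Node("NP:him"), npi)
s = attach(Node("S"), Node("NP:I"), neg, vp)

print(npi_licensed(npi, [neg]))  # True: the negation c-commands the NPI
print(npi_licensed(npi, []))     # False: no licensor, as in "*I have seen him in weeks"

Nothing in the philosophical dispute hangs on these details; the sketch is only meant to make vivid that (NPI-C′) is a constraint on represented structure, not on any external acoustic item.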

23 Vision theorists, like Palmer, routinely proceed to talk about “colors,” despite denying their reality. As with most cognitive psychology, whether vision, linguistics, or theories of Greek mythology, it’s convenient to express the content of mental representations by simply pretending the representations are true (see my (forthcoming: §6.4) for discussion).
24 “c-command” is the relation of a node in a tree structure to its sister and any of her descendants; a licensor is, e.g., an interrogative (Do you …?) or a “negative” like not or doubt – but the details aren’t important here.
25 Why don’t the following sorts of formulations appear anywhere in texts? Well, obviously they’d be awkward and prolix. But I fear it’s also because of rampant use/mention confusions that Devitt (2006a: 69–71) and I (2003) have elsewhere deplored, which are partly due to indifference and partly to confusions about intentionality (see my (forthcoming)), but which unfortunately can easily mislead the reader into a realism based on a use/mention confusion, according to which the SLEs are the neural states themselves that are said to represent them (so that, e.g., [+ voice] is a feature of a brain, not a larynx!).

However, it would obviously be silly at this point to be cluttering up linguistic texts with such cumbersome prose. Linguistics is already hard enough. What is perfectly sensible instead is simply to pretend there are the SLEs and tree-structures in which they appear, and then treat any rules, principles, constraints, or parameters as true under the pretense. The point here is actually quite general to any psychological theory that aspires to be purely “internalistic,” as Chomsky’s does. Insofar as psychologists are interested in how a system thinks, imagines or, more generally, represents the world, they are not interested in whether the things that the system represents are real, or whether the principles that seem to govern those “things” are actually true of anything real. It’s enough that the system computes such representations. Thus, to return to the example of color: vision theorists may make rich and important claims about the character of the visual system by talking about colors, such as red and green, and perfect geometric shapes, such as triangles and squares, all the while believing there actually are no real colors in the external world, nor any genuine triangles or squares; indeed, for all they care, all such things may be metaphysically impossible! All that’s essential to their theories is that there are representations of such things, and that whatever principles and constraints are set out in their theories are “true in the pretense”; for example, that in the visual pretense objects are illuminated as from above.26 Devitt’s disregard of the demands of the relevant explanations extends also to his treatment of the special role linguists accord intuitions, where he (2006a: 118) finds it “hard to see what shape … an explanation” in terms of special internal roles would take. I argue at length in my (2006, 2014a) that he fails to appreciate the perceptual explanatory role the linguists have in mind, but I haven’t much to add to the criticisms of his view I have made.27 I turn instead to expand on problems with his views on a related topic, his rejection of the a priori.

26 None of this is to say (à la Jackendoff 1987, 2006) that a representation-independent world is somehow unintelligible. Pretending that p doesn’t, after all, entail (even believing) that not p. Here I share Devitt’s (2010a) insistence on at least a scientific realism about the (possible) entities genuinely needed by true explanations.
27 Briefly: I argue for the causal efficacy of structural descriptions of SLEs, which seems to me supported by empirical evidence that Devitt’s alternative hypotheses can’t explain. In my (forthcoming), I supplement my earlier pieces by citing still further experimental evidence (much of it from Fernández and Cairns 2011: 206–224) regarding, e.g., involuntary perception of native speech, its rhyme and register; garden paths; syntactic priming, “slips of the ear” and syntactic “nonsense” (as in Lewis Carroll and Derrida). Devitt (2006a: 224; 2014: 284–287) mysteriously claims that such phenomena can be explained merely by representations of semantic messages having syntactic properties, rather than representing them. But it’s unclear how this explanation would go: a representation’s merely having certain, e.g., neural properties, after all, doesn’t make those properties perceptually real. See Maynes and Gross (2013) for related discussion.

15.4 The A Priori

Regarding the a priori, Devitt writes:

there is only one way of knowing, the empirical way that is the basis of science (whatever that way may be). (1998/2010: 253)28

This only way of knowing is Quine’s global confirmation holism, whereby even claims quite remote from experiential test, such as logic and mathematics, are in principle open to revision by it:

with the recognition of the holistic nature of confirmation, we lack a strong motivation for thinking that mathematics and logic are immune from empirical revision. (Devitt 1998/2010: 253)

Not surprisingly, he rejects any special role for the kind of intuitions to which traditional philosophers have appealed:

We have no need to see philosophical intuitions as a priori. We can see them as being members of a general class of empirical intuitions. (2010b: 276)

Devitt rejects the a priori chiefly for the same sort of reason he rejects special appeals to intuitions in linguistics, finding Quine’s alternative account ever so much clearer:

the whole idea of the a priori is too obscure for it to feature in a good explanation of our knowledge of anything. If this is right, we have a nice abduction: the best explanation of all knowledge is an empirical one. (2010b: 283)

I want separately to challenge both of the claims here: (i) that Quine’s “all is empirical” proposal is a good abduction, and (ii) that the notion of the a priori is, by comparison, somehow hopelessly obscure. But, first, a crucial distinction about how to approach these issues.

28 Devitt (1996: 2; 1998/2010: 253) actually defines “naturalism” in this way, claiming “[Rey] is a bit confused about the sense in which he [accepts it], given his position on the a priori” (1998/2010: 254). As I said at the start, I reject the definition, preferring the explanatory conception that seems to me to be what Quine is defending in his (1969). It is worth noting that Devitt’s characterization is not included among the many other characterizations that either Steup (2018), Papineau (2016) or Rysiew (2017) discuss in their Stanford Encyclopedia articles on the topic(s). At best, Steup notes that a “naturalized epistemology” treats epistemology as part of natural science – but this of course doesn’t entail anything substantive about “the only way of knowing.” But, fine, one can see how Devitt might have in mind a usage that assimilates this continuity of epistemology with natural science – there’s no “first philosophy” (as even Russell allowed; see note 1 above) – with the view that logic and arithmetic are also continuous with it, and known in the same way (as Russell surely didn’t allow!). I just want to allow that these issues be distinguished, and that, in any case, it’s simply not true that “what has interested nearly all philosophers under the name ‘a priori’ has been a non-naturalistic way of justifying beliefs” (Devitt 2010b: 273). Epistemologically naturalistic defences of the a priori status of logic and arithmetic, like mine and those of Goldman (1999) and Antony (2004), not to mention the various strategies of the early Wittgenstein and the Logical Positivists, are all perfectly intelligible cases in point.

15.4.1 A Working vs. an Explanatory Epistemology29

Discussions in epistemology don’t to my knowledge sufficiently distinguish what I call a “working” epistemology from an explanatory one. I take a working epistemology to be the explicit epistemology that standardly concerns philosophers when, for example, they address the practice of people consciously justifying what they think and say in the face of doubts, whether it be ones raised by disputes about specific research, whole theories, religious doctrines, or by Cartesian worries about dreams, brains in vats and fake barns in the countryside. Some of the discussion concerns confirmation in science, but much of it, like that of Moore that Devitt cites, focuses on commonsense claims and reasoning, defending it ultimately against the predations of the sceptic. By contrast, an explanatory epistemology is one not concerned with conscious justifications or defenses of beliefs, but with providing a general scientific account of how animals and people come to have their often remarkable cognitive abilities: the rationality, intelligence and frequent success of many of their efforts, whether or not this is of any use in settling disputes or vindicating commonsense. Historically, of course, working epistemologies have often been accompanied by explanatory speculations about the “origins of ideas,” of the sort one finds persistently in the philosophical tradition, from Plato to the empiricists, up to and including Quine and Davidson, most of which, however, are seldom informed by any serious, empirically controlled research. I don’t want here to reiterate familiar complaints about this (what seems in retrospect) often irresponsible speculation, but merely want to emphasize how one should be careful about inferring the character of this explanatory epistemology from features of the working one. After all, there is no real reason the two should coincide: maybe what we fairly self-consciously do in working reflection and explicit argument is quite different from what, as it were, our minds/brains may more efficiently do, often unself-consciously and inexplicitly, in reasoning and learning about the world.30

One might think that this distinction between a working and an explanatory epistemology is simply the familiar distinction between a “normative” and a “descriptive” one. And perhaps it is; but I think it’s crucial to notice how normative considerations may enter into an explanatory psychology in a way that might differ from the role they play in a working one, at least until we have a sufficiently rich psychology that is able to unify the two. After all, one task of a “descriptive” explanatory psychology is surely to explain just how we and other animals come to understand things and succeed in so many of our efforts as well as we appear to do.

29 I initially set out the views of this section in my (2014b: §2); some passages are identical, but others have (I hope) been improved, and I’m trying to improve further in my (forthcoming).
30 I emphatically do not mean to be drawing a “personal”/“sub-personal” distinction here, only a distinction between the obviously different purposes of a working vs. an explanatory epistemology (hence the “as it were”).

It’s not unlikely that at least some of this success is due to our using strategies that, given our innate endowment in our normal environmental niche, are immensely reliable, sometimes “rational,” even ingenious (Why did Fischer win so many chess games? He was no dope!). And these strategies might not be ones we’ve even noticed in our everyday working epistemic practices.31 Of course, creatures’ successful strategies may turn out to be highly specialized and specific to particular domains – say, of language, or the folk theory of biology or mind – and there may be no general explanatory epistemology to be had. As has become clear in the last several decades, genuinely explanatory epistemology turns out to be immensely more difficult than traditional philosophers have supposed. Our working, reflective practice simply cannot wait on its results. Even if from an explanatory point of view there turns out to be a priori knowledge, aside from mathematics it may be difficult to ascertain precisely which things count, and so, for the foreseeable future, appeal to it may not settle any non-mathematical disputes.32 Given our ignorance of what is genuinely responsible for our cognitive abilities, it may well be that the best working epistemology for the foreseeable future is Quine’s pragmatic “Neurathianism”: in explicitly justifying one’s claims, one starts at different places at different times, depending upon what serious doubts have been raised about some issue, much as, in Quine’s familiar figure from Neurath, one repairs a boat while remaining afloat in it, piece by piece, standing on a second side to repair a first, on a third to repair the second, and so on, moving around the boat, only to stand ultimately on the first to repair the rest.33

15.4.2 Is Quinean Holism a Good Abduction?

The Neurathian figure can seem to invite Quine’s other familiar figure of “confirmation holism” that he takes from Pierre Duhem (1914/91):

31 I don’t mean to be suggesting for a moment that the explanatory ascription of attitudes is generally “normative,” as many have insisted (see the exchange between myself and Ralph Wedgwood in McLaughlin and Cohen 2007): perfectly descriptive facts can explain satisfactions of norms. Another distinction that is partly orthogonal to the one I’m drawing is between an “internalist” and an “externalist” epistemology, the first concerned with justifications a person may provide, the second with the relations a person or animal may bear to its environment (see Steup 2018: §2.3). But an explanatory epistemology could be purely internal, along lines I’ve noted Chomskyans pursue.
32 To generalize Putnam’s (1965/75: 36) observation about analyticity, there may be an a priori, but it “cuts no philosophical ice … bakes no philosophical bread and washes no philosophical windows” – at least not in any of the usual working philosophical disputes. Thus, Devitt (2010b: 283n) is precisely right in noticing that a priori claims “could do little epistemic work” – in, I would add, a working epistemology, since (apart from mathematics) it may only be available in an explanatory epistemology, which may be one of the last things we’ll ever get right. As Hegel pointed out, the owl of Minerva may spread its wings only with the falling of the dusk (see my (1993) for discussion).
33 Cf. Russell’s (1914) denial of a “first philosophy” quoted in note 1 above.

our statements about the external world face the tribunal of sense experience not individually, but only as a corporate body. (Quine 1953b: 38)

Famously – or notoriously – Quine enlarges on Duhem’s idea by including in the scope of the holism claims about logic and mathematics: they also stand in (dis)confirmatory relations to sensory experience, even if those relations are remote and indirect. With enough pressure from (very?) unusual experience we perhaps might revise them (although, would the revisions really be intelligible? Cf. below). And this, of course, is what Devitt endorses as “the one way of knowing, the empirical way that is the basis of science.” It’s striking that neither Quine nor Devitt ever spell out how their holistic empirical confirmation could actually work.34 At best, Quine appeals to five “virtues of hypotheses” – simplicity, generality, modesty, conservatism, falsifiability – that, as we noted earlier, he set forth in a freshman text on reasoning, which, although they serve as good freshman advice, are hardly a serious theory. As Quine himself admitted when he was chided on the matter: “I treated it mainly at that level because I have known little to say of it that was not pretty much common knowledge” (1986: 493). But if epistemology is seriously “a chapter of natural science,” aren’t we owed a little bit more than commonsense? Indeed, Quine seemed to present his holistic claim equally as a claim about an explanatory as well as a working epistemology. He, of course, assumed a behavioristic framework, and viewed people’s cognitions (if that’s what he’s entitled to call them) as essentially bundles of dispositions to assent and dissent that change under the pressure of stimulation, with no particular disposition sacrosanct. And perhaps something like such a view is convenient enough for working purposes, where we’re largely concerned with just such dispositions and disputes among the target sentences. Just how intelligent such a system could be, and whether human beings are in fact remotely such systems, are not questions either Quine or Devitt even begin to seriously consider. Quine (1960/2013) relied on a highly problematic Skinnerian behaviorism, to which I doubt Devitt or anyone would any longer appeal. Does Devitt have a sufficiently refurbished version of Quine’s behaviorism in mind? It is easy to underestimate the extent of our ignorance in this regard. Despite acknowledging our lack yet of an adequate account of either the metaphysics of mathematics or an epistemology of even empirical knowledge, Devitt thinks we do have

an intuitively clear and appealing general idea of this way, of “learning from experience.” It starts … from the metaphysical assumption that the worldly fact that p would make the belief that p true. A belief is justified if it is formed and/or sustained by the experiences of a mind/brain in a way that is appropriately sensitive to the putative fact that p. Many instruments – thermometers, voltmeters, etc. – are similarly sensitive to the world…. [T]he mind/brain is similar enough to the instruments to make empirical justification quite unmysterious, despite the sad lack of details. (2010b: 284)

34 I have to confess to now being deeply embarrassed about how little I and others have pressed this issue, having been under the spell of Quine’s view since the Sixties (talk about being “in the grip of a picture”!). I’m genuinely grateful to Devitt for so insisting on the view as to cause me, countersuggestible as I am, to finally notice its virtual vacuity and/or empirical outlandishness.

But this is a strange line of thought. In the first place, it may well be true that empiricism hoped that an account of empirical knowledge would begin with causal (e.g., perceptual) interactions with the external world – but what does this have to do particularly with naturalism, or with confirmation holism? Thermometers certainly don’t work holistically (as we will note shortly, this would seem to be a crucial respect in which the mind/brain is different from any known instrument). Even classical empiricists realized there was a problem with a causal theory of mathematical knowledge, and one would have thought – Quine certainly seems to think – that the vague holistic virtues like simplicity, etc., enter the picture precisely when causation from external stimuli alone peters out. It may well be that the curvature of space/time lies in a chain of causes at the other end of which occurs Einstein’s general theory of relativity, but I wouldn’t describe this as an “intuitively clear and appealing” answer to how Einstein’s knowledge was possible! Nor does saying that it was the “simplest,” most “conservative,” most “falsifiable,” etc. hypothesis really make things much clearer. Einstein’s theory – or present quantum theory! – is simpler and more conservative only to a very sophisticated palate! Devitt acknowledges the ignorance of epistemology that I have stressed:

As Georges Rey (1998) is fond of pointing out, we are not close to solving the epistemological problem of anything. Since we do not have a serious theory that covers even the easiest examples of empirical knowledge – examples where experience plays its most direct role – the fact that we do not have one that covers the really difficult examples from mathematics hardly reflects on the claim that these are empirical too. Beyond that, the present project needs only the claim that the empirical way is holistic. We have no reason to believe that a serious theory would show that, whereas empirical scientific laws are confirmed in the holistic empirical way, the laws of mathematics are not. (2010b: 274–275, emphasis his)

But, just as much as there seem to be non-controversial examples of empirical knowledge, there equally well at least appear to be excellent examples of non-empirical, “a priori” knowledge. Indeed, it’s hard to think of any mathematical claim in the history of the subject that has in fact depended for its ultimate justification upon experience, or been refuted on the basis of it.35

35 There is the case of Putnam’s (1968/75) proposal to revise logic in the light of quantum mechanics. But arguably this can be construed as a case of the revision of the classical theory of logic, not of logic itself (see Stairs 2015 for nuanced discussion). It is often thought that Einstein’s empirical theory of General Relativity gave the lie to Euclidean geometry (Devitt (2010a: 257) suggests as much), but this is an historical and conceptual error: what refuted the parallels postulate of Euclid was the development in the eighteenth–nineteenth centuries of non-Euclidean geometries by Gauss, Riemann and Lobachevsky – all reasoning purely mathematically. Once the mathematical possibility of spaces with different metrics was understood, it was then an empirical matter to determine which specific metric (flat or curved) applied to actual space/time. The math and geometry were a priori; what was empirical was, per usual, determining which mathematical structures apply to the physical world.

True, Quine (1953b, 1954/76) audaciously claims that one can generalize Duhem’s views about holistic confirmation to include logic and mathematics; and, as an antidote to simplistic claims of “truth by convention” and simply as noticing a neglected conceptual possibility (or, anyway, we understand the words), the claim should perhaps not be lightly dismissed. But why should it be taken more seriously than that? It’s crucial to notice that Quine nowhere presents the least bit of empirical evidence for it. Dare I say?: it seems an entirely a priori speculation. In fact, once one thinks about how such holistic confirmation might actually work, it can seem pretty baffling. As Fodor (2000) pointed out: if confirmation is “quinean” (i.e., holistic) and “isotropic” (every belief is potentially relevant to every other) in the ways that Quine suggested, then, on the face of it, it would seem computationally intractable. Our only model of computation is Turing’s, and that’s a model for which it’s absolutely crucial that its transitions occur locally.36 But isotropy would appear to require that a computation be sensitive to a potential infinity of possible connections – an infinitely long machine table? – and quinity would require making a global assessment of the totality of one’s beliefs, a computation that certainly wouldn’t seem to be local. For Fodor, all this was reason to think that the mind couldn’t be a Turing-style computer – “the mind doesn’t work that way” was the claim of his eponymous book.37 I’m not sure that there aren’t replies to Fodor’s argument (see, e.g., Carruthers 2003), but, turning Fodor’s ponens into a tollens, he’s certainly provided a reason to doubt that Quine’s “holistic epistemology” is quite so unproblematic as Devitt seems to think. Actually, Quine (1975/81: 71) himself retreats from the full holism, noting that confirmation needn’t include quite all “our statements about the external world,” but merely “large chunks” of them. But, if it’s only large chunks, then there’s no longer any challenge here to the a priority of a good “chunk” of logic and mathematics, which might remain quite as insulated as they strikingly appear to be. Duhem didn’t include them in his holism; one wonders why Quine thinks he really can include them in his. The question is what entitles Quine or Devitt to the “only”s in their claims (“only as a corporate body,” “only one way of knowing”).38 In any case, as many have noted, Quine’s (1953b: 43–44; 1960/2013: 10–11) specific account of the apparent a priority of logic and mathematics as simply “central” to our web(s) of belief is patently wrong. Especially if by “centrality” he really means the relative unlikelihood of changing one’s speech dispositions, there are obviously countless sentences of logic and mathematics that people will revise for any number of reasons, from elementary corrections of computing errors to more complicated revisions in the light of paradoxes or philosophical reflection (hasn’t nominalism driven some to deny there’s actually a cube root of 8?).

36 Indeed, on the physical face of it, locality would appear to be essential to any explanation of anything (I presume neither Devitt nor Quine are going to appeal to very puzzling non-locality claims in some interpretations of quantum physics).
37 Note that despair over essentially this point was one consideration that drove Descartes (1637/1970: 116) to his dualism.
38 Note that one might wonder how various sorts of self-knowledge are supposed to fit into Devitt’s “one way of knowing.” Don’t people seem to have various sorts of privileged access to their thoughts, sensations, attitudes, motor movements and positions of their body (maybe even features of their I-language!)? I should think it requires some pretty subtle empirical research to determine just where someone’s epistemic privileges begin and end. A priori-ish claims about “one way of knowing” shouldn’t preclude or constrain that research (see my (2014a: §3.21) for discussion of what seems to be special introspective knowledge of many of one’s attitudes).


Conversely, there are countless non-mathematical beliefs that most people will likely never revise, e.g., that the world has existed for more than five minutes, or that animals have lived and died (it would actually be mildly amusing to test Quine's proposal to see just what people might revise when and under which pressures). Notice that Quine's fastening in this way on revisability as a way of thinking about the web of belief is actually on the face of it at odds with the priority of scientific explanation that I take to be the moral of his (1969) "Epistemology Naturalized." Most people considering revisions, after all, are on the whole less likely to heed science than they are to cleave to commonsense (see Kahneman 2011). I don't want to insist upon my understanding of Quine, but am struck by how this "commonsense" understanding fits with Devitt's Mooreanism, particularly his claim, quoted above, that

Realism about ordinary objects is confirmed day by day in our experience. It is central to our whole way of viewing the world, the very core of common sense. (Devitt 2010a: 62, 104)

Are commonsense views then in the center along with logic and mathematics?! This seems very doubtful, to put it mildly. All the more reason to view my insistence on the priority of scientific explanation as a substantive alternative to Devitt's "Metaphysics first!". In any event, there's surely a lot more to the psychology of logic and mathematics than mere unrevisability. What most dramatically distinguishes them from mere empirical beliefs and most commonsense is the virtual unintelligibility of their revision: putting (a priori?) nominalistic scruples to one side, it is impossible to imagine that 2 + 2 really isn't 4, or that there aren't primes between 2 and 10. By contrast, it's perfectly easy to imagine most central, empirical beliefs to be false, e.g., the aforementioned otherwise unrevisable beliefs that the world has existed for more than five minutes, etc. What on a Quinean model accounts for this dramatic psycho-epistemic difference?39 Devitt seems to think that the only serious obstacle to extending the Duhem hypothesis to logic and mathematics is the metaphysical issue of the status of mathematical objects: "we no longer have any reason to think that, if we solved the metaphysical problem, the epistemological problem would not be open to an empirical solution" (2010b: 275). It's hard to believe Devitt is taking the epistemic problem seriously. Even if the metaphysics of mathematics were settled, there would still be quite difficult, significant epistemological problems: again, how to explain at least the apparent a priority of arithmetic, and the unintelligibility of denials of instances of it? More generally, how are we to make sense of coherent revisions of the web of belief without all manner of constraint on logic and mathematics?

39  Actually, the details here bear experimental scrutiny: denials of claims about particular numbers ("267 + 12 = 279") and particular inferences ("Some pets are dogs, therefore some dogs are pets") seem on modest reflection unintelligible, but generalizations, especially non-elementary ones, say, Cantor's results regarding infinity, or the law of excluded middle, might not be (cf. Russell 1912/59: 112–113).


What is the status of whatever criteria for revision that we employ?40 Are they, too, revisable? Are there really no ultimate constraints that would prevent different people winding up with just any theories of the world, all equally "rational"?! In any case, on the face of it, the most serious scientific research on confirmation seems to take for granted the special status of mathematics: Bayesian confirmation, for example, would make no sense without the theory of probability and the assumption that certain sentences – specifically, truths of logic and mathematics – are treated as certain, with a probability of 1 (see Hájek 2012: §1).41 That is, they are treated differently from the hypotheses whose probabilities the Bayesian calculations are recruited to compute; pace Quine and Devitt, they seem, indeed, to be treated as necessary and a priori. Thus, it's simply not true that "we no longer have any reason to think that, if we solved the metaphysical problem [of mathematics], the epistemological problem would not be open to an empirical solution." What we lack is a sketch of a serious theory that it is! Anyway, surely the burden is on the Quinean to defend the claim that arithmetic is within the scope of empirical revision, against, when you think about it, the overwhelming evidence and serious explanatory proposals that it isn't. Ironically enough, Quine's commonsense advice as part of a working epistemology may be precisely what could lead us to recognize that the best explanatory epistemology of our knowledge of logic and mathematics – and maybe much else besides – is that they are a priori after all. Indeed, for anything Quine or Devitt have said, there might well be (heaven forfend!?) transcendental conditions for the possibility of knowledge of precisely the sort Kant envisioned (were Quine or Devitt to offer such rich detail!). In any case, the suggestion that Quine has made "a nice abduction: the best explanation of all knowledge is an empirical one" (2010b: 283) is – not to put too fine a point upon it – patently preposterous. Quine's thorough-going confirmation holism as "the only way of knowing" is, in his and Devitt's hands, a mere mantra, a check written on a non-existent bank.
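A minimal way to see the point, in standard probability notation: the probability axioms already assign every logical truth T probability 1 (and the mathematics used in stating the theory is treated the same way), and conditionalizing on any evidence E with P(E) > 0 cannot disturb that assignment, since T ∧ E is equivalent to E:

\[
P(T \mid E) \;=\; \frac{P(T \wedge E)}{P(E)} \;=\; \frac{P(E)}{P(E)} \;=\; 1 .
\]

No course of Bayesian updating, then, can lower the probability of a logical or mathematical truth; the logic and mathematics are held fixed while the empirical hypotheses are revised.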

40  Hartry Field in his (1996) and in conversations with Devitt pressed this issue, which Devitt addresses in his (1998/2010), finding something "fishy" about this form of argument. But he confesses:

I have not located the source of the smell. And this is something that should be done, because the dominant idea is appealing. Alas, I do not know how to do it. I comfort myself with the thought that we know so little about our evidential system. (1998/2010: 270)

But if in fact we know so little about our evidential system, how can Devitt be so confident that Field's argument is in fact "fishy," and that the empirical way of knowing is the only way? I confess, it's continually difficult to resist the charge that even Devitt (1998/2010: 268) notes to be "a neat and perplexing point," that, for Quineans, it would appear to be a priori that there's no a priori!
41  Cf. Frege (1884/1980: 16–17): "Induction … must base itself on the theory of probability…. But how probability could possibly be developed without presupposing arithmetic laws is beyond comprehension."


15.4.3  Is a Naturalistic A Priori Obscure?

In reply to Devitt's charge that the a priori is hopelessly obscure, I have elsewhere (Rey 1993, 1998, 2014a, b) defended the possibility that our logical competence is partly the result of an axiom-free, Gentzen-style natural deduction system that is wired into our brain. Intuitive judgments produced as a causal result of this competence would count as a priori knowledge since they would not be based on any experiential premises, but merely on applications of rules that (we may suppose) are consciously perspicuous and indubitable to their users, absolutely reliable (producing truths in any world the users might experience or inhabit), and, arguably, the constitutive basis of their competence with the contained operators.42 The situation is precisely as a traditional philosopher imagines it; it's just that we are additionally supposing that there is in fact a perfectly good, natural causal explanation of their intuitive verdicts, one that underwrites the claims of rational insight with a mechanism that would, under idealization, be absolutely reliable and as independent of experience as a reasoning process could possibly be. It's difficult to see why the results of such a process shouldn't count as genuine a priori knowledge, and, indeed, provide a plausible explanation of it. I won't rehearse all of Devitt's and my previous exchanges on the topic, except to mention a few new points in reply to some of Devitt's criticisms that bear on the issue of explanation. Devitt also argues that, since a person's competence of this sort might be a matter of luck, the beliefs arrived at by such a system wouldn't be "non-accidental," and so the verdicts to which such a system gives rise couldn't count as (justified) knowledge (1998/2010: 259, 261). But this doesn't follow. Note, first, that, though the system might have evolved by luck, this doesn't imply that the deliverances of the system are a matter of luck. Ex hypothesi, the system is (ceteris paribus) absolutely reliable, counterfactually issuing in logical truths under explanatory idealization; why isn't that enough? Indeed, secondly, it is wholly possible that the human ability to have knowledge of anything may well be a result of evolutionary "luck" – it was likely a matter of random mutation, not selection, that humans evolved that were capable of astrophysics and mathematics at all! Why should that fact render the results any the less perfectly reliable, well-justified knowledge? Thirdly: an inference can be fully justified without including premises that the inference is justified, much less the explanation or justification of that fact.

42  Note that I've added "or inhabit" here to rule out the accidentally believed a posteriori necessities, such as "All men are mortal" or "Water is H2O" that also worry Devitt. Pace his (1998/2010: 260 n. 9), I'm still inclined to think that logical truths are true by virtue of their pattern of operators alone, being moved by Gillian Russell's (2008: 33–34) nice analogy with "xy = z", being satisfied by ⟨5, 0, 0⟩ by virtue of y alone: logical truths are worldly enough; it's just that the world makes no more distinctive contribution to them than does 5 to the truth of 5 × 0 = 0. But there's no need to press this specific issue here; knowing logical truth by virtue of knowing the meaning of the operators alone will suffice. For further replies to Devitt, see my (1998) which appeared in the same place as his critique.
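To fix ideas about the format being invoked: a Gentzen-style natural deduction system of this kind has no axioms at all; its content lies entirely in introduction and elimination rules for the logical operators. Schematically, for conjunction and the conditional:

\[
\frac{A \qquad B}{A \wedge B}\;(\wedge\text{-I})
\qquad
\frac{A \wedge B}{A}\;(\wedge\text{-E})
\qquad
\frac{A \to B \qquad A}{B}\;(\to\text{-E})
\]

Nothing in such a system plays the role of a premise-like axiom; whatever logical work gets done is done by the rules themselves, which is the feature the reply to Devitt's "axioms" worry in note 43 below turns on.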


Despite his obstinate insistence to the contrary, Lewis Carroll's (1895) tortoise is fully justified in drawing a logical inference without need of premises that express and maybe explain that he is so justified. Similarly, for our Gentzen creature, who may have no meta-logical defences or explanations of her nonetheless perfectly valid and reliable, non-accidental inferences.43 My defence of the possibility of a naturalistic a priori is not meant as a defence of it as a fully realistic psychological hypothesis. It's enough that it's perfectly intelligible as a serious explanation. This is all that is required to undermine Devitt's charge of "obscurity," and his and Quine's insistence that a holist "empirical way" is the "only" way of knowing. Indeed, as we've noted in the preceding section, the thoroughgoing holism that Devitt endorses is in fact totally obscure, on the face of it incompatible with all the evidence we have about how mathematical knowledge is attained, and with the only promising research strategies we yet have for understanding knowledge generally. Perhaps confirmation will turn out to be sometimes holistic, and I wouldn't be surprised if it is (should the property ever be specified in sufficient detail and plausibility to assess). It's only the "only" that I contest. Without the "only," holistic confirmation is entirely compatible with, and may even require, more local sorts, and therefore it is entirely compatible with a priori knowledge. It would just turn out that one of the constraints on acceptable holistic confirmations is that, e.g., logic and math always be preserved – which, of course, accords perfectly with actual scientific practice! In any case, surely the postulation of a dedicated Gentzen-style natural deduction computational system is vastly less obscure than what I've argued is the groundless Quinean proposal of totally holistic confirmation on which Devitt uncritically relies.

43  Devitt also wonders why I chose an axiom-free system:

suppose that [someone's belief in a non-obvious logical truth] was not produced by a nonaxiomatic system of natural deduction but is inferred from some general logical beliefs. Once again, the epistemic status of [a logically true belief] depends on the status of the general beliefs. So we have to say more to show that they are knowledge. It is hard to see how the change from general beliefs to sub-systems of rules could remove the need to say more. (1998/2010: 262–263)

The reply is that, on the face of it, an axiom-free system, using perspicuous, easily realized inference rules, would be potentially more explanatory than a system that merely posited comparatively arbitrary axioms. Expositorily, it's also easier to see natural deduction as rules an agent would find obvious and ineluctable, as opposed to (dispositions to assent to?) arbitrary axioms. But essentially the same argument could be made with axioms – see, for example, the argument of the preceding section according to which Bayesian (and likely any probabilistic) procedures of confirmation require the probabilities of logic and arithmetic to be equal to 1.

15.5  Conclusion

In this paper I have urged replacing Devitt's maxim, "Metaphysics first!," with my "Explanation first!," arguing that it provides the only basis to decide which metaphysical claims deserve any priority. This seems to me particularly important to urge against Devitt, given the many passages I've quoted in which he seems to include in the metaphysics he's putting first "most" commonsense claims, which, I argue, have no place in serious explanation. Indeed, despite his claimed commitment to a "naturalized epistemology," Devitt, like his mentor, Quine, seems to display a peculiar aloofness to the sciences of the domains he discusses. I've focussed on three cases, his treatment of color, his treatment of linguistics, and his treatment of actual issues in a naturalized epistemology, and have found in all three a serious disregard of actual, empirically informed theories of color, language and confirmation. It often seems as though he and Quine remain(ed) "outside" of the sciences they otherwise esteem, precisely as philosophers traditionally have, listening in from afar, overhearing snatches of their claims, arguing with them from a commonsensical point of view, but not engaging directly with the enterprises themselves.44

My tentative diagnosis in the case of Devitt is that there's a tension between the two strains of the realism that underlie his insistence on "Metaphysics first!": the at least avowed commitment of Quine to science, but also a Moorean commitment to "most" commonsense, as well as a focus on a working epistemology that addresses explicit reasons that people can explicitly provide for their commonsense beliefs, rather than an explanatory epistemology that provides a scientific account of their cognitive achievements generally. As I've tried to indicate, there's a much greater tension between the two than Devitt seems to acknowledge. Is this a serious indictment of Devitt's work as a whole? I'd like to think it isn't, and that his laudable commitment to a naturalized epistemology, scientific realism and to "Metaphysics first!" is in fact a commitment to the primacy of scientific explanation, commonsense be hanged. But who knows how deep his Mooreanism runs?45

44  I've not dealt here or elsewhere with Devitt's (2010c) discussion of biological essentialism, since I don't know the domain sufficiently well. Perhaps the problems I've discussed here do not arise there, in which case: great!
45  Many, many thanks to Andrea Bianchi for saintly patience in his perceptive editing of my ms. I'm also indebted to John Collins and Steven Gross for very helpful discussions of earlier drafts of this paper, and to Michael, both for earlier comments, but, really, for stimulating and wonderfully genial arguments over the (egad!) nearly five decades of a great friendship, that's endured for me as much because of, as despite our differences.

References

Antony, L. 2002. How to play the flute: A commentary on Dreyfus's "Intelligence without representation." Phenomenology and the Cognitive Sciences 1 (4): 395–401.
———. 2004. A naturalized approach to the a priori. In Epistemology: Philosophical issues 14, ed. E. Sosa and E. Villanueva, 1–17. Atascadero: Ridgeview Publishing Company.
Baum, L., J. Danovitch, and F. Keil. 2008. Children's sensitivity to circular explanations. Journal of Experimental Child Psychology 100: 146–155.
Byrne, A., and D. Hilbert. 2003. Color realism and color science. Behavioral and Brain Sciences 26: 3–64.


Carroll, L. 1895. What the tortoise said to Achilles. Mind 4 (14): 278–280.
Carruthers, P. 2003. On Fodor's problem. Mind and Language 18 (5): 502–523.
Carston, R. 2016. Linguistic conventions and the role of pragmatics. Mind and Language 31 (5): 612–624.
Chomsky, N. 1955/75. The logical structure of linguistic theory. New York: Plenum.
———. 1957. Syntactic structures. The Hague: Mouton.
———. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
———. 1968/2006. Language and mind, 3rd edition. Cambridge: Cambridge University Press.
———. 1980. Rules and representations. Oxford: Blackwell.
———. 1986. Knowledge of language. New York: Praeger.
———. 2000. New horizons in the study of language and mind. Cambridge: Cambridge University Press.
———. 2013. Problems of projection. Lingua 130: 33–49.
Chomsky, N., and M. Halle. 1968. The sound pattern of English. Cambridge, MA: MIT Press.
Collins, J. 2008. A note on conventions and unvoiced syntax. Croatian Journal of Philosophy 8: 241–247.
Descartes, R. 1637/1970. Discourse on method. In The philosophical works of Descartes, Volume 1, ed. E. Haldane and G. Ross, 111–151. Cambridge: Cambridge University Press.
Devitt, M. 1984/97. Realism and truth. Princeton: Princeton University Press. (The 1997 Princeton edition is the 1991 Blackwell 2nd edition together with a new Afterword. Its index is a reprint of the 1991 index and so does not cover the Afterword).
———. 1996. Coming to our senses. Cambridge: Cambridge University Press.
———. 1998/2010. Naturalism and the a priori. Philosophical Studies 92: 45–65. Reprinted in Devitt 2010a: 253–270.
———. 2003. Linguistics is not psychology. In Epistemology of language, ed. A. Barber, 107–139. Oxford: Oxford University Press.
———. 2006a. Ignorance of language. Oxford: Clarendon Press.
———. 2006b. Defending Ignorance of language: Responses to the Dubrovnik papers. Croatian Journal of Philosophy 6: 571–606.
———. 2008a. Explanation and reality in linguistics. Croatian Journal of Philosophy 8: 203–231.
———. 2008b. A response to Collins' note on conventions and unvoiced syntax. Croatian Journal of Philosophy 8: 249–255.
———. 2010a. Putting metaphysics first: Essays on metaphysics and epistemology. Oxford: Oxford University Press.
———. 2010b. No place for the a priori. In Devitt 2010a: 271–291.
———. 2010c. Resurrecting biological essentialism. In Devitt 2010a: 213–249.
———. 2014. Linguistic intuitions are not "the voice of competence." In Philosophical methodology: The armchair or the laboratory?, ed. M.C. Haug, 268–293. London: Routledge.
Duhem, P. 1914/91. The aim and structure of physical theory. Trans. P.P. Wiener. Princeton: Princeton University Press.
Everett, D. 2012. Language: The cultural tool. New York: Pantheon Books.
Fernández, E., and H. Cairns. 2011. Fundamentals of psycholinguistics. Malden, MA: Wiley-Blackwell.
Field, H. 1996. The a prioricity of logic. Proceedings of the Aristotelian Society 96: 1–21.
Fodor, J. 2000. The mind doesn't work that way. Cambridge, MA: MIT Press.
Frege, G. 1884/1980. The foundations of arithmetic, 2nd revised edition. Trans. J.L. Austin. Oxford: Blackwell.
Goldman, A. 1999. A priori warrant and naturalistic epistemology. In Philosophical perspectives, 13: Epistemology, ed. J. Tomberlin, 1–28. Atascadero: Ridgeview Publishing Company.
Hardin, C.L. 1988/93. Color for philosophers. Indianapolis: Hackett.
———. 2008. Color qualities and the physical world. In The case for qualia, ed. E. Wright, 143–154. Cambridge, MA: MIT Press.


Hájek, A. 2012. Interpretations of probability. In The Stanford encyclopedia of philosophy, ed. E.N. Zalta (Winter 2012 Edition). https://plato.stanford.edu/archives/win2012/entries/probability-interpret/.
Jackendoff, R. 1987. Consciousness and the computational mind. Cambridge, MA: MIT Press.
———. 2006. Locating meaning in the mind (where it belongs). In Contemporary debates in cognitive science, ed. R. Stainton, 237–255. Oxford: Blackwell.
Kahneman, D. 2011. Thinking, fast and slow. New York: Macmillan.
Kuehni, R. 2001. Focal colors and unique hues. Color: Research and Application 26 (2): 171–172.
Lewis, D. 1969. Convention. Cambridge: Harvard University Press.
Maynes, J., and S. Gross. 2013. Linguistic intuitions. Philosophy Compass 8: 714–730.
McLaughlin, B., and J. Cohen, eds. 2007. Contemporary debates in philosophy of mind. Oxford: Blackwell.
Moore, G.E. 1925/59. A defence of common sense. In G.E. Moore: Selected writings, ed. T. Baldwin, 106–133. London: Routledge.
Maund, B. 2019. Color. In The Stanford encyclopedia of philosophy, ed. E.N. Zalta (Spring 2019 Edition), forthcoming. https://plato.stanford.edu/archives/spr2019/entries/color/.
Pacer, M., and T. Lombrozo. 2017. Ockham's razor cuts to the root: Simplicity in causal explanation. Journal of Experimental Psychology: General 146 (12): 1761–1780.
Palmer, S. 1999. Vision science: Photons to phenomenology. Cambridge, MA: MIT Press.
Papineau, D. 2016. Naturalism. In The Stanford encyclopedia of philosophy, ed. E.N. Zalta (Winter 2016 Edition). https://plato.stanford.edu/archives/win2016/entries/naturalism/.
Phillips, C. 2013. On the nature of island constraints. In Sprouse and Hornstein 2013: 64–108.
Pietroski, P. 2005. Meaning before truth. In Contextualism in philosophy: Knowledge, meaning, and truth, ed. G. Preyer and G. Peter, 255–302. Oxford: Clarendon Press.
———. 2010. Concepts, meanings, and truth: First nature, second nature and hard work. Mind and Language 25 (3): 247–278.
Putnam, H. 1965/75. The analytic and the synthetic. In H. Putnam, Philosophical papers, vol. 2, 33–69. Cambridge: Cambridge University Press.
———. 1968/75. Is logic empirical? In H. Putnam, Philosophical papers, vol. 1, 174–197. Cambridge: Cambridge University Press.
Quine, W. 1953a. On what there is. In W. Quine, From a logical point of view, 1–19. New York: Harper & Row.
———. 1953b. Two dogmas of empiricism. In W. Quine, From a logical point of view, 20–46. New York: Harper & Row.
———. 1953c. The problem of meaning in linguistics. In W. Quine, From a logical point of view, 47–64. New York: Harper & Row.
———. 1954/76. Carnap and logical truth. In W. Quine, The ways of paradox and other essays, revised edition, 107–132. Cambridge, MA: Harvard University Press.
———. 1960/2013. Word and object, 2nd edition. Cambridge, MA: MIT Press.
———. 1969. Epistemology naturalized. In W. Quine, Ontological relativity and other essays, 69–90. New York: Columbia University Press.
———. 1975/81. Five milestones of empiricism. In W. Quine, Theories and things, 67–72. Cambridge: Harvard University Press.
———. 1986. Reply to Henryk Skolimowski. In The philosophy of Quine, expanded edition, ed. L. Hahn and P. Schilpp, 492–493. Chicago: Open Court.
Quine, W., and J. Ullian. 1970/78. The web of belief, 2nd edition. New York: McGraw-Hill.
Rey, G. 1993. The unavailability of what we mean: A reply to Quine, Fodor and LePore. Grazer Philosophische Studien 46: 61–101.
———. 1998. A naturalistic a priori. Philosophical Studies 92: 25–43.
———. 2003. Intentional content and a Chomskyan linguistics. In Epistemology of language, ed. A. Barber, 140–186. Oxford: Oxford University Press.
———. 2006. Conventions, intuitions and linguistic inexistents: A reply to Devitt. Croatian Journal of Philosophy 6: 549–570.


———. 2009. Concepts, defaults, and internal asymmetric dependencies: Distillations of Fodor and Horwich. In The a priori and its role in philosophy, ed. N. Kompa, C. Nimtz, and C. Suhm, 185–204. Paderborn: Mentis.
———. 2012. Externalism and inexistence in early content. In Prospects for meaning, ed. R. Schantz, 503–529. Berlin/Boston: De Gruyter.
———. 2014a. The possibility of a naturalistic Cartesianism regarding intuitions and introspection. In Philosophical methodology: The armchair or the laboratory?, ed. M.C. Haug, 243–267. London: Routledge.
———. 2014b. Analytic, a priori, false – and maybe non-conceptual. European Journal of Analytic Philosophy 7 (2): 85–110.
———. forthcoming. Representation of language: Philosophical issues in a Chomskyan linguistics. Oxford: Oxford University Press.
Russell, B. 1912/59. The problems of philosophy. Oxford: Oxford University Press.
———. 1914. Our knowledge of the external world as a field for scientific method in philosophy. London: Open Court.
Russell, G. 2008. Truth in virtue of meaning: A defence of the analytic/synthetic distinction. Oxford: Oxford University Press.
Rysiew, P. 2017. Naturalism in epistemology. In The Stanford encyclopedia of philosophy, ed. E.N. Zalta (Spring 2017 Edition). https://plato.stanford.edu/archives/spr2017/entries/epistemology-naturalized/.
Smith, N., and N. Allott. 2016. Chomsky: Ideas and ideals, 3rd edition. Cambridge: Cambridge University Press.
Sperber, D., and D. Wilson. 1995. Relevance: Communication and cognition, 2nd edition. Oxford: Blackwell Publishers.
Sprouse, J., and N. Hornstein, eds. 2013. Experimental syntax and island effects. Cambridge: Cambridge University Press.
Stairs, A. 2015. Could logic be empirical? The Putnam-Kripke debate. In Logic and algebraic structures in quantum computing, ed. J. Chubb, A. Eskandarian, and V. Harizanov, 23–41. Cambridge: Cambridge University Press.
Steup, M. 2018. Epistemology. In The Stanford encyclopedia of philosophy, ed. E.N. Zalta (Winter 2018 Edition). https://plato.stanford.edu/archives/win2018/entries/epistemology/.
Woodward, J. 2017. Scientific explanation. In The Stanford encyclopedia of philosophy, ed. E.N. Zalta (Fall 2017 Edition). https://plato.stanford.edu/archives/fall2017/entries/scientific-explanation/.

Chapter 16

Experimental Semantics, Descriptivism and Anti-descriptivism. Should We Endorse Referential Pluralism?

Genoveva Martí

G. Martí (*)
Department of Philosophy, ICREA and University of Barcelona, Barcelona, Spain
e-mail: [email protected]

Abstract  Discussions of semantic theory by Machery and colleagues cast doubt on the universality of the intuitions that drive the approach to semantics inspired by Kripke, and suggest the partial adequacy of a descriptivist stance. In the past I have argued against the significance of those results, but some philosophers have considered them substantial enough to argue that we should abandon the hope for a universal semantic theory and be prepared to become semantic pluralists. Recent experimental work by Domaneschi, Vignolo and Di Paola, and by Devitt and Porot, contradicts the experimentalists' descriptivist conclusions, but their results by themselves may not suffice to convince the revisionists arguing for some form of pluralism. The aim of this paper is threefold: to review and clarify the reasons for my original diagnosis of Machery and colleagues' work, to discuss the impact of the new anti-descriptivist findings, and to provide reasons to resist the pull of revisionism.

Keywords  Experimental philosophy · Experimental semantics · Referential pluralism · Semantic pluralism · Communication

16.1  Introduction

Recent discussions by experimental philosophers have cast doubt on the adequacy of the approach to the semantics of singular and general terms that is inspired by the work of Kripke, Donnellan and Putnam. The criticisms by experimentalists are based on results that question the universality of the intuitions that drive the approach.


Data collected in several empirical studies appear to provide evidence that many speakers' intuitions about language use (and perhaps even their actual use) are not always in line with the predictions by the proponents of what has come to be known as the causal-historical picture. Opening the debate in 2004 Edouard Machery, Ron Mallon, Shaun Nichols and Stephen Stich [MMNS] argued that speakers in different cultures differ in their beliefs and intuitions as regards what a name refers to. MMNS reported that the intuitions of East Asians, unlike those of Westerners, are much more in line with what is predicted by a descriptivist approach to the semantics of names and they concluded that intuitions about reference are culturally dependent. Other studies have shown wide intracultural and even individual variations, and some have extended similar conclusions to the case of kind terms. Those results put a strain on the general approach to reference inspired by Kripke. The semantic arguments in Naming and Necessity, ignorance and error arguments that were meant to show that speakers do refer in the absence of uniquely identifying information (like in the 'Feynman' case) and that they also do refer to the "right" referent even when the description they associate with the name denotes someone else (as in the 'Columbus' case), were extremely powerful and they played a crucial role in convincing philosophers that traditional descriptivism, no matter how prima facie natural, was entirely misguided.1 Finding out that the 'Gödel' case, a story that can be regarded as buttressing the error argument, did not have a universal effect, freeing speakers from any inclination towards descriptivism, was quite a shock.2 Eventually, some philosophers have argued that we need to revise entirely some basic assumptions about the unity of semantic theory, and have concluded that the evidence provided by experimental semanticists supports the endorsement of a hybrid theory (Genone and Lombrozo 2012), the acceptance of the existence of widespread semantic ambiguity (Nichols et al. 2016), or the adoption of some variety of referential pluralism (Andow 2014; Wikforss 2017). Recent work by Filippo Domaneschi, Massimiliano Vignolo and Simona Di Paola (2017) and by Michael Devitt and Nicolas Porot (2018) refutes the prior experimental descriptivist results and throws new light on the debate.

1  Similar arguments can be found in Donnellan (1970) as arguments for the non-necessity and non-sufficiency of a backup of descriptions. I assume that MMNS's paper, and the 'Gödel' case from Kripke (1980) that they use, are sufficiently well known and don't need to be rehearsed. The same applies to the 'Feynman' and 'Columbus' cases. It should be noted that the debate provoked by the results reported in MMNS revolves around classical descriptivism, the view that Kripke, Donnellan and Putnam were arguing against. More recent varieties of descriptivism, such as causal descriptivism, make predictions as regards reference that are in line with the causal-historical picture that emerges from the work of Kripke, so results that support classical descriptivism would speak against both causal descriptivism and the causal-historical picture. It should be noted also that, as Gary Ostertag (2013) points out, experimental results that can be interpreted as disconfirming classical descriptivism should not be interpreted as automatically providing support to the causal-historical picture.
2  I have argued that the Gödel scenario is a story based on the error argument but it is not a story that can be used, nor is it used by Kripke, to support the error argument (see Martí 2012: sect. 5). Devitt (2011: sect. 2) also highlights the differences between the Gödel case and the "humdrum" cases that support the error argument. My discussion here will not rely on that argument. For an in depth discussion of the role of the 'Gödel' case, see Ostertag (2013).


However, it is at least arguable that their findings are insufficient as grounds to reject the revisionists' stance. Nevertheless, I think that we have powerful reasons to resist pluralist proposals, or so I will argue. In reaction to MMNS's original article, in Martí (2009), I argued that the results there reported did not provide input for semantic theorizing and hence that they did not really affect semantic theory. A subsequent study by Machery, Olivola and de Blanc [MOD] (2009) seemed to me to suffer from the same problems as MMNS's original probe. Years down the road, and after some back and forth debate,3 I still think that my diagnosis was, in essence, correct, although some aspects of my stance need to be better delineated, and some unclarities addressed. In particular, Devitt and Porot (2018) argue against my interpretation of MOD's tests and results, and I will take an opportunity here to address their criticism. So, in this paper I have three purposes: (i) to clarify the reasons for my original diagnosis of Machery and colleagues' work and to respond to Devitt and Porot's criticism; (ii) to discuss the impact of the recent anti-descriptivist findings and (iii) to provide reasons to resist the pull of revisionism and the plea for semantic pluralism. In the next section I address (i) and (ii). In Sect. 16.3 I discuss semantic pluralism.

16.2  Two Distinctions: Use vs. Reflections on Use, and Use vs. Interpretation. Testing Use

In Martí (2009) I expressed concerns that MMNS were not providing data that could be used as input for semantic theorizing. Devitt (2011, 2012) also expressed similar concerns. MMNS asked the participants in their study to answer who, in their view, the hypothetical speaker mentioned in the vignette was referring to when he used 'Gödel', prompting them to reflect on the use of 'Gödel', not to use 'Gödel'. Instead, I proposed, they should ask a question that prompted participants to use the name 'Gödel'. The input the semanticist reflects on is actual usage: the 'Feynman' and 'Columbus' cases that Kripke uses to illustrate the ignorance and error arguments are real cases, things that, according to Kripke, one hears in the marketplace. Input for semantic theory is not obtained by asking people what in their view another (hypothetical or real) speaker is in fact talking about when she uses a name. That reflection on use is theoretical; answering the question 'What does speaker S refer to when she uses N?' is the first step that the theorist engages in. This is why, in my view, by asking the participants in the study to answer that sort of question, MMNS were not getting philosophers out of the armchair, as they liked to advocate. On the contrary, what they were doing was pulling out more armchairs for people to sit on.4

3  See my extended reply (Martí 2012) to MOD, as well as the 2014 Philosophy TV debate on reference and X-Phi with Machery (http://www.philostv.com/).
4  For the longest time, I was sure I owed that humorous remark to Michael Devitt. But he assures me that he actually heard it from me. So, I apologize I cannot thank the originator.


In my view, the questions posed by MMNS invited participants to engage in the first steps of a process of theorizing, and although the question they asked participants was not on the surface theoretical, the reflection that answering it required was theoretical:

Granted, MMNS do not ask "which theory of reference do you think gives us the right prediction?" or something equally blunt. But the data they collect – intuitions about whom a hypothetical speaker refers to when using a certain expression – are one step removed from the kind of data about use that the semanticist relies on. (Martí 2012: 71)5

It should be stressed that Machery and colleagues do not claim their results prove anything about how speakers in different cultures use names. And my contention against them was not that they were wrong in claiming to show that East Asian speakers use names descriptively. My contention was rather that, if MMNS expected their results to have any impact on semantic theory, those results had to be interpreted as showing that East Asians use names descriptively, otherwise no substantial argument against Kripke's semantic theses ensues.6 For this reason I proposed that Machery and colleagues ask participants a question that would make them write or say something using the name 'Gödel', asking them, for instance, to evaluate John's reaction when, upon discovering the fraud, John proclaims Gödel to be a thief and a liar.7 But the probe that MOD designed in response to Martí (2009) still asks participants to answer a question that involves reflecting on what a speaker, Ivy in their vignette, says when she uses a name, 'Tsu Ch'ung Chih'.

5  I claimed also that MMNS's results only give us evidence of what theory the participants in their study would be disposed to accept. This claim has been criticised (see for instance Devitt and Porot 2018: 1556–1557), but I still stand by it. If someone accepts that 'Gödel' as used by John refers to the satisfier of the uniquely identifying properties John associates to the name, the next question we could pose to that person is: 'So, do you think that this is a general thing about proper names, that people use them to refer to satisfiers of the descriptions they associate with them?' and if the person in question were consistent (something that, as we well know, cannot be taken for granted) they would agree. I don't see any other alternative than to conclude in response 'Well, then you are, at least prima facie, a descriptivist'.
6  But certainly, the results have been often interpreted as providing evidence of usage. For instance, in response to experimental semanticists' results, Åsa Wikforss (2017: 97) proposes to "grant that names refer differently in different cultures (and across individuals)."
7  Wikforss (2017: 113 n. 24) notes that my own proposal to test use "still involves reflection on another speaker's language." I do not think that Wikforss is right. The hope in proposing the amended probe was precisely that, in speaking about John's reaction, subjects would use the name 'Gödel' to say or write things such as 'John's reaction is understandable, Gödel stole the proof'. Needless to say, the suggestion for an amended probe was undeniably rough and, as I indicated, several issues would need to be fleshed out. Among them, I mentioned in particular that the degree of knowledge that the subjects had about Gödel would have to be taken into account (Martí 2009: 47 n. 5). Subjects familiar with Gödel might perhaps attach other information to the name, and hence answers seemingly in line with anti-descriptivist predictions would not be evidence of nondescriptive usage. Ichikawa et al. (2012: 58 n. 2) mention this issue, although they think it is plausible to assume that the subjects, were they descriptivists, would attach the same description as John does.


It is definitely not a test that requires the subjects to use a name, and for this reason, in my view, it is still not a test that puts on the table the kind of input that is relevant for semantic theorizing:

The question I proposed in Marti 2009 is meant to elicit responses in which subjects use 'Gödel'; it does not ask subjects to reflect on someone's use of 'Gödel'. Does MOD's linguistic question do that? It seems to me that it does not, for … the questions still ask subjects to reflect on someone's practice using 'Gödel'; it doesn't require of them to use 'Gödel'. (Martí 2012: 74–75)

My disagreement with Machery and colleagues often became framed as a discussion as to whether metalinguistic questions were adequate to collect semantically relevant data. I regret that I was not entirely clear at the time. The issue always had to do with whether the test collected data about use or not, and my disagreement with Machery and colleagues hinged on the distinction between use and reflection on use. I interpreted, or misinterpreted, as metalinguistic any question that did not elicit first-level use data. In any case, MOD refrained from using a test like the one I proposed in response to MMNS, indicating that "open questions such as 'What do you think of John's reaction?' do not yield easily analysable data" (MOD 2009: 691 n. 1). But as it turns out, it is entirely possible to collect data about use:

Martí wants empirical studies that are designed to elicit speakers' uses of names in response to vignettes. This is a wholly appropriate methodology for the theory of reference, called elicited production. (Nado and Johnson 2016: 143)

For

If you've gotten your subjects to use the name in question, you've got perfectly good evidence in the form of speakers applying the name to a thing. That's the core data for the theory of reference. The method is called elicited production, and it's a tool that linguists frequently make use of. (Nado and Johnson 2016: 142)

To collect data about use experimental semanticists would simply have had to include elicited production tasks. This has not been the standard practice, although there are now at least two such studies, by Domaneschi et al. (2017) and by Devitt and Porot (2018). Both studies deliver results that are massively anti-descriptivist; moreover, some of the vignettes designed by Devitt and Porot follow very closely the amended Gödel vignette proposed in Martí (2009).8 But before discussing the impact of these studies I want to focus on a different contrast suggested by the work of Nichols et al. (2016), a contrast between use and interpretation that will also play a role in this debate. Production tasks are absent in Nichols, Pinillos and Mallon's tests, but their methodology differs substantially from that of MMNS and MOD. Some of the questions that Nichols, Pinillos and Mallon ask their subjects are simple judgments of agreement or disagreement with sentences such as 'Catoblepas exist' or 'Catoblepas are wildebeest'.9

8  Devitt and Porot use both elicited production tasks in which the participants' free responses are evaluated, and questions as regards the truth value of certain sentences. Domaneschi, Vignolo and Di Paola ask their subjects to select a photo of what they take to be the referent, and they ask them also to select a proper name, out of two options.


Unlike the MMNS and MOD tests, Nichols, Pinillos and Mallon's questions do not require of their subjects to reflect on how a real or hypothetical speaker is using an expression. Nevertheless, like MMNS and MOD, Nichols, Pinillos and Mallon put subjects entirely in the position of interpreters, not users, so their findings are not based either on speakers' usage. This is an important issue, for understanding the sentences someone else produces, conversationally or in writing, is a cooperative task in which principles of charity and accommodation often play an important role.10 Hence, in my view the preferred way of collecting data should be to elicit use. Nevertheless, it should be conceded that testing people's judgements of truth or falsity is, as Devitt and Porot (2018) insist, much closer to giving an indication of how they are disposed to use expressions. According to Devitt and Porot, those are "somewhat imperfect" tests of usage, but tests of usage after all. I agree with Devitt and Porot that in spite of the differences between producing sentences and interpreting them, simple truth-value judgments of sentences that express claims that are part of a story told in a vignette do provide evidence of speakers' disposition to use language and hence should not be dismissed.11 But Devitt and Porot do not only rely on truth-value judgments of the kind used by Nichols, Pinillos and Mallon. Like Domaneschi, Vignolo and Di Paola they test production.12 For that reason, in my view, their results are more significant and they decisively favor anti-descriptivism. For instance, Domaneschi, Vignolo and Di Paola's findings are decisively anti-descriptivist,13 and the percentage of responses inconsistent with descriptivism reported by Devitt and Porot in the different elicited production tests is between 89.47% and 100%.14

9  In their probes Nichols, Pinillos and Mallon use the story of the catoblepas, a legendary animal described in Medieval bestiaries as a bull with scales and a very heavy head, whose stare or breath could turn people into statues of stone. Stories about the catoblepas appear to be vaguely connected to sightings of wildebeest. Genone and Lombrozo (2012), who obtain results that are by and large congruent with Nichols, Pinillos and Mallon's, use vignettes modelled on the Gödel case, so they inquire whether protagonists of the story presented to the subjects were thinking and talking about the same kind. I have discussed those two papers in Martí (2015), where I indicate some reasons why the tests, and especially some of the vignettes used, seem to me faulty.
10  Those who subscribe to a Davidsonian interpretationist stance in semantics do not see any difference between a speaker's active production of language and its interpretation. In their view, we "should expect the first-order use of a name such as 'Gödel' to cohere with beliefs about who someone else is talking about when using the name" (Wikforss 2017: 103). I do not share Wikforss' Davidsonian stance.
11  Although, as I explain below, I do not plead guilty of the mistake they attribute to me on this basis.
12  Both sets of authors use elicited production, but theirs are not the only studies that do not rely on subjects' reflection on use. Cohnitz and Haukioja (2015) have urged the use of, and have used, eye-tracking as a way to test subjects' reactions to vignettes.
13  The first test that Domaneschi, Vignolo and Di Paola perform lends only partial support for anti-descriptivism due to a possible confusion between semantic reference and speaker reference. Their second test is designed to resolve the ambiguity and the results are clearly anti-descriptivist.
14  Devitt and Porot test also the theoretical/referential intuitions of subjects, asking questions such as "Who does the name 'Tsu Ch'ung Chih' refer to?" which are similar to the kind of question asked in MMNS. The results are not as high on the anti-descriptivist scale as the elicited production results, yet they oscillate between 80% and 91.7% of answers inconsistent with descriptivism, failing to replicate MMNS's results. Devitt and Porot find this surprising. I comment on this finding below.


Devitt and Porot also carry out an interpretation test in which subjects are asked to judge the truth value of some sentences, a test they consider similar to that of MOD. The results of the interpretational truth value judgment task are entirely in line with the production results, being decisively anti-descriptivist.15 Since Devitt and Porot view their interpretation task as similar to MOD's, they conclude that MOD's findings are anomalous. And they criticise my reaction to MOD in Martí (2012) for failing to acknowledge that truth value judgments are, in spite of their imperfections, tests of usage or very close to tests of usage (see Devitt and Porot 2018, sect. 1.2–1.4). The fact that truth value judgment tests are so close to tests of usage is why, on their view, the results of their truth value tests are very much in line with the elicited production results. But I think Devitt and Porot neglect an important detail. Devitt and Porot make some important, and very much needed, changes to the vignette. Compared to the vignettes used by MMNS and MOD, Devitt and Porot's stories are carefully written to avoid confusions on the part of the audience.16 They list five different changes (Devitt and Porot 2018: 1562), of which, they note, the most significant change is the use of anaphoric devices, a change that is not detrimental to descriptivism, quite the contrary. But there is another change whose importance, in my view, Devitt and Porot fail to appreciate fully: the elimination of Ivy. In Devitt and Porot's vignette there is no mention of a speaker (such as the Ivy of MOD or the John of MMNS) whose use participants in the experiment are required to think about. Besides the clarifications in the text of the vignette that Devitt and Porot implement, I think that the elimination of Ivy is an important contributing factor that makes the test closer to actual use. MOD's questions are not as close to actual use as Devitt and Porot suggest, for they still require explicitly that the subjects think about how another speaker is using a name and what she is referring to when she uses it, a reflection that, as I have argued, is theoretical, and hence it is not obvious that it provides direct evidence of disposition to use. Devitt and Porot's truth value judgment tests may well be seen as imperfect tests of usage. Not so MOD's.

15  The percentages go from 80.65 to 96.15% of anti-descriptivist answers. Curiously, the lowest anti-descriptivist percentage corresponds to a case modelled almost exactly on the Martí (2009) proposal, where the subjects are asked whether the character in the vignette is a thief and a liar. I wonder if the use of such blunt terms accounts for the participants' tendency to recoil from expressing agreement.
16  Certainly, the original Gödel story as presented by Kripke is difficult to parse by someone who is not already a philosopher, or who has not followed Kripke's prior discussion of descriptivism. But of course, Kripke's original story is not addressed to the general public.


Given the elimination of Ivy and the subsequent elimination of the theoretical reflection, it is not entirely surprising that Devitt and Porot's results in the interpretation (truth value judgment) tests are in line with the production tests.17 Just to clarify, my point is not that eliminating the hypothetical speaker is responsible for the anti-descriptivist results Devitt and Porot obtain. My point is that the reason Devitt and Porot's interpretational tests are closer to being tests of usage is the absence of Ivy, John or any other alleged speaker from their vignettes. Because of that absence their results are more reliable as data relevant as input for semantic theorizing. Devitt and Porot's interpretational tests, like Nichols, Pinillos and Mallon's, put the participants in the position of interpreters, and so they are not direct tests of usage. But MOD's and MMNS's tests, on top of being interpretational, request from the audience a reflection on the use of language by another speaker, so their results are further from providing data about use or about disposition to use. At any rate, Devitt and Porot's results, as well as Domaneschi, Vignolo and Di Paola's, obtained via collection of use data, massively contradict the hypothesis that there is wide variation in the use of proper names. Both the elicited production tests and the truth value judgment tests coincide: the results were "significantly antidescriptivist and hence add to the evidence against descriptivist theories of reference for names" (Devitt and Porot 2018: 1577). I surmise that some semantic revisionists would disagree with that conclusion. One may observe that there continues to be a small portion of subjects that appear to use names descriptively. For instance, at least one of Devitt and Porot's vignettes, both in the elicited production and the truth value judgment tasks, yields 18.75% of descriptive uses, not an entirely negligible number. Are we to say that their usage is anomalous, or that they are incompetent users of names, on the basis of their small numbers? Is descriptivism more wrong because the responses of only 18.75% of the subjects in one of Devitt and Porot's tests are descriptivist – as opposed to the 33% that MOD obtained in one of their samples? Semantic revisionists might plausibly want to insist that the numbers do not carry or fail to carry weight in that way; that the correct semantic theory is not to be decided on the basis of the usage of a majority of speakers. Why would speakers be able to use names descriptively if they were not relying on a descriptive semantic mechanism that was operative on certain occasions of use and interpretation? And so, some authors have urged the acceptance of referential pluralism. There are different varieties of referential pluralism (see Andow 2014 and Wikforss 2017 for discussion).

17  As mentioned before (note 14), one of Devitt and Porot's tests elicits referential intuitions, asking directly questions such as "Who does the name 'Tsu Ch'ung Chih' refer to," a question that they regard as identical to the question asked by MMNS. The results Devitt and Porot obtain are coherent with the anti-descriptivist results of their other two tests. Devitt and Porot see this as a failure to replicate MMNS's results, and find the divergent result surprising (2018: 1571). But I am not entirely surprised, for it should be noted that the vignette in Devitt and Porot's test dispenses also with the hypothetical speaker who is using 'Tsu Ch'ung Chih'; so the test is substantially different from MMNS's.


I will not discuss specific details because in my view the plea for referential pluralism is misguided: the semantics of proper names is not descriptivist.

16.3  On Referential Pluralism

I have argued that the input relevant for semantic theorizing, the input the semanticist reflects on, is usage. But saying that usage is the input for the theoretical reflection the semanticist engages in when elaborating theory is not tantamount to espousing the view that semantics is just a systematic description of usage. Fundamental questions about the semantic mode of operation of proper names, or of any other expression, are not answered by a collection of data about use, but by a reflection on those data. And that reflection was already done by Kripke, who established that proper names are not descriptive devices on the basis of powerful semantic arguments, the ignorance and error arguments. There is no doubt that people are referring to Feynman when they use 'Feynman' even if they do not attach a uniquely identifying description to the name. And there is no question that people are referring to Columbus, and not to some Viking that in the eleventh century set foot in the New World, when they use 'Columbus'. These, and other similar real cases of usage that Kripke mentions, provide the data that supports the conclusion against descriptivism. For if classical descriptivism were right, it would not be possible for those uses of 'Feynman' and 'Columbus' to refer as they do. Some philosophers, not only experimental semanticists, argue against the adequacy of the anti-descriptivist approach to reference on the basis of arguments about the apparent descriptive semantics of some terms, as if the fact that the application of some terms is guided by a definite description constitutes an objection to the causal-historical position.18 But this is to misunderstand the structure of the dialectic between descriptivism and anti-descriptivism.19 Descriptivism is a hegemonic approach to reference. It postulates that reference is always mediated by a definite description: it is impossible to refer without the mediation of descriptive material, cognitively accessible to the speaker, that determines the reference, or domain of application, on each occasion of use. Kripke offers other powerful, modal and epistemic, arguments that collectively cement the case against descriptivism. But the semantic arguments are the most powerful. First, because they are based on real, marketplace, usage. Second, and more important, because they show something about the possibility of reference: that if descriptivism were right those referential uses would be impossible; and by that token they show something directly about the nature of reference.

18  This can be detected especially in discussions within philosophy of biology and general philosophy of science.
19  I have argued for this also in Martí (2015).


semantics of names is non-descriptivist, and any language that contains terms that are used to designate individuals, terms that designate them in spite of actual or possible ignorance and error, is a language that contains proper names that operate non-descriptively, as Kripke argued. Of course, nothing prevents speakers from introducing a name as an abbreviation of a definite description and using it consistently as such.20 And even Devitt, a prominent champion of the causal-historical approach, notes that in some conversational contexts names are typically used descriptively:

the names of authors … can have a double life. In claims about where “Shakespeare” lived, was educated, and so on, the name seems to function as a designational name. In critical assessments of “the works of Shakespeare,” however, it often seems to function as a descriptive name, so that it would not matter to the truth of these assessments if the work was actually written by Bacon. (Devitt 2015: 127)

Focusing on what is possible as regards use and reference also highlights that even the cases often discussed as paradigmatic examples of names used descriptively lend support to anti-descriptivism. ‘Ibn Kahn’ is typically presented as one of those names: the mathematicians that talk about Ibn Kahn’s important results are talking about the mathematician that constructed the proofs, even if ‘Ibn Kahn’, the name that appears at the bottom of the documents, is not the name of the mathematician in question. We can agree that this is a clear case in which a community of speakers is using a name, by agreement, to refer to whoever satisfies a shared definite description.21 Nevertheless, if one of those mathematicians were to say ‘I’ve heard somewhere that Ibn Kahn signed those papers but he was not the author of the proofs’, her utterance would be understood in spite of the lack of any other definite description to substitute for the one that she and her audience connect to ‘Ibn Kahn’. Such a scenario would be impossible, and should strike us as incoherent, if descriptivism were correct; and this shows that names do not behave semantically like definite descriptions, even if they can be used by speakers to refer via a definite description.

What an expression means, what its mode of semantic operation is, is one thing; what speakers do with the expression on particular occasions where they use it or interpret another speaker’s use is another. Nichols et al. (2016) provide a nice illustration of the importance of not blurring that distinction. Primed by reading the catoblepas story alongside a story that highlights the continuity of use of ‘triceratops’ in spite of past dramatic mischaracterizations of those dinosaurs, subjects responded to uses of ‘catoblepas’ in ways that Nichols, Pinillos and Mallon classify as congruent with the predictions of the causal-historical picture of reference, agreeing with statements such as ‘Catoblepas

20  The difficulty of achieving such consistency of use should not be downplayed. Speakers may forget the associated description and pass along the chain of communication partial or wrong information, generating the possibility of ignorance and error, while still preserving reference. Names tend to behave like names, even when we try to make them operate like descriptions.
21  Gareth Evans (1973) introduces the story to support a hybrid theory of reference in which descriptive and causal factors intervene. I discuss the view in connection with proposals by experimental philosophers in Martí (2015).


are wildebeest’. Following Devitt and Porot, we should regard such assent as imperfect evidence that the subjects would be disposed to use ‘catoblepas’ to refer to wildebeest. But while ‘wildebeest’ refers, and appears in the OED as another name for gnu, a kind of antelope found in South and East Africa, ‘catoblepas’ does not refer to any real species. That is a fact about the meaning of ‘catoblepas’ and about the meaning of ‘wildebeest’ and ‘gnu’ in English, no matter what, on a particular occasion of use or interpretation, subjects do or can be primed to do.22

There is no need to postulate referential pluralism or to endorse a similar position (a hybrid theory or rampant ambiguity) to account for the variety of ways in which speakers use expressions to achieve their communicative purposes when they interact in conversation, or when they interpret what they are told. We sometimes use names giving predominance to a shared, or supposedly shared, cognitive file in determining the subject of a conversation. In a conversation, we alternately use names and definite descriptions that correctly apply to the bearers of names in ways that suggest that the truth conditions of our claims are determined singularly. And we often use definite descriptions referentially, to talk about individuals that are not the satisfiers of the description.23 We do all that and still typically achieve our communicative purposes, for when we communicate or when we interpret someone else’s words, a host of devices allow us to focus on a common subject. In actual conversation, glances, gestures and the overt intention to talk about something, an intention interpretable as such by the audience, play a role in determining the common focus. But even in writing or reading, the prior text and the conspicuous intentions of agents do play a role; that role is diminished in reading or writing, compared to actual conversation, which is why we tend to be clearer and more precise in choosing our words when we write than when we talk, or when we are in front of a non-charitable audience rather than in casual conversation with our friends.24

22  Nichols, Pinillos and Mallon operate on the assumption that ‘catoblepas’ is descriptive and that the reason it does not refer is that nothing satisfies the properties associated with it. They also assume that subjects’ responses under priming are evidence of a use that is consistent with the causal-historical theory. I think they are wrong on both counts: ‘catoblepas’ is a name of a legendary creature, and it is a seriously debated issue how those names operate. On the other hand, the causal-historical picture does not predict that ‘catoblepas’ should refer to wildebeest just on the basis of stories about catoblepas originating with sightings of real animals.
23  The referential use of definite descriptions is typically characterized as a pragmatic phenomenon, whose natural home is in an account of use and communication. I have argued, however, that Donnellan’s distinction also teaches us a substantial semantic lesson, a lesson about what makes an expression of any kind a genuinely referential device: the divorce from any form of descriptive mechanism (see Martí 2008).
24  Those who afford semantic impact to experimental results and propose hybrid theories, ambiguities or some form of pluralism warn that the recognition of a variety of uses entails the acceptance of rampant failures of communication among people who are using a name descriptively or non-descriptively. The worry relies on a very traditional picture of communication that requires the transmission of a given, semantically expressed proposition in order to communicate. So, if it appears that different people express or understand different propositions, the presumption is that they fail to communicate. The picture, in my view, is erroneous, but an in-depth discussion of this topic is beyond the scope of this paper. See Ichikawa et al. (2012) for a discussion of whether different uses affect the ability to communicate.


Once we recognize the variety of uses that speakers make of language for what it is, we can see that we should resist incorporating into semantic theory an account of reference to match all these usages.25 Using a name as if it were a description, either in a stable way (as in the ‘Shakespeare’ cases Devitt mentions) or in a short-lived conversation, and using a description as a stable name or a short-lived one, is not a new phenomenon that should lead us to revise fundamental semantic tenets and endorse pluralism.26

25  Wikforss endorses referential pluralism at the level of descriptive semantics, a “clearly empirical theor[y], true of a particular language at a particular time. Such a theory need not be general” (Wikforss 2017: 110). If what she calls ‘descriptive semantics’ is a pure description of how people use language, I have no qualms. But the pure description of use is not semantics. And it is not a discovery that speakers use language in all kinds of ways, much less a discovery that should lead to any revisions in semantic theory or methods.
26  I am grateful to Michael Devitt for helpful discussions of these issues. More generally, I am extremely grateful to Michael for his advice, support and encouragement throughout the years. It is a pleasure and an honor to be part of this volume to celebrate Michael’s 80th birthday. Versions of this paper were given in Warsaw, Athens, Salzburg and Maribor. I thank the audiences for their comments. The research for this paper has been partly supported by grants FFI2015-70707-P and FFI2016-81858-REDC of the Spanish MINECO, and the Diaphora Project (H2020-MSCAITN-2015-675415).

References

Andow, J. 2014. Intuitions, disagreement and referential pluralism. Review of Philosophy and Psychology 5 (2): 223–239.
Cohnitz, D., and J. Haukioja. 2015. Intuitions in philosophical semantics. Erkenntnis 80 (3): 617–641.
Devitt, M. 2011. Experimental semantics. Philosophy and Phenomenological Research 82 (2): 418–435.
———. 2012. Whither experimental semantics? Theoria 27 (1): 5–36.
———. 2015. Should proper names still seem so problematic? In On reference, ed. A. Bianchi, 108–143. Oxford: Oxford University Press.
Devitt, M., and N. Porot. 2018. The reference of proper names: Testing usage and intuitions. Cognitive Science 42 (5): 1552–1585.
Domaneschi, F., M. Vignolo, and S. Di Paola. 2017. Testing the causal theory of reference. Cognition 161: 1–9.
Donnellan, K. 1970. Proper names and identifying descriptions. Synthese 21 (3/4): 335–358.
Evans, G. 1973. The causal theory of names. Aristotelian Society, Supplementary Volumes 47: 187–208.
Genone, J., and T. Lombrozo. 2012. Concept possession, experimental semantics, and hybrid theories of reference. Philosophical Psychology 25: 717–742.
Ichikawa, J., I. Maitra, and B. Weatherson. 2012. In defense of a Kripkean dogma. Philosophy and Phenomenological Research 85 (1): 56–68.
Kripke, S. 1980. Naming and necessity. Cambridge, MA: Harvard University Press.



Machery, E., R. Mallon, S. Nichols, and S. Stich. 2004. Semantics, cross-cultural style. Cognition 92: B1–B12.
Machery, E., C.Y. Olivola, and M. de Blanc. 2009. Linguistic and metalinguistic intuitions in the philosophy of language. Analysis 69 (4): 689–694.
Martí, G. 2008. Direct reference and definite descriptions. Dialectica 62 (1): 43–57.
———. 2009. Against semantic multiculturalism. Analysis 69 (1): 42–48.
———. 2012. Empirical data and the theory of reference. In Reference and referring: Topics in contemporary philosophy, ed. W.P. Kabasenche, M. O’Rourke, and M.H. Slater, 63–82. Cambridge, MA: MIT Press.
———. 2015. General terms, hybrid theories and ambiguity. A discussion of some experimental results. In Advances in experimental philosophy of language, ed. J. Haukioja, 157–172. London: Bloomsbury.
Nado, J., and M. Johnson. 2016. Intuitions and the theory of reference. In Advances in experimental philosophy and philosophical methodology, ed. J. Nado, 125–154. London: Bloomsbury.
Nichols, S., A. Pinillos, and R. Mallon. 2016. Ambiguous reference. Mind 125 (497): 145–175.
Ostertag, G. 2013. The Gödel effect. Philosophical Studies 166 (1): 65–82.
Wikforss, Å. 2017. Semantic intuitions and the theory of reference. Teorema 36 (3): 95–116.

Part V

Metaphysics

Chapter 17

Scientific Realism and Epistemic Optimism
Peter Godfrey-Smith

Abstract  Many people understand scientific realism as including a qualified commitment to the reality of the posits of current scientific theories, and/or optimism about the capacity of science to give us theories that are largely true. Devitt is an example, and I’ll use his views to develop some criticisms of this approach. I’ll also discuss structural realism, pessimistic inductions from the history of science, and related topics.

Keywords  Scientific realism · Truth · Models · Unobservables · Metaphysics

17.1  Introduction

Michael Devitt over many years has argued for a particular way of formulating scientific realism, and has argued also for the truth of scientific realism in that sense.1 Devitt – my one-time undergraduate honours advisor, one-time CUNY colleague, and long-time friend – and I are on the same side of many issues in this area. But I think that familiar ways of arguing about scientific realism are often problematic, and Devitt’s formulations are examples. Here I’ll offer some arguments against his way of conceiving the terrain. I’ll use those arguments to rethink the realism issue in more general terms at the end of the paper.

1  Versions of this paper were given at symposia in honor of Michael Devitt at Macquarie University in 2005 and at the Australasian Association of Philosophy Conference in Canberra in 2014. I met Michael Devitt when I came to Sydney University as a first-year undergraduate in 1983. I am grateful to Michael for almost 35 years of friendship, support, and philosophical acuity.

P. Godfrey-Smith (*)
School of History and Philosophy of Science, University of Sydney, Sydney, Australia

© Springer Nature Switzerland AG 2020
A. Bianchi (ed.), Language and Reality from a Naturalistic Perspective, Philosophical Studies Series 142, https://doi.org/10.1007/978-3-030-47641-0_17


17.2  Devitt’s Formulations of Scientific Realism

Devitt has given many formulations of scientific realism over the years, but they have been similar. A 2011 formulation, which I will work from here, is not very different from views he defended in the 1980s.2

Science appears to be committed to the existence of a variety of unobservable entities – to atoms, viruses, photons, and the like – and to these entities having certain properties. The central idea of scientific realism is that science really is committed and is, for the most part, right in its commitments. As Hilary Putnam once put it, realism takes science at “face value” …. So, for the most part, those scientific entities exist and have those properties. We might call this the “existence dimension” of realism. It is opposed by those who are skeptical that science is giving us an accurate picture of reality. (2011: 286)

Scientific realism is not only committed to the existence of unobservable entities but also to their “mind-independence”: they do not depend for their existence and nature on the cognitive activities and capacities of our minds. This is the independence dimension of realism. (ibid.)

From there he christens a view:

SR: Most of the essential unobservables of well-established current scientific theories exist mind-independently. (ibid.)

We need to also take account of what scientific theories say about the objects they recognize – what properties the theories attribute to them. So Devitt strengthens “SR” above to form “Strong Scientific Realism” (SSR).

SSR: Most of the essential unobservables of well-established current scientific theories exist mind-independently and mostly have the properties attributed to them by science. (287)

17.3  Metaphysical and Scientific Issues

This view mixes, in what I see as a problematic way, purely metaphysical issues with an attempt to support a generalized optimism about science, especially current science. The metaphysical issue is essentially one about the mind-independence of (much of) the world. This question descends from debates between realism and idealism, the idea that the world is largely mental or mind-dependent. That debate has been transformed repeatedly – metaphysical constructivism is a relevant descendant of idealism, though it does not have much to say about ideas per se (Goodman

2  See Devitt (1984), and later editions.


1978; Woolgar 1988). Devitt and others (for example, Chakravartty 2017) do see these more recent debates as continuous with earlier debates about idealism.3

My first point is that this issue is not tied to any view about the likely success of science. Each option on the metaphysical table is compatible with optimism or pessimism about that latter question. Karl Popper is a useful figure here (1959). For Popper, there is no philosophical impediment to our regarding our theories as directed upon a mind-independent world. It is possible that we could devise a theory that is wholly accurate within its domain, where this includes giving an accurate description of unobservable entities. It’s possible, but the natures of theory and evidence preclude us from ever having any confidence that we are succeeding in this task. Popper’s view is a form of “skeptical realism” about science, and it is set up in a way that makes the possibility of skeptical realism very clear.4 Devitt might accommodate a skeptical possibility as a version of what he calls “weak” or “fig-leaf” realism: something exists mind-independently, but who knows what? However, the sort of skeptical realism that I have in mind, and use Popper to illustrate, is not weak or fig-leaf-like as metaphysics. On that side, it is (or can be) as strong as anything else. It’s not like a view that sees objects as partly external but partly constructed or constituted by us – a “weak realism” in metaphysics itself. A view like Popper’s does not compromise on metaphysical issues of that kind, though it is rather skeptical about science.

This worry about the epistemic side is sometimes the basis for statements of scientific realism that are expressed in terms of the goals of science. Perhaps scientific realism is a metaphysical doctrine plus a claim about what scientific theories try or aspire to do (van Fraassen 1980). I took an approach of this kind in Theory and Reality (2003). I treated scientific realism as a combination of a metaphysical claim (roughly, one about mind-independence) together with the view that it’s a reasonable goal for science to represent the world’s structure. This brought me a little in the direction of a view like Devitt’s, and I now think this was not very satisfactory, as discussed below.

I said above that the Devitt-style combination of metaphysics and scientific optimism is a “problematic” combination. Why is it problematic? Perhaps it is a bit artificial, from what I have said so far, but that is no reason not to name and defend the thesis. You can set up whatever view you like, perhaps by conjunctively combining different elements, and then work out whether to defend its truth.

A second aspect of the problem can be approached by way of quantum mechanics. Some people have interpreted quantum mechanics as pointing towards a kind of

3  Chakravartty (2017), in the Stanford Encyclopedia of Philosophy:

Metaphysically, realism is committed to the mind-independent existence of the world investigated by the sciences. This idea is best clarified in contrast with positions that deny it. For instance, it is denied by any position that falls under the traditional heading of ‘idealism’, including some forms of phenomenology, according to which there is no world external to and thus independent of the mind.

4  I discuss this aspect of Popper’s views in more detail in Godfrey-Smith (2016a).


idealist metaphysics, in its treatment of the role of measurement in physical systems. The occurrence of definite states and events is due to the collapse of the wave function, which is due to the act of measurement, and measurement has a connection with the mental of a kind that is pertinent here. This view is not anywhere near as standard an interpretation of quantum mechanics as stereotypes sometimes suppose, but it has been seriously defended (Schrödinger 1959; Wigner 1961/1995).5 I understand that most experts think that these interpretations are unsatisfactory, but that is not the point. Even if the views are no good, they have been defended by important figures and they show something important in principle. It is entirely possible for scientific work to bear directly on the metaphysical question of realism and idealism. It is possible for us to learn through science something surprising about the relation between stones, trees, cats, electrons, neutrons, etc., and “the mental.” It is a view within physics that stones, trees, and cats (Devitt’s examples) have mind-independent existence. Or rather, one context in which this debate can take place is within physics. The fact that there may be more purely philosophical reasons to deny mind-independence does not mean that there can’t be scientific arguments, in either direction, as well. Perhaps the physicists combined scientific reasons with more traditional philosophical ones to reach their conclusion (Wigner seems to), but the science is part of what they think bears on the issue. And to take this debate seriously requires taking science seriously as telling us how things are. How they are, on one view, is mind-dependent. With respect to that realism-versus-idealism issue, science, if we trust it at all, has the role of adjudicator (or contributor to adjudication), rather than occupant of one side of the issue.

Putting these first two points together, one can have a realist metaphysics and be pessimistic about science (Popper), or take the lessons of well-confirmed theories seriously but think they tell us something very metaphysically surprising (Schrödinger). I’ll next look at the summaries of confidence about science that are treated as part of SR in Devitt and others.

17.4  Confidence

Here is “Strong Scientific Realism” again, from above:

5  Einstein accused Bohr of a view like this, but Bohr apparently said Einstein had misunderstood him. See Faye (2014). Here is Wigner:

[I]t was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness…. It may be premature to believe that the present philosophy of quantum mechanics will remain a permanent feature of future physical theories; it will remain remarkable, in whatever way our future concepts may develop, that the very study of the external world led to the conclusion that the content of the consciousness is an ultimate reality. (1961/1995: 172)


SSR: Most of the essential unobservables of well-established current scientific theories exist mind-independently and mostly have the properties attributed to them by science.

I will make two points bearing on this. (i) There may be diversity across cases with respect to the appropriate level of confidence. (ii) There may be hidden diversity in what is being claimed by the science, in a way Devitt’s formulation does not accommodate.

Devitt, and many others, want to come up with a kind of overall level of confidence, applicable across all of science. I think this exercise is OK as an attempt to give a rough and coarse-grained summary, which might have a role in some discussions (discussions of the relation between science and religion, for example). But a summary like this is not something to formulate carefully and get right, because its very nature is rough and involves a kind of balancing or averaging of disparate cases. Different degrees and kinds of confidence will surely be appropriate in different parts of science. Scientific realists sometimes talk as if that were not true – as if all “mature” theories motivate the same sort of conclusion – but this is surely a mistake.

Here is how Devitt sets up his summary. (The italics are mine, to mark words I will comment on below.)

Most of the essential unobservables of well-established current scientific theories exist mind-independently and mostly have the properties attributed to them by science.

Rather than just saying “The world is as science says it is,” he qualifies in three ways. Only the posits of well-established theories, only the essential posits, and only most of these. We could entertain weaker or stronger options: many rather than most, and so on.

What is my objection? At least initially, it is a mild one. I think this project involves an excessive sculpting and refining of something that can only ever be a very rough and ready summary of how well we think we’re doing, as we look across very diverse fields. There is a kind of false rigor in the project, as if there is a definite threshold we have to identify here and get right, a threshold that, once cleared, allows the drawing of a philosophical conclusion. That is not really how things are. Instead we have different kinds of evidence, and different shortcomings and reasons for doubt, in different fields. As Ernan McMullin argued back in the 1980s (especially 1984), it would be an error to use the special perplexities around subatomic physics to draw conclusions about “scientific” theories – about virology and immunology and climate models – and it would also be a mistake to use the apparently very settled character of (say) basic chemistry to draw conclusions about “science” that might guide our thinking about (say) cosmology.

My second point can be set up with another piece of the first Devitt quote above:

The central idea of scientific realism is that science really is committed [to a variety of unobservable entities] and is, for the most part, right in its commitments. As Hilary Putnam once put it, realism takes science at “face value.”

I think that scientific realism of Devitt’s kind does not really take science at face value. This is because Devitt’s statements are based around a particular way of


presenting a view of the contents of the world. We refer to objects and attribute properties to them. (Most of the essential unobservables exist mind-independently and mostly have the properties attributed to them.) Any needed qualifications are given by talking of “most objects” and “most properties.” This is fine for some parts of science, not so much for others.

My argument is based on the role of different representational strategies in science, and in particular the role of modeling. I’ll present the case assuming as little as possible of my own ways of thinking about those issues (2006). But the central themes here are: (a) the role of approximation and idealization, and (b) the primacy of structure rather than entities.

In many fields, science aims to develop models that are good approximations, in their role as representations of the underlying structure responsible for data (Wimsatt 2007). Theories that deal with change are often like this. It is often very hard to represent change in a tractable and exact way without idealizing. Often, in scientific theories that are concerned with change, you represent an idealized case in an exact way and then add a commentary, and perhaps some ancillary models, that try to capture what is simplified or left out. This is common in biology, especially parts that deal with large-scale phenomena such as evolution and ecological change (Godfrey-Smith 2009). In sciences that work like this, flat-out “Xs exist and have property F” claims are not good ways of summarizing what we take to be the state of knowledge, especially on the “property F” side. What is meant is something more like this: our best models include Xs which have F, and we think the structure specified by these models is usefully similar to the structure of the system we are trying to understand.

So suppose Devitt wants to say: “Most of the essential unobservables of well-established current scientific theories exist mind-independently” (ignoring the “facts” side for a moment – the points below are magnified once we put it back in). Think of this (again simplifying) as a long conjunction: Xs exist and Ys exist and Zs exist …. In each conjunct, Devitt is either reproducing the face-value commitment to Xs that might be made in that science, or he is making a more philosophically committal claim, one that is a sharpening up of what the science itself delivered. Neither option looks good to me.

The first option is that he is using the ordinary face-value meaning of the claim “Xs exist” in each science. Then in many sciences, what “Xs exist” means (if people say it at all) is something like: our best model, which is a good model, includes Xs. This is not really saying what Devitt wants it to say. It’s not a commitment to the definite existence of Xs, and this form of words would probably only be used as a sort of shorthand.

The other option is that when Devitt, speaking as a scientific realist, says “Xs exist,” he is saying something more definite than the science delivers. He is sharpening up and strengthening a commitment. Then he is adding scientific, empirical content to key pieces of all the fields that make up modern science. He is doing a little bit of particle physics, a little bit of cosmology, a little bit of biology. He has no basis from which to do this.


So there is a dilemma here. Devitt can just echo the scientists, and not add anything to what they say, when they make theoretical commitments. But then they might not be saying what he wants them to say. Or he can say what he really wants to say, but then the science may not license it. Making my point like that oversimplifies a little, too. There are two related problems here, seen in different cases. In some cases, scientific practice itself, either on its face or just a little below the surface, is organized around models of structure, with an embrace of approximation. Then the problem is as I describe it above. The other possibility is that the scientists are saying what Devitt wants them to say, but they perhaps should not. This is the point of some arguments from structural realism  – Worrall’s classic 1989 article, especially. Historically, modern science has tended to do well capturing structure and less well in identifying entities. Does that distinction make sense? Yes, as Worrall argued. In nineteenth century physics, the structural features of electromagnetic radiation were handled quite well, but the avowedly central entity, the ether, turned out not to exist. Some structural realists make claims of this kind for all science. But then they make the same sort of generalization across different fields that I think causes problems for Devitt. I think it does not apply everywhere but it applies in some fields, especially highly mathematical fields dealing with inaccessible entities that seem to behave in ways very distant from what we are familiar with in the case of countable, middle-sized material objects. In areas like that, high confidence that we are getting something right is sometimes justified, but this will less often be confidence that we have pinned down the real entities. Instead we can be confident about the structure captured by our models.6 When situations like this hold, it will probably be necessary to talk about the approximate truth of theories – and not just truth, also accuracy with respect to various kinds of non-linguistic representational vehicles. These are the sorts of things Devitt wants to avoid – he wants to avoid semantic questions and keep the discussion ontological. But I think a role for philosophically difficult notions of approximation is inescapable.

17.5  Scientific Realism

Thinking about Devitt’s version of scientific realism has led me to a rethinking of some of my own earlier work. In Theory and Reality (2003), I found it hard to formulate a version of scientific realism that had the right “shape” – a version that excluded and included what I saw as the right things. The result was a bit complicated, but I thought I could steer through it all and isolate a useful view. I now think that the results were not satisfactory and a different approach is needed.

6  Again, see McMullin (1984). Here I am expressing sympathy for something like what would now be called “epistemic structural realism,” not ontic structural realism.


The overall situation, I think, is like this. The classic “scientific realism” debates of the mid and late twentieth century arose in a context where a particular range of deflationary, constructivist, and skeptical views had been developed. In different ways, these views rejected the very possibility of representation of a mind-independent world, or rejected the view that we could reasonably hope to succeed in a project of this kind. Scientific realism in many twentieth century contexts then functioned as a blanket rejection of a diverse family of views – verificationism, radical constructivism, milder views like van Fraassen’s constructive empiricism (1980), and so on. Various pieces of twentieth century scientific realism, as usually formulated, were “pointed outwards” at these different opposing views. They were also pointed outwards at the family of views that used historical inductions to argue for a pessimistic view of our current scientific efforts (Laudan 1981), without questioning whether it was possible to get things right. The idea of a unified and definite “realist” position, worth formulating carefully and then defending against all these different views, was the product of that context of discussion. Devitt’s formulations have always been among the clearest and most carefully worked-through. But as I argued above, what we end up with is problematic.

In Theory and Reality (2003: ch. 12), I developed my own version of a view of this kind. This view had a metaphysical element – it was not just a view about how theories are properly interpreted. Roughly, scientific realism in my sense combined a statement of the mind-independence of (most of) the world’s structure with the claim that science aims, and reasonably aims, to represent that structure in its theories. I did not want to include a blanket statement of optimism about our current theories, or our “well-established” theories, as Devitt does. Scientific realism is compatible with the view that we are presently wrong about a lot. So like van Fraassen, I included a claim about the goals of science. This claim was fairly modest: realism requires that getting things right is possible, and is a reasonable goal rather than an idle hope. This claim about goals will imply some conclusions about the content of theories, especially when they putatively refer to unobservable entities. Our theories and models are candidates for accurately representing what’s going on, and not only in the observable domain.

That 2003 statement was also supposed to be more careful than some others about the “mind-independence” claim on the metaphysical side. I think that quite a lot of standard “realist” talk about the mind-independent character of the world is problematic. The mental is part of the natural world (not something separate from it) and it also has a distinctive causal role. Many kinds of dependence of material objects on the mental are fine (Godfrey-Smith 2016b). Others are not (such as “worldmaking” in Goodman’s 1978 sense). I also wanted to allow the possibility of science uncovering some surprising connections between the mental side of the universe and the rest – I did not want to rule that sort of thing out ahead of time. So the metaphysical background to scientific realism was what I called common-sense realism naturalized.

We all inhabit a common reality, which has a structure that exists independently of what people think and say about it, except insofar as reality is comprised of thoughts, theories,


and other symbols, and except insofar as reality is dependent on thoughts, theories, and other symbols in ways that might be uncovered by science. (Godfrey-Smith 2003: 176)

The extra element was the claim about what science reasonably tries to do:

One actual and reasonable aim of science is to give us accurate descriptions (and other representations) of what reality is like. This project includes giving us accurate representations of aspects of reality that are unobservable. (ibid.)

Scientific realism I saw as the conjunction of those claims. I now think this is not a good formulation, for many of the same sorts of reasons used above when arguing against Devitt. Once again: I think there is a family of general questions about mind-independence – about the relations between the mental (the symbolic, the theory-using …) part of the universe and the rest of it.7 These are questions about the putatively small subset of the world’s goings-on that are mental, in relation to the whole. This putatively small subset includes scientific cognition and the formulation of theories. There are several kinds of “dependence” that might arise or be considered here. And these are issues that science can tell us something about, as long as we think that science is more than an instrument for prediction. Science is a means we have for the adjudication of questions of this kind – science is not aligned in a prior way with one view or another. But if there is no limit to the extent to which a scientific theory might assert mind-dependence of the material world, then there is no limit to how far science can take us towards a metaphysical view that is a form or cousin of traditional idealism. Suppose an idealist view did arise from quantum mechanics. If that happened, then a kind of idealism would be shown to be true, but “common-sense realism naturalized,” in the way formulated above, could still hold. Once again, I do not think it is likely that science will take us in this direction, but the possibility illustrates what is wrong with statements of scientific realism that combine a “taking what theories say seriously” clause with something about the mind-independence of the world being studied.

Rather than trying for some new and more harmonious combination of claims to comprise scientific realism, I think that it would be best to separate the metaphysical questions in this area from questions about the extent to which we can believe what various parts of science seem to tell us. I accept that a general question can be asked about whether it’s possible in principle for scientific theories to represent the structure of the world, including its unobservable structure. I think this is less of a “live” question now than it was 50 years ago; the answer is that it is possible in principle. Moving past this question, we can ask a range of questions about how well the scientific enterprise has done in this respect, how it is doing now, and how well it might do in the future. Once we get to those questions, a field-by-field treatment becomes primary, and my earlier mild criticism of general summaries like Devitt’s applies.

7  As the parenthetical comment indicates, the “mental” as traditionally understood does not pick out the category under discussion, which includes theorizing and its social organization, etc. I am using “mental” as a shorthand here.


Devitt has long argued that metaphysical issues should be kept as separate as possible from semantic and epistemological ones, and handled in their own terms. I am suggesting here that much of the twentieth century (and ongoing) discussion of scientific realism has featured unwelcome entanglements of just this kind. I think that in the future, the idea of a thesis of “scientific realism” as something to be defended or denied will fade, leaving us with a set of metaphysical questions (or, sometimes, non-questions) about the mental and non-mental on one side, and a family of questions about the achievements and prospects of various different scientific projects, on the other.

References

Chakravartty, A. 2017. Scientific realism. In The Stanford encyclopedia of philosophy, Summer 2017 ed, ed. E.N. Zalta. https://plato.stanford.edu/archives/sum2017/entries/scientific-realism/.
Devitt, M. 1984. Realism and truth. Oxford: Blackwell.
———. 2011. Are unconceived alternatives a problem for scientific realism? Journal for General Philosophy of Science 42: 285–293.
Faye, J. 2014. Copenhagen interpretation of quantum mechanics. In The Stanford encyclopedia of philosophy, Fall 2014 ed, ed. E.N. Zalta. https://plato.stanford.edu/archives/fall2014/entries/qm-copenhagen/.
Godfrey-Smith, P. 2003. Theory and reality: An introduction to the philosophy of science. Chicago: University of Chicago Press.
———. 2006. The strategy of model-based science. Biology and Philosophy 21: 725–740.
———. 2009. Abstractions, idealizations, and evolutionary biology. In Mapping the future of biology: Evolving concepts and theories (Boston studies in the philosophy of science), ed. A. Barberousse, M. Morange, and T. Pradeu, 47–56. Dordrecht: Springer.
———. 2016a. Popper’s philosophy of science: Looking ahead. In The Cambridge companion to Popper, ed. J. Shearmur and G. Stokes, 104–124. Cambridge: Cambridge University Press.
———. 2016b. Dewey and the question of realism. Noûs 50: 73–89.
Goodman, N. 1978. Ways of worldmaking. Indianapolis: Hackett.
Laudan, L. 1981. A confutation of convergent realism. Philosophy of Science 48: 19–48.
McMullin, E. 1984. A case for scientific realism. In Scientific realism, ed. J. Leplin, 8–40. Berkeley: University of California Press.
Popper, K.R. 1959. The logic of scientific discovery. New York: Basic Books.
Schrödinger, E. 1959. Mind and matter. Cambridge: Cambridge University Press.
van Fraassen, B.C. 1980. The scientific image. Oxford: Oxford University Press.
Wigner, E.P. 1961/1995. Remarks on the mind-body question. In Philosophical reflections and syntheses: The collected works of Eugene Paul Wigner, Part B Historical, philosophical, and socio-political papers, ed. J. Mehra, vol. B/6, 171–184. Berlin/Heidelberg: Springer. First published 1961.
Wimsatt, W.C. 2007. Re-engineering philosophy for limited beings: Piecewise approximations to reality. Cambridge, MA: Harvard University Press.
Woolgar, S. 1988. Science: The very idea. London: Ellis Horwood.
Worrall, J. 1989. Structural realism: The best of both worlds? Dialectica 43: 99–124.

Chapter 18

Species Have Historical Not Intrinsic Essences
Marion Godman and David Papineau

Abstract  In a series of important recent papers, Michael Devitt has argued, against contemporary orthodoxy, that species and other biological taxa have essences. We fully support this revival of essentialism. We further agree with Devitt that biological essences are properties that explain the multiple shared features of taxon members. We are not persuaded, however, that these essences need be common intrinsic properties of those members. An alternative candidate is shared historical origins. We argue, contra Devitt, that historical essences explain the shared features of biological taxa just as well as intrinsic properties. Indeed, we think that there are reasons for viewing historical essences as more basic than intrinsic properties. One reason is that many taxonomically shared features depend on non-zygotic inheritance rather than intrinsic genetic nature. Another is that historical origins play a more significant role than intrinsic properties in explaining the shared features of non-sexually-reproducing organisms.

Keywords  Essentialism · Natural kinds · Species · Intrinsic essences · Historical essences · Michael Devitt

18.1  Millian Kinds

Many categories are distinguished by the fact that all their instances share many, many properties. John Stuart Mill referred to such categories as “Kinds”.

M. Godman
Department of Political Science, Aarhus University, Aarhus, Denmark
e-mail: [email protected]

D. Papineau (*)
Department of Philosophy, King’s College London, London, UK
City University of New York Graduate Center, New York, NY, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
A. Bianchi (ed.), Language and Reality from a Naturalistic Perspective, Philosophical Studies Series 142, https://doi.org/10.1007/978-3-030-47641-0_18


By a Kind, it will be remembered, we mean one of those classes which are distinguished from all others not by one or a few definite properties, but by an unknown multitude of them…. The class horse is a Kind, because the things which agree in possessing the characters by which we recognise a horse, agree in a great number of other properties, as we know, and, it cannot be doubted, in many more than we know. ([1856]1974: bk. 4, ch. 6, § 4)

Of course, horses are not alike in all respects. They vary in size, colour, and plenty of other features. Still, it remains the case that they are all alike in sharing manes, tails, a liking for hay, and many other anatomical and behavioural features – including, as Mill emphasizes, many that we are as yet ignorant of. In short, there are many true instances of the schema All horses are F. What goes for horses goes for all other biological species. For any species, C, there are many true instances of the schema All Cs are F. Biological species are by no means the only Millian Kinds. Chemical substances display the same structure. All samples of copper share the same density, melting point, electrical and heat conductivity, disposition to combine with other substances, and so on. It works the same with chemical compounds like water, as well as with elements like copper. Samples of water again share density, melting point, and so on. Whether a chemical substance C is an element or a compound, there will be a multitude of truths of the form All Cs are F. Again, astronomical objects fall into Kinds. All main-sequence stars share a wide range of properties with each other, as do red giants, white dwarfs, supernovae, comets, and planets. Returning to the biological realm, species are by no means the only biological Kinds. Higher taxa (genus, family, order, class, phylum, …) also have members that share many features. Thus, all the members of the class mammalia have hair, a neocortex, mammary glands, are warm-blooded, and so on. (Note how the features shared by all the individual members of some such higher taxon will always be a subset of the features shared by all the individual members of any subordinate lower taxon. For example, the features shared by all mammals are a subset of the features shared by all horses.)1 Most Kinds fall into types. All the different species are alike species, all the different chemical substances are alike chemical substances, and so on. This is important because it can tell us in advance which properties will be shared by the members of any given Kind of a certain type. For organisms in a species, say, we know that morphology, anatomy, diet, breeding behaviour, … will be shared, but not injuries, size, number of siblings, …. For samples of chemical substances, we know that density, melting point, combinatorial dispositions, … will be shared, but not shape, monetary value, ….

1  It is even possible to regard individual animals, like Dobbin, and other persisting objects, like David’s car, as examples of Millian Kinds. Think of the “members” of such Kinds as their temporal stages. Then the “members” of Dobbin will not only share the features common to all horses, but also a certain size, colour and funny scar on his right ear. Again, the stages of David’s car won’t only share the features common to all Vauxhall Zafiras, but also the tow ball he added and the dent in the bonnet. Unfortunately, we will be unable to pursue this important topic further here (however for some further discussion see Millikan 1999, 2000: 23ff).


Ruth Millikan speaks of “templates” for families of Kinds – a list of determinables such that all the members of any Kind in the relevant family will share the same determinate value. (For example, for chemical substances the list will include melting point: while different substances have different melting points, all samples of a given substance have the same one.) Someone who grasps the template for a family will thus be able to make many one-shot inductions. Once you’ve seen the temperature at which one sample of copper melts, you’ll know the melting point for all. Once you’ve seen how one hedgehog reproduces, you’ll know how they all do (Millikan 1998).

Let us make one last point about the structure of Kinds. Not every category C that supports a range of generalizations of the form All Cs are F is a genuine Kind. Consider the category of:

Mantelpieces that have a portrait of Robert De Niro and six glasses on top.

Now suppose that the term, “De Niro piece”, has been coined to classify the instances that conform to this definition and that we go out and track down De Niro pieces. We will of course find that All De Niro pieces are mantelpieces, that All De Niro pieces have a portrait of Robert De Niro, and that All De Niro pieces have six glasses. Still, these are not genuine empirical generalizations. Rather, they are guaranteed by the definition of “De Niro piece”. Nor is there any reason to suppose that there are any other general truths about De Niro pieces, beyond those so definitionally guaranteed. Because of this, categories like De Niro piece do not add to our power to anticipate nature. We can’t ever use generalizations involving such categories to tell us something we don’t already know. Since we first need to check that something has all the properties entailed by the definition to know whether it is a De Niro piece, the generalizations about De Niro pieces cannot function as a means of predicting further unchecked properties. With genuine Kinds, it is not like this. We can typically ascertain Kind membership and use it to predict properties without first checking whether all the properties associated with the Kind are present. We can easily determine that something is a horse, or a piece of copper, on the basis of just one or two of the many features that such things have in common. We have explained Kinds, in part, by noting how they underpin the ability of human thinkers to make one-shot inductions and anticipate new features. But it is worth emphasizing that the structure of Kinds is not itself an anthropocentric matter. Kinds involve real patterns in nature, patterns that would still be there even if there were no thinkers to make use of them. These patterns may be very useful to human beings – indeed many of our cognitive powers have no doubt evolved to make use of them – but the patterns themselves are prior to this use.


18.2  Essences

Kinds have a peculiar and striking structure. They enter into a multitude of generalizations (All Cs are F, for many Fs), and these generalizations are not just matters of definition (as they are with cooked-up categories like De Niro pieces). Kinds thus raise an explanatory question. Why do all the Cs share so many properties? What is it about the Cs that accounts for their multifarious resemblance? This is where essences come in. Essential properties are properties that explain all the other shared properties. For any Kind C, there will be some central common feature E possessed by each C, a feature that gives rise to all the other properties F shared by the Kind. The essential property thereby explains why the Kind supports multiple generalizations.

People sometimes say that essences are the “real definitions” of categories, or again that they specify “what it is to be a C”, “what makes something a C” or “in virtue of which something is a C”. We are not against these ways of talking, but on their own they can seem to lack substance. What distinguishes, among the many properties shared by C, those that constitute its “real definition”, or that specify “what it is to be/what makes something/in virtue of which something is a C”? We take the special explanatory role of essences to answer these questions. Essential properties are those that are responsible for all the many other properties shared by the members of C. (See Godman et al. 2020.)

Millikan has observed that there are two general sorts of Kinds, distinguished by two different sorts of essential properties. She calls them “eternal” Kinds and “historical” Kinds. With an eternal Kind, the shared properties of the Kind are explained by an intrinsic, or “eternal”, property of the members. With a historical Kind, the shared properties are explained by a historical relation to a common origin (1999, 2000).2

Eternal Kinds are perhaps the more familiar. Chemical substances are the paradigm. Their essences are the structures of their constituent molecules.3 All samples of copper have the same density, melting point, electrical and heat conductivity, … because they have the same molecular make-up. The same goes for astronomical Kinds, such as different kinds of stars, supernovae, comets (Ruphy 2010). For example, the common physical make-up of main-sequence stars (they fuse hydrogen to form helium) explains their other shared properties. Further cases suggest themselves. Geological Kinds arguably owe their many shared properties to

18  Species Have Historical Not Intrinsic Essences

359

common physical essences. Richard Boyd has suggested that weather categories like storms are further examples of eternal Kinds (1999: 84).4 Now for historical Kinds. Consider all the different copies of Alice in Wonderland, including the paperback with a front page torn off on Marion’s bookshelf, the hardback in David’s study, and the many others in numerous libraries and book stores across the world. These instances all share their first word, their second word, … and so on to the end. They also share the same list of characters, the same plot, and the same locations. We thus have a wealth of generalizations of the form All copies of Alice in Wonderland are F. Copies of Alice in Wonderland form a Kind. But the common properties of this kind are certainly not explainable by any common physical essence. The copies of Alice in Wonderland are not even physically alike. They can be made of different kinds of paper, or of board, or written in braille, and then there are audio versions on magnetic tape or digital disc. Rather, all these instances are members of the same Kind because they are all copies of an original. Their shared features are all due to their common descent from the original version written by Lewis Carroll. It is purely this chain of reproduction, not any common intrinsic property, that explains the shared features. Many artefacts are like literary works in this respect. Earlier we alluded to all the features common to Vauxhall Zafiras. They all share the same shape, same engine, same ingenious system for stowing seats in the rear, and so on for all their many design features. They too form a Kind. But here again the commonalities are not explained by some common intrinsic property. While the Zafiras do have many physical properties in common, none of these is distinguished as the source of all the other common features. Rather their many similarities stem from their all being made according to the same original blueprint. They are constituted as a Kind by their common historical source. More generally, Kinds with historical explanatory essences can be found throughout the cultural, social and technological domains. They typically cover innovations which lack a shared intrinsic core, but where a chain of reproduction can account for the similarity amongst instances over time, including technological inventions and regional identity (Millikan 1999), religion, ideology, cultural syndromes (Godman 2015, 2016), and gender (Bach 2012). For historical kinds, the basic principle is that “each [instance] exhibits the properties of the kind because other members of that same historical kind exhibit them” (Millikan 1999: 54).5

4  Perhaps, as Devitt has pointed out, the word "eternal" is less than ideal (personal correspondence). The members of an eternal Kind can be short-lived and in constant flux. Think of supernovae. But we shall stick to Millikan's (1999) terminology. "Eternal" Kinds are simply those with intrinsic essences.

5  Non-reductive physicalists hold that there are "special sciences" whose laws range over types whose instances share no physical properties. Some critics have argued that this supposed physical heterogeneity is in tension with substantial laws (Kim 1992; Block 1997; Papineau 2009, 2010). But this argument carries little weight against historically grounded special sciences. Historical categories conform to multiple generalizations because of shared origins, not shared physical natures (Millikan 1999; Godman 2015).



Different examples of historical Kinds will involve different means and mechanisms of reproduction (for example, type setting, copying machines, and various modes of social learning). But all examples will involve three central ingredients: (1) the existence of a model, (2) the production of new instances in interaction with the model or other past instances, and (3) an interaction that causes the new instances to resemble past instances. A chain of reproduction thus generates the relevant historical relations that ground and explain the Kind.

18.3  Biological Taxa

In a series of important recent papers (2008, 2010, 2018), Michael Devitt has argued that biological species have intrinsic essences.6 Devitt's position is nuanced. His fuller formulation is that biological essences "are, at least partly, underlying intrinsic, mostly genetic properties" (2008: 344). One reason for this more cautious formulation is that Devitt allows that biological essences are partly historical as well as intrinsic. We shall return to this historical concession below. But first let us comment on Devitt's thinking about the intrinsic component in biological essences.

6  In fact Devitt argues the point in general for all biological taxa, and focuses on species only for simplicity. We shall follow him in this. Note that all the points which follow apply to other taxa too.

Here Devitt is very much in accord with the points made in this paper so far. Although he often introduces essences as properties "in virtue of which an organism is C", or "which makes it a C", and so on, he makes it clear that such phrases must be cashed out in explanatory terms. As he sees it, the argument for supposing that species have intrinsic genetic essences is precisely that these essences explain the many shared features of species members: "generalizations about what they look like, about what they eat, about where they live, about what they prey on and are prey to, about their signals, about their mating habits, and so on" (Devitt 2008: 351). So, for Devitt, biological species are a version of the "eternal" Kinds discussed above. Their shared features are explained by some common underlying intrinsic property.

Still, this is not the only option suggested by our discussion so far. There is also the possibility of viewing species as historical Kinds, with the shared properties of their members being explained by their common descent from the same ancestors, rather than by their common genetic intrinsic make-up.

The discussion so far might have suggested that eternal and historical Kinds are disjoint. Eternal Kinds have instances that arise independently, with no causal interaction – such as chemical elements like copper and astronomical categories like main-sequence stars. Historical Kinds, by contrast, have instances that are causally related by lines of descent – such as copies of a literary work and members of a religion. But in truth there is no principled reason why some Kinds could not qualify as both historical and eternal. Perhaps the instances of some Kind are both copied from
some original and share some intrinsic physical make-up. Consider infectious diseases like measles or tuberculosis. One explanation for sufferers’ shared symptoms could be that their ailments are all copied from earlier sufferers. But another explanation would be that the sufferers all harbor the same physical microbe, and that this gives rise to their shared symptoms. In cases like this, we can see things either way. We can view the Kind as historical, and see the shared properties as due to the instances having a common source. From this historical perspective, the intrinsic physical property is simply part of the mechanism by which instances of the Kind reproduce themselves. Just as books are reproduced with the help of typesetters and printing blocks, so is measles reproduced with the help of viruses. But we can also view the Kind as eternal, and explain the shared properties as due to the common underlying physical core. From this eternal perspective, the common reproductive source is simply the reason why the instances all have the same physical essence. Just as the shared constitution of copper samples arise from their similar conditions of formation, so does the shared physical essence of measles cases derive from their common ancestry. There is no incompatibility here. As it happens, our two models for explaining the structure of Kinds can both be applied to infectious diseases. There is nothing wrong with that. The truth is that diseases happen both to reproduce themselves and to do so by giving all new instances a common intrinsic physical property. What then about biological species? At first pass it might seem that they will come out like infectious diseases, as both historical and eternal Kinds. Members of a species owe their shared properties to their common ancestry, but at the same time species reproduce themselves by imbuing their members with intrinsic genetic cores. In the section after next, however, we shall argue against this picture. In our view, there are good reasons for viewing species as historical and not eternal Kinds. First, though, it will be useful to discuss Devitt’s views about historical species essences.

18.4  Devitt on (Partly) Historical Essences

Devitt holds that biological species have an essence that is partly intrinsic, but also partly historical. However, he is thinking of the historical element quite differently from us. Devitt (2008, 2010, 2018) follows Ernst Mayr (1961) and Philip Kitcher (1984) in distinguishing two explanatory questions:

(1) Why do the members of a species each develop their range of shared phenotypic properties?
(2) What led to there being a species whose members develop this range of phenotypic properties?

The first question is about the ontogeny of the shared properties in species members, the second about the phylogeny of the species itself. Mayr called these two questions "proximate" and "ultimate" respectively. Devitt prefers Kitcher's terminology
of "structural" and "historical" (2008: 351–355). For the moment, we shall follow him in this.

In Devitt's view, the first structural question demands an answer in terms of intrinsic essences. There must be something inside tigers, say, that makes them all develop their shared characteristics: "if our concerns are … with a nature that causes a tiger's development into an organism with those properties, the nature must be intrinsic" (2018: 3). As Devitt sees it, then, historical essences only come in when we turn to the other, historical, question about what made the generalizations about the species true in the first place. Once we are looking for historical explanations of species, we should expect to bring in a historical element. The historical component in a species' essence will be that part of its nature that explains how the species came to exist. And this, Devitt allows, will include an account of how the members of the species descended from earlier organisms (2018).

A relatively minor point first. We are not persuaded that the notion of essence has much work to do in connection with the Mayr-Kitcher question about evolutionary history. As we see it, talk of essences earns its keep when we need to explain a body of generalizations characteristic of a Millian Kind (for some C, many truths of the form All Cs are F). We don't see why questions about the historical origin of species traits raise an explanatory challenge of this Millian form, and so to this extent think Devitt's commitment to (partly) historical essences is misplaced. But let that point pass. (Perhaps some way can be found of pressing the ultimate or historical question into Millian form.)

The more basic issue is that we, unlike Devitt, are thinking of historical essences as answers to the first proximate question (1), not the historical question (2). When we say that literary works and religions are Kinds with historical essences, we are aiming to explain the proximate fact that their instances all display a great number of commonalities, and saying that this is due to their all being copied from a common source. We aren't addressing questions about the origins of literary works or religions, so far at least, but only the striking fact that their instances display multiple similarities.

(At this point, we need to note how Devitt's use of the term "structural" for questions of type (1) prejudges the case against historical answers to such questions. To the extent that "structural" implies an answer in terms of intrinsic rather than historical properties, the terminology simply rules out the alternative historical option by fiat. Given this, we shall revert henceforth to Mayr's original term "proximate".)

We are thus thinking of historical essences quite differently from Devitt. For us they explain proximate multiple similarities, not historical origins. By way of further evidence for this difference, note that we have no need to answer a question that much perturbs Devitt, and occupies most of his 2018 paper. From Devitt's perspective, a historical species essence explains how the species emerged historically, and so needs to be informative about which ancestors constituted this emergence. (Devitt argues that this challenge can only be met by identifying ancestors with a certain intrinsic essence; for him, this provides a further reason why species essences must be partly intrinsic.) By contrast, our appeal to historical essences imposes no
demand that we give a criterion for species emergence. Since we are not concerned with explaining the historical origins of species, we can by-pass such issues.

By way of analogy, consider a request for an explanation of the properties common to all Christians. We would say that the members of this Kind display so many common features because they are all influenced by a common historical source. This seems the right answer. But it doesn't require us to be specific about when Christianity started. Maybe we should date it from Jesus, or from the first Pope, or from the Council of Nicea. But our historical account of the Kind would seem to stand up perfectly well whichever we do. (Perhaps there are issues for which a definite demarcation of Christianity, and of species membership, matters. But answering proximate questions about Kinds isn't among them.)

18.5  Historical over Intrinsic Essences

We are thus left with the proximate question about biological species. Why do their members all share so many properties? As we have seen, one answer would be to assimilate species to eternal Kinds, as Devitt does, and appeal to the common genetic make-up intrinsic to each member. But an alternative would be to view species as historical Kinds, and attribute their shared properties to their common ancestry, with their genetic make-up simply being part of the species' copying mechanism.

It might seem that there is no good reason to favor one perspective over the other. If historical essences are just as legitimate as intrinsic ones, why should it matter that we can view biological species in two ways, each of which explains the common features of their instances? After all, the two stories can be viewed as complementary, showing how species are both like eternal kinds and like historical kinds, in line with our earlier analysis of infectious diseases.

In this final section we would nevertheless like to suggest that there are at least two reasons to reject the assimilation of species to eternal Kinds and to view them as instead fundamentally historical. The first is to do with non-zygotic inheritance. The second is to do with species that reproduce by fission.

Non-zygotic inheritance first. It is often said that "acquired characteristics are not inherited". But this dictum needs to be handled with care. It is of course true that acquired characteristics are not inherited through the sexual bottleneck. The children of a skilled forager will not inherit her skills simply from the genetic material she bequeaths them. But they might well inherit her skills in other ways – for instance, by her explicitly training them, or by their implicitly copying her tricks. In this sense, there is nothing at all problematic about the inheritance of acquired characteristics. Of course offspring tend to resemble their parents in many ways that owe nothing to genetic inheritance, courtesy of various modes of social learning.

One common theme in much recent biological thinking is that this kind of inheritance can be just as important to the evolution and nature of biological species as genetic inheritance. Developmental systems theorists (Oyama et al. 2003) and epigenetic evolutionists (Jablonka and Raz 2009) have emphasized how many
inherited traits that are regarded as characteristic of species don't go through the zygotic sexual bottleneck. Discussed cases include habitat-imprinting mechanisms for insects, mating songs in killer whales, sexual preferences in birds, and tool use in chimpanzees and in humans.

It may be difficult to quantify the importance of such non-zygotic inheritance for the evolution of species. But there is no reason to think that it is a marginal phenomenon. Natural selection can operate on the variation of any phenotypic traits as long as these traits confer heritable variation of fitness – and so regardless of whether these heritable variations are caused by genetic or by non-genetic factors (Mameli 2004: 39). Given this, we can expect the traits that are systematically shared among species members to contain elements that depend on non-zygotic channels of inheritance as well as genetic ones.7

7  From a traditional point of view, a change in gene-pool frequencies is necessary for evolution by natural selection. The position we are defending denies this. A change in the frequency of non-zygotically inherited traits can be driven by natural selection without any changes in gene frequencies.

It is worth emphasizing that the point we are making here is not just a denial of genetic determinism. Devitt is of course aware that few if any species-characteristic phenotypic traits are purely genetic, in the sense of being caused entirely by genetic factors, without any assistance from the environment.8 That is not the current issue. Rather we are observing that nothing requires characteristic traits shared by species members to depend on genetic inheritance at all. Genes are just one of the means by which parental characteristics are passed on to offspring.

8  See e.g. Devitt 2008: 352.

This kind of non-zygotic inheritance is in obvious tension with supposing that intrinsic essences are always the proximate explanation of the common features of biological species. It might seem natural to suppose that such proximate explanations must appeal to material present in the zygote. Why do all tigers grow up the same, and different from zebras, even though tigers and zebras are subject to just the same environmental influences? What could explain that, except their shared genetic make-up? Well, the answer is that tigers and zebras aren't subject to just the same environmental influences. Tigers are raised by tigers, while zebras are raised by zebras, and many of their species-characteristic properties can be due to this in itself – without any assistance from their genes.

That is the first reason why we think biological species should be counted as historical Kinds rather than eternal ones. Not all species-characteristic properties can be proximately explained by intrinsic genetic properties that are present in the zygote. Some shared characteristics derive from other ways in which parents are copied by their offspring. We can account for all the cases if we view species as historical Kinds, whose shared properties are due to copying mechanisms, in a way we can't if we view them as eternal Kinds.

The second reason for denying that species have intrinsic essences relates to species that don't reproduce zygotically at all. Bacteria and some single-celled protists multiply solely by cellular fission. There seems no good reason to think of the
similarities in such species as deriving from some essential inner core, rather than historically from the copying process of cellular mitosis. As we have just seen, to the extent that there is an argument for intrinsic essences, it hinges on the thought that it can only be the genetic material present in the zygote that explain why all members of a species grow up to display the same traits. We have already seen that this thought isn’t watertight even for organisms that do develop from zygotes. But, apart from that, it gets no grip at all on organisms that don’t so develop, but are born full-grown, so to speak. We don’t think all the pound coins pressed from some mould must have some common inner essence to explain why they share their many other joint properties. No more should we think this of the members of non-zygotic species. In both cases, the explanation of the shared properties is much simpler. The instances are all copied from the same original and share properties for that reason. They haven’t developed into complex organisms from single cells that must therefore contain the source of their similarities. Since they don’t grow and develop, no inner essence is needed to explain why they do so in the same way.9 Bacteria and single-celled protists do contain DNA. However, since this does not play a developmental role, but simply contributes to the metabolic workings of the cell, it has no better claim to being the intrinsic essence of these species than their cell walls, say. These traits are all simply different features of the organism that are reproduced by the copying process of mitosis. Those, like Devitt, who defend intrinsic biological essentialism might object that asexually reproducing single-celled organisms do not fall naturally into the species category. Without sexually interacting organisms contributing to shared gene pools, these organisms constitute simple lineages, rather than any more complex biological categories. True enough. But this does not undermine the argument. Even if they aren’t species, single-celled lineages are certainly biological Kinds, with their members displaying multiple similarities. So they must still have some shared essence, some common property that explains their many shared features. In the absence of any inner developmental core, this explanation can only be their common historical source. Devitt himself emphasizes that his underlying concern is with Linnaean taxa in general (2008: 346). Species are just a special case. In principle, we suppose, intrinsic essentialists could restrict their claims to taxa that involve full-fledged species, and so avoid the argument from single-celled organisms. But it strikes us as preferable to have one account that works for all biological taxa, including single-celled lineages, rather than one story for species and another for lineages.

9  These points apply to some other biological species that reproduce non-sexually. Aspen trees and many Crassulaceae reproduce largely vegetatively; to that extent their offspring are developed multi-cellular organisms, and so the similarities they share are in no need of explanation by inner essences.



18.6  Conclusion

Devitt has made an important contribution by reopening the question of biological essentialism. He is quite right to reject the contemporary dogma that biology has no need of essences. As he insists, the members of biological taxa share a multitude of properties, and this demands an essentialist explanation. We feel, however, that he has located biological essences in the wrong place. To ask for explanations of shared properties is not yet to embrace intrinsic essences. In many cases, shared properties can instead be explained by a shared history. In our view, this is the right model for biological taxa. While intrinsic essences can explain some of the shared properties of some sexually reproducing taxa, historical essences provide a fully general account that covers all shared properties across all taxa.

References

Bach, T. 2012. Gender is a natural kind with a historical essence. Ethics 122 (2): 231–272.
Block, N. 1997. Anti-reductionism slaps back. Noûs 31 (11): 107–132.
Boyd, R. 1999. Kinds, complexity and multiple realization: Comments on Millikan's "Historical kinds and the special sciences." Philosophical Studies 95 (1): 67–98.
Devitt, M. 2008. Resurrecting biological essentialism. Philosophy of Science 75 (3): 344–382.
———. 2010. Species have (partly) intrinsic essences. Philosophy of Science 77 (5): 648–661.
———. 2018. Historical biological essentialism. Studies in History and Philosophy of Biological and Biomedical Sciences 71: 1–7.
Godman, M. 2015. The special science dilemma and how culture solves it. Australasian Journal of Philosophy 93 (3): 491–508.
———. 2016. Cultural syndromes: Socially learned but real. Filosofia Unisinos 7 (2): 185–191.
Godman, M., A. Mallozzi, and D. Papineau. 2020. Essential properties are super-explanatory: Taming metaphysical modality. Journal of the American Philosophical Association. https://doi.org/10.1017/apa.2019.48.
Jablonka, E., and G. Raz. 2009. Transgenerational epigenetic inheritance: Prevalence, mechanisms, and implications for the study of heredity and evolution. The Quarterly Review of Biology 84 (2): 131–176.
Kim, J. 1992. Multiple realizability and the metaphysics of reduction. Philosophy and Phenomenological Research 52 (1): 1–26.
Kitcher, P. 1984. Species. Philosophy of Science 51 (2): 308–333.
Mameli, M. 2004. Nongenetic selection and nongenetic inheritance. The British Journal for the Philosophy of Science 55 (1): 35–71.
Mayr, E. 1961. Cause and effect in biology. Science 131: 1501–1506.
Mill, J.S. 1886/1974. A System of logic, ratiocinative and inductive: Being a connected view of the principles of evidence and the methods of scientific investigation. Toronto: University of Toronto Press.
Millikan, R.G. 1998. A common structure for concepts of individuals, stuffs, and real kinds: More Mama, more milk, and more mouse. Behavioral and Brain Sciences 21 (1): 55–65.
———. 1999. Historical kinds and the special sciences. Philosophical Studies 95: 45–65.
———. 2000. On clear and confused ideas: An essay about substance concepts. Cambridge: Cambridge University Press.
Oyama, S., P.E. Griffiths, and R.D. Gray, eds. 2003. Cycles of contingency: Developmental systems and evolution. Cambridge, MA: MIT Press.
Papineau, D. 2009. Physicalism and the human sciences. In Philosophy of the social sciences: Philosophical theory and scientific practice, ed. C. Mantzavinos, 103–123. Cambridge: Cambridge University Press.
———. 2010. Can any sciences be special? In Emergence in mind, ed. C. Macdonald and G. Macdonald, 179–197. Oxford: Oxford University Press.
Ruphy, S. 2010. Are stellar kinds natural kinds? A challenging newcomer in the monism/pluralism and realism/antirealism debates. Philosophy of Science 77 (5): 1109–1120.

Part VI

Michael Devitt’s Responses

Chapter 19

Stirring the Possum: Responses to the Bianchi Papers

Michael Devitt

Abstract  This paper is made up of responses to the papers in this volume. The first section is concerned with the philosophy of linguistics, particularly with the linguistic conception of grammars and the psychological reality of language. The second section is concerned with the theory of reference for proper names, including issues of reference borrowing, grounding, the qua-problem, and causal descriptivism. The third section is concerned with the theory of meaning, particularly the matters of direct reference, the contingent a priori, rigidity for kind terms, narrow meaning, and the use theory. The fourth section is concerned with methodology, particularly with "putting metaphysics first", the role of intuitions, and the contribution of experimental semantics. The fifth section tackles two issues in metaphysics: the definition of scientific realism and biological essentialism.

Keywords  Philosophy of linguistics · Theory of reference · Theory of meaning · Intuitions · Experimental semantics · Scientific realism · Biological essentialism

Thank you very much indeed to my friends who contributed such rich and thought-provoking papers to this volume. And a special thanks to Andrea Bianchi for conceiving of this volume and pulling it off. He is a wonderful editor!1

1  So wonderful that I forgive him for rejecting "Stirring the Possum" as the title of this volume (although he did let me use it for my responses). It's an Australian expression and he was worried that its meaning would be lost on non-Australians. But that has not been my experience. And its meaning is only a Google click away for those who can't figure it out. I am indebted to Bianchi also for some helpful comments on a draft of these responses.

The papers are so thought-provoking that, I confess, I found responding to them rather overwhelming. They obviously raise many more issues than I can deal with here. I focus my responses on areas where I think I have something new to say or
where I detect misunderstandings. So my responses to papers I largely agree with will be very brief. And where I think – perhaps mistakenly! – that I have dealt adequately with a criticism elsewhere, I shall simply cite that discussion. I shall organize my responses into sections according to topics. The authors of any of the volume’s papers that I cite in a section are mentioned parenthetically in the section’s title. Some authors are cited in more than one section.

19.1  Philosophy of Linguistics

19.1.1  The Linguistic Conception of Grammars (Collins, Rey)

19.1.1.1  Introduction

In my book, Ignorance of Language (2006a),2 I urge seven "major conclusions" and entertain seven "tentative proposals" about language and linguistics. John Collins' rich paper, "Invariance as the Mark of the Psychological Reality of Language", is the latest in our long-running exchange about some of these theses (Collins 2006, 2007, 2008a, b; Devitt 2006b, 2008a, b). He takes particular issue here with the two major conclusions that have turned out to be the most controversial. One of these is:

First major conclusion: Linguistics is not part of psychology. (40)

I urge that a grammar is about a nonpsychological realm of linguistic expressions, physical entities forming symbolic or representational systems (ch. 2).3 I later called this "the linguistic conception" of grammars. It stands in contrast to the received Chomskian view that a grammar is about a speaker's linguistic competence and hence about mental states. I later called this "the psychological conception".4 The other conclusion that concerns Collins is:

Third major conclusion: Speakers' linguistic intuitions do not reflect information supplied by the language faculty. They are immediate and fairly unreflective empirical central-processor responses to linguistic phenomena. They are not the main evidence for grammars. (120)

2  All unidentified references to my work in Sect. 19.1 are to this book.
3  See also Devitt 2003. Devitt and Sterelny 1989 is an earlier version of the argument, but contains many errors.
4  I emphasize that

the linguistic conception does not involve the absurd claim that psychological facts have nothing to do with linguistic facts. Some psychological facts cause linguistic facts (23–24), some “respect” them (25), some partly constitute them (39–40, 132–133, 155–157), some provide evidence for them (32–34), and some make them theoretically interesting (30, 134–135). But psychological facts are not the subject matter of grammars. The dispute is not over whether linguistics relates to psychology but over the way it does. (2008a: 207)



Part of Georges Rey's remarkably polemical, "Explanation First! The Priority of Scientific Over 'Commonsense' Metaphysics", is the latest in an even longer-running exchange (Rey 2006a, b, 2008, 2014, forthcoming-a, forthcoming-b; Devitt 2006a, 2008a, 2013a, 2014a, 2020). It has a few pages rejecting my first major conclusion and mentions his earlier rejection of the third. I have already said a great deal about intuitions (2006a, c, d, 2008a, c, 2010a, b, 2012a, 2014a, b, 2015a, 2020) and so will say only a little more here (in Sect. 19.4.3). I shall discuss my first major conclusion, the linguistic conception of grammars, focusing on Collins' lengthy discussion but with passing attention to Rey's much briefer one. I discuss other aspects of Rey's paper in Sects. 19.4.1 and 19.4.2 below.

My linguistic conception has provoked a storm of criticism.5 Yet, as Collins observes (note 2), I have complained that my critics, including Collins himself, have tended not to seriously address my argument for this provocative conclusion. Collins' paper in this volume is certainly not open to that complaint: he addresses my argument in great detail, for which I thank him. His paper is a novel and important contribution to the fundamental issue of what grammars are about.

5  Antony 2008; Collins 2006, 2007, 2008a, b; Longworth 2009; Ludlow 2009; Pietroski 2008; Rey 2006a, b, 2008; Slezak 2007, 2009; Smith 2006.

19.1.1.2  The "Master Argument"

As Collins points out, (what he calls) my "master argument" (17–38) rests on the application of three general distinctions to humans and their language. The distinctions are, briefly: first, between the theory of a competence and the theory of its outputs or inputs; second, between the structure rules governing the outputs and the processing rules governing the exercise of the competence; third, between the "respecting" of structure rules by processing rules and the inclusion of structure rules among processing rules. ("Respecting" is a technical term here: processing rules and competence "respect" the structure rules in that they are apt to produce outputs governed by those structure rules; analogously, apt to process inputs so governed.)

So, using Collins' numbering, premise (1) of my argument is that there are these three general distinctions. Collins accepts this but in Sect. 2.4 he resists premise (2): that these distinctions apply to humans and their languages. So he must resist the result of this application, his (3). And in Sect. 2.5 he resists my final step which he expresses as follows: (4), "A grammar is best interpreted as a theory of the structure rules of linguistic expressions, not of linguistic competence" (16).

The basis for (2) is simple: the distinctions in (1) are quite general, applying to any competence; so they apply to linguistic competence. Why does Collins think not?

(a) Collins doubts that the distinctions really are general. (i) They "appear not to apply to the mammalian visual system" which does not "produce external products" (18). But they do apply to the vision system, nonetheless: that system is a
competence to process certain inputs – and, we should note, so partly is the language system – and the distinctions apply to these as much as to a competence to produce certain outputs (17). (ii) The distinctions also fail to apply to "organs" (19). Perhaps, but the generalization is only about competencies and hence applies to an organ only insofar as it is a competence.

(b) Collins thinks my application of these general distinctions to linguistics rests on dubious "analogical reasoning" (19). True, I do argue for the distinctions using examples of competencies; my favorite is the waggle dance of the bees. But this is not aptly called "analogical reasoning". It is better seen as arguing for a generalization about Fs using examples of Fs as evidence, surely a sound scientific practice.

(c) Collins claims "that the distinctions presuppose an externalism of structured outputs" (22; see also 20). But, strictly speaking, they do not. The distinctions apply even if a competence is never exercised. Thus there would still be a distinction between a theory of competence in a language and a theory of the structure rules of its linguistic outputs even if those outputs were only possible not actual.6

19.1.1.3  Linguistic Realism and Explanation

In any case, I think that Collins' deep objection to the linguistic conception lies elsewhere. Although the application of the three distinctions to human languages does not presuppose an external-to-the-mind linguistic reality, "linguistic realism", the linguistic conception as a whole, does. For, according to that conception, grammars explain the nature of that linguistic reality and one can't explain what doesn't exist. Furthermore, it has to be the case that the theoretical interest of grammars comes primarily from such explanations. Collins argues that neither of these requirements, particularly the second, is met. My view is alleged to rest on the "misunderstanding … that there are linguistic phenomena as such, which a linguistic theory may target directly, with psychological phenomena being targeted only indirectly" (8). Here is a strong statement of his view: "I reject linguistic externalia … simply because they are explanatorily otiose: they neither constitute phenomena to be explained nor explain any phenomena" (17).

Rey's objection to the linguistic conception is similar. In his earlier-mentioned exchange with me, Rey has mounted the most sustained attack on linguistic realism that I know of; and I have responded. In the present paper, he again urges his antirealism. And he insists that grammars explain psychological phenomena: "Chomskyan linguistics is per force about psychology because that's simply where the relevant law-like regularities lie" (312).

6  It is apparent that the competence/performance distinction is central to my application of the three distinctions to linguistics. So it is odd that Collins often writes as if I have overlooked this much-loved distinction (12–13, 19, 38–39).



The center of disagreement is over two related explanatory issues: What is worthy of explanation in linguistics? What do grammars actually explain? The fate of each conception, linguistic and psychological, largely hinges on these questions. Starting in Ignorance (28–35, 134–135, 184–192), and continuing in various responses to critics (2006b, 2007, 2008a, b, c, 2009a, 2013a, b), I have argued for linguistic realism, for the view that grammars (partly) explain that reality, and for the view that it is primarily in virtue of this that grammars are theoretically interesting. I shall not repeat these arguments but the key ideas are as follows. First, we posit this linguistic reality to explain behavior, particularly communicative behavior: a noise (or inscription) is produced because it has certain linguistic properties, including syntactic ones, and it is because it has these properties, and hence is a linguistic expression (symbol), that an audience responds to that behavior as it does. Second, a grammar is a straightforward account of the syntactic properties of that expression and hence a (partial) explanation of the nature of the expression in virtue of which it plays its striking causal role in human lives. Collins’ guiding idea is that “the ontology of a theory is ultimately what is invariant over and essential to the explanations the theory affords” (8). I have no quarrel with that. But I think that what are invariant and essential to the explanations that grammars afford are linguistic expressions, like the sentences, nouns, determiners, etc. on this very page. Collins introduces a notion: Primitive explanation: T primitively explains D iff T explains D-phenomena independently of other theories, and T does not explain anything non-D without being embedded in a larger theory. T is explanatorily invariant over D. (14)

Deploying this notion, I think that grammars primitively explain linguistic expressions not mental states. In contrast, Collins thinks: “A linguistic theory primitively explains the speaker-hearer’s understanding, not properties of the inscriptions themselves” (23); “A grammar is supposed to explain the character of our capacity to produce and consume material as linguistic” (19). But how could it do that? Here are some typical grammatical rules (principles): An anaphor must be bound by another expression in its governing category. A pronoun must not be bound by another expression in its governing category. Accusative case is assigned by a governing verb or preposition. A verb which fails to assign accusative case fails to theta-mark an external argument.

These are about expressions. They are not about mental states: they do not mention understanding or mental capacities (31). Building on this, in correspondence with Rey and with reference to Quine (1961), I offered the following deductive argument for my view of grammars, step (4) above (in Collins' numbering):

(a) Any theory is a theory of x's iff it quantifies over x's and if the singular terms in applications of the theory refer to x's.
(b) A grammar quantifies over nouns, verbs, pronouns, prepositions, anaphors, and the like, and the singular terms in applications of a grammar refer to such items.7 (c) Nouns, verbs, pronouns, prepositions, anaphors, and the like are linguistic expressions/symbols (which are entities produced by minds but external to minds). (d) So, a grammar is a theory of linguistic expressions/symbols.8 Now one might well think that grammatical rules directly explain the natures of expressions but also indirectly explain something mental, linguistic competence. As Collins has remarked earlier, “the theory could tell us about speaker-hearers by way of being true of something else, where what a theory is true of should attract our proper ontological commitment” (10). Why does Collins not think that what could thus be the case is actually the case? In a response to Collins I attempted to show how it is actually the case: “It is because the grammar gives a good explanation of the symbols that speakers produce that it can contribute to the explanation of the cognitive phenomena” (2008a: 215). I illustrated this with an example I took from Collins (slightly modified here to fit Collins’ present discussion). The cognitive phenomenon to be explained is S’s interpretation of the reflexive in Fred’s brother loves himself as referentially dependent on the whole DP rather than on Fred or brother alone. Collins nicely reproduces my answer as follows: (EE) (i) S is competent in English and hence respects its structure rules. (ii) Fred’s brother loves himself is an English sentence in which himself is c-commanded by the whole DP but not by either of its constituents. (iii) It is a rule of English that, in these circumstances, the reflexive must be bound by the whole DP. (iv) Therefore, S, because he respects the rules of English, gives a joint construal to himself and the whole DP. (27)

Notice that the only psychological premise in this explanation is (i), what I call “the minimal claim on the issue of the psychological reality of language” (25). And notice that the explanation is not deep. It does not tell us much about S’s mind because (i) is so minimal: the explanation just tells us that, whatever it may be that constitutes her competence in English causes S to interpret a reflexive according to the rules of English. That is hardly news. I continued on: This cognitive explanation depends on (ii) and (iii) providing a good linguistic explanation of the nature of that English sentence. In general, English speakers construe English

7  For a discussion of some examples in Liliane Haegeman’s textbook (1994), see my 2008b: 250–251. 8  Rey writes (308) as if this is my argument for the linguistic conception: it is only the final move, step (4). Later, he curiously remarks that I don’t “present anything like a serious theory of external ‘Linguistic Reality’ remotely comparable in richness and power to an internalist generative grammar” (311). But, of course, a central point of the linguistic conception is that those very grammars are, precisely, rich and powerful theories of external linguistic reality.



expressions as if they had certain properties because, as the grammar explains, the expressions really have those properties. (2008a: 215)

Finally, the grammar directly explains the linguistic phenomenon of reflexives "independently of other theories" – see (ii) and (iii) – but it does not so explain the psychological phenomenon of S's interpretation of reflexives – see (i). So, the grammar "primitively explains" the linguistic not the cognitive phenomenon.

19.1.1.4  The Paraphrase Response

Return to Collins. Why does he think that "the grammar tells us nothing about the strings, but lots of things about how we interpret them" (23)? Prima facie, this is all wrong, as argument (a) to (d) above shows: as I put it, "the Chomskian research program is revealing a lot about language but, contrary to advertisements, rather little about the place of language in the mind" (16). But is this view of grammars a bit flatfooted? As Quine points out, it is quite respectable for a scientist to paraphrase by withdrawing claim P in favor of claim Q because Q lacks the ontological commitment of P and yet will serve his purposes well enough. The scientist thus "frees himself from ontological commitments of his discourse" (1961: 103). The commitments of P arose from "an avoidable manner of speaking" (13). So, as Rey says,

we shouldn't go merely by what is said in elementary textbooks, but, rather, by what sort of entities are performing the genuine explanatory work the theory is being invoked to perform. (309)

Finally, the grammar directly explains the linguistic phenomenon of reflexives “independently of other theories” – see (ii) and (iii) – but it does not so explain the psychological phenomenon of S’s interpretation of reflexives – see (i). So, the grammar “primitively explains” the linguistic not the cognitive phenomenon. 19.1.1.4  The Paraphrase Response Return to Colllins. Why does he think that “the grammar tells us nothing about the strings, but lots of things about how we interpret them” (23)? Prima facie, this is all wrong, as argument (a) to (d) above shows: as I put it, “the Chomskian research program is revealing a lot about language but, contrary to advertisements, rather little about the place of language in the mind” (16). But is this view of grammars a bit flatfooted? As Quine points out, it is quite respectable for a scientist to paraphrase by withdrawing claim P in favor of claim Q because Q lacks the ontological commitment of P and yet will serve his purposes well enough. The scientist thus “frees himself from ontological commitments of his discourse” (1961: 103). The commitments of P arose from “an avoidable manner of speaking” (13). So, as Rey says, we shouldn’t go merely by what is said in elementary textbooks, but, rather, by what sort of entities are performing the genuine explanatory work the theory is being invoked to perform. (309)

In light of this paraphrase response, enthusiasts for the psychological conception need to do three things. First, they need to acknowledge that grammars should not be taken literally but rather should be taken as standing in for a set of paraphrases that do not talk about expressions (like reflexives and DPs). Second, they should tell us what the paraphrases are or, at least, tell us how they are to be generated from what grammars actually say. Third, they should give examples of how this paraphrased grammar directly explains cognitive phenomena. So far as I know, nobody, from Chomsky down, including the critics of my linguistic conception,9 has met the last two requirements explicitly in print. So we should be grateful to Collins for meeting them in his present paper.10 Given the centrality of the psychological conception to the promotion of Chomskian linguistics, it is striking that there has been

 See note 5.  Later, however, Collins seems to back away from the need to paraphrase. He raises the question: “How would linguistics appear if linguists were not interested in external concreta, but in the abstract mental structures speaker-hearers employ in their interpretation and production of such concreta, inter alia?” He responds “Well, it would appear just as it does, surely” (33). But this is not so: grammars quantify over linguistic expressions and these are not “abstract mental structures”.

9

10

378

M. Devitt

so little sensitivity to these requirements in presentations of grammars. Sensitivity to the ontology of one’s theory is a mark of good science. So how does Collins meet the requirement of the paraphrase response? He rejects my explanation, (EE), and gives an ingenious “internalist” explanation “without appeal to rules of English or external properties”. This explanation is very revealing: (IE) (i) If S is competent in English, S’s interpretation of the marks himself is constrained to be jointly interpreted with the interpretation of other marks occurring in the inscription i.e., the interpretation is ‘reflexive’. (ii) The constraint is such that the interpretation of himself is dependent on the interpretation of a mark categorised by S as a c-commander of the first interpretation. (iii) To determine a c-commander, S must project the lexical interpretations into a hierarchical structure determined by the interpretations mapped onto the given marks. (iv) Based on the projection, the interpretations of Fred or brother do not c-command the reflexive interpretation; only the interpretation of Fred’s brother does. (v) Therefore, the reflexive is jointly construed with the DP interpretation. (27–28)

First, some clarification. (IE)’s talk, in its steps (i) to (iv), of S’s interpretation of marks “does the work” of (EE)’s talk, in its step (ii), of the properties of the external marks themselves. Those steps (i) to (iv) establish that if S is competent with English then S interprets the mark himself as a reflexive that is c-commanded by S’s interpretation of the whole DP mark Fred’s brother but not by S’s interpretation of either of its parts. But what then “does the work” of (EE)’s (iii)? How do we get from (IE)’s first four steps to its conclusion (v)? There is an implicit premise along the following lines: (Ri) When speakers competent with English interpret a mark as a reflexive and c-commanded by a DP mark but not by either of the DP’s constituents, they jointly construe the reflexive with the DP interpretation.

We should start thinking about this proposal by reminding ourselves that Collins needs to show that the grammar “primitively explains the speaker-hearer’s understanding”. So, where is the grammar playing a role in (IE)? Manifestly, the grammar of reflexives as presented in textbooks is not playing a role. For, that bit of the grammar, reflected in (EE)’s (iii), is along the following lines: (Re) In an English sentence in which a reflexive is c-commanded by a DP but not by either of the DP’s constituents, the reflexive is bound by the whole DP.

For Collins to be right that the grammar primitively explains S’s interpretation, the textbook grammar containing (Re) has to be replaced by one containing (Ri): a grammar’s apparent commitment to external linguistic expressions has to be paraphrased away into a commitment to acts of interpretation. Similarly, (EE)’s grammatical claim (ii) about the c-commanding of expressions has to be paraphrased away as a grammatical claim about the interpreting of expressions as c-commanding, based on (IE)’s (i) to (iv). Collins, in effect, acknowledged the need for this sort of paraphrase earlier: When a linguistics text tells us that a sentence is ambiguous, say, the claim is not that the exemplified string has some peculiar hidden structure or some high-level functional prop-

19  Stirring the Possum: Responses to the Bianchi Papers

379

erty or any other property as an external entity. The claim is simply that competent speakerhearers robustly and reliably associate 2+ specific interpretations with tokens of such a string type. (25)

Collins’ explanation (IE) is indeed one “without appeal to rules of English”; instead, it appeals to rules governing interpretations by English speakers. What is Rey’s story? His examples of cognitive phenomena that need to be explained are of speakers finding certain constructions – “WhyNots”, as he neatly names them – “oddly unacceptable” (310). He claims that it is “virtually impossible to imagine a non-­psychological explanation of these phenomena” (310). And so it is, because the phenomena are psychological! But the issue is whether grammars directly explain them. I would argue, along the lines of (EE), that grammars as they stand do not directly explain them, although their accounts of the syntactic structure of linguistic entities contribute to the explanation.11 Rey needs an alternative explanation provided by a suitably paraphrased grammar. His suggestion for paraphrase, illustrated with the principle for negative polarity items, is to replace principles about expressions with principles about (mental) representations of expressions (312–313). This has the great advantage for Rey of removing a grammar’s ontological commitment to linguistic expression and hence to linguistic realism. We should be grateful to Rey, as to Collins, for proposing a paraphrase. But, we should note first, that Rey’s paraphrase is very different from that of Collins in terms of the interpretation of marks, along the lines of (Ri). And, we should note second that it is unclear how Rey’s paraphrase will yield the desired psychological explanations. How, for example, will it explain speakers’ finding WhyNots “oddly unacceptable”? Rey follows up his paraphrases with another suggestion that seems more explanatorily promising: he recommends “simply to pretend there are the [linguistic items] and tree-structures in which they appear, and then treat any rules, principles, constraints, or parameters as true under the pretense” (313). If this were a story of what goes on when speakers interpret marks as linguistic entities then it would be similar to Collins’s story. 19.1.1.5  Criticism of the Paraphrase Response 1. Collins has offered a story of how the grammar directly explains cognitive phenomena. And if the grammar were rewritten along the lines of (Ri), it seems that it would indeed explain psychological phenomena. But note that such explanations can be derived from the unrevised grammar together with the uncontroversial claim that competent speakers “respect” the principles and rules of the grammar; see (EE). The rewriting adds no explanatory power. And the rewriting is pointless if the

 To avoid misunderstanding, perhaps I should emphasize that I grant the Chomskian view that the language has some of its syntactic properties, including perhaps the WhyNots, as a result not of convention but of innate constraints on the sorts of language that humans can learn “naturally” (2006a: 244–272).

11

380

M. Devitt

grammar is indeed a more or less true account of linguistic reality, as the linguistic conception claims. 2. Collins – and perhaps Rey – is proposing, in effect, that a grammar’s claim that external marks have linguistic properties should be paraphrased as the claim that speakers interpret them as if they had those properties. This has a sadly nostalgic air to it. The idea that widely accepted claims about the nature of the external world should be rewritten as claims that it is as if that world has that nature – that it seems to us that it does – has a long grim history that includes the disasters of metaphysical idealism and scientific instrumentalism.12 In my view (1984, 1991), the idea has little to be said for it. 3. If Collins’ and Rey’s approach to linguistic entities were good it would generalize to social entities in general. So if their argument really showed that linguistic entities were “explanatorily otiose” then analogous arguments would show that, for example, votes and coins were “explanatorily otiose”: all we need for explanatory purposes is the fact that people treat certain pieces of paper as if they were votes and certain pieces of metal as if they were coins. But this is not so. Those pieces of paper play their causal roles in our lives in virtue of having the property of being votes; that’s what explains their place in the causal nexus. Similarly, those pieces of metal. And similarly, the marks we treat as linguistic entities. Nothing in Collins’ or Rey’s discussion, it seems to me, gainsays this causal role of linguistic entities. In particular, what explains the fact that everyone in the English-speaking community would interpret the mark himself in Fred’s brother loves himself as a reflexive c-commanded by the mark Fred’s brother? Why this remarkable coincidence of interpretation? The core of the answer is simple: because in English himself is a reflexive c-commanded by that DP13; and the competence of any English speaker enables her to detect this linguistic fact; that’s part of what it is to be competent. There are similar simple answers to analogous questions about other social entities like votes and coins. We suppose that certain parts of the external physical world have certain natures because having those natures explains their causal roles. There is no good reason to resist these answers. Social “as-if-ism” is a mistake. Collins makes an Occamist claim: “The crucial issue here is parsimony” (28). I agree. Linguists, like other scientists, should only posit what they need for explanation. We need linguistic entities for that purpose just as we need votes and coins. There are, external to the mind, entities playing causal roles in virtue of their linguistic properties. Grammars are approximately true theories of some of those properties, the syntactic ones.

 Rey firmly rejects this sort of antirealism (313 n. 26) but one wonders why the external world in general is spared the treatment he gives external languages in particular. 13  Collins begins a paragraph with the charge that my “position is incoherent as it stands” (33). How so? He sums up: “Suffice it to say, nowhere does Devitt attempt any analysis of the properties he mentions to render them as high-level functional properties of external marks” (34). I take over my view of these properties from grammars. What more “analysis” do I need? 12

19  Stirring the Possum: Responses to the Bianchi Papers

381

19.1.2  The Psychological Reality of Language (Camp) After my argument in ch. 2 for the first major conclusion, the linguistic conception of grammars, the rest of Ignorance is largely devoted to this question: In what respect, if any, are the structure rules of a language “psychologically real” in its competent speaker? Given the linguistic conception we know that the speaker’s competence must “respect” those rules: there must be something-we-know-notwhat within the speaker apt to produce outputs governed by those structure rules; analogously, to process inputs. But what is that something? And how much of it is innate? My struggle with these questions yields the remaining five of my seven major conclusions and all seven of my tentative proposals. So my answers are complicated! This reflects how very difficult it is to tell what they should be: “A central theme of the book is that we have nowhere near enough evidence to be confident about many psychological matters relevant to language; we are simply too ignorant” (vi). Hence the tentative nature of many of my proposals. They are put forward not to be believed but as promising hypotheses to be further investigated. In “Priorities and Diversities in Language and Thought”, Elisabeth Camp agrees with me about the linguistic conception: “I share Devitt’s conception of language as a public, social construction” (47). But she disagrees with me over thoughts and the psychological reality of language. My views on these matters begin with “intentional realism” – there really are thoughts (125–127) – and “LET” – “language expresses thought” (127–128). This leads to the view that: linguistic competence is an ability to match sounds and thoughts for meaning. If this is right then it is immediately apparent that any theory of linguistic competence, and of the processes of language comprehension and production, should be heavily influenced by our view of the nature of thoughts. (129)

This leads to: Fourth major conclusion: the psychological reality of language should be investigated from a perspective on thought. (129)

I go on to argue for four priority claims (125–141) which I sum up: Fifth major conclusion: thought has a certain priority to language ontologically, explanatorily, temporally, and in theoretical interest. (141)

On the basis of these conclusions I go on to propose some hypotheses about thoughts and the psychological reality of language. Camp disagrees with both the ontological and explanatory priority claims. I shall consider only the explanatory one.14 She claims that “neither thought nor language can be assigned clear explanatory priority over the other” (47). My argument that

14  I considered an earlier version of her disagreement with my ontological claim in my responses at the 2007 Pacific APA; also, an earlier version of her disagreement over the language faculty.


thought is prior comes from Paul Grice (1989), building on LET. The crux of the argument is as follows: An utterance has a “speaker meaning” reflecting the meaning of the thought that the speaker is expressing, reflecting its “message”…. [S]peaker meaning is explanatorily prior to conventional meaning: the regular use in a community of a certain form with a certain speaker meaning leads, somehow or other, to that form having that meaning literally or conventionally in the language of that community. So the story is: thought meanings explain speaker meanings; and speaker meanings explain conventional meanings. In this way thought is explanatorily prior to language. (132)

So where does Camp disagree? Camp has a nice discussion in Sect. 3.2 in support of the view that “language does much more than express thought” (47). She sees this as a rejection of LET. It is not at all clear to me that it is at odds with LET as I understand it or, more importantly, with the development of LET in the Gricean argument above. In any case, what clearly is central to Camp’s disagreement is her rejection of “the language of thought hypothesis” (LOTH), the view that thoughts involve language-like mental representations. In Sect. 3.1, Camp presents some very interesting and helpful considerations in favor of the view that “humans employ a range of formats for thought” (47). But the way she brings this rejection of LOTH to bear on my discussion reflects serious misunderstandings. Indeed, her attributions of views to me are frequently quite wide of the mark, overlooking many of the complications of my treatment. I have space here to discuss just two. I begin with the following: The first step in Devitt’s broadside for the priority of thought is establishing that thought itself has a sentential structure…. [That thesis] plays a central role in his overall argument for the priority thesis. (47)

But, in fact, LOTH plays no role at all in my case for the explanatory priority of thought; see the Gricean argument. Indeed, LOTH does not enter my discussion until the following chapter. Furthermore, Camp overstates my commitment to LOTH. I do indeed present an argument for LOTH (145–147, 158), influenced by Jerry Fodor (1975), and I do favor it over alternatives, but I never claim to have established it. I regard LOTH as “far from indubitable” (11), a “fairly huge” step (148), “a bold conjecture” (259), “speculative” (260), an example of a “promising” theory (192) to be further investigated. So none of my main “conclusions” depend on it. True, some of my seven other “proposals” do depend on it, and that is sufficient for my labeling them “tentative”. The second misunderstanding I shall discuss concerns, in effect, one of those proposals: First tentative proposal: a language is largely psychologically real in a speaker in that its rules are similar to the structure rules of her thought. (157)

I summed up the argument for this (142–162) as follows: First, we adopted the controversial LOTH. Second, we noted, what seems scarcely deniable, that public language sentences can have syntactic properties that are not explicit.


Third, we pointed to the role of mental sentences in explaining the implicit and explicit syntactic properties of the public sentences that express them. (157–158)

I noted that “the importance of LOTH to this argument is enormous” (158). Camp is way off track about this discussion. She talks of my “ultimate conclusion that language exhibits the particular features it does because thought possesses those features” (47). But this hypothesis of in virtue of what language has its syntax is not an “ultimate conclusion” at all: it is part of an argument for a tentative proposal; see above. Later she alleges that “the next big move in Devitt’s argument for the priority of thought is the claim that language takes the form it does because it expresses thought” (51). But that claim about language has nothing to do with my priority argument, which is in the previous chapter and is part of the background for this discussion of the first tentative proposal. So, Camp has the order of argument backward. We are investigating the psychological reality of language. My first tentative proposal is my best guess if the controversial LOTH is true. But I appreciate that it may well not be and so investigate the consequences of that (195–243). I argue for: Fifth tentative proposal: if LOTH is false, then the rules of a language are not, in a robust way, psychologically real in a speaker. (243)

So, if Camp is right in rejecting LOTH, I would conclude something strongly at odds with the standard Chomskian view of the psychological reality of language. I wonder whether Camp embraces that anti-Chomskian conclusion. I also argue (256–260) for: Sixth tentative proposal: humans are predisposed to learn languages that conform to the rules specified by UG because those rules are, largely if not entirely, innate structure rules of thought.

This is the innateness analogue of my first tentative proposal. And Camp’s critical discussion of it has, I would argue, misunderstandings analogous to her discussion of the first. But if she is right about LOTH, then this proposal goes down with the first. Appreciating that it might, I argue (256–270) for an analogue of the fifth: Seventh tentative proposal: if LOTH is false, then the rules specified by UG are not, in a robust way, innate in a speaker.

So we would again have a conclusion strongly at odds with a standard Chomskian view. I wonder whether Camp embraces that conclusion. In sum, put Camp’s interesting argument against LOTH together with my arguments for the fifth and seventh tentative proposals and we have a case against the received Chomskian view that what grammars and UG describe is psychologically real in speakers.


19.2  Theory of Reference

19.2.1  Reference Borrowing (Raatikainen, Sterelny, Horwich, Recanati)

Saul Kripke made two enormous contributions to the theory of reference, one negative, one positive. The negative one was what I call “the ignorance and error argument” against description theories of reference. This showed that many competent users of many terms, most strikingly proper names, use those terms successfully to refer to entities about which those users are largely ignorant or wrong. So, the reference of those names could not be determined by the descriptions that those competent users associate with the terms as standard description theories require.15 The positive one was that people can “borrow” the reference of many terms in a communication situation, again most strikingly proper names, in an epistemically undemanding way that does not require any capacity to identify the referent. This radical idea is the crux of Kripke’s “better picture” of reference (1980: 94). Kripke’s reference-borrowing idea for names has frequently been misunderstood. Panu Raatikainen points out, in his informed and judicious “Theories of Reference: What Was the Question?”:

Both the initial borrowing and the later use are intentional actions, but [the] … subsequent use need not involve any intention to defer to the earlier borrowing; it need not involve any “backward-looking” intention. (76)

John Searle (1983: 234) and others have wrongly taken Kripke’s reference borrowing to require such a backward-looking intention and have, perhaps for this reason, renamed it “deference”, as I have noted (2006e: 101–102; 2011a: 202–204). What matters for the subsequent use of a name that has been acquired by reference borrowing is that the use be caused by an ability with that name that is, as a matter of fact, grounded in the bearer via that borrowing: the efficacious ability must have the right sort of causal history… [T]he speaker need not know who the lender was or even that she has borrowed the name. There is no need for her to have any semantic thoughts about the name at all. Use of language does not require any thoughts about language. (2015b: 116)

What does the initial borrowing require? I have, as Kim Sterelny nicely puts it in “Michael Devitt, Cultural Evolution and the Division of Linguistic Labour”, a “minimalist position” on this (174). I do not follow Kripke in talking of the borrower intending to use the name with the same reference as the lender (1980: 96). That strikes me as too intellectualized. I recently put it like this in “Should Proper Names Still Seem So Problematic?” (2015b), the most accessible and up-to-date version of my theory:

15  As Raatikainen points out (74), this argument counts against theories that require identification by description or ostension.


Rather we require that the borrower process the input supplied by the situation in whatever way is appropriate for gaining, or reinforcing, an ability to use the name to designate its referent. The borrower must intentionally set in motion this particular sort of mental processing even though largely unaware of its nature and perhaps not conscious of doing so. Similarly, a person walking or talking must intentionally set in motion the sort of mental processing appropriate to that activity. (2015b: 117–118)16

In “Languages and Idiolects”, Paul Horwich criticizes an earlier presentation of this view of reference borrowing (Devitt 2011a: 202): this amounts to saying that only when certain unspecified conditions are satisfied will a speaker’s term refer to what the person she heard it from was referring to. And this is so uninformative that we as well might impose no extra condition at all. (290)

This criticism would be appropriate only if we should expect philosophers to specify these conditions. But we should not17: “Reference borrowing is a species of lexical acquisition or understanding and so we must look to psycholinguistics to throw more light on it” (2015b: 118). And given the current state of our knowledge of language processing, we should not expect psycholinguists to tell us much soon. Sterelny hypothesizes “that rich and cognitively complex subdoxastic capacities are needed” for reference borrowing (185). That seems likely to me. In “Multiple Grounding”, François Recanati (106) quotes my saying of names that “reference borrowing is of the essence of their role” (Devitt 1981a: 45). This essentialist claim is a bit misleading. Searle (1983: 241) has objected to Kripke that it is not essential that people borrow their reference of a name; each person might manage reference on her own. I responded: “No causal theorist has ever denied this. The reference of any name can be borrowed, even if those of a few are not in fact borrowed” (1990: 101–102 n. 9). That is how to understand my essentialist claim. Still, a person’s reference with a name is typically borrowed, at least partially. I pointed out that this borrowing is like pronominal anaphora (1981a: 45). Recanati puts it nicely:

just as the pronoun ‘he’ inherits the reference of its singular antecedent … in the dialogue ‘Have you read Aristotle? Yes, he is a great philosopher’, the name ‘Aristotle’ in [this sentence] inherits its reference from past uses to which that use is causally related. (106)

Recanati thinks that with both the anaphoric link for pronouns and the “quasi-anaphoric” link for names, the link forces coreference between the singular term (name or pronoun) and the singular terms it is linked to in the chain or network. Coreference, in these cases, is more or less mandatory. It is de jure, not de facto. (112)

16  A similar theory of reference borrowing is also appropriate for referential descriptions (Devitt 1974: 191–192; 1981a: 38–39). David Kaplan has recently also urged such a view (2012: 142, 147).
17  Perhaps I should help myself to Horwich’s concluding remarks about his own view:

Moreover, even if the account presented here would be somewhat improved by the elimination of some of its present imprecision … it’s quite possible that, even as it stands, the view is cogent, correct, and an illuminating step in the right direction. (295)


The cautious “more or less” is important here. For, only coreference demanded by the syntax is really mandatory. Thus, consider ‘John thinks that Bruce loves himself’. In any utterance of this sentence, ‘himself’ must corefer with ‘Bruce’. In contrast, it is not the case that in any utterance of ‘Yes, he is a great philosopher’ in response to ‘Have you read Aristotle?’ ‘he’ must corefer with ‘Aristotle’. It probably will, of course, because the speaker will probably be expressing a thought that is grounded via an anaphoric link to the earlier ‘Aristotle’. Still, it might not corefer with ‘Aristotle’ because the speaker might be expressing a thought that is grounded via an anaphoric link to another singular term earlier in the discourse; or she might be expressing a “demonstrative” thought using ‘he’ deictically. Nor is it the case that any utterance of ‘Aristotle’ must corefer with any particular earlier token of ‘Aristotle’, say, one referring to the famous philosopher. Whether the utterance does depends on whether the speaker is expressing a thought that is grounded via a reference borrowing link to that earlier token; the thought expressed might be grounded in Onassis via a different link. Still, it is true that if the speaker’s token is expressing a thought grounded (solely) via an anaphoric or quasi-anaphoric link to another token then the two tokens must corefer. And it is also true that speakers have a social obligation not to mislead. So their utterances should exploit the links that an audience will naturally take them to be exploiting. Relatedly, Recanati notes that “in general, use of the same word by the interlocutors triggers a presupposition of coreference” (112). Nonetheless, that presupposition may be false: the presupposition does not make it the case that there is coreference; a speaker may be carelessly, or even deliberately, misleading. As I have emphasized elsewhere (2013c), what constitutes reference is one thing, what hearers reasonably take the reference to be is another.

Just how extensive is reference borrowing? Kripke took it to be a feature not only of names but of “natural” kind terms. And Hilary Putnam went even further with his “division of linguistic labor” (1973, 1975). Raatikainen believes that “the reference of any sort of term can be borrowed” (78), even ‘bachelor’ (77). I wonder. Consider “artifactual” kind terms. It does seem plausible that the reference of terms like ‘sloop’ and ‘dagger’ can be borrowed. But if so, are they, like proper names, covered by a “pure-causal” theory of borrowing? Sterelny and I pointed out in our textbook that maybe not (1999: 93–101): although you can gain ‘sloop’ without associating it with ‘boat having a single mast with a mainsail and jib’, maybe you cannot gain it without associating it with ‘boat’. And what about the more basic “artifactual” terms ‘boat’ and ‘weapon’? Is it really plausible that they can be borrowed? The first problem with these questions is that we do not have strong intuitions. The second problem is that here, more than anywhere in the theory of reference, we need the support of more than intuitions. We need evidence from usage (2011b, 2012a, b, c, 2015c). We shall consider how to get this in Sect. 19.4.4.


19.2.2  Grounding (Raatikainen, Recanati)

As a result of reference borrowing we can all succeed in designating Aristotle with his name in virtue of a chain of reference borrowings that takes us back to the original users who fixed the name’s reference in Aristotle. But how did the original users do that? I have urged that the reference of a paradigm name like ‘Aristotle’ is fixed in the object, directly or indirectly, by the causal link between a person and that object when it is the focus of a person’s perception. This is what I call a “grounding” (1974: 185–186, 198–200; 1981a: 26–29, 56–64, 133–136; 2015b: 113–115). (We shall consider the reference fixing of non-paradigm “descriptive” names like ‘Jack the Ripper’ in Sect. 19.3.2.) The grounding may be direct, by formal, informal, or implicit dubbing; or it may be indirect, via other terms that are grounded in the referent. I wanted my theory, as Raatikainen aptly remarks, to be “thoroughly causal, also at the stage of introduction of names” (74). According to the theory of grounding, a name is grounded in an object by a certain causal-perceptual link. This sort of situation will typically arise many times in the history of an object leading to my claim that names are typically multiply grounded in their bearers (1974: 198). I used this idea, together with Hartry Field’s (1973) idea of partial reference, to explain cases of reference confusion (1974: 200–203). Thus, applying these ideas to Kripke’s famous raking-the-leaves example (1979a: 14) yields the conclusion that ‘Jones’ has a semantic-referent, Jones, but no determinate speaker-referent; both Jones and Smith are partial speaker-referents (1981b: 512–516; 2015b: 118–121). Later, I applied the ideas to cases of reference change like Gareth Evans’ famous example of ‘Madagascar’ (1981a: 138–152; 2015b: 121–124). In brief, the reference of a name changes from x to y when the pattern of its groundings changes from being in x to being in y. Nonetheless, the mistaken idea that cases of reference change are “decisive against the Causal Theory of Names” (Evans 1973: 195) persists (Searle 1983; Sullivan 2010; Dickie 2011).18 Recanati suggests that the “theory of multiple grounding can … be construed as a response to the challenge raised by Evans” for Kripke’s theory (107–108). It was certainly used as a response but it was not introduced for that purpose; it is a consequence of the causal-perceptual theory of groundings as I emphasized in “Still Problematic”:

Groundings fix designation. From the causal-perceptual account of groundings we get the likelihood of multiple groundings. From multiple groundings we get the possibility of confusion through misidentification. From confusion we get the possibility of designation change through change in the pattern of groundings. (2015b: 123–124)

Recanati demonstrates convincingly that the ideas of multiple grounding and partial reference can be applied to pronouns (109–113). And he rightly thinks that his

18  Multiple grounding is also vital in explaining reference change in “natural kind terms” (1981a: 190–195). So, as Raatikainen points out (80), it can deal with “most of Unger’s much-cited alleged ‘counterexamples’” to the causal theory (Unger 1983).


“mental file framework … is, by and large, compatible with Devitt’s causal account of reference” (118); indeed, I came to explicitly adopt something like it (1989a, 1996).

19.2.3  Kripkean or Donnellanian? (Bianchi)

In an earlier paper with Alessandro Bonanini (2014), Andrea Bianchi argues that the pictures of the reference of proper names given by Donnellan and Kripke are different: where Donnellan thinks that reference is determined by a cognitive state of the speaker not the social mechanism of reference borrowing, Kripke thinks the opposite.19 In “Reference and Causal Chains”, Bianchi alleges initially that my position is unclear in that I confuse Donnellan’s and Kripke’s pictures (127). He goes on to argue that,

in the end, Devitt’s causal theory shouldn’t be seen as a development of Kripke’s chain of communication picture. It should, rather, be considered as a development of the alternative causal picture offered by Donnellan, Donnellan’s historical explanation theory. (129)

I think that Bianchi is wrong about all three of us. Donnellan’s and Kripke’s pictures of reference determination, charitably understood, both include both a cognitive state and reference borrowing. My theory certainly does. So it can be considered a development of both pictures. We should not lose sight of the fact that the most important question about any theory is whether it is true, whatever its origins. Beyond that, credit or blame for a theory should of course be placed where it is due. Now, as I have always made clear (1974: 184 n. 2; 1981a: x–xi), my theory of names was influenced by Kripke’s picture not Donnellan’s20; indeed, my first presentation of the theory, in my dissertation (1972), was completed before I read Donnellan (1970) on names (which was mentioned in a last-minute footnote). I took Kripke’s picture of reference borrowing from his 1967 Harvard lectures and developed it in a naturalistic setting (of which Kripke would not approve, of course); see Sect. 19.2.1. I added to this a causal theory of reference fixing (“grounding”), summarized in Sect. 19.2.2, for which Kripke should not, of course, be blamed.21 However, my main concern here is not to

19  Antonio Capuano (2018) and Julie Wulfemeyer (2017) agree.
20  Donnellan did, however, have a big influence on my view of descriptions. I urged a causal theory of referential descriptions (1972; 1974: 190–6; 1981b) as well as of names and demonstratives, as Bianchi notes (123). This theory rests on a referential/attributive distinction I drew (1981a: x–xiii) under the influence of Donnellan and C.B. Martin.
21  Bianchi and Bonanini take me, in my text with Sterelny (1999: 66–67), to ascribe “a causal theory of reference borrowing (as well as one of reference fixing)” to both Kripke and Donnellan (2014: 195 n. 37; emphasis added). Now what we actually ascribe to them is “the basic idea of causal, or historical, theories of reference” (1999: 66; emphasis added). We then go straight on to present “our theory” which includes a causal theory of both fixing and borrowing. So it is natural to take our ascription to Kripke and Donnellan to be of both. That was misleading: the basic idea we intended to ascribe was the theory of borrowing. Indeed, in earlier works, I ascribe a causal theory of reference borrowing to Kripke as “the central idea” (1972: 55 n. 4; 1974: 184), “the vital feature” (1972: 73), of the causal theory. I saw the causal-perceptual theory of grounding as my contribution (1972: 55 n. 4; 1981a: xi). My take on Kripke’s view of reference fixing is set out in “Still Problematic” (2015b: 113).


argue about what my theory of names developed from but to clarify and defend the theory. A startling dialectic has emerged, largely out of UCLA, that is a helpful background to understanding Bianchi’s claims. The dialectic is on fundamental issues about language. On one side there is Bianchi himself, in the wonderfully provocative “Repetition and Reference” (2015). He rejects the near-universal Gricean view that mental content is explanatorily prior to linguistic meaning: in my opinion things should be the other way round: the intentional properties of (postperceptual) mental states are to be explained in terms of the semantic properties of linguistic expressions. (2015: 96)

This view is reflected in Bianchi’s austere account of reference borrowing. He eschews Kripke’s talk of intentions (Sect. 19.2.1 above), claiming “that this problem can be dealt with by appealing to the notion of repetition” (2015: 100), a notion he takes from David Kaplan (1990). On the other side of the dialectic, there are some philosophers whom Bianchi calls “neo-Donnellanians”, responsible for “a Donnellan Renaissance in the theory of reference” (126). The seminal work in this Renaissance is a volume of papers, Having in Mind, edited by Joseph Almog and Paolo Leonardi (2012). The neo-Donnellanians reject reference borrowing and hold that a person’s use of a name refers to whatever she has in mind in using it. I am dismayed by Bianchi’s conclusion that I belong with them. For reasons implicit in my “What Makes a Property ‘Semantic’?” (2013d), I think both sides of this dialectic are deeply wrong. (Is there something in the UCLA water?) Against Bianchi, a commitment to the Gricean priority runs right through my work; see Sect. 19.1.2 above, for example. The contents of thoughts create, sustain, and change the meanings of linguistic expressions. So far as reference borrowing is concerned, we have already noted that I have what Sterelny calls a “minimalist position” (Sect. 19.2.1). Still, my position is a lot more robust than Bianchi’s, which is far too weak to do the job in my view. Reference borrowing does not require intentions22 but it does require mental processes, probably complicated subdoxastic ones, to be discovered by psycholinguists. I shall say no more about Bianchi’s position. My discussion of Donnellan will give some indication of why I disagree with the neo-Donnellanians.23 For, Bianchi’s view that Donnellan’s picture differs from Kripke’s is based on the surprising interpretation that Donnellan subscribes to the above neo-Donnellanian view. In Bianchi’s present paper, he sums up his earlier argument with Bonanini (2014) as follows:

22  Indeed, in “Three Mistakes About Semantic Intentions” (forthcoming-a) I argue that there is no place at all for talk of intentions in semantics.
23  See also the criticisms in Martí 2015.


while reference borrowing is obviously fundamental for Kripke, there is no place for this alleged phenomenon in Donnellan’s account…. [W]hen we use a proper name we simply do not borrow reference. On the contrary, we always fix it anew. (125)

Bianchi and Bonanini’s case for this is ingenious, and based on a careful study of Donnellan. Nonetheless, I think that it is probably wrong. Consider the name ‘Aristotle’. This name is frequently used by philosophers to refer to a certain famous ancient Greek philosopher, as everyone would agree. Far from fixing its reference anew, these philosophers do something that has been done countless times for centuries. They use the name to refer to that ancient philosopher by participating in a convention of using the name to refer to him; their usage is governed by a linguistic rule that links the name to the philosopher; as linguists say, ‘Aristotle’, with that meaning, is an item in their lexicon. Rules/items like this, established by conventions (apart from some innate syntax; Sect. 19.1.2), constitute our language. And to deny that our uses of expressions are (largely) governed by such rules is, in effect, to deny that we have a language. Bianchi draws attention to this consequence of his interpretation: extreme consequences … can be drawn from Donnellan’s account, for example that, at least from a semantic point of view, there are no languages. (129)

I first thought that no sensible philosopher like Donnellan could embrace anything close to this. But then I started to read the neo-Donnellanians.24 Consider Jessica Pepp, for example. In a very rich paper (2019), she rejects what I have just urged, which she calls “the Conventional Stance”, the thesis that “the reference of proper names as used on particular occasions is determined by linguistic convention” (744).25 After arguing effectively against Bianchi’s repetition account, she asks: “If participating in a convention for using a proper name is not some form of copying or repeating previous uses, what is it?” (749). She is rightly critical of intention-based answers and finds no satisfactory answer. But she does not consider a less intellectualized approach focused on largely non-central causal processes. On my account, for there to be a convention in a community of using an expression with a certain speaker meaning is for members of the community to be disposed to use that expression with that meaning because other members are disposed to do so: there is a certain sort of causal dependency of the disposition of each member of the community on the dispositions of others. And for a person to participate in this convention is for her to use that expression because she has that dependent disposition. (2015b: 119)

24  I am indebted to Bianchi for directing my attention to these works and hence awakening me from my “dogmatic slumbers”.
25  But, a word of caution about the tricky word ‘determined’. In this thesis it should have its causal sense not its epistemic or constitutive sense: a convention causes there to be a linguistic rule. The linguistic rule, whether innate, idiosyncratic, or caused by convention, is what constitutes the reference of the name (2013d: 96–100; forthcoming-b). As a matter of fact, the rules for ‘Aristotle’ and, say, ‘dog’ were caused by convention.


A language is constituted by rules for its expressions. People are speakers of the language in virtue of being disposed to use its expressions in accordance with those rules, dispositions that are mostly established by convention (see Sect. 19.3.5 for more). It is hard to say precisely what is involved in a person exercising such a disposition in making an utterance, but this is no special problem for the theory of names: we have the same problem with exercising the dispositions that constitute any skill: these are tricky problems in psychology (2006a: 210–220; forthcoming-b). Despite what remains unknown, it should go without saying that there is a linguistic rule, established by convention, connecting, for example, ‘Napoleon’ to the famous French general, just as there is one connecting ‘dog’ to certain familiar pets and ‘cat’ to certain others. For, the alternative is that ‘Napoleon’, ‘dog’, and ‘cat’ are not part of our language. And if they are not, what is? We are en route to the batty conclusion that we don’t have a language.26 We should try to avoid attributing such a view to Donnellan. And I think we can, by taking the passages in Donnellan that have led Bianchi and Bonanini to their interpretation to be better explained as reflections of Donnellan’s insensitivity to the Gricean distinction between speaker-reference and semantic-reference. As Bianchi notes, Donnellan does not make the distinction in his discussion of names (129). Consider, for example, the claim by Bianchi and Bonanini that, according to Donnellan,

[(∗)] once someone has an individual in mind, in order to refer to it he or she may in principle use whatever name (or other expression) he or she likes. (2014: 193)

(∗) is true if taken to be about speaker-reference but not about semantic-reference. This is demonstrated by cases where the speaker-referent is not the semantic-referent. Suppose that someone uses ‘Napoleon’ according to the convention and so semantically refers to the great general. Now nearly always, she will have that general in mind – the thought she means to express will be about him – and so will speaker-refer to him. But not always. Consider my example of the cynical journalist observing General Westmoreland at his desk during the Vietnam War and commenting metaphorically: “Napoleon is inventing his body count” (1996: 225–227; 2015b: 126). The journalist has Westmoreland in mind yet the semantic-referent of ‘Napoleon’ is the famous French general and could not be Westmoreland. For:

A linguistic name token can have the conventional meaning, M, as a result of the speaker’s participation in a convention only if there is a convention of using tokens of that physical type to mean M. (1996: 226–227)

So, it is not true that the journalist could “use whatever name (or other expression) he or she likes” to semantically refer to Westmoreland. (1) The novel uses of names yield the first sort of case of a speaker-referent being different from a semantic-referent. (a) Metaphorical uses, like that of the journalist,

26  Surely the neo-Donnellanians do not embrace this conclusion. Still, Almog et al. (2015) do leave me wondering about their answers to these crucial questions: What is a language? What is having one supposed to do for us and some other animals?


provide examples of this novelty. (b) Name introductions through use not dubbing – think of many nicknames and pseudonyms (1974: 199; 1981a: 58; 2015b: 114) – provide others. Novel uses demonstrate the crucial role of the speaker/semantic distinction in the theory of language. I said earlier that “the contents of thoughts create, sustain, and change the meanings of linguistic expressions”. The contents do this in that they constitute speaker-meanings, and regularities in speaker-meanings create, sustain, and change semantic-meanings (2013d: 96–99; forthcoming-b). We need the distinction to explain the origin of linguistic meaning and language. (2) The confused use of a name, as in the raking-the-leaves example (Sect. 19.2.2), provides another sort of difference: whereas ‘Jones’ lacks a determinate speaker-referent, its semantic-referent is Jones and could not be Smith because there is no linguistic rule linking it to Smith. (3) Spoonerisms provide another. Spooner was right when he finished a sermon, “When in my sermon I said ‘Aristotle’ I meant St. Paul”: Aristotle was the semantic-referent, St. Paul, the speaker-referent (1981a: 139–146; 2015b: 126–127).

Bianchi and Bonanini are clearly right to attribute (∗) to Donnellan if (∗) is about speaker-reference but not, I would argue, if it is about semantic-reference. And there is another truth which they do not attribute, but which we should charitably suppose Donnellan would have embraced once sensitive to the speaker/semantic distinction: that in virtue of having borrowed the semantic-reference of the name a person can have an object in mind, can use the name to speaker-refer to the object, and can use the name to semantically-refer to it.27 And that’s how we all use ‘Aristotle’ to semantically-refer to the ancient philosopher. Of course, this truth presupposes a reference-borrowing theory of semantic-reference. And attributing that theory to Donnellan makes his view like Kripke’s, just as people have commonly supposed it is. This attribution is surely preferable to the Bianchi-Bonanini one and, so far as I can see, none of the evidence Bianchi and Bonanini produce for their interpretation shows that Donnellan denies a reference-borrowing theory of semantic-reference. I conclude that Donnellan is not a neo-Donnellanian. I emphasize that my proposal is not that we should take “Donnellan’s considerations on proper names as concerning speaker’s reference rather than semantic reference”, a proposal that Bianchi and Bonanini reject forcefully: “it is indisputable that Donnellan’s main critical target … is a semantic claim about proper names” (2014: 198). Indeed, I think that Donnellan’s remarks, including those suggesting reference borrowing, should mostly be taken to be about semantic-reference. My point is that because he was insensitive to the distinction, his remarks should sometimes be taken to be about speaker-reference.

27  Bianchi and Bonanini do attribute something like this to Donnellan in a footnote (2014: 194–195 n. 36) but, for reasons that escape me, deny that what they attribute is Kripkean reference borrowing. In any case, what I am suggesting that we should attribute to him seems to me to clearly fit their description of the Kripkean view: “any token of a proper name, except for the first, inherits its reference from preceding ones, to which it is historically connected…. [N]o further reference fixing is required” (2014: 195 n. 36).


One reason I’m in favor of bestowing this small bit of charity on Donnellan is that I’d like a similar bit to be bestowed on me. For, my early presentations of the causal theory (1972, 1974), though not my later ones (1981a, 2015b), were also insensitive to the speaker/semantic distinction. It can be seen that I regard the speaker/semantic distinction as absolutely fundamental to semantics. Bianchi (see his forthcoming “Reference and Language”) and the neo-Donnellanians (e.g. Capuano 2018; Pepp 2019; Almog et al. 2015) do not.28 Is this the root of the problem in this dialectic? So I see Donnellan’s picture, charitably interpreted, as like Kripke’s. I think that Bianchi is right to see some similarities between neo-Donnellanianism and my old view, most obviously the causal-perceptual view of reference fixing, but those similarities sit in very different theoretical frameworks.

Turn now to a comparison of my theory with Kripke’s. I summed up my theory as follows in “Still Problematic” (a paper provoked by Bianchi!):

Speaker-Designation: A designational name token speaker-designates an object if and only if all the designating-chains underlying the token are grounded in the object. (2015b: 125)

Conventional-Designation: A designational name token conventionally-designates an object if and only if the speaker, in producing the token, is participating in a convention of speaker-designating that object, and no other object, with name tokens of that type. (2015b: 126)

Now, the social mechanism of reference borrowing plays a key role in my account of designating-chains. Reference-borrowing is often a crucial part of the determination of the speaker-reference of a name token and near enough always a crucial part of the determination of its semantic-reference. So my view seems to have what Bianchi requires for it to be Kripkean. Yet he says that this “final, ‘official’, formulation” of my theory “does not appear” to be Kripkean (131). Why not? Bianchi thinks not because of the role I give to the cognitive state of the speaker in determining reference, an idea he sees as coming from Donnellan not Kripke: there is no mention at all of having-in-mind, or related cognitive states or events, in the second lecture of Naming and Necessity…. [H]aving in mind does not play any deep explanatory role in Kripke’s chain of communication picture. (128)

Now we should start by setting aside my introduction of the cognitive state that I take to be constitutive of reference with talk of “having in mind”. For, as I point

28  The reason Almog et al. give for rejecting this distinction is that a convention cannot produce “a bond” between a name and its bearer (2015: 369–370, 377). Clearly I think it can and does: the “bond” is a conventional linguistic rule linking the name to its bearer, a rule established by the regular use of the name to speaker-refer to the bearer. I wonder if the neo-Donnellanians miss the importance of the speaker/semantic distinction for names because they assimilate names to simple demonstratives like ‘this’. For, it is plausible to say, initially at least (but see later on the need to get away from this vague ordinary talk), that ‘this’ is governed by a conventional linguistic rule that makes it semantically refer to the object that the speaker has in mind (2004: 290–291); so it semantically refers to its speaker-referent. But a name is not a simple demonstrative and its semantic-reference is governed by a different rule.


out, and Bianchi notes (128), this talk is but a “stepping stone” (1974: 202) which does not feature in my causal theory; see summary above. Still, Bianchi’s discussion has made it clear to me that I should have been more careful in my handling of this stepping stone. I should not have claimed, early, to offer a causal “analysis” (1974: 202) of “having in mind” and, late, “an explanation – better, an explication – of this somewhat vague folk talk” in causal terms (2015b: 111). The folk talk arguably covers more than the causal relation we want. So my causal explanation was, in reality, an explanation of a “technical” notion of having-in-mind that is more restrictive than the folk notion.29

The importance of getting away from vague folk talk to a causal explanation is nicely illustrated by Pepp’s response to Genoveva Martí’s criticism of neo-Donnellanianism (2015).30 Pepp’s key thesis is “Cognitive Priority”, “the metaphysical thesis that names in particular uses refer to things in virtue of speakers thinking of those things” (2019: 742), in virtue of their “hav[ing] in mind to refer to” those things (745). Martí offers a counterexample (2015: 80–81) to which Pepp responds. The counterexample is similar to the raking-the-leaves case and so, for convenience, I shall adapt Pepp’s response to the counterexample as if it were to that case. Pepp thinks that even if she allows that “the only referent of the use [of ‘Jones’] that is in accord with linguistic convention” is Jones, “this does not require rejecting Cognitive Priority”. For, the reference to Jones is “still partially in virtue of the utterance being generated by the speaker’s thinking of [Jones]” (754). And she is surely right: in some sense, the speaker is “thinking of”, “has in mind”, Jones31; and these are the

29  Bianchi claims that my “causal explanation of a thought’s aboutness is remindful of the one offered by Donnellan and elaborated by the neo-Donnellanians” (129). Indeed, the neo-Donnellanians’ explanation seems like a version of my old one. But is this explanation Donnellan’s? Some neo-Donnellanians certainly take their theory to be an elaboration of Donnellan’s; see papers in Almog and Leonardi 2012, particularly Almog 2012: 177, 180–182. For them to be right about this, Donnellan must have offered an explanation that includes both a causal theory of aboutness borrowing and a causal-perceptual theory of aboutness fixing. Now I think we should see a causal theory of borrowing as, at least, implicit in Donnellan (and Kripke!), but what about the causal-perceptual theory of fixing? So far as I can see, neither Almog nor anyone else cites any evidence for the attribution of this theory to Donnellan. I found no such explanation in Donnellan’s early papers, as I have noted (1981a: xi, 283–284, n. 12; 2015b: 111 n. 4): there seems to be no sign of a theory that explains fixing in terms of both causality and perception. As Julie Wulfemeyer has remarked recently: “The grounding cognitive relation went largely unexplained by Donnellan” (2017: 2). Finally, we should note Donnellan’s defense of his theory against the charge of being “excessively vague”: he compares his theory with “the causal theory of perception” (1974: 18–19). But, the vagueness of the causal-perceptual theory is not just similar to that of the causal theory of perception, it is the very same. (It is, in effect, the qua-problem for the theory of grounding; see Sect. 19.2.4.) If Donnellan held the causal-perceptual theory of fixing he surely would not have mentioned the causal theory of perception without noting that he held that theory of fixing!
30  Also by Capuano’s distinction between Donnellan’s theory, “DPN”, and Kripke’s, “KPN” (2018: 3).
31  Martí prefers her counterexample to raking-the-leaves because she thinks that her analogue of “the speaker is ‘thinking of’, ‘has in mind’, Jones” is less plausible (2015: 82–83 n. 12; Martí has reversed the roles of Smith and Jones). Probably so, but it is still plausible enough, in my view (Sect. 19.2.2).


expressions that feature in Cognitive Priority. Yet we would surely all be drawn also to the idea that, in some sense, the speaker is “thinking of”, “has in mind”, Smith. In light of this, Pepp’s claim that “the Conventional Stance rejects Cognitive Priority” (744) is not clearly true. For, the Stance’s view that reference to x is “determined by linguistic convention” may be quite consistent with that reference being determined by “thinking of x”; participating in the convention may involve “thinking of x”, in some sense. We need to move beyond this folk talk to identify a real difference between Pepp and the Conventional Stance.32 I move beyond by telling a causal story in terms of two notions, underlying and participating in a convention: Underlying concerns the process of a speaker using the name to express a thought grounded in a certain object. Participating in a convention concerns the process of a speaker using the name because she has a disposition, dependent on the dispositions of others, to use it to express thoughts grounded in a certain object. Typically these two groundings are in the same object (2015b: 126)

Our common use of ‘Aristotle’ to refer to the great philosopher exemplifies what is typical. We express a thought grounded in him by exercising our disposition to express such thoughts using ‘Aristotle’. The great philosopher is the speaker-referent because of his grounding relation to the thought expressed. He is the semantic-referent because of his grounding relation to the disposition exercised in expressing the thought. In atypical cases, the speaker-referent and the semantic-referent are different. (1) (a) In the journalist’s novel metaphorical use of ‘Napoleon’, he expresses a thought grounded in one object, Westmoreland (speaker-referent), by intentionally exercising his disposition to express thoughts grounded in another object, Napoleon (semantic-referent). (b) In novel uses like the introduction-in-use of a nickname, the speaker expresses her thought grounded in an object (speaker-referent) without exercising any disposition to use that name to express thoughts grounded in an object (no semantic-referent). (2) In confused uses like the raking-the-leaves case, the speaker uses ‘Jones’ to express her thought grounded in two objects, Smith and Jones (partial speaker-referents),33 by exercising her disposition to express thoughts grounded in just one, Jones (semantic-referent). (3) Canon Spooner uses ‘Aristotle’ to express a thought grounded in one object, St. Paul (speaker-referent), by accidentally exercising his disposition to express thoughts grounded in another object, Aristotle (semantic-referent). So, the Conventional Stance, as I understand it, is committed to there being reference-determining processes of exercising such name-object dispositions, established by conventions. It is not committed to this process being free of cognitive activity that one might loosely describe as “thinking of” the object.

32  A similar terminological point needs to be made, I argue (2014c), in response to the claim by John Hawthorne and David Manley (2012) that there is no causal “acquaintance” restriction on “reference” or “singular thought” (of the sort that the neo-Donnellanians and I urge).
33  What I call “partial reference” Almog et al. seem to call “ambiguous reference” (2015: 371–373).


I certainly do not claim that the above view is Kripke’s – his picture does not include the causal theory of groundings, for one  – but why is it not Kripkean?34 Bianchi thinks not for two reasons. (1) He seems to think that giving any cognitive state a role in reference determination is unKripkean. But this is surely quite wrong. Notice that Kripke talks of intentions when discussing reference borrowing; see Sect. 19.2.1. And Kripke surely thinks that it is in virtue of a cognitive state of a speaker that her token of ‘Aristotle’ refers to the famous philosopher and not Onassis.35 Indeed, to suppose that it is not partly in virtue of a speaker’s cognitive state that her token has a certain reference is as implausible as supposing that we don’t have a language. So why does Bianchi think that assigning a role to a cognitive state is unKripkean? (2) The answer seems to be that he thinks (125) that to give a cognitive state this determining role is to deny a determining role to reference borrowing (perhaps Pepp would agree). This may be Bianchi’s crucial mistake. For, the cognitive state plays its determining role in virtue of being grounded in an object via reference borrowing; see above. In sum, Donnellan’s and Kripke’s pictures of reference determination, charitably understood, both include both a cognitive state of the speaker and reference borrowing. And I was right to present my theory as a development of these pictures in a naturalistic setting. But if I’m wrong on the interpretative issues, so be it. What we should all care about more is the theory not its origins. I claim that the theory is much more plausible than those of Bianchi or the neo-Donnellanians. Bianchi has identified (131–133) one respect in which my views may seem to be clearly at odds with Kripke’s. As we have seen, I follow Grice in holding that conventional semantic-reference is explained in terms of speaker-reference. In contrast, consider this passage from Kripke, quoted by Bianchi (131): we may tentatively define the speaker’s referent of a designator to be that object which the speaker wishes to talk about, on a given occasion, and believes fulfils the conditions for being the semantic referent of the designator. (1979a: 15)

So, as Bianchi shows, it seems that Kripke must reject the Gricean priority of speaker-reference. But does he really? I thought not and so I asked him. He said that he does not. If not, what could be going on in this quote? Kripke is offering an explanation of the speaker-referent of a designator that presumes that the designator already has a semantic-referent. Similarly, an explanation of the speaker-meaning of a metaphorical utterance would presume that the utterance already had a semantic-meaning. These explanations are no challenge to the Gricean idea that an utterance gets its semantic-meaning from speaker-meanings in the first place.

34  Evans (1982: 69–79) labeled a causal theory of thought aboutness like mine “the Photograph Model” and claimed that it should not be attributed to Kripke. He rejects it in favor of “Russell’s Principle” according to which to think about an object, “one must know which object is in question” (65). In response, I argue (1985: 225–227) that Kripke’s ignorance and error arguments count not only against the description theory of names but also against Russell’s Principle.
35  On this see my 2015b: 110–111.


Finally, Bianchi mentions Kaplan’s distinction (1989) between subjectivist and consumerist semantics. My theory of names is a bit of both. Insofar as the semantic reference of a person’s use of a proper name is determined solely by her having borrowed the reference  – for example, any contemporary person’s use of ‘Aristotle’ referring to the famous philosopher  – then the theory is consumerist: the name comes to her “prepackaged” with a semantic value.36 Insofar as the semantic reference of a person’s use is partly determined by her own groundings of the name37 – for example, any associate of Aristotle’s use of ‘Aristotle’  – then the theory is subjectivist: she is assigning a semantic value to it.

19.2.4  The Qua-Problem for Proper Names (Raatikainen, Reimer)

I wanted my theory of grounding to be “thoroughly causal” but there was a problem, “the qua-problem”. The problem for names arises in two ways, as Sterelny and I (1999) pointed out.

(1) In virtue of what is a name grounded in, say, my late cat Nana rather than a spatial or temporal part of her? For, whenever a grounder is in causal-perceptual contact with the cat, she is in perceptual contact with a spatial and temporal part of the cat. (2) Suppose that the would-be grounder is very wrong about what he is perceiving. It is not a cat but a mongoose, a robot, a bush, a shadow, or an illusion. At some point in this sequence, the grounder’s error becomes so great that the attempted grounding fails, and hence uses of the name arising out of the attempt fail of reference. Yet there will always be some cause of the perceptual experience. In virtue of what is the name not grounded in that cause? (80)

Clearly there must be something about the grounder’s mental state that requires the grounding to be in a cat, or something appropriately cat-like. But what is that something? I have struggled mightily with this problem, even hypothesizing that grounders must associate a “categorial” description, as Raatikainen notes (74). With this hypothesis I moved from a causal to a “descriptive-causal” theory of grounding (1981a: 61–64; Devitt and Sterelny 1999: 79–80). After presenting problem (1), Sterelny and I insisted that it was not to be airily dismissed on the ground that we just do designate “whole objects”, for we do not always do so, as we illustrated; “there must be something about our practice which makes it the case that our names designate whole objects” (1999: 79).

36  However, as I (2015b: 116 n. 17) note, Kaplan’s consumerism involves backward-looking “deference”. I want no part of that (Sect. 19.2.1).
37  The semantic-reference of her use is almost certainly only partly thus determined because even though she has grounded the name herself, every time she hears and correctly understands another’s use, she “borrows” it, thus reinforcing her ability to use it with that semantic-reference (2015b: 117).


Marga Reimer has a different view. In “The Qua-Problem for Names (Dismissed)” she describes the qua-problem for names as a “pseudo problem [that] is easily dissolved” (144):

This problem can arguably be dismissed (pace Devitt and Sterelny) by appeal to a psychologically motivated default practice of naming whole objects rather than parts thereof…. The existence of a default practice of naming (only) whole objects would be easy enough to explain: such a practice has a clear psychological motivation. As natural language speakers, we have a strong practical interest in thinking about, talking about and (in some cases) beckoning or otherwise addressing, whole objects (notably “persons, animals, places, or things”) that are especially significant to us. (142)

Reimer is persuasive that there is indeed “a psychologically motivated default practice” of naming such “whole objects” as persons, animals, and places that she mentions. This explains how we come to be naming them. But this is beside the point. For, the point is not about the cause of us naming them but about what it is to name them: “In virtue of what is what we do the practice of naming persons, animals, and places? What makes it the case that our practice is that? What constitutes its being the case that we mostly name those things?” Rather than naming Nana, we could have named her tail. And, as a matter of fact, people did name Sydney, which is a part of New South Wales, which is a part of Australia, … There must be processes that constitute our naming Nana rather than a part of her and Sydney rather than what it is a part of (or Balmain which is part of it). Reimer accepts, of course, that we often do name the parts of objects but she supposes that we do so only by pretending that they are “whole objects”: they are “conceptualized as wholes by would-be grounders”:

Perhaps we can then regard all groundings of names as involving the practice of naming only whole objects, whether real or “pretend.” In that case, such a practice would no longer really be a “default” practice; it would be an impossible to fault practice. (143)

But to conceptualize something – Sydney, for example – as a "whole object" seems to amount to nothing more than treating it as nameable! And anything is nameable. Perhaps our use of 'whole object', initially in scare quotes, has misled here. For, every part of a "whole object", whether Australia's Sydney or Nana's tail, is itself also a "whole object" that can be named. Reimer has not told us in virtue of what 'Sydney' names the city not Australia, or 'Nana' the cat not her tail. In the course of making this point in communication with Reimer, I suggested that we imagine different beings in a different environment who might have very different naming practices. Reimer addresses this (147–148) but still, it seems to me, misses the point. These beings might be motivated to name time-slices of entities like Nana. Suppose that they did. Then there would have to be some processes in them, different from those in us, in virtue of which they are naming time-slices where we are naming cats. So much for problem (1).

What about problem (2)? A would-be grounder who believes she is perceiving a cat might be wrong: perhaps the cause of her experience is a mongoose, a robot, a bush, a shadow, or an illusion. How do we account for grounding failures in some of these circumstances? In virtue of what is the name not grounded in the cause of her experience whatever it may be? Reimer has a bold response: "I don't think that the question … is a worrisome one because the grounding will arguably succeed rather than fail in [these] cases" (145). She then argues that it does succeed in those cases. This can't be right. Let 'N' be a schematic letter for a name. Now some substitution instances of the schema 'N does not exist' are true. The custom is to call names in those instances "empty names". A theory of reference needs to explain in virtue of what these names are empty and these instances of the schema are true. Set aside non-paradigm "descriptive" names like 'Jack the Ripper' that have their reference fixed by a description. It is easy to explain the emptiness of one of those, as Reimer demonstrates in her discussion of 'Vulcan' (149–150): nothing fits the description. But what about the paradigm names that, according to our theory, have their reference fixed by a causal grounding? The emptiness has to arise from a failure of such a grounding. So what does the failure consist in? That's the qua-problem (2). Hallucinations provide the clearest case of emptiness. Reimer's discussion of one is puzzling:

It's been one of those days and the speaker decides to help herself to a rather hefty portion of her husband's bourbon … and as a result has some extraordinarily vivid … dreams involving … a very large orange tabby cat…. It reminds her so much of her childhood pet Mieze that she decides to name the hallucinatory cat "Mieze." … [Later] she says …, "I wonder if I will see Mieze again tonight." A couple of weeks later … she says …, "I guess Mieze was just an incredibly vivid hallucination." (146–147)

Indeed it was. And because it was we can say truly that Mieze does not exist and 'Mieze' is empty. And the explanation that we must give of this is that the grounding failed. So Reimer's own discussion raises the qua-problem: In virtue of what does 'Mieze' not refer to whatever it was that caused her attempted grounding (the bourbon?)? Why does Reimer think otherwise? She emphasizes that the speaker has successfully introduced the name 'Mieze' and may continue using it and passing it on to others. Empty names can indeed be introduced and can have a long life. That poses a well-known problem.38 But it is a different problem. Successfully introducing a name is much less demanding than successfully grounding it, as 'Mieze' illustrates. The causal theory needs an account of what makes the grounding of 'Mieze' unsuccessful. That's the qua-problem for names.

So what is the solution? As already noted, I moved to the idea that grounders must associate a "categorial" description. It's an ugly idea and I was reluctant to embrace it. One of its problems is that it lacks "psychological plausibility", as Reimer points out (144–145). I now think, as she notes, that this view of a grounding is "far too intellectualized". I have taken to wondering whether this problem, like the reference-borrowing one above, "is more for psychology than philosophy" (2015b: 115 n. 15). Insofar as philosophy has anything to contribute to the solution of the qua-problem, I think that we should look to teleosemantic explanations of the nonconceptual content of perceptions (Devitt and Sterelny 1999: 162).39 The basic idea is that the grounding of 'Nana' would be in a cat not a spatiotemporal part of her in virtue of the grounding involving a perception with the biological function of representing such an object not a spatiotemporal part of it.

38  I have attempted a solution (1981a: ch. 6).

19.2.5  Causal Descriptivism (Raatikainen, Jackson, Sterelny)

Raatikainen notes that Kripke did not present his ignorance and error argument as counting against all possible description theories. Raatikainen gives a critical summary of some that have emerged since Kripke.

One that has been surprisingly popular is "causal descriptivism" favored by David Lewis (1984), Fred Kroon (1987), and Frank Jackson (1998) …. [S]peakers associate with a name 'N' a description of the form 'The entity standing in relation R to my current use of the name "N"', and this description determines the reference of "N". The relation R here is drawn from the rival non-descriptivist (e.g. the causal-historical chain picture) theory of reference. (90)40
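Schematically, and only as a rough sketch (the 'Ref' notation and the iota operator are my shorthand, not Raatikainen's), the proposal is that, for a speaker S:

\[
\mathrm{Ref}_S(\text{`}N\text{'}) \;=\; (\iota x)\, R\bigl(x,\ S\text{'s current use of `}N\text{'}\bigr),
\]

where R is whatever reference-determining relation the rival causal-historical theory specifies.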

The theory is, as Raatikainen says, "ingenious". But it is also "fishy" (Devitt and Sterelny 1999: 61), not least because it is parasitic on the causal-historical theory. In "Language from a Naturalistic Perspective", Jackson asks "what's the important difference between the two [theories]?" and puts his finger straight on an important one:

Causal descriptivism is committed to the relevant causal and naming facts being pretty much common knowledge, whereas the causal theory proper holds that the naming and causal facts that secure the reference of a name are not common knowledge. (158)

For competent speakers to know "the relevant causal and naming facts" is for them to "know the right 'R'" to substitute in the above schematic form of causal descriptivism (Devitt and Sterelny 1999: 61).41 Raatikainen objects, saying that this is psychologically implausible:

it requires that every competent speaker must possess … the absolutely correct and complete theory of reference, and it is doubtful that anyone possesses such a theory. (90–91)

This is not quite right (nor is Devitt and Sterelny 1999: 61). What causal descriptivism requires is that every competent speaker associates 'N' with the above "R-description" and hence, in effect, knows that 'N' stands in relation R to the referent, where R is the relation specified by the causal-historical theory; this is "the relevant causal and naming fact". So causal descriptivism does not require that speakers know that very theory, know that it is in virtue of such associations/knowledge that 'N' refers to its referent. Rather it requires that speakers have the knowledge on which the theory is built. Let us call this required knowledge, italicized above, "the knowledge-base" of causal descriptivism. And it is indeed "psychologically implausible" that every competent user of 'N' has that knowledge-base. Although causal descriptivism was designed to avoid ignorance and error problems it has raised a new one.

39  On this, see an important recent book, Neander 2017.
40  I first heard of causal descriptivism from Robert Nozick at Harvard in 1970 in response to my graduate student talk proposing a causal theory. Kripke also first heard the theory from Nozick; see 1980: 88 n. 38.
41  Note that substituting 'being the cause' "is only the very vague and inadequate beginning of what is required" (Devitt and Sterelny 1999: 61).

19.2.5.1  Jackson

Jackson is unimpressed by this objection:

This is, however, hard to believe. People write books and produce television shows on whether or not Helen of Troy or Jericho existed, and whether or not Shakespeare wrote the famous plays. These books and television shows report the results of historical research directed at the causal origins of the names in question and whether or not the origins are of the right kind to allow us to affirm that Jericho and Helen of Troy really existed, and that Shakespeare wrote, for example, Macbeth. The research that goes into these shows and books is carried out by historians, not philosophers of language. The shows and books get reviewed and assessed by people who are not philosophers, and the reviews are read and understood by the educated public at large. How could all this be possible if which facts are relevant to answering the questions were known only to a select group of philosophers? (158)

Jackson is surely right that many historians and other educated people are quite good at identifying named entities like Helen of Troy. This is not to say, of course, that any competent user of any name knows how to identify its bearer. Nonetheless, Jackson does seem to think that she does:

If you say enough about any particular possible world, speakers can say what, if anything, words like 'water', 'London', 'quark', and so on refer to in that possible world. (1998: 212)

And Jackson’s line of defense of causal descriptivism requires at least that. Yet note that knowing how to identify an entitity requires knowing not only what evidence counts, which is hard enough, but also knowing when one has enough evidence, when you have said enough “about any particular possible world”. It strikes me as a romantically optimistic view of the epistemic capacities of our species to suppose that we all have this ability. But suppose, for the sake of argument, that we all did. The defense still has an interesting other problem that can be most persuasively presented by appealing to cognitive psychology. There is a familiar folk distinction between knowledge-that and knowledge-­how. Psychologists and cognitive ethologists take this distinction to be much the same as their own very important one between “declarative” and “procedural” knowledge. Thus, John Anderson, a leading cognitive psychologist, writes: The distinction between knowing that and knowing how is fundamental to modern cognitive psychology. In the former, what is known is called declarative knowledge; in the latter, what is known is called procedural knowledge. (1980: 223)


Psychologists describe the distinction, rather inadequately, along the following lines: where declarative knowledge is explicit, accessible to consciousness, and conceptual, procedural knowledge is implicit, inaccessible to consciousness, and subconceptual. Finally, skills are procedural knowledge not declarative knowledge.42 Now knowing how to identify an entity x named 'N' is a cognitive skill, a piece of procedural knowledge. It consists in knowing how to discover, and make effective use of, alleged information about x, information about the properties of x that will be presented in claims of the form, 'N is F'; and it consists, as just noted, in knowing when enough is enough. But this procedural knowledge is not what Jackson's defense requires. It requires common knowledge of "the relevant causal and naming facts"; of "the knowledge-base" of causal descriptivism; of the fact that 'N' stands in relation R to the referent, where R is the relation specified by the causal-historical theory. This knowledge-base is, of course, a paradigm of declarative knowledge. It is quite different from the procedural knowledge that Jackson supposes we all have, both intuitively and according to the consensus in psychology: procedural knowledge is not declarative knowledge.

But mightn't procedural knowledge cause declarative knowledge? Thus, one might observe the exercise of one's skill at, say, riding a bicycle, typing, or thinking, and abstract the principles of a good performance. Similarly, one might observe the exercise of one's skill at identifying x, the referent of 'N', and abstract the principles of reference. And one surely might but, importantly, one typically doesn't. Indeed, most of us typically can't; we are just not clever enough. And even where we are clever enough to discover the nature of a skill, that does not constitute the skill. Our epistemic success is one thing, the metaphysics, another. In general, knowledge-how is very different from knowledge-that.43 In particular, knowing how to identify an entity is very different from having the knowledge-base required by the causal descriptivist theory of the entity's name. Even if all competent users of a name 'N' knew how to identify its bearer by using descriptions of the bearer, which they surely don't, we have no reason to believe that the R-description, 'the entity standing in relation R to my current use of "N"', with the "correct" substitution for 'R', is even among the descriptions used by anyone for this purpose. Raatikainen rightly says that it is "doubtful that anyone possesses" the right theory of reference, even philosophers (91); it's just too difficult.44 And it is really "hard to believe" that the folk have the knowledge-base of causal descriptivism.

42  My claims about cognitive psychology draw on my examination of this and its relation to language in Ignorance of Language (2006a: 210–220); those about cognitive ethology, on "Methodology and the Nature of Knowing How" (2011c: 213–215).
43  Stanley and Williamson (2001) disagree. I have argued that they are wrong (2011c).
44  Jackson says that causal theorists "must of course allow that some people are aware of the relevant facts – namely, they themselves" (158). Well, we do keep our spirits up by thinking that we are aware of some of the relevant facts but it would be wishful thinking to suppose that we were aware of them all; note the acknowledged incompleteness of the theories of reference borrowing and grounding discussed in Sects. 19.2.1, 19.2.2 and 19.2.4.


Mean aside. If that knowledge-base was really "common knowledge", it is curious that there was hardly even a glimmer of causal descriptivism in the theories proposed in the pre-Kripkean era. Jackson thinks that not only are "the relevant facts (about naming procedures and information preserving causal chains containing names) … widely known, but [they] had better be widely known. Only that way can proper names play their important informational role" (159). But all that is required for 'Hillary' to play its role of conveying information about Hillary Clinton is that people participate in the convention of using 'Hillary' to convey thoughts about Hillary (Sect. 19.1.2). This again is a piece of knowledge-how. It does not require people to have causal descriptivism's knowledge-base for 'Hillary'. Indeed, in my view, it does not require them to have any semantic propositional knowledge at all (2006a).

Jackson has an over-intellectualized view of linguistic competence. This underlies another step in his defense of causal descriptivism. According to that theory, the reference-determining description that users of 'N' associate with it, its R-description, is metalinguistic; see above. Jackson defends this metalinguistic commitment as follows. Suppose that you wanted to discover whether Mary Smith works in this building. Jackson claims that it is

impossible to show that "Mary Smith works in this building" is true without having recourse to information about words, and in particular the words "Mary Smith". You need to find them on an office door, or in the phone directory of the building, or to note the response of someone who works in the building to a question containing "Mary Smith" – as it might be, the question "Does Mary Smith work in this building?". (160)

This seems quite wrong to me. Thus, to learn from that worker's testimony, you simply need to understand what she says. This requires a skill at using words, a piece of knowledge-how. There is no reason to think that it requires any "information about words", a piece of knowledge-that; or so I have argued (2006a). In sum, I think that Jackson has not rebutted the ignorance and error objection to causal descriptivism. Competent speakers do not all know how to identify the bearers of the names they use. Even if they did they would not thereby have the knowledge-base required by causal descriptivism.

Sterelny and I had another, deeper, objection (1999: 61). Raatikainen puts it nicely:

[causal descriptivism] is parasitic and redundant: if it is true, it admits that a name stands in a causal-historical relationship, R, to its bearer; R alone is sufficient to explain reference, and further description involving R is redundant. (91)45

The point is that, according to causal descriptivism, reference is determined by the speakers' association of the R-description with 'N' and that association must be redundant. We can come at this redundancy from the other direction. Suppose causal theorists finally discover that 'N' refers to x in virtue of x standing in a certain causal relation to our use of 'N'. Obviously those theorists will describe that relation in presenting their theory! Hence they will associate that description with 'N'. But it won't be their association of the description, let alone ordinary speakers' association of it, that determines reference: reference is determined by the relation described. Jackson does not address this objection and Sterelny, in "a shocking betrayal", seems to have forgotten it in his present flirtation with Jacksonian descriptivism. For Sterelny, in "Michael Devitt, Cultural Evolution and the Division of Linguistic Labour", claims to have "bought into something like Jackson's causal descriptivism" (175). I am hoping that what follows will lead him to repent.

45  Note that this is not a model for a suspiciously quick refutation of any description theory. Causal descriptivism's redundancy comes from its peculiar feature of having a reference-determining description that specifies the reference-determining relation.

19.2.5.2  Sterelny

Sterelny addresses two questions. The first concerns what we "need to know or do to be able to successfully launch a new term into the linguistic community" (174). In effect, this is the issue of reference fixing (see Sects. 19.2.2, 19.2.3 and 19.2.4). The second concerns "how much information needs to flow down the links for a novice to acquire the capacity to use the term" (174). In effect, this is the issue of reference borrowing (Sects. 19.2.1, 19.2.3). Sterelny takes me to have "staked out a minimalist position with regard to both questions". In contrast, he now leans toward the alternative view "that both launching a term, and acquiring through conversational interaction with others using it, is more informationally demanding" (174–175). And his concern is not just with proper names but also with kind terms, particularly "natural kind" terms.

How minimal is the position that I have "staked out"? Consider names first. When it comes to their reference borrowing, my position is indeed minimal in its demands on "the central processor". However, I anticipate that psycholinguists will reveal that lots of subdoxastic processing is necessary; see the discussion in Sect. 19.2.1. When it comes to reference fixing, as noted in Sect. 19.2.4, I once embraced, reluctantly, an informational demand. So I was not then a minimalist. However, as also noted, I have backed away from that demand. What about "natural kind" terms? My position on the two questions is much the same for them as for names (Devitt and Sterelny 1999: 88–93). When it comes to other terms however, I, in the company of Sterelny himself, have attempted to describe the logical space for positions but staked out none (1999: 93–101). I think we will need to start testing usage to make progress here (Sect. 19.4.4). So, it would be fair to say that minimalism is my default position. Still, aside from proper names and (some?) "natural kind" terms, I think it is an open question whether information about the referent must go into fixing or borrowing. But I do insist that nothing like causal descriptivism's knowledge-base, nor any other semantic information, is constitutive of the reference fixing or borrowing of any term. Sterelny's interesting ideas about the evolution of language here and elsewhere (2016) do not support anything like Jacksonian descriptivism.


Sterelny does seem to think otherwise: "In using a referential term, speakers are aware of, and buy into, the causal networks that link their uses of that term to its bearer" (175). Why does he think so?

[T]he flexibility of semantic competence was hard to reconcile with Michael's automatic, quasi-reflexlike picture of the introduction and transmission of a term from one speaker to the next…. I can coin a new name – for the recently arrived dog – and smoothly integrate it into my language. If it is apt, it is likely to catch on and become part of the local dialect. This strongly suggests that agents represent the referential, semantic properties of words – we notice them …. Moreover, this is a central aspect of their semantic competence, not a peripheral one…. Our capacity to use names … depends not just on the existence of these sociolinguistic networks. It depends as well on our recognition of their existence, and on our intention to use a name in a further extension of the network. (175)

It is not clear to me why Sterelny thinks that the "automatic, quasi-reflexlike picture", or indeed any picture, of linguistic competence is "hard to reconcile" with the flexibility in reference fixing that he rightly emphasizes: that "we can expand our lexicon, at will, at need, on the fly" (175). But suppose the reconciliation is indeed impossible. Why would this "strongly suggest" that the picture has to be enriched by semantic information rather than the sort of information about the referent that I have just agreed may well be required for some terms? Set that aside. I want to focus on two further matters that, on the one hand, accommodate Sterelny's idea that language acquisition is informationally demanding but, on the other hand, do not go down the Jacksonian path. The first matter is on reference fixing, the second, on reference borrowing.

(I) I agree that the capacity to coin new words that Sterelny describes is a "central aspect" of our (innate) competence to learn natural languages. So this capacity partly causes lexical expansion, partly causes our competence in any language we do learn. Further, this capacity surely requires a lot of cognitive sophistication, as Sterelny insists. In particular, it surely requires lots of thoughts about other minds. I certainly think that we should entertain, though be reluctant to embrace, the idea that the capacity requires thinking some semantic thoughts.46 Indeed, I once made a "tentative" proposal of this sort: "Some sort of semantic theorizing is required to gain linguistic competence, including some elementary theorizing about the point of language" (1981a: 109). It seems to me that Sterelny's idea that language acquisition is informationally demanding is most plausible if construed as a proposal of that sort: semantic thoughts are part of the cause of our capacity to coin new words. I have two points to make about this proposal.

First, this causal connection does not make the semantic thoughts constitutive of that competence in the coined words (1981a: 109). So such proposals are not even in the same game as causal descriptivism. A look at psychology helps again. Psychology distinguishes two sorts of learning, explicit learning, which is a "top-down" process, and implicit learning, which is a bottom-up process. Set aside implicit learning for a moment. Explicit learning starts from declarative knowledge. Consider learning to change gears in a stick-shift car by starting with instructions like: "First, take your foot off the accelerator, then disengage the clutch." That's one example of explicit learning. The proposal that semantic thoughts partly cause our capacity to coin new words is another example. And the important point is that in explicit learning, declarative knowledge, like the instructions and the semantic thoughts, plays a role in bringing about the skill, but does not constitute it and may even disappear after the skill is learnt. The psychological consensus is that the skill is procedural knowledge, knowledge-how.

Second, supposing that we must have some semantic thoughts for language acquisition may have some plausibility, but supposing that we must have something like the knowledge-base for causal descriptivism, as Sterelny is suggesting, strikes me as very implausible.

My first point rests on a crucial distinction between what causes competence and what constitutes it. Sterelny's discussion may support the idea that some true semantic thoughts are included in the cause of competence. I'm dubious, but if the discussion does support this, then causal descriptivism may not be as open to the ignorance and error objection as we once claimed. But the discussion does not support the idea that true semantic thoughts, let alone the knowledge-base of causal descriptivism, constitute the competence. Of course, as a causal theorist, I must allow that causes of some sort could be constitutive, but I don't think that we have been given any reason to believe that semantic thoughts are constitutive of linguistic competence. My second point is that, even if semantic thoughts are part of the cause of the competence, it is very implausible that the knowledge-base is. So, causal descriptivism still has an ignorance and error problem.

(II) That point is about reference fixing. Sterelny thinks that causal descriptivism may get support from a consideration of reference borrowing:

Perhaps information about the cognitive demands of cumulative cultural learning in general can give us some insight into the particular case of what an agent must understand in order to be part of reference borrowing networks, to be part of the division of linguistic labour. (179–180)

46  The case of the prairie dogs alone should give pause. Prairie dogs have a system of barks that convey information about which sort of predator is threatening. When an experimenter used a plywood model to simulate a new sort of predator, the prairie dogs introduced a new bark (Slobodchikoff 2002).

Getting this support is difficult because "there is no received view on the cognitive prerequisites of cumulative cultural learning in general" (180). Sterelny's interesting discussion explores two schools of thought. First, there is the "relatively undemanding 'Californian' view". This view of cultural learning seems to fit the psychologists' idea of implicit learning, a "bottom-up" learning that takes place "largely without awareness of either the process or the products of learning" (Reber 2003: 486). Clearly, there is no support for Jackson there, but Sterelny does see some in the alternative "Parisian" view which

sees cultural learning as more cognitively demanding …. [T]he process is highly selective, and what agents do with the representation that they do pick up depends on what they already know or believe; their intentions; and on how their mind is tuned. (181)


Sterelny thinks this view "[o]bviously ... resonates with Jackson's reflective conception of the sociolinguistic networks that sustain the division of linguistic labour" (181). I don't think so. The "Parisian" view seems to fit the psychologists' idea of explicit learning, a "top-down" process that I have described above. And the important point now, as in (I) above, is that explicit learning and the Parisian view are theories of how we achieve a skill, about its cause; they are not theories of what constitutes that skill. So, no resonation with Jackson. Sterelny argues that reference borrowing is at least partly Parisian:

The flexibility of use that I took to support Jackson's view that the division of linguistic labour depended on downstream users recognising the networks of which they are a part may be a relatively recent feature of language, perhaps most marked in large world languages. That said, on this view the languages (or protolanguages) without a word for word, or other ways of explicitly thinking about words and reference, will also be languages without much (or with very minimal) division of linguistic labour. (184)

So for speakers to borrow reference they must have semantic thoughts about words and reference. This is an interesting idea and, taken as a view of the cause of the borrowing, it may be right. Still, I think there are reasons to doubt it. When I looked hard at the matter, admittedly a decade ago, I concluded that "language learning seems to be a paradigm of implicit learning" (2006a: 219). I'm still inclined to that conclusion. If it is right, declarative knowledge, whether semantic or not, plays no role. Still, we should all agree, it is early days in understanding these processes. But, to repeat, even if language learning is explicit, with declarative knowledge playing a causal role, that does not support Jackson. For, causal descriptivism is a theory of what constitutes reference. So Sterelny must have in mind that these semantic thoughts are constitutive. Yet, in the context of discussing reference borrowing in an earlier paper, he remarks:

While it is hard to specify exactly what users understand about their language in using it, and, certainly, they have nothing like an explicit commitment to any semantic theory, referential competence is not reflexlike, or completely isolated from reflective understanding. (2016: 273)

It is not clear to me that this cautious statement is at odds with my "minimalist" view, described in Sect. 19.2.1, and hence is a very long way from causal descriptivism. In sum, Sterelny leans toward an informationally demanding view of language learning and has some interesting thoughts in support. I am dubious that this learning demands semantic knowledge let alone anything close to the knowledge-base of causal descriptivism. But that is all about the cause of a person's competence with a name. My main point is that there is no case here for an informationally demanding view of the competence itself. In particular, I see no case for the view that the knowledge-base is constitutive of this competence. So the considerations he adduces should not take him so close to causal descriptivism.

Finally, I must remind Sterelny of our redundancy objection to that theory. This objection, arising out of the theory's parasitism, still strikes me as devastating. In light of the present discussion, we can expand on the objection as follows. Suppose, per impossibile, that a person knew that it was in virtue of 'N' standing in a certain causal relation to the referent that 'N' refers to the referent; i.e., she knew the correct causal theory. What role could that knowledge play? Not the role of constituting the reference of 'N': see the redundancy objection. Yet that piece of declarative knowledge could cause the person to establish that reference-determining causal relation between 'N' and an entity. So, that knowledge's role would be analogous to that of the instruction in learning to change gears in a stick-shift car.

19.3  Theory of Meaning

19.3.1  Direct Reference (Braun, Horwich)

David Braun and I have been arguing about direct reference (DR) for 25 years. Braun believes in it, I don't (1989a, 1996, 2012d). DR is more often declaimed than argued. I have always admired Braun's conscientious and ingenious attempts to argue for this unpromising doctrine, illustrated again in "Still for Direct Reference". Braun's paper starts with nice accounts of DR and of my contrary view (for which I thank him). I shall presume these accounts in what follows. Braun goes on to give a long, thoughtful, and intricate response to my criticisms of DR. It would be a daunting task to respond to this in the detail it deserves. Sadly, I cannot attempt anything close to that here. I apologize for the resulting density of what follows.

DR, as we are understanding it,47 is a resurrection of the "Millian" theory according to which the "meaning" ("semantic value", "semantic content", etc.) of a proper name is simply its bearer or, as I prefer to say, its property of referring to its bearer. So, it faces the familiar, and apparently overwhelming, problems that led Frege and Russell to abandon Millianism long ago in favor of description theories. Despite this, DR is commonly taken, by friend and foe alike, to be the theory of meaning required by the Kripkean revolution in the theory of reference. How did this widespread opinion come about? It's a curious history that I have recently tried to tell (2015b: 129–130). I am a maverick among the revolutionaries. For many years I did not entertain DR as even a candidate theory. And, from the beginning (1974: 204), I presumed that the meaning of a name was to be found in the causal network that determined its reference, a network of groundings and reference borrowings (Sects. 19.2.1, 19.2.3 and 19.2.4): Frege was right to think that a name's meaning was its mode of reference but wrong to think that this mode was constituted by associated descriptions. We can safely say that a key reason for the success of the implausible DR is that the idea that a name's meaning is its causal mode of reference strikes people as too shocking to be taken seriously (2015b: 130–132). Still, I argue that, shocking as it may be to the tradition, it is right (2001).

However, it is important to note, I came to allow – in Coming to Our Senses (1996) – that a name did not have just this one causal-mode meaning.48 The meanings we should posit are properties that play certain causal roles in the natural world. In particular, it is in virtue of their meanings that token thoughts and utterances cause behavior and guide us to reality. When it comes to guiding us to reality, a name token can mostly play its causal role simply in virtue of its property of referring to its bearer, simply in virtue of its DR meaning. But, when it comes to causing behavior, we need to posit a finer grained meaning, its property of referring to its bearer in a certain way, a mode of referring. And we should contemplate that the explanation of one piece of behavior may require a finer grained mode than the explanation of another. So, the main objection to DR is what it omits from meaning, not what it includes. Interestingly, Kaplan (2012), a founding father of DR, has recently said as much!49 The "new" Kaplan urges, for reasons similar to mine, a semantic place for nondescriptive ways of referring that are also similar to mine. Yet he emphasizes that he does not want to deny a place to "Millian" meanings: "I only wish to resist the term 'semantics' being hijacked for one kind of content" (2012: 168 n. 28). This seems dead right to me. There is no sound theoretical basis for DR's insistence that there is just the one kind of meaning that DR favors. I turn now to Braun's defense of DR.

47  "Direct reference" is often understood in other ways (1989a: 206–212; 1996: 170n).

(I) Braun sees an important difference between us over attitude ascriptions:

an expression is Shakespearean iff substitution of co-referring proper names in that expression preserves the extension of that expression…. Direct-reference theory entails that attitude ascriptions and 'that'-clauses are Shakespearean, on all of their readings. (196)
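As a rough schematic gloss on this definition (the 'Ref' and 'Ext' notation and the substitution bracket are my shorthand, not Braun's), where E[n'/n] is the result of substituting the proper name n' for the proper name n in the expression E:

\[
E \text{ is Shakespearean} \;\iff\; \forall n\,\forall n'\,\bigl(\mathrm{Ref}(n) = \mathrm{Ref}(n') \rightarrow \mathrm{Ext}(E) = \mathrm{Ext}(E[n'/n])\bigr).
\]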

On my view, in contrast, these ascriptions are not Shakespearean on some readings. Why is this difference important? For me, not primarily because of what it shows about the meanings of these attitude ascriptions,50 but because of what it shows about the nature of the properties ascribed by these ascriptions (1996: 70–81, 140–150). For, these ascriptions ascribe properties that seem to play the causal roles of meanings and so provide evidence of the nature of meanings, evidence that meanings are those ascribed properties. Thus, whereas

(1) Ralph believes that Ortcutt is a spy,

construed transparently,51 ascribes the property of referring to Ortcutt, construed opaquely, it ascribes the property of referring to Ortcutt under the mode of 'Ortcutt'. I take this as evidence, although not the only evidence (1996: 150–154), that the latter property is part of a meaning ("content") of the belief in question and hence of any utterance of 'Ortcutt is a spy' that expresses that belief. For, opaque ascriptions serve our purposes of explaining and predicting behavior:

when we articulate the generalizations in virtue of which behavior is contingent upon mental states, it is typically an opaque construal of the mental state attributions that does the work. (Fodor 1980: 66)

48  Braun notes that DR is concerned with "the conventional meanings of linguistic expressions" (203) and is worried that my concerns may be elsewhere so that we are talking past each other. I don't think we are. Now it is true, as he notes, that my focus in Coming is on the meanings of thoughts rather than of utterances but I am also concerned with utterances. I later describe that concern as follows:

Paul Grice (1989) has drawn our attention to the distinction between the conventional meaning of an utterance and its speaker meaning…. In doing semantics, we are primarily interested in conventional meanings on occasions of utterance because those are the meanings that are the main routes to speaker meanings and hence thought meanings…. So the basic task for utterances is to explain the nature of properties that utterances have largely by convention and that play semantic roles. (2012d: 65; see also p. 79)

49  My telling of the history of DR includes quite a lot on Kaplan, of course (2015b: 129–130).
50  Although I do discuss these meanings, of course (1996: 82–84, 196–218).

Clearly, the opaquely construed (1) could not then be Shakespearean: substituting another name for Ortcutt, say 'Bernard', would change what it ascribes. Now, some puzzle cases required adding a few bells and whistles to this theory. One puzzle that Braun focuses on is Kripke's Pierre (1979b: 254–265). This case led me to say that in

(2) Pierre believes that London is pretty,

opaquely construed, and ascribed because Pierre is ready to assert "Londres est jolie", 'London' ascribes "the 'disjunctive' mode of 'London' or 'Londres' rather than a finer-grained meaning involving one of the disjuncts" (1996: 233).52 There is nothing special about the French version of 'London' and so I added a footnote to the mention of 'Londres' saying: "Or, indeed, the 'translation' of 'London' in any other language" (233 n. 81). This is the basis for Braun's first criticism.

Devitt can resist the Shakespearean conclusion only if 'Londres' is a translation of 'London', but 'Bernard' is not a translation of 'Ortcutt' (and 'Ortcutt' is not a translation of 'Bernard'). Is there any reasonable translation relation that would have this consequence? (211)

He argues ingeniously that there is not: I must settle for translation as co-reference and accept Shakespeareanism (214). If this were so it would seriously undermine my case for meanings as causal modes of reference. Braun puts a lot of weight on this argument (234).

(a) First, Braun takes my scare-quoted use of 'translation' in a footnote far too literally. My idea is simply that the opaque (2) ascribes a mode of referring to London by the causal network for 'London' or by some closely related network like that for 'Londres'. Just how closely related? It is hard to say: there is some indeterminacy here, as almost everywhere. Still, there are clear cases of not being close enough: the network for 'Mark Twain' is not close enough to that for 'Samuel Clemens'. Similarly, it is hard to say how close to a full head of hair a person has to have to be hirsute but Yul Brynner is clearly not close enough. And Ralph having a belief under the mode of 'Bernard' cannot be enough to make the opaque (1) true. So I don't have to accept Shakespeareanism. These claims rest largely on intuitions about the truth values of attitude ascriptions. There is a more important consideration that is not about such "second-level" utterances.

(b) The Pierre puzzle does not undermine the argument, alluded to above, about the meanings that cause behavior (1996: 140–154). It follows from that argument that part of the meaning we have to attribute to Pierre's "first-level" utterance of "Londres est jolie", and to its underlying belief, in order to fully explain Pierre's behavior is the property of referring to London under the mode of 'Londres'; similarly, to his utterance of "London is not pretty", the property of referring to London under the mode of 'London'. Those are the meanings that fully explain Pierre's behavior.

Turn for a moment to Horwich. In considering my theory, he thinks that "one might well baulk at the idea that different primitive terms never have (indeed cannot have) the same meaning. What about 'London' and 'Londres'?" (291). As can be seen, I think that the causal roles of a name justify our ascribing more than one meaning to it, some of which can be shared by another name. Thus 'London' and 'Londres' do share the following two meanings: the property of referring to London; the property of referring to London under the disjunctive mode of 'London' or 'Londres', or … But they do not share the property of referring to London under the mode of 'Londres'; only 'Londres' has that meaning. And its having that meaning is important in explaining Pierre's behavior.53

The Pierre puzzle does not show that meanings are not causal modes. Rather, it shows that our ordinary attitude ascriptions are not always perfect ways to attribute the causally significant meanings: the disjunctive meaning that (2) ascribes "is not fine-grained enough to explain [Pierre's] behavior" (1996: 234). I had earlier found a similar imperfection in discussing demonstratives: Richard's puzzle (1983) shows that "the folk do not have standard forms of ascription that distinguish the meanings" of demonstrative beliefs that need to be distinguished to explain behavior in Richard's ingenious story (1996: 221).54 And it is not hard to see why the folk do not have standard forms that will do the required explanatory job in these situations: the situations are unusual (231–234). Braun is not the first enthusiast for DR to get comfort from puzzles about attitude ascriptions. I responded to some earlier examples as follows:

The conclusion that the direct-reference philosophers draw from the puzzles … are "in the wrong direction". The puzzles do not supply evidence that meanings are coarse grained but rather supply evidence that some meanings are more fine grained than those ascribed by standard attitude ascriptions. The puzzles show that we sometimes need to ascribe these very fine-grained meanings to serve the semantic purpose of explaining behavior. (243)

51  An expression is transparent iff substitution of co-referring singular terms in that expression preserves the extension of that expression. So where the definition of 'Shakespearean' talks of the substitution of proper names, that of 'transparent' talks of the substitution of singular terms.
52  'London' does not do so in all opaque ascriptions, however; consider "Pierre believes that London is not Londres" (235).
53  Horwich (291) also raises Kripke's case of Peter and 'Paderewski'. I have discussed this difficult problem (1996: 236–240).
54  I contemplate positing "hidden indexicals" to avoid these charges of imperfection (221–222, 234–235).

(II) Braun turns next to arguments against DR, particularly my versions of the Identity Problem and the Opacity Problem (173–176). After presenting these versions, he rightly says that they

do not show direct-reference theory is false unless they also show either (a) that 'Twain is Twain' and 'Twain is Clemens' differ in conventional meaning or (b) that 'Alice believes that Twain is Twain' and 'Alice believes that Twain is Clemens' differ in conventional meaning. (219)

He continues: “But Devitt’s arguments do not show this.” Well, they were certainly intended to (2012d: 79)! Let me make that explicit by adding ‘conventionally’ to a recent statement of the argument: suppose Abigail has a neighbor called “Samuel Clemens” and reads books by someone called “Mark Twain”. On hearing someone report, “Mark Twain is at the Town Hall”, she rushes there, saying, “I’d love to meet Mark Twain”. It makes all the difference to our explanation of Abigail’s behavior that she said this not “I’d love to meet Samuel Clemens”. For, the former [conventionally] expresses a thought that causes her to rush to the Town Hall whereas the latter [conventionally] expresses one that might not have (because she may not know that Mark Twain is Samuel Clemens). It matters to the explanation of her behavior that her utterance [conventionally] refers to Twain/Clemens under the mode of ‘Mark Twain’. (2015b: 133)

What could be the basis for denying that the mode of reference of 'Mark Twain' that is crucial to the name's causal role here is a conventional meaning of the name? At this point Braun follows the DR tradition of clutching at the straw of pragmatics (219–220). But there is no principled basis for treating this phenomenon pragmatically; or so I have argued (1989a, 1996, 2012d). The use of the name in this story exemplifies the undeniably regular practice of using a name to convey a message partly constituted by the name's mode of reference. This regularity is best explained by positing a convention of using a name to convey such parts of messages; indeed, there is no other plausible explanation. So we should posit a convention (2013d: 107). So the mode of reference is a conventional meaning of the name.

(III) Appeals to pragmatics loom large in Braun's intricate defense of DR against the charge that DR cannot account for the causal role of utterances (221–234). Indeed, that defense ultimately rests on exporting the troubling phenomena to pragmatics. This exportation strategy is demonstrated in two crucial, and characteristically honest, admissions at the end of Braun's discussion. The first concerns attitude ascriptions:


I admit that attitude ascriptions that ascribe modes of reference provide more explanatory information about an agent's behavior than do (direct-reference-style) attitude ascriptions that do not. But I deny that modes of reference are among the meanings that attitude ascriptions ascribe to thoughts in virtue of the conventional meanings of those ascriptions…. [The] utterances of ordinary attitude ascriptions can pragmatically convey information about modes of reference. (232)

But the attitude ascriptions that provide that "more explanatory information" are not occasional occurrences, they are regular ones: attitude ascriptions with names are standardly used in a non-Shakespearean way. I stand by the charge that there is no principled basis for denying that this usage exemplifies a convention. Braun likes to defend DR by arguing that attitude ascriptions, "second-level" utterances, are Shakespearean. I agreed that this issue is important because the non-Shakespearean nature of these ascriptions provides evidence that modes of reference are meanings of names. But it is much more important to look at the causal role of names in "first-level" utterances. Braun's second admission is helpful on this:

I admit that modes of reference are theoretically interesting, in the following ways: modes of reference are real properties, some parts of thinking-events have those properties, they are causally relevant to behavior, and mentioning them (e.g., in technical ascriptions) can add explanatory information to explanations of behavior. But I also maintain that there is a principled and theoretically interesting distinction between conventional and non-conventional meaning. (233)

I agree with every bit of this, of course. Now think of its implications. Consider one of those "thinking-events", Betty's belief that Twain smokes (one of Braun's examples). Braun agrees that this belief plays its role in causing behavior partly in virtue of its property of referring to Twain under the mode of 'Twain'. So he should agree that this property is part of the meaning (content) of that belief; playing that causal role makes it a meaning. Let us call that thought meaning, partly constituted by the mode of 'Twain', "M". Obviously, Betty can express that belief by attempting to convey M to an audience, just as she can express any belief by attempting to convey its meaning. Grice (1989) and many others have emphasized that she might attempt this in many ways; for example, in certain circumstances, she might attempt it with the words, "Twain has a death wish". Whatever words Betty chooses for that purpose will "speaker mean" M, for the meaning of the thought she expresses determines what she means by her words. However, as an English speaker, Betty is very likely to use the sentence, "Twain smokes", for that purpose. She is very likely to use that sentence because it is the conventional English way of expressing a belief with meaning M. So the utterance of "Twain smokes" not only speaker means M, as does the utterance of "Twain has a death wish", but it also conventionally means M. So referring to Twain by the mode of 'Twain' is a conventional meaning of 'Twain'.

Where might Braun get off this train? He agrees that a thought's mode of referring is causally relevant to behavior and so is theoretically significant. (i) But he might resist the idea that this makes that property a meaning. What then does make a property a meaning? One wonders whether such resistance would be "merely verbal". (DR has often led me to such wonderings: 1989a, 1996, 2012d.) (ii) He is not likely to claim that whereas 'Twain smokes' is not the conventional expression of M, some other form of words is. (iii) We are left with this option: there is no conventional expression of M. This would be weird. Apart from the evidence that 'Twain smokes' is such a conventional expression, it would be a truly remarkable failure of our species – I assume that it would not be a failure of just English speakers – not to have managed over many millennia to come up with a conventional way of conveying a piece of information that is so causally significant.

19.3.2  Descriptive Names and "the Contingent A Priori" (Salmon, Schwartz)

Nathan Salmon's "Naming and Non-necessity" is a thorough and judicious examination of

Kripke's startling claim … that certain sentences that are (semantically) true as a consequence of the way a name's reference was fixed by description are metaphysically contingent yet knowable a priori. (237)

Names that have their reference fixed by a description (more precisely, by an attributively used description) are now usually called "descriptive names". They stand in contrast to the paradigm names discussed in Sect. 19.2.2, which have their reference fixed by a causal grounding. 'Jack the Ripper' is a favorite example of a descriptive name and it is one of Kripke's. It is the example I shall use. Kripke supposes that "the police in London use the name 'Jack the Ripper' to refer to the man, whoever he is, who committed all these murders, or most of them" (1980: 79). Salmon takes Kripke to be committed to the view that the following sentence is "(semantically) contingent yet a priori for the reference fixer (at the time of fixing)":

(3) If anyone singlehandedly committed such-and-such murders, then Jack the Ripper did. (239)

Let us start with the contingency thesis. Salmon does not mince his words: "Kripke has persuaded the angels and all right-minded philosophers that [(3)] is indeed contingent" (241). Well, as Salmon notes (241 n. 6), Kripke has not persuaded me (even though named after a top angel). The thesis that (3) is contingent rests on the thesis that 'Jack the Ripper' is rigid. But why suppose that? To Salmon and other direct referentialists this rigidity thesis clearly seems intuitive, as intuitive as the rigidity thesis about paradigm names like 'Aristotle'. It doesn't seem so to me. I have noted that

when we submit [descriptive names] to the standard tests for rigidity, they do not clearly pass, even if they pass at all. Consider, for example, how 'Jack the Ripper' fares on what Kripke describes as an "intuitive test" of rigidity (1980: 48–49, 62). The following should be false:

Jack the Ripper might not have been Jack the Ripper.

Yet it seems not to be so, at least not clearly so. Similarly,

It might have been the case that Jack the Ripper was not a murderer,

(with 'Jack the Ripper' having narrow scope) should be true, but it seems not to be. And suppose that Prince Alfred was Jack the Ripper. Is it really necessary that he was? If ever there was a thesis in the philosophy of language that needs more than intuitive support – and I think they all do (2012a, b) – the thesis that descriptive names are rigid is surely one. (2015b: 136–137)

Note that the intuition that a paradigm name like 'Aristotle' is rigid, an intuition that I do share, does have other support: it is rigid because its reference is fixed by a causal grounding in an actual object (2005a: 145). But there is no analogous support for the rigidity of 'Jack the Ripper'. Salmon quotes (241) a passage in which Kripke imagines a hypothetical formal language in which a rigid designator 'a' is introduced with the ceremony,

'Let "a" (rigidly) denote the unique object that actually has property F, when talking about any situation, actual or counterfactual.' (1980: 14)

Clearly descriptive names could be introduced into our actual language in this way (perhaps implicitly). But where's the evidence that they are? After all, we can imagine another hypothetical formal language in which a non-rigid designator is introduced like this: "Let 'a' denote the unique object that has the property F in a situation, actual or counterfactual." (Note that this alone does not make such a designator synonymous with its introducing description. It might be open, as descriptive names like 'Jack the Ripper' are open (2015b: 136), to being borrowed by people who are ignorant and wrong about the referent.) I rather doubt that there is any determinate matter of fact about which of these hypotheticals fits our actual practice. And we surely don't have any evidence of which.

Turn now to the a priori thesis. Salmon has a traditional understanding of 'a priori knowledge': "knowable with epistemic justification that is independent of experience" (238); his example is justification by mathematical proof (238). He carefully considers how reference fixing might yield a justification of the likes of (3).55 He demonstrates that the justification rests on knowledge of semantic facts that is a posteriori (242–243). He goes on: "We should probably conclude from the preceding considerations that Kripke means something different by his use of the term 'a priori'" (243). He comes up with a proposal:

I submit that what Kripke has in mind by his use of the term 'a priori' is a truth that is knowable independently of any experience beyond that on which knowledge of purely semantic (and/or purely pre-semantic) information about the language in question depends (even insofar as such knowledge is a posteriori in the traditional sense)…. I shall say that a truth is quasi-a-priori – for short, qua-priori – if it fits this broader notion. (244)

Even an old Quinean like me who believes that there is no a priori knowledge at all (1998, 2011d) could accept that our knowledge of (3) is qua-priori. Salmon has made a good point about the empirical nature of the semantic knowledge, arising from a stipulated meaning, that plays a role in the justification of the allegedly a priori (3). I have argued that we should go further along these lines: the semantic knowledge, supposedly arising from "conceptual analysis", that plays a role in the justification of allegedly a priori "analytic truths", is also empirical. Thus, our knowledge of "All bachelors are unmarried" is supposed to rest on our semantic knowledge that the content of the concept BACHELOR is the same as that of UNMARRIED MAN. But that semantic knowledge, if knowledge it is, would come from empirical semantics (2011d: 25). Consider also this claim by Stephen Schwartz in "Against Rigidity for General Terms": "People who are conversant with the terms 'sloop' and 'sailboat' know a priori that all sloops are sailboats" (260). If those people have this knowledge about sloops on the basis of their semantic knowledge about 'sloop' and 'sailboat', then the knowledge is empirical. In sum, the famous examples of "the contingent a priori" are not a priori and may not be contingent.

55  His consideration is actually of another Kripke example, the one-meter stick. My remarks extend mutatis mutandis to a further example, Neptune, if not also to the one-meter stick.

19.3.3  Rigidity in General Terms (Schwartz)

There is a well-known problem extending Kripke's notion of rigidity from singular terms to general terms. Schwartz begins his paper with a nice description of this problem and of two proposed solutions to it. He calls one of these proposals "rigid expressionism". It is the view that "rigid general terms designate or express the same kind or property in every possible world" (250–251). Schwartz rejects this, as he has done before in "Kinds, General Terms, and Rigidity" (2002). I have also rejected it in "Rigid Application" (2005a: 140–143; also in 2009b, in response to Orlando 2009). I find his case against rigid expressionism largely persuasive and will not discuss it. Schwartz calls the other proposal "rigid essentialism". He rejects this in general and my version in particular. Here is my version:

a general term 'F' is a rigid applier iff it is such that if it applies to an object in any possible world, then it applies to that object in every possible world in which the object exists. Similarly for a mass term. (2005a: 146)
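Set out schematically, and only as a rough sketch of the quoted definition ('Applies' and 'Exists' are my informal shorthand), with the quantifiers ranging over objects and possible worlds:

\[
\text{`}F\text{' is a rigid applier} \;\iff\; \forall x\,\forall w\,\forall w'\,\bigl(\mathrm{Applies}(\text{`}F\text{'}, x, w) \wedge \mathrm{Exists}(x, w') \rightarrow \mathrm{Applies}(\text{`}F\text{'}, x, w')\bigr).
\]

The contrast with rigid expressionism is then plain: rigid application constrains which objects 'F' applies to across worlds, not which kind or property 'F' expresses.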

Now Schwartz and I both emphasize that a notion of rigidity must do some theoretical work. But we differ sharply on what that work is. Schwartz thinks that the job of rigidity is to distinguish “natural” kind terms from others. This led him to criticize an earlier presentation of my view (Devitt and Sterelny 1999: 85–86) on the ground that rigid application does not do that job (Schwartz 2002). He claimed, on the one hand, that some nominal kind terms like ‘television set’ are rigid appliers. He claimed, on the other hand, that some natural kind terms like ‘frog’ are not rigid appliers. My negative response to Schwartz was to argue that “even if these claims are right [I don’t think they are about ‘television set’ (2005a: 155–156)], they are not grounds for dissatisfaction” with my notion of rigidity because it is not the task of such a notion to distinguish natural from non-natural kind terms. I also argued, positively, that “the primary task is to distinguish kind terms that are not covered by a description theory from ones that are” (2005a: 154). In the present paper, Schwartz responds:


Rigid essentialism offers no systematic or plausible distinction between natural kind terms and nominal kind terms nor is it useful in defeating descriptionism and thus loses its point. (251)

So, despite my negative argument, Schwartz is still demanding that rigidity distinguish natural and nominal kind terms. And, despite my positive argument, he denies that rigid application counts against descriptionism. I shall briefly rehearse my arguments before turning to what Schwartz says in support of his position.

Positive Thesis  On the basis of the theoretical work that Kripke did with rigidity in discussing proper names, I argued (2005a: 144–148) that “the primary work we should expect from a notion of rigidity for kind terms is featuring in lost rigidity arguments against description theories of meaning for some terms” (145). Lost rigidity arguments have the following form: the term in question is rigid; a description of the sort that the description theory alleges to be synonymous with the term is not rigid; so, the term is not synonymous with that description and the description theory is false. Finally, rigid application does indeed feature in lost rigidity arguments and so does the required work. So far as I can see, nothing Schwartz says actually counts against this thesis. Suppose that the thesis is right. It raises two questions. (I) What terms are rigid and hence can feature in lost rigidity arguments? I shall discuss this soon. (II) What other theoretical tasks, if any, should rigidity perform? This brings us to the negative thesis.

Negative Thesis  It is true that Kripke’s favorite examples of rigid kind terms are (arguably) natural kind terms. Still, Kripke does not claim that all natural kind terms are rigid nor that all non-natural ones are non-rigid. Indeed, he thinks that “presumably, suitably elaborated…‘hot’, ‘loud’, ‘red’” (1980: 134) are among the rigid ones; see also the discussion of yellowness (128n). So, in partial answer to question (II), I could see no basis for the idea that rigidity should mark out the class of natural kind terms (2005a: 145–146). Later, in a passage mostly quoted by Schwartz (253–254), I’ve wondered why the idea that it should mark out this class has any prima facie appeal at all.

First, … ‘natural kind term’ is vague. The problem is that it is far from clear what it is for a kind to be natural and which ones count as natural…. Second, … it is hard to see how ‘natural kind term’ could come out as a theoretically significant description in semantics. Thus, ‘plastic’ is not likely to be classified as a natural kind term and yet it is surely semantically just like the paradigmatic natural kind term ‘gold’: the two terms seem equally nondescriptive …; and if there is any acceptable sense in which ‘gold’ is rigid then surely ‘plastic’ will be rigid in that sense too. Furthermore, the biological term ‘predator’ must be counted as natural and yet it seems descriptive and nonrigid. And what could be the principled basis for counting terms from the social sciences like ‘unemployed’ and ‘nation’ as not natural? Yet they are surely descriptive and nonrigid…. ‘[N]atural kind term’ does not cut semantic nature at its joints; it does not describe a natural kind! (2009b: 245)

I now have a more crushing objection to the idea. How could rigidity have the theoretically interesting task of marking out the class of natural kind terms? It is
trivial that a term is a natural kind term iff it refers to a natural kind. So, if we mark out the kinds, we mark out the terms; and vice versa. And we should proceed by marking out the kinds and thus the terms: we should “put metaphysics first”, as I like to say (2010c); see Sect. 19.4.1. Indeed, it is somewhat preposterous to think that we should proceed in the other direction, marking out natural kinds by way of a semantic thesis about natural kind terms. And if there is any theoretical interest in marking out the terms at all, it is because there is one in marking out the kinds.56 Schwartz criticizes rigid application for failing to do something that nobody should have ever thought a notion of rigidity needed to do. What does Schwartz have to say in support of his position? He thinks that my “theory depends for its plausibility on a limited diet of examples.” I “cherry pick” terms (252). But this can’t be an effective criticism of my positive thesis. According to that thesis, rigidity’s task is to feature in lost rigidity arguments against description theories. So rigidity “can apply where it may without reflecting on its worth” (2005a: 146). Schwartz makes it immediately apparent what lies behind his criticism: he is insisting, despite my negative thesis, that rigidity must sort the natural from the nominal kind terms: ‘frog’ is natural but not a rigid applier57; ‘refrigerator’ is nominal but a rigid applier. He concludes: “Devitt’s notion of rigid application cannot be used to systematically sort general terms into rigid-like natural kind terms versus non-rigid nominal kind terms” (252). But, as I emphasized in my earlier papers, this sorting is not the theoretically interesting role for rigidity. So far, Schwartz is replaying his earlier objections (2002), to which I have already responded (154–159). Let us say more about Schwartz’s nice example of ‘frog’. I presented his point that ‘frog’ is not a rigid applier thus: Consider a particular frog. It starts life as a tadpole and then turns into a frog. So ‘frog’ then applies to it. But in another possible world it dies young as a tadpole. So ‘frog’ never applies to it in that world. (2005a: 157)

I went on to argue for a view of ‘frog’ (157–159). I have since abandoned that view in responding to the criticisms of Ezequiel Zerbudis (2009) and have come up with a new one. This new view deploys a notion of mature-rigidity: A general term ‘F’ is a mature-rigid applier iff it is such that if it applies to an organism in any possible world, then it applies to that organism in every possible world in which the organism exists and develops to maturity. ‘Frog’ is a mature-rigid applier: anything that is a frog in some world will be a frog in any other possible world in which it exists provided it does not die as a tadpole. (2009b: 248)
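The two notions, and the lost rigidity argument form they are meant to serve, can be rendered schematically. (A rough illustration only; the symbols F(x, w) for ‘“F” applies to x in world w’, E(x, w) for ‘x exists in w’, and M(x, w) for ‘x exists and develops to maturity in w’ are introduced here and are not drawn from the original papers.)

\[
\textit{Rigid application:}\quad \forall x\,\forall w\,\forall w'\,\bigl(F(x,w) \wedge E(x,w') \rightarrow F(x,w')\bigr)
\]
\[
\textit{Mature-rigid application:}\quad \forall x\,\forall w\,\forall w'\,\bigl(F(x,w) \wedge M(x,w') \rightarrow F(x,w')\bigr)
\]

A lost rigidity argument then has the shape: ‘F’ is a (mature-)rigid applier; the description ‘D’ that a description theory offers as synonymous with ‘F’ is not; synonymous terms cannot differ in this respect; so ‘F’ is not synonymous with ‘D’, and that description theory fails.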

56  Schwartz makes the interesting suggestion that we follow Mill in taking natural kinds to be those “whose nature is inexhaustible” and nominal kinds those “which rest on one or only a few criteria” (254). Suppose we do follow Mill, then we could immediately, without any appeal to semantic theory, distinguish natural from nominal kind terms!

57  Schwartz also wonders whether it is possible for “mature eagles [to] turn into mature tigers” (253). I agree with the biological consensus that this is not possible: it is essential for something to be a tiger that it has a certain history (2018). Were an eagle to turn into something tiger-ish that something would not be a tiger because it would not have the required history.


Schwartz is dismissive: “This strikes me as adding complication to complication in an ad hoc attempt to save a theory” (253). I presume that Schwartz thinks this because he takes my claim that ‘frog’ is a mature-applier to be an “attempt to save the theory” that rigid application distinguishes natural kind terms. But that is not the purpose of the claim for that is not my theory. My negative thesis, in partial answer to question (II), is precisely that it is not the task of rigidity to distinguish natural kind terms. Schwartz has misunderstood the point of my discussion of ‘frog’. The discussion arises from my concern with question (I), with what terms are rigid and hence can feature in lost rigidity arguments. The point of the discussion was to lessen the disappointment of discovering, thanks to Schwartz, that rigid application did not yield a lost rigidity argument for ‘frog’ (2005a: 157). I was looking for another notion of rigidity that would yield such an argument for ‘frog’. Mature-rigidity does the trick: it will serve to refute any description theory of ‘frog’ constructed in the usual way from descriptions of the readily observable properties of frogs. In the actual world those descriptions apply to frogs but in another possible world they might apply not to frogs but to other organisms altogether. The descriptions form a non-mature-rigid applier. (2009b: 248–249)

In sum, rigid application features in lost rigidity arguments for some terms; mature-rigid application, for others. Mature-rigidity is not introduced “to save a theory” but to add another one. And, to repeat, neither theory has anything to do with distinguishing natural kind terms. I claim that the task of rigidity is to feature in lost rigidity arguments. I claim that rigid application and mature-rigid application both fulfil that task for some terms. Schwartz has not said anything that casts doubt on these claims. His criticism that these notions do not distinguish natural from nominal kind terms is misguided. It demands that rigidity do something that nobody should have ever thought it needed to do.

19.3.4  Narrow Meanings (Lycan, Horwich)

In his paper, “Devitt and the Case for Narrow Meaning”, Bill Lycan generously describes my paper, “A Narrow Representational Theory of the Mind” (1989b), as “distinctive and valuable” (268). Well, distinctive it may be, but I do wonder about its value. By the time of Coming to Our Senses (1996), I had come to realize, as Lycan notes (276), that that paper rests on a serious conflation of the important distinction between narrow meanings as functions from contexts to wide meanings and as functional roles (255 n. 10). More on this distinction below.

Narrow meaning/content was all the rage in the 1980s. It was generally agreed that the meanings we ordinarily ascribe to explain behavior are truth-referential and hence “wide”. Still, many thought that the meanings we ought to ascribe for that purpose should be “narrow”. For some, cognitive psychology should explain the interaction of mental states with each other and the world by laws that advert only
to formal or syntactic properties. For others, the semantics for psychology should be richer in some way but still narrow. I was one of those who took this revisionist line. I regret doing so. My view on this issue in the 1989 paper is ably presented and criticized by Lycan (268–275). Some features of that paper re-appear in the 1996 book, but placed in a framework that strictly observes the aforementioned distinction that was conflated in the paper. I still believe what I said about the issue in the book and that is what I shall discuss. That book, like the earlier paper, focuses on two arguments for revisionism, the argument from the computer analogy (1996: 265–272) and the argument from methodological solipsism (272–277). Lycan and I are in almost total agreement. First, neither of these arguments supports the view that psychology should ascribe only syntactic properties,58 strictly understood, to mental states. Such properties may be adequate for the explanation of thought processes but they are not for the explanation of thought formation or behavior. The mind as a whole is not purely syntactic at any level, even the implementational (1996: 277–284). In arguing this, Coming, like the 1989 paper, emphasizes three distinctions that tend to be overlooked in the debate: first, that between a token’s intrinsic, fairly brute-physical properties like a shape, which I call “formal” properties, and syntactic properties, which are extrinsic functional properties that a token has in virtue of its relations to other tokens in a linguistic system (258–265); second, that between processes that hold only between thoughts, and mental processes in general, which may involve not only thoughts but also sensory inputs and behavioral outputs (266–267); and, third, that between syntactic properties, which are constituted only by relations between linguistic tokens, and putative narrow meanings, which involve relations to nonlinguistic entities – for example, sensory inputs and behavioral outputs – as well (273–275).

The argument from methodological solipsism may seem to support the view that psychology should ascribe only narrow meanings. To assess this support we need to make that aforementioned distinction between two views of narrow meaning (285–286). According to one,59 the narrow meaning of a sentence is a function taking an external context as argument to yield a wide meaning as value. So, on this view, narrow meanings partly determine truth conditions and reference; they are intentional wide meanings “minus a bit”; they are “proto-intentional”. The belief that we need only these meanings to explain behavior is, therefore, only moderately revisionist. According to the other, more popular, view of narrow meaning,60 that meaning is a functional role involving other sentences, proximal sensory inputs, and proximal behavioral outputs. These putative meanings are not truth-referential and differ greatly from the meanings that we currently ascribe. The belief that we need only these meanings to explain behavior is, therefore, highly revisionist.
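The first, moderately revisionist view can be put in a schematic formula (a rough illustration only; the function notation is introduced here and is not drawn from the texts under discussion):

\[
n_e : C \rightarrow M_{\mathit{wide}}, \qquad n_e(c) = \text{the wide, truth-referential meaning that } e \text{ has in external context } c .
\]

Here n_e is the putative narrow meaning of expression e and C the set of external contexts; the functional-role view, by contrast, posits no such mapping into wide meanings and determines no truth conditions.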

58  Cf. Stich 1983; but see also Stich 1991.

59  To be found, e.g., in White 1982; Fodor 1987: 44–53.

60  To be found, e.g., in Loar 1981 and 1982, McGinn 1982, Block 1986.


Coming finds some truth, though not enough, in the argument for the moderately revisionist view, but none at all in the argument for the highly revisionist one. My responses to these arguments reflected the influence of Tyler Burge (1986). Start with the moderately revisionist view. What precisely are these proto-intentional narrow meanings? What constitutes their functions? Consider what must be the case for any token to have a certain wide meaning. Part of the answer is that the token must have some properties that it has solely in virtue of facts internal to the mind containing it. Those properties constitute its narrow meaning. So, if we had a theory of its wide meaning, we would know all that we needed to about its narrow meaning. The theory will explain the narrow meaning by explaining the way in which the mind must take a fact about the external context as the function’s argument to yield a wide meaning as its value. (286)

Lycan is puzzled: I don’t understand Devitt’s allusion to “the mind” and what it “must take.” It is the theorist whose task is to explain the way in which the external context combines with internal properties of a mental token to yield the token’s wide meaning. (276)

Let me clarify. Suppose that we are considering the narrow meaning of ‘bachelor’ and our theory tells us that the reference of ‘bachelor’ is determined descriptively. Then, the function that is ‘bachelor’’s narrow meaning will be partly constituted by some inferential properties that will constrain what it could refer to in a context; thus, perhaps ‘bachelor’ could refer to something in a context only if ‘unmarried’ refers to it in that context. (1996: 290)

Suppose so. Then our theory tells us that the way in which the function for ‘bachelor’, a part of the mind, takes a certain fact about the external context as its argument is partly via the function for ‘unmarried’, another part of the mind, taking a certain other fact about the external context as its argument. In contrast, suppose that we are considering the narrow meaning of ‘Aristotle’ and our theory tells us that the reference of ‘Aristotle’ is determined causally. Then a certain causal network is the external fact that the function for ‘Aristotle’, a part of the mind, takes as its argument, without any dependence on the functions for other terms. This “direct” way of selecting an argument is very different from the indirect way for ‘bachelor’. Given that a theory that explains wide meanings will explain these functions, such narrow meanings must be acceptable to someone who believes, as I do, in wide meanings. And these narrow meanings would indeed yield explanations of behavior. But there is a concern about this because many of these narrow meanings are likely to be “coarse grained” in that there is not much to them and “promiscuous” in that they can yield any of a vast range of wide meanings as values by changing the relevant external context as argument (287–292). Lycan offers a “rebuttal” but I think this arises from a misunderstanding: even if mental proper names are purely referential expressions, of course they do not by themselves contribute much to psychological explanation, nor would be expected to. They occur syntactically impacted in longer constructions, paradigmatically whole sentences. (277)


And he points out that the different coarse-grained and promiscuous narrow meanings of pronouns can make significantly different contributions to psychological explanations. I agree with both these points, but they are not at odds with my proposal. To say that a meaning is coarse-grained is not to say that it has no grain; to say that it is promiscuous is not to say that it can take any value in a context. And, remember, we are contemplating that the coarse-grained and promiscuous terms may include not just proper names and pronouns but many mass terms, general terms, adjectives, and verbs. Just how many should be included “is an open question to be settled with the help of future theories of reference” (1996: 291). I go on to argue (299–312) that these narrow meanings could serve psychological needs if the behaviors that needed to be explained were themselves only narrow and “proto-intentional”. The objection to restricting psychology to explanations involving narrow meanings is then that we do not need to explain only proto-intentional behavior. For, that behavior is also likely to be coarse grained and promiscuous. The intentional behavior, giving water to Mary, involves giving, water, and Mary. So too does the related proto-intentional behavior in one context, but that behavior might, in some other context, involve taking, kicking, or many other acts; XYZ, or many other stuffs; Twin Mary, or any other person. We certainly want to have the more discriminating intentional behavior explained somewhere. No good reason has been produced for thinking that it should not be explained in psychology.61

Coming is rather more critical of the more popular functional-role narrow meanings than Lycan conveys (278–279). I argue (292–299) that these putative meanings are left almost entirely unexplained and mysterious. Even if they were not, we have been given no idea how such meanings could explain intentional behaviors (like giving water to Mary) and it seems very unlikely that they could. If they do not explain these behaviors, then revisionism requires that intentional behaviors be denied altogether; for if there are these behaviors and they are not explained by narrow meanings then it is not the case that psychology should ascribe only narrow meanings. We have been given no reason to deny intentional behaviors. The argument from methodological solipsism does nothing to solve these problems for functional-role meanings. This is very bad news for the highly revisionist doctrine that psychology should ascribe only these putative meanings. That doctrine has a heavy onus arising from the apparently striking success of our present practice of ascribing wide meanings to explain behavior. Why do these ascriptions seem so successful if they are not really?

61  Horwich wonders how properties constituted by types of causal chains that are largely external to the mind could explain “any specific use” of a word and hence be meanings (292). The use of words is, of course, just one sort of intentional behavior that these properties are supposed to explain. How could they do that? It’s a long story (1996: 245–312; 2001: 482–490). A provocatively short version is: they can because they do; we have good reason to believe that we ascribe these properties to explain behavior; these explanations are successful; so what we ascribe are meanings.


What reason have we for thinking that the ascriptions that would be recommended by this revisionist doctrine would do any better? The doctrine has hardly begun to discharge its onus.62

62  An autobiographical note. Bill Lycan (273n) mentions the often-hilarious episode of “Monty Python’s Flying Circus” featuring the philosophy department at “the University of Woolloomooloo” (which led David Lewis to call his cat “Bruce”). There is of course no such university in Woolloomooloo, but there is a large pier. I sailed to England from that pier in 1947. Woolloomooloo used to be a poor area suitable for students. I lived there in 1965 when an undergraduate at the University of Sydney. Woolloomooloo is no longer poor and Russell Crowe lives on the pier.

19.3.5  The Use Theory (Horwich)

Horwich has presented his use theory of meaning (UTM) with great verve and clarity (1998, 2005). I have responded with two critical articles (2002, 2011a). Horwich’s “Languages and Idiolects” is, in part, a response. I shall start with a brief recap. Horwich’s UTM takes a meaning to be an acceptance-property of the following form:

‘that such-and-such w-sentences are regularly accepted in such-and-such circumstances’ is the idealized law governing w’s use (by the relevant ‘experts’, given certain meanings attached to various other words). (2005: 28)

In assessing UTM and comparing it with truth-referentialism, I noted (2002: 114) that we are unlikely to make progress by considering “nonprimitive” words like ‘bachelor’ which the truth-referentialist might well think are covered by description theories. We need to focus on “primitive” words like proper names and natural kind terms, the meanings of which are likely determined by direct relations to the world. (Sects. 19.3.1, 19.3.2, 19.3.3 and 19.3.4). Considering these primitives, I argued that “the very same considerations that were devastating for many description theories are devastating for Horwich’s use theory” (2011a: 205). Like description theories of reference, UTM suffers from a Kripkean “ignorance and error” problem (Sects. 19.2.1, 19.2.5). Horwich captured neatly the form of this problem for UTM: “members of a linguistic community typically mean exactly the same as one another by a given word, even when their uses of it diverge” (1998: 85–86; my emphasis). Horwich responded to this problem with the idea of deference to experts. (This process, with its epistemically demanding backward-looking intentions, should not be confused with the causal theory’s reference borrowing discussed in Sect. 19.2.1.) I complained (2002: 118) that Horwich offered only some brief remarks to explain this deference. And I gave three reasons initially (2002: 118–119) and two more later (2011a: 205) for doubting that these remarks could be developed into a satisfactory theory of deference. In brief: (i) People will often not defer where they should. (ii) They will often try to defer but fail to identify an expert. (iii) They will often defer to a nonexpert. (iv) When the bearer of a name has been long-dead – for example, Aristotle – there will be no experts around to defer to. (v) Where there are surviving experts about a dead person, there seems to be no change that a

deferrer could be disposed to make to conform to the experts’ basic acceptance properties. (2011a: 209)

Finally, I raised the “deeper problem” that “this appeal to deference seems to be incompatible with UTM” (2011a: 206). The problem for UTM arises from the following dilemma: either the members of a linguistic community typically share their meaning of a primitive word or they do not. Horwich’s appeal to deference seemed to arise from his grasping the first horn of the dilemma. (2011a: 206)

But, I argued, UTM cannot combine deference with shared meanings. Further, I argued, “the trouble with the second horn is that it is false” (207): meanings are typically shared. I concluded: “These problems are not minor ones of details. The problems strike at the very core of UTM. At the very least, Horwich owes us an account of how UTM can deal with them” (209). I take Horwich’s present paper to be attempting just that. Horwich responds to my dilemma, in effect, by grasping both horns! The members of a linguistic community share one meaning of a word, the “communal” meaning (285), but often not another, an “idiolectal” meaning (285). Horwich thinks he can have it both ways! This is characteristically clever but, I think, provides no way out of the dilemma. What about these idiolectal meanings? In talking about them in thesis (5), Horwich makes a distinction (albeit rather vague) between ordinary words and technical terms. In the former case, there’s a basic rule for the word’s use, which many of the speakers who often use it implicitly follow (and the majority follow at least approximately). (287)

So with “ordinary words” idiolectal meanings are more or less shared. But with “technical terms” the story is very different: the members of some small subset of the population – acknowledged experts in the relevant area – nearly all implicitly follow a certain rule for its use, whereas the non-experts rarely follow it. (287)

The “non-experts” are, of course, those who feature in Kripkean ignorance and error arguments. These arguments show that many users of proper names, natural kind terms like ‘elm’, and perhaps lots of other terms are “non-experts”. So Horwich’s “technical terms” must cover all these terms, indeed, all primitives and perhaps some nonprimitives like ‘sloop’ and ‘arthritis’ as well (2011a: 204–205). On his view, “experts” will have idiolectal meanings of these terms that will be very different from those of the “non-experts” and those of the “non-experts” will differ from each other. What about the communal meanings? Talking about them in thesis (3), Horwich says Each word’s sound-type … has a constant communal meaning. That type-meaning – which, together with the context of an utterance containing the word, determines the word’s contribution to what is said – does not vary from one speaker to another. (287)


This is vital to the plausibility of UTM, of course, because to suppose that a group of organisms has a language at all is to suppose that they share meanings. But how is this sharing possible given the deficiencies of the non-experts? In thesis (6) Horwich says that such deficiencies – no matter how great – don’t count against attributions of the communal meaning. Individuals are credited with that communal meaning simply by virtue of their membership in the community. (288)

All this raises two crucial questions which I will get to in a moment. But first I should briefly state my own position on the relation of idiolects to communal languages (2006a: 178–184; forthcoming-b: ch. 5). For, Horwich rightly claims that my idiolects “are crucially different” from the idiolects that he finds so theoretically important (294 n. 10). Idiolects  The meaning of an expression in a person’s idiolect is constituted by a linguistic rule. I take that rule to be the person’s disposition to associate the expression with that meaning in the production and comprehension of language: she is disposed to use that expression with that speaker meaning; and she is disposed to assign that meaning to the use of that expression by others who she takes to share her idiolect. Occasionally a person will not do what she is normally disposed to do; she will deliberately assign another meaning to an expression, as in a metaphor or pragmatic modulation; or she will make a performance error. In these cases, an expression will have a speaker or audience meaning that is different from its literal meaning in the person’s idiolect. What causes the expression to have its meaning in a person’s idiolect? (i) If Chomsky is right, as he probably is, some syntactic rules are innate. So part of the meaning of an expression may be innate. (ii) But a large part of its meaning is typically the result of the person’s participation in a convention; so, it is a conventional meaning. (iii) Finally, some of a person’s idiolect may be her own work and so a bit idiosyncratic: the literal meaning an expression has for her may not be a meaning it has according to any linguistic convention; Mrs. Malaprop is a famous example. Languages  An idiolect is shared in a community, and hence is the language of that community, to the extent that the members of the community are disposed to associate the expressions of the idiolect with the appropriate meanings. This sharing could, in principle, come about “by chance” as the result of each member’s own work. But, of course, it does not. It probably partly comes about because of some innate syntax. It is largely the result of the community participating in the same linguistic conventions. So, conventions are the typical cause of communal meanings (but, if anything close to this story is right, it is a mistake to suppose that conventions constitute meanings). What is it for the communal meaning of an expression to arise from a convention in a community and hence be a conventional meaning? It is hard to say, and I must be brief anyway. The central idea is that each member of that community is disposed to associate the expression with that meaning because other members are similarly

disposed; there is a certain sort of causal dependency of the disposition of each member of the community on the dispositions of others. The norm is for speakers in a community to share a linguistic meaning because they stand in the required causal relation. As a result, the idiolectal meaning of almost all expressions for almost all speakers will be the conventional meaning of those expressions in the speakers’ community. Mrs. Malaprop’s divergence from the norm is exceptional. Take any word. On this picture, a person usually participates in the conventions for that word in her community, but she may not. If she does participate, then her idiolectal meaning will be the same as the communal meaning. If she does not participate, it will be different; the word out of her mouth will literally mean something other than its meaning out of the mouths of her fellows who participate in the convention. But the word out of her mouth cannot literally have both her idiolectal meaning and the different communal meaning. Having both is simply impossible. Yet, on Horwich’s view of idiolectal meanings, this is not just possible but so common as to be almost the norm. I can see no theoretical justification for positing idiolectal meanings that are so unconnected to communal meanings; I do indeed think, as he supposes, that they are “completely beyond the pale” (294 n. 10). Horwich calls the acceptance property of an individual’s word her “idiolectal meaning”. That property is real enough but for it to be properly called a “meaning” at all, as Horwich and I agree, it has to play a certain causal role (2011a: 197–198). And any property that plays that role has to be related to what is properly called a “communal meaning” in something like the way I have described. Horwich’s acceptance property does not meet that requirement. It is time for the first crucial question. In virtue of what is a certain acceptance property the communal meaning of a word? What makes it the case that in a community of idiolects varying from experts to non-experts and among non-experts, that property is the meaning of the word in the community? Horwich has a breezy answer in his thesis (8): The upshot is that the meaning of a word in a communal language is grounded in the meanings it has within the various idiolects of individual members of the community – in the way that was indicated in theses (5) and (6). (288)

Horwich says later: “As for the name’s communal meaning, this might well be grounded in the basic acceptance rule implicitly followed by the relevant experts” (290). The idea that comes through from this is clear enough: with “technical terms”, which need to be all terms for which ignorance and error problems arise (including proper names), the communal meaning is the experts’ idiolectal meaning. And the “grounding” of communal meaning involves deference. But how does this work? We need an explanation of this grounding. Set that aside for a moment and consider a second related question, just as crucial as the first. It will be remembered that Horwich claims that, despite their deficiencies, non-experts are credited with the communal meaning “simply by virtue of their membership in the community”. But how could that be right? For them to be members of the community, in the respect that matters, they have to participate in the linguistic conventions of that community. It is not enough that they just hang out

together: they have to be talking to each other in the same language. And the nonexperts can’t be doing that because they have idiolectal meanings that are quite different from the communal conventional meaning. Insofar as they are members of a linguistic community, hence participating in the conventions of the language, the communal meanings are their idiolectal meanings. What could there be about these non-experts that makes a communal meaning theirs, given that they have various idiolectal meanings different from the community meaning and so are not participating in the communal conventions? How could their words be correctly credited with both meanings? Something like Horwich’s old idea of deference is again supposed to provide the answer. I have three responses to this. (1) I still have my old complaint that we have nowhere near enough explanation. He now thinks it was a mistake to suggest that “in order for a non-expert to use a technical term with its communal meaning she must defer to what the experts say with the help of that term.” So he weakens his deference requirement: “The only role now given to this notion is in roughly explaining the notion of the ‘acknowledged experts’ as ‘those to whose opinions there is a tendency to defer’” (291 n. 6). But he says no more. (2) Prima facie, that weakening does not escape the five objections summarized above. (3) For the reasons just given, there is no legitimate distinction between idiolectal and communal meanings that can solve his “deeper problem”: his appeal to deference seems to be incompatible with UTM; UTM cannot account for the sharing of meanings in a community. I concluded “Deference and the Use Theory”: In my view, the best hope for UTM would be to make it part of a hybrid account: combining UTM for the experts with the causal theory of deference [reference borrowing]. So the meanings of nonexperts will be determined by the meanings of the experts by means of deference as explained by the causal theory. At bottom meaning would be explained by use but otherwise not. It would be interesting to explore the problems for this hybrid. (2011a: 209)

Just as there is reference fixing and reference borrowing (Sects. 19.2.1, 19.2.2, 19.2.3, 19.2.4 and 19.2.5), there is meaning fixing and meaning borrowing. My hybrid proposal combines a use theory of meaning fixing with a causal theory of meaning borrowing. (My own view combines a referential theory of meaning fixing with that causal theory of borrowing.) Horwich has not, of course, embraced this proposal but some of his remarks suggest that he is not far away from it. Thus he speaks approvingly of what he aptly calls Kripke’s “‘contagion’ picture of communal-reference inheritance” and goes on: But could it be that S’s merely hearing the name, “N”, from a fellow community member will, by itself, guarantee that, when she proceeds to use it, “N” will have the very communal meaning and referent with which her source deployed the word? I’m inclined to say ‘yes’. (290)

This sounds awfully like the causal theory of reference/meaning borrowing (Sect. 19.2.1). If he’d only drop his talk of “tendency to defer”, and drop his idiolectal meanings that are beyond the pale, he would have the hybrid. I don’t think the hybrid is right but it has much better prospects than UTM.


19.4  Methodology

19.4.1  Putting Metaphysics First (Rey)

I have already discussed the parts of Rey’s “Explanation First! The Priority of Scientific Over ‘Commonsense’ Metaphysics” that concern linguistics (Sect. 19.1.1). The main concern of his paper is methodological. Rey has two criticisms of what he takes to be my methodology. The first of these is fairly harmless, the second is not. Both criticisms are baseless. I shall discuss the first in this section, the second in the next. I am fond of the maxim, “Putting Metaphysics First”. Rey thinks that it should be replaced by the maxim of putting explanation first because that maxim is more “fundamental”. Here is what I say about my maxim in a book named after it:

We should approach epistemology and semantics from a metaphysical perspective rather than vice versa. We should do this because we know much more about the way the world is than we do about how we know about, or refer to, that world. The epistemological turn in modern philosophy, and the linguistic turn in contemporary philosophy, were something of disasters in my view. My view here reflects, of course, my epistemological naturalism. The metaphysics I want to put first is a naturalized one. (2010c: 2)

The book is an extended argument for this point of view. Rey has this to say in favor of his maxim: I’d like to suggest replacing Devitt’s maxim with what seems to me a more fundamental one: Explanation first! I think it’s our reliable sense of explanation that underlies his and many other people’s considered judgment that, at least for the time being, the “metaphysics” of much of the natural sciences ought to enjoy a priority over most other claims, metaphysical, semantic, epistemic or otherwise…. I don’t expect Devitt to seriously disagree with any of this…. [H]e has in mind the metaphysics of the explanations provided by good empirical scientific theories. (300)

Indeed I do not disagree (apart from his replacement suggestion). All my work reflects an enthusiasm for using inference to the best explanation, or “abduction”, to find out about the world; thus, in discussing meaning I say, “As always, we seek the best explanation” (1981a: 115; see also, 1996: 48–86); my argument for scientific realism is an abduction (1984: 104–106; 1991: 108–110); my position on linguistics (2006a), which so appalls Rey – see Sects. 19.1.1 and 19.1.2 – is driven by explanatory concerns. But Rey’s maxim urging explanation as fundamental in epistemology could not be a replacement for my maxim because the two maxims have different concerns. My maxim is aimed at reversing practices of deriving a metaphysics from assumptions in epistemology or semantics, practices that have dominated the last three centuries. Rey’s maxim offers no guidance on those practices. And mine offers no guidance on what is fundamental in epistemology. Rey is making a false contrast. This is surely obvious and so Rey’s claim that his maxim is “a substantive alternative” to mine (320) is very odd.


Rey’s second criticism of my methodology is much more serious, so serious that Rey wonders whether it is “a serious indictment of Devitt’s work as a whole” (324)!

19.4.2  “Moorean Commonsense” (Rey)

Rey and I have aired disagreements on a number of substantive issues in many publications. In his present relentlessly unsympathetic paper, Rey seeks to discredit my side on these issues by attributing it to bad methodology: I am alleged to be steeped in the vice of “Moorean commonsense”. He, in contrast, is full of the virtue of science:

In Devitt, I tentatively want to suggest it is an uncritical, essentially Moorean “commonsense” understanding of realism and of Quine’s “holistic” epistemology, which, I argue, badly biases his views of many topics he discusses, specifically (what I’ll discuss here) secondary properties, linguistics, and the possibility of a priori knowledge.... But all of this will be on behalf of stressing what seems to me to be the priority of scientific explanation over commonsense (301)

What starts as a tentative suggestion soon becomes the dominant thesis of Rey’s paper. Let’s start with Rey’s criticism of my inclusion, in Realism and Truth (1984, 1991), of “commonsense” physical entities in my definition of realism: “Devitt … seems to include in the metaphysics he’s putting first, ‘most’ commonsense claims, which, I argue, have no place in serious explanation” (324). So, why do I include these claims about the existence of commonsense entities? Because I aim to reject a much broader range of antirealisms about “the external world” than antirealisms about science. This range includes idealisms, so prominent in the history of philosophy, that deny the independent existence of a world consisting not only of scientific entities but also of more humdrum ones including, for example, chairs and hammers (1991: 246–249). I describe one such doctrine as follows: Constructivism. The only independent reality is beyond the reach of our knowledge and language. A known world is partly constructed by the imposition of concepts. These concepts differ from (linguistic, social, scientific, etc.) group to group and hence the worlds of groups differ. Each such world exists only relative to an imposition of concepts. (1991: 235)

This sort of neo-Kantian relativism is common among intellectuals. Indeed, it has some claim to be the metaphysics of the twentieth century, exemplified by theorists ranging from Nelson Goodman to French feminists (1991: 235–236). And it has had consequences in social and political life, bad ones in my view. Perhaps Rey thinks that such antirealisms are not worth refuting. Well, I disagree. It’s dirty work but someone has to do it. (See Sect. 19.5.1 for more on the definition of “realism”.) That criticism concerns what topics are worth discussing. The next is much more important because it concerns the methodology for discussing any topic:


Devitt’s only real motivation for insisting on color realism is simply a wistful attachment to commonsense …. My point here is not to go to the wall upon denying the reality of all these things. I only want to insist upon the methodological point that it should be scientific explanation, not commonsense, that should be the arbiter of the issue. (305)

So what might be the argument for Devitt’s [rejection of the psychological conception of linguistics]? It’s hard to resist the impression that it’s yet another instance of his commitment to the Moorean commonsense that seemed to be driving his views about color. (307)

despite his claimed commitment to a “naturalized epistemology,” Devitt, like his mentor, Quine, seems to display a peculiar aloofness to the sciences of the domains he discusses…. It often seems as though he and Quine remain(ed) “outside” of the sciences they otherwise esteem, precisely as philosophers traditionally have, listening in from afar, overhearing snatches of their claims, arguing with them from a commonsensical point of view, but not engaging directly with the enterprises themselves. (324)

This is stuff and nonsense, a picture of my methodology with little relation to reality. Rey’s charge is that my methodology is one of “commitment to Moorean commonsense”. In fact, my attitude to common sense, or “folk theory” (as I mostly call it), is very different. My attitude is a consequence of my Quinean epistemological naturalism (1998, 2011c) and is briefly as follows. Folk theory can be a helpful place to start in the absence of science. We then look to science to discover whether folk theory, so far as it goes, is right. And we look to science to go further, much further. Some past folk theories have turned out to be spectacularly wrong. Still, given that conservatism is among the theoretical virtues (Quine and Ullian 1970: 43–53), being in accord with common sense is an advantage for a theory, though, of course, very far from a decisive one. My most explicit and detailed presentation of this attitude to folk theory and common sense is probably in Language and Reality (Devitt and Sterelny 1999: 286–287). The attitude is at least implicit in my many discussions of intuitions (e.g. 1996, 2006c, 2012a, 2014a)63 and exemplified in all my work, from its beginning on reference (1981a), through work on realism (1984, 1991), meaning (1996), linguistics (2006a), and biological essentialism (2008d), to recent work on experimental semantics (e.g. 2011b, 2012b; Devitt and Porot 2018) and to Sect. 19.2.3 of these very responses. In the face of all this obvious and apparently overwhelming evidence that I am very far from a devotee of “Moorean commonsense”, what does Rey offer in support of his claim to the contrary? Well, there are some uncharitable speculations about my motivations (“wistful attachment to commonsense”, “hard to resist the impression … of his commitment to the Moorean commonsense”, etc.), but what we need is evidence that I engage in arguments resting on “Moorean commonsense”. Rey produces none. True, I do use the expressions ‘Moorean’ and ‘common sense’/‘commonsense’ separately quite a few times in arguments in Putting Metaphysics First (2010c). But these uses do not provide the needed evidence.

63  There is a small irony in Rey’s criticism. I have, in effect, criticized as unscientific a linguistic methodology (2006a, 2006c, 2010a, b, 2014a, 2020) that Rey (2013, forthcoming-a, forthcoming-b) defends: the practice of relying on speaker’s meta-linguistic intuitions as evidence.


Let us start with my use of ‘Moorean’. This term is used only to describe a certain strategy in argument. Here is one example. I start my case for putting metaphysics first, hence for Realism, by arguing that “Realism is much more firmly based than the epistemological theses … that are thought to undermine it” (2010c: 62). (This is the only example of my alleged Mooreanism that Rey quotes (302).) I began calling this “a Moorean response” some time back when Steven Hales pointed out that it exemplifies Moore’s famous strategy of responding to a modus ponens with its corresponding modus tollens. Importantly, this use of ‘Moorean’ involves no commitment to common sense, as indeed the following note about the strategy makes clear: Note that the point is not that Realism is indubitable, to be held “come what may” in experience: that would be contrary to naturalism. The point is that, prima facie, there is a much stronger case for Realism than for the [epistemological] speculations. (2010c: 109 n. 19)

I finish my case for putting metaphysics first, hence for Realism, by appealing to naturalism. Turn now to my use of ‘common sense’. My only use of it in an argument is in the one for Realism that Rey quotes (302). In the course of my initial argument for Realism I claim that this Realism about ordinary objects is “the very core of common sense” (2010c: 62). And, I might have added, being in accord with common sense is a plus for Realism, given the virtue of conservatism. However, even this initial argument does not rest its case for Realism on common sense. And this initial argument is treated as far from conclusive: “the Moorean response is not of course sufficient” (2010c: 63). Indeed, how could it be? “Realism might be wrong: it is an overarching empirical hypothesis in science” (1991: 20). So, I follow up with a naturalistic argument for it (2010c: 63–66; see also 1991: 73–82). In sum, my separate uses of ‘Moorean’ and ‘common sense’ provide no basis at all for attributing to me the methodology that Rey disparages with such relish. In particular, I have always embraced “the priority of scientific explanation over commonsense” (301). Contrary to Rey’s charge, there is not a single argument in my work that rests on a “commitment to Moorean commonsense”. Finally, the charge that I “display a peculiar aloofness” to the sciences I discuss (324) is gratuitous. I conclude with some brief comments on the three substantive disagreements to which Rey applies his mistaken thesis about my methodology. Linguistics  I have already responded to Rey’s discussion of linguistics (Sect. 19.1.1). I disagree with him not because I practice the methodology that Rey disparages but because I think that he is wrong about what the science shows. Color  Rey is very dismissive of my (somewhat tentative) neo-Lockean view of colors. The issue has never been important to me and my discussions of it are old

and very brief (1984: 69–71; 1991: 249–251).64 Perhaps if I revisited the issue more thoroughly I would revise my view in the face of contemporary vision science. I obviously agree with Rey that “it should be scientific explanation, not commonsense, that should be the arbiter of the issue” (305). The a Priori  Rey’s disparagement climaxes: “patently preposterous.… [A] mere mantra, a check written on a non-existent bank” (321). Our mentor Quine is also a target. I refer any reader interested in my criticisms of the a priori to my 1998 and 2011d.

19.4.3  Intuitions (Martí, Sterelny, Jackson) How do philosophers of language tell which theory of language is right? Edouard Machery, Ron Mallon, Shaun Nichols, and Stephen Stich (“MMNS”) noted (2004) that theories of reference are tested by consulting referential intuitions and that those intuitions are nearly always those of the philosophers themselves “in their armchairs”. Indeed, the consensus is that consulting intuitions is the method for the philosophy of language in general.65 Consider this statement, for example: Our intuitive judgments about what A meant, said, and implied, and judgments about whether what A said was true or false in specified situations constitute the primary data for a theory of interpretation, the data it is the theory’s business to explain. (Neale 2004: 79)66

MMNS responded critically to this consensus methodology by testing the intuitions of the folk, in particular those of undergraduates in Rutgers and Hong Kong, about cases like Kripke’s Gödel and Jonah ones. Whereas philosophers of language, we can presume, almost all share Kripke’s antidescriptivist intuitions about these cases, the experiments revealed considerable variation in the intuitions of the undergraduates; indeed, the intuitions of Hong Kong undergraduates leaned toward descriptivism in the Gödel case. MMNS presented this result as casting doubt on philosophers’ intuition-based methodology for theorizing about reference and hence, of course, on the resulting theories of reference. Genoveva Martí (2009, 2012) and I (2011b, 2012b, c) criticized MMNS for testing the wrong thing. Experimentalists should not be testing theories against anyone’s referential intuitions but rather testing them against the reality that these  Rey misrepresents an essay of mine, “Global Response Dependency and Worldmaking” (2010c: 122–136), as an “extended discussion of color” (304 n. 9). In fact, the essay is not a discussion of color but of the consequences for realism of the view that all properties are “response-dependent”. One color, redness, does feature but only as a “popular and plausible” example of a responsedependent property (122). I do not argue that it is a response-dependent property. 65  And for philosophy in general, of course. Still some disagree with this consensus: see Deutsch (2009) and Cappelen (2012); for a response, see Devitt (2015a). 66  In his latest work, Neale urges a very different and, in my view, much more appropriate view of the role of intuitions (2016: 231, 234). 64


intuitions are about. We do not rest our biological theories on evidence from the intuitions of biologists about living things, let alone from the intuitions of the folk. We do not rest our economic theories on evidence from the intuitions of economists about money and the like, let alone from the intuitions of the folk. No more should we rest our semantic theories on evidence from the intuitions of philosophers about reference and the like, let alone from the intuitions of the folk. The primary evidence for a theory about a certain reality comes not from such intuitions about the reality but from more direct examinations of that reality. The reality of reference relations is to be found in linguistic usage. So theories of reference need to be confronted with direct evidence from usage. Martí and I are very much on the same side in our view of experimental semantics. Positively, we think that theories of reference should be tested against usage. Negatively, we are critical of the practice of testing against referential intuitions. As we are both fond of saying, the right response to armchair philosophy is not to pull up more armchairs for the folk.67 I shall explore this, and the crucial distinction between testing against referential intuitions and usage, in the next section. But first I want to say more about referential intuitions. Talking about the consensus methodology in “Experimental Semantics, Descriptivism and Anti-descriptivism. Should We Endorse Referential Pluralism?”, Martí remarks: The input the semanticist reflects on is actual usage: the ‘Feynman’ and ‘Columbus’ cases that Kripke uses to illustrate the ignorance and error arguments are real cases, things that, according to Kripke, one hears in the marketplace. (331)

What does Martí have in mind as reflecting on usage? If she is to be right about the actual philosophical methodology exemplified by Kripke, she should have in mind immediate reflection on an observed utterance that yields a referential intuition, a meta-­linguistic judgment that that expression out of the mouth of that person refers to a certain object. But I fear that this is probably not what she has in mind, as we shall see in the next section. Consider what Kripke uses as evidence in his discussion of ‘Feynman’. He thinks that the man in the street [who could not differentiate Feynman from Gell-Mann] may still use the name ‘Feynman’. When asked he will say: well, he’s a physicist or something. He may not think that this picks out anyone uniquely. I still think he uses the name ‘Feynman’ as a name of Feynman. (1980: 81)

The last sentence expresses a referential intuition. And it is surely an intuition about remembered actual utterances, as Martí suggests: Kripke has observed ignorant people using ‘Feynman’ and judges that they successfully referred to Feynman. Kripke’s discussion of the reference of ‘Columbus’ similarly rests on such an intuition. He points out that laypeople’s likely “misconceptions” about Columbus


67  As Martí points out (331 n. 4), neither of us knows the source of this witticism.


should mean, according to the description theory, that their uses of ‘Columbus’ really refer to somebody other than Columbus. Kripke’s intuitive response is: “But they don’t” (1980: 85).68 This practice of testing a theory against intuitions is very different from one of testing it against usage, as we shall see. What lies behind the standard practice of relying on intuitions? The answer suggested by the literature is that competent speakers of a language have some sort of privileged access to the referential facts about their language. I argue that this answer is mistaken (Devitt 1996: 48–54, 72–85; 2006a: 95–121; 2006c, 2012a). Intuitions about language, like intuitions in general, are empirical theory-laden central-processor responses to phenomena, differing from many other such responses only in being fairly immediate and unreflective, based on little if any conscious reasoning. (2006a: 103; 2006c: 491)

A speaker’s competence in a language does, of course, give her ready access to the data of that language, the data that the intuitions are about, but it does not give her privileged access to the truth about the data. My talk of “theory-laden” here can mislead. In “Michael Devitt, Cultural Evolution and the Division of Linguistic Labour”, Sterelny doubts whether it is helpful to characterize people’s intuitions about a domain as “their theory of that domain” (178). Perhaps not. Still we might well think of them, as Martí does, as “theoretical; … the first step that the theorist engages in” (331). And we should see them as mostly the product of experiences of the linguistic world. They are like “observation” judgments; indeed, some of them are observation judgments. As such, they are “theory-laden” in just the way that we commonly think observation judgments are. That is, we would not make any of these judgments if we did not hold certain beliefs or theories, some involving the concepts deployed in the judgments. And we would not make the judgments if we did not have certain predispositions, some innate but many acquired in training, to respond selectively to experiences. So ‘theory’ in ‘theory-laden’ has to be construed very broadly to cover not just theories proper but also these dispositions that are part of background expertise.69

It is not a methodological consequence of my view that referential intuitions should have no evidential role in theorizing about reference. It is a consequence, however, that they should have that role only to the extent that they are likely to be reliable, only to the extent that they are reliable indicators. Sterelny compares referential intuitions unfavorably with knapping intuitions and doubts that the referential ones are reliable: “the reasons we have to trust intuitions seem largely absent when we consider agent’s judgements about reference” (179). In “Language from a Naturalistic Perspective”, Jackson takes intuitions to be “beliefs coming from the exercising of recognitional capacities” (156). This is an apt way to take them.

68 For more evidence of Kripke’s reliance on referential intuitions, see Devitt 2015a.
69 For more on “theory-laden” see Devitt 2012b: 19; 2015c: 37–38.


He rightly sees himself as more sympathetic to the role of intuitions than I am.70 For, the recognitional view still leaves a reliability problem. Is a person claiming to recognize an utterance as a case of referring to x reliable about such matters? That needs to be assessed empirically using independent evidence about reference relations obtained from linguistic usage. But, as noted, we need that independent evidence anyway.

Are the referential intuitions of philosophers likely to be more reliable than those of the folk? Sterelny is dubious again: “I doubt whether philosopher’s intuitions have much more weight” (179). Now, it is a methodological consequence of my view of intuitions that, insofar as we rely on referential intuitions rather than usage as evidence, we should prefer those of philosophers to those of the folk because philosophers have the better background beliefs and training: they are more expert.71 This is not to say that the folk may not be expert enough in judging reference in a particular situation. Whether or not the folk are is an empirical question which the theory does not answer. The theory tells us that “the more expert a person is in an area, … the wider her range of reliable intuitions in the area” (Devitt 2010b: 860) but this provides no guidance as to the level of expertise required for a person to be reliable about any particular sort of fact. Still we can say that, from the perspective of the theory, it would be no surprise if the folk, though perhaps reliable enough in judging reference in humdrum actual situations, were rather poor at doing so in fanciful hypothetical cases like Gödel (Devitt 2011b: 421, 426; 2012b: 25–26).

Why does Sterelny doubt that we should prefer philosophers’ intuitions?

It is true that philosophers are more practiced: they spend a lot of time and thought thinking about problematic hypothetical cases. But practice only leads to greater skill when coupled to feedback about success and failure …. Philosophers’ practice in thinking about referential examples is not coupled to any correcting feedback loop. (179)

This concern about lack of reliable feedback is empirically well-based (Weinberg et al. 2010: 340–341). Still, as I have pointed out, “philosophers of language are confronted informally by language use that does provide feedback” (2012b: 25; see also p. 29). It is time to turn to the experimental evidence and testing against usage.

70 I take it that his greater sympathy stems from his view that the intuitions in question are pieces of “conceptual analysis” and are a priori.
71 This line of thought yielded an example of what has become known as ‘the Expertise Defense’ against the findings of MMNS. The Expertise Defense has led to a lively exchange of opinion: Weinberg et al. 2010; Machery and Stich 2012; Machery et al. 2013; Machery 2012a; Devitt 2012b; Machery 2012b; Devitt 2012c. See also the following exchange arising out of my analogous claim that we should prefer the grammatical intuitions of linguists over those of the folk: Devitt 2006a: 108–111; 2006c: 497–500; Culbertson and Gross 2009; Devitt 2010b; Gross and Culbertson 2011.


19.4.4  Experimental Semantics (Martí, Sterelny)

First, what is the linguistic usage in question? It consists of the largely subconscious processes of producing and understanding linguistic expressions; of moving from a thought to its expression and from an expression to its interpretation.72 These processes are very speedy and might well be called “intuitive”. But, even if they express judgments – and many may express other mental states – the expressions that we should primarily count as evidence are not expressions of referential judgments, hence not expressions of referential intuitions; rather they are expressions of judgments about the nonsemantic world.

Next, what is it to test a linguistic theory T against usage? Theorists reason as follows. T predicts that in condition C competent speakers will utter E. Then if speakers do, that confirms T; if speakers do not, that disconfirms T. Similarly, if T predicts that speakers will not utter E in C and they do, that disconfirms T; if they do not, that confirms T. This is testing T against usage.

How can we test theories of reference against usage? One way is to look to the corpus, the linguistic sounds and inscriptions that competent speakers produce as they go about their ordinary business without prompting from experimenters. Ironically, many of the vignettes used by experimentalists, from MMNS onward, to test theories of reference, vignettes that are all parts of the corpus, themselves provide evidence against “classical” description theories, independent of anything that the experiments using these vignettes show about reference. For, they include expressions that the description theory predicts a competent speaker would not be disposed to utter (Devitt 2012b: 27–28, 2015c: 47–49; Devitt and Porot 2018: 1562). Still the theorist cannot count on such windfalls; getting evidence about reference from the corpus is usually going to be very hard. Fortunately, there is another way to get evidence from usage: we can apply the technique of elicited production, taken from linguistics (Thornton 1995: 140). We describe situations and prompt subjects to say something that the theory predicts they will or will not say. Then we reason in the testing-usage way, as described in the last paragraph.73

Now a theorist who reasons in this way is “reflecting on usage”, in some sense, but this reflection is very different from the sort, described in the last section, that yields referential intuitions. For, taking observed usage as evidence by the sort of reasoning just described is very different from the practice of taking that usage as evidence by making immediate judgments about its reference. And this difference is important in assessing Martí’s claim, quoted in the last section, about what semanticists do. For, though philosophers probably do some informal testing of theories of reference against usage in the way described, as noted at the end of the last section, what they primarily do is test theories against referential intuitions (Devitt 2015c: 39–43).

72 So I include understanding under “usage”.
73 Linguists also get evidence from usage by testing reaction times, eye tracking, and electromagnetic brain potentials.


Our discussion of Kripke illustrates the latter. In passages like the following, Martí seems to have a different take on what Kripke does:

There is no doubt that people are referring to Feynman when they use ‘Feynman’ even if they do not attach a uniquely identifying description to the name. And there is no question that people are referring to Columbus, and not to some Viking that in the eleventh century set foot in the New World, when they use ‘Columbus’. These, and other similar real cases of usage that Kripke mentions, provide the data that supports the conclusion against descriptivism. (337)

What Martí finds indubitable here are referential intuitions, and Kripke is testing the description theory against those. Yet Martí seems to take Kripke to be testing the theory against usage. (I hope this appearance is misleading.) For Kripke to be actually testing the theory against usage, say an observed use of ‘Feynman’, he would have to reason like this: the theory predicts that a competent user of ‘Feynman’ would not utter the name in circumstances C. X is a competent speaker and did utter ‘Feynman’ in C. So this counts against the theory. This is manifestly not what Kripke is doing.74

Nicolas Porot and I have recently conducted experiments using elicited production to test classical description theories of proper names against usage (Devitt and Porot 2018). Our experiments were on Gödel and Jonah cases. The results were decisively antidescriptivist. So too were those of Domaneschi et al. (2017) who also used elicited production. We now have powerful evidence from usage in support of Kripke’s antidescriptivism about proper names.75

I noted Sterelny’s doubts about the reliability of the folk’s referential intuitions. Experiments had given reasons for such doubts even before these recent tests of usage. We summed these reasons up as follows:

First, these intuitions have proved quite susceptible to wording effects. Although the results of MMNS have been replicated several times, it has been found that small changes in wording yield somewhat different results.76 Second, there is worrying evidence that the experimental task may be beyond many participants. The MMNS prompt asks participants to say who “John”, a character in their vignette, “is talking about” when he “uses the name ‘Gödel’”. In one experiment (Sytsma and Livengood 2011, pp. 326–7), participants who had answered this question were then asked how they had understood the question: Is it about who John thinks he is talking about or about who John is actually talking about? Remarkably, 44 out of 73 chose the former, providing clear evidence that they had misunderstood the question. (If we ask whether it rained at Trump’s inaugural, we are not asking whether Trump, or anyone else, thinks it rained.) (Devitt and Porot 2018: 1554–1555)

74 For more on the distinction between testing against referential intuitions and against usage, see Devitt 2015c: 39–45.
75 Sterelny mentions an earlier elicited production experiment that Kate Devitt, Wesley Buckwalter and I used to test usage. Sterelny thinks that “it is very similar to eliciting intuitions” (179). I think not, but the procedure had other serious problems (2015c: 51–53) and I abandoned it. I think that the experiments cited in the text do not have those problems and are different from eliciting intuitions. Sterelny is quite right though that these experiments do not discriminate “between causal descriptive theories and causal theories of reference” (179n). We don’t need experiments to reject causal descriptivism.
76 See “Clarified Narrator’s Perspective” in Sytsma and Livengood 2011; “reverse-translation probes” in Sytsma et al. 2015; “Award Winner Gödel Case” and “Clarified Award Winner Gödel Case” in Machery et al. 2015.



The most telling count against folk intuitions comes, of course, from the recent tests of usage. These tests provide powerful evidence for antidescriptivism. So, for the folk intuitions to be reliable, they would have to be antidescriptivist. But studies from MMNS on show that the folk intuitions vary greatly and are far from consistently antidescriptivist. However, Sterelny’s doubts about the referential intuitions of philosophers have not been confirmed experimentally. Given the response to Naming and Necessity in the literature, we can reasonably assume that philosophers, particularly philosophers of language, generally share Kripke’s referential intuitions. And this assumption has received some experimental support: in an experiment on a Gödel case, the intuitions of semanticists and philosophers of language were decisively antidescriptivist (Machery 2012a).77 The tests of usage show that these intuitions are right. Aside from testing referential intuitions and testing by elicited production, one might do a truth-value judgment test. In such a test an experimenter asks subjects to assess the truth value of some statement about the vignette. Machery, Christopher Olivola, and Molly De Blanc (2009; “MOD”) did just that in response to Martí’s criticism that MMNS should have tested usage (2009). MOD claimed that their truth-value judgment test was a test of usage. Martí disagreed (2012). I have sided with MOD on this, arguing that the test is “a somewhat imperfect” test of usage, its imperfection lying “in the fact that it primes a certain usage” (Devitt and Porot 2018). Martí is not quite convinced: MOD’s questions are not as close to actual use as Devitt and Porot suggest, for they still require explicitly that the subjects think about how another speaker is using a name and what she is referring to when she uses it, a reflection that, as I have argued, is theoretical, and hence it is not obvious that it provides direct evidence of disposition to use. (335)

We argued that MOD’s test does not require theoretical reflection from the subjects because of the “disquotational” property of the truth term (Devitt and Porot 2018: 1559–1561). The results in seven of our eight tests of usage (elicited production and truth-value judgment) were significantly antidescriptivist (and we think that we explained away the exception well enough). Still Martí surmises “that some semantic revisionists would disagree” with our antidescriptivist conclusion on the ground that “a small portion of subjects … appear to use names descriptively” (336). We think that what those few subjects do is better thought of as “noise”.

77 For more discussion of this experiment, see Devitt 2012b: 23–24.


19.5  Metaphysics

19.5.1  The Definition of “Scientific Realism” (Godfrey-Smith)

Peter Godfrey-Smith and I have similarly realist views about science, as one would expect from Sydney boys, but we have been arguing for years about how best to define that realism. Any definition faces an obvious problem from the start: realism’s opponents are so various; as Godfrey-Smith puts it in “Scientific Realism and Epistemic Optimism”, “a diverse family of views – verificationism, radical constructivism, milder views like van Fraassen’s constructive empiricism (1980), and so on” (352). Rejecting this diverse family leads to two different strands in realist thinking: first, what Godfrey-Smith nicely calls “a generalized optimism about science, especially current science”; second, a “metaphysical” strand, famously rejected by idealism, about “the mind-independence of (much of) the world” (346). I call these two strands, “the existence dimension” and “the independence dimension” of realism.78 They are somewhat strange bed-fellows and so combining them in one doctrine can seem “a bit artificial” (347). Because of this, I contemplated separating them into two doctrines when writing Realism and Truth but decided against, partly because of terminological conservatism and partly because it would make things too complicated.

Also, it is worth noting that the two strands are not as unrelated as they might seem. For, a traditional route to idealism presumed optimism about our knowledge of the familiar world of stones, trees, cats, and the like. The route started, in effect, with an epistemological theory, a theory about what we could and could not know. It went on to argue that if the familiar world had the mind-independent nature required by realism, then we could know nothing about it: that view is “the very root of Scepticism” (Berkeley 1710: sect. 86). But – here’s the optimism – we obviously do know about it. So the familiar world is not mind-independent.79 (This exemplifies the sort of argument that “Putting Metaphysics First” is against; see Sect. 19.4.1.)

In any case, I settled on definitions of “Scientific Realism” and “Strong Scientific Realism” that combined the two strands. A recent version of the strong doctrine is:

SSR: Most of the essential unobservables of well-established current scientific theories exist mind-independently and mostly have the properties attributed to them by science. (2005b: 70)

I have long had a worry about this: “Couldn’t someone be a realist in an interesting sense and yet be sceptical of the contemporary science on which realism … is based?” (1991: 20). Godfrey-Smith puts the worry like this, “one can have a realist metaphysics and be pessimistic about science” (348); he mentions Popper as an example (347).

78 Godfrey-Smith emphasizes the importance of being careful about “mind-independence” (352). I agree and hope that I have been (1991: 14–17, 246–258).
79 As Fodor put it nicely (to Rey), “Idealism is the effort to buy knowledge by selling off metaphysics.”


My concern about this led me then, and later (1997: 303–304), to contemplate definitions to accommodate these “realists”. Godfrey-Smith also tried to accommodate them (2003). Still, I stuck with the likes of SSR for two reasons. (1) It seemed to me a bit paradoxical to call a doctrine “realism” that had no commitment to the existence of anything in particular. (2) It seemed to me unhelpful, given the enormous historical role of skepticism in the realism debate about the observable as well as unobservable world, to call a doctrine that does not confront skepticism “realism”.

Godfrey-Smith has another worry. Quantum mechanics raises the possibility of a well-established scientific theory telling us “something very metaphysically surprising” (348), namely, that we should reject the independence dimension. I have, of course, noted this distressing possibility (1984: 122; 1991: 132; 2005b: 69 n. 2) but I don’t see it as posing a problem for the definition. In my view, realism, however defined, is “an overarching empirical hypothesis” (1984: 18; 1991: 20) and so ought to be open to scientific falsification.

Finally, noting the qualifications in SSR – “most”, “essential”, “well-established” – Godfrey-Smith makes two objections to my definition:

(i) There may be diversity across cases with respect to the appropriate level of confidence. (ii) There may be hidden diversity in what is being claimed by the science, in a way Devitt’s formulation does not accommodate. (349)

In thinking about these objections, it is important to keep firmly in mind the point of definitions like mine. Godfrey-Smith thinks that such “a rough and coarse-grained summary … might have a role in some discussions (discussions of the relation between science and religion, for example)” (349) but not, apparently, in philosophical discussions, at least not in the one I am engaged in. But why not? The point of these definitions is to come up with a doctrine, admittedly vague, rough and ready (1984: 11, 23; 1991: 13, 24), that is nonetheless, on the one hand, clearly rejected by members of that “diverse family” that Godfrey-Smith mentions; and, on the other hand, clearly the subject of current debates like that over the “no-miracles argument”. SSR fits the bill.

Consider objection (i). Godfrey-Smith rightly notes that “we have different kinds of evidence, and different shortcomings and reasons for doubt, in different fields” (349). As a result, he thinks that SSR and its ilk involve “an excessive sculpting and refining of something that can only ever be a very rough and ready summary” (349). But what changes would Godfrey-Smith recommend that avoid this “kind of false rigor” whilst still fitting the bill?

Objection (ii) is based on the role of modeling in science and has two themes. The first concerns “approximation and idealization” (350). SSR presumes, as Godfrey-Smith notes, that in science “[w]e refer to objects and attribute properties to them” (350). Yet “[i]n many fields, science aims to develop models that are good approximations” (350). So drawing conclusions about what entities science is really committed to is a subtler business than SSR presumes. But, again, given the point of SSR, I think that these subtleties can be ignored.


The second theme is “the primacy of structure rather than entities” (350). Scientists often think that their models show not so much that certain entities exist but that “the structure specified by these models is usefully similar to the structure of the system we are trying to understand” (350). This is a serious worry because it challenges SSR’s truth! I am not convinced by this challenge and so want to hold to my definition. But clearly the realistically inclined philosopher who is convinced should fall back to a definition of structural realism, as John Worrall (1989), and perhaps Godfrey-Smith, already have. So, for the purposes of SSR, I think it is in order to overlook the subtleties Godfrey-Smith mentions. This is not to say, of course, that it is in order to do so to serve other respectable philosophical purposes; for example, the purpose of determining precisely what a particular scientific theory tells us about reality.

19.5.2  Biological Essentialism (Godman and Papineau)

19.5.2.1  Introduction

What is it to be a member of a particular biological taxon? In virtue of what is an organism, say, a Canis lupus? What makes it one? I take these to be various ways to ask about the ‘essence’, ‘nature’, or ‘identity’ of a particular taxon. The consensus answer in the philosophy of biology, particularly for taxa that are species, is that the essence is not in any way intrinsic to the members but rather is wholly relational, particularly, historical. Thus, in their excellent introduction to the philosophy of biology, Sex and Death, Sterelny and Paul Griffiths have this to say: there is “close to a consensus in thinking that species are identified by their histories” (1999: 8); and “the essential properties that make a particular organism a platypus … are historical or relational” (1999: 186). Samir Okasha endorses the consensus, describing it as follows: we “identify species in terms of evolutionary history … as particular chunks of the genealogical nexus” (2002: 200). Philosophers of biology like to emphasize just how different their historical essentialism is from the influential views of Kripke (1980) and Putnam (1975).

In “Resurrecting Biological Essentialism” (2008d), I rejected the consensus in arguing that there is an intrinsic component to a taxon’s essence. I called that doctrine, “Intrinsic Biological Essentialism” (IBE).80 Still I accepted that there was also an historical component to a taxon’s essence and have recently argued for this component in “Historical Biological Essentialism” (2018). As indicated by the title of their interesting paper, “Species have Historical not Intrinsic Essences”, Marion Godman and David Papineau (“G&P”) argue for the consensus historical doctrine and reject my IBE.

80 This article has been criticized by Matthew Barker (2010), Marc Ereshefsky (2010), Tim Lewens (2012), Sarah-Jane Leslie (2013), and Matthew Slater (2013), all of whom are part of the consensus. I have responded (forthcoming-c).


My concern was with the essence of taxa thought to be in any one of the Linnaean categories. However, the custom in discussions of biological essentialism is to consider only species. I think this is a mistake but I shall go along with it here. “Resurrecting” presented a positive argument for IBE and criticized the historical consensus. The positive argument had two parts (2008d: 351–355), which I now summarize.

19.5.2.2  Summary of Argument for Intrinsic Biological Essentialism (IBE)

The first part concerned the biological generalizations about the phenotypic properties of species and other taxa; generalizations about what they look like, about what they eat, about where they live, about what they prey on and are prey to, about their signals, about their mating habits, and so on. I argued that these generalizations have explanations that advert to intrinsic components of essences. In presenting this argument, I emphasized Ernst Mayr’s (1961) distinction between “proximal” and “ultimate” explanations. (I preferred Philip Kitcher’s (1984) terms for this distinction, “structural” vs “historical”, but G&P prefer Mayr’s and so I will go with that.) The explanations that featured in my argument were proximal ones about the underlying developmental mechanisms in members of a taxon that make the generalizations true. In contrast, ultimate explanations tell us how members of the taxon evolved to have such mechanisms.

In the second, related, part of the earlier argument, I claimed that a taxon’s intrinsic essence explains why it is explanatory for an organism to be in a certain taxon:

the generalizations we have been discussing reflect the fact that it is informative to know that an organism is a member of a certain species or other taxon: these classifications are “information stores” (Sterelny and Griffiths 1999, p. 195). But being a member of a certain taxon is more than informative, it is explanatory. Matthen points out that “many biologists seem committed to the idea that something is striped because it is a tiger” (1998, p. 115). And so they should be: the fact that an individual organism is a tiger, an Indian rhino, an ivy plant, or whatever, explains a whole lot about its morphology, physiology, and behavior. (2008d: 352)

Why does it? Because the essential nature of a taxon, to be discovered by biologists, causes its members, in their environment, to have those phenotypic properties. What nature? I argued that if our concerns are proximal, so that they are concerns with a nature that causes a tiger’s development into an organism with those properties, the nature must be intrinsic.

Sarah-Jane Leslie claims plausibly that the traditional argument for essentialism “makes critical use of intuitions” (2013: 109). As can be seen, my argument for IBE does not. It makes critical use of biological explanations. (I should have emphasized this in “Resurrecting”.) G&P also emphasize explanation:

Essential properties are properties that explain all the other shared properties. For any Kind C, there will be some central common feature E possessed by each C, a feature that gives rise to all the other properties F shared by the Kind. The essential property thereby explains why the Kind supports multiple generalizations. (358)


So, we are very much in agreement on this methodologically significant point. Yet we end up with very different conclusions.

19.5.2.3  G&P on Alice and Artifacts

G&P think of species as “historical kinds”, which they contrast with “eternal kinds”, using terms they take from Ruth Millikan (1999, 2000). They rightly think that we can throw light on species by considering some other kinds. So they begin their argument by discussing the essence of a range of kinds that they think are also “historical”. And they claim that I think that species are “eternal”. I find these terms quite unhelpful and so will not argue about their application to any kind. My focus in discussing species and some of these other kinds will be very simple: Do these kinds, whether appropriately called “historical” or “eternal”, have an intrinsic component to their essence? For that is what is at issue with IBE. Whether the essence of species also has an historical component is not at issue. The clear difference between us is that I think that essential intrinsic properties answer proximal questions whilst G&P do not; they think that essential historical ones answer those questions. G&P begin their argument as follows:

Consider all the different copies of Alice in Wonderland, including the paperback with a front page torn off on Marion’s bookshelf, the hardback in David’s study, and the many others in numerous libraries and book stores across the world. These instances all share their first word, their second word, … and so on to the end. They also share the same list of characters, the same plot, and the same locations. We thus have a wealth of generalizations of the form All copies of Alice in Wonderland are F. Copies of Alice in Wonderland form a Kind. But the common properties of this kind are certainly not explainable by any common physical essence…. Rather, all these instances are members of the same Kind because they are all copies of an original. Their shared features are all due to their common descent from the original version written by Lewis Carroll. It is purely this chain of reproduction, not any common intrinsic property, that explains the shared features. (359)

This is where G&P introduce their talk of copying, which is an important sign of their approach to essentialism. And, not surprisingly, copying does have a place in discussing the essence of a “copy” of Alice. I shall set Alice aside for a moment. But here’s a quick initial thought: talk of x being essentially a copy of y seems a very unpromising way to reject an intrinsic component to the essence of x. For, to be such a copy, x must share the intrinsic properties of y! G&P follow their remarks about Alice with some about artifacts: Many artefacts are like literary works in this respect. Earlier we alluded to all the features common to Vauxhall Zafiras…. But here again the commonalities are not explained by some common intrinsic property. While the Zafiras do have many physical properties in common, none of these is distinguished as the source of all the other common features. Rather their many similarities stem from their all being made according to the same original blueprint. They are constituted as a Kind by their common historical source. (359)

After mentioning some other examples, they sum up their view of historical kinds:


all examples will involve three central ingredients: 1) the existence of a model, 2) new instances produced in interaction with the model or other past instances, 3) this interaction causes the new instances to resemble past instances. A chain of reproduction thus generates the relevant historical relations that ground and explain the Kind. (360)

19.5.2.4  Implements

Now G&P’s claim is about “many” artifacts, not all artifacts. But it is helpful to start by considering the essence of artifacts in general. And the first thing to note is that among artifactual kinds only the typical “trade-marked” ones like the Vauxhall Zafira or the iPhone are essentially artifacts. Consider “generic” artifacts like cars or smartphones.81 These are, of course, made by us and they are so complicated that it may seem as if they have to be made by us. This makes it harder to see what is essential to being one of those things. So, let us consider something much simpler: a paperweight. To be a paperweight an object must have a certain function, the function of securing loose papers with its weight. Paperweights often have that function because they are artifacts designed to have it. But they often get that function in a very different way: a perfectly natural object like a stone or a piece of driftwood becomes a paperweight by being regularly used to secure papers. So, whereas having a certain function is essential to being a paperweight, being an artifact is not. Similarly, being an artifact is not essential to being a doorstop, a hammer, a pencil, a chair, or even a car or a smartphone. Putnam once remarked that chairs might have grown on trees. So might cars and smartphones! We need a word for these functional objects. I call them “implements”.

So what is essential to an object’s being a particular kind of implement is having a certain function. An object has that function in virtue of two properties. First, the object’s relation to us or to some other organism: a car was made by us for a certain purpose and a nest was made by a bird for a certain purpose; a paperweight found on a beach is standardly used by us for a certain purpose. So relations to organisms are essential to kinds that are implements. But, it is important to note, not one of the relations G&P pick out in discussing Alice and the Zafira is essential to generic implements: these implements need not be “copies”, have “a chain of reproduction”, have “a model”, have a “common historical source”, or be made “according to [an] original blueprint”. Importantly, the second property in virtue of which an object has the function of an implement is intrinsic. For, the object must have any intrinsic property required to perform that function; thus, a paperweight has to have an intrinsic constitution that enables it to secure loose papers. No matter how much we intended something to be a paperweight, it won’t be one unless it has that constitution; thus, a feather could not be a paperweight. And there may be more to the intrinsic component.

19.5.2.4  Implements Now G&P’s claim is about “many” artifacts not all artifacts. But it is helpful to start by considering the essence of artifacts in general. And the first thing to note is that among artifactual kinds only the typical “trade-marked” ones like the Vauxhall Zafira or the iPhone are essentially artifacts. Consider “generic” artifacts like cars or smartphones.81 These are, of course, made by us and they are so complicated that it may seem as if they have to be made by us. This makes it harder to see what is essential to being one of those things. So, let us consider something much simpler: a paperweight. To be a paperweight an object must have a certain function, the function of securing loose papers with its weight. Paperweights often have that function because they are artifacts designed to have it. But they often get that function in a very different way: a perfectly natural object like a stone or a piece of driftwood becomes a paperweight by being regularly used to secure papers. So, whereas having a certain function is essential to being a paperweight, being an artifact is not. Similarly, being an artifact is not essential to being a doorstop, a hammer, a pencil, a chair, or even a car or a smartphone. Putnam once remarked that chairs might have grown on trees. So might cars and smartphones! We need a word for these functional objects. I call them “implements”. So what is essential to an object’s being a particular kind of implement is having a certain function. An object has that function in virtue of two properties. First, the object’s relation to us or to some other organism: a car was made by us for a certain purpose and a nest was made by a bird for a certain purpose; a paperweight found on a beach is standardly used by us for a certain purpose. So relations to organisms are essential to kinds that are implements. But, it is important to note, not one of the relations G&P pick out in discussing Alice and the Zafira are essential to generic implements: these implements need not be “copies”, have “a chain of reproduction”, have “a model”, have a “common historical source”, or be made “according to [an] original blueprint”. Importantly, the second property in virtue of which an object has the function of an implement is intrinsic. For, the object must have any intrinsic property required to perform that function; thus, a paperweight has to have an intrinsic constitution that enables it to secure loose papers. No matter how much we intended something to be a paperweight, it won’t be one unless it has that constitution; thus, a feather could not be a paperweight. And there may be more to the intrinsic component. An implement kind may have essential intrinsic properties beyond those necessary for its function, properties that distinguish it from other implement kinds with the same 81

 I draw on my 2005a: 155–156.


An implement kind may have essential intrinsic properties beyond those necessary for its function, properties that distinguish it from other implement kinds with the same function: pencils and pens are both writing instruments but they have different intrinsic essences.

Why should we believe these essentialist claims? They are intuitively plausible, I think, but we can do better than that. As G&P point out, “essential properties are properties that explain all the other shared properties”. So we should look to such explanations to support our essentialist claims here as with species. Consider two examples. Why are paperweights useful weapons? Because the essential function of a paperweight requires it to have intrinsic properties (which we could spell out) that make it a good weapon. Why is it easier to erase writing from a pencil than from a pen? Because of the essential intrinsic difference between pencils and pens (a difference we could spell out).82

Turn now to G&P’s examples. First, the Zafira. This is a typical trade-marked implement. Unlike the generic car, it is indeed part of the essence of a Zafira that it comes from a common historical source and is made according to an original (at least implicit) blueprint.83 So, in that respect, G&P have the essence of a Zafira right. But in all other respects, they have it wrong. First, but not important, it is not essential that a Zafira be a “copy”, have “a chain of reproduction”, or have a “model”. Second, and very important, many properties of Zafiras are “explained by some common intrinsic property”. A Zafira is essentially a car and so its essence includes all the essential properties of cars. So it must have the function of a car. So, first, it must be appropriately related to our purposes. Second, it must have all the intrinsic properties essential to functioning as a car; for example, having an engine, brakes, and seats. Something without those intrinsic properties – for example, a paperweight or a smartphone – could not be a car. Third, just as a pencil has an intrinsic essence that distinguishes it from other writing implements, so too does a car have an intrinsic essence that distinguishes it from other vehicles: from a van, truck, bus, pram, golf buggy, … Furthermore, a Zafira is a special kind of car and so there is even more to its intrinsic essence than to that of a generic car. It has to have the particular sort of engine, brakes, and seats peculiar to a Zafira; for example, it has to have the special 7-seat arrangement that is a strong selling point.

How do I know all this? Once again, as with species, we should look to explanation not just intuition. (a) Zafiras, like cars in general, are useful for suburban shopping. Why? The explanation is to be found in the intrinsic properties we have just alluded to. And, we should note, the historical fact that Zafiras are made by Vauxhall is irrelevant to that explanation: it would make no difference if the car had been made by Ford or not made at all, just found. (b) According to the advertisement, Zafiras are versatile, easy to drive, comfortable for a family, and disabilities friendly. Suppose they are. Each of these properties of the Zafira would be explained by its intrinsic essential properties. Once again, the relation of Zafiras to Vauxhall is beside the point.

82 This discussion of generic implements bears on Millikan’s discussion of chairs (1999: 56; 2000: 21).
83 I say that the Zafira is a “typical” trade-marked implement because there may be some atypical ones that are not made at all; e.g. found objects ingeniously marketed as ornamental paperweights.


So far, then, the trade-marked Zafira has just the same sort of three-part essence as the generic car: a relation to our purposes and some intrinsic properties that together give the Zafira its essential function as a car; and some other intrinsic properties distinctive of the Zafira. But there is a bit more to the essence of Zafiras. I agreed with G&P that, unlike cars in general, Zafiras must have a “common historical source”: they must be made by Vauxhall. Why must they? As we have noted, this relational property of Zafiras is irrelevant to the explanations we have been considering so far. But, it is not irrelevant to some other explanations; it may, for example, be part of explanations of the reputation and desirability of Zafiras (think of the fact that an iPhone is made by Apple), of their repair record, and so on. But even with these explanations, essential intrinsic properties are likely to be central.

I also agreed that Vauxhall must have made Zafiras according to an original (at least implicit) blueprint. But this requirement brings together what we have already identified as essential without adding anything new. (1) We have just agreed that Zafiras must be made by Vauxhall. (2) To say that they must be made according to that original blueprint is just to say that they must be made with the properties specified by the blueprint. And those properties include the earlier-mentioned essential intrinsic properties of the Zafira, the ones that carry the burden of explaining such commonalities of Zafiras as their special 7-seat arrangement. In brief, to say that the essence of Zafiras is to be made by Vauxhall according to the original blueprint is not to deny that the essence has an intrinsic component; rather it is to entail that it has. The essence that G&P propose is up to its ears in intrinsic properties.84

In conclusion, the view that the commonalities of a Zafira can be explained by a purely relational essence is, as I said in “Resurrecting” about an analogous view of the commonalities of a species, “explanatorily hopeless” (2008d: 363).

The story for Alice is, as G&P say, much like that for the Zafira. But they have both stories wrong. It is essential to a copy of Alice that it is a copy of the original manuscript produced by Lewis Carroll just as it is essential to a Zafira that it was made from Vauxhall’s blueprint. But intrinsic properties are essential in both cases. For something to be a copy of Alice, it has to be a linguistic item with the semantic properties of Lewis Carroll’s manuscript. That is what it is to be a copy and anything without such properties is simply not a copy of Alice. And it is those intrinsic linguistic properties – the “first word, … second word, … list of characters, … plot, and ... locations” – that explain such commonalities as that copies of Alice show great insight into the difference between quantifiers and names.85

84 They later mention another artifact, the pound coin: “We don’t think all the pound coins pressed from some mould must have some common inner essence to explain why they share their many other joint properties” (365). On the contrary, a common inner essence is necessary to explain their behavior in ticket machines and many other properties. Of course, the most important property of the coin is its function of being worth one pound, which it has because of its relation to the Bank of England. Still, there is an intrinsic component to its essence.


19.5.2.5  Species

After a brief discussion of some other kinds, G&P finally turn to biological essentialism (360). Their argument against IBE to this point is the suggestion that species are like other kinds, ones they classify as “historical”, in not having an intrinsic component to their essence. I think that they are right to look to other kinds for guidance. But what we should learn from studying these others, supported by my discussion of Zafiras and Alice, is that it is unlikely that the essence of any explanatorily interesting kind is wholly relational. (Being Australian is my favorite example of an explanatorily uninteresting kind that is probably wholly relational (2008d: 346).) In any case, in the end, the rejection of IBE demands arguments about species themselves, not about other kinds. We need to be shown that the proximal explanations of species commonalities do not rest on intrinsic properties. So that is what we now look for. G&P contrast their position on species with mine:

Why do their members all share so many properties? As we have seen, one answer would be to assimilate species to eternal Kinds, as Devitt does, and appeal to the common genetic make-up intrinsic to each member. But an alternative would be to view species as historical Kinds, and attribute their shared properties to their common ancestry, with their genetic make-up simply being part of the species’ copying mechanism. (363)

My examples of the sorts of shared phenotypic properties that we are talking about include: “ivy plants grow toward the sunlight … polar bears have white fur … Indian rhinoceri have one horn and Africa rhinoceri have two” (2008d: 351). And my claim is that an essential intrinsic underlying property of the kind in question is central to the proximal explanation of these commonalities, to the explanation of why each of these organisms develops to have the property specified for its kind. That intrinsic component of the essence, together with the organism’s environment, causes the organism to have the specified property (352).

G&P give two reasons for rejecting my view and for taking species to be “fundamentally historical”. But before considering them I want to address objections that came up in a helpful, though puzzling, private exchange with Papineau. Here is my version of the exchange:

Objection  Your “essence” may explain why members of a species S have phenotypic property P or Q in common but what we want from an essence is to explain the fact that they “display a great number of commonalities” (362); we seek the common cause of a tightly correlated group of properties.

85 My discussion of blueprints and copying bears on Millikan’s discussion of reproduction and copying in discussing the nature of kinds (1999: 54–56).


Response  I agree that we want to explain that as well but if, as I have argued, a partly intrinsic essence explains P and it explains Q, then it follows that it explains the tight correlation of these two properties in S!

Objection  But, your “essence” is not a common cause of these properties. Rather it is a conjunction of separate causes, one for P, one for Q, and so on. The essence should be a single common cause.

Response  Let us go along with this intuitive talk of a “single” cause, for the sake of argument. If this requirement were good, then neither S nor Zafiras nor Alice would have an essence. For, as a matter of fact, one underlying property of S’s ancestors causes P, another, Q, and so on; one property of the blueprint is responsible for a Zafira’s brakes, another, for its special 7-seat arrangement; some semantic properties of the original Alice lead to its copies having one admirable literary property, others, another. Clearly, the singularity requirement is no good.

Turn now to G&P’s two reasons. The second of these concerns microbial kinds. I did not address such kinds at all in “Resurrecting” and I later acknowledged that IBE may not apply to them (forthcoming-c). So I shall set the second reason aside and attend only to the first reason, “non-zygotic inheritance”. Does it show that IBE is wrong about the non-microbial biological world? I think not. Indeed, I’m puzzled that they think that it does.

G&P start their discussion of non-zygotic inheritance by pointing out that “the children of a skilled forager” may not inherit their skills through their genes but rather “in other ways – for instance, by her explicitly training them, or by their implicitly copying her tricks. In this sense, there is nothing at all problematic about the inheritance of acquired characteristics” (363). Indeed, there is not, and such inheritance is common in nature, as G&P bring out nicely. But this throws no doubt on IBE. For, the training and copying are part of the environment’s causal role in the development of phenotypic properties. As G&P note (364), I emphasize the obvious fact that “explanations will make some appeal to the environment” as well as to intrinsic essences (2008d: 352).

At this point, I’m sorry to say, G&P seem to go right off the rails: “nothing requires characteristic traits shared by species members to depend on genetic inheritance at all” (364). Surely they can’t really mean “at all”?! But they do:


Why do all tigers grow up the same, and different from zebras, even though tigers and zebras are subject to just the same environmental influences? What could explain that, except their shared genetic make-up? Well, the answer is that tigers and zebras aren’t subject to just the same environmental influences. Tigers are raised by tigers, while zebras are raised by zebras, and many of their species-characteristic properties can be due to this in itself – without any assistance from their genes. (364)

Without any assistance?! Let’s not beat about the bush: this is a Big Mistake. If it weren’t for their shared genetic make-up, no tigers would acquire any traits from interaction with their parents (or with anything else). If G&P were right, a zebra brought up by tigers would have all the traits that tiger cubs acquire from interaction with their parents. That is surely not so. Chimps brought up by humans famously fail to learn a human language. (Indeed, language acquisition is a good example of the combined action of genes and environment in acquiring a trait.) Young cuckoos don’t grow up like their foster parents. So I see nothing in non-zygotic inheritance that counts against IBE.

Aside from that, what about all the other commonalities of species? What explains why Indian rhinos have one horn and African rhinos have two? Or why tigers and zebras have stripes? These are certainly not traits acquired from watching parents. No case has been presented against the view that an intrinsic essence provides the proximal explanation of these traits.

There is a puzzle about this discussion of non-zygotic inheritance. It is offered to support the historical alternative to IBE: the shared properties are to be explained by “their common ancestry”. And, as we have noted, in the background discussion it is claimed – falsely, I have argued – that copies of Alice and Zafiras have wholly historical essences. Yet an historical essence plays no role in the explanations G&P offer in discussing this non-zygotic inheritance.

In conclusion, there is certainly a relational, sometimes historical, component to the essence of literary works, artifacts and implements. But there is also an intrinsic component which plays a central role in the explanation of commonalities. And the same goes for species. IBE stands untouched.

References Almog, J. 2012. Referential uses and the foundations of direct reference. In Having in mind: The philosophy of Keith Donnellan, ed. J. Almog and P. Leonardi, 176–184. New York: Oxford University Press. Almog, J., and P.  Leonardi, eds. 2012. Having in mind: The philosophy of Keith Donnellan. New York: Oxford University Press. Almog, J., P. Nichols, and J. Pepp. 2015. A unified treatment of (pro-)nominals in ordinary English. In On reference, ed. A. Bianchi, 350–383. Oxford: Oxford University Press. Anderson, J.R. 1980. Cognitive psychology and its implications. San Francisco: W.H.  Freeman and Company. Antony, L. 2008. Meta-linguistics: Methodology and ontology in Devitt’s Ignorance of language. Australasian Journal of Philosophy 86: 643–656. Barker, M.J. 2010. Species intrinsicalism. Philosophy of Science 77: 73–91. Berkeley, G. 1710. Principles of human knowledge. Bianchi, A. 2015. Reference and repetition. In On reference, ed. A.  Bianchi, 93–107. Oxford: Oxford University Press.


Bianchi, A., and A. Bonanini. 2014. Is there room for reference borrowing in Donnellan’s historical explanation theory? Linguistics and Philosophy 37: 175–203. Block, N. 1986. Advertisement for a semantics for psychology. In Midwest studies in philosophy X: Studies in the philosophy of mind, ed. P.A. French, T.E. Uehling Jr., and H.K. Wettstein, 615–678. Minneapolis: University of Minnesota Press. Burge, T. 1986. Individualism and psychology. Philosophical Review 95: 3–45. Cappelen, H. 2012. Philosophy without intuitions. Oxford: Oxford University Press. Capuano, A. 2018. In defense of Donnellan on proper names. Erkenntnis. https://doi.org/10.1007/ s10670-018-0077-6. Collins, J. 2006. Between a rock and a hard place: A dialogue on the philosophy and methodology of generative linguistics. Croatian Journal of Philosophy 6: 469–503. ———. 2007. Review of Michael Devitt’s Ignorance of language. Mind 116: 416–423. ———. 2008a. Knowledge of language redux. Croatian Journal of Philosophy 8: 3–43. ———. 2008b. A note on conventions and unvoiced syntax. Croatian Journal of Philosophy 8: 241–247. Culbertson, J., and S. Gross. 2009. Are linguists better subjects? British Journal for the Philosophy of Science 60 (4): 721–736. Deutsch, M. 2009. Experimental philosophy and the theory of reference. Mind and Language 24: 445–466. Devitt, M. 1972. The semantics of proper names: A causal theory. Harvard PhD dissertation. Available at https://devitt.commons.gc.cuny.edu/. ———. 1974. Singular terms. Journal of Philosophy 71: 183–205. ———. 1981a. Designation. New York: Columbia University Press. ———. 1981b. Donnellan’s distinction. In Midwest studies in philosophy VI: The foundations of analytic philosophy, ed. P.A.  French, T.E.  Uehling Jr., and H.K.  Wettstein, 511–524. Minneapolis: University of Minnesota Press. ———. 1984. Realism and truth. Oxford: Basil Blackwell. ———. 1985. Critical notice of The varieties of reference by Gareth Evans. Australasian Journal of Philosophy 63: 216–232. ———. 1989a. Against direct reference. In Midwest studies in philosophy XIV: Contemporary perspectives in the philosophy of language II, ed. P.A. French, T.E. Uehling Jr., and H.K. Wettstein, 206–240. Notre Dame: University of Notre Dame Press. ———. 1989b. A narrow representational theory of the mind. In Representation: Readings in the philosophy of psychological representation, ed. S.  Silvers, 369–402. Dordrecht: Kluwer Academic Publishers. Reprinted in Mind and cognition: A reader, ed. W.G. Lycan, 371–398. Oxford: Basil Blackwell, 1991. ———. 1990. Meanings just ain’t in the head. In Method, reason and language: Essays in honour of Hilary Putnam, ed. G. Boolos, 79–104. Cambridge: Cambridge University Press. ———. 1991. Realism and truth. 2nd ed. Oxford: Basil Blackwell. ———. 1996. Coming to our senses: A naturalistic program for semantic localism. Cambridge: Cambridge University Press. ———. 1997. Afterword. In a reprint of Devitt 1991, 302–345. Princeton: Princeton University Press. ———. 1998. Naturalism and the a priori. Philosophical Studies 92: 45–65. Reprinted in Devitt 2010c: 253–270. ———. 2001. A shocking idea about meaning. Revue Internationale de Philosophie 208: 449–472. ———. 2002. Meaning and use. Philosophy and Phenomenological Research 65: 106–121. ———. 2003. Linguistics is not psychology. In Epistemology of language, ed. A. Barber, 107–139. Oxford: Oxford University Press. ———. 2004. The case for referential descriptions. In Descriptions and beyond, ed. M. Reimer and A. Bezuidenhout, 280–305. 
Oxford: Clarendon Press. ———. 2005a. Rigid application. Philosophical Studies 125: 139–165.


———. 2005b. Scientific realism. In The Oxford handbook of contemporary philosophy, ed. F. Jackson and M. Smith, 767–791. Oxford: Oxford University Press. Reprinted in Truth and realism, ed. P. Greenough and M. Lynch, 100–124. Oxford: Oxford University Press, 2006. Reprinted with additional footnotes in Devitt 2010c: 67–98. (Citations are to Devitt 2010c.)
———. 2006a. Ignorance of language. Oxford: Clarendon Press.
———. 2006b. Defending Ignorance of language: Responses to the Dubrovnik papers. Croatian Journal of Philosophy 6: 571–606.
———. 2006c. Intuitions in linguistics. British Journal for the Philosophy of Science 57: 481–513.
———. 2006d. Intuitions. In Ontology studies cuadernos de ontologia: Proceedings of VI international ontology congress (San Sebastian, 2004), ed. V.G. Pin, J.I. Galparaso, and G. Arrizabalaga. San Sebastian: Universidad del Pais Vasco. Reprinted in Devitt 2010c: 292–302.
———. 2006e. Responses to the Rijeka papers. Croatian Journal of Philosophy 6: 97–112.
———. 2007. Dodging the arguments on the subject matter of grammars: A response to John Collins and Peter Slezak. Online at http://devitt.commons.gc.cuny.edu/online_debates/.
———. 2008a. Explanation and reality in linguistics. Croatian Journal of Philosophy 8: 203–231.
———. 2008b. A response to Collins’ note on conventions and unvoiced syntax. Croatian Journal of Philosophy 8: 249–255.
———. 2008c. Methodology in the philosophy of linguistics. Australasian Journal of Philosophy 86: 671–684.
———. 2008d. Resurrecting biological essentialism. Philosophy of Science 75: 344–382. Reprinted with additional footnotes in Devitt 2010c: 213–249.
———. 2009a. Psychological conception, psychological reality. Croatian Journal of Philosophy 9: 35–44.
———. 2009b. The Buenos Aires symposium on rigidity: Responses. Análisis Filosófico 29: 239–251.
———. 2010a. What ‘intuitions’ are linguistic evidence? Erkenntnis 73: 251–264.
———. 2010b. Linguistic intuitions revisited. British Journal for the Philosophy of Science 61: 833–865.
———. 2010c. Putting metaphysics first: Essays on metaphysics and epistemology. Oxford: Oxford University Press.
———. 2011a. Deference and the use theory. ProtoSociology 27: 196–211.
———. 2011b. Experimental semantics. Philosophy and Phenomenological Research 82: 418–435.
———. 2011c. Methodology and the nature of knowing how. Journal of Philosophy 108: 205–218.
———. 2011d. No place for the a priori. In What place for the a priori? ed. M.J. Shaffer and M.L. Veber, 9–32. Chicago/La Salle: Open Court Publishing Company. Reprinted in Devitt 2010c: 271–291.
———. 2012a. The role of intuitions. In The Routledge companion to the philosophy of language, ed. G. Russell and D.G. Fara, 554–565. New York: Routledge.
———. 2012b. Whither experimental semantics? Theoria 27: 5–36.
———. 2012c. Semantic epistemology: Response to Machery. Theoria 74: 229–233.
———. 2012d. Still against direct reference. In Prospects for meaning, ed. R. Schantz, 61–84. Berlin: Walter de Gruyter.
———. 2013a. The ‘linguistic conception’ of grammars. Filozofia Nauki 21: 5–14.
———. 2013b. Responding to a hatchet job: Ludlow’s review of Ignorance of language. Revista Discusiones Filosóficas 14: 307–312.
———. 2013c. Three methodological flaws of linguistic pragmatism. In What is said and what is not: The semantics/pragmatics interface, ed. C. Penco and F. Domaneschi, 285–300. Stanford: CSLI Publications.
———. 2013d. What makes a property ‘semantic’? In Perspectives on pragmatics and philosophy, ed. A. Capone, F. Lo Piparo, and M. Carapezza, 87–112. Cham: Springer.

———. 2014a. Linguistic intuitions are not ‘the voice of competence’. In Philosophical methodology: The armchair or the laboratory?, ed. M.C. Haug, 268–293. London: Routledge.
———. 2014b. Linguistic intuitions: In defense of ‘ordinarism’. European Journal of Analytic Philosophy 10 (2): 7–20.
———. 2014c. Lest auld acquaintance be forgot. Mind and Language 29: 475–484.
———. 2015a. Relying on intuitions: Where Cappelen and Deutsch go wrong. Inquiry 58: 669–699.
———. 2015b. Should proper names still seem so problematic? In On reference, ed. A. Bianchi, 108–143. Oxford: Oxford University Press.
———. 2015c. Testing theories of reference. In Advances in experimental philosophy of language, ed. J. Haukioja, 31–63. London: Bloomsbury Academic.
———. 2018. Historical biological essentialism. Studies in History and Philosophy of Biological and Biomedical Sciences 71: 1–7.
———. 2020. Linguistic intuitions: A response to Gross and Rey. In Linguistic intuitions: Evidence and method, ed. S. Schindler, A. Drożdżowicz, and K. Brøcker, 51–68. Oxford: Oxford University Press.
———. forthcoming-a. Three mistakes about semantic intentions. In Inquiries in philosophical pragmatics, ed. F. Macagno and A. Capone. Cham: Springer.
———. forthcoming-b. Overlooking conventions: The trouble with linguistic pragmatism. Cham: Springer.
———. forthcoming-c. Defending intrinsic biological essentialism. Philosophy of Science.
Devitt, M., and N. Porot. 2018. The reference of proper names: Testing usage and intuitions. Cognitive Science 42: 1552–1585.
Devitt, M., and K. Sterelny. 1989. Linguistics: What’s wrong with ‘the right view’. In Philosophical perspectives, 3: Philosophy of mind and action theory, ed. J.E. Tomberlin, 497–531. Atascadero: Ridgeview Publishing Company.
———. 1999. Language and reality: An introduction to the philosophy of language. 2nd ed. Cambridge, MA: MIT Press. 1st edition 1987.
Dickie, I. 2011. How proper names refer. Proceedings of the Aristotelian Society 101: 43–78.
Domaneschi, F., M. Vignolo, and S. Di Paola. 2017. Testing the causal theory of reference. Cognition 161: 1–9.
Donnellan, K.S. 1970. Proper names and identifying descriptions. Synthese 21 (3–4): 335–358.
Ereshefsky, M. 2010. What’s wrong with the new biological essentialism. Philosophy of Science 77: 674–685.
Evans, G. 1973. The causal theory of names. Aristotelian Society Supplementary Volume 47: 187–208.
———. 1982. The varieties of reference, ed. J. McDowell. Oxford: Oxford University Press.
Field, H. 1973. Theory change and the indeterminacy of reference. Journal of Philosophy 70: 462–481.
Fodor, J.A. 1975. The language of thought. New York: Thomas Y. Crowell.
———. 1980. Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences 3: 63–73.
———. 1987. Psychosemantics: The problem of meaning in the philosophy of mind. Cambridge, MA: MIT Press.
Godfrey-Smith, P. 2003. Theory and reality: An introduction to the philosophy of science. Chicago: University of Chicago Press.
Grice, P. 1989. Studies in the way of words. Cambridge, MA: Harvard University Press.
Gross, S., and J. Culbertson. 2011. Revisited linguistic intuitions. British Journal for the Philosophy of Science 62 (3): 639–656.
Haegeman, L. 1994. Introduction to government and binding theory. 2nd ed. Oxford: Blackwell Publishers. 1st edition 1991.
Hawthorne, J., and D. Manley. 2012. The reference book. Oxford: Oxford University Press.
Horwich, P. 1998. Meaning. Oxford: Oxford University Press.
———. 2005. Reflections on meaning. Oxford: Oxford University Press.

Jackson, F. 1998. Reference and description revisited. In Philosophical perspectives, 12: Language, mind, and ontology, ed. J.E. Tomberlin, 201–218. Atascadero: Ridgeview Publishing Company.
Kaplan, D. 1989. Afterthoughts. In Themes from Kaplan, ed. J. Almog, J. Perry, and H. Wettstein, 565–614. Oxford: Oxford University Press.
———. 1990. Words. Aristotelian Society Supplementary Volume 64: 93–119.
———. 2012. An idea of Donnellan. In Having in mind: The philosophy of Keith Donnellan, ed. J. Almog and P. Leonardi, 122–175. New York: Oxford University Press.
Kitcher, P. 1984. Species. Philosophy of Science 51: 308–333.
Kripke, S.A. 1979a. Speaker’s reference and semantic reference. In Contemporary perspectives in the philosophy of language, ed. P.A. French, T.E. Uehling Jr., and H.K. Wettstein, 6–27. Minneapolis: University of Minnesota Press.
———. 1979b. A puzzle about belief. In Meaning and use, ed. A. Margalit, 239–283. Dordrecht: Reidel.
———. 1980. Naming and necessity. Cambridge, MA: Harvard University Press.
Kroon, F. 1987. Causal descriptivism. Australasian Journal of Philosophy 65: 1–17.
Leslie, S.-J. 2013. Essence and natural kinds: When science meets preschooler intuition. In Oxford studies in epistemology, ed. T.S. Gendler and J. Hawthorne, vol. 4, 108–165. Oxford: Oxford University Press.
Lewens, T. 2012. Species essence and explanation. Studies in History and Philosophy of Biological and Biomedical Sciences 43: 751–757.
Lewis, D. 1984. Putnam’s paradox. Australasian Journal of Philosophy 62: 221–236.
Loar, B. 1981. Mind and meaning. Cambridge: Cambridge University Press.
———. 1982. Conceptual role and truth-conditions. Notre Dame Journal of Formal Logic 23: 272–283.
Longworth, G. 2009. Ignorance of linguistics. Croatian Journal of Philosophy 9: 21–34.
Ludlow, P. 2009. Review of Michael Devitt’s Ignorance of language. Philosophical Review 118: 393–402.
Machery, E. 2012a. Expertise and intuitions about reference. Theoria 27 (1): 37–54.
———. 2012b. Semantic epistemology: A brief response to Devitt. Theoria 27 (2): 223–227.
Machery, E., R. Mallon, S. Nichols, and S.P. Stich. 2004. Semantics, cross-cultural style. Cognition 92 (3): 1–12.
Machery, E., R. Mallon, S. Nichols, and S.P. Stich. 2013. If folk intuitions vary, then what? Philosophy and Phenomenological Research 86 (3): 618–635.
Machery, E., C.Y. Olivola, and M. de Blanc. 2009. Linguistic and metalinguistic intuitions in the philosophy of language. Analysis 69 (4): 689–694.
Machery, E., and S.P. Stich. 2012. The role of experiments. In The Routledge companion to the philosophy of language, ed. G. Russell and D.G. Fara, 495–512. New York: Routledge.
Machery, E., J. Sytsma, and M. Deutsch. 2015. Speaker’s reference and cross-cultural semantics. In On reference, ed. A. Bianchi, 62–76. Oxford: Oxford University Press.
Martí, G. 2009. Against semantic multi-culturalism. Analysis 69 (1): 42–48.
———. 2012. Empirical data and the theory of reference. In Reference and referring: Topics in contemporary philosophy, ed. W.P. Kabasenche, M. O’Rourke, and M.H. Slater, 62–76. Cambridge, MA: MIT Press.
———. 2015. Reference without cognition. In On reference, ed. A. Bianchi, 77–92. Oxford: Oxford University Press.
Matthen, M. 1998. Biological universals and the nature of fear. Journal of Philosophy 95: 105–132.
Mayr, E. 1961. Cause and effect in biology. Science 134: 1501–1506.
McGinn, C. 1982. The structure of content. In Thought and object, ed. A. Woodfield, 207–258. Oxford: Clarendon Press.
Millikan, R.G. 1999. Historical kinds and the special sciences. Philosophical Studies 95: 45–65.
———. 2000. On clear and confused ideas: An essay about substance concepts. Cambridge: Cambridge University Press.

Neale, S. 2004. This, that, and the other. In Descriptions and beyond, ed. M. Reimer and A. Bezuidenhout, 68–182. Oxford: Clarendon Press.
———. 2016. Silent reference. In Meanings and other things: Essays in honor of Stephen Schiffer, ed. G. Ostertag, 229–344. Oxford: Oxford University Press.
Neander, K. 2017. A mark of the mental: In defense of informational teleosemantics. Cambridge, MA: MIT Press.
Okasha, S. 2002. Darwinian metaphysics: Species and the question of essentialism. Synthese 131: 191–213.
Orlando, E. 2009. General term rigidity as identity of designation: Some comments on Devitt’s criticisms. Análisis Filosófico 29: 201–218.
Pepp, J. 2019. What determines the reference of names? What determines the objects of thought. Erkenntnis 84: 741–759.
Pietroski, P. 2008. Think of the children. Australasian Journal of Philosophy 86: 657–659.
Putnam, H. 1973. Meaning and reference. Journal of Philosophy 70: 699–711.
———. 1975. Mind, language and reality: Philosophical papers, volume 2. Cambridge: Cambridge University Press.
Quine, W.V. 1961. From a logical point of view. 2nd ed. Cambridge, MA: Harvard University Press. 1st edition 1953.
Quine, W.V., and J.S. Ullian. 1970. The web of belief. New York: Random House.
Reber, A.S. 2003. Implicit learning. In Encyclopedia of cognitive science, ed. L. Nadel, vol. 2, 486–491. London: Nature Publishing Group.
Rey, G. 2006a. The intentional inexistence of language – but not cars. In Contemporary debates in cognitive science, ed. R. Stainton, 237–255. Oxford: Blackwell Publishers.
———. 2006b. Conventions, intuitions and linguistic inexistents: A reply to Devitt. Croatian Journal of Philosophy 6: 549–569.
———. 2008. In defense of folieism: Replies to critics. Croatian Journal of Philosophy 8: 177–202.
———. 2014. The possibility of a naturalistic Cartesianism regarding intuitions and introspection. In Philosophical methodology: The armchair or the laboratory?, ed. M.C. Haug, 243–267. London: Routledge.
———. forthcoming-a. A defense of the voice of competence. In Linguistic intuitions, ed. S. Schindler. Oxford: Oxford University Press.
———. forthcoming-b. Representation of language: Foundational issues in a Chomskyan linguistics. Oxford: Oxford University Press.
Richard, M. 1983. Direct reference and ascriptions of belief. Journal of Philosophical Logic 12: 425–452.
Schwartz, S.P. 2002. Kinds, general terms, and rigidity. Philosophical Studies 109: 265–277.
Searle, J.R. 1983. Intentionality: An essay in the philosophy of mind. Cambridge: Cambridge University Press.
Slater, M.H. 2013. Are species real? An essay on the metaphysics of species. New York: Palgrave Macmillan.
Slezak, P. 2007. Linguistic explanation and ‘psychological reality’. Online at http://devitt.commons.gc.cuny.edu/online_debates/ (Parts of this paper were delivered at a symposium on linguistics and philosophy of language at the University of New South Wales in July 2007).
———. 2009. Linguistic explanation and ‘psychological reality’. Croatian Journal of Philosophy 9: 3–20.
Slobodchikoff, C.N. 2002. Cognition and communication in prairie dogs. In The cognitive animal: Empirical and theoretical perspectives on animal cognition, ed. M. Bekoff, C. Allen, and G.M. Burghardt, 257–264. Cambridge, MA: MIT Press.
Smith, B.C. 2006. Why we still need knowledge of language. Croatian Journal of Philosophy 6: 431–456.
Stanley, J., and T. Williamson. 2001. Knowing how. Journal of Philosophy 98: 411–444.
Sterelny, K. 2016. Deacon’s challenge: From calls to words. Topoi 35 (1): 271–282.

Sterelny, K., and P. Griffiths. 1999. Sex and death. Chicago: University of Chicago Press.
Stich, S.P. 1983. From folk psychology to cognitive science: The case against belief. Cambridge, MA: MIT Press.
———. 1991. Narrow content meets fat syntax. In Meaning in mind: Fodor and his critics, ed. B. Loewer and G. Rey, 239–254. Oxford: Basil Blackwell.
Sullivan, A. 2010. Millian externalism. In New essays on singular thought, ed. R. Jeshion, 249–269. Oxford: Oxford University Press.
Sytsma, J., and J. Livengood. 2011. A new perspective concerning experiments on semantic intuitions. Australasian Journal of Philosophy 89 (2): 315–332.
Sytsma, J., J. Livengood, R. Sato, and M. Oguchi. 2015. Reference in the land of the rising sun: A cross-cultural study on the reference of proper names. Review of Philosophy and Psychology 6 (2): 213–230.
Thornton, R. 1995. Referentiality and wh-movement in child English: Juvenile D-Linkuency. Language Acquisition 4 (2): 139–175.
Unger, P. 1983. The causal theory of reference. Philosophical Studies 43: 1–45.
van Fraassen, B.C. 1980. The scientific image. Oxford: Clarendon Press.
Weinberg, J.M., C. Gonnerman, C. Buckner, and J. Alexander. 2010. Are philosophers expert intuiters? Philosophical Psychology 23 (3): 331–355.
White, S.L. 1982. Partial character and the language of thought. Pacific Philosophical Quarterly 63: 347–365.
Worrall, J. 1989. Structural realism: The best of both worlds? Dialectica 43: 99–124.
Wulfemeyer, J. 2017. Bound cognition. Journal of Philosophical Research 42: 1–26.
Zerbudis, E. 2009. The problem of extensional adequacy for Devitt’s rigid appliers. Análisis Filosófico 29: 219–237.

Index

A Aboutness, 34–37, 129, 394, 396 Acquaintance, 75, 96, 115, 116, 245, 290, 292, 293, 395 Adams, Douglas, 2 Almog, Joseph, 97, 124, 126, 128, 129, 389, 391, 393–395 Analyticity, 70, 72, 77, 83, 86, 90, 242, 243, 257–262, 265, 316, 416 Anaphora, 32, 106, 112–114, 308, 385 Anaphoric link, 109–113, 385, 386 Anderson, John, 401 Anti-descriptivism, 329, 331–338, 432, 433, 437, 438 Anti-psychologism, 7, 8, 12, 16 Antony, Louise, 311, 314, 373 Approximation, 350, 351, 440 A Priori, 1, 2, 11, 12, 15, 72, 83, 90, 166, 237–246, 257, 258, 260–262, 299, 301, 302, 313, 314, 316, 318–323, 371, 414–416, 429, 432, 435 Argument from error, 72, 73, 90, 91, 122, 125, 330, 331, 337, 384, 396, 400, 403, 406, 424, 433 from ignorance, 72, 73, 90, 91, 122, 125, 330, 331, 337, 384, 396, 400, 403, 406, 424, 433 Armstrong, David, 267 Armstrong, Josh, 45 Artifacts, 8, 123, 149, 179, 181, 250–252, 254, 257, 260, 262, 263, 303, 359, 386, 443–444, 446, 449

Attitude ascriptions, 41, 193–196, 198–201, 203–206, 208, 210, 211, 213–221, 224–234, 316, 409–413 B Bach, Kent, 53, 80, 84, 91, 105, 256 Baker, Lynne R., 273 Baptism, 74–76, 79, 80, 111, 124–126, 130, 258, 259, 262 ‘Because’ sentences, 222–226 Behaviorism, 88, 89, 262, 301, 317 Belief ascriptions, 179, 196, 197, 203, 209, 210, 213, 221, 225, 272, 273 Berger, Alan, 108 Bermudez, José L., 48 Bianchi, Andrea, 42, 123, 125–127, 129, 130, 133, 139, 152, 234, 265, 324, 371, 388–394, 396, 397 Biological essentialism, 2, 324, 365, 366, 371, 430, 441–443, 447–449 Biological taxa, 355, 360, 365, 366, 441, 442 Block, Ned, 269, 278, 359, 420 Boghossian, Paul, 260 Bohr, Niels, 348 Bonanini, Alessandro, 125–130, 134, 388–392 Born, Max, 7 Boyd, Richard, 359 Boyd, Robert, 180–182 Braddon-Mitchell, David, 170 Braun, David, 83, 97, 193–195, 204, 220, 224, 227, 228, 230, 231, 408–413 Buckwalter, Wesley, 437

458 Burge, Tyler, 71, 87, 272–274, 276, 421 Burgess, John, 72, 75 Burri, Alex, 3 Byrne, Alex, 305 C Camp, Elisabeth, 48–53, 55, 61, 62, 381–383 Cantor, Georg, 320 Capuano, Antonio, 126, 129, 130, 134, 388, 393, 394 Carnap, Rudolf, 85, 87, 93, 110, 213, 243 Carroll, Lewis, 313, 323, 359, 443, 446 Causal chains, 75, 76, 89, 98, 99, 124, 125, 130–133, 138, 139, 158, 159, 200, 272, 286, 291, 292, 403, 422 descriptive theories of reference (see Causal descriptivism) descriptivism, 90–94, 138, 141, 151, 158, 159, 173, 175, 179, 330, 371, 400–407, 437 theory of proper names (see Causal theory of reference) theory of reference, 2, 69, 73, 74, 76–80, 82, 90, 105, 108, 118, 121–130, 133, 134, 137, 138, 140, 141, 153, 154, 158, 159, 173, 177, 179, 200, 278, 291, 292, 330, 337–339, 385, 387–389, 393, 394, 399, 400, 402, 403, 406, 423, 427, 437 Chain of communication picture, 122, 124, 125, 127–129, 131, 134, 388, 393 Chakravartty, Anjan, 347 Chalmers, David, 84, 87, 91–93, 96, 167, 170, 280, 281 Chomsky, Noam, 2, 7–13, 15, 19–22, 24, 26, 31, 32, 35, 38, 40, 41, 54, 56, 176, 293, 305–313, 374, 377, 425 Church, Alonzo, 93, 213, 243 Clark, Herbert H., 53 Cognitive Priority, 394, 395 Cognitive psychology, 267, 312, 401, 402, 419 Cognitive sciences, 8, 187 Cohnitz, Daniel, 334 Collins, John, 8, 9, 11, 22, 25, 26, 31, 36, 40, 305, 309, 324, 372–380 Color, 25, 33, 48, 56, 116, 186, 251, 254–256, 263, 264, 269, 301, 303–308, 312, 313, 324, 350, 430–432 Commonsense realism, 301–303 Common-sense realism naturalized, 352, 353 Communal language, 285, 288, 293, 295, 425, 426

Index Communal meaning, 285, 287–290, 293, 294, 424–427 Communication, 26, 45, 46, 51, 52, 54, 61, 62, 75, 76, 85, 88, 98, 99, 106–108, 113, 115, 117, 122, 124, 125, 127–134, 139, 142, 148, 149, 186, 187, 306, 308, 310, 311, 338–340, 375, 384, 388, 393 Competence, 10–13, 15–20, 22–34, 36–40, 55, 57, 86, 89, 90, 92, 138, 142, 143, 147, 159, 173, 175, 176, 182–184, 219, 243, 260, 307, 309, 322, 336, 372–374, 376, 378–381, 384, 401–403, 405, 407, 434, 436, 437 Conceptual role, 267, 270 Confirmation holism, 314, 316–321, 323 Confusion, 46, 79, 105, 108–111, 113–116, 118, 174, 179, 273, 334, 335, 387, 392, 395 Consistency of meaning, 263 Constructivism, 2, 346, 352, 429, 439 Consumerist semantics, 133–134, 397 Contextualism, 2, 52 Contingent a priori, 237, 239, 240, 243, 245, 371, 414–416 Convention, 2, 17, 26, 46, 52–54, 57, 60, 77, 79, 85, 98, 129–131, 133, 159, 194, 202–205, 207–217, 219–221, 225, 227, 232–234, 257, 258, 261, 262, 277, 308, 310–312, 318, 379, 382, 390, 391, 393–396, 403, 409, 412–414, 425–427 Conventional-designation, 129, 131, 393 Conventional meaning, 52, 85, 131, 194, 202–205, 208–217, 219, 221, 225, 227, 232–234, 277, 382, 391, 409, 412, 413, 425–427 Conventional Stance, 390, 395 Coordination, 53, 54, 105, 112–113, 117–118 Coreference, 106, 109, 112–116, 199, 210, 215, 385, 386 Coreference de jure, 112–116, 385 Crain, Stephen, 37, 58, 59 Crowe, Russell, 423 Csibra, Gergely, 178, 185, 186 Cultural evolution, 173, 176, 180–183, 185, 384, 404, 434 Cumming, Samuel, 90 Cummins, Robert, 22, 31, 38 Cumulative cultural learning, 176, 179–182, 406 D Davidson, Donald, 52, 87, 88, 129, 315, 334 De Blanc, Molly, 331–336, 438 Declarative knowledge, 401, 402, 406–408

Index Deductive-Nomological theory of explanation (DN theory), 223–224, 226–227, 229–231 Deference, 76, 91, 116–118, 133, 286, 290–291, 384, 397, 423, 424, 426, 427 Definite descriptions, 2, 69–72, 74, 75, 77, 81, 89–97, 99, 123, 124, 126, 128, 132, 150, 153, 196, 197, 199, 206, 207, 212, 237–239, 245, 251, 255, 256, 259, 290, 292, 337–340, 385, 388, 399, 400, 402–404, 414, 415 Demonstratives, 123, 126, 128, 145, 152, 153, 202, 217, 247, 272, 386, 388, 393, 411 Denotation, 81, 99, 105, 124, 150, 153 Dennett, Daniel, 182–185 Derrida, Jacques, 313 Description theory of proper names, see Descriptivism Descriptive names, 150, 289, 338, 387, 399, 414, 415 Descriptivism, 81–97, 121, 122, 126, 127, 137, 138, 158, 159, 161, 173, 179, 250, 252, 260, 329, 330, 332, 334–338, 384, 396, 400, 407, 408, 416–419, 423, 432–434, 436–438 Designation, 2, 76, 82, 83, 89, 105, 108, 110, 121–125, 127–131, 141, 142, 144, 145, 183, 238, 239, 241–245, 250–252, 255, 256, 260, 263, 277, 290, 338, 385, 387, 393, 397, 416 D(esignating)-chains, 106, 111, 115, 124, 130, 277, 393 De Toffoli, Silvia, 49 Devitt, Kate, 437 Devitt, Michael, 1–3, 7–12, 15–22, 24–42, 45–52, 55–60, 62, 69–80, 82, 83, 86, 88, 90, 97–100, 105–113, 115–118, 121–134, 137–138, 140–145, 147–154, 156, 158, 160, 167, 169, 173–180, 183–186, 193–225, 227–234, 241, 249, 252–254, 257, 259, 261, 264, 265, 267–282, 286, 288–292, 294, 295, 299–324, 329–336, 338–340, 345–355, 359–366, 371–449 Diagrams, 49, 50, 61, 122 Di Paola, Simona, 329, 330, 333, 334, 336 Direct reference, 83, 97–100, 118, 150, 153, 193–197, 201–206, 210, 214, 217–219, 221, 223–234, 408, 409, 412–414 Direct reference theory (DRT), see Direct reference Discourse structure, 55

459 Division of linguistic labour, 117, 173–177, 180, 181, 183, 184, 187, 384, 386, 404, 406, 407, 434 Domaneschi, Filippo, 329, 330, 333, 334, 336, 437 Donnellan, Keith, 2, 69, 70, 72, 74, 77, 81, 94, 95, 105, 106, 121, 123, 125–131, 134, 241, 245, 255, 259, 260, 329, 330, 339, 388–394, 396 Donnellan’s historical explanation theory, 125, 129, 130, 134, 388 Dreyfus, Hubert, 311 Duhem, Pierre, 316–320 Dummett, Michael, 46, 51, 71, 85, 87, 88 E Egan, Frances, 42 Einstein, Albert, 318, 348 Elicited production, 37, 333–336, 436–438 Empty names, 83, 114, 149, 150, 177, 399 Epistemically rewarding relations (ER relations), 115–118 Epistemic optimism, 345–347, 352, 439 Epstein, Brian, 45 Essences, 252, 355, 358–366, 441–449 Essentialism, 2, 250, 251, 253–255, 324, 355, 365, 366, 371, 416, 417, 430, 441–443, 445, 447–449 Eternal kinds, 358–361, 363, 364, 443, 447 Evans, Gareth, 71, 79, 80, 107, 108, 126, 239, 277, 338, 387, 396 Everett, Anthony, 84, 93 Everett, Daniel L., 309 Evolution of language, 187, 404 Experimental philosophy, 177, 329, 338 Experimental semantics, 330, 332, 333, 337, 339, 371, 430, 433, 436 Experts, 77, 140, 154, 159, 174, 175, 177, 178, 181, 286–288, 290, 291, 293, 295, 348, 423–427, 434, 435 Explanation, 7–10, 12–36, 38, 39, 41, 42, 45–49, 52, 55, 57, 60, 80–82, 91–94, 96, 98, 106, 108, 110, 113, 122–125, 128–134, 137, 138, 142–144, 147, 149, 153, 154, 161, 165, 169, 173, 181, 182, 193, 194, 198–202, 204, 205, 218–234, 255, 260–265, 267, 269–280, 285–294, 299–317, 319–324, 355, 358–366, 373–383, 387–389, 392–394, 396, 398, 399, 403, 409–413, 419–422, 426–432, 442–449

460 Explanatory epistemology, 315, 316, 321, 324 Explicit learning, 405–407 Externalism, 11, 22, 30, 31, 155–157, 169–170, 374 External language, 8, 28, 30–32, 40, 308, 380 F Failed groundings, 137, 145, 147, 153 Fara, Delia G., 256, 263 Feyerabend, Paul K., 85 Field, Hartry, 99, 105, 109, 110, 118, 321, 387 Fiengo, Robert, 112 Flores, Carolina, 45 Fodor, Jerry, 10, 47, 48, 132, 177, 267, 268, 271, 272, 276, 280, 319, 382, 410, 420, 439 Fox Tree, Jean E., 53 Frege, Gottlob, 12, 46, 71, 81, 83, 85–88, 93, 95, 99, 133, 218, 288, 290, 321, 408 Frege’s puzzles, 73, 83, 84, 87, 90, 92–94, 96, 99 Functional properties, 25, 26, 34, 278–380, 420 Functional roles, 203, 270, 276–279, 419, 420, 422 G Gardner, Michael, 265 Gauss, Carl Friedrich, 318 Geach, Peter T., 106, 196 General terms, 78, 79, 83, 93, 96, 105, 200, 249–257, 259–261, 263–265, 329, 345, 416, 418, 422 Generative linguistics, 7–9, 11, 16, 41 Genone, James, 330, 334 Georgalis, Nicholas, 282 Gergely, György, 178, 185 Gödel/Schmidt case, 174, 177, 179, 259 Godfrey-Smith, Peter, 182, 347, 350, 352, 353, 439–441 Godman, Marion, 358, 359, 441–449 Goldman, Alvin I., 314 Goodman, Nelson, 2, 259, 261, 264, 302, 346, 352, 429 Graham, George, 281, 282 Grammars, 10–13, 15, 16, 18–26, 28, 30–32, 34, 35, 37, 38, 41, 47, 55–57, 59, 60, 261, 307–312, 371–381, 383

Index Grice, Paul, 46, 54, 87, 131–133, 220, 277, 382, 389, 391, 396, 409, 413 Griffiths, Paul, 441, 442 Gross, Steven, 35, 313, 324, 435 Grounding, 80, 86, 98, 99, 105–111, 113, 115, 117, 118, 123, 130, 131, 134, 137–139, 141–154, 173, 183, 277, 285, 288–290, 371, 384–387, 393–400, 402, 408, 414, 415, 426 H Hales, Steven, 431 Halle, Morris, 307 Hanks, Peter, 203 Hardin, C.L., 303, 304, 306 Harnish, Robert M., 22, 31, 38, 53 Haukioja, Jussi, 334 Having in mind, 125–129, 389, 393–395 Hawthorne, John, 395 Hegel, Georg Wilhelm Friedrich, 316 Henrich, Joseph, 181, 182 Heyes, Celia, 180–182, 186 Hidden indexicals, 216, 411 Hilbert, David R., 305 Historical chain picture, 73, 81, 82, 90, 97, 133, 400 Historical essences, 353, 361–363, 366, 449 Historical kinds, 358–360, 363, 364, 443, 447 Horgan, Terence, 281, 282 Hornstein, Norbert, 311 Horwich, Paul, 384, 385, 408, 411, 419, 422–427 I Ichikawa, Jonathan, 332, 340 Idealism, 17, 346–348, 353, 380, 429, 439 Idealization, 11, 20, 23, 30, 31, 322, 350, 423, 440 Idiolect, 46, 52, 132, 238, 285–288, 290, 294, 295, 306, 385, 423, 425, 426 Idiolectal meaning, 285, 287, 289, 293, 294, 424, 426, 427 Ignorance and error arguments, see Argument from error and argument from ignorance Implements, 49, 52, 55, 262, 444, 445, 449 Implicit learning, 406, 407 Inan, Ilhan, 253

Index Information, 17, 21, 34, 49, 115–117, 126, 155–169, 174–181, 186, 187, 216, 217, 220, 221, 223, 224, 226, 227, 229–233, 240, 243–245, 289, 291, 292, 330, 332, 338, 372, 402–407, 413–415, 442 Intentional behavior, 75, 76, 254, 274, 278, 279, 307, 384, 385, 395, 422 Intentional realism, 381 Internalism, 156, 169, 316 I(nternal)-language, 11, 12, 31, 40, 41, 293, 307, 312, 319 Intrinsic Biological Essentialism (IBE), 365, 441–443, 447–449 Intrinsic essences, 355, 359, 360, 362–366, 441, 442, 445, 448, 449 Intuitions, 2, 26, 27, 34–37, 80, 111, 132, 150, 156, 173, 177–179, 198, 218, 252, 313, 314, 329, 330, 332, 334, 336, 371–373, 386, 411, 415, 430, 432–438, 442, 445 Invariance, 7, 8, 14, 23, 29, 304, 372, 375 J Jackendoff, Ray, 313 Jackson, Frank, 90, 94, 96, 156, 165, 167, 181, 184, 185, 280, 281, 400–407, 432, 434 ‘Jack the Ripper’, 77, 78, 239, 241, 289, 387, 399, 414, 415 Johnson, Michael, 249, 251, 255, 259, 263, 333 K Kamp, Hans, 116, 117 Kant, Immanuel, 2, 321 Kaplan, David, 97, 99, 110, 116, 133, 194, 201, 239, 240, 247, 267, 274, 280, 385, 389, 397, 409 Kaplanian characters, 194, 274, 276, 278–282 Katz, Jerrold J., 8, 18, 22, 26, 27, 31, 34, 38, 40, 84, 91, 94 Kitcher Philip, 79, 361, 362, 442 Kriegel, Uriah, 281, 282 Kripke, Saul, 1, 2, 69–77, 79–83, 89, 91–95, 97–99, 105–111, 118, 121–134, 138, 140, 151, 152, 158, 174, 177, 179, 193, 194, 196, 199, 203, 206–208, 213, 217, 237–241, 243–245, 249, 250, 252, 253, 255–259, 263, 280, 289–295, 329–332, 335, 337, 338, 384–390, 392–394, 396, 400, 408, 410, 411, 414–417, 427, 432–434, 437, 438, 441

461 Kroon, Fred, 90, 94, 400 Kuehni, Rolf G., 304 Kuhn, Thomas S., 85 L Landau, Barbara, 59 Language, 2, 7–13, 16–19, 22, 24, 26, 28–34, 36–41, 45–52, 54–62, 69–75, 77, 78, 81, 82, 85, 87–89, 91, 93, 98, 111, 116–118, 121, 123, 124, 126, 129, 130, 132–134, 137, 138, 140–142, 147, 155, 156, 158, 163, 167, 169, 173–185, 187, 208, 209, 212–214, 216–218, 228, 234, 238, 241–244, 252, 257, 259–261, 273, 274, 277, 281, 285, 286, 288, 290, 293–295, 305–312, 316, 324, 329, 332, 334, 336, 338, 340, 372–374, 376, 377, 379–385, 389–393, 396, 398, 400–402, 404–407, 410, 415, 423, 425–427, 429, 430, 432, 434, 435, 438, 449 ‘Language expresses thought (LET)’, 46, 52, 55, 60, 381, 382 Language of thought hypothesis (LOTH), 47, 48, 50, 51, 382, 383 LaPorte, Joseph, 249, 252, 255, 256, 260, 263–265 Lawlor, Krista, 109, 110, 114 Leonardi, Paolo, 389, 394 Leslie, Sarah-Jane, 441, 442 Lewis, David, 8, 90, 168, 170, 223, 224, 226, 227, 240, 270, 311, 400, 423 Linguistic Reality (LR), 7, 8, 16–20, 22, 26, 28, 29, 35, 39, 41, 307–309, 311, 312, 374–376, 380 competence (see Competence) modesty, 160, 163, 166, 167 realism, 10, 374, 375, 379 rule, 10–12, 16–22, 27, 28, 30, 38, 40, 47, 56, 58, 87, 261, 285, 287–292, 294, 295, 310–313, 373–376, 378, 379, 381–383, 390–393, 424–426 Linguistics, 3, 7–9, 11, 13, 15, 16, 18, 19, 25, 26, 28, 30–36, 38, 39, 41, 47, 52, 121, 261, 299, 301, 305–309, 312–314, 324, 371, 372, 374, 375, 377, 378, 385, 428–431, 436 Loar, Brian, 281, 420 Lobachevsky, Nikolaj Ivanoviç, 318 Locke, John, 46, 159 Logical truth, 322, 323 Lombrozo, Tania, 300, 330, 334

462 ‘London’/’Londres’ case, 206–216, 291, 401, 410, 411 Longworth, Guy, 42, 373 Lost rigidity arguments, 417–419 Ludlow, Peter, 35, 36, 45, 57, 373 Lycan, William G., 82, 83, 93, 268, 269, 280, 281, 419–423 M Machery, Edouard, 329–336, 432, 435–438 ‘Madagascar’ case, 79, 107, 108, 126, 133, 387 Mallon, Ron, 330–336, 338, 339, 432, 435–438 Manley, David, 395 Maps, 48–50, 161 Manifestability, 87–89 Martí, Genoveva, 3, 97, 126, 128, 249, 255–257, 263, 330–335, 337–339, 389, 394, 432–434, 436–438 Martin, Charles B., 2, 388 MartÍnez Fernández, José, 249, 255–257, 263 Matthen, Mohan, 442 Matthews, Robert J., 8, 41, 42 Mature-rigidity, 253, 418, 419 May, Robert, 71, 85, 112 Mayr, Ernst, 361, 362, 442 McMullin, Ernan, 349, 351 Meaning, 2, 3, 12, 24, 25, 27, 46, 52, 54–60, 69–71, 73, 77, 81–99, 111, 127, 128, 131, 133, 177, 193, 194, 197–205, 208–221, 225–229, 231–234, 257–259, 261–263, 267–271, 273–282, 285–295, 301, 310, 322, 338, 339, 350, 371, 381, 382, 389–392, 396, 408–413, 415, 417, 419–428, 430, 432 Mental files, 70, 115–118, 339, 388 Mental representation, 46, 57, 60, 107, 115, 215, 267, 269, 271, 293, 312, 313, 379, 382 Mercier, Hugo, 178, 187 Metalinguistic descriptivism, 70, 91–95 Metaphilosophy, 2, 121, 295 Metaphysics, 2, 3, 15, 17, 34, 121, 142, 240, 253, 299–301, 303, 305, 313, 317, 320, 321, 323, 324, 346–348, 352–354, 371, 373, 380, 402, 418, 429–431, 439, 440 ‘Meter’, 237–246, 415 Methodological solipsism, 270, 272, 279, 420, 422 Methodology, 2, 3, 15, 46, 48, 61, 156, 194, 197, 198, 202, 203, 218, 303, 333, 340, 371, 428–435, 443

Index Mill, John S., 81, 83, 97–99, 250, 254, 255, 258, 355, 356, 418 Miller, Alexander, 85 Miller, George A., 10 Miller, Richard B., 138, 141, 144, 151, 153, 154 Millianism, 83, 84, 92, 97, 99, 194, 195, 255, 288, 408, 409 Millikan, Ruth G., 278, 356–359, 443, 445, 447 Mind-independence, 300, 301, 307, 346–350, 352, 353, 439 Minimal realism, 13, 15 Minimal reality, 14 Models, 10, 26, 28, 29, 39, 45, 46, 48, 50–52, 54, 55, 60, 62, 77, 180, 182, 185, 186, 267, 282, 290, 319, 320, 349–352, 360, 361, 366, 396, 403, 405, 440, 441, 444, 445 Modes of reference/referring, 200, 203, 204, 206–216, 221, 227–234, 291, 408–413 Modularity, 51, 54, 59, 61, 62 Monty Python, 273, 311, 423 Moore, George E., 302, 315, 431 Moorean commonsense, 299, 301, 307, 324, 429–431 Multiple grounding, 80, 86, 98, 99, 105–109, 111, 117, 118, 134, 387 Musolino, Julien, 59 Musso, Mariacristina, 56 N Nado, Jennifer, 333 Narrow content, 267–269, 271, 274, 280–282, 419 meaning, 267–269, 271–282, 371, 419–422 psychology, 269–273, 276, 278–280 Naturalism, 1, 2, 123, 155–157, 276, 280, 314, 318, 322–324, 388, 396, 428, 430, 431 Naturalized epistemology, 300, 314, 324, 430 Natural kinds, 77, 78, 96, 150–152, 174, 253, 254, 256, 258, 261, 417, 418 kind terms, 77–79, 99, 123, 137–140, 150–154, 250–258, 260–264, 271, 273, 274, 280, 386, 387, 404, 416–419, 423, 424 meaning, 220, 221 Neale, Stephen, 2, 54, 432 Necessary a posteriori, 167, 255, 257–259, 264, 265, 322 Negative Polarity Items (NPIs), 58, 310, 312, 379

Index Neo-Donnellanians, 129, 389–396 Neo-Kantian relativism, 429 ‘Neptune’, 80, 238, 239, 241, 415 Neurath, Otto, 316 New theory of reference (NTR), 69, 70, 73–80, 82, 83, 88, 89, 92, 94, 96, 97 Nichols, Shaun, 330–336, 338, 339, 432, 435–438 Nimtz, Christian, 263–265 Nominal descriptivism, 91, 92 kind terms, 77, 250–252, 256–258, 262, 263, 416–419 Non-zygotic inheritance, 255, 263, 264, 448, 449 Nozick Robert, 400 O Okasha, Samir, 441 Olivola, Christopher Y., 331–336, 438 Opacity, 199–203, 205–219, 227–230, 271, 410–412 Opaque attitude ascriptions, see Opacity Orlando, Eleonora, 251, 255, 257, 416 Ostertag, Gary, 330 P ‘Paderewski’ case, 203, 206, 291, 411 Palmer, Stephen, 303, 312 Papineau, David, 78, 314, 358, 359, 441–449 Paradigm terms, 263, 264 Partial reference, 105, 109–111, 113, 117, 118, 130, 387, 395 Pautz, Adam, 282 Pepp, Jessica, 126, 390, 393–396 Perry, John, 113, 115, 116 Phillips, Colin, 311 Picardi, Eva, 2 Pietroski, Paul M., 9, 42, 45, 58, 59, 305, 307, 373 Pinillos, Ángel, 333, 334, 336, 338, 339 Plantinga, Alvin, 89, 241 Plato, 315 Platonism, 26, 27, 34, 315 Popper, Karl, 347, 348, 439 Porot, Nicolas, 329–336, 339, 430, 436–438 Possibilities, 156, 157, 161, 162, 253 Postal, Paul M., 8, 18, 22, 31, 38 Post-modernism, 2 Pragmatics, 35, 52, 54, 57, 58, 60, 216, 220, 232, 233, 339, 412, 413, 425 Pre-semantics, 243–246, 415

463 Primitive explanation, 13–16, 23, 42, 375, 377, 378 Private languages, 85, 294, 295 Procedural knowledge, 401, 402, 406 Pronouns, 24, 59, 106, 109, 110, 112, 113, 152, 277, 278, 308, 375, 376, 385, 387, 422 Proper names, 2, 69–86, 89–99, 105–109, 111–118, 121–134, 137–159, 173, 175, 177, 179, 183, 184, 186, 193–196, 199, 200, 203, 206–208, 210–217, 219, 220, 227–229, 231, 234, 237–239, 241–243, 249–251, 255–259, 263, 273, 274, 277, 288–293, 330–340, 371, 384–405, 407–415, 417, 421–427, 433, 437, 438, 447 Propositions, 52, 97, 161, 163, 194–197, 200, 202, 203, 205, 220, 224–228, 230, 231, 237, 240, 242, 244, 246, 247, 258, 259, 269, 339 Prosser, Simon, 112, 113 Proto-intentional behavior, 279, 420–422 Psycholinguistics, 38, 385, 389, 404 Psychological explanation, 267, 269–280, 419–422 Psychological laws, 269, 271–274 Psychologism, 9, 11, 12, 41 Psychosemantics, 276–278, 280, 282 Pure semantics, 237, 243–246, 415 Putnam, Hilary, 1, 2, 70, 73, 76–80, 86, 88, 93, 95–97, 117, 151, 153, 167, 174, 177, 256–258, 261–264, 267, 268, 280, 316, 318, 329, 330, 346, 349, 386, 441, 444 Q Quantum mechanics, 41, 302, 318, 319, 347, 348, 353, 440 Qua-problem, 78, 79, 137–141, 144, 148–151, 153, 154, 264, 277, 371, 394, 397–399 Quasi-anaphoric link, 106, 111, 112, 385, 386 Quasi-a-priori (qua-priori), 237, 243–245, 415 Quine, Willard Van Orman, 1, 2, 10, 77, 78, 85, 87, 88, 261, 262, 299–301, 307–309, 314–321, 323, 324, 375, 377, 429, 430, 432 R Raatikainen, Panu, 88, 93, 234, 384, 386, 387, 397, 400, 402, 403 Railton, Peter, 223, 224, 226, 227, 230 Rattan, Gurpreet, 45

464 Reber, Arthur S., 406 Recanati, François, 52, 109, 114–116, 384–387 Reference, 2, 3, 69–84, 87, 89–99, 105–118, 121–134, 137–142, 144–147, 149–156, 158, 159, 161, 168, 173–177, 179, 182–187, 193–197, 199–221, 224–234, 237–243, 245–247, 255, 258–260, 268, 270–277, 280–282, 286, 288–293, 295, 305, 308, 330–340, 350, 352, 371, 375, 376, 384–397, 399–415, 418–423, 427, 428, 430, 432–438, 440 borrowing, 73, 75–78, 96, 97, 106, 112, 123, 125, 127, 130, 133, 138, 139, 179, 180, 182, 184, 277, 290, 372, 384–390, 392, 393, 396, 397, 399, 402, 404–408, 423, 427 change, 80, 105, 107–109, 118, 387 fixing, 74, 75, 79, 81, 84, 91, 94, 106, 108, 109, 116–118, 125, 134, 138–140, 152, 237–243, 245–247, 258, 281, 289, 290, 292, 293, 387–390, 392, 393, 399, 404–406, 414, 415, 427 grounding (see Grounding) Referential indeterminacy, 105, 118 Referential intuitions, 334, 336, 432–438 Referential pluralism, 329–331, 336, 337, 339, 340, 433 Reimer, Marga, 82, 83, 397–399 Representational theory of mind, 2, 267, 268, 278, 279, 419 Rey, Georges, 25, 42, 48, 261, 314, 318, 322, 372–377, 379, 380, 428–432, 439 Richard, Mark, 215, 216, 222, 225, 411 Richerson, Peter J., 180 Riemann, Bernhard, 318 Rigid application, 249, 251–253, 259, 260, 262, 416–419 Rigid designation, 89, 122, 166, 241, 245, 251, 252, 255, 256, 259, 260, 415 Rigid essentialism, 250, 251, 254, 255, 416, 417 Rigid expressionism, 250, 251, 255, 256, 416 Rigidified descriptions, 69, 70, 89, 90 Rigidity, 73, 123, 249–251, 254–257, 259–265, 371, 414–419 Robertson, Teresa, 237 Rosenberg, Jay F., 277 Rossen, Michael, 60 Rubin, Michael, 252 Rule, 10–12, 16–22, 27, 28, 30, 38, 40, 47–51, 56–59, 87, 182, 260, 261, 285, 287–292, 294, 295, 310–313, 322, 373–376, 378, 379, 381–383, 390–393, 424–426

Index Russell, Bertrand, 1, 46, 71, 83, 85, 95, 97, 133, 245, 288, 290, 299, 304, 314, 316, 320, 396, 408 Russell, Gillian, 322 Russellian propositions, 97, 194, 195, 227 Russell’s Principle, 396 S Salmon, Nathan, 97–99, 149, 195, 197, 204, 220, 240, 241, 243, 245, 249, 255, 288, 414, 415 Sandgren, Alex, 170 Schrödinger, Erwin, 348 Schroeter, Laura, 112, 113 Schwartz, Stephen P., 77, 250, 252, 256, 258, 259, 263, 414, 416–419 Scientific realism, 301–303, 313, 324, 345–354, 371, 428, 439–441 Scope ambiguity, 196, 197, 206 Searle, John, 53, 71, 72, 74, 76, 81, 83, 84, 86, 91, 92, 95, 259, 384, 385, 387 Secondary properties, 299, 301–306, 429 Sellars, Wilfrid, 88 Semantic knowledge, 407, 415, 416 Semantic reference, 110, 111, 118, 129, 131–134, 334, 387, 391–393, 395–397 Semantics, 2, 11, 12, 15, 34, 35, 37, 49, 50, 52–55, 57, 58, 60–62, 82, 88, 90, 97–99, 110–112, 118, 121, 122, 124, 129–134, 148–150, 152, 153, 156, 160, 168, 169, 175–180, 183, 184, 187, 194–198, 201, 202, 204, 205, 213, 217, 218, 224, 237–247, 251, 253, 255–259, 262–265, 269–272, 274, 290, 293, 299–301, 305, 313, 329–334, 336–340, 351, 354, 371, 384, 387, 389–393, 395–397, 403–409, 412, 414–418, 420, 428, 430, 433, 436, 438, 446, 448 Shakespearean attitude ascriptions, 195, 196, 201, 205, 206, 210, 211, 213–217, 224–228, 230, 231, 233, 409–411, 413 Singular propositions, 97, 195, 197, 205, 220, 225–228, 231, 269 terms, 2, 105, 112–114, 121, 124, 125, 200, 249, 250, 252, 255, 256, 263, 264, 277, 308, 329, 375, 376, 385, 386, 410, 416 Sleigh, Robert, 239 Slobodchikoff, Con, 405 Smith, Barry C., 8, 9, 42, 373

Index Smith/Jones case, 110, 111, 118, 124, 126, 130, 131, 387, 392, 394, 395 Soames, Scott, 8, 10–12, 17, 18, 22, 78, 90, 97, 195, 203, 216, 220, 240, 288 Speaker-designation, 129–131, 393 Speaker meaning, 87, 111, 131, 202–204, 213, 220, 221, 277, 382, 390, 392, 396, 409, 413, 425 Speaker’s reference/speaker-reference, 110, 111, 118, 129–134, 334, 387, 391–393, 395, 396 Species, 79, 142, 149, 151, 152, 183, 251, 304, 339, 355, 356, 360–365, 401, 414, 441–443, 445–449 Sperber, Dan, 178, 181, 184, 187, 305 Spoonerisms, 392 Spooner, William Archibald, 392, 395 Sprouse, Jon, 35, 311 Standard linguistic entities (SLEs), 306–308, 312, 313 Stanford, P. Kyle, 79 Stanley, Jason, 402 Sterelny, Kim, 2, 70, 71, 74, 77, 79, 80, 83, 90, 137, 138, 140–145, 147, 148, 150–154, 174, 175, 178, 181, 183, 187, 261, 264, 277, 290, 291, 372, 384–386, 388, 389, 397, 398, 400, 403–407, 416, 430, 432, 434–438, 441, 442 Steup, Matthias, 314, 316 Stich, Stephen P., 268, 330–336, 420, 432, 435–438 Stoljar, Daniel, 170 Strawson, Peter F., 71, 86, 92, 95, 111 Strong Scientific Realism (SSR), 346, 348, 349, 439–441 Structural realism, 345, 351, 441 Subjectivist semantics, 133, 134, 397 Sullivan, Arthur, 249, 255, 387 Symbols, 12, 21, 32, 33, 48, 56, 75, 98, 269–271, 308, 353, 372, 375, 376 Syntactic psychology, 269, 270, 272, 275, 279 Syntax, 11, 17, 18, 22, 24–26, 28, 29, 34, 37, 39, 46, 48, 50, 52–54, 56–62, 112, 175, 184, 185, 187, 255, 256, 269–271, 275, 277, 305, 308, 311, 313, 375, 379, 380, 382, 383, 386, 390, 425 Systematicity, 47, 48 T Taylor, Kenneth A., 106, 116 Technical terms, 130, 168, 176, 178, 179, 187, 285, 287, 290, 295, 373, 424, 426, 427

465 ‘That’-clauses, 195, 196, 198–203, 206, 208–211, 213–215, 217–220, 229, 277, 409 Thought, 12, 19, 26, 29, 45–52, 54–62, 73, 80, 85, 86, 89, 96, 105, 111, 115–117, 129, 130, 137, 142, 143, 145, 147, 148, 152, 153, 169, 174, 178, 179, 198–205, 208, 209, 214–216, 218–221, 225, 226, 229, 231–234, 246, 247, 262, 269–272, 275, 277, 280, 286, 302, 309, 310, 319, 334, 352, 353, 381–384, 386, 389, 391, 392, 394–396, 398, 402, 403, 405–407, 409, 412, 413, 420, 435, 436 Tichý, Pavel, 167 Tienson, John, 281 Tomasello, Michael, 181 Translation, 56, 85, 88, 93, 96, 159, 207–216, 243, 288, 410, 411, 437 Transparency, 10, 46, 60, 70, 113, 182, 187, 199–201, 205–207, 209, 210, 227, 228, 410 Truth, 2, 10, 13, 14, 31, 37, 38, 60, 72, 77, 82–84, 88, 90, 109–111, 114, 115, 147, 160–163, 165–167, 169, 170, 177, 187, 193, 196–203, 205–215, 217–219, 222–230, 237–245, 257, 259–262, 264, 265, 269–271, 273, 274, 276, 277, 281–283, 293, 302, 305, 312, 313, 317, 318, 321–323, 333–336, 338, 339, 345, 351, 356, 357, 362, 376, 379, 388, 399, 403, 411, 414–416, 419, 420, 423, 429, 432, 434, 438, 439, 442 Twin Earth, 73, 80, 88, 89, 96, 167–170, 262, 268, 269, 272–274, 276, 280, 281, 422 Two-dimensionalism, 87, 91, 155, 156, 160, 163, 164, 167, 170 U Understanding, 10, 20, 23, 26, 27, 32, 35, 37, 46, 71, 75, 85–87, 89, 92, 109, 110, 150, 158, 161, 165, 166, 174, 176, 178–182, 202, 204, 216, 243, 253, 257, 260, 264, 286, 287, 293, 295, 306, 334, 338, 339, 375, 378, 385, 389, 397, 401, 403, 406–408, 415, 429, 436, 437 Unger, Peter, 76, 80, 387 Universal grammar (UG), 41, 47, 55–57, 59, 60, 383 Unobservables, 33, 84, 346, 347, 349, 350, 352, 353, 439, 440 Use theory of meaning (UTM), 423–425, 427

466 V van Fraassen, Bas, 347, 352, 439 Vauxhall Zafiras, 356, 359, 443–446 Verificationism, 352, 439 Vignolo, Massimiliano, 329, 330, 333, 334, 336 ‘Vulcan’, 78, 84, 149, 150, 288, 399 W Weber, Clas, 165 Wedgwood, Ralph, 316 WhyNots, 309–312, 379 Wide content, 267, 281, 282 Wiggins, David, 8, 18 Wigner, Eugene P., 348

Wikforss, Åsa, 330, 332, 334, 336, 340 Williamson, Timothy, 402 Williams Syndrome, 59, 60 Wilson, Robert A., 272 Wittgenstein, Ludwig, 46, 71, 78, 85, 259, 289, 294, 295, 314 Working epistemology, 301, 315–317, 321, 324 Worrall, John, 351, 441 Wulfemeyer, Julie, 126, 129, 388, 394 Z Zerbudis, Ezequiel, 418 Ziff, Paul, 86