
What Are Mental Representations?

PHILOSOPHY OF MIND SERIES
Series Editor: David J. Chalmers, New York University

The Conscious Brain (Jesse Prinz)
The Contents of Visual Experience (Susanna Siegel)
Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading (Alvin I. Goldman)
Consciousness and the Prospects of Physicalism (Derk Pereboom)
Supersizing the Mind: Embodiment, Action, and Cognitive Extension (Andy Clark)
Perception, Hallucination, and Illusion (William Fish)
Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism (Torin Alter and Sven Walter)
Phenomenal Intentionality (George Graham, John Tienson and Terry Horgan)
The Character of Consciousness (David J. Chalmers)
The Senses: Classic and Contemporary Philosophical Perspectives (Fiona Macpherson)
Attention Is Cognitive Unison: An Essay in Philosophical Psychology (Christopher Mole)
Consciousness and Fundamental Reality (Philip Goff)
The Phenomenal Basis of Intentionality (Angela Mendelovici)
Seeing and Saying: The Language of Perception and the Representational View of Experience (Berit Brogaard)
Perceptual Learning: The Flexibility of the Senses (Kevin Connolly)
Combining Minds: How to Think About Composite Subjectivity (Luke Roelofs)
The Epistemic Role of Consciousness (Declan Smithies)
The Epistemology of Non-Visual Perception (Berit Brogaard and Dimitria Electra Gatzia)
What Are Mental Representations? (Joulia Smortchkova, Krzysztof Dołęga, and Tobias Schlicht)

What Are Mental Representations?

Edited by Joulia Smortchkova, Krzysztof Dołęga, and Tobias Schlicht


Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2020

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data
Names: Smortchkova, Joulia, 1985– editor. | Dołęga, Krzysztof, editor. | Schlicht, Tobias, editor.
Title: What are mental representations? / edited by Joulia Smortchkova, Krzysztof Dołęga, and Tobias Schlicht.
Description: New York, NY, United States of America : Oxford University Press, 2020. | Series: Philosophy of mind series | Includes bibliographical references and index.
Identifiers: LCCN 2020015569 (print) | LCCN 2020015570 (ebook) | ISBN 9780190686673 (hardback) | ISBN 9780190686697 (epub)
Subjects: LCSH: Representation (Philosophy) | Mental representation.
Classification: LCC B105.R4 W44 2020 (print) | LCC B105.R4 (ebook) | DDC 121/.68—dc23
LC record available at https://lccn.loc.gov/2020015569
LC ebook record available at https://lccn.loc.gov/2020015570

1 3 5 7 9 8 6 4 2
Printed by Integrated Books International, United States of America

Contents

Preface
Contributors

Introduction
Joulia Smortchkova, Krzysztof Dołęga, and Tobias Schlicht

1. A Deflationary Account of Mental Representation
Frances Egan

2. Defending Representation Realism
William Ramsey

3. Deflating Deflationism about Mental Representation
Daniel D. Hutto and Erik Myin

4. Representing as Coordinating with Absence
Nico Orlandi

5. Reifying Representations
Michael Rescorla

6. Situated Mental Representations: Why We Need Mental Representations and How We Should Understand Them
Albert Newen and Gottfried Vosgerau

7. Representational Kinds
Joulia Smortchkova and Michael Murez

8. Functionalist Interrelations among Human Psychological States Inter Se, Ditto for Martians
Nicholas Shea

9. Nonnatural Mental Representation
Gualtiero Piccinini

10. Error Detection and Representational Mechanisms
Krystyna Bielecka and Marcin Miłkowski

Index

Preface

This volume originates from the conference "Mental Representations: The Foundations of Cognitive Science?," held at Ruhr-Universität Bochum (Germany) in September 2015 and organized by Krzysztof Dołęga, Tobias Schlicht, and Joulia Smortchkova. We're thankful to Joe Dewhurst and Karina Vold for valuable feedback on the introduction, and to Andreea Potinteu and Paola Gega for their contribution to the production of the index. We wish to thank all the authors for their effort and patience during the development of this volume. We also would like to thank the reviewers who took their time and energy to make helpful and constructive suggestions for sharpening and clarifying the ideas presented and for improving the chapters. Last but not least, we are grateful to Oxford University Press, represented by Peter Ohlin, for hosting this collection, and to David Chalmers for including this collection in his series and for his support.

Contributors

Krystyna Bielecka, Institute of Philosophy, University of Białystok
Krzysztof Dołęga, Institute of Philosophy II, Ruhr-Universität Bochum
Frances Egan, Department of Philosophy, Rutgers University
Daniel D. Hutto, Faculty of Law, Humanities and the Arts, School of Liberal Arts, University of Wollongong
Marcin Miłkowski, Institute of Philosophy and Sociology, Polish Academy of Sciences
Michael Murez, Centre Atlantique de Philosophie, University of Nantes
Erik Myin, Department of Philosophy, University of Antwerp
Albert Newen, Institute of Philosophy II, Ruhr-Universität Bochum
Nico Orlandi, Department of Philosophy, University of California, Santa Cruz
Gualtiero Piccinini, Department of Philosophy and Center for Neurodynamics, University of Missouri, St. Louis
William Ramsey, Department of Philosophy, University of Nevada, Las Vegas
Michael Rescorla, Department of Philosophy, University of California, Los Angeles
Tobias Schlicht, Institute of Philosophy II, Ruhr-Universität Bochum
Nicholas Shea, Institute of Philosophy, School of Advanced Study, University of London and Faculty of Philosophy, University of Oxford
Joulia Smortchkova, Faculty of Philosophy, University of Oxford
Gottfried Vosgerau, Department of Philosophy, Heinrich-Heine-University Düsseldorf

What Are Mental Representations?

Introduction
Joulia Smortchkova, Krzysztof Dołęga, and Tobias Schlicht

Since the cognitive revolution in the 1950s, the notion of “mental representation” has played a crucial role throughout the sciences of the mind. This theoretical concept is at the heart of research projects in neuroscience, cognitive psychology, social psychology, linguistics, artificial intelligence, cognitive anthropology, and animal cognition. It is used to explain central psychological abilities, such as language, perception, memory, theory of mind, abstract reasoning, and action. Numerous philosophical projects have been carried out in order to elucidate this notion and help to build the foundations of cognitive science. In light of this widespread use, it is surprising that there is not much agreement about the correct analysis of the notion itself. Philosophers of cognitive science disagree not only about what constitutes a mental representation, but also about the role of mental representations in explanations of cognitive functions. This volume collects original writings on the nature and role of mental representations in cognitive science. In this introduction we will first sketch out the contours of the notion of mental representation as used in cognitive science, then briefly present three debates centered around this notion, and finally give an overview of the papers in the volume. Historically, the notion of a mental representation has its origins in the philosophical discourse. Although the claim that all mental phenomena are directed at or about something or other is most closely associated with Franz Brentano’s characterization of psychological states (1874/​1995), the idea that we think by manipulating internal symbols that represent objects and states of affairs can be traced back to ancient and medieval philosophy, and has played a prominent role in the British empiricist tradition. This tradition led to the idea, summarized by Tim Crane (2003, 31) that “mental phenomena involve representation or presentation of the world,” with intentionality being the central feature of mental representations.

Joulia Smortchkova, Krzysztof Dołęga, and Tobias Schlicht, Introduction In: What Are Mental Representations? Edited by: Joulia Smortchkova, Krzysztof Dołęga, and Tobias Schlicht, Oxford University Press (2020). © Oxford University Press. DOI: 10.1093/oso/9780190686673.003.0001.

Minimally, a mental representation is a mental entity that possesses semantic properties (Pitt 2018).[1] Two crucial semantic properties are having a content[2] and having correctness conditions, which specify the conditions in which the representation represents correctly or not. When discussed in the context of cognitive science, however, mental representation usually has a more committed meaning: a mental representation is a physical object (usually instantiated in the brain by neural structures) that possesses semantic properties (see Ryder 2009a for an overview of the different conceptions of mental representations). The mainstream approach to the notion of representation is captured by William Bechtel (2016), according to whom most of the research in cognitive science

is explicitly devoted to identifying representations and determining how they are constructed and used in information-processing mechanisms that control behaviour. Characterizing neural processes as representations is not viewed as just a convenient way of talking about brain processes. The research is predicated on these processes being representations; the explanatory tasks they devote themselves to are identifying those neural processes that are representations, figuring out what their content is, and how these representations are then used in controlling behaviour. (Bechtel 2016, 1293–1294)

[1] Mental states are representational states, and, on a certain view, they are combinations of mental representations and attitudes (which could be propositional or not). For example, my belief that it is raining in London and my desire that it rain in London have the same mental representations as their contents (the concepts "rain" and "London"), but differ in the propositional attitude I take toward the content.
[2] As we employ it here, content is roughly synonymous with the way in which the world (or its aspects) is represented by a representational mental state. There are many different philosophical theories of content, but we set the question of what is the best theory of content aside.

Being a physical object in the brain allows mental representations to participate in the causal processes underlying cognition. Just like non-mental symbols, mental representations have a dual aspect: there is the representational vehicle on the one hand, which is the physical realization of the representation, and the content on the other hand, which is what the representation is about. In a non-mental example, the word "cat" and a picture of a cat are both about cats, but they represent cats with different vehicles. Vehicular properties include both material properties (the medium) and (roughly) syntactic format properties. In our example, the word "cat" and the picture of a cat have the same content, but different formats and mediums. The word "cat" has a linguistic format, and is realized as ink on paper. The picture of a cat has an imagistic format and is realized on a photographic plate. Similarly, mental representations with the same content can have different vehicles.
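The vehicle/content/format distinction can be made concrete with a small sketch. The following code is ours and purely illustrative; the class and field names are invented for the example. It simply shows two different vehicles, with different formats and media, carrying the same content.

```python
from dataclasses import dataclass

# Toy illustration of the vehicle/content distinction. Names are invented.
@dataclass
class Representation:
    content: str      # what the token is about (its correctness condition)
    format: str       # e.g., "linguistic" or "imagistic"
    medium: str       # the physical stuff the vehicle is made of
    vehicle: object   # the token that does the causal work

linguistic_token = Representation(
    content="a cat is present",
    format="linguistic",
    medium="ink on paper",
    vehicle="cat",                      # a word
)

imagistic_token = Representation(
    content="a cat is present",
    format="imagistic",
    medium="photographic plate",
    vehicle=[[0, 1, 1], [0, 1, 0]],     # a crude pixel array standing in for a picture
)

# Same content, different vehicles: semantic identity does not fix the
# format or the medium of the physical token that carries it.
assert linguistic_token.content == imagistic_token.content
assert linguistic_token.vehicle != imagistic_token.vehicle
```

Any operation defined over the tokens (comparing strings, transforming pixel arrays) is an operation on vehicles; the content is what both tokens are about, and it places no constraint by itself on how the tokens are physically realized.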

Three issues around mental representations have been especially central to debates in the philosophy of cognitive science. What explanatory role do mental representations play in different paradigms of cognitive science? What criteria do we need to introduce mental representations? And finally, how can intentionality be naturalized?

I.1. The Notion of Mental Representation as Used in Cognitive Science

I.1.1. Mental Representations and the Search for a Paradigm in Cognitive Science

The modern use of mental representation in cognitive science can be traced back to the introduction of the digital computer as a metaphor for thinking about the mind. This approach helped concretize the idea that thinking is a series of permutation operations over mental representations (ancestors of such a view can be found in Leibniz; see, e.g., Jolley 1995, 226–240). A program in a digital computer consists of data structures and algorithms operating over these data structures, implemented in physical hardware. Similarly, the mind can be seen as consisting of mental representations and computations ranging over these representations, implemented in neural hardware. Of course, the extent to which the brain is, rather than merely seems to be, a computer has been the focus of many heated debates (see, e.g., Sprevak 2010). However, we can sidestep this issue by distilling the core assumption of the computational-representational theory of mind, or CRTM, shared by most computational views—that computers can shed light on mental processes to the extent that mental processes involve computations over mental representations. This view of the mind is true if three conditions are met:

(1) mental states can serve as inputs and outputs to computational processes;
(2) mental states can be understood as tokenings of mental representations;
(3) mental processes that operate over mental representations are computational processes.

The CRTM is thus the conjunction of two theses: the computational theory of mind (CTM) and the representational theory of mind (RTM). Although these two theses of CRTM are widely considered to be inseparable (Fodor, for instance, famously wrote that there is "no computation without representation" [1981, 180]), they are in fact distinct. One may, for example, be committed to the view that the mind is essentially representational (e.g., in virtue of the representational character of perception and propositional attitudes), but remain neutral about the computational nature of cognition (most famously Searle 1990). Conversely, some proponents of CTM have denied that the vehicles that are manipulated in cognitive operations have semantic properties, even if they have syntactic ones (Chomsky 1983; Stich 1983). However, even though many philosophers working on CTM have given accounts of physical computation that do not appeal to the notion of representation (Miłkowski 2013; Piccinini 2015), they are nonetheless often independently committed to positing representations for explanatory purposes in cognitive psychology and neuroscience (Boone and Piccinini 2016; Gładziejewski and Miłkowski 2017). Thus, CRTM remains the most popular account of cognition across philosophy of cognitive science.

An undeniable appeal of CRTM is that it seems to solve several vexed problems in philosophy of mind. The first problem is how to account for mental states and processes within a physicalist framework: if mental processes are computational and can be executed by a computer, and computers are physical devices obeying the laws of physics, then we can build an account of mental life that is contained within the physicalist framework (Putnam 1960). The second problem is the problem of the causal efficacy of the mental (Kim 2003). Let's say that I have the intention of raising my hand and this intention causes me to raise my hand. What's going on in my brain while this process unfolds? According to CRTM, neural signals and processes act as algorithms over data structures. My mental representations have a content and a vehicle, which, for humans, is realized in a neural structure in the brain. It is in virtue of the fact that the semantic properties of a mental symbol correlate with the syntactic properties of the symbol that mental content can have a causal role (Fodor 1987). Mental processes are formal processes that detect only the syntactic properties of mental symbols, and not their semantic properties. Nonetheless, even if only the syntactic properties of mental symbols are causally efficacious, to the extent that the semantic properties of mental symbols are systematically correlated with their syntactic properties, the semantic properties of symbols are indirectly relevant to causal mental processes and can be used to keep track of them. This feature provides support to the appeal to mental representations in explanations in cognitive science (Shea 2018).

The third problem is how to account for the rationality of transitions in thought. We think of an individual's reasoning as a psychological process that starts with the individual accepting some propositions (premises) and ends with her accepting some new proposition (the conclusion) at which she arrived via some inferential operation (e.g., modus ponens, modus tollens). For such patterns of thought to be rational, the causal relations among this individual's psychological states must mirror the semantic relations among the contents of the propositions. How can the causal properties of the individual's beliefs (or more generally of her attitudes toward propositions) mirror their semantic properties? According to CRTM, what ensures this is the systematic relationship between the formal syntactic properties of mental symbols and their semantic content (Fodor and Pylyshyn 1988). The preceding position can be called a classical version of CRTM. Kim Sterelny sums up the mainstream view of the mind in the cognitive sciences as follows:

A computational theory of cognition plus an account of the causal relations between mind and world explain how we can have representing minds. The decomposition of the complex capacities into simple ones, plus an account of how simple operations can be built into the physical organization of the mind, explain how we can have computing minds. (Sterelny 1990, 40)
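To make the classical picture more concrete, here is a minimal sketch of our own (not an example from any of the chapters): belief tokens are syntactically structured objects, and the only transition rule, modus ponens, is defined entirely over their shape. Because the syntax is systematically related to the content, the purely causal, syntactic transitions end up respecting the semantic relations among the propositions.

```python
# A minimal "language of thought" sketch: belief tokens are syntactic
# structures (atoms and conditionals), and the update rule inspects only
# their form, never their meaning. The encoding is invented for illustration.

def modus_ponens_closure(beliefs):
    """Repeatedly add the consequent of any conditional whose antecedent is believed."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for belief in list(derived):
            # A conditional is encoded as the tuple ("->", antecedent, consequent).
            if isinstance(belief, tuple) and belief[0] == "->" and belief[1] in derived:
                if belief[2] not in derived:
                    derived.add(belief[2])
                    changed = True
    return derived

beliefs = {"rain", ("->", "rain", "streets_wet"), ("->", "streets_wet", "slippery")}
print(modus_ponens_closure(beliefs))
# the result includes 'streets_wet' and 'slippery' alongside the original premises
```

The update rule never consults what "rain" means; it is sensitive only to the form of the tokens, which is the sense in which classical computation is syntactic while still being semantically well behaved.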

Classicism is thus committed to the following ideas (Horgan 1997): cognition employs mental representations; cognition is sensitive to the syntactic features of mental representations, and indirectly to the content of mental representations; cognition consists in the manipulation of these representations according to rules (similar to a digital computer); and cognitive transitions are tractable and computable.

Perhaps the most famous version of CRTM is the language of thought (LOT) hypothesis (Fodor 1975). LOT claims that mental representation has a linguistic structure, and that thought takes place within a mental language or mentalese. According to LOT, CRTM has a further commitment to the following claim: mental representations encode propositional content (which is identical to, or constitutive of, the content of mental states such as beliefs) by having a language-like syntactic structure. The main argument in favor of LOT is based on an inference to the best explanation that starts from properties of thought that are shared with (observable) linguistic properties and ends up introducing some features of thought that best account for these properties (such as combinatorial syntax and compositional semantics) (Fodor 1987, 148). The picture of the mind suggested by LOT is one where propositional attitudes are realized as relations to sentences in the agent's mentalese. Primitive mental symbols of such a mental language are the bearers of underived intentionality. The psychologically relevant causal properties of propositional attitudes are inherited from the syntactic properties of the sentence tokens that realize the attitudes, and the semantic properties of propositional attitudes are inherited from the semantic properties of the LOT.

CRTM and LOT, however, are not the only game in town. Alternative approaches to cognition bring with them alternative views about mental representations. According to connectionism, cognitive processes can be understood better by taking the organization of neural structures rather than language as the starting point for our theorizing. Accordingly, connectionists model cognitive functions using artificial neural networks, which are made up of simple elements or "nodes," organized into layers by being connected by weighted "edges." The general idea of all neural networks is that the nodes in the input layer are activated to various degrees in response to stimuli detected in the environment. The activations of these nodes are propagated through the layers "hidden" from the environment, activating other nodes when the weighted values they receive from the preceding layer exceed their activation threshold. Eventually the signal reaches the output layer, where the activations of particular nodes are interpreted as the outputs of the function computed by the whole network. Although there has been a lot of debate about whether and how neural networks represent (Ramsey 1997; Opie and O'Brien 1999; Haybron 2000; Shea 2007), connectionists usually assume that mental representations are not symbolic, but distributed—that they are realized by patterns of activation implemented in the weights and nodes, where no part of the network by itself represents a distinct content (though some versions of connectionism involve local representations, and are thus more similar to CRTM than the distributed version—see Smolensky 1990; Clark 1993).
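For readers unfamiliar with the formalism, here is a minimal sketch of a feedforward pass of the kind just described. The sketch is ours rather than a model from the connectionist literature, and the weights are made up (in a real connectionist model they would be learned); whatever "representing" the network does is carried by the distributed pattern of activation across the hidden layer, not by any single symbol-like element.

```python
import numpy as np

def layer(activations, weights, biases):
    # Weighted sum of incoming activations followed by a sigmoid squashing
    # function, a smooth stand-in for the activation threshold in the text.
    net_input = activations @ weights + biases
    return 1.0 / (1.0 + np.exp(-net_input))

stimulus = np.array([0.9, 0.1, 0.4])            # activations of the three input nodes

w_hidden = np.array([[ 0.5, -0.3,  0.8,  0.1],  # 3 input nodes -> 4 hidden nodes
                     [-0.2,  0.7,  0.4,  0.9],
                     [ 0.3,  0.2, -0.6,  0.5]])
b_hidden = np.array([0.0, 0.1, -0.1, 0.0])

w_output = np.array([[ 0.6, -0.4],              # 4 hidden nodes -> 2 output nodes
                     [-0.1,  0.3],
                     [ 0.8,  0.2],
                     [ 0.1, -0.7]])
b_output = np.array([0.0, 0.0])

hidden = layer(stimulus, w_hidden, b_hidden)    # the distributed "hidden" activation pattern
output = layer(hidden, w_output, b_output)      # read off as the network's response

print(hidden)
print(output)
```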

Other proposed paradigms go even further, and suggest that cognitive science does not need mental representations at all. This is the case for dynamical theories (van Gelder 1995, 1998), which explain agents' cognition by viewing the mind as a complex system that can be understood via interacting dynamical models. Thus, their preferred metaphor for the mind is that of a system which dynamically evolves in time, such as the centrifugal Watt governor used for controlling the pressure of early steam engines, rather than a Turing machine (van Gelder 1995). Although the Watt governor can be studied using the notions of computation and representation, its behavior cannot be fully explained by an appeal to discrete structures. Indeed, there is no clear mapping of a sequence of algorithms onto the active parts of the governor, nor an identification of parts of the device with representations. Moreover, its operation cannot be easily divided into separate time steps. Instead, the behavior of the steam governor can be better accounted for with a series of differential equations. While the dynamicist approach is compatible with the postulation of mental representations, many proponents of this approach are anti-representationalist, aiming to redescribe cognition as consisting not in the manipulation of internal vehicles, but rather in the agents' dynamical coupling with their environment (Chemero 2011; Bruineberg and Rietveld 2014).

An increasingly popular view, which draws inspiration from connectionism (Dayan et al. 1995) and dynamical systems theory (especially dynamical-causal modeling in neuroscience—Friston, Harrison, and Penny 2003), is predictive processing (PP). PP is most widely understood as a version of the Bayesian brain hypothesis (Knill and Pouget 2004), which postulates that all cognitive processes involve inferential operations approximating the optimal solution prescribed by probability theory. Thus, PP views the brain as applying Bayes' theorem to infer the distal causes of sensory stimulations from the underdetermined proximal inputs. This approach is also committed to a particular, hierarchical cognitive architecture in which environmental regularities of increasing time spans and levels of abstraction are modeled by layers further removed from the sensory periphery. The probabilistic picture of cognition offered by PP is computational through and through, even if transitions between its states do not follow the rules of logic. The view, however, is compatible with different options for its representational status. On the one hand, the claim that the brain constructs models of its environment can be read as thoroughly representational (Gładziejewski 2016; Kiefer and Hohwy 2018). On the other hand, some philosophers have pointed out that the claim about the brain representing the world in a probabilistic way can be taken as a mere "as-if" metaphor (Block 2018; Orlandi 2016). Yet another group of researchers has drawn attention to the fact that not all structures within the formal apparatus of PP can be unambiguously claimed to represent the world (Clark 2015; Ramstead et al. 2019). This is an ongoing debate, and many of the authors featured in this volume have taken a stand on its central issues (see also the papers in Dołęga, Roelofs, and Schlicht 2018).
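The Bayesian idiom in which PP is couched can be illustrated with a worked toy example (ours, with made-up numbers, not a model from the PP literature): given a noisy proximal input, Bayes' theorem combines a prior over candidate distal causes with the likelihood of the input under each cause to yield a posterior, and the prediction error that PP schemes minimize corresponds to the gap between the input the model expects and the input it receives.

```python
# Toy Bayesian inference over the distal causes of a proximal input.
# The hypotheses, priors, and likelihoods are invented for illustration.

priors = {"animal_outside": 0.2, "wind": 0.8}            # P(cause)
likelihood = {"animal_outside": 0.9, "wind": 0.3}        # P(rustling_sound | cause)

# Predicted probability of the input before updating, and the resulting error
# once the rustling sound is actually heard (observation = 1).
predicted_prior = sum(priors[h] * likelihood[h] for h in priors)        # 0.42
error_before = 1.0 - predicted_prior                                    # 0.58

# Bayes' theorem: posterior over distal causes given the proximal input.
posterior = {h: priors[h] * likelihood[h] / predicted_prior for h in priors}
print(posterior)                      # roughly {'animal_outside': 0.43, 'wind': 0.57}

# After updating, the model predicts this kind of input better, i.e., the
# prediction error shrinks.
predicted_posterior = sum(posterior[h] * likelihood[h] for h in posterior)
print(error_before, 1.0 - predicted_posterior)            # roughly 0.58 -> 0.44
```

In hierarchical PP schemes this sort of update is, roughly, iterated across layers, with higher layers supplying the predictions against which the activity of lower layers is compared.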

I.1.2. Criteria for Mental Representations

Even if one accepts the CRTM, one still faces the challenge of specifying the criteria for the individuation of mental representations within the cognitive system. William Ramsey points out that there has been a long-standing confusion between trying to understand how a physical structure can play the functional role of a representation and "understanding the nature of the relationship in virtue of which it represents one sort of thing and not something else" (Ramsey 2016, 5). These are two distinct facets of representation, despite the fact that most accounts treat them as jointly necessary for something to qualify as a mental representation. In this section we will focus on the first aspect, leaving the second for the next section.

The question about the criteria by which something counts as a mental representation can be understood as a question about the job that a notion of mental representation is meant to carry out. To offer a description of such a "representational job" is to elucidate the representations' functional role, by providing a set of conditions that make it the case that something "is functioning as a representational state. In other words, it is the set of relations or properties that bestow upon some structure the role of representing" (Ramsey 2016, 4). Providing an adequate representational job description is primarily an explanatory challenge, one which is aimed at clarifying not only how some entity within a theory or explanation plays the function of a representation within the target system, but also why such a function is needed for the realization of the capacity or phenomenon in question. The quest for the criteria for mental representations amounts to finding representational explanations that are clear about the way in which the fact that a component of a system is a representation contributes to the functioning of that system.

Many different criteria for a representational job have been proposed. Haugeland (1991) sets out three criteria that have later played a central role in other theories of what makes a certain system representational:

1. The system coordinates its behavior with environmental features even when these features are not reliably present.
2. It copes with such cases by having something else "stand in" and guide behavior instead of immediate stimulation.
3. The "standing in" is systematic and part of a more general representational scheme of related representational states.

Andy Clark (1997) proposed another very influential account. One of the features of this account is to find a criterion for distinguishing between cognitive problems that do not need mental representations to be solved, and cognitive problems that do need mental representations. Clark calls the latter type of problems "representation-hungry" problems. For problems that do not require mental representations, "where the inner and the outer exhibit this kind of continuous, mutually modulatory, non-decoupleable coevolution, the tools of information-decomposition are . . . at their weakest. What matters in such cases are the real, temporally rich properties of the ongoing exchange between organism and environment" (166). On the other hand, "a processing story [is] representationalist if it depicts whole systems of identifiable inner states (local or distributed) or processes . . . as having the function of bearing specific types of information about external or bodily states of affairs" (147). Representation-hungry problems are mainly of two kinds:

1. Cases that involve reasoning about absent, nonexistent, or counterfactual states of affairs
2. Cases that involve selective sensitivity to states of affairs whose physical manifestations are complex and unruly (e.g., the ability to pick out all the valuable items in a room)

Failing to pay attention to the question of what makes something a representation has an impact on scientific practice. In neuroscience, neurons that function as mere relays, exhibiting no characteristic associated with a representational process other than correlating with some stimulus, are often still said to "represent." According to Ramsey
(2007), this confusion stems from the fact that many parts of the cognitive system function as mere detectors, signaling and carrying information about changes in the environment, but doing so without the ability to misrepresent or decouple from those environmental conditions.

How can we distinguish between non-representational and representational cognitive systems? Many theorists (including Cummins 1989; Ramsey 2007; and Shea 2014) have converged on an argumentative strategy of "comparison-to-prototype" to alleviate this worry (Gładziejewski 2016). The strategy consists in examining the everyday, pretheoretical use of a notion under investigation in order to find a widely accepted and uncontroversial application of the notion, which then serves as a prototype to which particular uses of the same notion in scientific literature are compared. For Cummins and others, cartographic maps serve as an intuitive prototype for ascriptions of representational function. As he points out, any token or type that is said to play the job of a representation should be recognizable as such. Paweł Gładziejewski (Gładziejewski 2015, 2016; Gładziejewski and Miłkowski 2017) develops this idea and identifies several features of all maps and maplike devices (such as, e.g., GPS navigation systems) which make them suitable for fulfilling the representational role on Ramsey's account. Maplike representations are useful in virtue of the relation obtaining between their properties and the features of their target (e.g., the external environment). Gładziejewski and others (Cummins 1989; Opie and O'Brien 2004; Palmer 1978; Ramsey 2007; Shea 2014) argue that the relevant representational relation between the representation and its target is structural resemblance. However, this does not mean that the representation must be an exact copy of its representational object, but merely that it preserves its structural organization in a way that can be exploited by the system within which it is embedded (Shea 2018). On this account, the main features that make something a representation are these:

• Action-guidance. One of the features of maps that is a direct consequence of the structural resemblance condition is that maps allow their users to make choices and adjust their actions in accordance with the information conveyed by the representation. Thus, representations and maplike devices should, in some way, be able to guide their users' actions.

• Detachability. Maps are useful not only when navigating through some terrain, but also before the onset of a journey, that is, during planning and preparation. The whole point of having a representation is that it can be used even when the representational object is not "reliably present and manifest" to its user (Haugeland 1998; see also Grush 1997; Clark and Grush 1999).

• Error detection. Finally, the resemblance and action-guiding conditions should be met in such a way as to allow for the possibility of the representational vehicle failing to fulfill its role. The main idea behind the error detection condition is not just that representations should be capable of representational failure, but that they should function in a way which allows for the detection of this failure. The action-guiding property of a map would be severely limited if its users could not realize that the map contains errors, or that it has been eroded in such a way that it no longer matches the layout of the terrain it is supposed to depict. Thus, Mark Bickhard (1999, 2004) and others (Gładziejewski 2015; Bielecka and Miłkowski, this collection) postulate that representational error should be system detectable. Error detection thus goes beyond a mere account of misrepresentation as a failure to meet the satisfaction or accuracy conditions of the representational content.
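These three features lend themselves to a compact illustration. The following toy sketch is ours, not an implementation from any of the chapters; the grid world, the dictionary-based map, and the function names are all invented for the example. The agent consults its internal map offline to plan a route (detachability and action guidance) and flags a mismatch between the map and an observation (system-detectable error).

```python
from collections import deque

# A toy "cognitive map": a dictionary from grid locations to what the agent
# expects to find there. The map deliberately mismatches the world in one place.
world     = {(0, 0): "start", (0, 1): "open", (1, 1): "open", (2, 1): "food"}
inner_map = {(0, 0): "start", (0, 1): "open", (1, 1): "wall", (2, 1): "food"}

def plan_route(cognitive_map, goal):
    """Offline use of the map (detachability): breadth-first search over the
    map itself, with no contact with the world, yielding an action-guiding plan."""
    frontier, seen = deque([[(0, 0)]]), {(0, 0)}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if cognitive_map.get((x, y)) == goal:
            return path
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (x + dx, y + dy)
            if nxt in cognitive_map and cognitive_map[nxt] != "wall" and nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def detect_error(cognitive_map, location, observation):
    """System-detectable mismatch between what the map says and what is observed."""
    return cognitive_map.get(location) != observation

print(plan_route(inner_map, "food"))                    # None: the spurious wall blocks the route
print(detect_error(inner_map, (1, 1), world[(1, 1)]))   # True: the error is detectable, so the map can be revised
```

Structural resemblance is what does the work here: planning succeeds or fails depending on how well the map's layout mirrors the world's, and the mismatch check is what makes the failure detectable to the system itself rather than only to an outside theorist.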

Other recent accounts of what makes a system truly representational are Nico Orlandi's proposal, which focuses on coordination with absence as the main criterion for genuine representations (2014, this volume), and Nick Shea's more pluralistic "varitel semantics" (2018). Importantly, Shea disagrees with Ramsey on whether the question about what makes a system representational can be divorced from the question of how intentional states have the content that they do (Shea 2018). According to him, the two issues cannot be separated, because it is the semantic properties of representations that give them their explanatory purchase. Representational explanations are, in some contexts, better than mere causal descriptions of behavior, because they postulate representations with specific contents, which, in turn, give us a grip on the success and failure conditions of the behavior under investigation. Thus, for Shea, the question of content determination is at the core of the representationalist project in cognitive science. It is this issue that we will turn to in the next section.

I.1.3. The Naturalization Project for Mental Content

We have seen that in CRTM the problem of how mental representations can play a causal role in mental processes is solved by the proposal that they are physical entities with semantic properties lining up with their syntactic properties. This solution, however, does not provide an answer to the question of how intentionality (the property of being directed at some object or state of affairs) comes about in the physical world. Intentionality is a feature proper to mental states, and not to physical states. This may seem surprising at first, because many artifacts also appear to exhibit intentionality: words in a book are about the things that the book is about, and a painting is about the subject it represents. Their intentionality, however, is not original, because they "inherit" their intentionality from the intentional states of the writer or the painter. In John Haugeland's words: "Intentionality . . . is not all created equal. At least some outward symbols (for instance, a secret signal that you and I explicitly agree on) have their intentionality derivatively—that is, by inheriting it from something else that has that same content already (such as the stipulation in our agreement). And, indeed, the latter might also have its content only derivatively from something else again; but obviously that can't go on forever" (Haugeland 1998, 129).

As already mentioned, Brentano is credited with the introduction of the contemporary notion of intentionality to philosophy of mind, and with proposing the thesis that intentionality is the mark of the mental (Brentano 1874/1995). The thesis that intentionality is the mark of the mental can be split into two sub-theses (Jacob 1997): that all mental phenomena exhibit intentionality; and that only mental phenomena exhibit intentionality. It is this second part of the thesis that constitutes a challenge to the materialist worldview. By accepting the second claim one ipso facto accepts the claim that no physical entity exhibits intentionality. Willard Van Orman Quine noticed the tension between the thesis that intentionality is the mark of the mental and physicalism, and wrote that "one may accept the Brentano thesis either as showing the indispensability of intentional idioms and the importance of an autonomous science of intention, or as showing the baselessness of intentional idioms and the emptiness of a science of intention. My attitude, unlike Brentano's, is the second" (Quine 1960, 221). Quine's dismissal of the possibility of a science of intentional (mental) states stems from what Pierre Jacob (1997) calls "Quine's dilemma," an inconsistent triad of claims:

1. Everything is physical (physicalism).
2. There are mental things (realism about the mental).
3. Mental things are not physical (since only mental states exhibit intentionality).

Giving up (3) is the starting point of the "naturalization project" which has occupied many philosophers since the 1980s. The aim of the naturalization project is to reduce intentionality to physical entities and relations, so as to ground intentional mental states in a physicalist worldview. The naturalization project takes on the challenge of building mental representations out of "purely natural ingredients" (Dretske 2002), and of showing that intentionality can arise from components that are physical and non-intentional. There are many naturalization projects for intentional content, all aiming at answering the question, "How do intentional states come to have the content they do?" (for an overview of possible solutions see Ryder 2009b). Two of the most influential proposals are indicator or informational semantics (Dretske 2002; Rupert 1999; Ryder 2004) and consumer-based semantics (Millikan 1989; Neander 1995; Papineau 1984).[3] Both these proposals are teleological in the sense that they appeal to functions in the explanation of how intentional states acquire the content they do, even if they differ in many other ways. Very roughly, while informational semantics appeals to natural information or indication, based on a lawful dependency between the two events involved in the indication relation, and to natural functions resulting from learning, consumer-based accounts shift the focus from producers of representations to consumers of representations (the systems that use the representations), and from natural functions resulting from learning to natural functions resulting from evolution.

[3] There are other theories of course, such as Fodor's asymmetrical causal dependency theory of meaning, explicitly formulated as an alternative to teleosemantic theories (Fodor 1987). More recently, Shea (2018) has proposed a pluralistic theory that combines the different teleosemantic approaches.

Teleological naturalistic theories of mental content face several worries. One such problem is the indeterminacy problem for content (Fodor 1990), which is the problem of identifying the determinate content of a mental representation among different compatible options that all account for the behavior of the creature with mental states. This can be illustrated by the frog's snapping behavior at flies. When a frog snaps at a fly to eat it, several options for the content of the frog's visual representation seem to be compatible with the frog's behavior: fly, frog food, or small dark moving thing. Solving this problem requires finding a principled way of distinguishing between these options. While Millikan (1991) appeals to "normal conditions" to solve this problem, other solutions are possible. Egan (this collection), for example, explores a pragmatist option. Another worry is the challenge of extending a naturalized account of representations from "minimal" representations (which include sensory and action representations) to all the types of mental representations humans can entertain. We can think about the past, the future, about absent entities and events, about abstract things (such as mathematical proofs), about nonexistent entities (such as unicorns), or even impossible objects (such as a square-circle). Extending the naturalistic approach to mental representations to these thoughts is a difficult but necessary endeavor for a fully naturalized theory of mental content (see Piccinini, this collection, for one solution to this challenge).

I.2. Overview of the Papers in This Volume

The chapters in this volume continue the exploration of central challenges related to mental representations in cognitive science, but they also deal with new issues emerging from recent developments. Some of the questions explored in this volume are these:

• Can we deflate the notion of mental representations? Can this deflated notion meet the explanatory needs of cognitive science?
• What are the criteria for distinguishing between structures that are mental representations and non-representational cognitive structures?
• What is the best account of mental representations?
• What is the role of mental representations in explanations in cognitive science?
• How can we solve some of the challenges that naturalistic theories of mental representations face?

In what follows, we will introduce the papers of the volume in more detail.

Frances Egan proposes a deflationary account of mental representations, which couples a realist construal of representational vehicles with a pragmatist account of representational content. The first part of the paper is dedicated to outlining the pragmatic aspect of the deflationary theory of representations. She starts by overviewing the major naturalistic theories of content and points toward the existence of a mismatch between these projects and the way in which computational neuroscientists appeal to content in their models. She offers an alternative that better accounts for the actual practice. This alternative combines a tracking approach with pragmatic considerations. Egan shows that tracking on its own cannot adjudicate between different options for the content of representational states, because all the information that is tracked by the state could be potentially relevant. Pragmatic considerations come into play to solve the indeterminacy problem. To take the classic controversy in teleological accounts: when a frog snaps at a fly, is the frog's internal state representing fly, frog food, or small dark moving thing? In Egan's account the choice between these options is pragmatic: if the goal of the theorist is to explain the frog's behavior in its ecological niche, the correct content assignment is fly; if it is to explain how the frog's visual system works, then it is small dark moving thing. On this view semantic content does not figure in our characterization of the computational mechanism involved in a task, and it is not an element of the computational theory; it is rather a gloss on the computational theory, which is used to facilitate the explanation of the relevant cognitive capacity. On such a view, there is no need for the development of a naturalistic theory of the content-determining relation. This is not the only role for content, however: in some cases it is used as a placeholder for a computational theory that is still under development, and serves as a starting point for scientific inquiry. The second part of the paper is dedicated to showing that representational vehicles, on the other hand, are real entities playing a causal role in the theory. She lays out her account of what makes an internal state function as a representation: in the case of a representation there is a realization function that specifies the structures that are the physically realized vehicles of representations, and an interpretation function that assigns contents to the structures. The first function isolates the causally relevant structures for the exercise of a certain cognitive capacity; the second function assigns them contents. Even if the assignment of the semantic content is a mere gloss, the representational vehicles to which the contents are assigned are real. To prove that representations characterized in this way are truly representational she shows that her account satisfies the adequacy conditions for genuine representations proposed by Ramsey (2007). Finally, she answers a challenge according to which the deflationary account does not correspond to actual practice in cognitive neuroscience, including recent developments in Bayesian approaches to models in cognitive science, where representations play an essential role in theories.

In his paper, Ramsey argues against a group of views according to which we should not treat appeals to representations in scientific practice as a commitment to the existence of ontologically robust entities, but rather as useful heuristics or fictions. This group of views positions itself as a middle ground between two more common options: realism about mental representations and eliminativism about mental representations. While there are different versions of the deflationist option, what unites them is the desire to avoid a strong ontological commitment to mental representations, while at the same time retaining the explanatory role of mental representations in cognitive science. According to this family of views, appeals to mental representations do not imply a commitment to objectively real structures existing in the brain with representational roles and semantic content. Ramsey focuses on three cases of representational deflationism: Chomsky's ersatz representationalism, the view that Sprevak calls neural representational fictionalism (2013), and Egan's intentional gloss account (2013, this volume). Against the intentional gloss account, Ramsey argues that cognitive content is essential to the role that the cognitive subsystem plays in explaining the psychological capacity at stake. And even if one can put the subsystem in a different embedding system so as to make it compute a different cognitive content, for Ramsey this does not bear on the status of cognitive content in the explanation of the functioning of the current cognitive system that scientists try to explain. For instance, distal content plays an essential role in explaining how vision works, in a way that is discounted by the deflationist option. In the end, for Ramsey, deflationist options fail to do justice to the way in which mental representations are used in explanations in cognitive science.

Dan Hutto and Erik Myin attack various deflationist views of mental representations from their own viewpoint of radical enactivism. Among the targets of their criticism is Egan's deflationary approach (2013, this volume). According to them, deflationist accounts are threatened by two challenges: on the one hand, they need to prove that they do not reduce to an anti-representationalist account; on the other hand, they need to show that they have an explanatory advantage over non-representational theories. In the second part of their paper, Hutto and Myin suggest that even deflated content, such as the mathematical content of the M-representations defended by Egan, might itself end up being a useful heuristic, a "mathematical gloss" which is just another, abstract way of describing the actual causal processes that are responsible for the transitions between the system's internal states, rather than a content that representations have essentially.

One could accept that mental representations are central posits in explanations in cognitive science, and yet be skeptical about views that identify mental representations in a way that trivializes their theoretical import. Nico Orlandi explores the issue of distinguishing between structures that are mere informational relays (such as neuronal clusters in early vision) and structures that are mental representations proper (e.g., distal representations of objects in late vision). She focuses on three conditions for a structure to be a mental representation: (a) having content; (b) acting as a stand-in; (c) guiding stimulus-free behavior. The first criterion can be used even if we lack a problem-free theory of content, but the lack of such a theory makes the first criterion for mental representation-hood problematic on its own. The reason for this is that it does not give us an answer to the question, What makes a certain system a representation at all? This is the "use" condition on a theory of representations. Orlandi's reply to this challenge is to appeal to the notion of serving as a stand-in that guides stimulus-free behavior. She discusses a problem faced by such a view: "use" has to be mechanized, as we can't appeal to a homunculus-like interpreter for the consumer system that uses mental representations. She expands the view by developing the condition of coordinating with what is absent in the control of behavior. Coordination with absence means that genuine mental representations can be decoupled from what they causally track, and be about distal and absent environments.

Despite existing criticism, the representational theory of mind holds its position as the most successful paradigm in cognitive science, as demonstrated by its positive outcomes in explaining an array of cognitive capacities, from language understanding and production, to navigation, memory, perception, and action production. In his paper, Michael Rescorla outlines a new version of RTM, called the capacities-based representational theory of mind (C-RTM). The starting point in his paper is the notion of "representational capacities" (the capacity to represent colors, the capacity to represent whales, etc.), on the basis of which we can sort mental states and events into types. Mental representations are reifications of these types, i.e., objects in an abstract sense. After presenting the general view, Rescorla applies it to several features of mental representations: (a) how they can be combined to form complex mental representations; (b) modes of presentation for representations with the same denotation; (c) the way that semantic properties can play a role in the individuation of mental representations (and not only syntactic properties, as in classic Fodorian RTM). The latter feature better accounts for explanations in cognitive science because, Rescorla argues, ascribing representational properties to mental events plays a central role in taxonomizing said events for the purpose of explanation. An account in which the semantic properties are not subject to arbitrary reinterpretation, but play a role in individuating mental representations, facilitates the stabilization of the taxonomy of mental events to be explained and promotes reaching a consensus on what the targets of cognitive scientific explanations are.

Albert Newen and Gottfried Vosgerau propose another general theory of mental representations. Their view is supposed to be a vindication of mental representations in reaction to the eliminativist's challenge, while avoiding the commitment to a specific claim about the right format of mental representations and the right level for the representational explanation. This account integrates a functionalist approach to mental representations with a variable relational dimension between the representation and the situation. In virtue of the relational dimension, mental representations have a dynamic aspect that can be constructed on the fly according to context. Importantly, the features of the situation that are relevant to the use of mental representation in explanation are determined by both the behavioral capacity to be explained and the explanatory goals of the scientists. While the first part of their paper concerns mostly the limitations that need to be placed on a satisfactory account of mental representation, the authors flesh out their proposal in the second part of the paper. They enumerate the crucial components of their theory of representations as a commitment to (1) a functionalist approach; (2) use-dependence of mental representation; (3) multiple formats of mental representations. With regard to their functionalist assumption, Vosgerau and Newen stress that the capacities under investigation should be defined as involving not only mappings from stimuli to behavior, but also mappings between the internal states of the system that mediate between the two. Going further, the commitment to use-dependence places two requirements on the introduction of mental representations. First, not all types of behaviors need mental representations to be explained; the type of behavior that requires mental representations exhibits some degree of flexibility. Second, and most importantly, the appeal to mental representations in making sense of behavior must present an explanatory advantage over a non-representational explanation.
Finally, the commitment to multiple formats of representation is to be understood as referring to the fine-grained structure of a mental representation, determined by a certain level of equivalence in the substitution relation. This substitution relation is, in turn, defined over processes that follow the detection of the actually present object or property with equivalent internal processes. As the authors stress, there are multiple levels of such equivalence, and the choice of level involves pragmatic considerations. Therefore, mental representations can be understood at different levels: the level of the vehicle (i.e., the neural correlate of the mental representation which participates in the physical mechanism), the level of format (selected according to the adequacy condition), and the level of content (which is systematically related to the vehicle). Importantly, formats come in different varieties: correlational, isomorphic, and propositional. The same type of behavior can be explained using different formats for the mental representations involved. This is why Newen and Vosgerau allow for the indeterminacy of content, while not seeing it as a fatal flaw for a theory of mental representations.

Joulia Smortchkova and Michael Murez explore the question of the discovery of natural representational kinds: categories of mental representations that are used in scientific theories as a basis for induction and that figure in explanations. They notice that, while philosophers of cognitive science have paid attention to the question of what makes mental representations a natural kind as a whole, they have paid less attention to what makes certain representational categories natural kinds within a cognitive theory. For example, what makes "action-representations" or "object-files" a representational natural kind, but not, say, "wombat-representations"? In the chapter, they search for possible criteria for identifying representational natural kinds by contrasting categories used as natural kinds in theories with other clusters of mental representations. They argue that shared semantic properties are not sufficient for identifying representational kinds within the explanatory project of cognitive science, and suggest that instead the detection and identification of representational kinds is connected to the discovery of a certain "depth." This notion of depth goes hand in hand with a mainstream approach to explanation in cognitive science, according to which the methodology of cognitive science is "vertical decomposition" of complex capacities into simpler components and their interactions. This decomposition can be articulated at different levels, which leads Smortchkova and Murez to sketch the following proposal for representational kindhood: mental representations in cognitive science are multilevel kinds.

Starting from a criticism of Mark Sprevak's challenges to the functionalist approach to psychological states, Nicholas Shea outlines some conceptual distinctions central to the functionalist approach that are sometimes conflated. Shea notes that we need to distinguish between functionalism about psychological states in general, that is, the criteria for what it takes to be a psychological system at all, and functionalism about a specific psychological state, such as a belief (or a desire or an intention). One could individuate determinate psychological states, such as beliefs, in a fine-grained manner, while allowing for a coarse-grained individuation of a psychological system. A coarse-grained individuation of psychological systems, as opposed to beliefs, would count a Martian as an entity endowed with a psychological system, as long as it exhibits certain marks of mental life. Yet one would be justified in not attributing beliefs to the Martian if there is no individual type of psychological mental state sharing functional properties with beliefs in humans.

Gualtiero Piccinini's paper offers a new proposal for the project of naturalizing intentionality. The aim of the paper is to provide an alternative to mainstream informational teleosemantics and fill some gaps in the approach. He notices that even if informational teleosemantics applies to natural representations (such as perceptual representations), it does not apply to nonnatural representations (such as belief representations). Natural representations are those that carry natural semantic information, which raises the probability that P (where P is what the representation is about). Nonnatural representations, on the other hand, carry nonnatural semantic information, which does not raise the probability that P (where P is what the representation is about). Paradigmatic states with nonnatural representations are imaginings, dreams, pretendings, etc. Existing accounts of informational teleosemantics cannot account for nonnatural representations, the reason being that for nonnatural representations, misrepresenting and representational malfunctioning are not two sides of the same coin. Piccinini's solution for a naturalistic account of nonnatural representations lies in the notion of offline simulation of nonactual environments. Offline simulation is the missing ingredient in the naturalistic recipe for thought. Nonnatural representations, in Piccinini's account, are accounted for by the conjunction of the known story for natural representations with offline simulation of nonactual environments, and tracking of the decoupling between the simulation and the actual environment. In a certain sense, Piccinini's solution has its roots in the empiricist tradition: the content of nonnatural representations piggybacks on the content of natural representations, by deploying the resources of natural representations while decoupling them from the actual environment, together with a compositional semantics that allows for the combination of representational contents that would have been deployed online, had the system been tracking the actual environment. Nonnatural misrepresentation occurs when the system malfunctions in the sense of failing to track the decoupling between the offline simulation and the actual environment. Having provided a naturalistic account of nonnatural representation, Piccinini sketches an extension to nonnatural meaning.

The debate over naturalizing mental representations is also the starting point of Krystyna Bielecka and Marcin Miłkowski's contribution. In particular, they are interested in the question of how satisfaction conditions could be available to a cognitive system. They propose a "mechanization" of core ideas in the teleosemantic naturalization project, by postulating that mental representations are parts of mechanisms which serve a biological function of representing. Importantly, while such functions are presumed to be sensitive to the semantic properties of the relevant representations, they do not need to be specified in semantic terms themselves. The authors make sense of this proposal by distinguishing between semantic information and mental representation proper. Using the notion of structural information, first introduced by MacKay, Bielecka and Miłkowski postulate that semantic information can be understood to refer to physical tokens that could be about something, for example, by having a structure which maps onto the structure of something else. What distinguishes such information from mental representation is the way that these two features—intentionality and aboutness—become available to and are exploited by the cognitive system. Thus, for a piece of semantic information to serve as a mental representation, the cognitive agent must be sensitive to the satisfaction conditions of the representation and be able to determine whether the representation represents the target it is about, or whether it fails to do so (or does so inadequately). According to the authors, "error detection" is constitutive of representation, and one can give an account of misrepresentation through this notion. Once again, they make use of the notion of structural information to spell out error detection in terms of a coherence-based account of system-detectable mismatch and a sketch of a computational consumer mechanism responsible for such error detection. Incoherence occurs when the system detects a discrepancy between two physical vehicles. Crucially, this is a syntactic operation, which can be treated as semantic only in those cases where error is consumed in a

22  What Are Mental Representations? way that is sensitive to its semantic value. In other words, error detection is a non-​intentional ingredient in the mechanized account of misrepresentation. This introduction barely scratches the surface of the many complex debates surrounding the topic of mental representations in philosophy of cognitive science. We hope that the new proposals and the exchanges in this volume will contribute to moving the debates forward, and that readers from different backgrounds will find the volume useful as an advanced collection of recent issues on mental representations in philosophy of cognitive science.

References Bechtel, W. 2016. Investigating Neural Representations: The Tale of Place Cells. Synthese 193 (5): 1287–​1321. Bickhard, M. H. 1999. Representation in Natural and Artificial Agents. In E. Taborsky (ed.), Semiosis. Evolution. Energy: Towards a Reconceptualization of the Sign, 15–​26. Herzogenrath: Shaker Verlag. Bickhard, M. H. 2004. The Dynamic Emergence of Representation. In H. Clapin (ed.), Representation in Mind, 71–​90. Amsterdam: Elsevier. Block, N. 2018. If Perception Is Probabilistic, Why Doesn’t It Seem Probabilistic? Philosophical Transactions of the Royal Society B 373:  20170341. http://​dx.doi.org/​ 10.1098/​rstb.2017.0341. Boone, W., and Piccinini, G. 2016. The Cognitive Neuroscience Revolution. Synthese 193 (5): 1509–​1534. Brentano, F. 1874/​1995. Psychology from an Empirical Standpoint. New York: Routledge. Bruineberg, J., and Rietveld, E. 2014. Self-​Organization, Free Energy Minimization, and Optimal Grip on a Field of Affordances. Frontiers in Human Neuroscience 8: 1–​14. Chemero, A. 2011. Radical Embodied Cognitive Science. Cambridge, MA: MIT Press. Chomsky, N. 1983. Rules and Representations. Tijdschrift Voor Filosofie 45 (4): 663–​664. Clark, A. 1993. Associative Engines:  Connectionism, Concepts, and Representational Change. Cambridge, MA: MIT Press. Clark, A. 2015. Radical Predictive Processing. Southern Journal of Philosophy 53 (Spindel Supplement): 3–​27. Clark, A., and Grush, R.  1999. Towards a Cognitive Robotics. Adaptive Behavior 7 (1): 5–​16. Crane, T. 2003. The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation. 2nd ed. New York: Routledge. Cummins, R. C. 1989. Meaning and Mental Representation. Cambridge, MA: MIT Press. Dayan, P., Hinton, G. E., Neal, R. M., and Zemel, R. S. 1995. The Helmholtz Machine. Neural Computation 7: 889–​904. Dołęga, K., Roelofs, L., and Schlicht, T., eds. 2018. Enactivism, Representationalism, and Predictive Processing. Special issue of Philosophical Explorations 21 (2): 187–​203. Dretske, F. 2002. A Recipe for Thought. In D. J. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, 491–​499. New York: Oxford University Press. Egan, F. 2013. How to Think about Mental Content. Philosophical Studies 170 (1): 115–​135.

Introduction  23 Fodor, J. A. 1975. The Language of Thought. New York: Crowell Press. Fodor, J. A. 1981. Representations: Philosophical Essays on the Foundations of Cognitive Science. Cambridge, MA: MIT Press. Fodor, J. A. 1987. Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: MIT Press. Fodor, J. A. 1990. A Theory of Content and Other Essays. Cambridge, MA: MIT Press. Fodor, J. A., and Pylyshyn, Z. W. 1988. Connectionism and Cognitive Architecture: A Critical Analysis. Cognition 28: 3–​71. Friston, K. J., Harrison, L., and Penny, W. 2003. Dynamic Causal Modelling. NeuroImage 19 (4): 1273–​1302. Gładziejewski, P. 2015. Explaining Cognitive Phenomena with Internal Representations: A Mechanistic Perspective. Studies in Logic, Grammar, and Rhetoric 40 (1): 63–​90. Gładziejewski, P. 2016. Predictive Coding and Representationalism. Synthese 193 (2): 559–​582. Gładziejewski, P., and Miłkowski, M. 2017. Structural Representations: Causally Relevant and Different from Detectors. Biology and Philosophy 32 (3): 337–​355. Grush, R. 1997. The Architecture of Representation. Philosophical Psychology 10 (1): 5–​23. Haugeland, J. 1991. Representational Genera. In W. Ramsey, S. P. Stich, and D. E. Rumelhart (eds.), Developments in Connectionist Theory: Philosophy and Connectionist Theory, 61–​89. Hillsdale, NJ: Lawrence Erlbaum Associates. Haugeland, J.  1998. Having Thought:  Essays in the Metaphysics of Mind. Cambridge, MA: Harvard University Press. Haybron, D. M. 2000. The Causal and Explanatory Role of Information Stored in Connectionist Networks. Minds and Machines 10 (3): 361–​380. Horgan, T. E. 1997. Connectionism and the Philosophical Foundations of Cognitive Science. Metaphilosophy 28 (1–​2): 1–​30. Jacob, P. 1997. What Minds Can Do: Intentionality in a Non-​intentional World. Cambridge: Cambridge University Press. Jolley, N., ed. 1995. The Cambridge Companion to Leibniz. Cambridge:  Cambridge University Press. Kiefer, A., and Hohwy, J. 2018. Content and Misrepresentation in Hierarchical Generative Models. Synthese 195 (6): 2387–​2415. Kim, J. 2003. Blocking Causal Drainage and Other Maintenance Chores with Mental Causation. Philosophy and Phenomenological Research 67 (1): 151–​76. Knill, D., and Pouget, A. 2004. The Bayesian Brain: The Role of Uncertainty in Neural Coding and Computation. Trends in Neurosciences 27 (12): 712–​719. Miłkowski, M. 2013. Explaining the Computational Mind. Cambridge, MA: MIT Press. Miłkowski, M. 2015. The Hard Problem of Content: Solved (Long Ago). Studies in Logic, Grammar, and Rhetoric 41 (54). https://​doi:10.1515/​slgr-​2015-​0021. Millikan, R. G. 1989. Biosemantics. Journal of Philosophy 86 (6): 281–​297. Millikan, R. G. 1991. Speaking Up for Darwin. In B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, 151–​165. Cambridge, MA: Blackwell. Neander, K. 1995. Misrepresenting and Malfunctioning. Philosophical Studies 79: 109–​141. Opie, J., and O’Brien, G. 1999. A Connectionist Theory of Phenomenal Experience. Behavioral and Brain Sciences 22 (1): 127–​148. Opie, J., and O’Brien, G.  2004. Notes Toward a Structuralist Theory of Mental Representation. In H. Clapin, P. Staines, P. Slezak (eds.), Representation in Mind: New Approaches to Mental Representation, 1–​20. Amsterdam: Elsevier.

24  What Are Mental Representations? Orlandi, N. 2014. The Innocent Eye: Why Vision Is Not a Cognitive Process. New York: Oxford University Press. Orlandi, N. 2016. Bayesian Perception Is Ecological Perception. Philosophical Topics 44 (2): 327–​351. Palmer, S. 1978. Fundamental Aspects of Cognitive Representation. In E. Rosch and B. B. Loyd (eds.), Cognition and Categorization, 259–​303. Hillsdale, NJ: Lawrence Erlbaum Associates. Papineau, D. 1984. Representation and Explanation. Philosophy of Science 51: 550–​572. Piccinini, G. 2015. Physical Computation:  A Mechanistic Account. Oxford:  Oxford University Press. Pitt, D. 2018. Mental Representation. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Winter 2018 ed. https://​plato.stanford.edu/​archives/​win2018/​entries/​ mental-​representation/​. Putnam, H. 1960. Minds and Machines. In S. Hook (ed.), Dimensions of Minds, 138–​164. New York: New York University Press. Quine, W. V. O. 1960. Word and Object. Cambridge, MA: MIT Press. Ramsey, W. M. 1997. Do Connectionist Representations Earn Their Explanatory Keep? Mind & Language 12 (1): 34–​66. Ramsey, W. M. 2007. Representation Reconsidered. Cambridge:  Cambridge University Press. Ramsey, W. M. 2016. Untangling Two Questions about Mental Representation. New Ideas in Psychology 40(A): 3–​12. Ramstead, M. J. D., Kirchhoff, M. D., Constant, A., and Friston, K. J. 2019. Multiscale Integration:  Beyond Internalism and Externalism. Synthese 1–​30. https://​doi.org/​ 10.1007/​s11229-​019-​02115-​x. Rupert, R. 1999. The Best Test Theory of Extension: First Principle(s). Mind & Language 14: 321–​355. Ryder, D. 2004. SINBAD Neurosemantics: A Theory of Mental Representation. Mind & Language 19: 211–​240. Ryder, D. 2009a. Problems of Representation I:  Nature and Role. In F. Garzon and J. Symons (eds.), The Routledge Companion to the Philosophy of Psychology, 233–​250. London: Routledge. Ryder, D. 2009b. Problems of Representation II: Naturalizing Content. In F. Garzon and J. Symons (eds.), The Routledge Companion to the Philosophy of Psychology, 251–​279. London: Routledge. Searle, J. 1990. Is the Brain a Digital Computer? Proceedings and Addresses of the American Philosophical Association 64: 21–​37. Shea, N. 2007. Content and Its Vehicles in Connectionist Systems. Mind & Language 22 (3): 246–​269. Shea, N. 2014. Exploitable Isomorphism and Structural Representation. Proceedings of the Aristotelian Society 114 (2): 123–​144. Shea, N. 2018. Representation in Cognitive Science. Oxford: Oxford University Press. Smolensky, P. 1990. Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems. Artificial Intelligence 46 (1–​2): 159–​216. Sprevak, M. 2010. Computation, Individuation, and the Received View on Representation. Studies in History and Philosophy of Science 41: 260–​270. Sprevak, M. 2013. Fictionalism about Neural Representations. The Monist 96 (4): 539–​560. Sterelny, K. 1990. The Representational Theory of Mind. Cambridge, MA: Blackwell.

Stich, S. 1983. From Folk Psychology to Cognitive Science. Cambridge, MA: MIT Press. van Gelder, T. 1995. What Might Cognition Be If Not Computation? Journal of Philosophy 92 (7): 345–381. van Gelder, T. 1998. The Dynamical Hypothesis in Cognitive Science. Behavioral and Brain Sciences 21 (5): 615–628.

1
A Deflationary Account of Mental Representation
Frances Egan

1.1.  Preliminaries A commitment to representation presupposes a distinction between representational vehicle and representational content. The vehicle is a physically realized state or structure that carries or bears content. Insofar as a representation is causally involved in a cognitive process, it is in virtue of the representational vehicle. A state or structure has content just in case it represents things to be a certain way; it has a "satisfaction condition"—the condition under which it represents accurately. We can sharpen the distinction by reference to a simple example. See figure 1.1. Most generally, a physical system computes the addition function just in case there exists a mapping from physical state types to numbers, such that physical state types related by a causal state transition relation are mapped to numbers n, m, and n + m related as addends and sums. But a perspicuous rendering of a computational model of an adder depicts two mappings: at the bottom, a realization function (fR) that specifies the physically realized vehicles of representation—here, numerals, but more generally structures or states of some sort—and, at the top, an interpretation function (fI) that specifies their content. The bottom two horizontal arrows depict causal relations (the middle at a higher level of abstraction); the top arrow depicts the arguments and values of the computed function. When the system is in the physical states that under the mapping represent the numbers n, m (e.g., 2 and 3), it is caused to go into the physical state that under the mapping represents their sum (i.e., 5). For any representational construal of a cognitive system we can ask two questions: (1) How do the posited internal representations get their meanings? This, in effect, is the problem of intentionality. (2) What is it for

Figure 1.1  An adder. [Diagram, labeled "EXAMPLE – ADDITION": the interpretation function fI maps the vehicles s1, s2, and s3 to the numbers n, m, and n + m; the realization function fR maps the physical states p1, p2, and p3 to those vehicles.]

an internal state or structure to function as a representation, in particular, to serve as a representational vehicle? The appeal to representations should not be idle—​positing representations should do some genuine explanatory work. In our terms, what is at stake with (1) and (2) is justifying the interpretation and the realization functions respectively. I shall discuss each in turn later. First, however, it is useful to set out Ramsey’s (2007) adequacy conditions on a theory of mental representation. This will provide a framework for evaluating the account to be defended here. Ramsey identifies at least five general constraints: (1) Mental representations should serve a function sufficiently like paradigm cases of representation. Public language is probably the clearest case; maps are another exemplar. (2) The content of mental representations should be causally relevant to their role in cognitive processes. (3) The account should not imply pan-​representationalism: lots of clearly nonrepresentational things should not count as representations. (4) The account should not underexplain representational capacities; it should not, for example, presuppose such intentional capacities as understanding. (5) Neither should it overexplain representational capacities, such that representation is explained away. For example, according to Ramsey, if representations function as “mere causal relays” then, in effect, the phenomenon of interest has disappeared. With Ramsey’s adequacy conditions for a theory of mental representation on the table, let us turn to our first problem: the problem of mental content.
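Before turning to that problem, it may help to make the adder of figure 1.1 concrete. The following sketch is only a toy illustration; the state names, the transition table, and the particular numbers are stand-ins introduced for this example rather than anything given in the chapter.

```python
# A toy model of the adder in figure 1.1 (all names invented for this sketch).

# Realization function fR: physical state types -> representational vehicles.
f_R = {"p1": "s1", "p2": "s2", "p3": "s3"}

# Interpretation function fI: vehicles -> their contents (here, numbers).
f_I = {"s1": 2, "s2": 3, "s3": 5}

# The causal state-transition relation: being in p1 and p2 causes the system
# to go into p3. Recorded here as a simple lookup table.
causal_transition = {("p1", "p2"): "p3"}

# Under the two mappings, the causal transition mirrors addition.
inp = ("p1", "p2")
out = causal_transition[inp]
assert f_I[f_R[out]] == f_I[f_R[inp[0]]] + f_I[f_R[inp[1]]]

print("vehicles:", f_R[inp[0]], f_R[inp[1]], "->", f_R[out])
print("contents:", f_I[f_R[inp[0]]], "+", f_I[f_R[inp[1]]], "=", f_I[f_R[out]])
```

The sketch makes vivid that the causal transitions are defined over the vehicles picked out by fR; the contents supplied by fI sit on top of that causal story.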


1.2.  Representational Content: The Naturalistic Proposals We can identify several widely accepted constraints on an account of content for cognitive neuroscience: (1) The account should provide the basis for the attribution of determinate contents to the posited states or structures. (2) The account should allow for the possibility that the posited states can misrepresent. The motivating idea is that genuinely representational states represent robustly, in the way that paradigmatic mental states such as beliefs represent; they allow for the possibility of getting it wrong. There is a constitutive connection between constraints (1) and (2). If the theory cannot underwrite the attribution of determinate satisfaction conditions to a mental state (type), then it cannot support the claim that some possible tokenings of the state occur when the conditions are not satisfied, and hence would misrepresent. (3) The account should be naturalistic. Typically, this constraint is construed as requiring a specification, in non-​semantic and non-​ intentional terms, of (at least) a sufficient condition for a state or structure to have a particular content. Such a specification would guarantee that the theory makes no illicit appeal to the very phenomenon—​meaning—​that it is supposed to explain. This idea motivates so-​called tracking theories, discussed later. More generally, the constraint is motivated by the conviction that intentionality is not fundamental: It’s hard to see  .  .  .  how one can be a realist about intentionality without also being, to some extent or other, a reductionist. If the semantic and the intentional are real properties of things, it must be in virtue of their identity with (or maybe supervenience on) properties that are themselves neither intentional nor semantic. If aboutness is real, it must be something else. (Fodor 1987, 97) There are no “ultimately semantic” facts or properties, i.e. no semantic facts or properties over and above the facts and properties of physics, chemistry, biology, neurophysiology, and those parts of psychology, sociology, and anthropology that can be expressed independently of semantic concepts. (Field 1975, 386)

Deflationary Account of Mental Representation  29 Finally, (4) The account should conform to the actual practice of content attribution in cognitive neuroscience. It should be empirically accurate. Explicitly naturalistic theories explicate content in terms of a privileged relation between the tokening of an internal state and the object or property the state represents. Thus the state is said to “track” (in some specified sense) the external condition that serves as its satisfaction condition. To satisfy the naturalistic constraint both the relation and the relata must be specified in non-​intentional and non-​semantic terms. Various theories offer different accounts of the content-​determining relation. I will discuss very briefly the most popular proposals, focusing on their failure, so far, to underwrite the attribution of determinate contents to internal states. I will then illustrate how a pragmatic account of content of the sort I defend handles this thorny issue. Information-​theoretic accounts hold, very roughly, that an internal state S means cat if and only if S is caused by the presence of a cat, and certain further conditions obtain.1 Further conditions are required to allow for the possibility of misrepresentation, that is, for the possibility that some S-​tokenings are not caused by cats but, say, by large rats on a dark night, and hence misrepresent a large rat as a cat. A notable problem for information-​theoretic theories is the consequence that everything in the causal chain from the presence of a cat in the distal environment to the internal tokening of S, including catlike patterns in the retinal image, appears to satisfy the condition, and so would fall into S’s extension. Thus, information-​theoretic theories typically founder on constraint (1), failing to underwrite determinate contents for mental states, and hence have trouble specifying conditions under which tokenings of the state would misrepresent (condition [2]‌). The outstanding problem for such theories is to provide for determinacy without illicit appeal to intentional or semantic notions. Teleological theories hold that internal state S means cat if and only if S has the natural function of indicating cats. The view was first developed and defended by Millikan (1984), and there are now many interesting variations 1 See Dretske 1981 and Fodor 1990 for the most developed information-​theoretic accounts. Further conditions include the requirement that during a privileged learning period only cats cause S-​tokenings (Dretske 1981)  or that non-​cat-​caused S-​tokenings depend asymmetrically on cat-​ caused S-​tokenings (Fodor 1990).

30  What Are Mental Representations? on the central idea.2 Teleosemanticists have been notoriously unable to agree on the natural function of states of even the simplest organisms.3 Let’s focus on a widely discussed case. Does the inner state responsible for engaging a frog’s tongue-​lashing behavior have the function of indicating (and hence representing) fly, frog food, or small dark moving thing? Teleosemanticists, at various times, have proposed all three. We might settle on fly, but then Quinean indeterminacy4 rears its head: a fly stage detector or an undetached fly part detector would serve the purpose of getting nutrients into the frog’s stomach equally well. The problem is that indeterminate functions cannot ground determinate contents. Each of various function-​candidates specifies a different satisfaction condition; unless a compelling case can be made for one function-​candidate over the others, teleosemantics runs afoul of constraint (1). Moreover, the argument must not appeal to intentional or normative considerations (such as what makes for a good explanation), on pain of violating the naturalistic constraint. A third type of tracking theory appeals to the type of relation that holds between a map and the domain it represents, that is, structural similarity or isomorphism.5 Cummins 1989, Ramsey 2007, and Shagrir 2012 have proposed variations on this idea. Of course, since similarity is a symmetric relation but the representation relation is not, any account that attempts to ground representational content in similarity will need supplementation by appeal to something like use. Of more concern in the present context, a given set of internal states or structures is likely to be structurally similar to any number of external conditions. The question is whether structural similarity can be sufficiently constrained to underwrite determinate contents while still respecting the naturalistic constraint. The upshot of this short discussion is that tracking theories of mental content face formidable problems in underwriting content determinacy, and hence the possibility of misrepresentation, in a way that satisfies the naturalistic constraint. One might simply conclude that more work needs to be done, that naturalistic semantic theorists should continue to look for naturalistic conditions that would further constrain content. However, if the proposed meaning-​determining relation becomes too baroque it will fail to be 2 See Matthen 1988; Papineau 1993; Dretske 1995; Ryder 2004; Neander 2006, 2017; and Shea 2007, 2018 for other versions of teleosemantics. 3 See the discussion of the magnetosome in Dretske 1986 and Millikan 1989. 4 See Quine 1960. 5 Better, homomorphism, or what O’Brien and Opie 2004 call a “second-​order resemblance” (11).

Deflationary Account of Mental Representation  31 explanatory, leaving us wondering why that particular relation determines content. Despite the fact that there is no widely accepted naturalistic foundation for representational content, computational theorists persist in employing representational language in articulating their models. It is unlikely that they have discovered a naturalistic meaning-​determining relation that has so far eluded philosophers. Shea (2013, 499) claims that cognitive science takes semantic properties for granted, “offer[ing] no settled view about what makes it the case that the representations relied on have the contents they do. The content question has been largely left to philosophy.” There is something right about this idea, which I will say more about in the next section, but on its face it would be a bitter pill for the majority of philosophers of mind who look to the cognitive sciences, and in particular, to computational neuroscience, to provide a naturalistic explanation of our representational capacities. Their hopes would be dashed if cognitive science just kicks the project of naturalizing the mind back to philosophy. The apparent mismatch between the theories of content developed by philosophers pursuing the naturalistic semantics project and the actual practice of computational theorists in attributing content in their models cries out for explanation; it motivates a different sort of account.
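The indeterminacy problem facing the tracking proposals can be made concrete with a small sketch. Everything in it is invented for illustration: the stimulus properties, the sample environment, and the thrown-pellet case are assumptions, not material drawn from the chapter.

```python
# A toy frog detector and three candidate content assignments for it.
from dataclasses import dataclass

@dataclass
class Stimulus:
    is_fly: bool
    is_edible: bool          # "frog food"
    small_dark_moving: bool

def detector_fires(s: Stimulus) -> bool:
    # The internal state S is tokened whenever a small dark moving thing is present.
    return s.small_dark_moving

# Three candidate interpretations, each assigning S a different satisfaction condition.
candidate_contents = {
    "fly": lambda s: s.is_fly,
    "frog food": lambda s: s.is_edible,
    "small dark moving thing": lambda s: s.small_dark_moving,
}

# In the frog's normal environment the things that trigger S are flies,
# which are also edible and, of course, small, dark, and moving.
normal_environment = [Stimulus(True, True, True) for _ in range(100)]

for label, satisfied in candidate_contents.items():
    fits = all(satisfied(s) == detector_fires(s) for s in normal_environment)
    print(f"content '{label}' fits the normal tracking data: {fits}")

# Only an abnormal stimulus pulls the candidates apart, e.g. a thrown pellet:
# small, dark, and moving, but neither a fly nor food.
pellet = Stimulus(is_fly=False, is_edible=False, small_dark_moving=True)
print("detector fires at the pellet:", detector_fires(pellet))
```

Each candidate interpretation fits the correlational facts equally well, and which tokenings count as misrepresentations depends on which content is assigned; that is exactly the determinacy that constraint (1) demands and that the tracking relations by themselves do not supply.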

1.3.  Representational Content: A Pragmatic Alternative The view that I  favor builds on the central insight of tracking theories—​ states of mind represent aspects of the world by tracking, in some sense, the distal objects and properties that they are about—​but it doesn’t suppose that a naturalistically specifiable relation is sufficient to determine a mental state’s satisfaction condition.6 Additional, pragmatic, considerations play an essential role. A content assignment requires empirical justification—​and this requires a certain fit between the mechanism and the world. A content assignment that interprets states of a system as representing Dow Jones stock index prices would be justified only if the states track the vagaries of the market, and to do that (barring a miracle) there must be a causal connection between the states of the system and market prices. The fit between biological systems 6 See Egan 2014 for elaboration and defense of the view sketched here. See also Coelho Mollo 2017.

32  What Are Mental Representations? and distal objects and properties is, of course, a product of natural selection, but it doesn’t follow, as teleosemanticists seem to assume, that evolutionary function—​the historical relation that holds between a structure’s tokening and its normal cause in the Environment of Evolutionary Adaptedness (EEA)—​best serves the cognitive (scientific) theorist’s explanatory goals. It may not, for example, if the goal is to explain how a cognitive mechanism works in the here and now. The various tracking relations privileged by naturalistic semantic theories characterize different ways that states of mind can fit the world, with the choice among tracking relations determined by explanatory, or broadly pragmatic, considerations. Let me elaborate. In ascribing representational contents the cognitive theorist may look for a distal causal antecedent of an internal structure’s tokening, or a homomorphism between distal and internal elements, but the search is constrained primarily by the cognitive capacity that the theory is developed to explain. For example, vision theorists will look to properties that can structure the light in appropriate ways; thus they construe the states and structures they posit as representing light intensity values, changes in light intensity, and further downstream, changes in depth and surface orientation. Theorists of motor control construe the structures they posit as representing positions of objects in nearby space and changes in body joint angles. And the assignment of task-​specific content—​what I call cognitive content—​is justified only if the theorist can explain how the posited structures are used by the system in ways that subserve the cognitive capacity in question. We can see the extent to which pragmatic considerations figure in the ascription of content by revisiting some of the problems encountered by tracking theories in their attempt to specify a naturalistic content-​ determining relation. Far from adhering to the strict program imposed by the naturalistic constraint, as understood by tracking theorists, the computational theorist, in assigning content to posited internal structures, selects from all the information in the signal just what is relevant for the cognitive capacity to be explained and specifies it in a way that is salient for explanatory purposes. Typically, pragmatic considerations will privilege a distal cause (the cat) over a proximal cause (catlike patterns in the retinal image), because a distal content ascription will facilitate an explanation of the interaction between the organism and its environment necessary for the organism’s success. Recall the dispute among teleosemanticists about whether the frog’s internal state represents fly or frog food or small dark moving thing. The dispute is unlikely to be settled without reference to specific explanatory

Deflationary Account of Mental Representation  33 concerns. If the goal of the theoretical project is to explain the frog’s role in its environmental niche, then the theorist is likely to assign the content fly. Alternatively, if the goal is to explain how the frog’s visual mechanisms work, then small dark moving thing might be preferred. In other words, explanatory focus resolves indeterminacy. If we turn to Quinean indeterminacy, theories are articulated in public language, and the ontology implicit in public language privileges fly over fly stage. These content choices are not motivated by naturalistic considerations—​the naturalistic constraint prohibits appeal to specific explanatory interests or to public meaning. Attention to actual practice reveals that pragmatic considerations motivate the choice among naturalistic alternatives and secure content determinacy. Cognitive content is not part of the essential characterization of a computational mechanism and is not fruitfully regarded as part of what I call the computational theory proper. The theory proper comprises a specification of the function (in the mathematical sense) computed by the mechanism,7 specification of the algorithms, structures, and processes involved in the computation, as well as what I call the ecological component of the theory—​ typically facts about robust covariations between tokenings of internal states and distal property instantiations under normal environmental conditions, which constrain, but do not fully determine, the attribution of cognitive content, as explained earlier. The computational theory proper is, strictly speaking, sufficient to explain the system’s success (and occasional failure) at the cognitive task (seeing what is where in the scene, object manipulation, and so on) that is the explanatory target of the theory. Cognitive content is not in the theory proper; rather it is best construed as a kind of gloss—​an intentional gloss—​on the computational theory. It is ascribed to facilitate the explanation of the relevant cognitive capacity. The primary function of an intentional gloss is to illustrate, in a perspicuous and concise way, how the computational theory addresses the intentionally characterized phenomena with which the theorist began and which it 7 See Egan 2017 for elaboration and defense of what I call function-​theoretic (FT) characterization, which is an environment-​neutral, cognitive domain-​general characterization of a mechanism. The inputs of a computationally characterized mechanism represent the arguments and the outputs the values of the mathematical function that canonically specifies the task executed by the mechanism: for example, smoothing functions for perceptual mechanisms (see Marr 1982, among many others), path integration for navigation mechanisms (see Gallistel 1990), vector subtraction for reaching and pointing (Shadmehr and Wise 2005). Hence, the FT characterization specifies a kind of content—​mathematical content—​that is distinct from the (cognitive) domain-​specific content that philosophers typically have in mind when they talk about “representational content” and which I call “cognitive content.”

34  What Are Mental Representations? is the job of the theory to explain. Cognitive content is “connective tissue” linking the subpersonal (primarily mathematical)8 capacities posited in the theory and the manifest personal-​level capacity that is the theory’s explanatory target (vision, grasping an object in view, and so on). But, as I noted earlier, the computational theory proper can fully explain the interaction between organism and environment, and hence the organism’s success, without adverting to cognitive content. The intentional gloss characterizes the interaction between the organism and its environment that enables the cognitive capacity in terms of the former representing elements of the latter; the theory does not. A second important heuristic function served by the assignment of representational content is to help us keep track of the flow of information in the system, or, to be more explicit, help us—​theorists and students of cognitive neuroscience—​keep track of changes in the system caused by both environmental events and internal processes, with an eye on the cognitive capacity (e.g., seeing what is where) that is the explanatory target of the theory. The choice of content will be responsive to such considerations as ease of explanation, and so may involve considerable idealization. A third function of content ascription is worth noting here; it will play a role in my argument later. A content ascription can serve as a temporary placeholder for an incompletely developed computational theory of a cognitive capacity and so guide the discovery of mechanisms underlying the capacity. For example, at the early stages of theory development, prior to the specification of the mathematical function computed and the structures and processes that enable the computation, a visual theorist may characterize a to-​be-​specified structure as representing edges or some other visible property of the distal scene. She may even call the structure an EDGE (as Marr does), foreshadowing the functional role that the structure will play in the processes to be described by the theory. Or a capacity may be characterized initially in intentional terms, as, say, shape from shading, prior to the development of the computational theory that explains the capacity. At this stage there may be little or no theory to gloss; nonetheless the intentional characterization plays an important role in the search for the mechanisms and processes underlying the intentionally described capacity. 8 See footnote 7.
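The division of labor between the theory proper and the intentional gloss can be pictured with a small sketch. It borrows the vector-subtraction example mentioned in note 7, but the content labels, the coordinate frame, and the numbers are assumptions made for illustration, not Egan's own formalism.

```python
import numpy as np

# Theory proper (function-theoretic part): the mechanism computes vector
# subtraction over its inputs.
def mechanism(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    return x - y

# Intentional gloss: a pragmatically motivated content assignment linking the
# mathematical characterization to the target capacity (here, reaching).
gloss = {
    "input_1": "target location",
    "input_2": "current hand location",
    "output": "displacement needed to bring the hand to the target",
}

# The theory proper suffices to say what the system does...
out = mechanism(np.array([0.6, 0.2]), np.array([0.1, 0.1]))

# ...while the gloss says what that doing amounts to for the capacity
# being explained.
print(f"output vector {out} is glossed as the {gloss['output']!r}")
```

On the account in the text, only the first half of the sketch belongs to the theory proper; the content dictionary is the connective tissue linking it to the manifest capacity.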

Deflationary Account of Mental Representation  35 Let me return to Shea’s (2013) claim that cognitive science takes semantic properties for granted, leaving the project of specifying the conditions for content attribution to philosophy. On the account I  have sketched, there is a clear sense in which computational theorists do take meanings for granted: they don’t attempt to reduce mental content, nor do they assume that some naturalistically kosher relation grounds content attribution. Rather, they use unreduced, pragmatically motivated, content to explicate (gloss) their theories, and to serve the various explanatory functions I have described. In doing so they help themselves to the ontology implicit in public language. But, pace Shea, I am sure they would be surprised to hear that the naturalistic bona fides of their theories depend upon philosophers finding the holy grail of a naturalistic content-​determining relation. I shall conclude the discussion of representational content by returning to the constraints on an adequate account of content for cognitive neuroscience discussed earlier. In the first place, the account should provide the basis for determinate contents. The pragmatic account does this by explicitly recognizing the role of explanatory interests and other pragmatic considerations in determining content ascription. Second, the account should allow for the possibility of misrepresentation. Once determinacy is secured, we can see how misrepresentation can arise on the pragmatic account. Assume that the interpretation function (fI), justified in part by reference to pragmatic considerations, assigns the determinate content fly to a posited internal state. If the system goes into that state in the absence of a fly, then it misrepresents some other condition as a fly. The third constraint requires that the account be naturalistic. At first blush, it may seem that the appeal to explanatory and other pragmatic considerations in the determination of representational content would compromise the naturalistic credentials of cognitive neuroscience. That is not so, because the pragmatic elements and the contents they determine are “quarantined” in the intentional gloss, to use Mark Sprevak’s (2013) apt description of my view. The theory proper does not traffic in ordinary (i.e., cognitive task-​specific) representational contents, so its naturalistic credentials are not threatened. I want to consider the empirical adequacy of the deflationary account of representation as a whole, so I shall postpone discussion of the final constraint until later.


1.4. Representational Vehicles Turning now to our second question: what is it for an internal state or structure to function as a representation, that is, to serve as a representational vehicle? Many of our intuitions about representation are shaped by thinking about public language, which is the model for the most popular account of mental representation. According to the language of thought hypothesis (LOT), mental representations are literally symbols in an internal language (aka mentalese), and mental processes are to be understood as operations on internal sentences.9 Like more familiar linguistic systems, LOT has a compositional syntax (specified by a realization function fR) and semantics (specified by an interpretation function fI). The content of LOT representations is said to be explicitly represented, as opposed to represented implicitly in the architecture of the system.10 But the analogy with public language can be misleading. While the information encoded in printed text is (in some sense) explicit, it must be usable. Think, for example, of an encyclopedia without an index or a library without a catalog. In addition to inert data structures there must be processes that read them. And the process that “reads” mental representations can’t involve understanding, on pain of underexplaining our representational capacities, as Ramsey might put it. As Fodor (1980) noted with his formality condition, computational processes are sensitive only to formal (that is, non-​semantic) properties of representations. The relevant properties of the symbols to which computational processes are sensitive will be specified by the realization function fR. A wide variety of cognitive models do not posit explicit representations, in the preceding sense. To mention just a few: (1) connectionist models typically explain cognitive phenomena as the propagation of activation among units in highly connected networks; (2) dynamical models characterize cognitive processes by a set of differential equations describing the behavior of the system over time; (3) enactive models treat cognition as consisting, fundamentally, of a dynamic interaction between the subject and the environment, rather than a static representation of that environment. None of these models characterize cognitive processes as involving computational operations defined on symbol structures. A  relatively

9 Jerry Fodor is LOT’s most ardent champion. See, especially, Fodor 1975 and 2008.

10 See Kirsh 1990 for a useful discussion of the notion of explicit representation.

Deflationary Account of Mental Representation  37 recent development in Bayesian modeling, predictive processing models, treats the brain as a predictive machine that uses perception and action to minimize prediction error; it is not obvious that predictive processing models lend themselves naturally to a representational construal in the sense presumed by LOT. The proliferation of various types of cognitive modeling compels us to re-​examine our intuitions about when and how information is encoded in a system. At very least, the linguistic model underlying LOT seems overly restrictive. In general, intuitions differ on the representational status of the various types of models. Clark (1997), Bechtel (1998, 2001), and others argue for a representational construal of connectionist and dynamical models. Chemero (2009), Gallagher (2008), and Ramsey (2007) argue that they do not posit representations. According to Ramsey the structures posited in connectionist models are “mere causal relays.” If they count as representations, he cautions, then pan-​representationalism threatens. A locus of dispute has been the Watt governor (figure 1.2), first introduced into the discussion by Van Gelder (1995). As the speed of the engine increases, centrifugal force elevates the arms of the flywheel, closing off a valve and restricting the flow of steam, thereby

Figure 1.2  The Watt Governor


Figure 1.3  Ramsey’s toy car

decreasing the engine speed. The issue is whether the angle of the arms represents the speed of the flywheel. Bechtel (1998, 2001)  and Chemero (2000) think that a representational construal is appropriate; Ramsey (2007) and Shapiro (2010) think it is not. Another hotly disputed case is the toy car (figure 1.3) described by Ramsey (2007, 199). The car negotiates a tricky S-​curve tunnel by making use of a groove-​and-​ rudder system that guides the wheels of the car smoothly through the curve. According to Ramsey the system is representational because it uses a structure that is isomorphic to the curved tunnel. But the representational construal of the system is open to dispute. Whatever representational capacity the car has doesn’t generalize—​it can’t negotiate other tracks. And Tonneau (2011) argues that, by Ramsey’s measure, a key represents a lock. We can identify at least three general motivations for resisting a representational construal of a cognitive model: (1) A  too narrow, language-​based construal of representation, in other words, the intuition that only models that posit interpreted symbol structures with a compositional syntax count as representational. It should be noted, however, that not all public representation involves such symbol structures—​maps, for example, do not—​so the intuition that internal representations must be quasi-​linguistic is dubious.

Deflationary Account of Mental Representation  39 (2) The idea, popular among proponents of embodied and enactive approaches, that representation is not necessary for cognition.11 (3) The worry, often expressed by proponents of enactivism, that a naturalistic account of representational content is simply not in the cards, and so invoking representations in a scientific account of cognition is indefensible. Hutto and Myin (2013) claim that representation-​based theories “are unable to account for the origins of content in the world if they are forced to use nothing but the standard naturalist resources of informational covariance, even if these are augmented by devices that have the biological function of responding to such information” (xv), dubbing this the hard problem of content. They identify three options for the theorist of cognition: (i) give up content, and hence mental representation; (ii) hope that content can be naturalized in some other way; or (iii) posit content as an irreducible, explanatory primitive, in other words, embrace a kind of dualism. Enactivists propose (i), eschewing content and hence mental representation. But, as I have argued earlier, there is a fourth option for dealing with the “hard problem”: don’t give up on content, but recognize that it is in part pragmatically determined, and confine it to an explanatory gloss. Let us return to the central issue of this section: what is it for an internal state to function as a representation? I suggest that focusing on non-​cognitive cases—​the Watt governor, Ramsey’s car—​tells us very little about how representations function in accounts of cognition. Intuitions about these cases are not dispositive. It is more fruitful to focus on the typical explanatory context in which a theory in cognitive science is developed—​a manifest cognitive capacity such as seeing what is where in the scene, locomotion, manipulating objects in view, and so on—​and ask under what conditions such a theory is committed to representations. This project is more modest—​ it won’t tell us what it is to function as a representation in general. There may not be an interesting non-​disjunctive answer to that question.12 Rather, what 11 Rodney Brooks (1991) famously claimed:  “explicit representations and models of the world simply get in the way. It is better to use the world as its own model” (1991/​1999, 81). Despite the rhetoric, Brooks doesn’t argue against representations per se, but rather against positing general context-​ free representations of the environment, and separate, explicit representations of goals. He is the father of “action-​oriented representations.” 12 The concept representation may not pick out a natural kind but rather be a motley, functioning differently in different contexts. This possibility can’t be ruled out a priori.

40  What Are Mental Representations? we seek is an account of what it is to function as a representation in an explanatory account of a cognitive capacity. This would fall short of a general metaphysical account of representation, but it would be interesting nonetheless. As it happens, our characterization of the adder (figure 1.1) provides the basis for answering the question. The mapping fR isolates the causal structure relevant for the exercise of a given cognitive capacity. This will typically involve characterizing a set of states or structures, and the properties of these states or structures in virtue of which they play the distinctive roles they do in the exercise of the capacity. These states/​structures will function as representations—​in particular, as representational vehicles—​just in case they are interpreted by a mapping fI that assigns them contents. Given the content assignment specified by fI, the states or structures specified by fR are not “mere causal relays,” as they would be without the semantic interpretation. The representational vehicles specified by fR are as real as states or structures posited in any well-​confirmed scientific explanation of observable phenomena. An analogy may be helpful: genes are realized by physical/​ chemical structures; molecular biology groups these structures together by their causal powers to produce proteins ultimately responsible for particular phenotypical effects, abstracting away from some of their more basic physical/​chemical properties. Similarly, the realization function (fR) abstracts away from some of the properties of the realizing neural states and groups them together by their role in cognitive processing. In both cases, the states/​ structures may be multiply realized by states/​structures characterized at the more basic level. In both cases, assuming that the theory is empirically well confirmed, a realist attitude toward the posited structures is appropriate. The upshot of the foregoing discussion is that a cognitive model—​whether so-​ called “classical,” connectionist, dynamical, embodied, or enactive—​ posits representations just in case it identifies representational vehicles, via fR, and assigns them contents in fI. The kinds of states or structures that can count as representational vehicles—​the kinds of objects and properties specified by fR—​is left open.13 Intuitions grounded in our familiarity with public representational systems carry little weight here. A connectionist model that construes characteristic patterns of activation of hidden units to be causally efficacious in the exercise of a given cognitive capacity and assigns these

13 Since fR specifies the causal organization of the system, the relevant objects and properties must be capable of having causal powers. Abstracta, therefore, cannot function as representational vehicles.

Deflationary Account of Mental Representation  41 patterns of activation contents in an appropriate gloss would thereby posit representations. An implication of the view is that to determine whether an explanatory theory of a cognitive capacity posits representations, it must be articulated at the level of structures and processes. Absent an account of the causal organization of the system given by fR, we cannot determine the representational commitments of the theory. That said, my account of computational characterization is something of an idealization. A complete fR mapping specifies precisely how the mechanism is realized in neural hardware. Many computational models are not fully articulated at the level of neural structure. The important point here is that a theory is committed to representations only if it posits structures/​states to serve as representational vehicles, and causal processes in which these vehicles are involved, even if the realizing neural details are yet to be supplied. The proposed account of mental representation couples a realist account of representational vehicles and a pragmatic account of representational content. The resulting package is deflationary about mental representation. Contents serve a variety of heuristic purposes but are not part of what I have called the “theory proper.” They are, strictly speaking, not necessary to explain the target phenomena and are best construed as part of an explanatory gloss. They are not determined by a privileged representation relation but are rather motivated by a variety of pragmatic considerations. A deflationary view of mental representation is not a species of fictionalism.14 Fictional objects cannot play causal roles in cognitive processes, as representations are presumed to do. Neither is it a version of interpretivism, as that view is normally understood.15 The states/​structures that are interpreted in the gloss have their causal roles—​though, of course, not their representational contents—​independently of the interpretative practices of theorists.
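As an illustration of how a non-classical model can meet this condition, here is a minimal sketch in which hidden-unit activation patterns serve as vehicles. The prototype patterns, the nearest-prototype typing rule, and the edge contents are all assumptions introduced for the example.

```python
import numpy as np

# fR groups token activation patterns over three hidden units into two
# causally relevant types, abstracting from small differences in the
# realizing activations.
vehicle_prototypes = {
    "V1": np.array([0.9, 0.1, 0.2]),
    "V2": np.array([0.1, 0.8, 0.7]),
}

# fI, confined to the gloss, assigns those types contents.
gloss = {
    "V1": "edge at roughly 0 degrees",
    "V2": "edge at roughly 90 degrees",
}

def vehicle_type(activation: np.ndarray) -> str:
    # A token pattern is typed by its nearest prototype.
    distances = {v: float(np.linalg.norm(activation - p))
                 for v, p in vehicle_prototypes.items()}
    return min(distances, key=distances.get)

token = np.array([0.85, 0.15, 0.25])
v = vehicle_type(token)
print(f"token typed as {v}, glossed as {gloss[v]!r}")
```

Without the gloss the patterns are just causal intermediaries; with it, the model counts as positing representations in the sense at issue here.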

1.5.  Satisfying the Adequacy Conditions Let us see how the deflationary account fares with respect to Ramsey’s (2007) adequacy conditions.

14 A fictionalist construal of neural representation has been discussed (though not endorsed) by Sprevak (2013).
15 See, for example, Dennett 1987.

42  What Are Mental Representations? (1) Mental representations must serve a function sufficiently like paradigm cases of representation. Considering the variety of functions served by public representations with which we are familiar—​utterances, inscriptions, maps, photographs, graphs, and so on—​it isn’t clear that there is a single function shared by paradigm cases. However, representations are often said to stand in for the object or property specified by their content, where the relation of “standing in” is left sufficiently vague to cover the central cases. But this is no less true for mental representations, as characterized by the deflationary account. Once a representational vehicle is assigned a content in an appropriate gloss, then it can be regarded, for all intents and purposes, as standing in (in the same vague sense) for the object or property specified by that content. The stand-​in plays a characteristic causal role in the exercise of the target cognitive capacity. (2) The content of mental representations should be causally relevant to their role in cognitive processes. Many philosophers have made this point.16 Dretske (1988) talks about “content getting its hands on the wheel.” Of course, since content is abstract, it cannot literally be a cause of anything. Rather, the requirement seems to be something like this: the content that a state has causally explains the role that the state plays in cognitive processing. So understood, the requirement puts the cart before the horse, and so should be rejected. Content captures a salient part of the causal nexus in which the state is embedded. For example, construing the frog’s internal structure as representing fly emphasizes the causes of its tokening in the frog’s normal ecological niche (its production); construing it as representing frog food emphasizes downstream nutritional effects of its tokening (its consumption). Thus it is no surprise that content looks to be causally relevant—​one of its jobs, as noted earlier, is to characterize internal structures/​states in a way that makes perspicuous their causal role in a cognitive process, again, given specific explanatory concerns. But content doesn’t causally explain anything. (3) The account should not imply pan-​representationalism: lots of clearly nonrepresentational things shouldn’t count as representations. 16 For a sample of the literature promoting this idea see Dretske 1988; Segal and Sober 1991; and Rescorla 2014.

Deflationary Account of Mental Representation  43 Pan-​representationalism is not a worry for the deflationary account, because it does not purport to offer a metaphysical theory of representation. It does not specify a general representation relation that holds independently of explanatory practice in cognitive neuroscience. This is one sense in which the account is deflationary. The view has no implications for Venus flytraps, Watt governors, and some of the other things Ramsey cautions may turn out to be representations if the account is not sufficiently constrained. (4) The account should not underexplain representational capacities; it should not, for example, presuppose such mental capacities as understanding. The realization function fR isolates the causal structure relevant for the exercise of the target cognitive capacity. It characterizes the states or structures that serve as representational vehicles and the properties of these states/​ structures in virtue of which they play the distinctive causal roles they do in the exercise of the capacity. Cognitive processes are not sensitive to any semantic or intentional processes that the vehicles may be assigned in the interpretation function fI, so the theory does not posit or presuppose any intentional processes such as understanding. Moreover, as explained earlier, there is no appeal to a representational relation in what I call the “theory proper.” So the deflationary account does not underexplain our representational capacities, in Ramsey’s sense. If anything, it may seem at risk of violating his final condition: (5) It should not overexplain representational capacities, such that representation is explained away. According to Ramsey, if representations function as “mere causal relays,” then, in effect, the phenomenon of interest has disappeared. Causal relays are ubiquitous; surely not all of them are representations. I  claim that representations are distinguished from mere causal relays by the fact that they are assigned contents by the interpretation function fI, but since the content assignment is confined to the heuristic gloss, it might be argued that the phenomenon of interest—​representation—​has indeed disappeared. My response to this charge is to challenge the adequacy condition. Cognitive neuroscience purports to give reductive accounts of cognitive capacities. This is what Fodor and Field, motivated by the conviction that intentionality is not fundamental, were asking for in the passages quoted

44  What Are Mental Representations? earlier. The same conviction motivates the naturalistic semantics project. But a reductive account of a phenomenon—​especially a mental phenomenon, with which we have an intimate, first-​person acquaintance—​will often tend to look like overexplaining. The phenomenon of interest may seem to have “disappeared.” By the same token, biochemistry, in explaining the essence of life in terms of carbon-​based molecular processes, may appear to have overexplained its target: the special élan vital that we know and value has, in effect, disappeared. But if our representational capacities are really to be explained—​naturalistically explained—​then at some point the notions “representation” and “content” are going to drop out of the account given by the theory, and what is left may look like mere causal relays. The appropriate reaction is not to find fault with the reductive theory (assuming it is well confirmed), or with the urge to subsume the phenomenon of interest under more fundamental processes that are better understood. Successful reduction, and the unification that it makes possible, is the hallmark of scientific progress. Nonetheless, there is something right about the “Don’t overexplain” requirement. A  reductive account of a phenomenon that is both central to our way of understanding ourselves and also, pretheoretically, somewhat mysterious—​as life, intentionality, and consciousness certainly are—​creates an explanatory gap of sorts between the account given by the theory and the common-​sense conception of the phenomenon with which we began, a gap, in other words, between the scientific and the manifest image, as Wilfrid Sellars (1962) would have put it.17 This gap typically leaves the reductive theorist with an obligation to connect the theory with the pretheoretically conceived explanatory target, and this is precisely the function served by an explanatory gloss. In the case of a reductive explanation of our representational abilities, what is required is an intentional gloss connecting the theory proper with the intentionally characterized phenomenon with which we are pretheoretically familiar. One needn’t endorse my pragmatic account of mental content to see the point. An intentional gloss would most likely be needed even if the naturalistic semantics project were to succeed in specifying sufficient non-​semantic and non-​intentional conditions for a mental state’s having the meaning it does. There are at least two reasons for this. In the first place, existing naturalistic theories, at best, require further conditions to resolve indeterminacy. 17 The explanatory gap between reductive proposals for consciousness and phenomenal experience is, of course, the most famous example.

Deflationary Account of Mental Representation  45 Perhaps striking out in an entirely new direction is a more promising strategy. In any event, if there are non-​semantic and non-​intentional conditions that ground determinate content, they are likely to be highly disjunctive or their specification otherwise very complex.18 There is no reason to think that such conditions would be explanatory of intentionality, because they would not necessarily contribute to our understanding of intentional phenomena in any significant way. The job of connecting the naturalistic theory with the target phenomenon—​meaning—​would be left for a gloss. Second, a naturalized reduction of intentionality is likely to leave what is distinctively personal out of the picture. If there are naturalistic conditions for content, then what we think of as distinctively mental representations—​thoughts and feelings—​ may turn out not to be special. The conditions may be satisfied by all kinds of mindless systems. For example, plants have circadian clocks, and it has been argued that they represent temporal properties. But plants are not thought to have what Morgan (2014) calls mental-​grade intentionality. We need to consider the possibility that from a detached, naturalistic perspective there may not be any distinctively mental representation. But, of course, human minds don’t just present themselves as objects for scientific study; we have direct acquaintance with our own states of minds, and it is the phenomena with which we are intimately acquainted (thoughts and feelings!) that will seem to have disappeared. Reconciling these two perspectives—​finding what the theory seems to have lost—​is a job for a gloss.19

1.6. Is the Deflationary Account Empirically Accurate?

The deflationary account has recently come under attack as failing to accurately describe actual practice in cognitive neuroscience. The charge is that computational theories are fully committed to representations; the attribution of representational content is not a mere gloss. I shall consider arguments offered by William Bechtel and Michael Rescorla in turn.

18 A case in point is Fodor’s ultimate (1990, 121) formulation of his asymmetrical dependency proposal, which requires three somewhat (in this reader’s opinion) non-intuitive conditions, and yet still leaves the possibility of (at the very least) Quinean indeterminacy, and so requires still further conditions.
19 Much more needs to be said about the relation between the cognitive contents posited in explanatory glosses of computational models and personal-level contents, but this issue is beyond the scope of the present paper.

Appealing to the development of theories of spatial representation in the rodent brain, Bechtel (2016) argues that

much neuroscience research is in fact directed at determining which neural processes are content bearers and understanding how they represent what they do. Content characterizations are not mere glosses on the research; the goal of the research is to determine what content the representations have. (1291)

The discovery in the 1970s of “place cells” in the rat hippocampus prompted research on the role of these cells in navigation, which eventually led to the discovery of other types of neurons—grid cells, head-direction cells, boundary cells—whose firings correlate reliably with tokenings of various spatial properties in the local environment. These cells were shown to interact with place cells in the mechanism responsible for spatial navigation. Bechtel says of this and related work:

A strategy neuroscientists have employed with great success in attempting to understand the mechanisms that underlie cognitive abilities is to identify cells in which the rate of action potentials increases in response to specific stimulus conditions. They then construe such neurons as representing those features in the environment whose presence is correlated with the increased firing and attempt to understand how subsequent neural processing utilizes representations that stand in for those features of the environment in guiding behavior. (1288)

So, for example, place cells respond to particular regions of the local environment. They are said to represent that location. Head-direction cells are so named because they respond to head direction, and are said to represent head direction. It does not follow, however, that these content attributions play an essential role in the theory, or that the goal of the research is “to determine what content the representations have,” as Bechtel claims. The significant theoretical achievement here is specifying the distal conditions to which the cell’s firing is responsive and determining its role in controlling subsequent behavior. That is the goal of the research, not determining the content that the posited representations have. Once the cell’s role in the cognitive process has been characterized, the theoretical heavy lifting is done. Talk of the cell’s firing representing its distal stimulus conditions is a

convenience—a gloss—that adds nothing of theoretical significance. Recall one of the functions of content ascription I identified earlier: to characterize internal structures/states in a way that makes perspicuous their causal role in a cognitive process that typically extends into the environment. This is the main function of content ascription here. Arguing that representational content plays a fundamental role in cognitive neuroscience, Bechtel goes on to say:

an early and integral step in the investigation of how specific information is processed within organisms appeals to representational content to determine representational vehicles. Initial characterizations of the vehicles and attributions of content are then both subject to revision as more vehicles are discovered and the processing mechanisms that generate the relevant activity and respond to it are identified. What is especially important is that such additional inquiry is inspired and guided by the initial attributions of representational content and directed at fleshing out the account. The attribution of content is a first step in articulating an account of a mechanism for processing information. (1291)

Here Bechtel seems to recognize that the goal of the research is to identify the structures and processes responsible for the target capacity. He points out that content attributions can play an important role in their discovery, illustrating one of the functions of representational content I identified previously: to serve as a temporary placeholder for an incompletely developed computational theory and to guide the discovery of mechanisms underlying the capacity. Characterizing to-​be-​discovered structures in terms of content allows the theorist to formulate hypotheses about the causal roles of the structures she is investigating. To be sure, it is not appropriate to call such content ascription a gloss because at this early stage there may be little or no theory to gloss—​representational vehicles have yet to be fully characterized—​but the relevant point is that the content ascription serves an explicitly heuristic purpose, analogous to glosses deployed in developed theories. In conclusion, the deflationary account I favor explains the rat navigation case quite well. And since Bechtel’s argument depends on general features of neuroscientific theorizing, there is good reason to think the account will handle a wide range of cases. Another version of the empirical accuracy challenge focuses on a very different class of cognitive models. Michael Rescorla argues that my deflationary

account is false of Bayesian psychological models. He claims that representational content plays a fundamental and essential role in Bayesian theorizing:

Bayesian models individuate both explananda and explanantia in representational terms. The science explains perceptual states under representational descriptions, and it does so by citing other perceptual states under representational descriptions. For instance . . . the generalizations type-identify perceptual states as estimates of specific distal shapes. . . . Thus, the science assigns representation a central role within its explanatory generalizations. The generalizations describe how mental states that bear certain representational relations to the environment combine with sensory input to cause mental states that bear certain representational relations to the environment. (2015, 14)

Rescorla claims that Bayesian perceptual models construe perceptual states as essentially representational; their distal content plays an essential role in specifying these states. In another recent paper, on sensorimotor models, he characterizes the Bayesian program as follows: Researchers adopt a two-​step approach: first, construct a normative model describing how an optimal Bayesian decision-​ maker would proceed; second, fit the normative model as well as possible to the data. . . . Our model yields ceteris paribus generalizations relating sensory input, mental activity, and behavior. We evaluate through experimentation how well the generalizations describe actual humans. Hence, the basic explanatory strategy is to use Bayesian normative models as descriptive psychological tools. This explanatory strategy presupposes that the motor system largely conforms (at least approximately) to Bayesian norms. (2016a, 31–​32)

I shall make two points about Rescorla’s characterization of Bayesian psychological models. In the first place, and most importantly for the discussion of the empirical accuracy of the deflationary account of representation, Bayesian models are typically not developed at a level of description that allows us to assess their representational commitments. More accurately, they have no representational commitments, in the relevant sense. There is no computational implementation—​no commitment to internal states or structures and causal processes defined on them—​and so no commitment to representational vehicles, in other words, no commitment to representations,

Deflationary Account of Mental Representation  49 in the sense at issue. Bayesian models are the merest of mechanism “sketches” (in the sense articulated by Piccinini and Craver 2011). It is not simply that we don’t know how the models are implemented in neural mechanisms. More relevantly, we don’t have an account of the causal organization of the system at the level of abstraction specified by fR.20 If we had a computational implementation of a Bayesian mechanism, then, but only then, we could determine whether the contents assigned to the posited states play an essential, individuative role in the theory, or whether they function as a gloss of the sort I have proposed. This is certainly not intended as a criticism of the Bayesian program. In the absence of a computational implementation, how else is the theorist to describe to-​be-​posited internal states and processes except in intentional terms, by reference to their presumed distal contents?21 It is merely to note that assessment of the representational commitments of specific Bayesian models must await their further development.22 Second, to the extent that a realist construal of Bayesian psychological models is appropriate, they are committed to the claim that mental processes are probabilistic inferences, and that internal mechanisms compute probability distributions optimally, according to Bayes’ theorem.23 Under a natural interpretation, internal structures represent probability distributions.24 20 Rescorla is at pains to point out that Bayesian models are not committed to what he calls “formal/​syntactic” computation, claiming, “The science . . . individuates mental states in representational terms as opposed to formal syntactic terms” (2016a, 25). He is right—​Bayesian models are not articulated at the level of structures and processes, so they are not committed to syntax. But syntactic objects are just one type of representational vehicle. A theory is committed to representations only if it posits representational vehicles and assigns them content (setting aside for present purposes whether the content assignment is in the theory or in a gloss), so a characterization of mental states in terms of content does not obviate the need to characterize them in terms of their causal role in cognitive processes. Simply put: no vehicles, no representations. 21 Thus, distal content ascription in Bayesian models, whatever else it may do, serves the placeholder function described previously. 22 It is worth noting the slide between “explanatory” and “descriptive” in the last two sentences of the Rescorla quote. There is some dispute about the correct interpretation of Bayesian models: are they intended to explain actual psychological processes, or merely to describe them in a way that systematizes and predicts behavior? Colombo and Series (2012) argue for the latter view. They point out that current Bayesian models do not provide mechanistic explanations—​they do not specify the structures and processes that implement Bayesian computations—​and argue that at the current stage of theorizing an instrumentalist attitude toward the models is appropriate. An assessment of this instrumentalist conclusion is beyond the scope of the present paper, though see Egan 2017 for defense of the view that a characterization of the function (in the mathematical sense) computed in the exercise of a cognitive capacity can be explanatory even absent an account of how the capacity is computationally (or neurally) implemented. 
23 Under idealization, of course, just as hand calculators and human subjects compute the addition function only under idealization.
24 But, as Wiese (2017) points out, neither textbook Bayesian inference nor approximate Bayesian inference (in, e.g., predictive processing models) requires representing probability values or values of probability density functions. He calls the problem of determining how the brain implements an approximation to Bayesian inference the probability conundrum and notes that different solutions to it have been proposed in the literature. Kwisthout and van Rooij (2013) argue that considerations involving computational tractability suggest that explicit representations of probability distributions are unlikely to be employed by the brain.

In any event, Bayesian models, to the extent that they say anything about how the brain actually works, give what I have called a function-theoretic characterization; they specify the function, in the mathematical sense, computed by the mechanism.25 The function is specified intensionally by Bayes’ theorem. Rescorla apparently thinks that the mathematical characterization is an artifact of our idiosyncratic conventions, rather than a central commitment of Bayesian psychology:

Bayesian perceptual psychology offers intentional generalizations governing probability assignments to environmental state estimates. We articulate the generalizations by citing probability distributions and pdfs [probability distribution functions] over mathematical entities. But these purely mathematical functions are artifacts of our measurement units. They reflect our idiosyncratic measurement conventions, not the underlying psychological reality. (2015, 32)

This is very puzzling. To think that commitment to Bayes’ theorem—​a function defined on probability distributions—​reflects an arbitrary choice of conventions is analogous to thinking that a claim that a device computes the addition function reflects a commitment to representing addends and sums in base 10.26 Contra Rescorla, to the extent that Bayesian models are to be construed realistically—​and if they are not, then disputes about the status of representational content in Bayesian models are pointless—​such proposals should be construed as hypotheses about underlying psychological reality, committed, in particular, to the claim that the system is computing an approximation to Bayes’ theorem. To summarize my reply to the empirical accuracy objection, that is, to the claim that theories in cognitive neuroscience and cognitive psychology do in fact make essential appeal to representation: (1) If the theory characterizes a cognitive capacity in terms of mechanisms, states, and processes (as in the

25 See footnote 7.
26 Rescorla 2016b makes the same point, arguing, against my account of function-theoretic description, that to characterize a device as computing a mathematical function is to commit to an arbitrary choice of measurement units.

account of rat navigation), then a deflationary reinterpretation of the representational talk employed by theorists is appropriate. Such talk is playing a gloss-like role. (2) If it does not characterize the capacity in terms of mechanisms, states, and processes (as in current Bayesian psychological models), then the theory has no representational commitments in the relevant sense, that is, no commitment to representations. A final word on the so-called “representation wars,” currently raging over whether predictive processing models, enactivist accounts, and other recent approaches posit representations. The deflationary account is itself neutral in the representation wars. In particular, the idea that representational content functions as a kind of gloss has no implications for which broad classes of cognitive models, when the computational details are spelled out, carry representational commitments (other than “classical” models, which undoubtedly do). But the view has implications for how the wars should be settled. A cognitive model posits representations just in case it identifies representational vehicles, via fR, which play crucial causal roles in the exercise of the capacity, and assigns these vehicles contents in fI.
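In schematic form (a compressed restatement of the picture defended in this chapter, using its fR/fI notation, not an addition to it):

\[
f_R : \text{physical states of the system} \longrightarrow \text{states/structures serving as representational vehicles (the theory proper)}
\]
\[
f_I : \text{vehicles} \longrightarrow \text{contents (the heuristic, explanatory gloss)}
\]

A cognitive model carries representational commitments, in the relevant sense, only when both mappings are in place.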

Acknowledgments

Thanks to Robert Matthews, the editors of this volume, and an anonymous referee for helpful comments on earlier versions of this paper. Thanks also to audiences at the Institute for Philosophy at the University of London, the Society for Philosophy and Psychology annual meeting at Duke University in June 2015, the conference “Mental Representation” at Ruhr-University Bochum in September 2015, and the students in my Mental Representation seminar at Rutgers University in fall 2017.

References Brooks, R. A. 1991/​1999. Intelligence without Representation. Artificial Intelligence 47: 139–​159. Reprinted in Cambrian Intelligence (Cambridge, MA: MIT Press). Chemero, A. 2000. Anti-​representationalism and the Dynamical Stance. Philosophy of Science 67: 625–​647. Chemero, A. 2009. Radical Embodied Cognitive Science. Cambridge, MA: MIT Press. Bechtel, W. 1998. Representations and Cognitive Explanations. Cognitive Science 22: 295–​318.

52  What Are Mental Representations? Bechtel, W. 2001. Representations from Neural Systems to Cognitive Systems. In W. Bechtel, P. Mandik, J. Mundale, and R. Sufflebeam (eds.), Philosophy and the Neurosciences, 332–​348. Oxford: Blackwell. Bechtel, W. 2016. Investigating Neural Representations: The Tale of Place Cells. Synthese 193: 1287–​1321. Clark, A. 1997. The Dynamical Challenge. Cognitive Science 21 (4): 461–​481. Coelho Mollo, D. 2017. Content Pragmatism Defended. Topoi https://​doi.org/​10.1007/​ s11245-​017-​9504-​6. Colombo, M., and Series, P. 2012. Bayes in the Brain:  On Bayesian Modeling in Neuroscience. British Journal for the Philosophy of Science 63 (3): 697–​723. Cummins, R. 1989. Meaning and Mental Representation. Cambridge, MA: MIT Press. Dennett, D. C. 1987. The Intentional Stance. Cambridge, MA: MIT Press. Dretske, F. 1981. Knowledge and the Flow of Information. Cambridge, MA: MIT Press. Dretske, F. 1986. Misrepresentation. In R. Bogdan (ed.), Belief:  Form, Content, and Function, 17–​36. Oxford: Clarendon Press. Dretske, F. 1988. Explaining Behavior. Cambridge, MA: MIT Press. Dretske, F. 1995. Naturalizing the Mind. Cambridge, MA: MIT Press. Egan, F. 2014. How to Think about Mental Content. Philosophical Studies 170: 115–​135. Egan, F. 2017. Function-​Theoretic Explanation and the Search for Neural Mechanisms. In D. M. Kaplan (ed.), Explanation and Integration in Mind and Brain Science, 145–​163. New York: Oxford University Press. Field, H. 1975. Conventionalism and Instrumentalism in Semantics. Nous 9: 375–​405. Fodor, J. A. 1975. The Language of Thought. New York: Thomas Y. Crowell. Fodor, J. A. 1980. Methodological Solipsism Considered as a Research Program in Cognitive Science. Behavioral and Brain Sciences 3: 63–​109. Fodor, J. A. 1987. Psycho-​semantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: MIT Press. Fodor, J. A. 1990. A Theory of Content and Other Essays. Cambridge, MA: MIT Press. Fodor, J. A. 2008. LOT 2:  The Language of Thought Revisited. Oxford:  Oxford University Press. Gallagher, S. 2008. Are Minimal Representations Still Representations? International Journal of Philosophical Studies 16 (3): 351–​369. Gallistel, C. R. 1990. The Organization of Learning. Cambridge, MA: MIT Press. Hutto, D., and Myin, E. 2013. Radicalizing Enactivism:  Basic Minds without Content. Cambridge, MA: MIT Press. Kirsh, D. 1990. When Is Information Explicitly Represented? Vancouver Studies in Cognitive Science 1: 340–​365. Kwisthout, J., and van Rooij, I. 2013. Bridging the Gap between Theory and Practice of Approximate Bayesian Inference. Cognitive Systems Research 24:  2–​8. http://​dx.doi. org/​10.1016/​j.cogsys.2012.12.008. Marr, D. 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. New York: WH Freeman. Matthen, M. 1988. Biological Functions and Perceptual Content. Journal of Philosophy 85: 5–​27. Millikan, R. 1984. Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press. Millikan, R. 1989. Biosemantics. Journal of Philosophy 86: 281–​297. Morgan, A. 2014. Representations Gone Mental. Synthese 191: 213–​244.

Deflationary Account of Mental Representation  53 Neander, K. 2006. Content for Cognitive Science. In G. F. Macdonald and D. Papineau (eds.), Teleosemantics, 140–​159. New York: Oxford University Press. Neander, K. 2017. A Mark of the Mental:  In Defense of Informational Teleosemantics. Cambridge, MA: MIT Press. O’Brien, G., and Opie, J. 2004. Notes toward a Structuralist Theory of Mental Representation. In H. Clapin, P. Staines, and P. Slezak (eds.), Representation in Mind: New Approaches to Mental Representation, 1–​20. Oxford: Elsevier. Papineau, D. 1993. Philosophical Naturalism. Oxford: Blackwell. Piccinini, G., and Craver, C. 2011. Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches. Synthese 183 (3): 283–​311. Quine, W. V. O. 1960. Word and Object. Cambridge, MA: MIT Press. Ramsey, W. 2007. Representation Reconsidered. New York: Cambridge University Press. Rescorla, M. 2014. The Causal Relevance of Content to Computation. Philosophy and Phenomenological Research 88: 173–​208. Rescorla, M. 2015. Bayesian Perceptual Psychology. In M. Matthen (ed.), The Oxford Handbook of the Philosophy of Perception, 694–​716. Oxford: Oxford University Press. Rescorla, M. 2016. Bayesian Sensorimotor Psychology. Mind & Language 31: 3–​36. Rescorla, M. 2020. The Computational Theory of Mind. Stanford Encyclopedia of Philosophy. Ryder, D. 2004. SINBAD Neurosemantics: A Theory of Mental Representation. Mind & Language 19 (2): 211–​240. Segal, G., and Sober, E. 1991. The Causal Efficacy of Content. Philosophical Studies 63: 1–​30. Sellars, W. 1962. Philosophy and the Scientific Image of Man. In R. Colodny (ed.), Frontiers of Science and Philosophy, 35–​78. Pittsburgh: University of Pittsburgh Press. Reprinted in Science, Perception and Reality (London: Routledge and Kegan Paul, 1963). Shadmehr, R., and Wise, S. 2005. The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning. Cambridge, MA: MIT Press. Shagrir, O. 2012. Structural Representations and the Brain. British Journal for the Philosophy of Science 63: 519–​545. Shapiro, L. 2010. Embodied Cognition. New York: Routledge. Shea, N. 2007. Consumers Need Information:  Supplementing Teleosemantics with an Input Condition. Philosophy and Phenomenological Research 75 (2): 404–​435. Shea, N. 2013. Naturalizing Representational Content. Philosophy Compass 8: 496–​509. Shea, N. 2018. Representation in Cognitive Science. Oxford: Oxford University Press. Sprevak, M. 2013. Fictionalism about Neural Representations. The Monist 96: 539–​560. Tonneau, F. 2011. Metaphor and Truth: A Review of Representation Reconsidered by W. M. Ramsey. Behavior and Philosophy 39: 331–​343. van Gelder, T. 1995. What Might Cognition Be, If Not Computation. Journal of Philosophy 91: 345–​381. Wiese, W. 2017. What Are the Contents of Representations in Predictive Processing? Phenomenology and the Cognitive Sciences 16: 715–​736.

2
Defending Representation Realism
William Ramsey

2.1. Introduction

What sort of ontological attitude do cognitive scientists and researchers take toward the representational structures that are invoked in different accounts of cognitive processes? Perhaps a more important (and certainly more philosophical) question is, what sort of attitude should researchers (and all of us) have toward these representational posits? At first blush, this might seem like a straightforward question that allows for only a couple of answers. Either researchers should adopt a realist stance, whereby they regard the representational posits to be real entities that we can discover at some level of analysis in the brain, or they should be eliminativists about them, rejecting the accuracy and utility of theories that invoke them. Of course, people can adopt a mixed view, whereby they are realists about some types of representational posits and eliminativists about others. But with regard to any specific sort of representation invoked in a given theory, it might initially seem that the ontological stance one can take comes down to these two options. However, in the philosophy of cognitive science there have been those who have argued that things are not so simple, at least not with regard to the metaphysical status (and theoretical entailments) of representational posits. There have been some philosophers who have claimed that there exists a third option, or range of options, lying somewhere between straightforward realism and out-and-out eliminativism. This third option comes in many forms, but most of them imply that we should not regard the invoking of representations in cognitive theories and models as involving a commitment to the existence of ontologically robust, real structures that have actual representational content and that help explain various cognitive capacities. Instead, representations posited by researchers should be treated as something more like useful fictions or theoretical devices that play some sort of pragmatic, heuristic role in our theorizing. Thus, the third option is a view

Defending Representation Realism  55 about the invoking of cognitive representation in science that is somewhat similar to some of the things Daniel Dennett has suggested about the attribution of propositional attitudes in ordinary discourse (Dennett 1987). In both realms, it is recommended that we not regard the attribution of intentional states as involving a commitment to actual representational structures in the brain, with robust intentional properties that exist apart from our explanatory and interpretive practices. In this chapter I’m going to argue against this sort of antirealism about representations. I do not believe it correctly captures the right way to think about most scientific theorizing about cognition that appeals to inner representations. To establish this, I will first spell out what I think a robust sort of representational realism involves, and say something about what it should and should not be contrasted to. Then I will offer an argument for favoring representational realism over antirealism that is grounded in the functionality of representations. Of course, there are many sorts of antirealism and I will not be able to look closely at all of them. But I do want to take a closer look at three examples: Chomsky’s account of “non-​relational” representations (Chomsky 1995), Sprevak’s “neural representation fictionalism” (Sprevak 2013), and Egan’s account of computational content as a sort of “intentional gloss” (Egan 2014a, 2014b). In all of these cases, I’ll argue that on further inspection the proposed antirealism has deep problems or actually collapses into a more conventional sort of representational realism or eliminativism.

2.2.  Defining Robust Representational Realism So what do I mean by robust representational realism? Robust representational realism is a claim about what the positing of representations in cognitive theories should be thought to entail in terms of ontological commitments. It is the view that when theorists appeal to the existence of representations, they are proposing the existence of real, discoverable structures that are, at some level of analysis, playing the functional role of representing some property, entity, relation, etc., and that this representational structure is employed in the performance of some cognitive task. Structures playing this role necessarily have some sort of representational content and will possess most of the features we typically associate with representation, such as the capacity for misrepresentation and content that is not simply derived from our

own interpretive activities. Thus, realism is the view that representational models and theories are trying to literally describe objective cognitive reality. If the representational structures that these theories put forward do not actually exist, then the model or theory is just wrong in this regard and ought to be revised or abandoned. Robust representational realism is thus to be contrasted with various sorts of antirealism. However, there are two types of antirealism that need to be distinguished. First, there is the sort of representational antirealism that is associated with eliminativism; that is, that involves the straightforward denial that the posited representational structures actually exist (Chemero 2011; Hutto and Myin 2012; Ramsey 2007; Stich 1983). This sort of antirealism often involves the straight-out rejection of representationalism and a claim that cognitive science should move in a different direction. Yet although representational realists and eliminativists disagree about the reality of cognitive representations, they more or less agree on how we should interpret the positing of representations. They agree that when investigators appeal to representations in their accounts, they are making a commitment to the ontological reality of such entities. They agree that researchers should be regarded as offering a literal account of what is taking place in the brain, at some level of analysis. Hence, my criticisms will not be targeting the eliminativist sort of antirealism. As far as I’m concerned, realism and eliminativism are on the same side in the dispute I want to address. The second sort of antirealism is more complicated, and is the target of this chapter. The second sort of antirealism claims that both the realist and the eliminativist are mistaken about how we should regard the positing of representations. It typically involves, in some way, a denial that the positing of representations ought to be regarded as involving a commitment to objectively real, discoverable structures in the brain that are playing a specific representational role and that have any sort of robust, non-derived content. This would include any sort of account that treats representational theorizing as a mere heuristic strategy with no strong ontological commitments, or any account that denies representational theories should be taken literally, or anything that regards either representations or their content (or both) as simply useful fictions. This sort of representational antirealist denies that the value of theories that invoke representations to explain cognitive processes rests on whether or not there actually are discernable content-bearing things that are playing that representational role. To avoid confusing this sort of antirealism with eliminativism, I will hereafter refer to this second sort of antirealism

Defending Representation Realism  57 as “deflationism.” The accounts of cognitive theorizing I want to address deflate the degree to which invoking representations involves an ontological commitment. Unfortunately, the term “deflationism” also has multiple meanings in discussions of cognitive representations. Sometimes it is used to denote a view also called “liberalism,” whereby the conditions put forward for something to qualify as a representation are very weak. Liberalist deflationists propose relatively minimal sufficient conditions for representation in the brain. For example, Markman and Dietrich (2000) have proposed an account whereby structures that are doing little more than playing a mediating role in the processing would qualify as representations. While I have argued extensively against this sort of minimalist representationalism since I  think the structures described are not playing any sort of representational role (Ramsey 2007), I do believe that their proponents are realists about their ontological status. People like Markman and Dietrich believe that there really are structures in the brain that are functioning as the type of causal mediators they describe. Their account is misguided not because they oppose realism, but because they mischaracterize causal mediators as representations.1 While robust representational realism is a fairly strong view, I believe it is compatible with acknowledging certain problematic aspects of cognitive representations. For example, it is compatible with at least some degree of indeterminacy, both with regard to something’s playing a representational role and with regard to the representational content of the thing playing that role. With regard to the former, one can be a robust realist and at the same time admit that at least in some cases, the question of whether or not some neural structure is actually playing a representational role can be difficult to answer. As with many sorts of things playing functional roles, there can be borderline cases where the actual job that a given structure is playing can be difficult to determine, and where we aren’t quite sure what to say. But that fact doesn’t undermine the existence of clear-​cut cases where we have a much better sense of whether or not the thing is playing the functional role in question. That point applies to representational structures in the brain as well.

1 My main problem with characterizing causal mediators as representations is that it provides the wrong job description for such things; they are more accurately characterized as relay circuits. Moreover, this form of liberalist deflationism would make practically every complex causal system a representational system, thereby undermining the explanatory value of the representational theory of mind. For more, see Ramsey 2007.

58  What Are Mental Representations? Regarding the indeterminacy of content, I’m inclined to think that some degree of indeterminacy is a common feature of most straightforwardly real representational systems. Does the gas gauge in my car represent the level of regular unleaded gasoline currently in the tank, or the level of gasoline, regardless of the grade, or just something more general, like the level of fluid? Thinking that there is no obvious or even metaphysically determinate answer to this particular question does not require giving up a realist interpretation of the fuel gauge and the representational job it is doing. The same point applies to most other types of representational devices, including cognitive representations. So certain neurons in the frog’s brain might be treated as representing flies or, alternatively, as representing small flying bits of food, and there is perhaps no principled or clear way of determining which content attribution is correct. But that doesn’t mean that any content attribution is acceptable, or that the representational content of the neurons is simply a matter of convention, or that the neurons aren’t really representations of something independent of our interests. Philosophical work on naturalistic accounts of content have exposed a variety of theoretical resources for reducing content indeterminacy. For example, many teleosemanticists have emphasized that looking very closely at the specific representational job a structure was actually selected to perform greatly reduces the amount of content indeterminacy (Millikan 1984; Dretske 1988; MacDonald and Papineau 2006). While the exact nature of content indeterminacy can be debated (e.g., whether it is metaphysically real or merely a function of our limited understanding), here I am making the following conditional claim:  even if content indeterminacy is a metaphysically real feature of natural representations, that wouldn’t undermine a realist interpretation of those representation. After all, most scientific realists recognize that the world we are trying to understand can be such that not all questions about its specific nature have determinate answers. Robust representational realism is also compatible with acknowledging that it is far from easy to spell out exactly how a neural structure can come to play a representational role. The past 30 years of work in the philosophy of cognitive science have revealed just how difficult it is to provide a plausible, physical, reductionist account of representation in the brain. Much of the problem is due to the fact that the role of representing is not a straightforward causal role; representations in some way relay information or signal something. They have content. But the mere fact that providing a naturalistic account of representation and representational content is difficult does not

Defending Representation Realism  59 undermine the realist view that when we provide such an account, it should be treated as an account of something actually instantiated in the brain. Accounting for neurological representation is tricky business, but trickiness is not a reason to be a deflationist. Finally, robust realism is compatible with the idea that the posited representations should be regarded as belonging at a higher or computational level of analysis. Of course, cognitive representations will ultimately need to be instantiated by neurological states and structures in the brain. But if, say, a given account treats them as functional or computational states, or as parts of an algorithm that is not characterized in neurological terms, that would not make the representations any less real. Scientific entity realism is compatible with there being different levels of organization and explanation, and the entities invoked by the higher levels of description are certainly capable of being robustly real.

2.3.  Supporting Realism over Deflationism I believe representational realism should be regarded as the natural default position for our understanding of representational posits. But why is that—​why should we start out assuming the burden is on the deflationist to unseat realism? One obvious consideration is that realism seems to be the attitude adopted by most of the researchers and theorists who introduce representations in their various accounts of cognition. It is easy to find quotes that imply a realist attitude toward the posited representations, and nearly impossible to find cognitive researchers, either in psychology, cognitive modeling, or neuroscience, who explicitly put forth a deflationist perspective. Prima facie, it seems that realism is the most common attitude adopted by the actual theorists who invoke representations as part of their explanations of cognitive processes. Given how rare it is for researchers to go out of their way to caution against a realist interpretation, there is reason to be confident that, by and large, most cognitive scientists are not deflationists about representational states and structures. Yet as I noted earlier, a more interesting and philosophical question is the normative one. Regardless of what they actually do, what sort of ontological stance should cognitive scientists and others adopt toward representational posits? After all, one could acknowledge that most cognitive scientists are realists, and yet consistently maintain that this is a mistake—​that some form

60  What Are Mental Representations? of deflationism is the proper attitude to adopt. So, how should this normative question be answered? One straightforward way to address the normative issue regarding representation in cognitive science would be to simply apply whatever position one takes with regard to the broader scientific realism/​antirealism debate. Those who endorse a realist outlook toward theoretical entities in general may simply apply this outlook toward representations posited in cognitive scientific theories. Similarly, anyone who is a non-​realist about scientific theoretical posits across the board could simply extend this attitude to cognitive representations and thereby become some form of deflationist. If you happen to be someone in this latter group, whose rejection of representational realism is just part of a more general rejection of scientific realism, especially theoretical entity realism, then it is doubtful that anything I can say in this chapter will persuade you to change your mind. If you reject a realist interpretation of any neurological, psychological, or cognitive theory, irrespective of the nature of its theoretical posits, then nothing I say about representations will have much bearing on your primary antirealist motivations. For those whose adoption of realism or deflationism is determined on a case-​by-​case basis, and for whom the status of cognitive representations is an open question, the broader scientific realism/​antirealism debate may still prove relevant. That’s because arguments from that broader debate could be co-​opted to either support or attack representational realism. Take, for example, Putnam’s “miracle” argument in favor of realism (1975). Putnam argues that the predictive and explanatory success of certain scientific theories would be an unexplained miracle if the things and processes those theories invoke weren’t objectively real. In other words, the best explanation for the scientific success certain theories enjoy is that those theories provide a literal and true account of the phenomena they are attempting to explain. Even for those who do not think this argument works to support scientific realism in general, they might think it could be used to support robust representational realism. Reasoning along the lines of Putnam, they might note that it would be highly unlikely to have a cognitive model that (a) proved extremely successful in explaining and predicting our performance in some cognitive task, (b) required structures to be playing a representational role in accounting for that bit of cognition, and yet (c) did not accurately or literally describe the underlying processes because no such representations actually existed.

Going the other way, at least one popular strand of scientific non-realism is driven by the empiricist tenet that only observable entities deserve to be treated as objectively real. For example, van Fraassen’s constructive empiricism claims that scientific theories aim only for empirical adequacy and that we should be agnostic about the status of unobservable entities (van Fraassen 1980). Although someone might reject constructive empiricism as a general theory that applies to all of science, such a person might suppose that at least some of van Fraassen’s arguments for constructive empiricism (like his appeal to adequacy over truth) might make sense when applied to certain representational theories in cognitive science. This might in turn lead to a sort of agnostic deflationistic stance regarding microscopic neurological states that are claimed to be functioning as representations. Thus, general arguments from the broader scientific realism debates could be marshaled either for or against representational realism. In this regard, representations may be like most other theoretical posits, in that broader considerations that ignore particular details about the nature of the posit itself would prove indecisive. Consequently, it is worth asking if there is anything specific about the nature of cognitive representation itself that supports a certain ontological stance. Is there something about our conception of representation or the role representations are thought to play in our theorizing about the mind that makes them uniquely suited for a realist or deflationist interpretation? Apart from general considerations, is there something unique to representation that tips the scale in either direction? I believe one general consideration that favors realism is the fact that our conception of neurological or cognitive representation is almost certainly a functional notion, involving something doing a specific sort of job in cognitive processes. As John Haugeland has famously pointed out, “representing is a functional status or role of a certain sort, and to be a representation is to have that status or role” (Haugeland 1991, 69). That implies that representations need to have the kind of ontological heft that allows them to causally interact with various other elements of the brain or cognitive system. Of course, functionality, as such, does not necessarily require causality, as abstract entities can play epistemic or logical roles.2 However, the sort of functionality assigned to cognitive representations, though somewhat odd, is nevertheless the sort that typically involves causally responding to certain stimuli and/

62  What Are Mental Representations? or generating processes that guide or steer the cognitive system in various ways. For example, on one account of how representations explain cognition, they allow for surrogative reasoning, whereby the representational system is directly exploited to ascertain facts about the represented system, just as we might make inferences about the nature of a map that are then used to draw conclusion about the environment the map represents (Swoyer 1991). It is difficult to make sense of such accounts without regarding the inner representational system as something very real that can become causally engaged in the problem-​solving activities. Computational and/​or cognitive exploitability strongly implies a robust level of metaphysical reality. Perhaps another way to see this point is to pick up on Godfrey-​Smith’s observation that our conception of representation in cognitive systems is likely modeled upon our understanding of non-​cognitive representations—​on things like thermometers, gas gauges, and maps and such (Godfrey-​Smith 2006). These non-​cognitive representations are very real, concrete bits of the physical world. They are actual things that we rely upon in our everyday lives. They are not entities whose ontological status is in some way problematic, like abstract entities or modal facts. Nor are they theoretical entities that are located at the edges of our conception of reality, like quarks or electrons. They are observable, middle-​sized objects that we employ to navigate and learn about our world. Investigators who posit cognitive representation are claiming that our brains possess entities that are doing the same sort of thing as these non-​cognitive representations. They claim that there are neuronal states and structures that belong in the same functional category. Given the robust reality of non-​cognitive representations, the most natural interpretation of these claims is one that takes them at face value—​that treats them as offering an improved understanding of the actual functional mechanics of the brain and cognition. As far as I can see, the situation is no different than when a physiologist tells us that there is something in the heart that functions as a valve. This claim is both truthful and scientifically beneficial because it describes objective reality—​there really are physical structures in the heart that open and close and allow something to flow in one direction in exactly the way valves function.3 If there was nothing in the heart that functioned 3 As an anonymous reviewer has pointed out, there may be some, such as Craver (2013), who adopt a perspectivalist, and thus observer-​dependent, account of nearly all biological mechanisms, including things like heart valves. Insofar as these folks are non-​realists about all biological mechanisms, they would qualify as the sort of broad-​based, across-​the-​board deflationist that, as I note earlier, I won’t try to persuade here.

in that manner, then the proper response would be to abandon the idea that the heart is a valve-user. The claims of cognitive researchers positing representations should be treated the same way. If, in explaining how the brain performs some cognitive task, a researcher claims that it employs neuronal structures that play a representational role, then it is extremely hard to see how such a claim ought to be interpreted as anything other than a truth-evaluable assertion about the functionality of actual structures in the brain. Furthermore, if something really is functioning as a representation, then it is a representation of or about something. Indeterminacy issues aside, I don’t see how anything could be playing any sort of representational role without it also, at the same time, having some sort of actual representational content. Positing a representation that has no real content would be like characterizing something as a functioning pump, but then insisting that there is nothing—no fluid or gas or anything—that it serves to pump. In both cases, the functional role in question requires that certain conditions be met. And just as you can’t have a fully functioning pump without something being pumped, so too, you can’t have a functioning representation without something being represented—that is, without there being some sort of representational content. Given these sorts of considerations about representational function and content, and the actual things that inspire talk of representation in cognitive science, it seems the appropriate default position with regard to the representational posits is robust realism. In other words, the burden of proof is on the deflationist—on those who claim that, given the explanatory goals of cognitive science, a robust type of representational realism is in some way misguided. What I want to do now is look at three interesting examples of deflationism to see if that burden has been met. I will argue that it has not.

2.4. Three Examples of Representational Deflationism

2.4.1. Chomsky’s Ersatz Representationalism

The first example of representational deflation comes, surprisingly, from Noam Chomsky. I describe this as surprising because Chomsky is often characterized as a champion of the psychological reality of inner rules and representations in accounting for our linguistic competence. Yet in a paper criticizing externalist interpretations of psychology—accounts that claim

64  What Are Mental Representations? science must appeal to the representational content of various structures—​ Chomsky explicitly denies that the content of cognitive representations plays any sort of explanatory role in our theorizing, and, indeed, he appears to deny that representations even have real content (Chomsky 1995). For example, when talking of the research of David Marr and his collaborator Shimon Ullman investigating the visual system, Chomsky tells us: His studies of determination of structure from motion used tachistoscopic presentations that caused the subject to see a rotating cube, though there was no such thing in the environment. . . . If Ullman could have stimulated the retina directly, he would have done that; or the optic nerve. . . . There is no meaningful question about the “content” of the internal representations of a person seeing a cube under the conditions of the experiments, or if the retina is stimulated by a rotating cube, or by a video of a rotating cube; or about the content of a frog’s “representation of ” a fly or of a moving dot in the standard experimental studies of frog vision. No notion like “content,” or “representation of,” figures within the theory, so there are no answers to be given as to their nature. The same is true when Marr writes that he is studying vision as “a mapping from one representation to another, and in the case of human vision, the initial representation is in no doubt—​it consists of arrays of image intensity values as detected by the photoreceptors in the retina” (Marr 1982, p. 31)—​where “representation” is not to be understood relationally, as “representation of.” (Chomsky 1995, 52–​53)

Here, while adopting a realist stance about the representational vehicles, Chomsky clearly rejects the notion that they have any sort of objectively real content that researchers care about. This leads Egan to suggest that Chomsky is really endorsing something like an “ersatz” notion of representation (Egan 2014a). Chomsky states that representations in cognitive theories are “not to be understood relationally, as ‘representation of.’ ” For Chomsky, then, representations appear to be non-relational inner elements that play a causal role in the processing, but stand in no special intentional relation to other things. At least initially, this seems to put Chomsky in the deflationist camp, since he appears to deny the cognitive reality of representational content. There are some aspects of Chomsky’s account that are unclear. First, it is not fully clear if Chomsky believes he is expressing Ullman’s or Marr’s views about content, or instead is expressing what he regards as the right view about content. If it is the former, then there are passages by these researchers that

Defending Representation Realism  65 suggest he is misinterpreting their views. To be charitable, let’s instead assume Chomsky is simply putting forward his own position on how we should think about representational content. Second, it is not completely clear why he adopts this skeptical attitude about content, though part of his motivation appears to be the fact that representational states in the visual system can be generated by different causes. As he notes, an image of a rotating cube can be generated by an actual rotating cube in the visual field, a tachistoscopic presentation, or, at least in principle, by directly stimulating the retina or even the optic nerve. For Chomsky, it seems it is this feature of representations in the visual system that undermines any temptation to regard representations as representations of something. For him, it seems that if the state in question can be caused in different ways, then there is no theoretical value in assuming the state has any sort of objectively real content. If this is Chomsky’s motivation, it strikes me as a poor justification for rejecting a realist perspective on representational content. No one has ever doubted that inner representations with a certain content can be triggered by lots of different causes. This is precisely why very simplistic causal theories of content have been abandoned by nearly everybody. It is why causal theories have become much more sophisticated, in part by introducing further factors that enable us to distinguish the content grounding causal links from the aberrant ones. In other words, there are various factors, like teleology or causal histories, that allow us to say why an image of a rotating cube represents a rotating cube, and if it is generated when there is no cube present, then we have a straightforward case of misrepresentation—​of the world being represented in a way that does not correspond to how it actually is (Dretske 1988; Fodor 1987, 1990; Millikan 1984). Moreover, even if there is some residual indeterminacy of content, it certainly wouldn’t follow that, as Chomsky puts it, “there is no meaningful question about the ‘content’ of the internal representations.” Quite the contrary, it would seem there is an array of perfectly meaningful and important questions about how we should characterize what is being represented in the environment and how that representational content enables the cognitive system to succeed at whatever cognitive task it is performing.4

4 In fairness, it should be noted that Chomsky also rejects talk of success with regard to the performance of different cognitive tasks. Yet it is difficult to see how we can make sense of systems like the visual system without characterizing them as performing different tasks that can either be done properly or not.

However, for our purposes here, I do not actually need to argue against Chomsky's content skepticism. All I need to show is that you can't be a content skeptic like Chomsky and, at the same time, characterize the internal structures that Ullman and Marr and other cognitive researchers posit as representations. Egan (2014a) plausibly suggests that Chomsky is an eliminativist about content. That seems right, but I would add that if one is an eliminativist about representational content, then one is a straightforward eliminativist about representation. As I noted in the last section, if there is no "representation of," if there is nothing that is actually being represented, then there is no representation, period. So while Chomsky writes as though he accepts the invoking of inner representations in these theories of perception, you really can't do that and plausibly maintain that an essential aspect of representation—the content—is nonexistent. You really can't have it both ways. Thus, Chomsky's "ersatz" representationalism collapses into just a straightforward eliminativism about inner representations. There is no "third option" here.

2.4.2.  Sprevak’s Neural Representation Fictionalism A second and more overt form of representational deflationism has recently been presented by Mark Sprevak (Sprevak 2013). Sprevak’s interesting proposal is to regard the invoking of neurological representations in cognitive theories as a type of fictionalism, whereby the invoking should not be viewed as an attempt to describe actual structures in the brain. As Sprevak puts it, “According to NRF [neural representation fictionalism], statements of the following form are false: (5) ‘Neuron/​brain region activity X represents Y.’ However, statements of the form (5) nevertheless serve a useful purpose and are fact-​stating” (2013, 10). So, just as some philosophers have endorsed fictionalism as a way of dealing with problematic things, like modal facts or nonexistent entities,5 Sprevak suggests that a similar strategy might work for neural representations. As Sprevak notes, neural representation fictionalism is at least initially appealing because it retains what is attractive about both realism and eliminativism. On the one hand, one can avoid the counterintuitive consequences of eliminativism by retaining a place for representations in our theorizing. The paradigm of representationalism need not be abandoned. 5 See Caddick Bourne 2013 for a good overview.

On the other hand, as with eliminativism, we can avoid the difficult task of providing a naturalistic, cognitive account of both representation and representational content. If representations are treated as useful fictions, then we can legitimize our representational theorizing without committing ourselves to explaining just how neurological structures actually come to play a representational role. Sprevak also develops the fictionalist hypothesis by noting that many fictional statements have a relation to truth that is somewhat complicated. Many are false in the ordinary sense. For example, the claim that Sherlock Holmes was a detective based in London is false. But there is also a context in which such statements are, in some sense, true. Within the fiction of "The Hound of the Baskervilles" it is true that Sherlock Holmes is a brilliant detective based in London who solves an important mystery. So the idea behind neural representation fictionalism is that, within the context of a representational theory or model of cognition, the claim that the brain employs inner representations is true, even if it is not straightforwardly true outside of that context. Thus, a proponent of NRF would endorse a claim like this: In the neural representation fiction, neuron/brain region activity X represents Y. What are we to make of this? First, in fairness to Sprevak, it must be noted that he does not offer a straight-out endorsement of neural representation fictionalism. In fact, he presents two compelling arguments against it toward the end of his paper. Instead, he presents NRF as a compelling proposal—as a thesis that is worth exploring. I agree, so let's critically explore it. I believe that there are two very big problems concerning the compatibility between neural representational fictionalism and the traditional goals of cognitive science, and that these problems would be very hard to overcome. The first of these problems concerns the sort of contextualized truthfulness that fictional claims allegedly have. Neural representation fictionalism implies that there is a context—the neural representation fictional theory—that allows us to say that the claims of the representational theory are true even if there are no representations. The immediate problem is that the scientific enterprise doesn't really provide such a context. With a deliberately fictional story, we are asked to entertain a possible world with various characters and events and so on. And within that possible world, certain things exist and events occur that do not exist or occur in the real world. By contrast, cognitive theorizing is not trying to describe a possible world. Like most scientific theorizing, it is for the most part attempting to describe and explain and predict the real world. In science, a purely fictional account of some phenomena

68  What Are Mental Representations? is exactly what we are trying to avoid. So as with most of science, in cognitive theorizing there is no contextual separation between the way things are depicted in the model or theory on the one hand, and the way things are in the real world on the other hand. The way things are depicted in the context of the theory is supposed to correspond to the way things are in reality. Truthfulness within the context of a cognitive scientific theory is no different than just good old-​fashioned truthfulness. Consequently, the sort of truth-​ within-​story context that might apply to some sorts of fictions does not apply to the scientific realm. Now of course, a representational fictionalist can admit all this, abandon any appeal to truth, and still claim that there is some sort of pragmatic or instrumental value in invoking representations even when they don’t actually exist. But this brings us to the second big problem. The pragmatic goals of science are much, much harder to achieve with fictional theories than they are with theories that accurately describe what is really going on. In fact, Sprevak acknowledges that with regard to a couple of the pragmatic goals of representational theorizing, fictionalism appears to be a nonstarter. One goal is providing the best explanation of cognitive processes; however, surely one key feature of the best explanation is that it accurately captures what is actually taking place in the relevant system. Second, a traditional objective of scientific theorizing is to account for the causal structure of reality, yet it is quite clear that fictional entities cannot actually engage in any causal relations. However, as Sprevak notes, it is at least possible that fictionalism can bring about some of the other pragmatic goals of science. For example, a fictionalist treatment of representations could possibly allow us to nevertheless make accurate predictions of behavior or cognitive phenomena. After all, past theories that have invoked nonexistent entities have nevertheless generated accurate predictions. Also, fictionalism may still allow us to provide at least useful descriptions of neural processes, as sometimes helpful descriptions in other areas of science employ fictional entities, like the perfectly elastic, billiard-​ball-​like atoms of the kinetic theory of gases. And finally, NRF may still permit us to appeal to representations for intervention and manipulation, just as the fictional centrifugal force can be invoked in the manipulation of spinning objects. I acknowledge that there is indeed the possibility that theories which posit purely fictional cognitive representations could make good predictions, provide useful descriptions, and allow for successful intervention. But that strikes me as far too weak a standard for evaluating any proposed sort of

fictionalism. It is possible for any bad theory that, in the long run, ought to be abandoned to nevertheless be beneficial in these ways, at least temporarily. What matters is whether there are any reasons to think that neural representational fictionalism is likely to be helpful in these ways—whether it is probably going to be more pragmatically beneficial, in the long run, than theories that accurately describe the mechanisms and processes underlying cognition. I'm aware of no accounts, including Sprevak's, that provide such reasons. Yet surely the burden is on the proponent of representational fictionalism to explain why an account that posits representational states and processes that, in fact, are not real would prove over time to be pragmatically superior to a fact-based account. As far as I can see, this burden has not been met. By contrast, it is fairly easy to provide reasons for thinking that theories that accurately describe the structures, mechanisms, and processes in any given system, including the brain, are far more likely to deliver on various theoretical desiderata than theories that are divorced from reality in the way a fictional theory is. A theory is far more likely to generate accurate predictions, provide helpful descriptions, and allow for successful intervention if the mechanisms it attributes to the processing are objectively real and doing what the theory says they are doing. This is for the simple reason that accurate, nonfictional theories provide a much better understanding of the relevant phenomenon, and things like prediction, description, and manipulation are enhanced by greater understanding. In short, while it is possible for pragmatic goals to be achieved with a theory invoking fictional entities, it is far less likely to occur than when we have a theory that appeals to the actual mechanisms and states underlying some phenomenon.6 Look at things this way: at present, we can be uncertain about whether the representations posited in cognitive theories are objectively real. Hence, there are basically two possibilities with regard to what is really going on in the brain—either the posited representations exist or they do not. Consider the first possibility and suppose that the representations that researchers posit really do exist. If that is the case, then adopting a fictionalist interpretation of them would be a bad mistake. Such an interpretation may get us off the hook in terms of explaining how representation can happen in the brain.

6 In fairness, as one reviewer has pointed out, idealizations can sometimes simplify matters and thereby make an account easier to understand. Yet there is also a notorious trade-off: the greater the idealized simplification, the greater the diminution of complete, detailed understanding. As I note later, insofar as an idealization is sufficiently minimal to preserve correct, robust comprehension, it seems doubtful the idealization is a form of real fictionalism, as opposed to a close approximation of the truth.

70  What Are Mental Representations? But if representation really is happening in the brain, then we should get to work and provide the explanation of how that is occurring. Fictionalism would not only endorse the wrong attitude about the ontological status of the representational posits of cognitive theories, it would also encourage a counterproductive explanatory apathy that would hinder real scientific progress. Going the other way, suppose that there really are no cognitive representations of the sort that researchers posit. Then the question becomes this: which development would lead to a better outcome for cognitive science? Would it be better if we continued to embrace representationalism and representational theories, yet treating the representations they invoke as fictional entities? Or would it be better if instead we came to see representationalism as a deeply flawed paradigm, abandoned it, and started developing a new perspective on the nature of cognition? In addressing this question about how we should deal with theoretical posits that are not actually part of the causal/​physical world, the history of science provides some guidance. It suggests that eliminativism is the proper route. Notice that fictionalism was always an option regarding entrenched theories in the past that posited entities whose reality came into doubt. There could have been folks defending a fictionalist interpretation of theories involving phlogiston, the celestial spheres, the humors of the body, and so on. But in all those cases, it seems clear that fictionalism would have been a big mistake. Adopting a fictionalist stance toward these nonexistent things would have been epistemically and pragmatically disastrous because it would have thwarted the development of more accurate theories that were ontologically correct and gave us a better understanding of reality. If neural representations do not exist, then adopting a fictionalist attitude would be equally unwise. Fictionalism would hinder our acquisition of a better theory, in terms of both epistemic and pragmatic goals, that would yield a superior understanding of the structures, processes, and mechanisms underlying cognition. A possible rebuttal to my criticism could go like this: Sprevak notes that fictionalism often arises in science when theorists posit an idealized or simplified characterization of some entity or phenomena. For instance, in fluid dynamics water is sometimes characterized as a completely incompressible fluid, but in reality it is only relatively incompressible compared to some other liquids. Still, in some areas of fluid dynamics it helps to describe or treat water as incompressible, and this seems like a case where fictionalism is useful in science. Now given what I said earlier about the indeterminacy of

content, it seems a similar sort of idealization could occur with the positing of neural representations. Positing a neural representation with very specific, determinate content may be an idealization that is not so different from the fictions of incompressible water or perfectly elastic atoms. If so, then wouldn't that give us a helpful sort of neural representation fictionalism? In response to this possible objection, two points are in order. First, it is far from clear to me that idealizations really should count as out-and-out fictions. In fact, it seems that they are approximations of the truth—cases where you have very real and actual things described as having a property that they do indeed have, but just not to the degree that is described in the theory. I agree with Winsberg (2009) that idealizations should be treated as some form of nonfictional assertion. Second, it is also far from clear that researchers insist upon positing representations with a very specific and determinate content. It seems that much of the time, theorists allow that exactly what is represented is a little unclear or indefinite; often they preface descriptions of the content with a "something like" or "roughly." In other words, insofar as some degree of indeterminate content is tolerated in cognitive science, there is no idealization that would justify a fictionalist interpretation. I will, however, make one concession to the neural representation fictionalist. My concession concerns the talk of information that often accompanies our talk of representation. We often speak of information as if it is something that gets carried by representations, or that is passed on or relayed or stored. Information talk sometimes appears to reify information, suggesting that it is a kind of thing or substance. However, we should regard all of this language as figurative and treat information, qua thing, as a kind of helpful heuristic. What is real is the existence of some kind of correspondence between states of representations and other parts of the world, plus the ways in which various systems exploit this arrangement. That correspondence and exploitation are what we are really talking about when we talk about information being carried and relayed. With regard to this figurative language of information-qua-thing, I admit that some sort of weak fictionalist interpretation might be the proper route to take.

2.4.3. Egan's "Intentional Gloss"

The third and final sort of deflationism I want to look at concerns Frances Egan's account of computational content in different sorts of cognitive

72  What Are Mental Representations? theories, such as Marr’s account of early visual processing and Shadmehr and Wise’s theory of motor control (Marr 1982; Shadmehr and Wise 2005). Egan’s account is important because it is firmly grounded in actual cognitive theorizing, and she is to be commended for her efforts to accurately describe real science. Moreover, I should note at the outset that I regard her account as one that involves only a very weak sort of deflationism, or quasi-​deflationism. While I think she accurately describes the nature of these representational theories, it is her retreat from a robustly realist account of cognitive content that I find problematic. Egan argues that computational theories of cognitive processes typically involve representations with two very different sorts of content. First, there is the type of representational content that is associated with the computational function that is being instantiated. Egan notes that this function is typically mathematical in nature and, indeed, often involves a function with which we have prior familiarity. For example, in Marr’s account of early visual processing, there is a proposed mechanism that is used to derive zero-​crossings (significant changes in light intensities) in the image from light intensity values. According to Marr, this subsystem computes a function involving a Laplacian operator—​it is an image-​smoothing filter that, as Egan puts it, computes the Laplacian of the Gaussian. Consequently, the input and output of this function—​its arguments and values—​are mathematical entities. But, of course, since mathematical entities cannot be instantiated in the brain, then the neurological computational system uses representations of mathematical entities as inputs and outputs. The same is true, she argues, in a variety of other computational theories in cognitive science. Second, there is also the representational content associated with these same inputs and outputs that allows us to see the relevance of this mathematical function to the relevant cognitive task in question. The visual system computes a function with a Laplacian operator for a reason—​it has a specific role to play in visual processing. It converts representations of light intensity values into, very roughly, representations of the boundaries and edges in the real-​world scene. This content, which Egan refers to as the “cognitive content,” allows us to understand the psychological role that the mathematical computer is playing in the broader cognitive capacity that we want to have explained. In short, for Egan, the input-​output representations of computational subsystems have two sorts of content. One type of content is mathematical in nature, accommodating the mathematical function being computed, and

Defending Representation Realism  73 the other type of content is non-​mathematical and involves real-​world properties and things that are germane to the cognitive task in question, like edges and boundaries in the world. So far, this strikes me as a very helpful and scientifically grounded way to think about content in computational theories. It is closely related to what I have referred to elsewhere as the “IO” notion of representation (for “Input-​Output”; Ramsey 2007), only Egan’s account is more sophisticated in that it captures this duality of content that appears in many computational cognitive theories. I would only add that such an account needn’t be restricted to mathematical functions. It seems the account would work equally well for various logical, syntactic, or other formal operations. Thus, the inner computer would be doing something like modus ponens, converting any input conditional and antecedent into an output consequence. However, there is another aspect of Egan’s account that I find more problematic. According to Egan, only the mathematical content is essential to the inputs and outputs of the computational system. The more conventional cognitive content, involving real-​world properties and parameters, is simply what she calls an “intentional gloss.” The assignment of cognitive content is parochial and dependent upon our own peculiar explanatory interests. She states: Cognitive contents  .  .  .  are not part of the essential characterization of the device, and are not fruitfully regarded as part of the computational theory proper. They are ascribed to facilitate the explanation of the cognitive capacity in question and are sensitive to a host of pragmatic considerations. . . . Hence, they form what I call an intentional gloss, a gloss that shows, in a perspicuous way, how the computational/​mathematical theory manages to explain the intentionally-​described explanandum with which we began . . . contents defined on distal objects and properties appropriate to the cognitive domain (what I call “cognitive” contents) . . . are not in the theory; they are in the explanatory gloss that accompanies the theory, where they are used to show that the theory addresses the phenomena for which we sought an explanation. (128–​131)

Thus, in Egan’s dual-​content account of computational theorizing, the mathematical content is essential, domain or environment neutral, and largely independent of our explanatory interests. By contrast, the cognitive content is not essential to the input and output representations, at least with regard

74  What Are Mental Representations? to the operation of the inner computational device. The cognitive content is merely an add-​on that facilitates our idiosyncratic psychological expository interests. To me, that sounds like a quasi-​deflationary interpretation of cognitive content. But what motivates this depreciation of the cognitive content? It is a little unclear. I do understand how it is not essential to the system, qua mathematical computer. But why not say that the cognitive content is essential to the role of the relevant computational subsystem, qua psychological mechanism? Given that Marr’s computational subsystem has been selected by evolutionary processes to play a specific cognitive role pertaining to vision, involving representing specific features of the environment like light and boundaries, it would seem that the cognitive content is as real as representational content gets. One apparent reason Egan diminishes the cognitive content of the input and output representations is the fact that the system computing the particular mathematical function could compute the same function in a completely different system with a completely different overall role and, thus, with different cognitive contents. As Egan puts it, “the ‘representational vehicles’ do not have their cognitive contents essentially. . . . If the mechanism characterized in mathematical terms by the theory were embedded differently in the organism, perhaps allowing it to sub-​serve a different cognitive capacity, then the structures would be assigned different cognitive contents” (2014a, 127). So the subsystem that computes the Laplacian of the Gaussian filter in our visual system could conceivably be plugged into a different system to perform the same mathematical computation for a process that has nothing to do with light intensity or edges. For example, it could be (in theory) plugged into our auditory system, in which case the mathematical content would remain quite similar, but then the cognitive content would correspond not to light intensities and edges, but to (presumably) some sort of acoustic properties in the auditory environment. While I  agree with Egan that the mathematical minicomputer is portable in this way, I do not see why the inter-​changeability of the embedding system has any bearing at all on the status of the representation’s current non-​mathematical content. Biological subsystems often play their primary role only by virtue of being embedded in a very specific system that brings with it the relational properties that help define that role. Our hearts really do have the function of pumping blood, and that fact doesn’t change even if, in principle, a human heart could be removed and plugged into an alien’s body

Defending Representation Realism  75 where it would pump something very different. With regard to representational systems, there are many robustly real representational devices with a particular content that would nevertheless be transformed if the device were to be relocated into a different embedding system. The gauge that currently tells me how much fuel I have left in my car could conceivably be embedded into a very different system. Depending on the details, were that to happen the gauge could come to represent something else; say, how much water is left in some holding tank. But that fact doesn’t in any way imply that right now, when I describe it as actually representing fuel levels, I am merely providing some sort of heuristic interpretive gloss. Consider one of Marr’s own examples employed to illustrate his understanding of computation and representation (Marr 1982, 22–​24). Marr asks us to consider a cash register and the adder subsystem that is one of its internal components. As Shagrir (2010) has forcefully argued, for Marr it is important that we understand why the mechanism in the cash register is an adder and not, say, a multiplier. There is an adder in the cash register and not a multiplier because addition is the mathematical function that captures the financial reality of how combined tabs are determined. Like the mathematical systems Marr later describes in the visual system, the adder is computing a well-​known mathematical function, and thus its inputs are representations of numbers and its outputs are representations of sums. But, of course, as the adder is functioning in a cash register, these numerical values correspond with real-​world prices of individual items to be purchased and the overall amount of money that needs to be paid. Now of course, the exact same adder could be removed from a cash register and placed into some other device, say some sort of coin counter. In a coin counter, the adder would compute the exact same mathematical function. But now the numerical inputs would correspond to the values of coins as opposed to the prices of items. Still, this transportability of the adder in no way alters or diminishes the representational role of its inputs and outputs when it is in the cash register. In such a machine, the adder, qua tally calculator, really does have inputs that stand for item prices and an output that stands for the overall amount owed. The input-​output representations of the adder would seem to have this content essentially, given the functional role it is in fact playing in a cash register. In one sense, Egan is right—​treating the inputs and output representations of inner computational subsystems as representing elements germane to the cognitive task in question is critical for our explanatory purposes. But we should not treat this content as simply a gloss or an interpretive device added

76  What Are Mental Representations? on for our parochial interests. Or if it is a gloss based upon our interests, it is no more of a gloss than, say, assigning a functional role to biological subsystems or, for that matter, assigning mathematical computational roles to neurological systems.7 In fact, a case could be made for saying that if we are going to demote one of these contents to a nonessential gloss, it is actually the mathematical content, not the cognitive content. After all, the mathematical content helps us to classify the computational subsystem as an instance of a well-​known mathematical function. But this classification, while no doubt beneficial, is dependent upon our idiosyncratic mathematical knowledge. For our understanding of the computational cognitive process, it seems we could conceivably get by without this classification. If we were mathematically unsophisticated and ignorant of Laplacian filters, we could still come to see the neurological mechanism as performing transformations on cognitive representations of light intensities to derive representations of boundaries. Of course, I  don’t want to deflate any sort of representational content. Instead, my point is this: while Chomsky’s account collapses into straightforward eliminativism, I would suggest that Egan’s account should be seen as collapsing into straightforward realism. There is good reason to treat her notion of cognitive content as an essential dimension of the input and output representations that are themselves essential to the computation being performed in the specific cognitive system. The neurological structures that Marr posits as representations do stand for mathematical values, but those values also correspond to very real features of the visual environment. The representation of those features is critical to his account of the way computational cognitive processing occurs in the visual system.

2.5.  Conclusion My primary goal in this chapter has been to convince you that a realist/​ eliminativist stance toward the representations that are posited in cognitive theories is metaphysically and pragmatically superior to a deflationary stance. Realism can handle many of the difficulties associated with cognitive representation, it is embraced by most cognitive scientists who appeal to

7 Some might bite the bullet and take this point to indicate that all three—​functional role, mathematical content, and representational content—​are all merely a gloss. But for Egan and most others, I’m assuming this would be a very tough bullet to bite.

Defending Representation Realism  77 representations in their theorizing, and given our functionalist conception of representation, it is the most natural and sound perspective on what is taking place when representations are invoked in the sciences of the mind. That is not to deny that there are interesting deflationary accounts that have been put forward; in fact, we have examined three of the most intriguing ones. However, those accounts do not hold up to scrutiny, as they either have serious flaws or they collapse into either realism or eliminativism. Of course, it is certainly possible that down the road, there will be scientific accounts of cognitive processes that invoke a type of representation that is indeed suited for a deflationary treatment. I once thought that certain accounts of “tacit knowledge” allegedly represented in the architecture of computational systems might merit this treatment; however, I  came to regard them instead as just misleading characterizations of nonrepresentational dispositional states (Ramsey 2007, 151–​187). In any event, at the present, given the nature of current representational theorizing, I recommend embracing the view that when representations are invoked in cognitive theories, this comes with a significant ontological commitment to the existence of actual structures playing a representational role and that have real representational content.

Acknowledgments Versions of this paper were presented at the conference “Mental Representations: The Foundation of Cognitive Science?” at the University of Bochum, Bochum, Germany, September 21–​23, 2015, and at the University of Mississippi philosophy colloquium on February 6, 2017. I am grateful for the helpful feedback these audiences provided, as well as helpful suggestions from two anonymous reviewers.

References Caddick Bourne, E. 2013. Fictionalism. Analysis 73: 147–​162. Chemero, A. 2011. Radical, Embodied Cognitive Science. Cambridge, MA: MIT Press. Chomsky, N. 1995. Language and Nature. Mind 104: 1–​61. Craver, C. 2013. Functions and Mechanisms: A Perspectivalist View. In P. Huneman (ed.), Functions: Selection and Mechanisms, 133–​158. Dordrecht: Springer Press. Dennett, D. 1987. The Intentional Stance. Cambridge, MA: MIT Press.

78  What Are Mental Representations? Dretske, F. 1988. Explaining Behavior. Cambridge, MA: MIT Press. Egan, F. 2014a. How to Think about Mental Content. Philosophical Studies 170: 115–​135. Egan, F. 2014b. Explaining Representation:  A Reply to Matthen. Philosophical Studies 170: 137–​142. Fodor, J. 1987. Psychosemantics. Cambridge, MA: MIT Press. Fodor, J. 1990. A Theory of Content and Other Essays. Cambridge, MA: MIT Press. Godfrey-​ Smith, P. 2006. Mental Representation, Naturalism, and Teleosemantics. In G. MacDonald and D. Papineau (eds.), Teleosemantics, 42–​68. Oxford:  Oxford University Press. Haugeland, J. 1991. Representational Genera. In W. Ramsey, S. Stich, and D. Rumelhart (eds.), Philosophy and Connectionist Theory, 61–​89. Hillsdale, NJ: Lawrence Erlbaum. Hutto, D., and Myin, E.  2012. Radicalizing Enactivism:  Basic Minds Without Content. Cambridge, MA: MIT Press. MacDonald, G., and Papineau, D., eds. 2006. Teleosemantics. Oxford:  Oxford University Press. Markman, A., and Dietrich, E. 2000. In Defense of Representation. Cognitive Psychology 40 (2): 138–​171. Marr, D. 1982. Vision. New York: Freeman. Millikan, R. 1984. Language, Thought and Other Biological Categories. Cambridge, MA: MIT Press. Putnam, H. 1975. Mathematics, Matter and Method. Cambridge:  Cambridge University Press. Ramsey, W. 2007. Representation Reconsidered. Cambridge: Cambridge University Press. Shadmehr, R., and Wise, S. 2005. The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning. Cambridge, MA: MIT Press. Shagrir, O. 2010. Marr on Computational-​ Level Theories. Philosophy of Science 77: 477–​500. Sprevak, M. 2013. Fictionalism about Neural Representations. The Monist 96 (4): 539–​560. Stich, S. 1983. From Folk Psychology to Cognitive Science:  The Case against Belief. Cambridge, MA: MIT Press. Swoyer, C. 1991. Structural Representation and Surrogative Reasoning. Synthese 87: 449–​508. van Frassen, B. 1980. The Scientific Image. Oxford: Oxford University Press. Winsberg, E. 2009. A Function for Fictions: Expanding the Scope of Science. In M. Suárez (ed.), Fictions in Science: Philosophical Essays on Modeling and Idealization, 179–​190. London: Routledge.

3 Deflating Deflationism about Mental Representation
Daniel D. Hutto and Erik Myin

Cognitive science—the representational science of mind—has nothing like a clear and universal concept of representation and this fact is holding it back. —Clapin 2002, 19

Anyone seriously interested in the conceptual foundations of cognitive science must eventually grapple with what makes something a representation. —Morgan and Piccinini 2018

Cognitive science is generally billed as a representational science of mind, through and through. The textbooks tell us that mainstream cognitive science is firmly if not foundationally committed to the existence and causal explanatory power of mental representations. In his celebrated history of and introduction to cognitive science, Gardner (1987) identifies a handful of core commitments that he takes to be central to its enterprise. First and foremost on his list of commitments is the necessity to invoke "mental representations and to posit a level of analysis wholly separate from the biological and the neurological, on the one hand, and the sociological and the cultural, on the other" (3). In line with Gardner's characterization, Kolak et al. (2006) offer a familiar formulation, telling us that "researchers in cognitive science seek to understand brain processes as computational systems which manipulate representations" (2).

80  What Are Mental Representations? Occasionally, cognitive science is characterized in apparently broader and more neutral terms. For example, Bermudez (2014) claims, without any mention of mental representations, that its fundamental, unifying idea is that “the mind is an information processor” (xxviii). Branquinho (2001) also takes commitment to information processing to be the field’s substantive, defining assumption (xii). Yet he thinks it is supplemented by two further, “relatively uncontroversial, foundational claims. . . . The assumption that the mind represents and the assumption that the mind computes” (xxiii–​xiv, xv). Others see a stronger connection between information processing and mental representation. They contend that information processing entails computing over mental representations. For example, Friedenberg and Silverman (2006) tell us that cognitive science’s theoretical perspective pivots around the idea of computation, which may alternatively be called information processing. Cognitive scientists view the mind as an information processor. Information processors must both represent and transform information. That is, a mind, according to this perspective, must incorporate some form of mental representation and processes that act on and manipulate that information (3).

Others, like Thagard (2005), are more cautious and less definitive about cognitive science’s commitment to representationalism. Thus Thagard (2005) is more circumspect in telling us that “most cognitive scientists agree that knowledge in the mind consists of mental representations” (4, first emphasis added). Let us assume, as this short survey of textbook sources proposes, that mainstream cognitive science is firmly—​even if not utterly decisively—​ committed to mental representations. Knowing this, on its own, does not tell us much about the nature of such a commitment. This is because there is more than one notion of mental representation afoot (Rescorla 2016; Rowlands 2017).1 As there is more than one notion of mental representation in play, the commitment to mental representation in the sciences of the mind can take

1 Rescorla (2016) currently observes that “Philosophers and scientists use the phrase ‘mental representation’ in many different ways” (17).

Deflating Deflationism  81 different forms. Importantly, for the purposes of this paper, some notions are more deflationary than others. Deflationists face a tough challenge. They must somehow secure mental representationalism while letting go of the very properties that are widely assumed to make mental representations representational. Moreover, they must do so while showing that such a maneuver results in a net explanatory gain. The deflationists’ challenge is as follows. On the one hand, they must show that the deflated account of mental representation on offer does not collapse into a nonrepresentationalist account of cognition. On the other hand, assuming the first challenge can be met, they must establish that the proposed deflated account of mental representation offers a clear explanatory advantage over its nonrepresentationalist rivals. We argue that neither full nor partial deflationism succeeds in satisfying these demands. Bearing this in mind, we structure the chapter as follows. In section 3.1, we introduce deflationism about mental representations. Sections 3.2 and 3.3 explicate the nature of full deflationism—​which is marked out by its complete rejection of mental representational content—​and situate it with respect to contentless forms of computationalism and enactivism. In the end, we argue that full deflationism risks a slippery-​slope collapse into the latter of the aforementioned anti-​representationalist positions. Section 3.4 explicates a version of partial deflationism that invokes a notion of mathematical content that allows it to avoid the problem of collapse that haunts full deflationism. However, on the one hand, if mathematical content is construed realistically, partial deflationism encounters familiar explanatory puzzles that it fails to resolve. Yet, on the other hand, if mathematical contents are not construed realistically, then it is unclear how they can be explanatorily interesting for the sciences of the mind. Thus, in the end, we see only deflated prospects for the most developed deflationist approaches on today’s market.

3.1.  Mental Representation: With and Without Representational Content As we have seen, in cognitive science, it is easy to stumble on the idea that mental representations are states of mind that bear mental content of some kind. Some notion of mental content is implied wherever the content/​vehicle distinction is drawn (see Egan 2014a, 116–​117; and also Hutto and Myin 2017, 37).

82  What Are Mental Representations? There is no shortage of those who believe that the kind of mental content that states bear is mental representational content. The received view is that mental representational content plays a pivotal, indispensable explanatory role in explaining intelligent behavior. For example, Shea (2013) tells us, “cognitive science encompasses several hugely successful disciplines that rely on representational content as a central explanatory resource” (499). Moreover, it is commonly assumed that mental representation content is only explanatory because it causally contributes to cognition (O’Brien 2015; Williams and Colling 2017). What is mental representational, or MR, content? MR-​content is understood quite broadly in cognitive science: it is taken to be the semantic property of states of mind that represent how things are with the world. MR-​ contents are taken to specify conditions of satisfaction—​where the latter can be understood flexibly in terms of truth, accuracy, or veridicality conditions. Essentially, MR-​contents come in a variety of forms, but, as a class, MR-​ contents entail representational conditions of satisfaction of some sort. What all MR-​contents have in common is that they specify a way the world is such that the world might, in fact, not be that way. This conception of MR-​content is a centerpiece of analytic philosophy of mind and cognitive science. It is the default or textbook notion of mental content that is standardly invoked by most authors when they are thinking about mental representations. Once the core notion of MR-​content is understood, it becomes clear that propositional content is just one, albeit prominent, type of MR-​content. There is MR-​content wherever it makes sense to talk of a vehicle carrying the sort of content that “says” something about the world, where that something said can be evaluated in terms of its fit with the way the world actually is (cf. Isaac 2019; Skyrms 2010). In a similar vein, there is great flexibility as to how we might think about the bearers or vehicles of mental content. On the standard conception, mental content must be carried by computational states of some kind. However, this leaves open a wide range of possibilities as to the form vehicles, and hence mental representations, might take. For example, the vehicles of mental content might be syntactically structured sentences in a language of thought, as tradition would have it (Fodor 2008). Or the vehicles of content might be much more exotic in character, as Andy Clark (2004) imagines them to be, when making logical space for a dynamic computationalism. In sketching what such a hybrid account would look like, he offers a picture of cognition

according to which complex and temporally extended processes "act as the vehicles of representational content. Traditional computationalism may have been just too narrow-minded in its vision of the likely form of the inner bearers of information and content" (158). However, despite all this, serious doubts have been raised about whether cognitive science needs to posit mental representations with MR-content and, indeed, whether it can successfully do so (Ramsey 2007; Chemero 2009; Thompson 2007; Hutto and Myin 2013, 2017; Gallagher 2017; Fuchs 2017; Di Paolo et al. 2017). There exists great skepticism in some quarters about whether MR-contents will feature in our mature accounts of cognition and best explanations of behavior. Writing over a decade ago, Clapin (2002) put his finger on a major concern that still troubles the field today. Pithily, he observed, "If there are inner mental representational states, we need an account of how those inner states are able to carry content or meaning" (Clapin 2002, 6). But understanding how concrete states can bear abstract mental content is not the only issue that troubles those who assume that MR-content plays a pivotal explanatory role in cognitive science. For even if we manage to figure out how mental states could carry contents, we also need an explanation as to how any carried content could make a causal difference to acts of cognition. Ultimately, we also need a naturalistic account that explains where mental states get their putative contents in the first place. Putting these concerns together, we need accounts of the origins of mental content; how mental contents could be carried by vehicles; and how mental contents might matter.2 As things stand, it is far from clear that it will prove possible to solve these problems. Some defenders of representationalism are untroubled by this situation. This is because, by their lights, cognitive science only commits to the existence of mental representations conceived of in a weaker sense—such that mental representations are not assumed to bear MR-contents intrinsically. To embrace such a view is to subscribe to mental representational deflationism. Deflationists hold that we need to posit mental representations when theorizing about the nature of cognition, but they deny that doing so

2 It turns out that it isn't all that easy to explain how mental contents could arise in nature. In point of fact it has proved frighteningly hard to explain how content arises in nature (Hutto and Myin 2013, 2017). There is every reason to think that the familiar naturalistic theories of content—those which appeal to, e.g., causal, informational, and teleosemantic theories of content—have failed to deliver their promised goods, pace Miłkowski (2015).

84  What Are Mental Representations? requires making any realistic ontological commitment to the existence of MR-​contents. Deflationists about mental representations commit to representationalism while at the same time denying that mental states that figure in the best explanations of intelligent activity actually bear real, causally efficacious MR-​contents (see also Ramsey, this volume). Deflationism about mental representation might be motivated by a desire to avoid certain long-​standing problems relating to mental content—​ namely those of the kind mentioned earlier (Egan, this volume). Or it might be motivated by pragmatic considerations on the grounds that attributing mental contents, without committing to them as real and causally efficacious properties, plays important heuristic functions—​acting as a placeholder or helping us to keep track of explanatorily relevant changes occurring within cognitive systems. Defending deflationism might be seen as part of an explicit revisionist strategy—​a way of dealing with trivializing arguments that aim to show that existing conceptions of mental representations are not suited to do the sort of explanatory work earmarked for them. As such, deflationism might be a way of rethinking mental representationalism in an effort to “develop, strengthen, and indeed reform the mainstream understanding of what representations are” (Gładziejewski and Miłkowski 2017, 2). Should, in the end, deflationism carry the day, then—​in the best case—​ what will have changed is not the nature of the explanatory posits actually used in cognitive science, but only philosophers’ understanding of the nature of such posits. What the field will have come to realize is that the mental representations “are not the semantically loaded ones of recent philosophy” (Jacobson 2015, 626).

3.2.  Full Deflationism: The Whatever Policy A fully deflationist theory of mental representation has to pull off a neat trick. On the one hand, it must abandon any commitment to the existence of certain bothersome properties canonically associated with mental representations. Yet, on the other hand, it must retain the idea that mental representations feature in our best characterization of what lies at the basis of cognition. Can this be done? Chomsky (1995) makes a prominent attempt. He rejects the idea that the sciences of the mind should seriously posit states with MR-​content in their

Deflating Deflationism  85 explanations of behavior, adamantly telling us that “no notion like ‘content,’ or ‘representation of,’ figures within [cognitive] theory” (Chomsky 1995, 52). Nevertheless, despite this rejection, he holds that a deflated notion of mental representation can and should be retained by those sciences. What motivates this attempted combination of views? As Egan (2014a) reports, despite holding that the notion of MR-​content has no place in the sciences of the mind Chomsky holds that a notion of mental representation is needed by those sciences in order to “preserve the idea that cognitive theories describe some cognitive capacity or competence, since these, the explananda of scientific cognitive theories, are described in intentional terms” (Egan 2014a, 119). This is because, for Chomsky, reference “to what looks to be an intentional object is simply a convenient way of type-​identifying structures with the same role in computational processing” (Egan 2014a, 118).3 It is instructive to compare this style of deflationist, yet realist, way of understanding mental representations with openly fictionalist attempts to deal with MR-​content. Fictionalists deny mental content exists or plays any causal role in explaining intelligent behavior. Nevertheless, they admit that intentional talk “provides us with a way of referring to the (real) neural causes of behavior . . . [A]‌ttributing representational properties to neural states, even if false, provides a way of labeling neural states, and hence of keeping track of them” (Sprevak 2013, 555). This can be so, fictionalists maintain, even if the “attribution of representational properties to neural states is systematically false” (Sprevak 2013, 555). Where deflationary realists differ from fictionalists is that the former remain deeply committed to the presumption that cognition must involve the manipulation of mental representations of some stripe—​ viz., that representations of some kind must figure in our best explanation of intelligent behavior. Deflationists are supremely confident in the value of positing mental representations even though they will not commit to realism about MR-​ contents. Despite eschewing realism about MR-​content, they hold that a representational cognitive science is a must: it is the only possible game in town. Why so? A familiar answer starts with the observation that truly intelligent behavior is flexible (O’Brien and Opie 2015). Intelligent organisms agilely adapt and adjust to the novel circumstances in which they find themselves 3 Accordingly, “intentional construals of David Marr’s (1982) theory of vision, such as Burge (1986) and many subsequent accounts, Chomsky claims, are simply a misreading, based on conflating the theory proper with its informal presentation” (Egan 2014a, 119).

86  What Are Mental Representations? in ways that suit their needs and goals. The fact that they can do this is the basis for one of the most persuasive and influential arguments for thinking that mental representations—​of some kind or another—​must feature in the explanation of such flexible, intelligent activity. For, so the well-​rehearsed reasoning goes, there must be something special that underwrites such cognitively driven activity; something that explains why such intelligent activity is set apart from the rest of unthinking nature that contains only mere, but not genuinely intelligent, behavior. Enter mental representations. The long-​ standing conviction of analytic philosophers of mind and cognitive scientists is that mental representations—​whatever they are—​are what puts the intelligence inside organisms. It is mental representations—​whatever their precise properties—​ that are responsible for guiding and, presumably, causally explaining specific patterns of intelligent behavior. Thus, the truth of mental representationalism is secured simply by the fact that we are seeking to explain intelligent behavior—​namely, the competencies of intelligent creatures or systems. The explanatory need for representations is guaranteed, no matter what we discover about the precise character of the properties that underwrite or explain cognitive competencies. Call such an open policy the Whatever Policy. Such a policy should raise alarm bells for explanatory naturalists. As Ramsey (2017) observes, casting the notion of mental representation as a placeholder is “not supported by a proper scientific outlook” (4). Worse still, as Ramsey emphasizes, such a policy is bound to lead to bad consequences, such as (1)  unnecessarily restricting our theorizing about cognition; (2) undermining the empirical nature of the representational theory of mind; and (3) encouraging substantial weakening of the notion of mental representation. That cognitive science cannot do without mental representations is putatively better supported by the classic line of reasoning at the heart of Chomsky’s famous critique of behaviorism (Chomsky 1959). Chomsky’s criticism has inspired rejection of any theory that refuses to posit intervening states in cognitive systems of a sort that play a computational role in mediating and modulating a system’s responses to stimuli. Behaviorism’s ultimate failing, according to official sources, is its inability to make room for intervening states of mind of a computational sort. Hence, even though behaviorism allows for intervening states of mind of a neural variety, it does not conceive of such states as implementing abstract cognitive programs that would allow for the multiple realizability of cognitive

processes. The fact that behaviorism is incompatible with computationalism is precisely why Block (2001) declares it a dead doctrine. Block (2001) highlights the pivotal role computationalism played in unseating behaviorism, reminding us of its standard refutations (Block 1995, 377–384; Braddon-Mitchell and Jackson 1996, 29–40 and 111–121). He sums up the familiar history, telling us, "what really killed behaviorism was the rise of the computer model of cognition. If cognitive states are computational states of certain sorts, behaviorism runs into the problem that quite different computational states of the relevant sort can be input-output equivalent. . . . Behaviorism died because it didn't fit with the computational picture of cognition" (978).

This line of reasoning sets the stage for and promotes adoption of a restricted Whatever Policy—namely, the strategy of using the label "mental representation" to refer to computational states that intervene between inputs and outputs, while remaining entirely open about the other properties of such states, including their MR-contents. In adopting a restricted Whatever Policy, it seems possible to know in advance that mental representations-cum-computations must do the explanatory work in the cognitive sciences whatever else we discover in the end about such mental states.4

The emptiness of the Whatever Policy, restricted or otherwise, is revealed by the fact that it provides no criterion for demarcating mental representations from other mediating states that do important work in a cognitive system.5 Consider, for example, glial cells, one very common type of cell in the brain that is not a neuron. Anderson (2014) remarks that instead of being "just the nutritional backup to neurons, they make important contributions to brain function" (78). For example, they are thought "to regulate the formation of synapses, modulate learning mechanisms such as long-term potentiation, and regulate synaptic transmission because they both manage the clearance of neurotransmitters from the synaptic cleft and also release their own neuromodulatory substances" (78). In other words, glial cells mediate and contribute to cognitive activity, but they don't do so in a way that is recognizably representational.

4 Machery (2009) provides a clear instance of the Whatever Policy in action—​restricting it to perception—​when he comments that “perceptual representations are whatever psychologists of perception say perception involves” (110). 5 As Egan (2014a) observes, “an unfortunate consequence of Chomsky’s view is that too many things will count as representations: intelligent systems have all sorts of mediating states. Surely, they are not all representations” (119).

Hence, the strategy of employing the Whatever Policy for securing a full deflationism about mental representationalism results in a hollow and vacuous explanatory representationalism—one that risks being a representationalism in name only (see Ramsey 2007, 2017).

3.3.  Deflating Contentless Computationalism At this juncture, a further move may become attractive—​namely, to hold that computation is essential for cognition, even if content isn’t. To go this way would be to abandon any attempt to keep a deflated notion of mental representation in the cognitive science story. This would be to leave representational theory of mind, or RTM, behind while embracing a contentless computational theory of mind, or CTM. A number of theorists have tried to develop mechanistic theories of computation that could provide a basis for such a contentless CTM (Miłkowski 2013; Piccinini 2015). They aim to provide a robust, non-​semantic notion of computation that avoids the problems of triviality and pan-​computationalism. In this respect, it is recognized that “a good account of computing mechanisms and systems should entail that all paradigmatic examples of non-​computing mechanisms and systems, such as planetary systems, hurricanes and digestive systems, don’t perform computations” (Piccinini 2015, 12). Certainly, adopting a contentless, computational account of cognition would neatly avoid the hard problems associated with MR-​content. Embracing such a view would be to endorse a partially post-​cognitivist vision of cognitive science; it would be to abandon one, but not both, of the two main pillars of traditional cognitivism (see Hutto and Myin 2017, ch. 1). It would be to let go of RTM while retaining CTM. The fate of RTM and CTM need not be decided together. As Villalobos and Dewhurst (2017a, 2017b) have tried to establish, with important adjustments, it may even turn out that a CTM is compatible with a nonrepresentationalist enactivism, as foreshadowed by Clark’s (2004) discussion of dynamic computationalism.6 Yet, going the other way, those attracted to nonrepresentationalist accounts of cognition—​enactivist or otherwise—​ need not retain a CTM at all. 6 Clark (2004) sketches a hybrid dynamist-​computational view according to which “the details of the flow of information are every bit as important as the larger scale dynamics, and in which some local dynamic features lead a double life as elements in an information-​processing economy” (158).

Deflating Deflationism  89 It would take too much space to provide all of the reasons for thinking that a contentless CTM might not be the best way forward for cognitive science here. While we cannot detail our worries about CTM in full-​dress fashion, it is possible to briefly outline some central concerns about it. First and foremost, it must be noted that even if a workable mechanistic theory of computation, or MTC, proves possible, having an MTC does not suffice for having a CTM. In other words, there is ample room for doubting that a CTM is the best way to think about the basis of cognition, even if an MTC proves to be the right way to think about computation. In this light, it is worth noting some important reasons that have been given for thinking that brains are not digital computers. Shagrir (2010), for example, identifies a range of difficulties that arise when trying to apply a purely structural or mechanical notion of computation in the domain of neuroscience. Summing up, he notes that the dynamics of neural networks is not “in any obvious sense, a rule-​governed or step-​satisfaction process . . . [and that] the dynamics is no more digital than the dynamics of stomachs, tornadoes, or any other noncomputing system” (865). Even if this is so, it does not follow that brains do not compute, it only follows that they do not compute in the same way digital computers compute. However, looking empirically at how brains do what they do, there is no evidence that they are performing computations of any familiar kind, digital or analog. Summarizing an analysis of a wide range of findings, Piccinini and Bahar (2013) report that, “in a nutshell, current evidence indicates that typical neural signals, such as spike trains . . . are neither continuous signals nor strings of digits” (477). Does it follow from the existing evidence that brains are not performing any kind of computation? Again, no. They may be performing computations of a special variety. This is precisely what Piccinini and Bahar (2013) conclude, maintaining that neural computation happens in its own special way—​namely that “neural computation is sui generis” (see also Piccinini 2015, 223). They argue for this conclusion by advancing the view that, despite not being analog or digital, brains process information by manipulating medium-​independent vehicles in accordance with rules. However, there is a reason to be skeptical of this kind of argument for neural computationalism—​namely, we lack a worked-​out account of how it is possible for concrete neural processes to causally manipulate abstract, medium-​independent vehicles. Nor should we expect such an account any time soon. For if medium-​independent vehicles are defined by their abstract

properties, then it is unclear how such vehicles could be concretely manipulated. Piccinini and Bahar (2013) offer no account of how this might be achieved; they only supply reasons for thinking it must be achieved if we think of brains in information-processing terms. By contrast, no mystery arises if we assume that neural processes are informationally sensitive to concrete, medium-dependent properties (see Hutto and Myin 2017, epilogue). In this light, rather than concluding that brains compute in their own special way, it is open for us to entertain the possibility that brains simply do not compute (for a detailed argument for this conclusion see Hutto et al. 2018, 278; see also Anderson 2014).

There are, prima facie, further reasons to steer clear of CTM if we assume that cognitive processes are importantly autonomous in character, in the enactivist sense (Thompson 2007). Some hold that there is no real conflict between enactivism and computationalism on this score. Villalobos and Dewhurst (2017a, 2017b) have argued that enactivists can embrace a CTM if we adopt an innocent and anodyne notion of inputs—taking them to be mere perturbations or triggering factors—and assume that outputs are simply a system's response. Crucially, on such a rendering of CTM, inputs are not data, nor do they issue instructions. This implies cognitive systems are not program-controlled computational systems.

Perhaps enactivism is compatible with such a weak and deflated version of CTM—one in which computation requires neither the processing of data nor the following of a program. Yet to fully assess such a proposal we would need a detailed explication of what such a maximally deflated CTM looks like and some positive reason for thinking it still qualifies as a CTM at all. Even if such a deflated version of CTM could be articulated, it might be wondered if positing it would carry any explanatory value. We expect, in the end, a deflated CTM might go the way of the deflated notion of representation, and the dodo. If so, the future of cognitive science might lie with abandoning both RTM and CTM. Of course, that would—as Hutto and Myin (2017) argue—constitute yet another revolution in the sciences of the mind, one that issues in a fully post-cognitivist cognitive science.

3.4.  Partial Deflationism: Mathematical Contents

As the preceding sections reveal, a full or complete deflationist approach to mental representations is difficult to motivate. In this light, a halfway house

position of a partial deflationism may seem attractive. A partially deflationist approach does not surrender the idea of mental contents altogether; rather it seeks to retain the idea of mental representation while relinquishing a realistic commitment to MR-content.

Egan (2014a) outlines the central commitments of such a partial deflationism in advancing the view that the kind of contents that really matter to cognitive science are mathematical contents, or M-contents, and not MR-contents. According to Egan, M-contents are captured by "the canonical description of the task executed by the device" (Egan 2014a, 122). Such canonical descriptions are mathematical descriptions of the function or functions putatively computed by cognitive devices or systems. These descriptions are cast in terms of inputs and outputs that respectively represent such things as "numerical values over a matrix" or "rate of change," "vectors" or "differences between vectors," and so on. Egan holds that this canonical mathematical characterization of the tasks performed by cognitive devices, which she calls the function-theoretic characterization, "gives us a deeper understanding of the device" (Egan 2014a, 122). Indeed, it is by focusing solely on M-contents putatively computed by cognitive devices that we find a "notion of representation that . . . computational cognitive science both needs and actually uses" (Egan 2014a, 119).

Importantly, function-theoretic characterizations are domain general and environmentally neutral. To characterize a cognitive task in a function-theoretic way is to characterize it in an abstract, mathematical way that makes no mention of the features of the actual environments in which the mechanism might be deployed. So understood, cognitive mechanisms are pluripotent, multipurpose tools, capable of a "variety of different cognitive uses in different contexts" (Egan 2014a, 122). Of course, in any given context, we may wish to know what the system relates to in that environment. In saying what the system is directed at, we attribute contents that are environment sensitive and "specific to the cognitive task being explained" (Egan 2014a, 124). Such world-directed or intentional contents are fundamentally different from M-contents.

On this basis, Egan (2014a) distinguishes these two kinds of contents, MR-contents and M-contents. For her, mental representations are internal structures that bear M-contents, the subject matter of which is mathematical objects. Crucially, she holds that such states of mind have their "mathematical contents essentially" (Egan 2014a, 129). By contrast, these structures that underwrite cognition "do not have their cognitive [or MR] contents essentially"

92  What Are Mental Representations? (Egan 2014a, 127). Thus, when it comes to understanding the operations of cognitive mechanisms—​according to Egan (2014a)—​such devices might be assigned different MR-​contents or “no cognitive contents at all” (127). On her analysis, it turns out that MR-​contents are “not part of the essential characterization of the device” (Egan 2014a, 128, emphasis added). By this logic, M-​contents are explanatorily indispensable when giving an account of cognitive mechanisms, whereas the attribution of MR-​contents, even if they prove indispensable for other explanatory purposes are, at best, heuristic and “subject to a host of pragmatic considerations” (Egan 2014a, 128). Advancing her particular brand of deflationary line, Egan (2014a) sees MR-​contents as an “intentional gloss” we superficially apply to cognitive mechanisms; they are pragmatic posits that help us to characterize what a system happens to relate in any given instance, but they do not characterize real working parts of cognitive mechanisms. An attractive feature of Egan’s deflationary take on cognitive contents is that it skirts the traditional problem of having to assign MR-​contents using only the resources of the hard sciences. Indeed, if it were true that we only ever assign MR-​contents in an inessential, glossy way, then it would be easy to explain why attempts to solve the infamous disjunction problem continually fail (Fodor 1990b). The disjunction problem arises for naturalistic theories of content that appeal to causal, informational, or biological functions to fix MR-​contents. All such theories lack the resources to justify assigning unique MR-​contents to mental states, as opposed to assigning an array of coextensive, disjunctive contents to them. The problem is not that it is indeterminate which objects the alleged representational states target, but rather that there is no way to decide in which way, out of countless possibilities, the objects are contentfully represented by those states. The real disjunction problem concerns the question of how to assign intensional, and not intentional, properties. Calling on a timeworn example: “In the case of the frog’s visual motion detector, ‘fly’ or ‘moving black dot’ could both be simultaneously acceptable descriptions of the signal’s semantic content” (Cao 2012, 54). The general verdict in the field is that “no amount of environmental appropriateness of a neural state or its effects is fine-​grained enough to give unique . . . content to the neural state” (Rosenberg 2014, 26). Egan (2014a) concurs:  “no naturalistic relation is likely to pick out a single, determinate content” (124). Yet on her intentional gloss account of MR-​contents the disjunction problem is not a bug, it’s a feature. Cognitive

Deflating Deflationism  93 structures, by her lights, don’t have their cognitive contents essentially—​such structures can only ever be assigned MR-​contents when they “are used in certain ways by the device, ways that facilitate the cognitive task in question” (Egan 2014a, 124). The theory proper contains both a mathematical specification of computations performed by a mechanism, as well as a specification of how performing these computations allows the organism to successfully perform a cognitive task such as seeing or grasping in the typical environment of the organism. To understand the computations performed by the mechanism in situ we need to look at which correspondences in fact hold between the values it computes and facts about what organisms relate to in specific environments. Thus, in giving a full explanation of what a situated cognitive system is doing it will be important to note environment-​sensitive details. In one environment, calculating a certain algorithm will allow the organism to detect edges, while in another environment the same algorithm will allow an organism to detect shadows. Whether we ascribe MR-​contents relating to edges, to shadows, or to some generic concept pertaining to both will depend on our explanatory goals. Thus, on Egan’s (2014a) account, glossy attributions of MR-​contents are sensitive to parochial, pragmatic factors that are not essential to the operations of cognitive mechanisms proper. To understand its true power and limitations, it is instructive to apply Egan’s dual-​content account to clarify the kinds of content that might be attributed to the generative models in the popular predictive processing accounts of cognition. Clark (2016) tells us that “instead of simply describing ‘how the world is,’ these models—​even when considered at the “higher” more abstract levels—​are geared to engaging those aspects of the world that matter to us. They are delivering a grip on the patterns that matter for the interactions that matter” (Clark 2016, 292, first two emphases added). But in what sense do such models describe the world at all? Clark’s reply to his own question is revealing. He asks, “What are the contents of the many states governed by resulting structured, multi-​level, action-​oriented, probabilistic generative models?” (Clark 2016, 292). He answers, “It is . . . precision-​weight estimates . . . that drive action . . . such looping complexities . . . make it even harder (perhaps impossible) adequately to capture the contents or the cognitive roles of many key inner states and processes using the vocabulary of the ordinary daily speech” (Clark 2016, 292, emphasis added).
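Clark's "precision-weight estimates" are, in this respect, of a piece with the mathematical descriptions Egan has in mind. To make the notion of a purely mathematical content concrete, it may help to borrow the familiar example of early vision from Marr (1982), which proponents of the function-theoretic approach often use as a showcase; the equation below is offered only as an illustrative sketch of the kind of canonical description at issue, not as a reconstruction of Egan's or Clark's own formulations. A function-theoretic characterization might say no more than that the device takes an array of numerical intensity values I and returns the rate of change of their smoothed values:

\[
O(x, y) = \left( \nabla^{2} G_{\sigma} * I \right)(x, y)
\]

where G_{\sigma} is a Gaussian filter and \nabla^{2} is the Laplacian operator. Nothing in this specification mentions edges, shadows, or any other worldly feature; whether the zero-crossings of O are glossed as edges, as shadow boundaries, or as something else entirely is fixed only once the device is situated in an environment and we bring our explanatory interests to bear.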

94  What Are Mental Representations? In some places Clark (2016) makes clear that, for him, the generative models in question have MR-​contents. He maintains that content of predictive processing “hypotheses are intrinsically affordance-​ laden; they represent both how the world is and how we might act in that very world (they are thus a species of what Millikan (1996) has called Pushmi-​pullyu representations)” (187). There are technical problems that confront anyone hoping to call on teleosemantic theories to provide a naturalistic account of the putative MR-​ contents of predictive processing models. For the reasons rehearsed previously, and others discussed at length elsewhere (see, e.g., ch. 4 of Hutto and Myin 2013), teleosemantic theories do not provide the necessary goods to account for the MR-​content of the models that predictive processing accounts of cognition posit. It is open to Clark to make a more austere move. He could assume that contents of the generative models posited by predictive processing accounts are best understood as precision-​weight estimates alone. After all, as he notes himself, ordinary MR-​contents of the sort we can capture in everyday language are not even good candidates for translating the content of predictive processing models. To avoid this awkward result, Clark could, following Egan (2014a), conceive of the content of such models as being purely mathematical, without assuming they have any MR-​contents essentially. On such a reading Clark would get everything that he needs for understanding the essence of cognition by simply referring to a hierarchy of statistical models. However, the price of going this way is considerable. As Gärtner and Clowes (2017) stress, “When we take away the semantic posits, we seem to be only left with the maths of brains and their neural networks” (69, emphasis added). But there is a sting in the tail. Egan (2014a) openly admits, “it is not likely that mathematical content could be naturalized” (123). Certainly, as she acknowledges, “the naturalistic proposals currently on offer—​information-​ theoretic accounts that depend on a causal relation, teleological accounts that advert to evolutionary function—​are non-​starters” (Egan 2014a, 123). The lack of naturalistic accounts of M-​contents is not the only hard problem that besets partial deflationism. A  partial deflationist could hold that cognitive mechanisms make calculations involving M-​contents—​e.g., Clark’s (2016) precision-​ weight estimates or Isaac’s (2019) S-​ vectors—​ that depict a change in the probability distribution over possible ways the world might be. Contentful calculations of this mathematical kind might

be thought to causally drive action. On such a reading, the manipulation of M-contents makes a causal difference to the guidance and control systems, shaping specific behaviors and how they unfold. On the face of it, the two prominent theorists under discussion, Egan (2014a) and Clark (2016), are committed to a causal reading. We are told, for example, that

The theorist must explain how computing the value of the specified function, in the subject's normal environment, contributes to the exercise of the cognitive capacity that is the explanatory target of the theory. (Egan 2014a, 123, emphases added)

As observed earlier, Clark speaks of the contents of the generative models that the brain putatively employs as "delivering a grip on the patterns that matter," of such contents "driving actions," and the like. He elsewhere uses other telltale causal language:

PP . . . deals extensively in internal models—rich, frugal and all points in-between—whose role is to control action by predicting complex plays of sensory data. (Clark 2016, 291, emphasis added)

Our massed recurrent neuronal ensembles are not just buzzing away constantly trying to predict the sensory stream. They are constantly bringing about the sensory stream by causing bodily movements that selectively harvest new sensory stimulations. (Clark 2016, 7)
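What such causal talk amounts to formally can be gestured at with a deliberately simplified, textbook-style sketch; the particular symbols and the update rule below are illustrative assumptions, not formulations taken from Clark or from predictive processing models themselves. A precision is an estimate of the reliability of a signal, standardly an inverse variance, \pi = 1/\sigma^{2}, and the expectations of a generative model are revised in proportion to precision-weighted prediction error, along the lines of

\[
\mu \leftarrow \mu + \kappa \, \pi \, (y - \mu)
\]

where y is the incoming sensory signal, \mu is the model's prediction, and \kappa is a gain factor. Read this way, what the system traffics in are estimates of means, variances, and their weighted differences, which is just to say that the candidate contents on offer are mathematical through and through.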

To the extent that these theorists are seriously committed to the idea that cognition is essentially a matter of manipulating M-contents, their accounts appear to be seeking to reach "the central explanatory goal of cognitive science—namely, to provide an account of the causal mechanisms which subserve cognition and behavior" (Samuels 2010, 281). This would not be unusual: after all, the standard reason given for invoking contents in the sciences of the mind is in order to causally explain the flexibility of intelligent behavior. Yet, to this day it is unclear how contents qua contents—MR-content, M-content, or otherwise—can influence the behavior of states of a system so as to causally explain what the system does. Solving this problem requires giving a workable answer to a long-standing challenge in the philosophy of mind.

96  What Are Mental Representations? Many attempts have been made down the years to do so. Contents, it has been argued, get their causal powers by virtue of the fact that they are implemented by means of brain activity. That is to say, content strongly supervenes on the computations that implement cognitive processes (see Fodor 1990a, 286). Notoriously, this strong supervenience answer runs headlong into the infamous exclusion problem prominently highlighted by Kim (see, e.g., Kim 1993, 1995, 2005). It seems that contents are forever systematically excluded from playing such roles by a cognitive system’s formal or physical properties.7 Though there have been attempts to get around the exclusion problem, e.g., by proposing identity solutions that assume no gap between the relevant properties, such answers have well-​rehearsed problems of their own (see Robb and Heil 2014 for a discussion). Until we solve these puzzles we lack an account of how contents can causally guide the cogs and gears that drive action. Without answers to these old chestnuts—​these how-​possible questions—​we are within our rights to doubt that contents of any kind can make causal contributions to cognition after all. Of course, we need not assume that mental contents are causal; it need not be assumed that they figure in causal explanations. Indeed, there are plausible reasons for thinking that “computational explanations are genuine, but non-​causal explanations” (Rusanen and Lappi 2016, 3933). In an attempt to explicate Marr’s views on this topic, Rusanen and Lappi (2016), defend the idea that computational explanations “explain why certain information should be represented, and operated on in a particular way (as opposed to explaining how the system’s causal mechanisms sustain the representations and carry out the computational operations)” (3933). A virtue of this approach, given the problems just cited, is that content-​ invoking explanations can be understood to complement, rather than to compete with, mechanistic explanations. Notably, Rusanen and Lappi (2016) talk of computations in terms of representing and operating on information. The trouble with assuming that computational explanations are non-​ causal is that it becomes unclear what is gained by holding that cognitive systems actually make any contentful calculations at all. If M-​contents do not

7 Speaking of computers, Churchland and Churchland (1990) write: “the machine goes from one state to another because it is caused to do so . . . without reference to whatever semantic properties our cognitive states may or may not have” (303). It seems that the only “honest answer is . . . [content] doesn’t do any work, so forget it” (Cummins 1991, 124).

Deflating Deflationism  97 causally explain anything, then we are owed an account of how and what they do explain. Perhaps functional-​mathematical characterizations might simply be, at best, predictively or heuristically useful descriptions of the behavior of systems.8 We might think of such attributions as applying a mathematical gloss to the behavior of cognitive systems. The value of attributing M-​contents, just as with attributing MR-​contents, may be pragmatic and heuristic for certain explanatory purposes. Yet, in that case, pace Egan (2014a), it is not obvious why such characterizations would be essential.9

3.5.  Conclusion

Having reviewed the tenability and prospects of the most prominent deflationist proposals on today's market, we conclude that deflationism, whether full or partial, fails to deliver the explanatory goods. Deflationism either fails to distinguish itself from its nonrepresentationalist rivals, such as a contentless computationalism or enactivism, or it fails to show how it can outperform them explanatorily. Accordingly, we see only deflated prospects for deflationism about mental representations.

Acknowledgments

Hutto thanks the Australian Research Council for funding the Discovery Project "Minds in Skilled Performance" (DP170102987), which supported this research. Myin was supported by the Research Foundation Flanders (FWO), projects G0C7315N, "Getting Real about Words and Numbers," and G049619N, "Facing the Interface."

8 After all, we are told, “The levels of algorithm and implementation describe how the computations are carried out” (Rusanen and Lappi 2016, 3933, emphasis added). 9 Egan (2014a) comes close to drawing this conclusion herself. She states, with an interesting use of scare quotes, that even “ ‘essential’ mathematical content has a pragmatic rationale” (131).


References

Anderson, M. 2014. After Phrenology: Neural Reuse and the Interactive Brain. Cambridge, MA: MIT Press.
Block, N. 1995. The Mind as the Software of the Brain. In D. Osherson, L. Gleitman, S. Kosslyn, E. Smith, and S. Sternberg (eds.), An Invitation to Cognitive Science, 377–425. Cambridge, MA: MIT Press.
Block, N. 2001. Behaviorism Revisited. Behavioral and Brain Sciences 24 (5): 977–978.
Braddon-Mitchell, D., and Jackson, F. 1996. Philosophy of Mind and Cognition. Oxford: Blackwell.
Branquinho, J., ed. 2001. The Foundations of Cognitive Science. Oxford: Oxford University Press.
Burge, T. 1986. Individualism and Psychology. Philosophical Review 95: 3–45.
Burge, T. 2010. The Origins of Objectivity. Oxford: Oxford University Press.
Cao, R. 2012. A Teleosemantic Approach to Information in the Brain. Biology and Philosophy 27 (1): 49–71.
Chemero, A. 2009. Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.
Chomsky, N. 1959. A Review of B. F. Skinner's Verbal Behavior. Language 35 (1): 26–58.
Chomsky, N. 1995. Language and Nature. Mind 104: 1–61.
Churchland, P. M., and Churchland, P. S. 1990. Stalking the Wild Epistemic Engine. In W. Lycan (ed.), Mind and Cognition, 300–311. Oxford: Blackwell.
Clapin, H. 2002. Philosophy of Mental Representation. Oxford: Oxford University Press.
Clark, A. 2004. Mindware. 2nd ed. Oxford: Oxford University Press.
Clark, A. 2016. Surfing Uncertainty: Prediction, Action and the Embodied Mind. Oxford: Oxford University Press.
Cummins, R. 1991. Meaning and Mental Representation. Cambridge, MA: MIT Press.
Di Paolo, E., Buhrmann, T., and Barandiaran, X. 2017. Sensorimotor Life: An Enactive Proposal. Oxford: Oxford University Press.
Egan, F. 2014a. How to Think about Mental Content. Philosophical Studies 170: 115–135.
Egan, F. 2014b. Explaining Representation: A Reply to Matthen. Philosophical Studies 170: 137–142.
Fodor, J. A. 1990a. Banish Discontent. In W. Lycan (ed.), Mind and Cognition, 420–439. Oxford: Blackwell.
Fodor, J. A. 1990b. A Theory of Content and Other Essays. Cambridge, MA: MIT Press.
Fodor, J. A. 2008. LOT 2. Cambridge, MA: MIT Press.
Friedenberg, J., and Silverman, G. 2006. Cognitive Science: An Introduction to the Study of the Mind. Thousand Oaks, CA: Sage.
Fuchs, T. 2017. Ecology of the Brain: The Phenomenology and Biology of the Embodied Mind. Oxford: Oxford University Press.
Gallagher, S. 2017. Enactivist Interventions: Rethinking the Mind. Oxford: Oxford University Press.
Gardner, H. 1987. The Mind's New Science: A History of the Cognitive Revolution. New York: Perseus.
Gärtner, K., and Clowes, R. W. 2017. Enactivism, Radical Enactivism and Predictive Processing: What Is Radical in Cognitive Science? Kairos 18: 54–83.
Gładziejewski, P., and Miłkowski, M. 2017. Structural Representations: Causally Relevant and Different from Detectors. Biology & Philosophy 32 (3): 337–355.
Hutto, D. D., and Myin, E. 2013. Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press.
Hutto, D. D., and Myin, E. 2017. Evolving Enactivism: Basic Minds Meet Content. Cambridge, MA: MIT Press.
Hutto, D. D., Myin, E., Peeters, A., and Zahnoun, F. 2018. The Cognitive Basis of Computation: Putting Computation in Its Place. In M. Colombo and M. Sprevak (eds.), The Routledge Handbook of the Computational Mind, 272–282. London: Routledge.
Isaac, A. 2019. The Semantics Latent in Shannon Information. British Journal for the Philosophy of Science 70: 103–125. https://doi.org/10.1093/bjps/axx029.
Jacobson, A. 2015. Three Concerns about the Origins of Content. Philosophia 43: 625–638.
Kim, J. 1993. Supervenience and Mind: Selected Philosophical Essays. Cambridge: Cambridge University Press.
Kim, J. 2005. Physicalism, or Something Near Enough. Princeton, NJ: Princeton University Press.
Kolak, D., Hirstein, W., Mandik, P., and Waskan, J. 2006. Cognitive Science: An Introduction to Mind and Brain. New York: Routledge.
Machery, E. 2009. Doing without Concepts. Oxford: Oxford University Press.
Marr, D. 1982. Vision. New York: Freeman.
Miłkowski, M. 2013. Explaining the Computational Mind. Cambridge, MA: MIT Press.
Miłkowski, M. 2015. The Hard Problem of Content: Solved (Long Ago). Studies in Logic, Grammar and Rhetoric 41 (1): 73–88.
Millikan, R. G. 1996. Pushmi-Pullyu Representations. In J. Tomberlin (ed.), Philosophical Perspectives, vol. 9, 185–200. Atascadero, CA: Ridgeview Publishing.
Morgan, A., and Piccinini, G. 2018. Towards a Cognitive Neuroscience of Intentionality. Minds and Machines 28 (1): 119–139. https://doi.org/10.1007/s11023-017-9437-2.
O'Brien, G. 2015. How Does Mind Matter? Solving the Content Causation Problem. In T. K. Metzinger and J. M. Windt (eds.), Open MIND, 28(T). Frankfurt am Main: MIND Group. https://doi.org/10.15502/9783958570146.
Piccinini, G. 2015. Physical Computation: A Mechanistic Account. New York: Oxford University Press.
Piccinini, G., and Bahar, S. 2013. Neural Computation and the Computational Theory of Cognition. Cognitive Science 37 (3): 453–488.
Ramsey, W. M. 2007. Representation Reconsidered. Cambridge: Cambridge University Press.
Ramsey, W. M. 2017. Must Cognition Be Representational? Synthese 194 (11): 4197–4214.
Rescorla, M. 2016. Bayesian Sensorimotor Psychology. Mind and Language 31 (1): 3–36.
Robb, D., and Heil, J. 2014. Mental Causation. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Spring ed. https://plato.stanford.edu/archives/spr2014/entries/mental-causation/.
Rosenberg, A. 2014. Disenchanted Naturalism. In B. Bashour and H. D. Muller (eds.), Contemporary Philosophical Naturalism and Its Implications, 17–36. New York: Routledge.
Rowlands, M. 2017. Arguing about Representation. Synthese 194 (11): 4215–4232.
Rusanen, A.-M., and Lappi, O. 2016. On Computational Explanations. Synthese 193: 3931–3949.
Samuels, R. 2010. Classic Computationalism and the Many Problems of Cognitive Relevance. Studies in the History and Philosophy of Science 41: 280–293.
Shagrir, O. 2010. Computation, San Diego Style. Philosophy of Science 77: 862–874.
Shea, N. 2013. Naturalising Representational Content. Philosophy Compass 8: 496–509. https://doi.org/10.1111/phc3.12033.
Skyrms, B. 2010. Signals: Evolution, Learning, and Information. New York: Oxford University Press.
Sprevak, M. 2013. Fictionalism about Neural Representations. The Monist 96: 539–560.
Thagard, P. 2005. Mind: Introduction to Cognitive Science. 2nd ed. Cambridge, MA: MIT Press.
Thompson, E. 2007. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Harvard University Press.
Villalobos, M., and Dewhurst, J. 2017a. Why Post-cognitivism Does Not (Necessarily) Entail Anti-computationalism. Adaptive Behaviour 25 (3): 117–128.
Villalobos, M., and Dewhurst, J. 2017b. Enactive Autonomy in Computational Systems. Synthese 195 (5): 1891–1908. https://doi.org/10.1007/s11229-017-1386-z.
Williams, D., and Colling, L. 2017. From Symbols to Icons: The Return of Resemblance in the Cognitive Neuroscience Revolution. Synthese 195: 1941–1967. https://doi.org/10.1007/s11229-017-1578-6.

4
Representing as Coordinating with Absence
Nico Orlandi

What is a mental representation? There is a vast literature that takes up this question, but in this chapter, I defend the following characterization. Mental representations, I contend, are states or structures, typically internal to a system, that

1. Are explanatory posits of the psychological sciences (Brentano 1874/2009; Fodor 1975; Pylyshyn 1984; Ramsey 2007; van Gelder 1995).
2. Can be implicit and unconscious (Davies 2015; Dennett 1982).
3. Can have different formats—for example, they can be linguistic, maplike, digital, analog, etc. (Ramsey 2007; Dretske 1981; Beck 2015).
4. Have content (Fodor 1990; Dretske 1981; Millikan 1984).
5. Serve as stand-ins for something (Haugeland 1991; Clark 1997).
6. Guide stimulus-free performance (Brentano 1874; Burge 2010; Clark and Toribio 1994; Gladzeijewski 2015; James 1890/1983; Orlandi 2014; Pylyshyn 1984; Ramsey 2007).

These six claims are necessary and jointly sufficient conditions on mental representation gleaned from the practice of cognitive science. Each of these conditions has played a role in a conjunctive characterization of mental representation both historically and presently. Mental representations are not elements of common-sense parlance. They are notions employed in a cluster of disciplines that study the mind. I propose that we look at what mental representations are by looking at how they have been used in these disciplines. In this respect, I take philosophers interested in the notion of mental representation to be akin to those philosophers and historians of science more generally who investigate the nature of scientific posits by looking at scientific practice (Ramsey 2007).

102  What Are Mental Representations? It might be that the conditions I identify here are not sufficient to capture what mental representations are. One notable absence, for example, is any reference to the notion of computation (although this notion will later come up in connection with condition 5). Some theorists of mind may insist that mental representations be defined in part as elements of computations in models of the mind. But for reasons that I cannot rehearse in detail here, I propose to keep the notion of mental representation and that of computation separate. I think that computation can be defined—​and indeed can occur—​without reference to mental representation and, vice versa, I think that representation can be understood without reference to computation (Orlandi 2019). Even if I am wrong on this point, however, I believe it will be instructive to see why I am wrong. Some of the conditions listed are contentious. For example, following some proponents of phenomenal intentionality, we may want to give up the claim that representations are, and can be, unconscious (condition 2). Similarly, we may want to give up the claim that it is a defining characteristic of mental representations that they guide stimulus-​free behavior (condition 6). This, again, would be instructive and the conclusion of an argument (or two). Although these conditions are intended as necessary and jointly sufficient, I  think that the notion of mental representation has fuzzy boundaries. Reasonable people may disagree on these conditions and there might be cases where some of the conditions are met while others are not. In such cases, there might be genuine disagreement as to whether a given structure is a representation.1 These six conditions are all important in capturing what mental representations are, but this chapter focuses in particular on the final three. These three conditions are crucial to defining what is representational about mental representations. What makes a given state or structure a representation, rather than a mere physical or functional feature of a system, are the fact that the state or structure has content, the fact that it acts as a stand-​in, and the fact that it guides stimulus-​free behavior. This chapter clarifies two important points about these three conditions. One is a negative claim, previously emphasized by William Ramsey, Karen Neander, and others (Neander 2017; Ramsey 2007; Schulte 2015):  that a

1 The case of magnetosomes in anaerobic bacteria is a case in point (Burge 2010, 300; Millikan 2009, 406; Schulte 2015). It is unclear whether the bacteria exhibit stimulus-​free behavior, or whether they are more akin to simple (floating) compass needles (Cummins et al. 2006, 203). There is, correspondingly, disagreement concerning the status of magnetosomes as representations.

Representing as Coordinating with Absence  103 theory of content—​intended as a theory of how a state comes to be about something—​is only part of the story in a theory of mental representation. This negative claim is illustrated here by reference to certain naturalistic accounts of content. Naturalistic accounts hold (roughly) that content can be grounded in non-​mental phenomena described by the natural sciences. Some naturalistic accounts of how a state comes to be about something, if considered sufficient for representation, would trivialize the notion of representation, and they would be in tension with condition 1. For this reason, proponents of naturalistic accounts of content generally agree that a story of content is not an exhaustive story of mental representation (Fodor 1987a, 1990a, 130; Dretske 1988; Millikan 1984). This chapter’s second clarificatory point about these three conditions is a positive claim: that an internal physical structure with content is a representation when it serves as a stand-​in that produces stimulus-​free behavior. Spelling out what serving as a stand-​in that produces stimulus-​free behavior means, and whether anything in the brain qualifies, is itself a challenge that is at least as pressing as the difficulty of giving an account of content. This chapter attempts to draw out the contours of such a challenge. In section 4.1, I briefly describe conditions 1–​3. In section 4.2, I summarize both some central positions concerning content and what I take to be some central points of disagreement between them. Readers already acquainted with the literature on content (and thus perhaps uninterested in conditions 1–​3) can skip to later sections. Section 4.3 makes clear why a theory of content is not tantamount to a theory of mental representation, illustrating the point by looking at naturalistic accounts of content. Section 4.4 reflects on the notion of “standing-​in” and offers a proposal on how to spell it out that appeals to computation, and to a specific subset of analog representations. In section 4.5, this proposal is enriched by bringing in the idea of informing about what is distal and absent.

4.1.  Mental Representations as Psychological Posits (and a Few Other Conditions) Conventional representations—​representations such as maps, words, and signs—​are human artifacts used to stand in for, and describe, a given domain. These types of representations differ from mental representations in several respects. A central difference is that conventional representations are not explanatory posits of the psychological sciences (condition 1).

104  What Are Mental Representations? The notion of mental representation has been (relatively) recently reintroduced in the cognitive sciences in response to behavioristic approaches in psychology. Behaviorism attempted to ground psychological laws in fully observable stimulus-​response patterns. The anti-​behaviorist cognitive revolution consisted in part in recognizing the value that introducing unobservable explanatory posits has for a scientific psychology. When stimulus-​response laws were difficult to come by—​as psychological beings tend to respond in different ways to the same stimulus—​appeal to internal psychological features was seen as inevitable. The intuition was that the response behavior of creatures with a mind depends not just on stimulus-​ conditions, but also on how they represent the environment to be—​for example, their knowledge of the stimulus, their ability to perceive it in certain ways, and so on (Chomsky 1967). While explanatorily useful from a third-​person point of view, mental representations are generally not regarded as objects to the experiencing subject. When Andreas believes that there is a rainbow outside his window, he is (if the representationalist and cognitivist story is correct) in a representational state. But Andreas is not aware of the representation. He is aware of a rainbow and of a window. The representation is not an object to him. Mental life, in this conception, is transparent: subjects are not aware of the vehicles of thought, but only of its objects. Embedded in the idea that representations are explanatory posits of the psychological sciences is the thought that representations are distinctive explanatory posits of these sciences. Representations are presumed to be one of the marks of the mental, or one of the elements that sets psychological systems apart from merely biological, chemical, physical, and mechanical systems. The background of this condition is Franz Brentano’s insight that intentionality is exclusive to psychological agents (1874).2 Brentano thought of intentionality primarily as the capacity of the mind to be directed at things in the environment, and he thought of “intentional inexistence”—​the capacity of states of mind to be about things that are not present or not existing—​as one of the characteristic features of creatures that represent.

2 As an anonymous referee points out, the idea that intentionality is one of the marks of the mental was also present before Brentano, for example in early modern philosophy, and, in some form or other, in antiquity.

Representing as Coordinating with Absence  105 More recently, attempts at naturalizing intentionality aim to show that, contra Brentano, systems that lack a mind as sophisticated as a human mind (or that lack a mind at all)—​for example, plants that orient toward the sun—​ can exhibit a certain type of intentionality, and derivatively can instantiate natural forms of representation (Dretske 1997; Millikan 1984). Naturalists, however, also generally agree that there is, in addition to this natural type of meaning and intentionality, a more distinctive mental form of representation. The latter is an explanatory tool typically introduced in the context of explaining both person-​level and subpersonal states and processes. Mental representations are introduced, in particular, when it is difficult to explain the behavior of a system as a mere response to environmental stimuli. Appeal to such stimuli, and to other biological, chemical, physical, and functional features of the system, is insufficient to serve an adequate explanation in such cases (Pylyshyn 1984). The behavior to be explained, in this context, is what the system does as described at a level of description that is, in one respect, fairly abstract. Specific trajectories of movement, for example, are seldom the focus of explanation in psychology. Andreas’s moving to the window, as opposed to the specific way in which he moved to the window, is generally the focus of psychological explanation. In another respect, however, psychological explanation is fairly fine-​ grained. Psychological studies often aim to test and explain behavior that is more specific than Andreas’s generic walking to the window. Psychological studies often test how quickly a subject presses a button, or how close subjects sit from a confederate, or how likely subjects are to positively rank a CV, or how well they perform on a test. The behavior tested in these cases is more specific than a generic walking or grabbing, but it is also more abstract than a specific trajectory of movement. In psychological experiments, it is generally not the specific trajectory of button-​pressing that matters, for example (unless the trajectory is precisely what is being tested), but that the subject pressed the button and how fast she did it. This relative distinctiveness of the postulates and subject matter of psychology is thought to be one of the reasons for psychology’s presumed independence from other sciences (Fodor 1987b; Sterelny 1995). Psychology studies patterns of movement that abstract from specific trajectories or sequences. It does so, in part, by employing a strategy common to other sciences: it decomposes high-​level capacities into sub-​capacities that are functionally individuated, and that are easier to understand (Cummins 1983).

106  What Are Mental Representations? But if the broadly cognitivist story is right, psychology also introduces explanatory constructs that are distinctive of psychological creatures. That is, it introduces mental representations with specific causal powers. If psychologists want to explain someone’s tendency to sit at a certain distance from a confederate, for example, then they might suppose that the person has a (negative) belief (or representation) about the confederate. It is unlikely that an explanation in terms of environmental conditions, or of mechanical and chemical factors, would do as well. Mechanical, chemical, and physiological factors—​particularly as they pertain to muscle movements—​would perhaps better explain the specific trajectory that the subject uses to sit down. But they would not explain the more general fact of why that specific trajectory of movement occurred. It could be that, as a matter of empirical fact, we can provide explanations that are just as adequate, and perhaps even better, without appeal to mental representations. If this were true, as some eliminativists believe (Hutto and Myin 2012, 2017), then we would have a number of options concerning the status of psychology. We could conclude that psychology is not an independent science after all, that it can be eliminated in favor of the biochemical sciences (or of neuroscience). Alternatively, we could hold that other features distinguish psychological creatures from the rest—​features such as complexity, phenomenology, or some other characteristic. Representations, in the conception we are working with, are explanatory posits that are distinctive of psychology, but they need not be the only elements that are distinctive of the psychological. Having a conscious, phenomenological life, or having a point of view, might also be distinctive of psychological agents. So, accepting condition 1 does not mean that one needs to be committed to representations. It might be that we can do psychology, conceived as an independent science, without appeal to the notion of representation. But if mental representations are useful, then they are distinctive of psychological creatures. Condition 2—​ that mental representations can be implicit and unconscious—​adds to this conception the idea that the capacity to represent is independent of consciousness. Subjects can mentally represent something while being unconscious of doing so, and also while being unconscious of what they represent. This is true both if we think of consciousness in terms of access, or if we think of it in phenomenal terms (Block 1995). An example of the first kind of case is given by priming studies. In the paradigmatic priming setup, subjects are quickly primed with visual stimuli and subsequently

Representing as Coordinating with Absence  107 tested on a task. While often unable to report what they saw, or that they saw anything at all (lacking access), subjects’ subsequent behavior is affected by the prime, suggesting that they “took it in” or represented it. An example of phenomenal consciousness with some minimal access consciousness is instead given by recall studies. When an array of letters is flashed for a short period of time, subjects report seeing (being phenomenally aware of) more than what they can recall (Sperling 1960). If one questions these types of results—​as in Phillips 2016—​an additional motivation for thinking that representations can (and often are) unconscious is provided by information-​processing models of mental activity. Such models describe unconscious (and subpersonal) transitions of representational states that enable psychological agents to perform various tasks. Two influential examples are given by language-​acquisition and by perception. In learning a natural language, speakers are said to be exposed to an insufficient number of examples. They learn a language by already knowing an internalized generative grammar. The principles of such grammar, although represented within speakers, are not something of which speakers are aware (Chomsky 1995). Similarly, in visually perceiving the environment, perceivers are thought to undergo an unconscious process of hypothesis formation and confirmation. This process has been most recently described in Bayesian terms (Clark 2015; Rescorla 2013) and is regulated by a number of internalized principles of which perceivers are not conscious. That representations can be unconscious is one way in which they are also said to be implicit. But there are other ways of spelling out what “implicit” means. Some conceive of implicit representations as representations that, while not accessed by the subject, figure as common factors in explanations of transitions between representations (Davies 2015, 8). In other conceptions, a representation is implicit when it is not encoded in linguistic or propositional format (Palmer 1999, 83). According to this way of understanding the notion of implicit representation, representing explicitly amounts to having a formula in a sentence-​like structure. Representing implicitly, by contrast, amounts to having the principle encoded in a different way. This way of drawing the implicit-​explicit distinction highlights the fact that representations come in different formats (condition 3). The word “fire,” a picture of a fire, and smoke may all be taken to represent fire, but they do so in different ways, by having different formal properties. Mental representations can similarly have different formats.

108  What Are Mental Representations? Cognitive science recognizes at least three types of mental representations that correspond to Charles Peirce’s useful taxonomy of signs (Peirce 1931–​ 58; Ramsey 2007). Peirce distinguishes different types of signs that have different formats, and that are related to their objects in different ways. Symbols, according to Peirce, are types of representations that are connected to their objects by convention or stipulation, as in the case of the word “fire.” Icons, according to Peirce, are connected to their objects by virtue of some sort of structural similarity or isomorphism, as in the case of pictures, maps, and diagrams. Indicators or indices designate by virtue of causal or law-​like relations, as in the case of simple discriminating devices like a temperature light that signals high motor temperature. Cognitive science recognizes all three types of representational formats. Linguistic comprehension and production are thought to require symbolic representations that have syntactic properties similar to words of a natural language (Fodor 1975). Intelligent locomotion and navigation are thought to presuppose representations that display a sort of functional isomorphism (Gallistel 1990; Tsoar et al. 2011). Neuroscience research is often committed to what Bill Ramsey calls a “detector notion” of representation. Individual neurons that are reliably activated upon a certain threshold by some environmental element indicate such elements and inform some upstream process of the presence of the elements (Elliffe 1999; O’Reilly and Munakata 2000, 23–​25). As should already be clear, what type of formal features mental representations have is inferred (not always unproblematically) from the behavior they explain (Fodor and Pylyshyn 1988). Behavior is also one of the guides to the content of representations. The next section explores the question of content.

4.2. Content A theory of mental content is primarily a theory of how a mental state comes to be about particular aspects of the world. In virtue of what is Andreas’s belief in rainbows about rainbows? What makes a thought of fire be about fire? One requirement for such a theory is that it explains error or inaccuracy. If a mental state has content, then it is about something whether or not the thing is present (this is one way of understanding Brentano’s idea of “intentional inexistence”). This feature of representation engenders error in cases

Representing as Coordinating with Absence  109 in which the subject represents something as being the case when it is not the case. Andreas’s belief that there is a rainbow outside the window is about the presence of a rainbow, and of a window. The belief continues to be about these elements even if there is no rainbow, and even if Andreas is in a windowless environment. Moreover, the belief “tells” Andreas something—​that there is a rainbow outside the window. Whether there is such a rainbow determines whether the belief is true or false. Contents specify some accuracy conditions—​that is, some conditions that would make the contentful mental states true or false. Mental content is typically understood as concerning conditions “outside” the individual. A state of Andreas is about something out there in the world—​a rainbow.3 Additionally, when talking of mental content, what is generally meant is original, intrinsic, or underived content (Haugeland 1985). For present purposes, we can understand original content in negative terms. Original content is content that is not derived.4 This characterization requires understanding what derived content is. A representation has derived content when what it means is determined solely by either some societal convention or by the intentions, beliefs, or other contentful states of an individual. When we use white parallel stripes on the street to mean pedestrian crossings, such stripes have meaning, but only derivatively. Their specific meaning is conferred upon them by a collective convention or agreement. There is nothing in the nature of white stripes that links them to pedestrian crossings other than the relevant convention. Similarly, words of a language may have meaning that is entirely derived from a linguistic convention. There is nothing in the nature of the word “rainbow” that makes the word be about and mean rainbows. This word has derived meaning. Along similar lines, when someone takes a red sticker on a key to mean “house key,” the red sticker means “house key” only derivatively, that is, only because someone confers meaning upon the sticker. There is, again, nothing in the nature of a red sticker that links it to house keys.

3 Some deny this way of approaching content. Noam Chomsky for example, holds (roughly) that a “rainbow representation” is not a representation of a rainbow, but rather a representation that plays a certain role in computational processing (Chomsky 1995; Egan 2014). This type of position links the notion of representation to the notion of computation, and it departs somewhat both from common sense and from how the notion of content has been used historically. For these reasons, I leave this type of position aside in what follows. Ultimately, resolving the issue of content is not the central issue of this chapter, so not much hangs on this choice. 4 For a more insightful discussion of this distinction, see, for example, Rescorla 2017.

110  What Are Mental Representations? Our thoughts and beliefs, by contrast, seem to have content that is not of this derived kind. There seems to be a non-​arbitrary relation between our thoughts and what they are about. One could think about pedestrian crossings, rainbows, and house keys before we had signs for them. Thoughts seem to have content prior to, and independently of, collective conventions, and prior to anyone taking them to be about what they are about. Some philosophers are skeptical of the very idea of original content (Dennett 1987), but those who are not skeptical typically aspire to provide an account of it. Following others, we can think of theories of non-​derived content as varying from more to less demanding (Beck 2015; Burge 2010). Demanding theories usually impose high requirements on content and on genuine reference, requirements such as intersubjectivity and the ability to speak a language (Davidson 1975), the ability to engage in conceptual thought (Evans 1982), the ability to be a participant in the “space of reasons” (McDowell 1994), and the ability to employ certain epistemic attitudes (Dickie 2015). In a demanding characterization, while an organism like Andreas has many mental states that are about rainbows, windows, and a number of other things, the mental states of organisms that speak no language, or that have no capacity for conceptual thought, lack genuine contents. At the other extreme sit deflationary views of content that hope to specify content in terms of other naturalistic and non-​mental notions, such as the notion of information (Fodor 1987b), or the notion of biological function (Dretske 1988). Since mental representations are thought to be one of the marks of the mental, the naturalist project is part of a larger project of naturalizing the mind—​that is, of explaining what the mind is in a way that is continuous with the natural sciences (more on naturalism subsequently). A third alternative between these two extremes is views of content that appeal to consciousness to explain how a state comes to be about something. “Phenomenal intentionality” refers to the type of intentionality that arises from phenomenal consciousness alone. In these accounts, content may arise from dispositions by a conscious agent to “take” something to be about something else (Mendelovici 2018), or it may supervene on phenomenal character—​that is, on how things appear to the conscious individual (Kriegel 2011; Pitt 2004; Mendelovic 2018; Siewert 1998; Strawson 1994). There are stronger and weaker versions of phenomenal intentionality (Bourget and Mendelovici 2019). Some versions give up the second condition from the beginning of this chapter, claiming that genuine mental representations are never unconscious (Mendelovici 2018). Other, more moderate versions

allow for unconscious representations by allowing for non-phenomenal intentionality that is derived from phenomenal intentionality, which is taken to be the only genuine kind of original intentionality (Kriegel 2011).
It is an open question whether demanding theories of content are capable of explaining original content. For example, views where original content requires the use of a shared language, and where language itself has wholly derived meaning arising from linguistic convention, may have trouble accounting for this type of content. But even when we consider theories that do not face this problem, it has been difficult in the extreme to formulate satisfactory accounts of original content. We can illustrate the problem by looking at one type of naturalistic and deflationary view of content. (Those very familiar with this debate can skip to the end of this section.)
Naturalistic accounts typically focus on indicators and begin by giving some informational story of how a state can come to carry information about something (Ramsey 2007, 133). In a causal framework, for example, an internal state of a system carries information about an entity x when (and only when) x causes the state to occur in a lawful manner (Fodor 1990b, 514). A temperature light in a car carries information about high temperature by being caused to go on by high temperature in the motor compartment. Similarly, if some neurons in early visual areas of the brain are lawfully caused to fire above a certain threshold by bars of light, then they carry information about bars of light.
It is widely agreed, however, that information carriage is not sufficient to specify original content. This is primarily because a story of content requires a story of error. A neural state may causally co-vary with things other than bars of light. In that case, the state would carry information about something other than bars of light: it wouldn’t be in error when being active in the presence of something other than bars of light. In fact, this kind of situation seems to be the norm rather than the exception in the brain. Neurons in perceptual areas of the brain have variable receptive fields (Barlow 1995; O’Reilly and Munakata 2000, 24). They reliably respond to a very large number of inputs coming both from the environment and from adjacent neurons.
For this reason, naturalistic accounts typically add a further condition to information carriage. One prominent proposal appeals to teleological function (Dretske 1997). An informational state is capable of being in error when the state has the selected function of carrying the information it carries (Dretske 1997). Temperature lights indicate high temperature and

not gasket expansion (which also happens at high temperature) because they are designed to indicate high temperature. Neurons that respond to bars of light are about bars of light even when they are caused to fire by, for example, changes in wavelength because they were selected for responding to bars of light. They are supposed to respond to bars of light. Such neurons evolved, and were incorporated into the perceptual system, because of their sensitivity to bars of light, not because of their sensitivity to something else. When they fire in the presence of changes in wavelength, they are malfunctioning. Error is a product of such malfunctioning.
There is considerable debate concerning whether this type of view can deliver a satisfactory theory of content (for some of the issues, see Mendelovici 2013). A persistent worry concerns content-indeterminacy. Since there are many different types of indeterminacy, we can sketch the problem by looking at an original case discussed by Jerry Fodor (1990). Groups of neurons in the frog’s optic nerve that respond to dark moving objects and make the frog’s tongue snap in the direction of the object are generally considered to be “fly detectors” (Lettvin et al. 1959, 1951). If teleosemantics is right, then such neurons are fly detectors because they respond to flies and they have the selective function of doing so. Fodor, however, notices that the notion of function is indeterminate between coextensive features. If we presume that in the frog’s environment flies and other small dark moving things are reliably coextensive, then snapping at small dark moving things is as adaptive as snapping at flies. Similarly—​for a Quinean possibility—​if flies are coextensive with undetached fly parts, then it is as adaptive for the frog to snap at undetached fly parts. Derivatively, we can equally say that the function of the frog’s neurons is (a) to detect flies, (b) to detect small, dark moving things, and (c) to detect undetached fly parts. In this way, appeal to functions results in content indeterminacy.
Proponents of teleosemantic views respond to this challenge in various ways. Some remark that some candidates for content are ruled out in the practice of the empirical sciences where “undetached fly part” is not an option (Burge 2010, 214–215; Rupert 1999). Others look at the learning history of a given mechanism, such as the frog’s tongue-snapping system (Ryder 2004; Millikan 1984). This involves looking at both the causal path of the mechanism, and at what the mechanism is selected for—​that is, what the mechanism’s consumer is. The frog’s detecting and snapping system has contributed to survival because it results in the ingestion of nutrition, not in the ingestion of black dots. From the frog’s point of view (from the point of view

Representing as Coordinating with Absence  113 of the consumer), what is relevant is that the black dots are flies (or food), not that they are simply black dots. These types of considerations, according to teleosemantics, help specify contents in the face of indeterminacy. There is ongoing debate as to whether naturalistic accounts succeed in this respect. Indeterminacy is one reason to appeal to more demanding notions. But richer views of content are themselves problematic. In addition to the already noted difficulty with explaining non-​derived content, demanding accounts have a complicated relationship with scientific practice. Cognitive science is replete with examples of both unconscious representations, and of nonhuman animals that lack sophisticated linguistic, epistemic, and conceptual capacities, yet are purportedly capable of representing (Gallistel 1990; Walsh 2003; Dehaene 2011). Perhaps there is a way to accommodate these facts, but, in what follows, I propose to shift the focus from content to other features of representation (Ramsey 2007, 126–​127). In order to give a story of representation, I will argue, we should not stop at a story of content.

4.3. Beyond Content Theories of content are sometimes presented as equivalent to theories of representation (Beck 2015; Burge 2010). This is intuitive insofar as we think that explaining what a structure is about (and how it manages to be about it) is a big part of what it means to explain why the structure is a representation. In the domain of conventional representations, for example, a big part of what makes something a signal or a map is the fact that it signals or maps something else. The question “Why is this light on the dashboard a signal that the temperature is too high?” can be answered appropriately by giving a story of content, that is, by saying something like, “Because the light is caused to turn on when the temperature reaches a certain dangerously high level.” So, the idea is that once we indicate how a structure comes to signal what it does, we have also finished explaining why it is a representation. On the other hand, theorists interested in content, particularly in a naturalistic context, often supplement their views with further conditions for representational status (Fodor 1990a; Millikan 1984; Dretske 1997). The fact that a structure has the function of faithfully responding to specific stimuli is only part of the story of what makes the structure a representation. This is obscured by the fact that, typically, we discuss content only in the context of structures that we already view as representations. We generally start

114  What Are Mental Representations? with some intuitive examples of representations (signals, words, maps, fly detectors, etc.) and then try to figure out in virtue of what the representations have the content that they have. A physical structure that signals high temperature is a signal, in part, because it is about temperature. But this does not seem to exhaust why it is a signal. We might want to ask, first, why it is a signal as opposed to, say, a map or a word for temperature. And we might further want to know why it is a signal at all, as opposed to something that plays no signaling role. The asphalt on streets and highways expands to a certain degree when the temperature is high, but asphalt rarely, if ever, serves as a signal for high temperature. The expansion of asphalt causes other things to happen—​for example slab curling, which may lead to accidents—​but in this causal chain asphalt expansion does not do the job of a signal. Asking what a representation is about is different than asking whether something serves as a representation at all—​and so whether it is a representation.5 The latter question amounts to asking: Suppose that asphalt expansion were to serve as a signal for high temperature given that it can (because it is caused to reliably reach a certain volume in response to high temperature); in virtue of what would it serve as a signal? The distinctiveness of this question is clear even when we consider richer accounts of content. One might suppose that the problem with regarding asphalt as a signal at all is that asphalt was not designed to signal temperature. If the function of asphalt were to be caused to reach a certain volume in response to high temperature, then asphalt, just like a car light, would be a signal. But although appeal to design may be a good story of how asphalt comes to be about temperature (indeterminacy notwithstanding), it is not clear that it is a good story of how asphalt could come to be a signal at all. Not everything that has been designed to exploit a reliable correlation plays the role of a signal (and is a signal). The firing pin in a gun, for example, is designed to reliably strike the detonator when the trigger is pulled.6 The position of the pin reliably co-​varies with the pulling of the trigger, and the pin is designed to so co-​vary for the overall task of making the gun fire. But it seems wrong to say that the firing pin is a signal of the pulling of the trigger. The firing pin does not perform 5 Here and in what follows I am using “serving as a representation” and “being a representation” interchangeably. The idea is that if, in addition to a story of content, we provide a story of when a state serves as a representation, then we have also given a story of the status of the state as a representation. 6 This example is from Ramsey 2007, 125.

Representing as Coordinating with Absence  115 its function by signaling that the trigger has been pulled and by “communicating” this fact to the detonator. A representation-​heavy description of gun firing is superfluous since an explanation in terms of mechanisms that are reliable causal mediators works just as well in this case. Similarly, human skin responds in various ways to exposure to the sun. When the temperature rises, the blood vessels just under the surface of the skin dilate to allow greater amounts of blood to flow near the surface. As a consequence, the skin releases body-​heat through radiation. This, in turn, helps in the general process of cooling down, and ultimately, of survival. Even though it is plausible to suppose that blood vessels were designed to do this, it is both implausible and superfluous to think of them as representing a rise in temperature that, in turn, triggers the release of body heat. A state can have the function of reliably being caused to occur by something without playing a representational role. Failing to recognize this fact delivers a notion of representation, and particularly a notion of mental representation, that is too liberal. This point is indeed recognized in the practice of cognitive science. There is nothing particularly distinctive about neurons such that they should always be regarded as representing something. In particular, not all neuronal states that have the function of reliably activating in the presence of something are treated as representations. The neurons in the midbrain that control the reflex responses of the eye, for example, evolved to reliably mediate between the intensity of light present at the retina and eye-​muscle responses (Purves and Lotto 2003, 30). Such neurons are sometimes seen as physiological features of the perceptual system, and they are not universally regarded as representations of light intensity. They adjust the diameter of the pupil in a process that eventually leads to visual perception, but they do not do this by representing light intensity. By contrast, it is often presumed that the neuronal activations involved in early visual processing represent various features of the environment, such as bars of light at different orientations, depths, and wavelengths. Neurons in the visual system of cats and other animals that become active in the presence of slits of light, for example, are often regarded as “edge representations” (Hubel and Wiesel 1962, 1968). Such neurons causally co-​vary with the presence of edges and it is presumably their function to do so, but, in addition, these neurons are also thought to serve as signals in a process that eventually results in visual perception and in belief. In these two examples, the neurons are considered to be representations in the latter case but not in the former, even though the neurons are not

different with respect to what would give them content (i.e., causal covariation plus function). Some researchers appeal to the edge representations’ roles in computations and in cognition to justify their representational status (Dretske 1997, 19 and 170 n. 6), others to the fact that the edge detectors, unlike light intensity detectors, allow subjects to successfully deal with edges of things in the world (Rupert 2011). Perhaps these ways of drawing the distinction do not work, and one might reasonably question whether the differential treatment is justified in these two cases (Orlandi 2014). For present purposes, however, what matters is that the neurons are indeed (at least by some authors) treated differently. This differential treatment suggests that it is embedded in the practice of cognitive science—​even in the practice of those researchers who have fairly liberal notions of representation—​that there is more to representation than the question of content.
One theoretical consequence of this point is clear: a basic causal/informational view of content should not be treated as a view of what makes something a representation; otherwise it will encompass all sorts of nonrepresentational phenomena. This conflicts with condition 1 from the beginning of this chapter—​namely, that mental representations are explanatory posits of the psychological sciences, and one of the marks of the mental. Structures that are designed to exploit a reliable correlation are ubiquitous in nature; they are not exclusive to psychological creatures (Burge 2010). To avoid overgeneralization, a theory of content must be supplemented with a further constraint on what it takes for a state to function as a representation—​what is sometimes called a “use” condition.
Thus far, I have argued that we should keep the question of content and the question of representational status separate by focusing on naturalistic accounts of content. This line of reasoning also highlights that an account of representation should not trivialize the notion. I think, however, that the distinction between content and status should be drawn regardless of what account of content we consider. Views of content that link mental content to more sophisticated abilities might avoid trivializing the notion of representation. But such views are still answering the question of how a structure comes to be about a particular aspect of the environment. As Bill Ramsey usefully remarks when he introduces his “job description challenge,” what makes a representation have the specific content that it has is different from (although related to) what it is to be a representation at all (Ramsey 2007, xv; see also Sterelny 1995). This is true even if all things that have content are also representations and vice versa. Content and representation plausibly

Representing as Coordinating with Absence  117 coextend, but the question of content and the question of representational role or status are not the same question.7 In sum, this section has argued that giving a story of content is not equivalent to giving a story of representation. The next two sections try to address Ramsey’s challenge by discussing what other conditions are involved in defining genuine representation.

4.4.  Serving as a Stand-​In When does a physical state or structure play a representational role? The next two sections develop the view that this is the case when a state serves as a stand-​in that guides stimulus-​free behavior. Not all reliable causal mediators in the world are properly described as stand-​ins, and not all stand-​ins are just reliable causal mediators (Clark and Toribio 1994; Haugeland 1981). Both asphalt and blood vessels under the skin reliably expand in response to temperature, but they do not stand in for temperature in the process of slab curling and of thermoregulation. A stand-​in is, as Dretske, Ramsey, and others have emphasized, a state that is used in such a way that the specific content of the state makes an explanatory difference to what the state (and the system in which the state is embedded) does. This means that viewing the state as having content, and viewing the state as “communicating” such content to other parts of the system, provides some explanatory advantage. A system that uses representations is a system whose behavior is more intelligible as mediated by contentful intermediaries than as a simple response to the world. To start understanding what this means, we can think back to some of the standardly recognized problems with classical computationalism. Classical computational accounts of the mind understand cognition as a form of quasi-​linguistic symbol manipulation done in accordance with specifiable rules (Haugeland 1985, 39). A crucial insight of this type of position is that the symbols are manipulated by mechanical systems merely in virtue of the symbols’ physical form or syntax. In fact, the physical form of a symbol can be related to its content in a rather arbitrary way, just as words do not need to have any resemblance to what they stand for and refer mostly by 7 Thanks to Anna-​Sara Malmgren for helping with this point. See also, Peter Schulte (2015) and Karen Neander (2017).

118  What Are Mental Representations? convention. This means that the semantic content of the symbols in classical computationalism is causally inert and irrelevant to the computations—​a fact reflected in the adage, “if you take care of the syntax, the semantics will take care of itself ” (Haugeland 1985, 106). Critics of this type of position rightly point out that real mental representations involve intrinsic intentionality (Harnad 1990). Mental representations—​for example, beliefs—​interact with one another and produce behavior by virtue of what they are about. So the quasi-​linguistic symbols of classical computers fail, in effect, to play a representational role. They fail to act as symbols because their content is causally inert. They are used as mere syntactic structures (Stich 1983). A state plays a representational role, and serves as a stand-​in, when the content that it has is relevant to what the state does. Derivatively, to show that representations are at play in mental activity we need to do more than show that mental activity is well modeled by the regimented manipulation of symbolic structures. Content cannot be irrelevant to such manipulation. An analogous point applies to indicators—​states that are causally related to something in a reliable manner. Indicators do not have the syntactic form of quasi-​linguistic symbols. They are designed to exhibit a certain form—​for example, showing a given pattern of activation—​whenever something is present. Nevertheless, to show that a system uses mental representations rather than mere reliable trackers we need to show more than that some internal states of the system can be assigned content in virtue of their lawful response behavior (as connectionists sometimes do in analyzing their networks). We also need to show that the content is active in the computation (Ramsey, Stich, and Garan 1990). In other words, we need to show that such states act as representations rather than as mere functional and/​or causal intermediaries. As we engage in the endeavor of specifying when a state or structure acts as a stand-​in of this kind, however, we encounter a problem that in some ways parallels the problem of specifying original content. It is unclear that we can give a story of non-​derived standing-​in, that is, a story of how a state can come to be used as a representation, without appeal to an interpreter (Dennett 1981). One obvious way in which Peircian symbols—​such as the quasi-​linguistic elements of classical computations—​serve as representations is when an individual or a group uses them as such. Quasi-​linguistic symbols do not serve as representations if their physical form or syntax is all that matters to their causal powers. Similarly, a light in a car dashboard plays the role of a signal

Representing as Coordinating with Absence  119 for high temperature when people use it as a signal and take the appropriate actions. It is questionable that the light would play a similar representational role if we were to remove the interpreter and make the light part of a purely mechanical transaction. The light would then resemble the firing pin in a gun much more closely than it resembles an indicator. The firing pin in a gun is not a signal because no one uses it as a sign that the trigger has been pulled and that the detonator needs striking. By contrast, asphalt would also be a signal if people exploited its correlation with temperature. Just as in the case of linguistic representations, an obvious way in which an indicator serves as an indicator is when someone uses it as such. By parallel reasoning, maps may be acting as maps only in virtue of interpreters and of conventions concerning the markers in the map (more on this later). If this is right, however, it appears to be bad news for the notion of mental representation in cognitive science. There is no interpreter (or homunculus) in the brain or anywhere else that interprets the symbols of classical computations. Nor is there a homunculus that checks the firing pattern of neurons and informs some upstream process. Neurons in the visual cortex, for instance, might be tuned to environmental elements, such as edges, but they may not play the role of representations. They might be more similar to firing pins than to temperature lights. It would then be unclear why we should think that anything representational is involved in producing visual percepts. In the practice of cognitive science, one strategy to overcome this problem consists in mechanizing the interpreter. This strategy has been used for both indicators and for linguistic symbols. Representations in these two formats, despite their differences, seem to share the already noted problem of not being properly seen as stand-​ins unless someone uses them as such. The strategy of mechanizing the interpreter proceeds as follows. Theorists employing this strategy typically understand cognitive processes in computational terms, and they point to some mechanism that can serve as the consumer of a given state while not being a full-​blown individual. The first step consists in identifying a task that needs explanation. For example: how does the visual system generate visual representations of distal objects and scenes from retinal stimulation? Since it is very difficult to understand how any system does that, the task is decomposed into subroutines (Cummins 1983; Haugeland 1998). The subroutines are thought to process features that are more easily extracted from retinal excitation. One subroutine, for example, may process edges. The subroutines communicate their “findings” to

120  What Are Mental Representations? other subroutines, for example, to subroutines that process surfaces, through a hierarchy that eventually outputs representations of distal objects. In this case, the subroutines serve as mechanistic consumers or interpreters, and the states or structures that they exchange intuitively play a representational role. The states inform the next subroutines of the findings of the previous ones. A similar story is proposed for states that have syntactic/​symbolic structure. Consider a machine that performs multiplication (Ramsey 2007, 68). Such a machine takes symbols that stand for numerals as inputs, and it outputs other symbols that stand for numerals (the results of the multiplication). The machine might perform this task by having subroutines that perform addition serially. Each subroutine exchanges a symbol with the next subroutine, and it performs addition until the product is generated. The subroutines, in this case as well, would be the “interpreters,” and the symbols would play a recognizably representational role. We would get an explanation of both how a system does what it does, and of what it is doing (multiplication) by appeal to states that refer to numbers and that are used as symbols through a mechanistic process. Appeal to content would make sense of the transitions, and of why they happen. Now, this first strategy for understanding how a state can serve as a stand-​ in without a full-​blown interpreter is fairly common, but it is also problematic. On its face, the strategy presupposes that we already regard the outputs of certain processes as representations, independently of whether they are used by subroutines. Additionally, and more problematically, as we mechanize the consumers, we risk (again) trivializing the notion of mental representation. Thermoregulation and ocular reflexes are complex tasks that can be subdivided into subroutines. In this case, we can think of the skin as the “consumer” of vessel dilation, and of the system of pupillary muscles as the interpreter of neuronal firings, making these processes representational. Since we can offer useful computational models of many processes that intuitively have nothing to do with cognition, we risk again having a notion of mental representation that is too liberal. The immune system, for instance, exhibits nonlinear and stochastic interactions. Immunologists propose computational models of this system that are geared at better understanding how diseases reproduce (Chakraborty 2017). Unless we want to grant that the immune system represents, we need to supplement the mechanistic strategy under consideration. A common thought is that the products of vision (or of the multiplication module), unlike the products of the immune system, are clearly

Representing as Coordinating with Absence  121 representational states that guide behavior and that ground beliefs. So the intermediate computational states must also be representations. But this thought seems to presuppose (as noted earlier) that we already regard the products of the computation as representational in nature for independent reasons—​ reasons that seem unrelated to being involved in computational subroutines. A second strategy in cognitive science for understanding representational status concerns primarily icons (structures that exhibit isomorphism with what they stand for) rather than indicators. I previously suggested that we might view maps as also requiring an interpreter to act as maps, for example. This second strategy, however, attempts to eliminate the need for an interpreter altogether. The idea is to appeal to the structural similarity between a state and what the state represents, and to then think of the similarity (or isomorphism) as being exploited in the execution of some task (Shea 2018). Ramsey gives a useful example to illustrate this idea (Ramsey 2007, 199). He imagines a toy car that moves down an S-​shaped road with walls on each side. If the car moves down the road by bumping into the walls and adjusting course as a result, the car can be seen as using reliable intermediaries (that go off when a wall has been hit) but no genuine representations. Reliable intermediaries, without an interpreter, are not indicators. By contrast, if the toy car uses an S-​shaped groove with a rudder that traces the course of the groove and turns the wheels, thus enabling the car to navigate the road, then the car would intuitively make use of a representational structure even without an interpreter. The S-​shaped groove embodies the same abstract structure as the road, and serves to guide the car along the S shape. By parity of reasoning, if a brain state were to be isomorphic (enough) to some condition in the environment, and the isomorphism were directly implicated in the production of behavior, then the intuition is that the brain state would serve as a representation. The isomorphism, in this case, would be implicated in explaining both why the state is about whatever it is about, and how the state can serve as a stand-​in in the absence of an interpreter. In other words, the isomorphism would be causally active in the production of behavior (or in the performance of a given task) while also accounting for content. In this instance, the contentful state would intuitively act as a model. A map of New York is about New York in virtue of having a certain structure that is isomorphic to New York. But a structure can also inform behavior if the isomorphism is what guides it. The path of a marble traveling in a marble labyrinth that reproduces New York is determined by the represented shape of New York.
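Ramsey’s contrast can be made vivid with a toy simulation. The sketch below is only illustrative and idealizes heavily: the road and the internal groove are both reduced to a sequence of lateral positions, and every number in it is invented for the example. The first routine corrects course only upon contact with a wall (reliable causal mediation); the second consults an internal structure that shares the road’s shape and steers by it, so the course is read off the internal structure rather than off the walls.

# Toy contrast, purely illustrative, between a car that corrects course only
# when it senses a wall and a car that steers by consulting an internal groove
# whose shape mirrors the road. The S-shaped road is idealized as a sequence
# of lateral centerline positions.

ROAD_CENTERLINE = [0, 1, 2, 2, 1, 0, -1, -2, -2, -1, 0]  # idealized S-shape

def drive_by_bumping(road, width=1):
    """No internal model: hold course and correct only on contact with a wall."""
    position, trace = road[0], []
    for center in road:
        if position > center + width:      # bumped the right wall
            position -= 1
        elif position < center - width:    # bumped the left wall
            position += 1
        trace.append(position)
    return trace

INTERNAL_GROOVE = list(ROAD_CENTERLINE)    # a structure homomorphic to the road

def drive_by_groove(groove):
    """Internal structure: the stored groove shares the road's shape and is
    consulted directly to set the wheels; no wall contact is needed."""
    return list(groove)                    # the path simply traces the groove

print(drive_by_bumping(ROAD_CENTERLINE))   # lagging, collision-driven path
print(drive_by_groove(INTERNAL_GROOVE))    # path read off the internal structure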

122  What Are Mental Representations? The view that mental representations are isomorphic structures to what they represent has a long history. Steven Palmer, for example, defines mental representation as a mapping “from objects in the represented world to objects in the representing world such that at least some relations in the represented world are structurally preserved in the representing world” (Palmer 1978, 266–​267). Problems for this type of view, however, are also well known. Nelson Goodman raises concerns for any resemblance theory of representation (Goodman 1968). Resemblance and isomorphism are symmetric and reflexive relations, while representation is neither of these two things. A map of an S-​shaped road represents the S-​shaped road, but the road does not represent the map. Moreover, isomorphisms are as ubiquitous as causal relations, so both indeterminacy and trivialization loom. A map of a road is isomorphic to the road, but also to many other things in the environment. Some of these worries can be met. To address the issue of symmetry we can think of representations as being homomorphic, rather than isomorphic to what they represent (Bartels 2006; Morgan 2014). An isomorphism is a one-​ to-​one function from one structure to another, which preserves the relations between the elements of each structure. A homomorphism, by contrast, is a structure-​preserving mapping that need not be one to one. A homomorphic map need not map each element and relation in what it maps. In this way, homomorphisms are more permissive kinds of mappings, but they also fail to display symmetry with what they map. Indeterminacy is also an issue here, but an issue that concerns primarily content-​individuation. Since homomorphism is just a mapping function, there will be many such functions for any map we can consider. Homomorphism alone is unable to determine content, or what a given homomorphic structure represents. Here, again, I would like to keep the center of attention on how to understand the role of a state as a representation rather than focusing on content. Homomorphism is put forward as a proposal for how to explain how a state can serve as a model without an interpreter. Proponents of this idea can use various moves to avoid content indeterminacy. As in the case of causal accounts of content, one option is to appeal to the history of how the homomorphic structure was established. If the selected function of homomorphism is to map New York, then that is the content of the representation. Another option is to appeal to use. If the homomorphism is used primarily to guide people around New York, regardless of how the homomorphism was established historically, then that type of use determines content.
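The contrast between isomorphic and merely homomorphic mappings drawn above can be illustrated with a small, made-up example. In the sketch below, every structure and label is invented for the purpose: a relation on a represented domain is preserved by two different mappings, one of which is one-to-one (and so isomorphic in the relevant respects), while the other collapses two elements onto one and so is merely homomorphic, yet still structure-preserving.

# Illustrative check, with made-up structures, of the difference between an
# isomorphic and a merely homomorphic (structure-preserving but not one-to-one)
# mapping. Relations are represented as sets of ordered pairs.

def preserves_relation(mapping, relation_src, relation_tgt):
    """True if every related pair in the source maps to a related pair in the
    target (the mark of a structure-preserving mapping)."""
    return all((mapping[a], mapping[b]) in relation_tgt for (a, b) in relation_src)

# Represented domain: three locations with a "directly connected to" relation.
connected = {("corner", "middle"), ("middle", "far_corner")}

# Representing structure 1: one point per location (one-to-one: isomorphic).
iso_map = {"corner": "p1", "middle": "p2", "far_corner": "p3"}
iso_links = {("p1", "p2"), ("p2", "p3")}

# Representing structure 2: two locations collapse onto one point
# (structure-preserving but not one-to-one: merely homomorphic).
homo_map = {"corner": "q1", "middle": "q2", "far_corner": "q2"}
homo_links = {("q1", "q2"), ("q2", "q2")}

print(preserves_relation(iso_map, connected, iso_links))    # True
print(preserves_relation(homo_map, connected, homo_links))  # True, yet not one-to-one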

Representing as Coordinating with Absence  123 Aside from content, however, the present thought is to use the homomorphism to explain the fact that a given internal state is playing a representational role—​it is acting like a map—​without anyone reading the map. To do this, the homomorphism will have to be complex enough to avoid trivialization. This is the main challenge for this type of position. Simple kinds of homomorphisms are ubiquitous. Blood vessels dilate to rises in temperature in a way that is homomorphic to them. So, simple homomorphism is not sufficient to regard a structure as acting as a model (Morgan 2014). Similarly, maps come in different varieties. A map, in the loose sense, is simply a representation that stands for geometric aspects of the environment, for example the layout of entities in space (Rescorla 2018). There is also, however, a richer notion of a map. On the richer notion, a map is a representation that has itself geometric structure, mirroring the structure it represents. Points on the map, for instance, represent cities, and distances between points on the map are proportional to distances between cities in the physical environment. In more general terms, maps in the strict sense are such that certain constituents of the representation correspond to certain constituents of the represented domain, and certain relations among the constituents of the represented domain are mirrored by relations among the constituents of the representation (Block manuscript; Cummins 1989; Shea 2014; Rescorla 2017). It is not clear that blood vessels display this more complex type of homomorphism, nor that they act as they do in virtue of it. Not all homomorphisms in the environment are complex enough to count as maps in the strict sense, and not all homomorphisms are exploited in the production of behavior. Circadian clocks in plants, for example, are homomorphic with the day-​night cycle. They are used to control behavior with respect to the sun—​for example, they are enlisted to reorient the plant’s leaves overnight to face the “anticipated” location of the sun in the morning.8 But their complexity as maps, rather than as simple, mechanical indicators, is under dispute. One of the challenges for the proposal under consideration is precisely to specify the level of complexity needed to regard a homomorphic structure as a map. A further challenge is to find out whether brains do in fact use structures that resemble maps in the strict sense. What would a brain map that guides behavior look like?

8 Thanks to an anonymous referee for this example.

124  What Are Mental Representations? Nick Shea has recently discussed an idealized example inspired by research on rat navigation (Shea 2014; O’Keefe and Burgess 2005).9 In the example, particular cells called place cells in the rat’s hippocampus display a characteristic pattern of firing when the rat is in a particular location within a spatial domain (Cell 1 fires when the rat is in the corner, Cell 2 when it is in the middle, Cell 3 when it is three-​quarters of the way between the two corners, etc.).10 The firing of, say, Cell 1 can become associated with reward (if the corner is where a reward is often located). Since the rat follows continuous routes, place cells that correlate with nearby locations will tend to fire one after the other, and become associated, thus creating a co-​activation structure (Dragoi and Tonegawa 2013). The speed of sequential activation of place cells mirrors the distance from a given reward (the quicker the co-​ activation, the shorter the distance). So, in this case, a relation between physical states—​how quickly Cell 2 fires after Cell 1—​would mirror a relation of the domain (distance between two locations). Now, suppose that the place cells can consistently activate offline—​in the absence of a direct causal connection with the spatial domain that the place cells stand for, and before the rat takes off—​and that the rat develops a disposition to pick the route that corresponds with the quickest co-​activation pattern (which corresponds to the shortest distance) (Dragoi and Tonegawa 2013). It seems that in such a case, the rat exploits a physical structure whose elements mirror a relationship between elements in the environment. The rat’s movements happen because of this mirroring. So, the proposal has it that a homomorphism is being exploited by the rat, the homomorphic structure is about the environment, and the structure stands in for it in the rat’s transactions. Similar examples of brains using structures that resemble maps in the strict sense in fact do pop up in research on pyramidal cells in the cerebral cortex (Ryder 2004), in other accounts of animal navigation (Gallistel and Matzel 2013), and in the explanation of fast reaching. In fast reaching a neural circuitry is thought to emulate the arm-​hand system by being homomorphic 9 Shea uses this example and the idea of “structural representation” to address content-​ indeterminacy. As noted, I use it here to shed light on the notion of standing-​in. 10 An anonymous referee points out that the research in question is more complicated than what I (and Shea) describe. Place cells also exhibit remapping where the same place cell might selectively respond to a different place when the animal is exploring a different environment. The example described here is indeed simplified and idealized, but its main point is to show what it might be like for rats to use maplike structures. Whether place cells in rats also have a context-​sensitive response behavior is less central.

Representing as Coordinating with Absence  125 with it, and to provide mock feedback in place of slow or absent real-​world feedback (Clark and Toribio 1994, 402). More generally, maps in the strict sense are a subset of analog representations that are often invoked in cognitive science to explain how human agents represent magnitudes such as distance, numerosity, heat, and temporality (Beck 2015; Block manuscript; Carey 2009). In the “mirroring conception” of analog representation, an internal state or structure is an analog representation when the representational medium co-​varies with the represented quantity (Lewis 1971; Maley 2011). This conception, among other things, helps explain why magnitude discriminations conform to Weber’s law. According to this law, the ability to discriminate between two magnitudes is a function of their ratio. As the ratio of two magnitudes approaches 1:1, like when two groups of items are almost equally large, they are harder to discriminate between. The proposed explanation based on analog representation is that the neural entities representing the magnitudes become increasingly similar as the ratio between them approaches 1:1. When they do that, they also become similar with respect to their causal powers (Carey 2009). So the idea of using homomorphism to understand representational role is that a complex homomorphic structure that is causally active in virtue of the homomorphism could act as a stand-​in without an interpreter. There is no corresponding story to be given for indicators. Is this differential treatment justified? Alex Morgan has recently argued that it is not (Morgan 2014). There are actually (at least) two points that Morgan is making. One is that being homomorphic is, just like being a reliable causal mediator, not sufficient for something to play a representational role. Something can be homomorphic to something else (e.g., blood vessels’ volume being homomorphic to rises in temperature) and yet not be used to tell us anything about what it is homomorphic to; the homomorphism might instead be used for other purposes (Morgan 2014, 20). This seems true, but also not damaging for the current proposal. The proposal is only geared at explaining how a state could play a representational role without reference to an interpreter. The idea is not that any type of homomorphism is sufficient for representation (see next section for an additional requirement). Why is the proposal promising? Because a complex homomorphism that is used to guide behavior seems to act as a model in a way that a reliable causal intermediary does not do.

126  What Are Mental Representations? The second point that Morgan seems to be defending is that there is really no relevant distinction between indicators and icons since indicators are always also homomorphic to what they stand for (Morgan 2014, 22). This may be true, but part of the point of the distinction at hand is that what makes a state an indicator is a different property than what makes something an icon. The point of icons is that their homomorphism is relevant both to what they are about, and to what they do. The point of indicators is that what makes them about what they are about is their causal profile, not their homomorphism. A light on a car dashboard is about high temperature in the motor because it is caused to go off by it. The light may also be, in some sense, homomorphic to high temperature, but that’s not what determines its content or causal powers. Indicators and icons may be coextensive, in other words, but different properties are relevant to distinguishing them. In this section, I tried to trace the contours of a challenge that I think is as pressing as the challenge of providing a theory of mental content. We need a story of how internal structures of cognizers serve as stand-​ins. I discussed a proposal that I find promising and that appeals to the notion of homomorphism. I then noted that there are some challenges to this idea. Regardless of whether or not the idea works, we would not be done, however. Something is used as a representation when it acts as a stand-​in that guides stimulus-​free behavior. The next section concludes the chapter by expanding on what it means to be stimulus-​free.

4.5.  Coordinating with Absence Ruth Millikan describes representations as explanatory posits with two faces (Millikan 1995). One face looks back at the world that the representation is supposed to mirror. The other face looks forward to the behavior that the representation governs, and that is produced somewhat independently of the world. One way in which representation-​guided behavior is independent of the world is that it is generated by stand-​ins. Another way in which this behavior is independent of the world is that it is not describable as a reaction to present stimulus conditions. The behavior is stimulus-​free (Pylyshyn 1984, 12). There is more to this idea, I think, than the thought that contentful intermediaries produce the behavior. The intermediaries have to be of a certain kind.

In animal navigation, creatures can travel to food or shelter by beaconing. In beaconing, animals travel to a source that emanates sensory cues (a beacon). Because a beacon continuously impinges on the animal’s sensory apparatus, it involves causal mediation but not representation, that is, not stand-ins that inform about the beacon (Burge 2010; Rescorla 2017). Such behaviors are explained mostly by appeal to current environmental conditions. By contrast, a more controversial, map-based interpretation of animal navigation has it that animals employ maplike representations of the spatial layout of their environment. To do so, animals must have mental coordinates of both their own position (egocentric coordinates) and of landmarks in the world (the sun, the nest, etc.). These are typically labeled “allocentric coordinates.”11 Map-based navigation is invoked to explain how animals return to places that do not impinge on their senses, and that are not reachable by simply keeping a record of position by using self-motion cues, as in dead reckoning (Tsoar et al. 2010).
Why do we think that beaconing does not require representation while map-based navigation does? This difference in judgment is not, I think, due solely to the difference in complexity and structure between the states that respond to the beacon, and the states that map an animal’s spatial layout. The difference is also due to the fact that in beaconing we only have states that respond to proximal and continuous stimulation, while in mapping we have states that inform about distal and absent conditions.
Some regions of primary visual cortex, so-called “retinal maps,” have rich structures that are presumably continuously homomorphic to the image projected on the retina, that is, to the pattern of activation that light produces at the retina. Although such maps exhibit homomorphism with retinal states, and they are thought to carry information about what is present at the sensory receptors for further processing, they are not what comes to mind when we think of paradigmatic examples of mental representations. Part of the problem is that retinal maps are about proximal stimulus conditions and they do not directly explain behavior that is independent of that stimulus.
Representational capacities thus emerge in organisms that are able to interact not (only) with ongoing and changing sensory stimulation, but (also) with relatively constant, distal, and absent events and properties. In the

128  What Are Mental Representations? literature on mental representation, there are a number of ways of developing this idea. Tyler Burge has recently criticized both “indicator” accounts of representation, and accounts based on functional isomorphism (Burge 2010). Such accounts, according to Burge, obscure a more robust notion of representation that is rooted in perceptual constancies. Perceptual constancies are those mechanisms that allow subjects to prune away continuously changing features of a given stimulation and respond, instead, to what is constant in the world. Such constancies enable subjects to move past, for example, the changing wavelength present at the retina and to represent uniform color as belonging to some distal object. Fodor argues similarly for a distinction between states and structures that respond to properties that appear in laws (what Fodor calls “nomic properties”), on the one hand, and states and structures that respond to what he calls “nonnomic” properties, on the other (Fodor 1987a, 11). Temperature and light intensity are prime examples of nomic properties since they appear in laws of thermodynamics and of optics. Non-​psychological systems such as paramecia and thermostats, according to Fodor, respond only to nomic properties. By contrast, the behavior of psychological creatures is often guided by states that encode properties that are not quite as “lawful”—​for example, the property of “being a house” or the property of “being a crumpled shirt” (Fodor 1987a, 11). There may be some laws that we have yet to discover about houses and crumpled shirts, but there are no known laws of the physical sciences about them as such. Houses and crumpled shirts are not categories of physics. Fodor’s choice of labels for the properties that psychological agents represent may be infelicitous. What properties end up being nomic depends on what laws there are. If only physics turns out to have laws, many systems that respond to simple optical and chemical properties would, contrary to Fodor’s intention, count as mentally representing. Perhaps a way to salvage Fodor’s point here is to make reference to the difference between proximal and distal properties (Palmer 1999, 313). Light intensity is a paradigmatic example of a proximal property. It is a property that is capable of affecting the sensory receptors and that is in continuous flux. Distal properties, by contrast, are those properties in the environment that remain invariant despite variations in light and viewing conditions. Perceptual constancies are again an example of this kind of distinction.

Representing as Coordinating with Absence  129 Other contemporary authors understand the idea of interacting with what is not present in stimulation either by appeal to “de-​coupleability” or by appeal to offline use (Grush 1997). Offline use amounts to representations being usable in the absence of a direct causal relation with what the representations stand for, as in simulations. Perhaps de-​coupleability in this sense is too strong of a notion since it would seem to exclude perceptual states from the representational realm. Perceptual states are not straightforwardly usable in simulations. Mental representations are de-​coupleable states, on the other hand, when they do not require ongoing causal contact with their referents (Cantwell-​Smith 1996; Chemero 2011, 48–​49; Clark 1997, 144). We see the emergence of intentionality and of representational capacities, according to this view, when an agent is able to temporarily track what is beyond her “effective reach” (Cantwell-​Smith 1996). Elsewhere, I have argued that it is better to understand de-​coupleability in terms of stimulus conditions than in terms of a temporary absence of referents (Orlandi 2014). A token of a behavior-​guiding state does not become a representation when it can stand for something absent in the stimulus, or when it does so, but only for a short period of time. Rather, a behavior-​guiding state is a representation when it is actually and continuously detached from stimulus conditions. In line with my interpretations of Burge and Fodor, I take genuine representation to emerge when standing-​in structures are about the distal and absent environment. In this way, representations guide stimulus-​ free behavior. If this is right, then what is common to these different contemporary stories of mental representation is the thought that they make their possessors capable of successfully interacting with what is absent in the proximal environment. When a neural network provides the layout of a scene, including the location of a reward that is not currently impinging on the senses, the network gives us a glimpse of something absent. When we employ words in thought, we are able to think of things that are far removed from present stimulus conditions. When we retrieve mental images in imagination, we are able to think of things that may not exist at all. The judgments and behaviors that result from these exercises are the product of representational structures that serve as stand-​ins and that have a certain type of content. They are about distal, absent, and, in some cases, fairly abstract properties and events. They do not concern present stimulation.

130  What Are Mental Representations? One of the distinctive features of the psychological realm is freedom from the bustling, buzzing confusion of sensation. In this sense, mental representations allow us to coordinate with absence.

4.6. Conclusion In this chapter, I presented an empirically informed view of mental representation that aspires to keep together a number of conditions on representational status. I focused, in particular, on two claims: the claim that a theory of content is not exhaustive of a theory of mental representation, and the claim that representations are stand-​ins that allow us to successfully interact with what is not present to the senses. Further work is needed, I believe, to spell out this view and to understand its implications for contemporary models of mental activity.

Acknowledgments This chapter develops themes from Orlandi 2014. Carol Hay, Anna-​Sara Malmgren, Angela Mendelovici, and Peter Schulte read an earlier draft and provided extremely helpful comments. Two anonymous referees helped improve the central argument of the chapter. I also thank Geoff Lee, Ram Neta, Susanna Schellenberg, Krzysztof Dołęga, Joulia Smortchkova, Tobias Schlicht, the audience of the “Mental Representation:  The Foundation of Cognitive Science?” conference in Bochum, the audience of the 2018 Eastern American Philosophical Association in Savannah, the audience of the Philosophy Masterclass in Bielefeld, the Rotman Institute for Philosophy in London, and the audience of the “Perceptual Capacities and Psychophysics” workshop at Rutgers University. Finally, I am indebted to Bill Ramsey for deeply inspiring work on the topic of mental representation.

References

Bartels, A. 2006. Defending the Structural Concept of Representation. Theoria, 2nd ser. 21, no. 1 (55): 7–19.
Beck, J. 2015. Analogue Magnitude Representations: A Philosophical Introduction. British Journal for the Philosophy of Science 66 (4): 829–855.

Block, N. J. 1995. On a Confusion about the Function of Consciousness. Behavioral and Brain Sciences 18: 227–247.
Bourget, D., and Mendelovici, A. 2019. Phenomenal Intentionality. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Spring 2019 ed.
Brentano, F. 1874/2009. Psychology from an Empirical Standpoint. London: Routledge.
Burge, T. 2010. Origins of Objectivity. New York: Oxford University Press.
Cantwell-Smith, B. 1996. On the Origin of Objects. Cambridge, MA: MIT Press.
Carey, S. 2009. The Origin of Concepts. New York: Oxford University Press.
Chakraborty, A. K. 2017. A Perspective on the Role of Computational Models in Immunology. Annual Review of Immunology 35: 403–439.
Chemero, A. 2011. Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.
Chomsky, N. 1967. A Review of B. F. Skinner's Verbal Behavior. In L. A. Jakobovits and M. S. Miron (eds.), Readings in the Psychology of Language, 142–143. Englewood Cliffs, NJ: Prentice-Hall.
Chomsky, N. 1995. Language and Nature. Mind 104: 1–61.
Clark, A. 1997. Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press.
Clark, A. 2015. Surfing Uncertainty: Prediction, Action, and the Embodied Mind. New York: Oxford University Press.
Clark, A., and Toribio, J. 1994. Doing without Representing? Synthese 101 (3): 401–431.
Cummins, R. 1983. The Nature of Psychological Explanation. Cambridge, MA: Bradford Books/MIT Press.
Cummins, R. 1989. Meaning and Mental Representation. Cambridge, MA: MIT Press.
Cummins, R., Blackmon, J., Byrd, D., Lee, A., and Roth, M. 2006. Representations and Unexploited Content. In G. MacDonald and D. Papineau (eds.), Teleosemantics, 195–207. Oxford: Oxford University Press.
Davidson, D. 1975. Thought and Talk. In S. Guttenplan (ed.), Mind and Language, 7–23. Oxford: Oxford University Press.
Davies, M. 2015. Knowledge—Explicit, Implicit and Tacit: Philosophical Aspects. International Encyclopedia of the Social & Behavioral Sciences, 74–90.
Dehaene, S. 2011. The Number Sense: How the Mind Creates Mathematics. New York: Oxford University Press.
Dennett, D. 1981. Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge, MA: MIT Press.
Dennett, D. 1982. Styles of Mental Representation. Proceedings of the Aristotelian Society 83: 213–226.
Dennett, D. 1987. The Intentional Stance. Cambridge, MA: MIT Press.
Dickie, I. 2015. Fixing Reference. Oxford: Oxford University Press.
Dragoi, G., and Tonegawa, S. 2013. Distinct Preplay of Multiple Novel Spatial Experiences in the Rat. Proceedings of the National Academy of Sciences 110: 9100–9105.
Dretske, F. 1981. Knowledge and the Flow of Information. Stanford, CA: CSLI Publications.
Dretske, F. 1988. Explaining Behavior. Cambridge, MA: MIT Press.
Dretske, F. 1997. Naturalizing the Mind. Cambridge, MA: MIT Press.
Egan, F. 2014. How to Think about Mental Content. Philosophical Studies 170: 115–135.
Elliffe, M. 1999. Performance Measurement Based on Usable Information. In R. Baddeley, P. Hancock, and P. Foldiak (eds.), Information Theory and the Brain, 180–200. Cambridge: Cambridge University Press.
Evans, G. 1982. The Varieties of Reference. New York: Oxford University Press.

Fodor, J. A. 1975. The Language of Thought. Cambridge, MA: Harvard University Press.
Fodor, J. A. 1987a. Why Paramecia Don't Have Mental Representations. Midwest Studies in Philosophy 10 (1): 3–23.
Fodor, J. A. 1987b. Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Vol. 2. Cambridge, MA: MIT Press.
Fodor, J. A. 1990. A Theory of Content and Other Essays. Cambridge, MA: MIT Press.
Fodor, J. A., and Pylyshyn, Z. 1988. Connectionism and Cognitive Architecture: A Critical Analysis. Cognition 28: 3–71.
Gallistel, C. R. 1990. The Organization of Learning. Cambridge, MA: MIT Press.
Gallistel, C. R., and Matzel, L. D. 2013. The Neuroscience of Learning: Beyond the Hebbian Synapse. Annual Review of Psychology 64: 169–200.
Goodman, N. 1968. Languages of Art. Indianapolis: Hackett.
Grush, R. 1997. The Architecture of Representation. Philosophical Psychology 10 (1): 5–23.
Harnad, S. 1990. The Symbol Grounding Problem. Physica D 42: 335–346.
Haugeland, J. 1981. Analog and Analog. Philosophical Topics 12: 213–222.
Haugeland, J. 1985. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.
Haugeland, J. 1998. Having Thought: Essays in the Metaphysics of Mind. Cambridge, MA: Harvard University Press.
Hubel, D. H., and Wiesel, T. N. 1962. Receptive Fields, Binocular Interaction and Functional Architecture in the Cat's Visual Cortex. The Journal of Physiology 160 (1): 106.
Hubel, D. H., and Wiesel, T. N. 1968. Receptive Fields and Functional Architecture of Monkey Striate Cortex. The Journal of Physiology 195 (1): 215–243.
Hutto, D. D., and Myin, E. 2012. Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press.
James, W. 1890/1983. Principles of Psychology. Cambridge, MA: Harvard University Press.
Kriegel, U. 2011. Cognitive Phenomenology as the Basis of Unconscious Content. In T. Bayne and M. Montague (eds.), Cognitive Phenomenology, 79–102. New York: Oxford University Press.
Lettvin, J. Y., Maturana, H. R., McCulloch, W. S., and Pitts, W. H. 1959. What the Frog's Eye Tells the Frog's Brain. Proceedings of the IRE 47 (11): 1940–1951.
Lewis, D. 1971. Analog and Digital. Noûs 5: 321–327.
Maley, C. J. 2011. Analog and Digital, Continuous and Discrete. Philosophical Studies 155 (1): 117–131.
McDowell, J. 1994. Mind and World. Cambridge, MA: Harvard University Press.
Mendelovici, A. 2013. Reliable Misrepresentation and Tracking Theories of Mental Representation. Philosophical Studies 165 (2): 421–443.
Mendelovici, A. 2018. The Phenomenal Basis of Intentionality. New York: Oxford University Press.
Millikan, R. G. 1984. Language, Thought, and Other Biological Categories: New Foundations for Realism. Cambridge, MA: MIT Press.
Millikan, R. G. 1995. Pushmi-pullyu Representations. Philosophical Perspectives 9: 185–200.
Millikan, R. G. 2009. Biosemantics. In B. McLaughlin, A. Beckermann, and S. Walter (eds.), The Oxford Handbook of Philosophy of Mind, 394–406. New York: Oxford University Press.
Morgan, A. 2014. Representations Gone Mental. Synthese 191 (2): 213–244.
Neander, K. 2017. A Mark of the Mental: In Defense of Informational Teleosemantics. Cambridge, MA: MIT Press.

O'Keefe, J., and Burgess, N. 2005. Dual Phase and Rate Coding in Hippocampal Place Cells: Theoretical Significance and Relationship to Entorhinal Grid Cells. Hippocampus 15 (7): 853–866.
O'Reilly, R. C., and Munakata, Y. 2000. Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: Bradford Books/MIT Press.
Orlandi, N. 2014. The Innocent Eye: Why Vision Is Not a Cognitive Process. New York: Oxford University Press.
Orlandi, N. 2019. Perception without Computation? In M. Sprevak and M. Colombo (eds.), The Routledge Handbook of the Computational Mind, 419–451. New York: Routledge.
Palmer, S. E. 1978. Fundamental Aspects of Cognitive Representation. In E. Rosch and B. Bloom-Lloyd (eds.), Cognition and Categorization, 259–303. Hillsdale, NJ: Lawrence Erlbaum Associates.
Palmer, S. E. 1999. Vision Science: Photons to Phenomenology. Vol. 1. Cambridge, MA: MIT Press.
Peirce, C. S. 1931–58. The Collected Papers of C. S. Peirce. 8 vols. A. Burks, C. Hartshorne, and P. Weiss (eds.). Cambridge, MA: Harvard University Press.
Phillips, I. 2016. Consciousness and Criterion: On Block's Case for Unconscious Perception. Philosophy & Phenomenological Research 93 (2): 419–451.
Pitt, D. 2004. The Phenomenology of Cognition or What is it Like to Think that P? Philosophy and Phenomenological Research 69 (1): 1–36.
Purves, D., and Lotto, R. B. 2003. Why We See What We Do: An Empirical Theory of Vision. Sunderland, MA: Sinauer Associates.
Pylyshyn, Z. 1984. Computation and Cognition. Cambridge, MA: MIT Press.
Ramsey, W. M. 2007. Representation Reconsidered. New York: Cambridge University Press.
Ramsey, W. M., Stich, S., and Garon, J. 1990. Connectionism, Eliminativism, and the Future of Folk Psychology. In D. J. Cole, J. H. Fetzer, and T. L. Rankin (eds.), Philosophy, Mind, and Cognitive Inquiry, 117–144. Dordrecht: Springer.
Rescorla, M. 2013. Bayesian Perceptual Psychology. In M. Matthen (ed.), The Oxford Handbook of the Philosophy of Perception, 694–716. New York: Oxford University Press.
Rescorla, M. 2017. Levels of Computational Explanation. In T. M. Powers (ed.), Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics, 5–28. Cham, Switzerland: Springer.
Rescorla, M. 2018. Maps in the Head? In K. Andrews and J. Beck (eds.), The Routledge Handbook of Philosophy of Animal Minds, 34–45. New York: Routledge.
Rupert, R. 1999. The Best Test Theory of Extension: First Principle(s). Mind & Language 14 (September): 321–355.
Rupert, R. 2011. Embodiment, Consciousness, and the Massively Representational Mind. Philosophical Topics 39 (1): 99–120.
Ryder, D. 2004. SINBAD Neurosemantics: A Theory of Mental Representation. Mind & Language 19 (2): 211–240.
Schulte, P. 2015. Perceptual Representations: A Teleosemantic Answer to the Breadth-of-Application Problem. Biology & Philosophy 30 (1): 119–136.
Shea, N. 2018. Representation in Cognitive Science. Oxford: Oxford University Press.
Siewert, C. P. 1998. The Significance of Consciousness. Princeton, NJ: Princeton University Press.
Sperling, G. 1960. The Information Available in Brief Visual Presentations. Psychological Monographs 74 (11): 1–29.
Sterelny, K. 1995. Basic Minds. Philosophical Perspectives 9: 251–270.
Stich, S. P. 1983. From Folk Psychology to Cognitive Science: The Case against Belief. Cambridge, MA: MIT Press.

Strawson, G. 1994. The Experiential and the Non-Experiential. In T. Szubka and R. Warner (eds.), The Mind-Body Problem, 69–86. Oxford: Blackwell.
Tsoar, A., Nathan, R., Bartan, Y., Vyssotski, A., Dell'Omo, G., and Ulanovsky, N. 2011. Large-Scale Navigational Map in a Mammal. Proceedings of the National Academy of Sciences 108: E718–E724.
Tsoar, A., Shohami, D., and Nathan, R. 2010. A Movement Ecology Approach to Study Seed Dispersal and Plant Invasion: An Overview and Application of Seed Dispersal by Fruit Bats. In Fifty Years of Invasion Ecology: The Legacy of Charles Elton, 101–119.
van Gelder, T. 1995. What Might Cognition Be, If Not Computation? The Journal of Philosophy 92 (7): 345–381.
Walsh, V. 2003. A Theory of Magnitude: Common Cortical Metrics of Time, Space, and Quantity. Trends in Cognitive Sciences 7: 483–488.

5
Reifying Representations
Michael Rescorla

5.1. The Representational Theory of Mind

A venerable tradition holds that the mind is stocked with mental representations: mental items that represent. There is a mental representation that represents whales, a mental representation that represents mammals, and so on. Mental representations are similar in key respects to the communal representations employed by human society, such as pictures, maps, or natural language sentences, but they are housed in the mind rather than the external world. They can be stored in memory, manipulated during mental activity, and combined to form complex representations (e.g., the complex representation WHALES ARE MAMMALS). Following Fodor (1981), let us call this picture the representational theory of mind (RTM).

Philosophical reception of RTM has fluctuated between acclaim and derision. In the medieval era, Ockham postulated a language of thought containing mental representations analogous to natural language words and sentences. Early modern philosophers frequently invoked ideas, often conceived in imagistic terms. In the early and mid-20th century, scientists and philosophers almost universally denounced mental representations as suspect entities that deserve no place within scientific theorizing. Tolman (1948) dissented from the anti-representationalist consensus. He hypothesized that rats navigate using cognitive maps: mental representations that represent the environment's spatial layout. The 1960s cognitive science revolution sparked a huge resurgence of support for mental representations, including cognitive maps (e.g., Evans 1982; Gallistel 1990; O'Keefe and Nadel 1978). Fodor (1975) crystallized the trend with an influential argument that our best psychological theories presuppose a language of thought (often called Mentalese). Nevertheless, mental representations remain objects of suspicion in many philosophical and scientific circles.


This paper presents a novel version of RTM. I begin from the premise that the mind has representational capacities: a capacity to represent whales, a capacity to represent mammals, and so on. We may sort mental states and events into types based upon the representational capacities that they deploy. We may then reify the types, i.e., we may treat them as objects. Mental representations are the reified types. They are abstract entities that mark the exercise of representational capacities. The Mentalese word WHALE marks the exercise of a capacity to represent whales; the Mentalese word MAMMAL marks the exercise of a capacity to represent mammals; and so on. Complex mental representations mark the appropriate joint exercise of representational capacities. The complex Mentalese sentence WHALES ARE MAMMALS marks the appropriate joint exercise of the representational capacity corresponding to WHALE, the representational capacity corresponding to MAMMAL, and other capacities as well. A cognitive map marks the appropriate joint exercise of capacities to represent individual landmarks, a capacity to represent oneself, and a capacity to represent entities as spatially positioned a certain way. I call my approach the capacities-based representational theory of mind (C-RTM).

I will develop C-RTM and apply it to long-standing debates over the existence, nature, structure, individuation, and explanatory role of mental representations. Section 5.2 motivates a foundational commitment underlying my treatment: mental states and events have explanatorily significant representational properties. Section 5.3 presents my core thesis: mental representations are abstract types that we cite so as to classify mental states and events in representational terms. Anyone who accepts that mental states can represent the world and who countenances abstract entities should countenance mental representations as construed by C-RTM—or so section 5.4 argues. Section 5.5 uses C-RTM to elucidate the familiar thought that some mental representations are complex. Sections 5.6–5.7 address how to individuate mental representations, with particular emphasis upon the individuative role that C-RTM assigns to representational properties.

5.2. Mental Representation and Mental Representations

Researchers across philosophy and cognitive science use the phrase "mental representation" in many different ways. On the usage that concerns me, mental representation is tied to veridicality-conditions: conditions for veridical representation of the world. To illustrate:

• Beliefs are the sorts of things that can be true or false. My belief that Emmanuel Macron is French is true if Emmanuel Macron is French, false if he is not.
• Perceptual states are the sorts of things that can be accurate or inaccurate. My perceptual experience as of a red sphere before me is accurate only if a red sphere is before me.
• Desires are the sorts of things that can be fulfilled or thwarted. My desire to eat chocolate is fulfilled if I eat chocolate, thwarted if I do not eat chocolate.

Beliefs have truth-conditions, perceptual states have accuracy-conditions, and desires have fulfillment-conditions. Truth, accuracy, and fulfillment are species of veridicality. So beliefs, perceptual states, and desires have veridicality-conditions.

In daily life, we often explain mental and behavioral outcomes by citing beliefs, desires, and other representational mental states. We identify these mental states through their veridicality-conditions or through representational properties that contribute to veridicality-conditions. When we say "Frank believes that Emmanuel Macron is French," we specify the condition under which Frank's belief is true (that Emmanuel Macron is French). When we say "Frank wants to eat chocolate," we specify the condition under which Frank's desire is fulfilled (that Frank eats chocolate). Everyday discourse assigns a central role to intentional explanations, i.e., explanations that cite veridicality-conditions or representational properties that contribute to veridicality-conditions. We can formalize many aspects of folk psychological intentional explanation more rigorously using the mathematical apparatus of Bayesian decision theory, which replaces the binary notion belief with the graded notion degree of belief and the binary notion desire with the graded notion utility.

Contemporary science builds upon folk psychology by assigning mental representation a central role within psychological explanation. A few examples:

• High-level cognition, including belief-fixation, decision-making, deductive reasoning, planning, problem-solving, linguistic communication, and so on. Folk psychology routinely explains high-level mental phenomena by citing representational mental states, such as beliefs, desires, and intentions. The same explanatory strategy figures prominently within social psychology, developmental psychology, economics,

linguistics, and all other fields that study high-level cognition. Each field presupposes mental states resembling those posited by folk psychology. Each field takes folk psychology (sometimes filtered through Bayesian decision theory) as a template.
• Perception. Cognitive science extends folk psychology by describing subpersonal processes in representational terms. For example, perceptual psychology studies how the perceptual system transits from proximal sensory stimulations to perceptual states that represent environmental conditions. The orthodox view, going back to Helmholtz (1867), holds that the transition involves an "unconscious inference" from proximal stimulations to representational perceptual states. The inference is executed not by the person herself but rather by her mental subsystems. To describe unconscious perceptual inference, contemporary perceptual psychologists offer mathematical models grounded in Bayesian decision theory (Knill and Richards 1996). The models cite representational properties of perceptual states, such as representational relations to shapes, sizes, colors, and other aspects of the distal environment (Rescorla 2015).
• Navigation. Cognitive science extends the intentional paradigm from humans to nonhuman animals. Psychologists attribute representational mental states to mammals, many birds, and some insects. For example, the research program launched by Tolman (1948) has established that spatial representation underwrites mammalian navigation. Mammals represent how the environment is spatially arranged, and on that basis they navigate through space (Rescorla 2018).

Overall, intentional explanation has achieved striking success within cognitive science. It illuminates perception (Burge 2010a), motor control (Rescorla 2016), navigation (Rescorla 2018), deductive reasoning (Rips 1994), mathematical cognition (Gallistel and Gelman 2005), language (Heim and Kratzer 1998), and many other core mental phenomena. Bayesian models of the mind have proved especially successful (Rescorla 2020b, forthcoming a).

This scientific work provides strong abductive support for intentional realism: realism about representational mental states and events. Intentional realists hold that representational properties are genuine,

Reifying Representations  139 scientifically important features of mentality. Fodor defends intentional realism in a series of publications stretching over several decades (1975, 1981, 1987, 1994, 2008). Burge (2010a) also defends intentional realism, focusing on perception. Over the past century, many philosophers and scientists have opposed intentional realism (e.g., Chemero 2009; Churchland 1981; Quine 1960; Skinner 1938; Stich 1983; van Gelder 1992). Anti-​representationalists hold that we should eschew talk about mental representation, just as modern chemistry eschews talk about phlogiston. I  agree with Fodor and Burge that the anti-​representationalist perspective is misguided. Philosophical arguments against intentional explanation are uniformly unconvincing. Attempts at expunging representationality from science have failed, with the proposed anti-​representationalist theories typically far less successful than the representationalist theories they are meant to supplant. Overall, current cognitive science offers numerous impressive intentional explanations whose benefits do not look replicable within a non-​representational approach. I will not engage any further with anti-​representationalism. I focus instead on developing intentional realism. Intentional realism does not entail RTM. One can acknowledge the reality and importance of representational mental states while rejecting all talk about mental representations. Soames (2010) and Stalnaker (1984) occupy this position. When I  perspire, we do not postulate perspirations that do the perspiring. When I procreate, we do not postulate procreations that do the procreating. When I relax, we do not postulate relaxations that do the relaxing. Why, then, should mental representation involve mental representations that do the representing? The usual rejoinder is to defend RTM abductively. Within psychology, Tolman (1948) argues that cognitive maps help us explain mammalian navigation. Within philosophy, Fodor (1975) contends that our best cognitive science explanations presuppose a language of thought. I think that abductive arguments for RTM have considerable force. However, I find them most compelling when they specify as carefully as possible what is being claimed when one postulates mental representations. In particular, a truly satisfying abductive argument should elucidate how exactly RTM goes beyond intentional realism. Sections 5.3–​4 offer my own preferred elucidation.


5.3. Mental Representations as Types

I will pursue the following intuitive idea: mental representations are types that mark the exercise of representational capacities. In quotidian and scientific discourse, we incessantly classify entities into categories. We taxonomize sounds, shapes, utterances, actions, artifacts, molecules, animals, books, symphonies, dances, and so on. Given any reasonable scheme for taxonomizing entities, we may recognize a collection of types corresponding to the taxonomic scheme. Each type correlates with a category employed by the taxonomic scheme. We type-identify an entity by specifying a type instantiated by the entity. Types are abstract objects. When I say that they are "objects," I mean that we can refer to them, quantify over them, and count them (Parsons 2008). When I say that they are "abstract," I mean that they are not located in space and time and that they do not participate in causal interactions. An important feature that distinguishes types from other abstract entities—such as sets, numbers, and functions—is that types are instantiated by tokens. Individual copies of Moby Dick are tokens of a single book-type. Individual dogs are tokens of a single species. Individual rectangles are tokens of a single shape. Suitable inscriptions or utterances are tokens of the English word-type "dog." In each case, we posit an abstract type instantiated by certain tokens. We reify types by treating them as objects.1

Why reify? Why posit abstract types instantiated by tokens? One reason is that reification of types pervades virtually all serious discourse. As Wetzel (2009) documents, types figure pervasively across a vast range of human endeavors. A very partial list: linguistics (words, sentences, phonemes); computer science (LISP expressions, computational systems); biology (genes, species); chemistry (atoms, molecules); physics (forces, fields, protons); music theory (notes, chords, arpeggios); politics (bills, parliamentary procedures); athletics (gymnastics routines, football plays); gaming (chess gambits, poker hands). In most cases, it is doubtful that we could preserve anything like current expressive power while expunging types from our discourse. For example, each of the following statements refers to, quantifies over, or counts types:

1 On reification in general, see Quine’s writings (e.g., 1980, 1981, 1995).

Chaucer introduced fewer words into the English language than Shakespeare.
Over 1,000 species went extinct last year.
Mary learned numerous chess gambits yesterday, including the queen's gambit and the king's gambit.
The gas in the container is composed of five different molecules, including methane and carbon dioxide.
During the music theory lecture, Professor Smith discussed the diminished seventh chord, the Neapolitan sixth chord, and four other new chords.
The filibuster is arguably the most pernicious parliamentary procedure regularly used in the United States Senate.

In each case, it is unclear how one might paraphrase away the reified types. Even if it is possible in principle to paraphrase types away, doing so would as a practical matter be an oppressive burden. Taking current practice as our guide, types seem as useful and firmly entrenched within our discourse as any other entities, including material objects. Let us apply this viewpoint to the special case of mental representation. As I  argued in section 5.2, cognitive science frequently classifies mental events in representational terms. Just as other disciplines reify types, so can cognitive science reify mental event types. Given a taxonomic scheme that classifies mental events through their representational properties, we can posit a corresponding collection of types. Each type correlates with a category employed by our representationally based taxonomic scheme. On my version of RTM, mental representations are the reified representational mental event types. Their tokens are token mental events. Here I construe the term “event” broadly to include states, such as beliefs, and processes, such as inferential transitions within thought. I will develop these ideas by invoking the notion of a representational capacity. Whenever we describe a creature’s mental activity in representational terms, we presuppose that the creature has certain representational capacities. Examples: • High-​level cognition. When a thinker forms the belief that some dogs are furry, he exercises a capacity to represent dogs. By describing his mental state as a belief that some dogs are furry, we recognize (at least implicitly) that he has exercised this capacity. When we describe him as forming an

intention to eat chocolate, we recognize (at least implicitly) that he has exercised a capacity to represent chocolate.
• Perception. Perceptual states represent shapes, sizes, colors, and other distal properties. By describing perceptual states in this way, we presuppose perceptual capacities to represent shapes, sizes, colors, and so on. For example, by describing a perceptual state as an estimate that an object is spherical, we recognize (at least implicitly) that the perceiver has exercised a perceptual capacity to represent sphericality.
• Navigation. Mammals mentally represent the environment's spatial layout, especially the locations of salient landmarks. By saying that a creature mentally represents a specific spatial layout, we recognize (at least implicitly) that the creature has exercised a capacity to represent that layout.

Cognitive science routinely groups a creature's mental events into types by invoking (at least implicitly) representational capacities exercised by the creature or the creature's mental subsystems. We may posit an array of types corresponding to those representational capacities:

• High-level cognition. Suppose a thinker has a capacity to represent dogs within high-level thought. The thinker might exercise this capacity by forming a belief that some dogs are furry, or a desire to buy a new dog, or a conjecture that dogs are closely related to wolves, and so on. We may classify the thinker's mental events based upon whether she exercises the capacity. We may then reify, positing a type instantiated precisely when the thinker exercises the capacity. Citing the type DOG, we type-identify mental events based upon whether they deploy the correlated representational capacity. In similar fashion, we may recognize a mental representation FURRY, a mental representation CHOCOLATE, and so on. These items resemble the Mentalese words posited by Fodor (although section 5.7 will argue that they do not have all the properties attributed by Fodor to Mentalese words).
• Perception. Suppose a perceiver has a capacity to represent sphericality within perception. This capacity can figure in innumerable perceptual states. For example, the perceiver might perceptually represent an object as a large red sphere, or she might perceptually represent a different object as a small green sphere, and so on. We may classify perceptual states based upon whether the perceiver exercises the capacity. We may then

Reifying Representations  143 reify by recognizing a perceptual representation instantiated precisely when the perceiver exercises the capacity. Similarly for other perceptual capacities, such as capacities to represent specific sizes, colors, etc. • Navigation. Suppose an animal has capacities to represent a range of possible spatial layouts. We may classify the animal’s mental events based upon which such capacity the animal exercises. We may posit a range of mental representations, each instantiated precisely when the animal exercises the correlated capacity. These mental representations resemble the cognitive maps posited by Tolman. According to the capacities-​based representational theory of mind (C-​RTM), a mental representation is a type that correlates with a representational capacity. The type is instantiated precisely when a thinker (or one of her subsystems) exercises the correlated capacity. As I will put it, the type marks the exercise of the representational capacity.2 C-​RTM asserts that mental representations as thus construed exist and are explanatorily important. From C-​RTM’s viewpoint, talk about mental representations embodies an “ontologized” way of classifying mental events through representational capacities deployed by the events. Such talk is a more ontologically loaded expression of something to which folk psychology and cognitive science are firmly committed: animals have representational capacities, and those capacities are important for understanding how the mind works. By positing mental representations, we reify the mental event types that figure in our quotidian and scientific classificatory procedures. We thereby undertake ontological commitment to the types. In some areas, such as research on cognitive maps, current theorizing already makes the reification explicit. In other areas, such as folk psychology, explicit reification is less common. When introducing C-​RTM, I  used the locution “exercise a representational capacity.” A few clarificatory remarks about this locution and its role in my account: • I  do not construe the locution in a voluntaristic or action-​theoretic way. You can exercise a capacity involuntarily, without intending to exercise it, and without being consciously aware that you are exercising it. For example, you have a capacity to breathe, and while reading this 2 I borrow the term “mark” from Burge (2010a, 39), who uses it in a similar but not completely identical fashion.

144  What Are Mental Representations? paper you have until now most likely been exercising that capacity involuntarily, without intending to, and without conscious awareness. Many of the representational capacities invoked by cognitive science are not under voluntary control: you cannot help but perceive a sphere as spherical when you see it under the proper viewing conditions. In some cases (such as Bayesian inferences executed by the perceptual system), a representational capacity is exercised not by the thinker but rather by a mental subsystem. Exercising a representational capacity is not necessarily something that you decide to do, something you know you are doing, something you can control, or even something done by you (as opposed to your mental subsystems). • You may have a representational capacity without exercising it. For example, many people have an unexercised capacity to represent the property green-​eyed ophthalmologist currently living in Marseille who loves rock-​climbing. We may posit a mental representation that marks the exercise of this representational capacity. Someone who has the capacity without exercising it never instantiates the mental representation. Had she exercised the capacity, she would have instantiated the representation. A mental representation is an abstract type that exists and that marks the exercise of a representational capacity whether or not the capacity is actually exercised. • When a thinker (or one of her subsystems) exercises a representational capacity, there occurs a mental event that is an exercise of the representational capacity. The event might be a judgment, a belief, a perceptual state, an inferential transition, and so on. A mental event is a token of a mental representation just in case the event is an exercise of the representational capacity marked by the representation. It would be good to say more about capacities in general, about representational capacities more specifically, and about the relation mental event e is an exercise of representational capacity C even more specifically. My strategy in the present paper is to take these notions for granted and explore whether they can help clarify the nature of mental representations. It does not immediately follow from what I  have said that mental representations have representational properties. A  type does not automatically inherit properties from its tokens. For example, the type red ball has red tokens but is not itself red. Likewise, one might hold that mental

representation R marks the exercise of a capacity to represent d but that R does not itself represent d. I finesse this issue by introducing a term denote* governed by the following stipulation:

Mental representation R denotes* d iff R marks the exercise of a capacity to represent d.

There seems little harm in writing the "*" with invisible ink, yielding:

(Δ) Mental representation R denotes d iff R marks the exercise of a capacity to represent d.

We may therefore postulate mental representations that denote, keeping in mind that the operative notion of “denotation” is given by (Δ).

5.4. Admitting Mental Representations into Our Discourse

C-RTM goes beyond intentional realism in two main ways. First, C-RTM invokes representational capacities. Strictly speaking, an intentional realist might eschew all talk about representational capacities. However, this first difference does not strike me as very significant, because invocation of representational capacities seems at least implicit in any view that attributes representational properties to mental events. Second, and more importantly, C-RTM outstrips intentional realism by explicitly reifying representational mental event types. A theorist might type-identify mental events representationally while declining to reify. For example, any intentional realist should agree that

a belief that some dogs are furry
a desire to buy a new dog
a conjecture that dogs are closely related to wolves
a fear that Fred's dog will bark
etc.

have something important in common. A common capacity to represent dogs is exercised in each case. C-RTM goes further by asserting that there

146  What Are Mental Representations? exists a type instantiated in each case. The existential quantifier signals ontological commitment to an abstract type. By reifying representational mental event types, C-​RTM goes beyond intentional realism. Why reify mental representations? Why posit representational mental event types? Well, why not? We usually feel no qualms about positing abstract types corresponding to a reasonable taxonomic scheme. Why feel qualms about reifying the mental event types that figure in intentional explanation? (Cf. Burge 2010a, 40.) Given the plethora of types recognized across a vast range of human endeavors, and given the scientific success of representational taxonomization, why scruple at positing representational mental event types? Philosophers who favor a broadly nominalist viewpoint will decry the reifying step from intentional realism to C-​RTM. Nominalists deny that abstract entities exist. In particular, they deny that types exist. However, nominalism is a problematic position. A large literature over the past century has convincingly demonstrated that abstract entities are, at least in some cases, metaphysically harmless (Linnebo 2018; Parsons 2008)  and indispensable to scientific inquiry (Quine 1981, 1–​23, 148–​155; Putnam 1975a, 345–​357). Setting nominalist worries aside, I see no good reason why we should decline to reify representational mental event types. any theorist who is comfortable with abstract entities in general should gladly take the extra reifying step from intentional realism to C-​RTM. I distinguish two dialectical roles for abduction within the defense of C-​ RTM. First, as I urged in section 5.2, there is strong abductive evidence for intentional realism. Having endorsed intentional realism, we should happily take the extra reifying step to C-​RTM. Second, there are several areas where our best cognitive science theories already take the reifying step by assigning explanatory centrality to mental representations. Examples: • According to Bayesian perceptual psychology, the perceptual system attaches probabilities to hypotheses. For instance, shape perception results from a Bayesian inference over hypotheses that represent possible distal shapes. Bayesian models individuate hypotheses representationally—​ by citing shapes, sizes, colors, and other such distal properties represented by the perceptual system (Rescorla 2015). A Bayesian model of shape perception individuates hypotheses in terms of represented shapes; a Bayesian model of size perception individuates

Reifying Representations  147 hypotheses in terms of represented sizes; and so on. The science presupposes standing capacities for perceptual representation of distal properties. It invokes these standing capacities when individuating hypotheses that figure in Bayesian perceptual inference. Thus, Bayesian perceptual psychology posits perceptual representations as characterized by C-​RTM. • Researchers posit cognitive maps employed during mammalian navigation. Detailed computational models, some Bayesian, describe how mammals use sensory input and self-​motion cues to update their cognitive maps and how they use cognitive maps during navigation (Madl et al. 2015). The models individuate cognitive maps representationally—​ by citing specific represented spatial layouts (Rescorla 2018). The science presupposes that mammals have standing capacities to represent spatial layouts. It invokes these capacities when individuating cognitive maps. Thus, current research posits cognitive maps as construed by  C-​RTM. Explicit reification of representational mental event types also occurs in explanatorily fruitful theories of motor control (Rescorla 2016), deductive reasoning (Rips 1994), mathematical cognition (Gallistel and Gelman 2005), and many other phenomena. In most such cases, it is unclear whether we can preserve the theory’s explanatory achievements while declining to reify. Take Bayesian perceptual psychology. Bayesian perceptual models postulate hypotheses to which probabilities attach. To accept that the models are true or even just approximately true, one must accept that the postulated hypotheses exist and are roughly as depicted by the models. The models individuate hypotheses representationally, by invoking representational capacities exercised when the hypotheses are instantiated. A nonrepresentational individuative scheme would depart dramatically from current science, most likely with undesirable explanatory repercussions (Rescorla 2015). When a successful theory reifies representational types, and when rejecting the types would beget an explanatory loss, this provides abductive evidence for C-​RTM. C-​RTM is a very anodyne version of RTM, in that it involves fairly minimal commitments beyond intentional realism. Some readers may fear that it is too anodyne. How much substance can there be to a version of RTM that extends so little beyond intentional realism? Can such a weak view really capture what proponents of RTM have intended by postulating mental

148  What Are Mental Representations? representations, or what opponents have intended by denying that such entities exist? My appeal to representational capacities may also appear suspiciously empty. When I say that mental event e is an exercise of a capacity to represent d, isn’t that just a needlessly prolix way of saying that e represents d? It may seem that representational capacities are doing no real work, so that my account has even less substance than I have advertised. The rest of the paper addresses these worries. My basic strategy is to show that the appeal to representational capacities advances our understanding of mental representations, sometimes in surprising ways. Section 5.5 uses C-​ RTM to clarify the complexity of certain mental representations. Section 5.6 uses C-​RTM to elucidate how distinct mental representations can share the same denotation. Section 5.7 argues that C-​RTM improves upon many rival versions of RTM by assigning a central individuative role to representational properties. Sections 5.5–​5.7 collectively show that C-​RTM, properly developed, is a highly substantive view that preserves and illuminates many traditional core commitments of RTM.

5.5. Complex Mental Representations as Complex Types

Advocates of RTM universally agree that mental representations can combine to form complex representations. What does it mean to say that a mental representation is "complex"? I think that C-RTM supplies a good answer. Before saying how, I must reflect in a general way upon types and tokens.

5.5.1. Complex Types

It is often natural to regard a type as a complex entity that bears structural relations to other types. The molecule methane CH4 is instantiated when four hydrogen atoms and one carbon atom form appropriate chemical bonds with one another. A C-major chord occurs when the three notes C, E, and G play simultaneously. The English sentence "John loves Mary" occurs when the individual words "John," "loves," and "Mary" are arranged in a suitable syntactic structure. In each case, we recognize a type that marks the structured instantiation of other types. The first type intimately involves the other types as arranged in an appropriate configuration. What counts as "appropriate"

Reifying Representations  149 varies. The same notes (C, E, and G) yield a chord or an arpeggio depending on whether they play simultaneously or sequentially. I say that type t incorporates types t1, . . . , tn other than itself when, necessarily, any token of t has parts that are tokens of t1, . . . , tn and that bear appropriate relations to one another. The “appropriate relations” depend upon t. I say that type t is complex when it incorporates other types. Here I employ an abstract notion of part affiliated with mereology. Parts need not be spatial parts. For example, the Biles is a complex gymnastics routine first performed by Simone Biles consisting of a double layout and half twist, where the half twist occurs during the end of the double layout. Each token of the Biles is a complex event with two distinct parts: a token performance of a double layout, and a token performance of a half twist. A token performance of a double layout is part of the overall token performance of the Biles, but it is not a spatial part. Recognizing a type as complex is often an essential first step toward elucidating it. That is why, when we introduce a novice to a complex type such as a C-​major chord, methane, or the Biles, we usually cite other types and indicate how their tokens must relate in order for the complex type to be instantiated. A complex type has tokens that are themselves complex. The tokens may be medium-​sized physical objects (e.g., automobiles), microscopic particles (e.g., molecules), events (e.g., linguistic utterances, gymnastic performances), or otherwise. Each token of a complex type t has parts that are tokens of the types incorporated by t. Complex types resemble the structural universals posited by Armstrong (1980). Lewis helpfully characterizes structural universals as follows (1999, 81): “Anything that instantiates [a structural universal] must have parts; and there is a necessary connection between the instantiating of the structural universal by the whole and the instantiating of other universals by the parts.” However, I do not assume that complex types have all the properties attributed by Armstrong and Lewis to universals. More specifically, Lewis (1999, 80)  says that a universal satisfies two conditions:  “wherever it is instantiated, there the whole of it is present” and “[w]‌hen it is instantiated, it is a nonspatiotemporal part of the particular that instantiates it.” I do not assume that types satisfy either condition.3 3 Lewis (1999, 78–​107) argues that structural universals do not exist. His arguments generalize straightforwardly from structural universals to complex types. This is worrisome, because complex types figure across a vast range of disciplines, including linguistics, computer science, biology, chemistry, music theory, and so on. Were Lewis’s arguments successful, we would face the unsavory

150  What Are Mental Representations? Does a complex type have parts other than itself? Does it have internal structure? It sounds plausible to say that methane is composed of carbon and hydrogen bound together in an appropriate way, that a natural language sentence is composed of certain words arranged in a certain syntactic structure, and so on. Such claims are common within philosophy, linguistics, computer science, and most other disciplines that cite complex types. However, they do not follow from my definition of “complex type.” My definition requires that tokens of the complex type have parts, not that the complex type itself have parts. One might hold that a complex type is an unstructured entity whose tokens necessarily have internal structure. One might hold that complex type t marks the structured instantiation of t1, . . . , tn even though t is not itself structured. I doubt that any truly important questions about the mind hinge upon how we resolve this issue. What matters for my purposes is the role that complex types play within our discourse: a complex type marks the structured instantiation of other types. Whether the complex type itself has structure is an interesting question that I set aside.4 A complex type is instantiated only when incorporated types are instantiated in appropriate relation to one another. In that sense, the complex type is instantiated only when incorporated types combine in an appropriate way. While I remain agnostic as to whether complex types have internal structure, it still seems right to say that a complex type results from combining types that it incorporates. We can say with good conscience that the C-​major chord results from appropriately combining C, E, and G, that the Biles results from appropriately combining a double layout and a half twist, and so on. In agreeing that a complex type results from combining incorporated types, we need not agree that incorporated types are parts of the complex type. There are many cases where an entity results from combining items that are not its parts. A cake results from appropriately combining its ingredients, but few philosophers would say that the ingredients are parts of the cake.

prospect of revising these disciplines so as to eliminate reference to complex types. Luckily, the literature suggests several ways that one might contest Lewis’s arguments (e.g., Bennett 2013; Davis 2014; Hawley 2010). For present purposes, we may safely disregard Lewis’s arguments and assume that complex types exist. 4 The two possible answers to this question correspond to two possible views distinguished by Lewis (1999) regarding structural universals: the pictorial conception (“a structural universal is isomorphic to its instances” [96]) and the magical conception (“a structural universal has no proper parts” [100]).

Reifying Representations  151

5.5.2.  Structured Instantiation of Representational Capacities Typically, a representational mental event deploys multiple representational capacities in coordination with one another. The coordinated capacities are sub-​capacities of a complex representational capacity. A complex mental representation is a mental representation that marks the exercise of a complex representational capacity. To illustrate: • High-​level cognition. When a thinker judges that some dogs are furry and some are not, she exercises a complex representational capacity composed of sub-​capacities that include a capacity to represent dogs (corresponding to the Mentalese word ), a capacity to represent furriness (corresponding to the Mentalese word ), and capacities for conjunction, negation, and existential quantification. We posit a complex Mentalese sentence , which marks the exercise of the complex representational capacity. • Perception. A  typical perceptual state attributes distal properties to observed particulars (Burge 2010b). The perceptual state deploys a complex representational capacity composed of sub-​capacities that may include capacities to represent specific shapes, sizes, colors, and other distal properties; capacities for singular representation of environmental particulars; and a capacity to attribute distal properties to perceived particulars (Rescorla 2020a). We may posit a complex perceptual representation that marks the exercise of this complex representational capacity. • Navigation. A cognitive map marks the exercise of a complex representational capacity composed of sub-​capacities that include capacities to represent individual landmarks; a capacity to represent the animal itself; and capacities to represent particulars as positioned a certain way in physical space. A complex mental representation marks the coordinated exercise of representational capacities. I now argue that complex mental representations are complex types. A complex mental representation R marks the exercise of a complex representational capacity C. A mental event is a token of R just in case it is an exercise of C:

152  What Are Mental Representations? (1) Necessarily, any token of R is an exercise of C. C has sub-​capacities C1, . . . , Cn. Exercise of C consists in the coordinated exercise of C1, . . . , Cn. Exercise of Ci is part of the exercise of C, just as a gymnast’s performance of a double layout is part of her performance of the Biles. Any exercise of C has as parts exercises of sub-​capacities C1, . . . , Cn, where the sub-​capacities must be coordinated appropriately with one another: (2) Necessarily, any exercise of C has parts that are exercises of C1, . . . , Cn and that bear appropriate relations to one another. There exist mental representations R1, . . . , Rn that mark the exercise of capacities C1, . . . , Cn: (3) Necessarily, any exercise of Ci is a token of Ri. (1)–​(3)  entail (4) Necessarily, any token of R has parts that are tokens of R1, . . . , Rn and that bear appropriate relations to one another. Since R1, . . . , Rn are distinct from R, it follows from (4) that R incorporates R1, . . . , Rn. Thus, any complex mental representation R is a complex type in section 5.5.1’s sense. R’s tokens are complex mental events.5 Clause (4) is a crucial feature of complex mental representations, found across a wide range of psychological domains. Examples: • Mentalese sentences. To instantiate , a thinker must instantiate the type , the type , and types corresponding to conjunction, negation, and the existential quantifier. The thinker must instantiate these types in appropriate relation to one another. So the Mentalese sentence incorporates , , and types corresponding to conjunction, negation, and the existential quantifier. 5 Fodor also explicates the complexity of mental representations by citing the complexity of mental events (1987, 136–​139). However, he combines his emphasis on complex mental events with several additional doctrines that I do not accept (see section 5.5.5 and section 5.7).

Reifying Representations  153 • Complex perceptual representations. Suppose a token perceptual state attributes observable properties to observed particulars. Then the state instantiates a complex perceptual representation that incorporates perceptual representations of the properties and singular perceptual representations of the particulars. • Cognitive maps. To instantiate a cognitive map, an animal must instantiate various singular representations that mark the exercise of capacities to represent various particulars, including both individual landmarks and the animal itself. The animal must also instantiate mental coordinates that mark the exercise of capacities to represent physical particulars as positioned a certain way in physical space (Gallistel 1999). The animal instantiates these types in appropriate relation to one another, thereby representing the denoted particulars as positioned a certain way in space. In each case, a complex mental representation R marks the structured instantiation of mental representations R1, . . . , Rn.

5.5.3. A Toy Example Let me illustrate these ideas with a toy example. Imagine an idealized mathematical thinker with a capacity to represent the number 0 and a capacity to represent the successor function. I posit mental words and that mark the respective exercise of each capacity. The thinker also has a capacity to apply functions to arguments. These three capacities yield complex capacities to represent natural numbers. I posit an array of complex mental numerals that mark the exercise of the complex capacities. For example, there is a mental numeral that marks the exercise of a complex capacity to represent the number 4. This complex capacity involves three sub-​capacities: a capacity to represent 0 a capacity to represent the successor function (deployed four times) a capacity to apply a function to an argument (deployed four times) To instantiate , a thinker must instantiate and , and she must do so in the appropriate way—​ by iteratively combining her capacity for

function-application with the capacities corresponding to ZERO and SUCC. Thus, SUCC(SUCC(SUCC(SUCC(ZERO)))) incorporates ZERO and SUCC. Similarly, suppose the thinker has a capacity to represent the addition function. I posit a mental word PLUS correlated with this capacity, and I posit further complex mental numerals that mark the exercise of the resulting complex capacities. Mental numeral (SUCC(SUCC(ZERO)) PLUS SUCC(SUCC(ZERO))) marks the exercise of a complex capacity involving four sub-capacities:

a capacity to represent 0
a capacity to represent the successor function (deployed four times)
a capacity to represent the addition function (deployed once)
a capacity to apply a function to an argument (deployed five times)

This complex capacity is a capacity to represent the number four. However, it is a different capacity than the complex capacity corresponding to SUCC(SUCC(SUCC(SUCC(ZERO)))). The capacities are different because they involve different sub-capacities exercised in different ways. More generally, the thinker has an infinite array of representational capacities, corresponding to an infinite array of mental numerals. If t is a mental numeral, then SUCC(t) is a complex mental numeral that marks appropriate joint exercise of the capacity corresponding to SUCC, the capacity corresponding to t, and a capacity for function-application:

(5) If mental numeral t marks the exercise of a capacity to represent d, then SUCC(t) marks the exercise of a capacity to represent the successor of d.

If s and t are mental numerals, then (s PLUS t) is a complex mental numeral that marks appropriate joint exercise of the capacity corresponding to PLUS, the capacities corresponding respectively to s and t, and a capacity for function-application:

(6) If mental numeral s marks the exercise of a capacity to represent d, and if mental numeral t marks the exercise of a capacity to represent e, then (s PLUS t) marks the exercise of a capacity to represent the sum of d and e.

What results is a toy language of complex mental words, each word correlated with a distinct complex representational capacity.
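The formation rules embodied in (5) and (6) can be rendered as a small recursive datatype. The following Python sketch is purely illustrative (the constructor names Zero, Succ, and Plus are stand-ins for the mental words ZERO, SUCC, and PLUS): a numeral is either the zero word, the successor word applied to a numeral, or the addition word applied to two numerals.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Zero:
    """Stands in for the mental word ZERO."""

@dataclass(frozen=True)
class Succ:
    """Stands in for the mental word SUCC applied to a numeral."""
    arg: "Numeral"

@dataclass(frozen=True)
class Plus:
    """Stands in for the mental word PLUS applied to two numerals."""
    left: "Numeral"
    right: "Numeral"

Numeral = Union[Zero, Succ, Plus]

# Two structurally distinct numerals, each built by iterated function-application:
four_by_succ = Succ(Succ(Succ(Succ(Zero()))))                # SUCC(SUCC(SUCC(SUCC(ZERO))))
four_by_plus = Plus(Succ(Succ(Zero())), Succ(Succ(Zero())))  # (SUCC(SUCC(ZERO)) PLUS SUCC(SUCC(ZERO)))
```

Each constructor application in such a value corresponds to one exercise of the capacity for function-application, so the shape of the value records which sub-capacities are deployed and how many times.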

Our toy language comes with a canonical compositional semantics. We stipulated that ZERO marks the exercise of a capacity to represent 0. Recall also our stipulation from section 5.3:
(Δ) Mental representation R denotes d iff R marks the exercise of a capacity to represent d.
Our stipulations jointly entail the base clause:
ZERO denotes 0.
(5) and (Δ) jointly entail the recursion clause:
SUCC(t) denotes the successor of the denotation of t,
for any mental numeral t. (6) and (Δ) jointly entail the recursion clause:
PLUS(s, t) denotes the sum of the denotation of s and the denotation of t,
for any mental numerals s and t. From the base clause and recursion clauses, we can derive familiar Tarski-style clauses specifying the denotations of specific terms, such as:
SUCC(SUCC(SUCC(SUCC(ZERO)))) denotes the successor of the successor of the successor of the successor of 0.

Our stipulations thereby determine a unique denotation for each mental numeral, as specified by the compositional semantics.
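The compositional semantics just described can be made concrete with a short sketch in Python. This is an illustration rather than part of the text: the class names, and the rendering of the mental words as ZERO, SUCC, and PLUS, are assumptions, but the evaluation function implements exactly the base clause and the two recursion clauses, so every complex numeral receives a unique denotation.

```python
# Minimal sketch (not from the text) of the toy language of mental numerals
# and its compositional semantics. Each class models a type that marks the
# exercise of a representational capacity; denotation() implements the base
# clause and the recursion clauses (5) and (6).

from dataclasses import dataclass


class Numeral:
    """A mental numeral of the toy language."""


@dataclass(frozen=True)
class Zero(Numeral):
    """Marks the exercise of a capacity to represent the number 0."""


@dataclass(frozen=True)
class Succ(Numeral):
    """Marks joint exercise of the successor capacity, the capacity marked
    by t, and a capacity for function-application."""
    t: Numeral


@dataclass(frozen=True)
class Plus(Numeral):
    """Marks joint exercise of the addition capacity, the capacities marked
    by s and t, and a capacity for function-application."""
    s: Numeral
    t: Numeral


def denotation(r: Numeral) -> int:
    """Compositional semantics for the toy language."""
    if isinstance(r, Zero):
        return 0                                   # base clause: ZERO denotes 0
    if isinstance(r, Succ):
        return denotation(r.t) + 1                 # SUCC(t) denotes the successor of t's denotation
    if isinstance(r, Plus):
        return denotation(r.s) + denotation(r.t)   # PLUS(s, t) denotes the sum of the denotations
    raise ValueError("unknown mental numeral")


# Two distinct complex numerals that denote the same number:
four_a = Succ(Succ(Succ(Succ(Zero()))))
four_b = Plus(Succ(Succ(Zero())), Succ(Succ(Zero())))
assert denotation(four_a) == denotation(four_b) == 4
```

The two numerals at the end receive the same denotation while being built from differently coordinated capacities, the feature that becomes important when co-referring representations are discussed in section 5.6.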

5.5.4.  Complex Mental Representations I propose that we take the toy mathematical language as a paradigm. When studying a mental phenomenon—​perception, motor control, navigation, language, high-​level cognition, or what have you—​we should identify the complex representational capacities at work. We should clarify how each complex capacity decomposes into sub-​capacities. We may then posit complex mental

representations corresponding to the complex capacities. For each complex mental representation, we should clarify which sub-capacities are exercised when the representation is instantiated, which "appropriate relations" must obtain between the exercised sub-capacities, and which representational properties are implicated by appropriately coordinated exercise of the sub-capacities.
Because the toy mathematical language is an artificial example, I could stipulate its properties. When studying real-life examples, we may no longer stipulate. We must instead seek guidance from folk psychology, scientific psychology, introspection, and philosophical reflection. In some cases, those sources already afford considerable insight. Two examples:
• Any thinker has a general capacity for predication. If she has a capacity to represent n-place relation F, and if she has a capacity to represent objects a1, . . . , an, then her general predicative capacity also makes available (in principle) a complex capacity for predicating F of a1, . . . , an. Exercising this capacity, she can think a thought that is true just in case a1, . . . , an stand in relation F to one another. For example, given a capacity to represent London, a capacity to represent Paris, and a capacity to represent the relation in which two entities stand when the first is north of the second, the agent can think a thought that is true just in case London is north of Paris. In doing so, she exercises a complex capacity composed of the aforementioned sub-capacities. Exercise of the complex capacity is marked by a Mentalese sentence LONDON NORTH-OF PARIS that incorporates individual Mentalese expressions LONDON, PARIS, and NORTH-OF. In order for the sentence to be instantiated, the individual expressions must be instantiated in the appropriate way, drawing upon the thinker's predicative capacity.
• Any thinker has a general capacity for conjunctive thought. Given a capacity to think a thought with some truth-condition, and given a capacity to think a thought with a second truth-condition, the thinker has a capacity to think a thought that is true iff both truth-conditions are satisfied. Accordingly, we may posit a Mentalese word AND that combines appropriately with Mentalese sentences. A conjunctive Mentalese sentence such as LONDON NORTH-OF PARIS AND PARIS NORTH-OF NEW YORK correlates with a complex capacity composed of sub-capacities to represent London, Paris, New York, and the relation of being north of, along with general capacities for predication and conjunction. All these sub-capacities must be exercised and appropriately coordinated in order for the Mentalese sentence to be instantiated.
A complete account should develop my intuitive formulations more systematically, including provision of a compositional semantics. A complete account should also address additional logical compounding devices, including disjunction, negation, the conditional, and the quantifiers. There are many delicate philosophical and technical details here.6 My goal is not to provide a complete account but rather to indicate a promising direction for future research.
I have focused so far on representational complexity as it arises within high-level cognition. Some authors propose that high-level cognition has a different format than perception (Fodor 2008; Peacocke 1992), mental imagery (Kosslyn 1980), analogue magnitude representation (Beck 2012), or other relatively low-level representational phenomena. For example, Burge (2010b) holds that perceptual representations exhibit nothing like the logical structure characteristic of high-level thought. In (Rescorla 2009), I suggested that something similar may hold of cognitive maps. These issues remain murky and controversial. I think that C-RTM can help. To elucidate representational format, we should analyze how complex representational capacities arise from the appropriate joint exercise of representational sub-capacities. Different representational formats correspond to differently structured ways of exercising sub-capacities. I acknowledge that my formulations are sketchy. Once again, I offer them only to indicate a promising path forward.
Even in its present preliminary state, C-RTM validates the traditional thought that mental representations can combine to form complex mental representations. There is a natural sense in which type SUCC(ZERO) results from appropriately combining types SUCC and ZERO: SUCC(ZERO) is instantiated only when SUCC and ZERO are instantiated in an appropriate way. Similarly, there is a natural sense in which LONDON NORTH-OF PARIS results from appropriately combining LONDON, PARIS, and NORTH-OF. Just as a C-major chord results when a musician appropriately combines individual notes, and just as the Biles results when a gymnast appropriately combines individual gymnastics moves, so does a complex mental representation result when the mind appropriately combines mental representations. Complex types are rightly so-called because they mark the structured instantiation of other types. Complex mental representations are rightly so-called because they mark the structured instantiation of other mental representations.
6 For example, one must extend (Δ) with a generalized clause stipulating what it is for mental representation R to have semantic value α.
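As an illustration of how such a compositional treatment might look, the following sketch evaluates predication and conjunction in the same style as the mental numerals. It is not part of the account developed here: the particular expression names, the toy world, and the truth-evaluation function are invented for illustration.

```python
# Illustrative sketch: Mentalese sentences built by predication and
# conjunction, with compositional truth-conditions evaluated against a toy
# world. The specific expressions and the world are invented.

from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class Predication:
    """Marks coordinated exercise of a predicative capacity together with
    capacities to represent a relation and its arguments."""
    relation: str
    args: Tuple[str, ...]


@dataclass(frozen=True)
class Conjunction:
    """Marks coordinated exercise of a conjunctive capacity together with the
    capacities marked by the two conjunct sentences."""
    left: object
    right: object


def true_in(sentence, world) -> bool:
    """A predication is true iff the denoted objects stand in the denoted
    relation; a conjunction is true iff both conjuncts are true."""
    if isinstance(sentence, Predication):
        return sentence.args in world[sentence.relation]
    if isinstance(sentence, Conjunction):
        return true_in(sentence.left, world) and true_in(sentence.right, world)
    raise ValueError("unknown sentence type")


# Toy world: which ordered pairs stand in the north-of relation.
world = {"NORTH-OF": {("London", "Paris"), ("Paris", "New York")}}

s1 = Predication("NORTH-OF", ("London", "Paris"))
s2 = Conjunction(s1, Predication("NORTH-OF", ("Paris", "New York")))
assert true_in(s1, world) and true_in(s2, world)
```

Nothing in the sketch requires that the sentence types themselves have parts; what does the work is the coordinated exercise of the relevant capacities, a point taken up in the next section.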

5.5.5.  Internally Structured Mental Representations? Philosophers usually construe the complexity of mental representations more literally than I  have done. Most discussants hold that a complex mental representation is a structured entity composed of less complex mental representations. Call this the mereological thesis. According to the mereological thesis, a complex mental representation has other mental representations as literal parts. I remain agnostic regarding the mereological thesis, as befits my agnosticism regarding whether complex types have internal structure. What matters most is that a complex type marks the structured instantiation of incorporated types. Specifically, a complex mental representation marks the structured exercise of representational capacities. Whether the mental representation itself has structure is not nearly so important.7 My agnosticism regarding the mereological thesis mandates a slight departure from the usual treatment of compositionality. Philosophers usually gloss compositionality in mereological terms. They say, roughly, that the meaning of a complex mental representation is determined by the meanings of its parts and the way in which those parts are put together. A mereological gloss is unavailable to me, since I do not assume that mental representations have parts. Instead, I say that the meaning of a complex representation is determined by the meanings of the representations that it incorporates and the way that it incorporates them. As illustrated by my discussion of the toy mathematical language, we need not assume that a complex mental representation has internal structure when specifying its compositional semantics.

7 Burge (2009, 2010a) posits an array of mental representations employed in thought, perception, navigation, and other mental activities. He identifies these items with mental representational contents (2009, 248). In many respects, my position is similar to Burge’s. One difference is that Burge regards mental representations as structured. He writes: “At bottom, representational contents are just kinds, or aspects of kinds, of psychological states. The structure of representational contents marks structural aspects of the capacities embodied in the psychological states” (2010a, 41). In contrast, I remain neutral as to whether mental representations have internal structure. This difference is not as significant as it may initially appear, because I still recognize an important sense in which complex mental representations result from combining together other mental representations.

Reifying Representations  159 We can instead assume that it marks the structured instantiation of other mental representations. Proponents offer various arguments for the mereological thesis (Davis 2003, 368–​406). Most famously, Fodor and Pylyshyn (1988) highlight a phenomenon called systematicity: there are systematic relations among which thoughts a thinker can entertain. To use their example, someone who can think that John loves the girl can also think that the girl loves John. Fodor and Pylyshyn write (1988, 39): The systematicity of thought shows that there must be structural relations between the mental representation that corresponds to the thought that John loves the girl and the mental representation that corresponds to the thought that the girl loves John; namely, the two mental representations . . . must be made of the same parts. But if this explanation is right (and there don’t seem to be any others on offer), then mental representations have internal structure.

Thus, Fodor and Pylyshyn defend the mereological thesis by arguing that it underwrites the best explanation for systematicity. Let us consider in more detail how the explanation is supposed to go. Assume that Fred can think that John loves Mary. Why can he also think that Mary loves John? Presumably Fodor and Pylyshyn have in mind an explanation along the following lines: (7) If Fred can think that John loves Mary, then he can stand in an appropriate relation T to a structured mental representation R. The parts of R can be recombined to form a different structured mental representation, corresponding to the thought that Mary loves John. Fred can also stand in relation T to that second structured mental representation, since he has access to all its parts. Thus, he can think that Mary loves John. I agree that this is one possible explanation. But an alternative explanation is possible: (8) When Fred thinks that John loves Mary, he exercises a complex representational capacity C composed of sub-​capacities that include a capacity for predication; a capacity to represent John; a capacity

160  What Are Mental Representations? to represent the loving relation; and a capacity to represent Mary. Because Fred has all these sub-​capacities, he also has a second complex representational capacity C*, corresponding to the thought that Mary loves John. He has capacity C* because C* involves precisely the same sub-​capacities as C. The sub-​capacities are merely coordinated in a different way when one exercises C versus C*. Thus, Fred can think that Mary loves John. I think that (8) is as good an explanation as (7). Yet (8) does not assume that complex mental representations are structured. It does not even mention mental representations. A nominalist who refuses to reify mental representation types could accept (8). Of course, I am not a nominalist. I cheerfully reify, correlating each representational capacity mentioned by (8)  with a mental representation. But I do not claim that reification improves the explanation given by (8). Structural relations among representational capacities, not structural relations among mental event types, explain why Fred’s ability to think that John loves Mary entails his ability to think that Mary loves John. Contrary to Fodor and Pylyshyn, systematicity does not show that there are structural relations among mental representations. It only shows that there are structural relations among representational capacities. Thus, systematicity provides no support for the mereological thesis over my less committal viewpoint.8 Fodor and Pylyshyn ask (1988, 44): “how could the mind be so arranged that the ability to be in one representational state is connected with the ability to be in others that are semantically nearby? What account of mental representation would have this consequence?” I reply that the ability to be in a representational state is typically a complex capacity composed of sub-​capacities; the sub-​capacities can be redeployed so that the thinker instantiates “semantically nearby” states. The crucial observation is that mental representation involves the structured exercise of redeployable representational capacities. This observation concerns capacities, not the metaphysics of types. Even if we were to conclude that complex mental representations have mereological structure, it would still be preferable to explain systematicity without invoking 8 Here I build on Evans’s discussion. Evans (1982, 104) endorses a form of systematicity that he calls the Generality Constraint. To explain why thinkers satisfy the Generality Constraint, Evans cites the structured nature of representational capacities. He also writes (1982, 101): “I should prefer to explain the sense in which thoughts are structured, not in terms of their being composed of several distinct elements, but in terms of their being a complex of the exercise of several distinct conceptual abilities.”

that structure. The best explanation would still adduce the structured way that mental events deploy representational capacities, without any detour through the structure of mental event types. The former kind of structure, not the latter, is explanatorily fundamental.
I have critiqued Fodor and Pylyshyn's systematicity argument for the mereological thesis. Parallel considerations apply to other well-known arguments for the thesis. Parallel considerations show that any mental phenomena customarily explained by the mereological thesis are explained as well, if not better, by citing structural relations among representational capacities. For present purposes, I must leave my assessment undefended.

5.6.  Mode of Presentation In the previous section, I invoked representational capacities to clarify the complexity of mental representations. I now invoke them to clarify another phenomenon widely recognized among proponents of RTM: distinct mental representations may share the same denotation. Frege (1892/​1997) argued that a thinker can represent a single denotation in different ways, or under different modes of presentation. He illustrated by considering a thinker who believes that Hesperus is Hesperus but does not believe that Hesperus is Phosphorus. Frege says that the thinker mentally represents the same denotation (Venus) under two distinct modes of presentation. But what exactly are modes of presentation? Here are two well-​known proposals: • Fodor (1994) proposes that we gloss modes of presentation as mental representations. For example, we can posit distinct, co-​ referring Mentalese words that denote Venus. • Evans (1982, 100–​105), following Geach (1957), proposes that we elucidate modes of presentation in terms of abilities. He writes:  “When two thought-​episodes depend on the same ability to think of something, we can say that the thing is thought about in the same way” (101). Conversely, the thing is thought about in different ways when the two thought-​episodes depend on different abilities to think about it.9 9 Citing Evans as an influence, Beck (2013) likewise elucidates modes of presentation in terms of abilities.

C-RTM allows us to combine Fodor's proposal with Evans's. We can say that distinct modes of presentation are distinct mental representations, marking the exercise of distinct representational capacities. For example, we may posit a Mentalese word HESPERUS that marks the exercise of a capacity to represent Venus in higher-level thought and a distinct Mentalese word PHOSPHORUS that marks the exercise of a different capacity to represent Venus. Distinct co-referring mental representations correspond to distinct capacities for representing a single denotation.
Why distinguish the Mentalese words HESPERUS and PHOSPHORUS? Why say that the corresponding representational capacities differ? To simplify matters, let us consider a thinker who has never examined the heavens closely enough to perceive Venus directly. The thinker is still able to represent Venus within her thought. She acquires a capacity to represent Venus when she learns the English word "Hesperus." This capacity emerges as she masters those aspects of linguistic practice involving the word "Hesperus." When she learns the English word "Phosphorus," she acquires a second capacity to represent Venus. This second capacity emerges as she masters those aspects of linguistic practice involving the word "Phosphorus." That the two capacities are different is evidenced by several facts:
• They are acquired on different occasions and through exposure to different aspects of linguistic practice.
• A thinker can acquire one without acquiring the other.
• Even after a thinker has acquired both, she can exercise one without exercising the other (e.g., she can think of an entity as Hesperus without thinking of it as Phosphorus).
HESPERUS and PHOSPHORUS are distinct types that mark the exercise of distinct capacities to represent the same planet. The thinker instantiates the first type when she represents Venus via her connection to our linguistic practices surrounding the word "Hesperus." She instantiates the second type when she represents Venus via her connection to our linguistic practices surrounding the word "Phosphorus."10
10 If the thinker directly observes Venus, then she may on that basis acquire a third capacity to represent Venus within thought. This capacity is distinct from the capacities corresponding to Hesperus and Phosphorus, because it is grounded in perception of Venus rather than linguistic competence. We should therefore postulate a third mental representation that marks the exercise of this third representational capacity.

Reifying Representations  163 Modes of presentation are crucial for understanding mental representation in general, not just high-​level thought. A good illustration is sensory cue combination. The perceptual system typically estimates distal conditions based upon multiple cues. It can estimate an object’s size based on visual feedback or haptic feedback. It can estimate depth using binocular disparity, motion parallax, convergence, and many other visual cues. Bayesian perceptual psychology offers detailed models of cue combination (Trommershäuser, Körding, and Landy 2011). A  key presupposition underlying some of the most successful models is that distinct sensory cues correspond to distinct co-​referring perceptual representations (Rescorla, forthcoming b). For example, the perceptual system employs a vision-​based representation that denotes distal size s, and it employs a distinct touch-​based representation that also denotes s. Distinct co-​referring perceptual representations denote the same denotation, but they do so in different ways. Proponents of C-​ RTM can say that distinct co-​ referring perceptual representations mark the exercise of distinct capacities to represent the same denotation. For example, a vision-​based representation that denotes size s marks the exercise of a different capacity than a touch-​based representation that denotes s. The capacities are tied to different sensory cues. A perceiver has the first capacity only if she has (or is appropriately related to) a perceptual system wired to estimate distal size based upon visual cues. A perceiver has the second capacity only if she has (or is appropriately related to) a perceptual system wired to estimate distal size based upon haptic cues. A perceptual system might be wired the first way without being wired the second way, and vice versa. Hence, the capacities are distinct. Similarly for other cases where distinct perceptual representations correspond to distinct sensory cues. A mental event represents denotation d only because the event occurs within the mental activity of a creature with representational capacities. The creature is able to represent d by virtue of past causal interactions with d, or evolutionary relations to progenitors that interacted with d, or embedding within a linguistic practice that represents d, or internal psychological processing, or some combination of these and possibly other factors. Some combination of factors renders the creature able to represent d. A relevantly different combination of factors yields a different capacity to represent d. We register such differences by positing distinct co-​referring mental representations. Thus, C-​RTM grounds mental co-​reference in a fundamental feature of mental representation: diverse factors may render a creature able to represent a single denotation.
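For readers unfamiliar with these models, here is a minimal sketch of the reliability-weighted (maximum-likelihood) fusion scheme that is standard in the cue-combination literature. It is an illustration with invented numbers, not a reconstruction of any particular model cited here; the vision-based and touch-based inputs play the role of the distinct co-referring size representations just discussed.

```python
# Sketch of standard reliability-weighted cue combination for size estimation.
# A vision-based estimate and a touch-based estimate of the same distal size
# are fused, each weighted by its reliability (inverse variance). The numbers
# below are invented for illustration.

def fuse_cues(size_vision, var_vision, size_touch, var_touch):
    """Return the fused size estimate and its variance."""
    r_v, r_t = 1.0 / var_vision, 1.0 / var_touch    # cue reliabilities
    fused = (r_v * size_vision + r_t * size_touch) / (r_v + r_t)
    fused_var = 1.0 / (r_v + r_t)                   # fused estimate is more reliable than either cue alone
    return fused, fused_var


estimate, variance = fuse_cues(size_vision=10.0, var_vision=1.0,
                               size_touch=12.0, var_touch=4.0)
# The fused estimate (10.4) lies closer to the more reliable visual cue.
print(estimate, variance)
```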

One might question how I am individuating representational capacities. Why individuate them in so fine-grained a way? There is such a thing as the sheer capacity to represent Venus. You exercise that capacity when you think that Hesperus has craters and also when you think that Phosphorus has craters. Why not emphasize the general representational capacity deployed by both thoughts? Likewise, why not emphasize the general representational capacity to represent size s within perception, without distinguishing between vision-based and touch-based exercises of the capacity? Why not adopt a coarse-grained scheme that individuates representational capacities entirely through the represented denotations?
I reply that we should taxonomize mental events so as to support good explanations. Hesperus-thoughts and Phosphorus-thoughts play different roles in cognition. They figure differently within belief-fixation (certain astronomical observations may lead you to believe that Hesperus has craters but not that Phosphorus has craters), decision-making (you may make different plans if you want to test whether Hesperus has craters than if you want to test whether Phosphorus has craters), linguistic comprehension (subpersonal linguistic processing proceeds differently if someone tells you that Hesperus has craters than if someone tells you that Phosphorus has craters), and other mental processes. The differences diminish markedly if you learn that Hesperus is Phosphorus, but even then some differences persist, especially differences in linguistic processing. A satisfactory theory must track the differences. To do so, it must adopt a fine-grained taxonomic scheme that differentiates Hesperus-thoughts from Phosphorus-thoughts. A coarse-grained taxonomic scheme that recognizes only the sheer capacity to represent Hesperus will not serve nearly as well as a finer-grained scheme that distinguishes among capacities for representing Hesperus. The coarse-grained scheme may be useful for some purposes. Overall, it is less apt to promote fruitful explanation.
How exactly do distinct, co-referring mental representations differ? For example, under what circumstances does a mental event instantiate the type HESPERUS rather than the type PHOSPHORUS? The literature on RTM has addressed these questions extensively (Field 2001, 55–58; Fodor 1994; Schneider 2011; Stich 1983). Lying in the background is a widespread conviction that we should admit entities into our discourse only when we can associate them with well-defined identity conditions, i.e., conditions for re-identifying an entity as the same again. Quine (1969, 23) forcefully argues as much, summarizing his viewpoint through the slogan "no entity without identity." As applied to mental representations, Quine's viewpoint requires that we specify conditions under which mental representation R is the same as mental representation S. Adopting this viewpoint, proponents of RTM seek a plausible individuative scheme for mental representations, while opponents often contend that existing individuative schemes are unsatisfactory and hence that RTM is problematic (Prinz 2011).
C-RTM holds that different mental representations correlate with different representational capacities. However, this just pushes the bump under the rug from individuation of mental representations to individuation of representational capacities. Under what conditions are mental events e and e* exercises of the same representational capacity? For example, what exactly distinguishes the representational capacity correlated with HESPERUS from the representational capacity correlated with PHOSPHORUS? Lacking answers to these questions, we still lack non-circular identity conditions for mental representations.
My response is to deny that we must provide non-circular identity conditions for entities before admitting them into our discourse. Quotidian and scientific discourse posit diverse entities: properties, events, persons, words, species, symphonies, cities, etc. Outside the realm of extensional mathematics taken by Quine as a paradigm, we can rarely supply anything like non-circular identity conditions. Identity conditions are helpful when available, but they are not mandatory. This is true for entities in general, and it is true for types specifically. Taxonomic schemes found in quotidian and scientific discourse rarely come associated with explicit necessary and sufficient conditions. To pick only one example, we have nothing like non-circular identity conditions for natural language words (e.g., Bromberger 2011; Hawthorne and Lepore 2011). Few philosophers would argue on that basis for banishing natural language words from our ontology. Likewise, non-circular identity conditions are not needed for mental representations to play a valuable role within psychological theorizing. Cognitive science practice demonstrates that we can make substantial explanatory progress by reifying mental representations absent non-circular identity conditions.
Luckily, one can illuminate how mental representations are individuated without furnishing non-circular identity conditions. As a fairly straightforward example, consider the co-referring representations SUCC(SUCC(SUCC(SUCC(ZERO)))) and PLUS(SUCC(SUCC(ZERO)), SUCC(SUCC(ZERO))). These types mark the respective exercise of complex representational capacities. We can say pretty explicitly how the capacities differ, because we can describe how each capacity decomposes into the exercise of simpler

capacities. More generally: once we identify a mental representation as complex, we can usually then analyze the complex representational capacity correlated with it. Whether or not a mental representation is complex, we can often say something helpful about the corresponding representational capacity. A few examples:
• Singular terms. Over the past century, we have learned a lot about representational capacities corresponding to Mentalese singular terms. Most importantly, Kripke (1980) shows that you can have such a capacity without knowing descriptive information that distinguishes the denotation from other possible denotations. You can think about Kurt Gödel as Kurt Gödel without entertaining or grasping any definite description that discriminates Kurt Gödel from other people.
• Predicates. You can think about elm trees as elm trees even though you have no knowledge that differentiates elm trees from beech trees (Putnam 1975b). You can think about arthritis as arthritis even if you think arthritis is a disease that occurs in muscles rather than joints (Burge 2007). In general, you can exercise the representational capacity corresponding to a Mentalese predicate even if you lack discriminating information about the predicate's extension, and despite significant false beliefs about the extension, so long as you are suitably embedded in an appropriate linguistic practice.
• Perceptual representations. Perceptual psychology sheds considerable light upon the representational capacities deployed during perception. Consider a vision-based perceptual representation of size s. As I argue elsewhere (Rescorla, forthcoming b), you can instantiate the representation in response to a wide range of possible retinal stimulations and despite large changes in Bayesian priors. The corresponding representational capacity is tied to your perceptual system's general capacity for estimating size based on one or more visual cues, not to the specific retinal stimulations that serve as inputs to visual estimation or to the specific Bayesian priors deployed during perceptual processing.
Obviously, these observations fall far short of non-circular identity conditions. Nevertheless, as such observations accrue, we gradually clarify how mental representations and their affiliated representational capacities should be individuated.

In my opinion, individuation of representational capacities is best elucidated on a case-by-case basis, by interrogating folk psychology or cognitive science regarding specific capacities. Insight is more likely to emerge from detailed study of particular examples than from grand attempts at overarching non-circular identity conditions. The present paper aims not to supply satisfactory elucidations but rather to delineate a framework within which elucidation can occur.11

5.7.  An Individuative Role for Representation Sections 5.5–​5.6 argued that C-​RTM sheds light upon some commitments that are widespread among proponents of RTM. I now show that C-​RTM diverges in at least one important respect from other contemporary versions of RTM. Say that an entity is semantically indeterminate when it does not have its meaning essentially. A semantically indeterminate entity could have had a different meaning without any change in its fundamental nature, identity, or essence. Say that an entity is semantically neutral when it bears an arbitrary relation to its meaning (assuming it even has meaning). A semantically neutral entity could have had arbitrarily different meaning, or no meaning at all, without any change in its fundamental nature, identity, or essence. Semantic neutrality entails semantic indeterminacy, but not vice versa: semantic indeterminacy entails that the entity could have had some different meaning, while semantic neutrality entails that it could have had any different meaning. Most communal representations are semantically neutral. For example, the word “dog” means dog, but it could just as well have meant cat, or anything else, or nothing at all. Over the past few decades, many philosophers have pursued a semantically indeterminate taxonomic scheme for mental representations. This approach originates with Fodor (1981). He holds that Mentalese expressions have formal syntactic properties, and he introduces an array of corresponding 11 Because my approach invokes modes of presentation, it accommodates many phenomena that modes of presentation are famously well-​suited to handle. Consider reference failure. Suppose we want to describe Le Verrier’s mental state when he conjectured that Vulcan orbits between Mercury and the sun. (In fact, there is no such planet as Vulcan.) We may posit a Mentalese word Vulcan that lacks any denotation. This Mentalese word marks the exercise of a defective representational capacity. The capacity is representational because it is a capacity to attempt reference to an object. The capacity is defective because the attempt fails: someone who exercises the capacity does not thereby succeed in referring to any object. Developing these remarks and bringing them into contact with the large literature on reference failure are tasks for another paper.

168  What Are Mental Representations? formal syntactic Mentalese types. A Mentalese syntactic type has representational import, but it does not have its representational import essentially. For example, Fodor posits a Mentalese word DOG that denotes dogs. According to Fodor, DOG could have had a different denotation had it played a different role in the thinker’s psychological activity. In his early writings, Fodor regarded formal syntax as semantically indeterminate but not semantically neutral. He claimed that Mentalese syntactic type constrains meaning while leaving meaning underdetermined (1981, 225–​253). Fodor’s later writings (1994, 2008) suggest the stronger thesis that Mentalese syntactic types are semantically neutral. Many authors explicitly advocate a semantically neutral taxonomic scheme for Mentalese syntax (e.g., Egan 1992, 446; Field 2001, 58; Haugeland 1985, 91, 117–​123; Pylyshyn 1984, 50). The contemporary consensus in favor of semantic indeterminacy departs from philosophical tradition. Historically, proponents of RTM tended to individuate mental representations in representational terms. For example, Ockham does not hold that one can “hive off ” a mental word’s semantic or representational properties and leave behind a theoretically significant formal syntactic residue. Philosophers did not traditionally regard mental representations as items subject to reinterpretation in the way that communal representations are subject to reinterpretation. Say that an entity is semantically permeated when it is not semantically indeterminate. A  semantically permeated entity is not a piece of formal syntax requiring an interpretation. Rather, its semantics is built into its inherent nature. It “comes with its meaning attached.” I will argue that mental representations as construed by C-​RTM are semantically permeated.

5.7.1. Semantically Permeated Mental Types
According to C-RTM, a mental representation is a type that marks the exercise of a representational capacity. The type is instantiated only when the thinker (or a mental subsystem) exercises the capacity. Tokens of the type must have whatever representational properties are implicated by that exercise. Thus, mental representations are individuated at least partly through their representational properties. Examples:
• Mentalese words. The Mentalese word DOG denotes dogs: it marks the exercise of a capacity to represent dogs within high-level thought. A mental event instantiates DOG only if it is an exercise of that capacity, to be which the event must represent dogs. Thus, DOG is not an uninterpreted formal item that could just as easily have denoted cats. A mental event that instantiates DOG must be an exercise of the corresponding representational capacity, so it must represent dogs rather than cats. The Mentalese word, by its inherent nature, already mandates one specific interpretation.
• Perceptual representations. Consider a perceptual representation S that marks the exercise of a capacity to represent sphericality. A perceptual state instantiates S only if it is an exercise of the corresponding capacity, to be which it must represent sphericality. So S could not have denoted another shape, let alone some other distal property. S, by its inherent nature, is instantiated only by mental states that represent sphericality.
• Cognitive maps. A cognitive map M marks the exercise of a capacity to represent some spatial layout. In order for M to be instantiated, the animal must exercise the correlated capacity. M could not have represented a different spatial layout, nor could it have had non-spatial representational import. M, by its inherent nature, is instantiated only by mental states that represent a particular spatial layout.
Similarly, the mental numerals from section 5.5.3 have built-in denotations, as specified by the compositional semantics. In general, mental representations are not subject to arbitrary reinterpretation. They are imbued with representational import by their inherent natures. My semantically permeated approach accords well with much of the historical literature even while it flouts current conventional wisdom.12
Semantically permeated types are individuated through their representational properties, but not all representational properties need play an individuative role. For example, a Mentalese sentence that has its truth-condition essentially need not have its truth-value essentially. LONDON NORTH-OF PARIS is true iff London is north of Paris, and this truth-condition is inherent to the Mentalese sentence. Whether the Mentalese sentence is true, on the other hand, depends on how the world is. It depends on whether London is north of Paris, which is not essential to the type. Whether London is north of Paris plays no role in individuating the complex type.

12 Burge pursues a similar approach, applied especially to concepts (2007, 292) and perceptual representations (2010a, 76).

170  What Are Mental Representations? For a subtler example where representational properties need not play an individuative role, consider indexicality. Looking at a cube, I may judge that that cube is blue and intend to grab that cube. My judgment’s truth-​condition depends on the specific contextually determined cube, as does my intention’s fulfillment-​ condition. Indexicality also arises pervasively in perception (Burge 2010a). If my perceptual state represents a perceived cube as blue, then the state’s accuracy-​condition depends upon the specific contextually determined cube. To focus the discussion, let us consider two thinkers A and B who are psychological duplicates except that they perceive distinct, qualitatively indiscernible cubes CA and CB. A intends to grab CA, while B intends to grab CB. The two intentions are the same in all relevant respects except that they represent distinct contextually determined cubes. Suppose we posit a mental demonstrative that figures in A’s intention. How should we individuate this demonstrative? Evans (1982) and McDowell (1998) individuate mental demonstratives in an object-​dependent way, so that A’s mental demonstrative is different from B’s corresponding mental demonstrative. Burge (2005) argues that we should sometimes individuate mental demonstratives in an object-​independent way, so that A and B instantiate the same mental demonstrative type. The main point I want to stress is that C-​RTM can accommodate both the object-​dependent and the object-​independent viewpoints. One can individuate representational capacities in either an object-​dependent or object-​independent fashion. From an object-​dependent viewpoint, A has a capacity to represent cube CA, while B has a different capacity to represent a different cube CB. From an object-​independent viewpoint, A and B share a common capacity to represent some perceptually presented cube, and this common capacity determines different denotations when exercised in different contexts. If we adopt the object-​dependent viewpoint, we posit a mental demonstrative that comes with its specific denotation attached. If we adopt the object-​independent viewpoint, we posit a mental demonstrative that comes with certain representational properties attached (e.g., the property of being a mental demonstrative), not including its contextually determined denotation. C-​RTM allows both viewpoints. It also allows theorists to adopt the first viewpoint for some projects and the second viewpoint for other projects. Thus, C-​RTM is equally friendly to the object-​dependent and object-​independent individuative schemes. Semantic permeation does not

Reifying Representations  171 entail that a mental representation has its context-​dependent representational properties essentially.13

5.7.2.  Representation as Explanatorily Central Why do I  adopt a semantically permeated taxonomic scheme for mental representations? Why individuate mental representations in representational terms? The main reason is that I want to track how explanation proceeds within cognitive science. Mental representations are abstract entities whose primary role in our discourse is to facilitate taxonomization of mental events for explanatory purposes. When we decide how to individuate these entities, explanation should be our main touchstone. How we taxonomize mental events affects which explanations we can provide, so we should choose a taxonomic scheme that underwrites good explanations. Moreover, our best guide to good psychological explanation is actual explanatory practice within scientific psychology. As I urged in section 5.2, numerous areas of cognitive science classify mental events through their representational properties. Representation occupies explanatory center stage in current scientific theories of perception, motor control, mammalian navigation, high-​level cognition, linguistic communication, and numerous other core mental phenomena. The theories cite representational properties so as to sort mental events into types. I reify the types by positing semantically permeated mental representations. I thereby codify current cognitive science practice in ontologically loaded terms. In some areas, such as Bayesian perceptual psychology, the science already posits semantically permeated mental representations. Of course, one might employ a semantically neutral taxonomic scheme in addition to a semantically permeated taxonomic scheme. One might classify mental events in representational terms for certain explanatory purposes but not for other explanatory purposes. To what extent does cognitive science practice actually involve non-​representational taxonomization of representational mental events? 13 An object-​independent version of C-​RTM requires that we revise (Δ) to allow for context-​ dependent denotations. One option is to emend (Δ) along the following lines: R denotes d in context γ iff R marks the exercise of some representational capacity C and any exercise of C in context γ is a mental event that represents d.

172  What Are Mental Representations? Non-​ representational description figures crucially in neuroscience. Neuroscientists frequently adduce firing rates, action potentials, and other such non-​representational aspects of neural activity. A typical neural event type is semantically neutral. It could have had any arbitrarily different representational import (or none at all) depending on its role in the cognitive system as a whole. Neurophysiological description leaves open which if any representational properties neural events have. However, philosophers who espouse semantically indeterminate mental representations do not usually envisage a neurophysiological taxonomic scheme. Building on Putnam’s (1975b) critique of type-​physicalism, these philosophers think that psychological description should be multiply realizable in the neural. They think that psychological description should admit wildly different neural instantiations. Mentalese syntax, like psychological description more generally, is supposed to be multiply realizable. Accordingly, advocates of formal mental syntax pursue a taxonomic scheme for mental representations that does not cite neural properties but instead cites multiply realizable psychological properties (Fodor 2008, 91; Haugeland 1985, 5; Stich 1983, 151). In my opinion, current cognitive science does not support any such formal syntactic taxonomic scheme (Rescorla 2017b). The proposed scheme plays no role within current scientific theories of perception, motor control, mammalian navigation, or numerous other core mental processes. Researchers describe these processes in representational terms. They also try to illuminate how representational mental activity is grounded in underlying neural activity. Researchers describe the processes through multiply realizable representational descriptions and non-​ representational descriptions that are not multiply realizable. They do not employ multiply realizable non-​ representational descriptions. For example, perceptual psychology describes perceptual inference in representational terms as opposed to formal syntactic terms. (Cf. Burge 2010a, 95–​97.) Based on current scientific practice, I see little evidence that we can “hive off ” a mental representation’s representational import and isolate an explanatorily significant formal syntactic residue. Perhaps cognitive science as it evolves will eventually individuate mental representations in formal syntactic fashion. Current science provides little reason to expect so.14 14 An exception is Carey’s (2009) work on concept acquisition, which postulates something like formal syntactic Mentalese types. On Carey’s approach, a child can acquire a concept through exposure to a new “explicit symbol,” such as a natural language word. “The capacity for explicit symbolization makes possible the creation of mental symbols that are not yet connected to anything in

Reifying Representations  173 I am skeptical about formal syntactic description of representational mental events, not formal syntactic description of mental events more generally. Cognitive scientists deploy multiply realizable non-​representational description to explain some mental phenomena, such as certain kinds of low-​level insect navigation (Rescorla 2013). When regimenting these explanations, it is natural to postulate semantically indeterminate mental event types. However, the relevant mental events are not representational. They do not represent the world as being a certain way. I doubt that formal syntactic description of representational mental events offers any explanatory benefit to cognitive science explanation. The central issue here is explanation, not existence. No doubt we can describe representational mental events in the formal syntactic terms favored by Fodor. We can also posit semantically indeterminate types corresponding to a formal syntactic taxonomic scheme. What I question is whether the scientific study of mental representation gains any explanatory value by positing such types or by employing the taxonomic scheme that they embody. Over the past few decades, philosophers have advanced various arguments that any complete cognitive science requires multiply realizable, semantically indeterminate descriptions of representational mental events. I believe that all these arguments fail. I have critiqued the most prominent arguments elsewhere (Rescorla 2017a). Philosophers commonly emphasize a sharp distinction between representational vehicles and representational contents. For example, Fodor (1981, 1994, 2008) regards Mentalese syntactic types as vehicles that express contents. It may seem that the items I am calling “mental representations” should not be so-​called, because they are closer to contents than to vehicles. In particular, their semantically permeated character may seem to render them more “content-​like” than “vehicle-​like.” Personally, I see no great import in whether we classify mental representations construed in my terms the world . . . [M]‌ental symbols are established that correspond to newly coined or newly learned explicit symbols. These are initially placeholders, getting whatever meaning they have from their interrelations with other symbols” (Carey 2009, 474). Through analogical reasoning, induction, and other techniques, the child “bootstraps” her way from placeholder symbols to new concepts. Placeholder symbols are individuated in semantically indeterminate (perhaps even semantically neutral) fashion. They are uninterpreted items that become endowed with denotations only during the bootstrapping process. If Carey is right, then formal syntactic taxonomization should play a central role in any complete theory of concept acquisition. However, Carey’s approach is controversial (e.g., Rips and Hespos 2011), and the crucial notion of “bootstrapping” remains murky. I doubt that scientific research into concept acquisition (as opposed to perception, motor control, navigation, linguistic comprehension, causal learning, and various other psychological domains) is sufficiently developed to support solid conclusions about the individuation of mental representations.

174  What Are Mental Representations? as vehicles, contents, both, or neither. Excessive focus upon the locutions “vehicle” and “content” has deflected attention from the more fundamental question of which taxonomic schemes best serve psychological explanation. Mental representations as I  construe them are perfectly tailored to serve good psychological explanation, because they are directly grounded in the representational capacities already presupposed by our best cognitive science explanations.

5.8.  A Framework for Studying Mental Representations C-​ RTM vindicates mental representations as scientifically respectable and metaphysically harmless. It clarifies how they combine to form complex representations. It elucidates differences among co-​referring mental representations. Most importantly, it promotes a traditional semantically permeated perspective on their individuation. By linking mental representations to the exercise of representational capacities, it honors the pivotal role that representational properties play within psychological explanation. C-​RTM assigns representationality its rightful centrality within the philosophical study of mental representations, dispelling the explanatorily idle formal syntactic properties that typically mar contemporary expositions. I have offered a framework for inquiry, not a finished theory. The framework raises numerous issues that merit further scrutiny. In future work, I will enhance the framework with supplementary theoretical reflections and more detailed case studies.

Acknowledgments I am grateful to Jacob Beck, Eric Mandelbaum, Marcus Giaquinto, Mark Sainsbury, and two anonymous referees for this volume for comments that greatly improved the paper. I presented versions of this material at the University of Pittsburgh Center for Philosophy of Science; the Institute of Philosophy, London; Ohio State University; the University of California, Irvine; and the 2018 Workshop on Perceptual Capacities and Psychophysics at Rutgers University. I  thank the audience members on those occasions for their helpful feedback, especially Casey O’Callaghan, David Papineau, Christopher Peacocke, Jake Quilty-​Dunn, Richard Samuels, Nicholas Shea,

and Declan Smithies. My research was supported by a fellowship from the National Endowment for the Humanities. Any views, findings, conclusions, or recommendations expressed in this publication do not necessarily reflect those of the National Endowment for the Humanities.

References

Armstrong, D. 1980. A Theory of Universals. Vol. 2 of Universals and Scientific Realism. Cambridge: Cambridge University Press.
Beck, J. 2012. The Generality Constraint and the Structure of Thought. Mind 121: 563–600.
Beck, J. 2013. Sense, Mentalese, and Ontology. Protosociology 30: 29–48.
Bennett, K. 2013. Having a Part Twice Over. Australasian Journal of Philosophy 91: 83–103.
Bromberger, S. 2011. What Are Words? Comments on Kaplan (1990), on Hawthorne and Lepore, and on This Issue. Journal of Philosophy 108: 486–503.
Burge, T. 2005. Disjunctivism and Perceptual Psychology. Philosophical Topics 33: 1–78.
Burge, T. 2007. Foundations of Mind. Oxford: Oxford University Press.
Burge, T. 2009. Five Theses on De Re States and Attitudes. In J. Almog and P. Leonardi (eds.), The Philosophy of David Kaplan, 246–316. Oxford: Oxford University Press.
Burge, T. 2010a. Origins of Objectivity. Oxford: Oxford University Press.
Burge, T. 2010b. Steps toward Origins of Propositional Thought. Disputatio 4: 39–67.
Carey, S. 2009. The Origin of Concepts. Oxford: Oxford University Press.
Chemero, A. 2009. Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.
Churchland, P. M. 1981. Eliminative Materialism and the Propositional Attitudes. Journal of Philosophy 78: 67–90.
Davis, W. 2003. Meaning, Expression, and Thought. Cambridge: Cambridge University Press.
Davis, W. 2014. On Occurrences of Types in Types. Australasian Journal of Philosophy 92: 349–363.
Egan, F. 1992. Individualism, Computation, and Perceptual Content. Mind 101: 443–459.
Evans, G. 1982. The Varieties of Reference. Oxford: Clarendon Press.
Field, H. 2001. Truth and the Absence of Fact. Oxford: Clarendon Press.
Fodor, J. 1975. The Language of Thought. New York: Thomas Y. Crowell.
Fodor, J. 1981. Representations. Cambridge, MA: MIT Press.
Fodor, J. 1987. Psychosemantics. Cambridge, MA: MIT Press.
Fodor, J. 1994. The Elm and the Expert. Cambridge, MA: MIT Press.
Fodor, J. 2008. LOT 2: The Language of Thought Revisited. Oxford: Oxford University Press.
Fodor, J., and Pylyshyn, Z. 1988. Connectionism and Cognitive Architecture: A Critical Analysis. Cognition 28: 3–71.
Frege, G. 1892/1997. On Sinn and Bedeutung. In M. Beaney (ed.), M. Black (trans.), The Frege Reader, 151–171. Malden, MA: Blackwell.
Gallistel, C. R. 1990. The Organization of Learning. Cambridge, MA: MIT Press.
Gallistel, C. R. 1999. Coordinate Transformations in the Genesis of Action. In B. Bly and D. Rumelhart (eds.), Cognitive Science: Handbook of Perception and Cognition, 2nd ed., 1–42. New York: Academic Press.
Gallistel, C. R., and Gelman, R. 2005. Mathematical Cognition. In K. Holyoak and R. Morrison (eds.), The Cambridge Handbook of Thinking and Reasoning, 559–588. Cambridge: Cambridge University Press.
Geach, P. 1957. Mental Acts. London: Routledge and Paul.
Haugeland, J. 1985. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.
Hawley, K. 2010. Mereology, Modality and Magic. Australasian Journal of Philosophy 88: 117–133.
Hawthorne, J., and Lepore, E. 2011. On Words. Journal of Philosophy 108: 447–485.
Heim, I., and Kratzer, A. 1998. Semantics in Generative Grammar. Malden, MA: Blackwell.
Helmholtz, H. von. 1867. Handbuch der Physiologischen Optik. Leipzig: Voss.
Knill, D., and Richards, W., eds. 1996. Perception as Bayesian Inference. Cambridge: Cambridge University Press.
Kosslyn, S. 1980. Image and Mind. Cambridge, MA: Harvard University Press.
Kripke, S. 1980. Naming and Necessity. Cambridge, MA: Harvard University Press.
Lewis, D. 1999. Against Structural Universals. In Papers in Metaphysics and Epistemology, 78–107. Cambridge: Cambridge University Press.
Linnebo, Ø. 2018. Thin Objects: An Abstractionist Account. Oxford: Oxford University Press.
Madl, T., Chen, K., Montaldi, D., and Trappl, R. 2015. Computational Cognitive Models of Spatial Memory in Navigation Space: A Review. Neural Networks 65: 18–43.
McDowell, J. 1998. Meaning, Knowledge, and Reality. Cambridge, MA: Harvard University Press.
Ockham, W. 1957. Summa Logicae. In P. Boehner (ed. and trans.), Philosophical Writings: A Selection. London: Nelson.
O'Keefe, J., and Nadel, L. 1978. The Hippocampus as a Cognitive Map. Oxford: Clarendon Press.
Parsons, C. 2008. Mathematical Thought and Its Objects. Cambridge: Cambridge University Press.
Peacocke, C. 1992. A Study of Concepts. Cambridge, MA: MIT Press.
Prinz, J. 2011. Has Mentalese Earned Its Keep? On Jerry Fodor's LOT 2. Mind 120: 485–501.
Putnam, H. 1975a. Mathematics, Matter, and Method. Vol. 1 of Philosophical Papers. Cambridge: Cambridge University Press.
Putnam, H. 1975b. Mind, Language, and Reality. Vol. 2 of Philosophical Papers. Cambridge: Cambridge University Press.
Pylyshyn, Z. 1984. Computation and Cognition. Cambridge, MA: MIT Press.
Quine, W. V. 1960. Word and Object. Cambridge, MA: MIT Press.
Quine, W. V. 1969. Ontological Relativity and Other Essays. New York: Columbia University Press.
Quine, W. V. 1980. From a Logical Point of View, 2nd ed. Cambridge: Harvard University Press.
Quine, W. V. 1981. Theories and Things. Cambridge: Harvard University Press.
Quine, W. V. 1995. From Stimulus to Science. Cambridge: Harvard University Press.
Rescorla, M. 2009. Cognitive Maps and the Language of Thought. British Journal for the Philosophy of Science 60: 377–407.
Rescorla, M. 2013. Millikan on Honeybee Navigation and Communication. In D. Ryder, J. Kingsbury, and K. Williford (eds.), Millikan and Her Critics, 87–102. Malden: Wiley-Blackwell.
Rescorla, M. 2015. Bayesian Perceptual Psychology. In M. Matthen (ed.), The Oxford Handbook of the Philosophy of Perception, 694–716. Oxford: Oxford University Press.
Rescorla, M. 2016. Bayesian Sensorimotor Psychology. Mind and Language 31: 3–36.
Rescorla, M. 2017a. From Ockham to Turing—and Back Again. In A. Bokulich and J. Floyd (eds.), Turing 100: Philosophical Explorations of the Legacy of Alan Turing, 279–304. Berlin: Springer.
Rescorla, M. 2017b. Levels of Computational Explanation. In T. Powers (ed.), Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics, 5–28. Berlin: Springer.
Rescorla, M. 2018. Maps in the Head? In K. Andrews and J. Beck (eds.), The Routledge Handbook of Philosophy of Animal Minds, 34–45. New York: Routledge.
Rescorla, M. 2020a. How Particular Is Perception? Philosophy and Phenomenological Research 100: 721–727.
Rescorla, M. 2020b. A Realist Perspective on Bayesian Cognitive Science. In A. Nes and T. Chan (eds.), Inference and Consciousness, 40–73. New York: Routledge.
Rescorla, M. Forthcoming a. Bayesian Modeling of the Mind: From Norms to Neurons. WIREs Cognitive Science.
Rescorla, M. Forthcoming b. Perceptual Co-reference. Review of Philosophy and Psychology.
Rips, L. 1994. The Psychology of Proof: Deductive Reasoning in Human Thinking. Cambridge, MA: MIT Press.
Rips, L., and Hespos, S. 2011. Rebooting the Bootstrap Argument: Two Puzzles for Bootstrap Theories of Concept Development. Behavioral and Brain Sciences 34: 135–136.
Schneider, S. 2011. The Language of Thought: A New Philosophical Direction. Cambridge, MA: MIT Press.
Skinner, B. F. 1938. The Behavior of Organisms. New York: Appleton-Century-Crofts.
Soames, S. 2010. What Is Meaning? Princeton, NJ: Princeton University Press.
Stalnaker, R. 1984. Inquiry. Cambridge, MA: MIT Press.
Stich, S. 1983. From Folk Psychology to Cognitive Science. Cambridge, MA: MIT Press.
Tolman, E. 1948. Cognitive Maps in Rats and Men. Psychological Review 55: 189–208.
Trommershäuser, J., Körding, K., and Landy, M., eds. 2011. Sensory Cue Integration. Oxford: Oxford University Press.
van Gelder, T. 1992. What Might Cognition Be, If Not Computation? Journal of Philosophy 92: 345–381.
Wetzel, L. 2009. Types and Tokens. Cambridge, MA: MIT Press.

6
Situated Mental Representations
Why We Need Mental Representations and How We Should Understand Them
Albert Newen and Gottfried Vosgerau

6.1. Why Posit Mental Representations?

Mental representations are a means to explain behavior. This, at least, is the idea on which cognitive (behavioral) science is built: that there are certain kinds of behavior, namely minimally flexible behavior, which cannot be explained by appealing to stimulus-response patterns. Flexible behavior is understood as behavior that can differ even in response to one and the same type of stimulus or that can be elicited without the relevant stimulus being present. Since this implies that there is no simple one-to-one relation between stimulus and behavior, flexible behavior is not explainable by simple stimulus-response patterns. Thus, some inner processes of the behaving system (of a minimal complexity) are assumed to have an influence on what kind of behavior is selected given a specific stimulus. These inner processes (or states) are then taken to stand for something else (features, properties, objects, etc.) and are hence called "mental representations." They are presupposed for two main reasons, (1) to account for flexible reactions to one and the same stimulus and (2) to account for behavior triggered when the relevant entities are not present. The latter case is highlighted by J. Haugeland: "if the relevant features are not always present (detectable), then they can, at least in some cases, be represented; that is, something else can stand in for them, with the power to guide behavior in their stead. That which stands in for something else in this way is a representation" (Haugeland 1991, 62). In this sense—which is a very liberal and wide sense of "representation" that does not imply a specific structure or degree of complexity of


Situated Mental Representations  179 mental states—​we have to posit some kinds of mental representation so as to explain flexible behavior.1 If the behavior is not flexible but reflex-​like and only elicited in the presence of the stimulus, behavioristic explanations without the posit of mental representations will suffice. This is not to say that the details of the mechanisms do not play a role in the explanation of the behavior. For example, there are smoke detectors that contain a little light source and a light-​dependent resistor, so that they react on everything that leads to less light on the resistor, including smoke. These details are of course relevant to understanding the details of the behavior of the smoke detector. However, they do not justify calling the light in the smoke detector a representation of non-​smoke or anything like this. The same is true, e.g., for sea bacteria with magnetosomes or the dogs in Pavlov’s experiments. This is, at least, the picture that we believe cognitive science to be built on2 and the picture we want to defend here. We presuppose a standard concept of representation according to which an MR has at least two features, namely being a vehicle and having a content. If someone wants to reduce MRs to just the aspect of being a vehicle and argue that the content is irrelevant, then this makes the concept of MRs superfluous. The aspect of the vehicle can be adequately accounted for by the description of a mechanism without the aspect of content. According to our view, we need MRs only if the scientific explanation of flexible behavior needs to involve the aspect of content. There is one group of philosophers challenging the view that content can play an explanatory role since it seems unclear how, as Dretske (1988) puts it, content gets its hand on the steering wheel, or more precisely, how contents can play a causal role in a mechanism. Let us outline our strategy: we will argue for a systematic interdependence between vehicle and content of an MR; we accept that only the vehicle is causally involved in the relevant mechanism(s). But the content of an MR is part of a causal explanation which is available at different levels which are more coarse-​grained than the level of the mechanism. Even if there is only one causal mechanism in which the vehicle plays a causal role, there are many causal explanations at different levels which are all systematically related (in different ways) to this causal mechanism which guarantees that 1 This idea can also be found in the pioneering work that led to the assumption of mental representations, e.g., Tolman and Honzik 1930 and Tolman 1948 (cf. Vosgerau 2010; Vosgerau and Soom 2018). 2 In fact, cognitive science started as a reaction to behaviorism, which could not explain the behavior of rats (see also footnote 1).

180  What Are Mental Representations? the content explanations are also coarse-​grained causal explanations (further clarification in section 6.5.4). Recently, some philosophers have challenged the possibility of contents playing any explanatory role, arguing that we should get rid of the concept of mental representation. According to many enactivists (e.g., Hutto and Myin 2013, 2017) all kinds of behavior are to be described without presupposing any mental representation. Linguistic representations are still assumed to be real representations and to be needed in the explanation of some kinds of behavior; however, they are no longer classified as mental representations. We will come back to this challenge later, since it is deeply interwoven with the question of where to draw the line between nonrepresentational and representational cognitive abilities. To account, in principle, for the critical attitude of enactivism, we focus on nonlinguistic mental representations which are needed to explain the behavior of cognitive systems. We argue that we actually need to presuppose nonlinguistic mental representations to explain flexible behavior in animals, in prelinguistic children, and also in adults. But these mental representations have to be understood in a new way, namely as real, nonstatic, use-​dependent, and situated mental representations relative to a certain behavior or cognitive ability. MRs are real in the sense that their functional roles are realized by mechanistic relations involving the relevant vehicle and its components; and this realization can involve different levels, such as the neural level, but also the bodily or further physical and social levels. They are nonstatic in the sense that the variation between the realization basis of two tokens of an MR type can be huge: there are no necessary and together sufficient features determining the relevant behavior or cognitive ability to be explained, as we will show.3 Rather, the realization basis is an integrated pattern of characteristic features that can vary greatly because a single integrated token can involve features from the package of characteristic features that are not included in another token and vice versa (e.g., as argued for emotions in Newen, Welpinghus, and Juckel 2015). They are use-​dependent in the sense that the coarse-​grained content of an MR (what it is about) depends on what the MR is used for, i.e., what the behavior is targeting (given an explanatory interest). MRs are situated in the double sense that (1) we do not allow only for neural states as vehicles (but also for 3 As we will see, there are of course very specific neural mechanisms in play at a low level of processing, but they are usually involved in the realization of rather many different types of flexible behavior or cognitive abilities and are thus not sufficient (or necessary) for the specific behavior. We will elaborate on this in section 6.5.4.

Situated Mental Representations  181 a combination of neural and bodily states), and (2) one and the same MR can have different specific fine-​grained contents (format of representation) depending on the detailed explanatory level. To elaborate on these four features (being real, nonstatic, use-​dependent, and situated), we will develop them by critically delineating our account from the main alternative accounts of MRs, instead of discussing the features one by one. First we describe the functionalist framework of our account (section 6.2), then introduce what-​where-​when memory in birds and rats as a key example as in need of involving nonlinguistic MRs (section 6.3). Then we analyze the central deficits of three leading theories of MR, namely the realist account of Fodor, the Ersatz-​representationalism of Chomsky, and the mathematical pragmatist account of Egan (section 6.4). Our alternative account (developed in section 6.5) can be called situated mental representation: it combines a functionalist account of representation with a relational dimension that can vary with the situation type and that allows nonstatic constructions of mental representations in specific contexts. We spell out our account of MRs according to which we need to distinguish vehicle, content, and format of MRs and especially account for the nonstatic construction of the vehicle. Of course we also deliver some fruitful applications.

6.2.  A Minimal Characterization of  Mental Representations and the Exclusion of Nonfunctionalist and Unitary Feature Accounts There is some consensus concerning the “job description” for mental representations. (1) MRs should be introduced only if we need them to explain a certain type of minimally flexible behavior of cognitive systems.4 (For this paper, we leave aside cases relying on natural language.) But we should not adopt an ideological stance on which mental representations are presupposed as definitional for cognitive science. We need to be open-​ minded about alternative nonrepresentational explanations of cognitive abilities (Ramsey 2017). Furthermore, it is commonly agreed that (2) MRs

4 In effect, this is the same as saying that representations can be wrong. In the case of non-flexible behavior, such as reflexes, no mistake is possible. It doesn't make sense to say that Pavlov's dog dribbled mistakenly. So, this constraint is closely connected to the widely agreed upon statement that there is no representation without misrepresentation.

182  What Are Mental Representations? are asymmetric (if r represents an object o, then o does not represent r) and (3) MRs have to allow for misrepresentation. These three conditions allow us to exclude three radical views of MR. Regarding (3):  we can exclude a purely causal theory of MR. MRs cannot be explained by causal relations alone because there is no such thing as miscausation. Moreover, often the same effect can have different causes, which leads to underspecification, making MRs useless for explaining behavior.5 More generally, causation is usually not connected to representation: an avalanche could be caused by a falling tree (while the latter can be caused by a strong wind), but the avalanche represents neither the falling tree nor the strong wind nor both together. Even if we have a strict and reliable causal relation, such that one type of effect can only be produced by one type of cause, there is still no representation because there would be no possibility of misrepresentation (even if we imagined smoke being caused by fire and by nothing else, it is still not true that smoke would represent fire). Furthermore, MRs of nonexisting things like vampires cannot be caused by what they represent. Regarding (2): we can also exclude a pure similarity theory of MR. If r is said to represent an object o just because r is similar in certain respects to o, we lose the feature of asymmetry of MRs because similarity is in general a symmetric relation. Regarding (1): finally, one may argue for a purely conventionalist theory of MR such that r represents o if and only if there is a convention in a group such that r is by this convention connected with o—​the paradigm case here are linguistic conventions, e.g., a name representing a person. But to assume that every MR is a linguistic representation would clearly be a case of over-​intellectualizing behavior: many types of flexible behavior are not dependent on the existence of linguistic rules, especially flexible behavior in prelinguistic children and nonhuman animals. Thus, we need to presuppose MRs which are not anchored in social or linguistic conventions even if some MRs might be linguistic. As already mentioned, radical enactivists (Hutto and Myin 2013, 2017)  challenge the assumption that there are any nonlinguistic MRs, arguing that we can explain all kinds of behavior with the help of linguistic representations or no mental representations at all. Our argument against this view is based on examples of behavior by nonhuman animals or 5 This is basically the so-​called disjunction problem (Fodor 1987). His “solution” to the problem is to formulate asymmetric dependencies between the different causal relations, which is rather a sophisticated way of restating the problem than actually solving it. What we would need is an explanation of where the asymmetric dependencies come from (cf. Vosgerau 2009).

Situated Mental Representations  183 prelinguistic children that are best explained by reference to nonlinguistic representations. What exactly is the difference between a position assuming nonlinguistic mental representations, and the radical enactivists’ denial of them? Although they are anti-​representationalists, radical enactivists do not deny that there are internal neural processes which cause nonlinguistic cognitively guided behavior, but they mainly deny (a)  that these internal neural processes are systematically connected with a type of behavior and (b) that those internal neural processes are vehicles for contents, i.e., mental representations. Rather, they argue that it is adequate and sufficient to describe such behavior in terms of affordances and dispositions. In particular, it would be both unnecessary and impossible to suppose that some neural correlates have the systematic functional role of “standing in for something else,” i.e., that they have content: “Fundamentally, by REC’s [Radical Enacted Cognition] light, basic cognition is a matter of sensitively and selectively responding to information, but it does not involve picking up and processing information or the formation of representational contents” (Hutto and Myin 2017, 92). On the contrary, representationalists argue that there is evidence that some neural, corporeal, or clusters of neural and corporeal states have systematic functional roles such that it is justified to ascribe content to them. The ascription of content is justified because this content is explanatorily fruitful and even the best tool we can get for certain types of explanation, specifically for the explanation of some kinds of flexible behavior. This is the core of the debate. Our examples of nonlinguistic abilities are supposed to furnish evidence for systematic functional roles as being implemented in some sufficiently distinguishable neural processes or cluster of neural processes (or further conditions of implementation), and show that these are best understood as mental representations.6 So far, we have excluded purely causal, purely similarity-​based, and purely convention-​based accounts of MR, and promised to show that the purely linguistic account is also insufficient. We now want to set out and defend a specific functionalist account. To do this, we first introduce one key example which also has the function of demonstrating that we need to presuppose nonlinguistic MRs.

6 Although we focus on neural correlates as the main implementation basis, we are not committed to a strict internalism. The debate over internalism and externalism is independent of the debate on mental representations (Vosgerau and Soom 2018; Vosgerau 2018), and the latter is the sole topic of this article.


6.3. A Key Example: Spatial Navigation and Food Finding in Rats

Scrub jays (Clayton and Dickinson 1998, 2007), rats (Crystal 2013), and probably more species have episodic-like memory. Let us focus on rats, since this case has been intensively investigated. Trained in an eight-arm maze, they are able to register three daylight situations (morning, midday, evening), to distinguish different types of food (normal food and chocolate), and of course to register the spatial organization of the maze. Rats swiftly learn to expect a certain type of food in a certain arm, and can also combine this with a certain time of day. They can even learn to understand that if there is normal food in arm2 in the morning, there will be chocolate in arm7 at midday (Crystal 2013; Panoz-Brown et al. 2016). Why do we have to presuppose MRs to explain the behavior of the rats? Why are behavioral dispositions not sufficient? Rats in this case are shown to be able to register and systematically distinguish at least two types of food, eight locations, and three day times. The best explanation is to postulate an informational state with a (partially) decoupled representation of the type of food that can be combined with representations carrying information about different daytimes and different locations. The alternative would be to posit as many different internal states or independent affordances as there are combinations of the different parameters. This explanation, however, is not compatible with the fast learning curve of the rats. Therefore, an explanation in terms of specific dispositions or affordances, which the rat would have to acquire or learn one by one, is not only incomplete but also inadequate. The informational state of rats which have learned to behave according to a conditional in the maze is best characterized as structured into components of <food type, location, time>. The alternative would be to presuppose a high number of independent, non-structured dispositions which need to include all the possible permutations of associations between a starting state of affairs and a type of behavior. And these dispositions would need to be learned independently of each other, since there would be no common component to be taken over. We cannot decide on the basis of intuitions which explanatory strategy counts as more parsimonious (Starzak 2017), but we need additional facts to provide a foundation for the best explanation. Let us first look at the neural processing evidence: earlier studies have shown that rats (at least a large group of them) can spatially orient themselves based on landmarks (realized by the hippocampus), in contrast to a group of rats which rely on rigid procedural

knowledge (realized by basal ganglia) (Packard and McGaugh 1996). Rats using landmark orientation are able to adjust their behavior when they have to start from different positions in the maze: this indicates that they remember and recognize landmarks and can combine that with different types of behavior (taking the second right from one direction, or taking the second left from the opposing direction). This indicates that we have to presuppose nonlinguistic representations. This is not an over-intellectualization, because the presupposition of structured representations of <food type, location, time> is also supported by further neuroscientific evidence. There is strong evidence that rats are able to represent three-dimensional objects (Burke et al. 2012), while the relevant behavior correlates with activity of the perirhinal cortex. Furthermore, well-corroborated research suggests that location is represented by either place cells or their combination with grid cells (Bjerknes et al. 2018; Moser, Kropff, and Moser 2008) in such a way that a sequence of places which a rat has actually visited leads to a sequence of place cell activation. Even more convincingly, there is evidence that such sequences of place cell activations can be reactivated without the rat actually being in the same scene, which is usually interpreted as a replay of a spatial path that was taken before (Káli and Dayan 2004; Jezek et al. 2011). Rats also seem to represent time as a feature of an event (Tsao et al. 2018). Taken together, there is overwhelming evidence that we should presuppose nonlinguistic mental representations in rats structured as <food type, location, time>. An additional argument for structured representations is drawn from observations of learning and relearning by rats in the previously discussed scenario: rats can quickly learn new conditionals in the eight-arm maze, or, even more demandingly, are able to understand the eight-arm maze as having been newly restructured (e.g., rats can also learn conditional behavior in a freely accessible eight-arm maze, and understand in a certain context that only the arms numbered 1, 3, 5, and 7 are part of the game, while the others are irrelevant for inspection [Crystal 2013]). Thus, the informational state of the rat has components which are structured and can be flexibly combined. This allows for an interesting degree of epistemic flexibility and predicts a quick learning of new combinations that cannot adequately be described as a multiplicity of independent, non-structured dispositions. What follows is that the best description and explanation of the rats' behavior requires MRs with content. Nevertheless, a defender of radical enacted cognition could reply that "[b]asic minds target, but do not contentfully represent, specific objects and states of affairs. To explicate how this could be so, Hutto and Satne (2015)

186  What Are Mental Representations? introduced the notion of target-​focused but contentless Ur-​Intentionality” (Hutto and Myin 2017, 92). The general line of argument is that basic cognition (which includes all nonlinguistic cognition [Hutto and Myin 2013]) can consist in target-​directed behavior without involving mental representations because basic intentionality can be realized without contents (Hutto and Myin 2017, chs. 5, 6). We partially grant this thesis, which leads to a clarification: in the case of rigid mechanisms, there is indeed no need to presuppose mental representations. For example, the navigation of sea bacteria with the help of magnetosomes can be completely explained and predicted without mentioning mental representations. We also know that rigid mechanisms can produce quite impressive behavior in an adequate environment (Gigerenzer 2000).7 But, contrary to the claim of radical enactivists, not all nonlinguistic behavior can be accounted for with this move. And this is what our key example should demonstrate: first, it has been shown that there are systematic underlying neural processes which correlate with type of food, location of the food, and the time of the day. Second, these informational states implemented by neural states are reused by the rat when realizing memory-​ based behavior, when quickly learning a new conditional in the maze, or when learning to apply the same conditional in a newly structured maze. The structure of the underlying neural processes can best be described to include representations of the type of food, representations of the location of the food, and representations of the time of day, since there are systematic underlying neural processes and they are best understood as bearers of information which can be easily systematically recombined and guide, specifically, quick learning and flexible behavior. Finally, it is obvious, yet still worth mentioning, that these are not linguistic representations. We thus take it that nonlinguistic MRs are needed, and move now to the task of detailed and consistent description.
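The combinatorial point made in this section can be given a concrete illustration. The following sketch is our own toy rendering of the eight-arm maze scenario, not a reconstruction of Crystal's experimental design or of any rat-level mechanism; the class names, the encoding of episodes, and the single conditional rule are hypothetical. It is only meant to show why a structured representation with reusable components of <food type, location, time> predicts fast recombination and learning, whereas a flat stock of independent dispositions has to enumerate every combination separately.

```python
from dataclasses import dataclass
from itertools import product
from typing import Optional

# Hypothetical encoding of the parameters registered in the eight-arm maze scenario.
FOOD_TYPES = ["normal", "chocolate"]
ARMS = [f"arm{i}" for i in range(1, 9)]
TIMES = ["morning", "midday", "evening"]

@dataclass(frozen=True)
class Episode:
    """A structured representation with separable components <food type, location, time>."""
    food: str
    location: str
    time: str

def expect(remembered: Episode) -> Optional[Episode]:
    """Structured account: one conditional rule defined over reusable components.
    If normal food was in arm2 in the morning, expect chocolate in arm7 at midday."""
    if (remembered.food, remembered.location, remembered.time) == ("normal", "arm2", "morning"):
        return Episode("chocolate", "arm7", "midday")
    return None

# Unstructured alternative: every pairing of circumstances with a reaction is an
# independent disposition that would have to be acquired one by one.
unstructured = {
    (food, arm, time): "approach arm7 at midday"
    if (food, arm, time) == ("normal", "arm2", "morning") else "no expectation"
    for food, arm, time in product(FOOD_TYPES, ARMS, TIMES)
}

print(len(FOOD_TYPES) + len(ARMS) + len(TIMES))      # 13 reusable components
print(len(unstructured))                             # 48 separately learned entries
print(expect(Episode("normal", "arm2", "morning")))  # Episode(food='chocolate', location='arm7', time='midday')
```

On this toy encoding, a new conditional amounts to adding one further rule over the same thirteen components, rather than relearning dozens of unstructured stimulus-response pairings.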

6.4. Three Important but Unsuccessful Strategies to Account for Representation and Misrepresentation

Before presenting the three prominent positions and our criticisms, it is worth highlighting that none of the three accounts to be canvassed runs into the problems faced by simple causal, similarity-based, or convention-based

Situated Mental Representations  187 accounts as set out earlier. The reason is that the following three accounts are all functionalist accounts, and we will develop the details of a functionalist framework later when presenting our own account. For now, we focus on discussing the three most important explanatory strategies in the literature, and show that despite their having a functionalist framework, they all are deficient. As explained previously, MRs have (at least) two features:  they are implemented as vehicles (brain states), and they have content. Fodor’s realism, also called “hyperrepresentationalism”:  the vehicles of mental representations are brain states that are syntactically organized. They thereby implement a language of thought. Their content is a feature of the mental representations which is constituted either purely intrinsically (Fodor 1980) or by an intrinsic syntactic organization of the symbols of this language of thought and by their causal anchoring in the environment (Fodor 1987). In the latter case, the relation between content and vehicle (brain state) is determined by intrinsic syntactic organization of symbols and an additional causal relation from symbols to objects (properties or substances) (Fodor 1987, 1994). In both cases, a purely intrinsic or an intrinsic-​causal determination of the content, the relations to the world are described as use-​independent, i.e., independent from an explanatory perspective. Jerry A. Fodor’s account faces open questions and radical challenges in both versions: the main worry is that we do not have any empirical evidence for a language of thought. In the 1990s, we did not have alternative explanatory strategies to account for the acquisition of concepts or the systematicity and productivity of language. Accordingly, the theory of a language of thought underlying our natural language ability was very popular. But theories concerning the evolution and development of language have made a lot of progress; in particular, Tomasello (2003, 2008) outlines a gradual development of language abilities anchored in basic cognition, including joint attention and shared intentionality. Thus, it is no longer a convincing implication to accept an inborn language of thought. More importantly, we run into general problems with the claim that the content of an MR is an intrinsic feature or an intrinsic-​causal feature, since both claims involve independence from an explanatory perspective: there is the well-​known problem of underdetermination of content, often illustrated with the case of the frog’s fly-​catching mechanism: What is the content of the activated MR in a situation when the frog detects and catches a fly? Is it (1) moving black dot present; (2) fly present; or (3) food present? Is it only an informational state

188  What Are Mental Representations? or is it—​as Millikan (1996, 2009) argues—​a pushme/​pullyou representation, which also includes a motivational content such as activate the tongue (OR catch it OR eat it)? If content was an intrinsic or intrinsic-​causal feature (independent from an explanatory perspective), there should be no underdetermination. However, pinning down the content is notoriously difficult or even impossible without already adopting a specific explanatory perspective. The reason is that the only way to constrain the otherwise underdetermined content is to appeal to the best scientific explanation (Schulte 2012). Therefore, we have to integrate the relevant explanatory perspective into the determination of content. The content of the frog’s representation is thus dependent on the relevant explanatory perspective: if we want to explain the activation of the tongue, it is adequate to describe the content as “black dot present and activate the tongue.” If we want to explain the survival of frogs in a context of flies being the standard food available, it is adequate to describe the content as “fly present and catch it.” If we aim for the general explanation of the survival of a frog, the content is adequately described as “food present and insert it.” We are not excluding that there may exist one underlying mechanism, but—​if it were nonrigid (further discussion subsequently)—​it would need a content explanation in addition to the mechanistic description, and then the content is underdetermined and needs specification through an explanatory perspective. Thus, Fodor’s view is inadequate: there is lack of evidence for the presupposed complex inborn language of thought, and the content of mental representations is either systematically underdetermined or dependent on the explanatory perspective; i.e., it cannot be neither a purely intrinsic nor an intrinsic-​causal feature. Taking the argument from underdetermination very seriously, Noam Chomsky developed a view which Egan (2014) calls “Ersatz-​ representationalism.” Chomsky’s Ersatz-​representationalism: While mental representations are required for scientific explanation, contents are not. Chomsky takes the argument from underdetermination to imply that “content” is not a scientific notion at all. Nevertheless, he wants to keep the notion of MR (whatever that might be and whatever role such contentless representations could play). The price to pay for this view is that all content-​involving explanations become unscientific, i.e., they have no place in scientific explanations,8 and this price 8 We use the expression “unscientific” to refer to explanations and terms that we will not find in a “pure” science. Some of the unscientific explanations and terms might still be used by scientists, but only for didactic reasons or because of their simplicity and elegance.

Situated Mental Representations  189 is too high, because many explanations in psychology and psychiatry are still evaluated in the special sciences as paradigmatic cases of scientific explanation. At the same time, they essentially rely on the ascription of contents: e.g., if someone is suffering from delusions of persecution, then his rigid belief with the content that the CIA is after him enables us to explain why he never uses any name tags. And this belief is a paradigmatic case of a mental representation. To criticize Chomsky’s view, it does not matter that it is a linguistically based MR. Of course, delusions of persecution have some neural basis (Bell and Halligan 2013), but this knowledge is far too unspecific to predict the enormously diverse actions of the patient in different situations. Only at the level of the content of the delusion of persecution can we hope to find specific explanations of all the different reactions of the patient. Thus, it would be like throwing out the baby with the bathwater to deny that MRs have contents. Frances Egan (2014) is impressed by Chomsky’s arguments, but she wants to keep the proverbial “baby” by introducing a scientific content for mental representations that is different from the unscientific content of folk psychology. Egan’s dual representationalism: Paradigmatic scientific explanations are explanations involving computational mechanisms. They involve mental representations with a content, but only with a mathematical content. Any further ascription of content, especially in folk psychology, is purely pragmatic and thus not part of the theory proper. Egan thinks that Chomsky is right in denying the relevance of content in scientific explanations as far as folk-​psychological content is concerned (including concepts, beliefs, desires etc.). Nevertheless, she concedes that there is a level of explanation that still involves some scientifically acceptable content:  this is what she calls mathematical content, and it is implemented at the level of computational mechanisms. For many phenomena, like picking up an object, we are able to deliver a good description of the computational mechanism (Egan 2014). This computational mechanism involves mathematical content, i.e., a function from numbers as arguments to numbers as values. And this computational mechanism has its mathematical content intrinsically in the sense that the mathematical function completely and uniquely matches the mechanism. Egan then argues that if we have identified a representational vehicle, we can attribute two contents to it, namely an intrinsically attached scientific content, which is the mathematical content, and the pragmatic content, i.e., an intensional content of beliefs and desires in a situation. The latter is just an unscientific gloss (i.e., it is not part of the theory

proper) resulting from placing the mathematical content into a rich context and extending it into an intensional context-dependent content. The latter may still be helpful in scientific theory construction and presentation and in everyday life, but remains a gloss which has no foundation in any mechanism. Thus, MRs have two types of content, although only the mathematical content has a place in the scientific theory proper. The main argument that Egan puts forward to exclude pragmatic content from being scientific is basically the argument from underdetermination of intensional content illustrated by the frog example (used against Fodor). Our central criticism against Egan is twofold: First, if the argument of underdetermination is successful against pragmatic content's being scientific, it is also successful against mathematical content's being scientific. So, we are left without any intrinsic content for MR. And second, even if an MR does not have an intrinsic content (as we accept), we are able to characterize contents that are scientific since they are required by explanations in the special sciences. The latter will be developed in our own proposal for MR. Let us elaborate on our first and main criticism: We accept that intensional contents are underdetermined in the frog case. Moreover, content is in principle underdetermined if we take into consideration only the input and output of the underlying neural processing. The reason is, ultimately, that the behavior we try to explain remains unclear if we restrict the description in this way. Is it catching-flies behavior or tongue-throwing behavior that we are trying to explain? The inputs and outputs of the neural mechanisms do not distinguish between these two ways of describing the behavior. However, the difference is of great importance for empirical research: if we want to explain how frogs are able to catch flies, the cases with black dots count as unsuccessful behavior (indeed as systematic failures), which give us valuable insights into the details of the mechanisms involved. So, the basic scientific description of the behavior to be explained is not given by the inputs and outputs to neural mechanisms alone, and thus the representations and contents, which are supposed to take on some explanatory role, cannot be determined by them either. In particular, even if there were a unique mathematical description of the frog's behavior, it could also not determine which cases are successful cases of the behavior in question; a fortiori, it could not determine which cases are based on misrepresentations, and thus it could not be said to be a representational content at all (remember: representation implies the possibility of misrepresentation).
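The point that inputs and outputs alone leave the content undetermined can be illustrated with a deliberately crude sketch. The code below is our illustration and not a model of frog neurophysiology; the stimulus attributes and the two success predicates are invented. One and the same stimulus-response mapping is compatible with both success conditions, and only the cases where they come apart (a moving black dot that is not a fly) decide which description, and hence which content ascription, is doing explanatory work.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    dark: bool
    moving: bool
    is_fly: bool   # available to the theorist, not registered by the mechanism

def strike(s: Stimulus) -> bool:
    """The rigid mechanism: the tongue is thrown at any dark moving thing."""
    return s.dark and s.moving

# Two candidate descriptions of the behavior, distinguished only by their success conditions.
def successful_tongue_throwing(s: Stimulus, struck: bool) -> bool:
    return struck and s.dark and s.moving   # hitting the eliciting dot counts as success

def successful_fly_catching(s: Stimulus, struck: bool) -> bool:
    return struck and s.is_fly              # only catching an actual fly counts as success

lure = Stimulus(dark=True, moving=True, is_fly=False)  # a moving black dot that is not a fly
struck = strike(lure)
print(successful_tongue_throwing(lure, struck))  # True: nothing went wrong on this description
print(successful_fly_catching(lure, struck))     # False: a systematic failure on this description
```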

Situated Mental Representations  191 We want to add a general argument of underdetermination from a philosophical perspective: if we describe a physical mechanism with a mathematical operation, then the latter remains underdetermined relative to the mechanism, since usually there are many mathematical functions which fit the same data of the mechanism. Our selection of one mathematical operation for an explanation depends on other presuppositions, including the selection of an explanatory interest. Having made this very general point, we want to remind the reader that the frog example is a bad candidate for representation-​hungry behavior: it is not flexible. This invites a short discussion of two typical cases of basic cognition, frogs’ fly catching and the homing behavior of ants. Indeed, the frog (and likewise the toad) throws its tongue whenever a black moving dot occurs, and there may be a rigid neural mechanism underlying it (Neander 2017). This means that the tongue-​throwing behavior is not flexible and can be sufficiently explained without any representation at all—​it is a simple stimulus-​response pattern.9 MRs are needed to account for flexible behavior, and they are introduced to account for a triggering role of informational states without the causal source being present. An intensely discussed case concerning the latter condition is the homing behavior of desert ants: the ant is able to return to its nest although it does not get any sense information about the nest. This is based on the process of path integration or dead reckoning (Gallistel 1990), and involves a systematic representation of the animal’s location relative to its nest (Gallistel 2017). One observation indicating the substitutional role of the internal mechanism is the following: if the ant has found some food during an unsystematic search, it can go straight back to its nest. Now, if you displace the ant at this very moment, it runs along the same parallel vector, of course not ending at the nest but relying on the internal representation of the relative spatial location. Thus, without a nest-​specific input to the neural mechanism the ant is able to find the location of the nest. There is a systematic neural underpinning, and it has an informational content that triggers the ant’s behavior. What exactly is the content? Parallel to the frog example, it depends on our explanatory interest: we could either say that it is the nest (when we want to 9 As mentioned previously, this is not to say that the details of the mechanism, including the details of the registration of the black moving dot—​the frog’s perception—​do not play a role in explaining the details of the behavior. Of course they do, and they are investigated in neuroscience as much as any other reflex is a fascinating research topic for neuroscience. However, not every neuroscience is cognitive (neuro)science in a demanding sense.

192  What Are Mental Representations? explain the homing behavior of the ant, which consists in returning to its home) or that it is the vector normally directed to the nest (when we want to explain in which direction the ant moves, which is different from the homing behavior since it obviously has different success conditions). As we will argue in detail (section 6.5.2), in the case of the ant’s cognitive system the different descriptions based on different explanatory perspectives are connected with different explanatory power, and the description as homing behavior informs us about new aspects of the underlying mechanism. In the case of the frog—​due to its rather rigid underlying mechanism—​the analog different descriptions are not coming with any increased understanding of the cognitive mechanism; thus, this may be best described as a case not involving mental representations at all. In the second part of our criticism of Egan (and of the other accounts as well), we dispute the implicit presupposition that if a content is not attached intrinsically, i.e., as long as it involves some underdetermination, then it cannot be part of a scientific explanation proper. We take this presupposition to be unjustified. This can, for example, be illustrated with cases of multiple realization: we know many realizations from the natural science (Fang 2018). For didactic reasons, let us focus on the well-​known case of a multiple realization of jade as two chemical substances, namely either as jadeite, NaAl(Si2O6), or as nephrite, Ca2(Mg, Fe)5((OH,F)Si4O11)2. Despite this obvious chemical difference, they have similar superficial properties and thus could be used for similar purposes in everyday life. Therefore, we can describe high-​level causal processes which are realized by two different low-​ level mechanisms. High-​ level explanations are underdetermined in relation to low-​level explanations. Nevertheless, both can be scientific explanations. It is an important insight worked out by Sober (1999) that scientific explanations need both high-​level and low-​level explanations to account for a phenomenon, where the level of explanation depends on the pragmatic research interest: Higher-​level sciences often provide more general explanations than the ones provided by lower-​level sciences of the same phenomena. This is the kernel of truth in the multiple realizability argument—​higher-​level sciences “abstract away” from the physical details that make for differences among the micro-​realizations that a given higher-​level property possesses. However, this does not make higher-​level explanations “better” in any

Situated Mental Representations  193 absolute sense. Generality is one virtue that an explanation can have, but a distinct—​and competing—​virtue is depth, and it is on this dimension that lower-​level explanations often score better than higher-​level explanations. The reductionist claim that lower-​level explanations are always better and the antireductionist claim that they are always worse are both mistaken. (Sober 1999, 560)

Let us transfer this insight to a hypothetical case: assume that the fly-​catching mechanism of the frog could be steered by two different mechanisms; one being the well-​known visual mechanism reacting to black moving dots, the other an olfactory mechanism reacting to certain pheromones of flies. Then a low-​level description of the behavior would differentiate between a catching-​flies-​by-​sight behavior and a catching-​flies-​by-​smell behavior. However, on a more general level, there would be only the catching-​flies behavior, and the common explanation would involve representations with the intensional content there is a fly. This explanation would identify the flexible behavior of the frog, which it shows both with certain visual and with certain olfactory inputs. And it would not be reducible to the low-​ level explanations since the commonalities of the two mechanisms are not even visible under a low-​level (maybe mathematical) description of the mechanisms (cf. Vosgerau and Soom 2018). The result is that Egan’s two contents, mathematical and pragmatic, are in the same boat: both are underdetermined. But it does not follow that they cannot be part of a scientific explanation. Explanations of this kind would depend on pragmatic factors such as explanatory interest.10 How can we develop a theory of MR which allows for both the dependency on pragmatic factors and a role for MRs in scientific explanations? This is what our own proposal of situated MR aims to set out.

10 In line with Sober, we clearly distinguish the level of scientific explanation from nonexplanations, and the involvement of pragmatic research interests is fully compatible with scientific explanations:  “The claim that the preference for breadth over depth is a matter of taste is consistent with the idea that the difference between a genuine explanation and a nonexplanation is perfectly objective” (Sober 1999, 550).
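Before turning to that proposal, the hypothetical two-mechanism frog can be put in schematic form. The sketch below is purely illustrative: both detector mechanisms and their inputs are invented for the sake of the argument. What it displays is the structure of the multiple-realization point, namely that the high-level, content-involving description ("there is a fly, catch it") covers both realizations, while neither low-level description does.

```python
def visual_mechanism(retinal_input: dict) -> bool:
    # Low-level realization 1: respond to a small dark moving dot.
    return bool(retinal_input.get("dark_moving_dot", False))

def olfactory_mechanism(chemical_input: dict) -> bool:
    # Low-level realization 2 (hypothetical): respond to fly pheromones above a threshold.
    return chemical_input.get("fly_pheromone", 0.0) > 0.5

def catches_fly(retinal_input: dict, chemical_input: dict) -> bool:
    """High-level description: the frog strikes when it represents that a fly is present,
    however that representation happens to be realized on a given occasion."""
    return visual_mechanism(retinal_input) or olfactory_mechanism(chemical_input)

# Two quite different low-level episodes fall under the same high-level explanation.
print(catches_fly({"dark_moving_dot": True}, {}))   # a strike driven by sight
print(catches_fly({}, {"fly_pheromone": 0.9}))      # a strike driven by smell
```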


6.5. Defending a Functionalist Framework and Setting Out an Account of Situated Mental Representation

6.5.1. The Functionalist Framework

We want to demonstrate that the most fruitful way to capture the basic ideas of MR is through a functionalist framework. We start with the classical notion of (this kind of) functionalism that stems from the discussion of abstract automata by Putnam (1975). In the first step, we briefly characterize the minimal basis by highlighting three aspects central to our notion:

(1) The notion of "function" used is the mathematical notion referring to a mapping from sets to sets, where every member of the first set is mapped to exactly one member of the other set. This is a very abstract description, and, of course, we will go further by assuming that such functions are somehow implemented by causal mechanisms; however, this is an additional assumption.

(2) Behavior can, in general, be described by functions, i.e., by mappings from inputs (stimuli) onto outputs (reactions). In this simple form, we can describe reflexes in the spirit of behaviorism. In order to account for the flexible behavior discussed previously, however, we need to add internal states to the functions, which takes us from behaviorism to functionalism. This is done on both sides of the mapping: the reaction to an input is not only determined by the input but also by the internal state the system is in; and the output is not merely a reaction to the stimulus/internal state pair, but also comprises the internal state the system will enter as a result of the behavior. Thus, the function now takes two arguments—stimulus and internal state—and two values—the subsequent internal state and the reaction.

(3) As in behaviorism, the relevant function containing the MR (internal states) involves behavior (as opposed to, e.g., brain output alone). However, in this sense the function in which the MR "plays a functional role" can determine what the MR stands in for: it stands in for objects or properties that are relevant for the behavior (e.g., the nest in the case of the homing behavior of the ant).11

11 Whether the function is interpreted as describing causal relations such that the "functional role" turns out to be a "causal role" is not crucial here. Indeed, interpreting the functional role as a causal role is the most obvious way to combine the functionalist framework with a materialistic ontology.

Situated Mental Representations  195 Let us elaborate on this central point by specifying that behavior should be conceived of in terms of interacting with objects, properties, and other entities in the world; and not just as relations between stimuli (sense data) and motor output. This refined version of functionalism allows us to integrate evolutionary and ontogenetic aspects of the cognitive system as they are introduced by teleological versions of MR. Due to the embedding in the real world, behavior can be described as a mapping from one state of affairs to another state of affairs, e.g., from finding normal food in arm2 in the morning to finding chocolate in arm7 at midday. Further parts of the story are adapted from Putnam’s functionalism: since one state of affairs does not always lead to the same (kind of) state of affairs, we need to assume internal states that enter the mapping function as arguments and values. Thereby the internal states (or aspects of them) can be identified as the things that play the same role as the external objects (in the states of affairs), and thus can be said to represent them. In particular, as mentioned earlier, one central case for introducing mental representations is the case of misrepresentation, especially when a certain behavior is elicited although the stimulus is not there. Misrepresentations can happen in two ways in our case study: first, if the rat finds normal food in arm2 in the morning and thus expects chocolate in arm7 at midday without there being chocolate (misexpectation of the relevant consequence); second, if the rat misremembered there being food in arm2 in the morning (misrepresentation due to wrong encoding of the relevant trigger for an expectation) and thus approached arm7 at midday. These cases of misrepresentation can occur because the places that the representations take in the functions we have described could—​in principle—​also be taken by states of affairs. However, since the states are not directly available to the rat (one is to be remembered, the other to be predicted), representations have to take the places and the functional roles of these states of affairs. Given a series of control experiments, it is convincing to presuppose that a structured representation of food type, location, and time is involved in the informational processing (Crystal 2013)  at those places, which could—​in principle—​be played by the food type, location, and time directly.
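The schema just sketched, a mapping from a stimulus and an internal state to a reaction and a successor state, can be written down directly. The toy implementation below is ours, loosely modeled on the maze example; the vocabulary of stimuli, states, and reactions is invented. It is only meant to show the two points that motivate positing MRs: the same stimulus can lead to different reactions depending on the internal state, and an internal state can stand in for a state of affairs that is no longer present.

```python
from typing import NamedTuple, Optional, Tuple

class Stimulus(NamedTuple):
    time: str
    location: str
    food: Optional[str]      # None when no food is currently registered

class InternalState(NamedTuple):
    remembered: Optional[Tuple[str, str, str]]   # a stand-in for a past <food, location, time> episode

def behavior(stimulus: Stimulus, state: InternalState) -> Tuple[InternalState, str]:
    """Functionalist schema: (stimulus, internal state) -> (successor state, reaction)."""
    # Registering food updates the internal state that stands in for the encountered episode.
    if stimulus.food is not None:
        state = InternalState(remembered=(stimulus.food, stimulus.location, stimulus.time))
    # The reaction depends on the internal state, not merely on the current stimulus.
    if stimulus.time == "midday" and state.remembered == ("normal", "arm2", "morning"):
        return state, "approach arm7"   # triggered by the stand-in, not by chocolate being present
    return state, "explore"

s0 = InternalState(remembered=None)
s1, r1 = behavior(Stimulus("morning", "arm2", "normal"), s0)  # registers normal food in arm2
s2, r2 = behavior(Stimulus("midday", "center", None), s1)     # same midday stimulus ...
s3, r3 = behavior(Stimulus("midday", "center", None), s0)     # ... different reaction without the memory
print(r1, r2, r3)   # explore approach arm7 explore
```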

6.5.2. The Use-Dependence of MRs

The content of an MR cannot be determined on the basis of intrinsic properties alone; rather, the "use" of the MR that leads to a certain behavior is

196  What Are Mental Representations? decisive. More specifically, the behavior of a cognitive system involves a target which is the goal of a goal-​directed behavior. The target of the behavior is the coarse-​grained content of the MR, i.e., what the MR is about. What exactly the goal of the behavior is, is in turn dependent on how the behavior is described, i.e., on what is chosen as the explanatory goal (see also our previous discussion). In this sense, it makes a huge difference whether we describe the behavior of the ant as a triangle completion behavior or as a homing behavior. This might sound as if we could freely choose between different equally valuable descriptions. This is, however, not the case; it is the complexity of the relevant mechanisms and how they are realized in the world that makes the difference. Some descriptions are to be favored over others, ultimately depending on whether they give us an explanatory advantage given the cognitive system in the world, i.e., if they add something to our explanation that is not available under a different description (cf. Ramsey 2007). Let us illustrate this with the example of the ant and the frog. We show that the frog’s cognitive system is so rigid that different descriptions are not helpful in contrast to the ant’s cognitive situation. In the frog’s case the behavior can be described as a tongue-​throwing behavior with the frog’s goal to hit the thing that elicited the behavior, or as a fly-​catching behavior with the frog’s goal to catch flies. We can systematically distinguish the two ways of describing the behavior, since different cases count as unsuccessful: catching a black moving dot is a successful tongue-​throwing behavior but can be an unsuccessful fly-​catching behavior if the black moving dot is not a fly. However, as far as we know, the frog’s reaction to unsuccessful and successful cases does not differ in an interesting way. Thus, the more demanding description as a fly-​catching behavior (in contrast to tongue-​throwing behavior) does not seem to add anything interesting to our explanation of the frog’s behavior with respect to the underlying mechanism; it informs us about the normal evolutionary embedding of the mechanism without adding anything about the structure or functioning of the mechanism. (The addition that the tongue-​throwing behavior is advantageous for the frog because most black moving dots around it are flies in its natural habitat works independently of the type of description.) In the case of the ant, things are different: the behavior can be described either as a triangle-​completion behavior with the ant’s goal being to return to its starting point, or as a homing behavior with the ant’s goal being to return to its nest. And these descriptions have a different explanatory value: The

Situated Mental Representations  197 description of the ant’s behavior as a triangle-​completion behavior leaves no room to describe the systematic difference between cases in which the nest is reached and cases in which the triangle is completed (the “correct” vector has been followed, e.g., after being displaced) but the nest is not reached. The difference is that in the latter but not the former case, the ant starts a typical nest-​searching behavior, i.e., moving in growing circles around the endpoint of the triangle completion. Thus, describing the ant’s behavior as a homing behavior brings an explanatory advantage: it allows us to systematically distinguish between successful cases and unsuccessful cases since only the latter are followed by specific “repair” mechanisms (or other additional features shaping the mechanism). Describing the ant’s behavior as a homing behavior adds relevant information about the structure of the triangle completion mechanism, namely that it includes a local search mechanism activated only in the case of not reaching the nest after a rough completion of the triangle. Thus, the description of the ant’s behavior as a flexible behavior is not a matter of taste; it is a matter of the best scientific description that optimally reflects the important factors. In the frog’s case, there is no reason to assume that factors beyond black moving dots play a role. In the ant’s case, however, we have good reason to assume that finding the nest is a very relevant factor that determines what the ant is doing in cases of failure. Thus, the latter case involves a mental representation of the nest, while the frog can be (properly) described without involving mental representations.
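The explanatory contrast between triangle completion and homing can also be made concrete. The following sketch is ours and not a model of real ant navigation (actual path integration relies on step counting and celestial compass cues); the coordinates and the tolerance are invented. The point is that the final check only makes sense under the homing description: it is the failure to find the nest, not the failure to run off the stored vector, that triggers the local search in growing circles.

```python
import math

def path_integrate(steps):
    """Accumulate a home vector by summing the ant's own displacement steps (dead reckoning)."""
    x = y = 0.0
    for dx, dy in steps:
        x += dx
        y += dy
    return (-x, -y)   # the stored stand-in for the nest's location relative to the ant

def homeward_run(start, home_vector, nest, tolerance=0.5):
    """Run off the stored vector; only failure to reach the nest triggers the local search."""
    end = (start[0] + home_vector[0], start[1] + home_vector[1])
    if math.dist(end, nest) <= tolerance:
        return end, "arrive at nest"
    return end, "begin local search in growing circles"

outbound = [(2.0, 0.0), (1.0, 3.0), (-0.5, 1.0)]   # an unsystematic foraging path from the nest
vector = path_integrate(outbound)
nest = (0.0, 0.0)
food_site = (2.5, 4.0)                             # endpoint of the outbound path

print(homeward_run(food_site, vector, nest))       # vector completed and nest reached
displaced = (10.0, 10.0)                           # the ant is displaced at the food site
print(homeward_run(displaced, vector, nest))       # same vector is run, the nest is missed, search begins
```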

6.5.3. The Format of MRs: The Adequacy Relation

While the coarse-grained content of an MR is determined by actual behavior (under normal conditions) being directed at the target of the behavior,12 the adequacy relation is determined by the function in which the representation is used as a stand-in for the target. Let us elaborate our view by using the general idea of a substitution-relation within our functionalist framework: an internal state (or process) functions as a substitute for an object, property, etc. if the activation of this internal state alone triggers an informational processing that is equivalent to the processing that follows the registering of the actual presence of the object (or property).13 The substitution-relation (e.g., the internal state substituting the registering of an actual object) explains the coarse-grained content of a mental representation and thereby allows us to differentiate between representations and nonrepresentations: any inner state (or process) that can substitute something equivalently in a function describing the system's behavior is a mental representation, everything else is not. To determine the coarse-grained content of a mental representation thus involves determining equivalence in information processing. At this point, a pragmatic research perspective comes into play: the behavior has to be described as a function, which is in principle possible on different levels (see our earlier discussion); thus there are many levels of equivalence (even if the represented object stays the same). To account for this conceptually, we introduce a third feature of MR, namely the format of MRs. "Format" is not used here to refer to different modalities (e.g., visual in contrast to tactile); our aim is to highlight different ways of structuring information about the same object or situation (see the subsequent discussion).

The vehicle of an MR is typically a neural state of the cognitive system. Its coarse-grained content is typically the target of the behavior to be explained (given an explanatory interest), and it is determined by the functional role the vehicle takes over in the function that describes the behavior of a cognitive system. The format of the representation is the fine-grained structure of the MR, which is determined by a certain level of equivalence in the substitution relation. The respective relation between the substitute (representation) and the substituted (represented) object is called the adequacy relation for the substitutional role. Intuitively, something can be a substitute for something else only if it has the right properties to fulfill the same role as the represented entity. And what are the "right properties"? They are the properties that the function operates on.14 In this sense, a mental representation is adequate to substitute whatever it represents just in case it has the right properties to fulfill the same role in the function describing the behavior. The right properties are determined by the relevant function itself. And since there are functions on different levels of cognition, we can distinguish different kinds of adequacy relations leading

12 This is the place where teleosemantic ideas kick in to our account.

13 In the case of the ant, the nest cannot be registered; instead, the number of steps and the angle to the sun is registered, and this registration then stands in for the location of the nest, thus representing the nest. Or take Millikan's famous example: the beaver uses the registration of the sound of another beaver's splash in a process that would equally well function with a registration of an endangering predator; thus the idea that the registration of the splash represents the danger of being attacked by a predator.

14 Crucially, we here depart from a purely mathematical understanding of the function toward a more causal or mechanistic understanding of it. However, we try to stay as abstract as possible in order to be compatible with different interpretations of the function.

Situated Mental Representations  199 to different kinds of mental representations at different cognitive levels depending on the explanatory interest (Vosgerau 2008, 2009, 2011).15

6.5.4.  Different Formats of MR and the Ontology of MR A mental representation can be characterized by the vehicle, the format, and the content which is related to a type of behavior of a cognitive system.16 The relation between these notions is as follows:  if the vehicle of an MR (e.g., a neural correlate) is fixed relative to the type of behavior that is to be explained, and if we determine the relevant format (which involves a pragmatic selection of a relevant level of adequacy condition), then the content of the MR is constrained enough to play an explanatory role in scientific explanation. In more detail, our view presupposes that there is one (or more) relevant underlying mechanism(s). The vehicle of the MR is explanatory since it is part of a causal mechanism. The content of the MR is explanatory since it is systematically related to the vehicle, even though it is not part of the mechanism. The content-​involving explanations are coarse-​grained causal explanations which can be found at higher levels of description: either the content-​involving explanation is a coarse-​grained description of many (possible and real) underlying low-​level mechanisms sharing explanatorily relevant factors (Vosgerau and Soom 2018), or it is a description of a higher-​level causal mechanism and is in this sense coarse-​grained (cf. Krickel 2018).17 Content-​involving explanations are causal explanations since they are finally related to and anchored in the causal mechanism(s). We need to elaborate on the format of representation. To do this, we describe three formats of MR in a general way as correlational, isomorphic, and propositional. Different formats are connected with different types of representation relations which allow us to describe the same behavior in different ways.

15 Our notion of the format of an MR is not introduced to account for a difference between a sensorimotor-​based and a perception-​based vehicle of an MR. This distinction is not irrelevant, but the adequacy relation described previously is more basic. 16 For an adequate explanation, we do not always need a specification of all aspects of an MR. Sometimes a coarse-​grained description without specifying the format is sufficient. 17 The cited literature describes detailed accounts to prevent the famous causal exclusion argument:  it is compatible either with having only low-​level mechanisms but several levels of causal explanations or with also allowing for high-​level causal mechanisms within a mechanistic framework.

200  What Are Mental Representations? The correlational format of a mental representation (MR): r (realized by vehicle v) is an MR of an entity x with respect to the type of behavior B (which is described by a certain function) in a correlational format iff r is substituting x in the function describing B (i.e., if r is taking over the functional role of x), and r is adequate with respect to its substituting x only if it covaries with x. The isomorphic format of an MR: r (realized by vehicle v) is an MR of an entity x with respect to the type of behavior B in an isomorphic format iff r is substituting x in the function describing B, and r is adequate with respect to its substituting x to the degree that it is isomorphic with the relevant part of the structure of x, where the relevant part is determined by the functional roles in B. The propositional format of an MR: r (realized by vehicle v) is an MR of a state of affairs x with respect to the type of behavior B in a propositional format iff r is substituting x in the function describing B, and r is adequate with respect to its substituting x only if r can be attributed the propositional content that p and p refers to the state of affairs x (or equivalently: p is true iff x is the case). We can apply these different types of representational formats to explain the same everyday behavior, e.g., Sandra grasps her car key and her phone on top of which the key is positioned, in order to drive home. This goal-​directed grasping behavior can be explained with an MR of correlational format by highlighting the proximal goal, i.e., the key-​phone unit. Then the claim is that the cause of the behavior involves an MR of the key-​phone unit: this means that there is a neural correlate as part of the causally relevant neural processes which stands in for the key-​phone unit as the goal of the grasping behavior, e.g., the neural correlate of a percept or an efference copy.18 Thus, the relevant explanation would basically run as follows: Sandra is grasping the key and the phone at once because her visual apparatus presents both to her as a key-​ phone unit. This is a narrow explanation of the grasping behavior focusing on one central correlation with the relevant object-​unit of the behavior. The behavior can also be explained using an MR in an isomorphic format. Then the claim is that the cause of her behavior involves at least an MR of the key and an MR of the phone: this means that there are at least two neural correlates as part of the causally relevant neural process, one standing in for

18 The underlying cognitive vehicle could be, e.g., an efference copy of the goal of the grasping movement as is presupposed according to the comparator model of explaining simple goal-​directed grasping behavior (Wolpert and Flanagan 2001; Synofzik, Vosgerau, and Newen 2008).

Situated Mental Representations  201 the phone, the other for the key. This would be a minimal structure to allow for an isomorphic substitution. Usually this would also involve the isomorphic substitution of relevant properties, e.g., the property of the key to open her car, the property of the phone to enable her to make a call while driving home. The objects and properties are thought to be implemented as object files in the neural system (Kahneman, Treisman, and Gibbs 1992; Spelke 2000; Carey 2011). Thus, the relevant explanation at this level would run as follows: Sandra is grasping the key and the phone at once because there is an activation of a neural correlate of a key and its property to open her car and of the neural correlate of the phone and its property to allow for phone calls. Thus, the affordances of the objects, to open a car and to enable phone calls, are a relevant aspect of grasping them. The most complex level of explanation uses a propositional format of MRs: then the claim is that the cause of Sandra’s behavior involves at least an MR of a proposition p (a state of affairs) such that there is a neural correlate that has a functional role adequately constrained by the propositional content or equivalently a cluster of behavioral dispositions; i.e., a neural correlate of a thought is activated that can be characterized by the same behavioral dispositions that would be activated if Sandra had registered the state of affairs expressed by p. This results in the usual folk-​psychological explanation: Sandra grasps her phone together with her key because she wants to open her car to drive home and she wants to make a phone call during her way home and she has the relevant background information about the affordances of her key and her phone. We can explain the same behavior on different levels using specific formats of mental representation for each of these explanatory levels, thereby describing different contents of MRs for the same behavior. What does this imply for the ontology of MRs? Isn’t it clear that MRs can only be pragmatic stories—​just a narrative “gloss,” according to Egan (2014), and not something closely related to and systematically anchored in causal mechanisms? Here we argue that despite the different levels of explanation, and the three types of MRs which can be used to explain the same behavior of a cognitive system, MRs remain systematically coupled to reality. Explanations using MRs can indeed be scientific explanations. The reason is that different aspects of reality can be described on different levels, which in turn requires different levels of explanation. Take, for example, a car traveling at 50 km/​h. We can describe this “behavior” on different levels and accordingly explain it on different levels. The

202  What Are Mental Representations? question “Why is the car traveling at 50 km/​h?” could be answered by saying that a force in the direction of travel is acting on the car that is equal to the friction of the car at exactly this speed. This is quite abstract, but not unscientific, and it applies to a lot of cases, including cases of towed cars. Of course, a more detailed answer could be that an air-​gas mixture is exploding in the cylinders, thus moving the pistons such that they transfer the force onto the connecting rod, etc. This level of explanation is surely more detailed, but it is not more scientific: it only applies to fewer cases. If we drill down into more detail about the molecular structure of the gas and the rod, our explanation will still not gain scientificness; it rather loses generality, which means that it gets more and more specific. In a similar way, behavior can be described at different levels—​the more detailed the description gets, the fewer cases are subsumed and the more detailed the explanation has to be. However, this does not mean that the explanation also becomes more scientific. To clarify the role of mental representations, let us add a general remark concerning the relation between mechanisms and MRs.19 If a behavior seems to be flexible but has an underlying rigid mechanism combining only the same stimulus with the same output behavior, then a generalization which goes beyond that mechanism itself is not explanatorily fruitful—​i.e., we do not need to presuppose mental representations in a scientific explanation of the behavior. Then we would define the resulting behavior not as flexible but as rigid, although it seems flexible. But if we actually have good reasons to suppose that the behavior relies on variable or even multiple mechanisms, then this multiple realization makes a generalization at the level of common interrelations between triggering situations and the resulting behavior explanatorily fruitful (at least if we are interested in explaining and predicting the behavior not only for one specific situation but also for a large variety of situations and challenges). Let us illustrate the fact that MRs are nonstatic and thus have multiple realizations by an example that clarifies our view about the vehicle of MRs: for quite a while it was thought that amygdala activation is a neural mechanism that constitutes the emotion of fear. Then it was discovered that it also is activated when observing someone else’s fear. Furthermore, it was discovered that it is often active in cases of intense negative emotions in general, and that amygdala activation can be observed in extroverted people observing happy 19 We do not have our own theory of mechanisms in the background, but our theory of mental representations fits with the characterizations in Krickel 2018.

Situated Mental Representations  203 faces, but not in introverts doing so (Canli et al. 2002). It is now commonplace that the amygdala is involved in several functions of the body, including arousal, autonomic responses associated with fear, emotional responses, hormonal secretions, and memory. To make the picture even more complex, there is a study that demonstrates that a person can experience intense panic with strong damage of bilateral amygdala, namely the patient S. M., who lost her amygdala due to Urbach-​Wiethe disease. Interestingly, S. M. was first mentioned as evidence for the amygdala as the center of processing fear since S. M. showed an absence of overt fear manifestations in typical fear-​inducing situations (including exposing her to living snakes and spiders, taking her on a tour of a haunted house, and showing her emotionally evocative films) and an overall impoverished experience of fear (Feinstein et al. 2011). But first of all, fear was still processed although to a diminished degree, and second, under certain conditions she experienced intense panic (Feinstein et  al. 2013). Thus, we can conclude that amygdala activation is neither necessary nor sufficient for experiencing fear. While starting out with a candidate for a neural correlate of fear, we ended with a neural correlate which is important to process fear but nevertheless is involved in the realization of many other mental abilities. If we complement this bottom-​up perspective with a top-​down one, we should start with the phenomenon of fear and try to work out the underlying features realizing a token of fear: we have argued elsewhere in detail (Newen, Welpinghus, and Juckel 2015)  that fear is realized as an integrated pattern of characteristic features none of which is sufficient and most of which are not necessary to process fear. Thus, there is no unitary mechanism that realizes fear. And we think that this observation is neither exceptional nor due to a premature state of neuroscience but typical for the architecture of neural mechanisms realizing a minimally flexible behavior or cognitive phenomenon. Without being able to comprehensively defend this view as a general view in this paper, we want to mention two further reasons why this observation of multiple realization which still remains systematic is generalizable:  (1) Most biological systems displaying flexible behavior are those that are able to learn; thus, their neural vehicles change systematically during the learning processes. (2) More generally, the neural processes have a high level of neural plasticity which is due to permanently ongoing reorganizations in the brain (Hübener and Bonhoeffer 2014). Thus, complex biological systems regularly implement flexible behavior. Hence, the generalized level of content explanations is epistemically fruitful and is indeed often the only scientific

means to access minimally interesting generalizations in order to explain the behavior.

6.5.5.  Features of Situated Mental Representations, Some Applications, and Advantages

Let us summarize the main features of situated mental representations: an MR is individuated by three aspects, namely vehicle, content, and format, relative to a relevant minimally flexible behavior. We have already seen the interdependence of the three aspects, which is always relative to the main job of MRs, i.e., to explain a generalized type of flexible behavior. Furthermore, we have illustrated that the same type of behavior B, e.g., Sandra grasping her car key and her phone in order to drive home, can be explained on three different levels by using different formats for the MRs which share the same vehicle and the same coarse-grained content. This flexible way of explaining a behavior B has its limits. The first danger is that of anthropomorphizing or over-intellectualizing the nonlinguistic behavior of animals and babies, e.g., the homing behavior of desert ants, by ascribing propositional content to them. This would be empirically inadequate since we do not have any additional evidence that their behavior can be based on such a generalized cluster of mechanisms which is compatible with belief ascription. The mechanism of dead reckoning or path integration is much more constrained. The second danger is that of over-intellectualizing the behavior of adults. Grasping the car key and the phone can be based on an explicit consideration of planning to drive home now and to make an urgent phone call during the drive, or it can be just a well-established habit. If the latter is the case, then a correlational or isomorphic MR may be much more adequate to describe the behavior than a propositional MR. Thus, despite the flexible way of using different formats of representation to explain the same behavior, in many situations it is still possible to determine the most adequate description which makes this behavior typical for a certain representational format. Here are some examples to illustrate this briefly:

Typical correlational MRs: This is the most adequate representational format to explain the homing behavior of desert ants: there is a neural process which is systematically correlated with the spatial vector and thus with the location of the nest under favorable conditions.20 A further example is the rather complex bee behavior. Bees also make use of path integration—as already described for desert ants—but to perform the famous waggle dance, they clearly rely on memorized representations of the profile of the position of the sun. This case illustrates the "substitution relation" quite convincingly: the bees extract from the incoming signal of the sun in a situation a representation of the sun's position and seem to build a profile of its positional changes. Thus, the neural processing of the bees involves an internal informational memory of these changing positions. Even if the actual information about the relation between the direction of the source from the hive and the direction of the sun from the hive was acquired many minutes earlier, bees use the correctly calculated actual position of the sun when doing the waggle dance. This is based on a presupposed substitutional memory that is required to carry this direction vector forward in time (Menzel and Eckoldt 2016; see also the description in Gallistel 2017). This representation is even accurate and adjusted in time when bees are only exposed to the sun in the afternoon for a while and are then tested in a morning situation (Menzel and Eckoldt 2016). Thus, bee behavior can best be explained by presupposing a correlational mental representation realized by a cluster of neural processes which implement a vector representation standing in for the direction and the distance of the food source.

20 Our interpretation of the complexity of the neural processing in desert ants and bees is such that we take it to involve MRs. Furthermore, we interpret what science knows about the homing behavior of ants and bee navigation such that the neural vehicles are not structured in an isomorphic way with clear separable components, as is clearly the case in rats and birds with their what-where-when information. Rather, we interpret them as merely covarying with the location of the nest and the food source, respectively. Space constraints prevent us from clarifying the lower bound of MRs and the borderline between correlational and isomorphic MR.

Typical isomorphic MRs: Here we only remind the reader of the example of episodic memory realized in rats (Crystal 2013) or in scrub jays (Clayton and Dickinson 1998; Seed, Emery, and Clayton 2009). Our key example illustrates that rat behavior can be best described by a cluster of neural processes implementing substitutes of the time, the location, and the type of food. Thus, we have to presuppose isomorphic mental representations to explain and predict their behavior.

Typical propositional MRs: The so-called explicit false belief task tests whether children are able to distinguish their own belief, e.g., about the location of an object, from the belief of another person concerning the location of the same object. If the other person has left the room before the object was transferred from box A to box B and we ask four-year-old children where the other person will search for the object when coming back, then the four-year-olds are able to account for the false belief of the other person, answering that the other person will search in box A even if the object is in box B. This behavior typically demands the ascription of a belief, considering what the other person knows in contrast to oneself, and thus is a paradigmatic case typically involving a propositional format of representation.

Mental representations can be used in two strategies of content explanations, which we want to call "top-down" and "bottom-up" strategies. A top-down strategy investigates a certain flexible behavior, e.g., grasping a bottle of water from the fridge, and wants to explain this by relying on the relevant content, e.g., the content of the person's belief that a bottle of water is in the fridge combined with the desire to drink some cold water. Using such a folk-psychological explanation, we aim to constrain the vehicle of this MR in a scientific way. Thus, we want to disclose the neural correlate of the relevant type of belief (and the relevant desire). But in the case of a type of belief, we have not found and cannot even expect to find any rigid underlying neural correlate embedded in a rigid mechanism; the neuroscientific evidence is much more complex: the best we can expect is a cluster of neural correlates embedded in one quite contextually varying mechanism or even embedded in a plurality of mechanisms. Thus, even if we succeed in characterizing scientifically the underlying cluster of neural correlates as well as the relevant mechanisms, the folk-psychological content-explanation (constrained by the format of the MR) remains explanatorily fruitful as describing the possible range of underlying neural correlates and mechanisms at a general level.21

21 These explanations can be causal explanations since we can allow for different relevant levels of causal explanation even if there is only one level of a causal mechanism. Causal explanations need to be anchored in a causal mechanism, but this can be the case more or less precisely depending on the level of description of the causal explanation (if there are worries about the classical problem of mental causation and overdetermination, see for discussion Newen and Cuplinskas 2002).

A bottom-up strategy of a content explanation would ideally start with the discovery of neural mechanisms that are only known to be associated with the relevant flexible behavior of grasping a bottle of water from the fridge, e.g., the comparator mechanism (Synofzik, Vosgerau, and Newen 2008; Newen 2017). Then the challenge is to constrain the content of some neural correlates or cluster of neural correlates involved in the mechanism. This can be done by first fixing a level of explanatory interest, i.e., by determining the format of MR, and then aiming to disclose the correlation between some neural states and processes and the external states and properties in the world. Then we can develop a content explanation which again

Situated Mental Representations  207 has the function of constraining the possible underlying mechanisms in the cognitive system. Both strategies are used in cognitive sciences and illustrate the role of content explanations as systematically connected to underlying (clusters of) neural correlates and mechanisms such that content explanations are scientific but only with a rather broad range of generality, in contrast to mechanistic explanations involving neural mechanisms, which are rather specific. Let us finally take up the challenge of radical enactivists who argue that we do not need MRs to explain flexible behavior and that dispositions would suffice to do this (Hutto and Myin 2017). In the case of episodic memory in rats introduced earlier, it is supposed to be sufficient to describe the disposition to search for chocolate in arm7 at midday if there is normal food in arm2 in the morning. According to our view, a dispositional explanation is not in conflict with an explanation of behavior based on MRs. A dispositional explanation can be sufficient if one is not interested in the architecture of cognitive processes underlying the dispositions. However, if we are able to discover the main features of this architecture, we can, in principle, (1) make more fine-​grained and more general explanations and predictions about the behavioral dispositions, and (2) change the perspective from observation to intervening in the cognitive system. It follows from our assumptions that, (ad 1) if one knows the (relevant) vehicle (e.g., the neural correlate) of episodic memory, one can predict that the rat will show or remains disposed to show a certain search behavior as soon as this vehicle (or a certain group of vehicles) is activated again even in a rather different context. Furthermore, (ad 2) knowing the (relevant) vehicle in a form of a neural correlate (eventually plus some embodied or social properties) allows us to modify the behavioral dispositions. The latter is what we aim for in the case of behavioral deficits. E.g., a person with deviant behavior due to a brain tumor (the patient developed pedophilia due to a tumor) is of course treated by removing the brain tumor, which is a change of the neural basis and thereby the behavioral dispositions; in this case after the removal of the tumor the pedophilia was gone (Burns and Swerdlow 2003). This coarse-​grained successful intervention only illustrates the general strategy of intervening; other methods include psycho-​pharmacological drugs in the case of mental disorders to treat positive symptoms of schizophrenia. We know that these are nowadays still rather coarse-​grained interventions, and we also take the idea of embodied and social anchoring of MRs seriously. E.g., social interventions are still the most effective means of treatment of mental disorders, but the general

conclusion is that it is fruitful to presuppose MRs since they can be used to explain and predict behavior and to systematically intervene. Our account predicts that the development of neuroscience will open new possibilities of intervention; however, we also predict that these possibilities have to be investigated in close relation to relevant aspects of the whole body and the social environment, which is made possible by integrating them into explanations of behavior with MRs.

6.6.  Conclusions We are in need of mental representations to explain or predict minimal flexible behavior. This is behavior which by definition cannot be explained by a rigid mechanism. Given the evidence from neuroscience, this is very often the case since many mental phenomena are realized either by a contextually rather variable mechanism or by a plurality of mechanisms. The latter observation is a main reason for enactivists to take an anti-​representationalist position and to constrain all explanations of nonlinguistic cognitive phenomena to explanations based on affordances and dispositions. We are not denying that explanations suggested by enactivists can be enlightening but argue that their anti-​representationalism is an ideological constraint on the plurality of scientific explanations. Furthermore, we developed a new notion of situated mental representations according to which a mental representation is individuated by three aspects, namely vehicle, content, and format, relative to a relevant minimally flexible behavior to be explained. Thus, mental representations do not have their content intrinsically but only relative to an explanatory level; and this content can be characterized by (at least) three formats of representation. Egan (2014) argues that any content that is not intrinsically attached to a mental representation cannot be taken scientifically seriously but is only a pragmatic gloss. We argue that all types of content suffer from the problem of underdetermination and thus there is never an intrinsic connection between content and mental representation. But we argue that content-​based explanations can nevertheless be scientific. We need to and can distinguish nonexplanations from explanations of behavior, and in the remaining package of explanations we have a great variety of high-​level and low-​level explanations with different advantages. Low-​level explanations involving specific mechanisms are typically much more constrained, while high-​level

Situated Mental Representations  209 explanations involving mental representations are typically much more general: the level of generalization can be described in more detail with our notion of representational format. Thus, mental representations can be part of scientific explanations if they are characterized by vehicle, format, and content relative to a flexible behavior of a cognitive system. Mental representations understood in such a way are dependent on the explanatory level of interest, allow for multiple realization in a cluster of vehicles and thus have to be characterized as nonstatic and flexible. If we want to account for mental representations as used in scientific explanations and realized by neural correlates in biological systems, then we have to give up the traditional Fodorian view of rigid symbolic mental representations. At the same time, we do not have to throw out the baby with the bathwater and accept anti-​representationalism. Instead we offer a notion of mental representations that enables them to take part in scientific explanations while they are real, nonstable, use-​dependent, and situated. They are dependent on the levels of explanation, allowing for multiple realization and modifications of the realization basis due to learning and plasticity. Furthermore, such mental representations are a necessary part of the progress of cognitive science in bringing bottom-​up and top-​down explanations of the same phenomenon closer together.

Acknowledgments

For helpful critical comments on earlier versions, we would like to thank especially Robert Matthews, as well as Edouard Machery and Robert Rupert. Furthermore, we received helpful criticism from Matej Kohár and Beate Krickel. We would like to thank the German Research Foundation (DFG), which supported this research in the context of funding the Research Training Group "Situated Cognition" (GRK 2185/1) / gefördert durch die Deutsche Forschungsgemeinschaft (DFG; GRK 2185/1).

References Bell, V., and Halligan, P. W. 2013. The Neural Basis of Abnormal Personal Belief. In F. Krüger, and J. Grafmann (eds.), The Neural Basis of Human Belief Systems, 191–​224. Hove, East Sussex: Psychology Press.

210  What Are Mental Representations? Bjerknes, T. L., Dagslott, N. C., Moser, E. I., and Moser, M. B. 2018. Path Integration in Place Cells of Developing Rats. PNAS 115 (7): 1637–​1646. Burke, S. N., Maurer, A. P., Hartzell, A. L., Nematollahi, S., Uprety, A., Wallace, J. L., and Barnes, C. A. 2012. Representation of Three-​Dimensional Objects by the Rat Perirhinal Cortex. Hippocampus 22 (10): 2032–​2044. Burns, J. M., and Swerdlow, R. H. 2003. Right Orbitofrontal Tumor with Pedophilia Symptom and Constructional Apraxia Sign. Archives of Neurology 60: 437–​440. https://​ doi.org/​10.1001/​archneur.60.3.437. Canli, T., Sivers, H., Whitfield, S. L., Gotlib, I. H., and Gabrieli, J. D. E. 2002. Amygdala Response to Happy Faces as a Function of Extraversion. Science 296: 2191. Carey, S. 2011. The Origin of Concepts. Oxford: Oxford University Press. Clayton, N. S., and Dickinson, A. 1998. Episodic-​Like Memory during Cache Recovery by Scrub Jays. Nature 395: 272–​274. Crystal, J. D. 2013. Remembering the Past and Planning for the Future in Rats. Behavioural Processes 93: 39–​49. Dretske, F. 1988. Explaining Behavior:  Reasons in a World of Causes. Cambridge, MA: MIT Press. Egan, F. 2014. How to Think about Mental Content. Philosophical Studies 170 (1): 115–​135. Fang, W. 2018. The Case for Multiple Realization in Biology. Biology & Philosophy 33: 3, https://​doi.org/​10.1007/​s10539-​018-​9613-​7. Feinstein, J. S., Adolphs, R., Damasio, A., and Tranel, D. 2011. The Human Amygdala and the Induction and Experience of Fear. Current Biology 21: 34–​38. Feinstein, J. S., Buzza, C., Hurlemann, R., Follmer, R. L., Dahdaleh, N. S., Coryell, W. H., Welsh, M. J., Tranel, D., and Wemmie, J. A. 2013. Fear and Panic in Humans with Bilateral Amygdala Damage. Nature Neuroscience 16 (3): 270–​272. Fodor, J. 1980. Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology. Behavioral and Brain Sciences 3 (1): 63–​109. Fodor, J. 1987. Psychosemantics:  The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: MIT Press. Fodor, J. 1994. Fodor’s Guide to Mental Representation. In S. Stich (ed.), Mental Representation: A Reader, 9–​33. Cambridge, MA: Blackwell. Gallistel, C. R. 1990. The Organization of Learning. Cambridge, MA:  MIT Press /​ A Bradford Book. Gallistel, C. R. 2017. Learning and Representation. In J. Byrne (ed.), Learning and Memory: A Comprehensive Reference, 141–​154. Amsterdam: Elsevier. Gigerenzer, G. 2000. Adaptive Thinking: Rationality in the Real World. New York: Oxford University Press. Haugeland, J. 1991. Representational Genera. In W. Ramsey, S. Stitch, and D. Rumelhart (eds.), Philosophy and Connectionist Theory, 61–​89. Hillsdale, NJ: Erlbaum. Hübener, M., and Bonhoeffer, T. 2014. Neuronal Plasticity: Beyond the Critical Period. Cell 159 (4): 727–​737. Hutto, D. D., and Myin, E. 2013. Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press. Hutto, D. D., and Myin, E. 2017. Evolving Enactivism:  Basic Minds Meet Content. Cambridge, MA: MIT Press. Jezek, K., Henriksen, E. J., Treves, A., Moser, E. I., and Moser, M. B. 2011. Theta-​Paced Flickering between Place-​Cell Maps in the Hippocampus. Nature 478 (7368): 246–​249. https://​doi.org/​10.1038/​nature10439.

Situated Mental Representations  211 Kahneman, D., Treisman, A., and Gibbs, B. J. 1992. The Reviewing of Object-​Files: Object-​ Specific Integration of Information. Cognitive Psychology 24: 174–​219. Káli, S., and Dayan, P. 2004. Off-​line Replay Maintains Declarative Memories in a Model of Hippocampal-​Neocortical Interactions. Nature Neuroscience 7: 286–​294. Krickel, B. 2018. The Mechanical World:  The Metaphysical Commitments of the New Mechanistic Approach. Cham, Switzerland: Springer. Menzel, R., and Eckoldt, M. 2016. Die Intelligenz der Bienen:  Wie sie denken, planen, fühlen und was wir daraus lernen können. 2nd ed. Munich: Knaus. Millikan, R. G. 1996. Pushmi-​Pullyu Representations. In J. Tomberlin (ed.), Philosophical Perspectives, vol. 9, 185–​200. Atascadero, CA: Ridgeview Publishing. Millikan, R. G. 2009. Biosemantics. In B. McLaughlin, A. Beckermann, and S. Walter (eds.), The Oxford Handbook of Philosophy of Mind, 394–​406. New  York:  Oxford University Press. Moser, E. I., Kropff, E., and Moser, M. B. 2008. Place Cells, Grid Cells, and the Brain’s Spatial Representation System. Annual Review of Neuroscience 31: 69–​89. Neander, K. 2017. A Mark of the Mental. Cambridge, MA: MIT Press. Newen, A. 2017. What Are Cognitive Processes? An Example-​Based Approach. Synthese 194 (11): 4251–​4268. https://​doi.org/​10.1007/​s11229-​015-​0812-​3. Newen, A., and Cuplinskas, R. 2002. Mental Causation:  A Real Phenomenon in a Physicalistic World without Epiphenomenalism or Overdetermination. Grazer Philosophische Studien 65: 139–​167. Newen, A., Welpinghus, A., and Juckel, G. 2015. Emotion Recognition as Pattern Recognition: The Relevance of Perception. Mind & Language 30 (2): 187–​208. Packard, M. G., and McGaugh, J.  L. 1996. Inactivation of Hippocampus or Caudate Nucleus with Lidocaine Differentially Affects Expression of Place and Response Learning. Neurobiology of Learning and Memory 65: 65–​72. Panoz-​Brown, D., Corbin, H. E., Dalecki, S. J., Gentry, M., Brotheridge, S., Sluka, C. M., Jie-​En Wu, J. E., and Crystal, J. D. 2016. Rats Remember Items in Context Using Episodic Memory. Current Biology 26: 2821–​2826. Putnam, H. 1975. The Nature of Mental States. In Mind, Language, and Reality, 429–​440. Cambridge: Cambridge University Press. Ramsey, W. M. 2007. Representation Reconsidered. New York: Cambridge University Press. Ramsey, W. M. 2017. Must Cognition Be Representational? Synthese 194 (11): 4197–​4214. Schulte, P. 2012. How Frogs See the World: Putting Millikan’s Teleosemantics to the Test. Philosophia 40 (3): 483–​496. Seed, A. M., Emery, N., and Clayton, N. 2009. Intelligence in Corvids and Apes: A Case of Convergent Evolution? Ethology 115: 401–​420. Sober, E. 1999. The Multiple Realizability Argument against Reductionism. Philosophy of Science 66 (4): 542–​564. Spelke, E. S. 2000. Core Knowledge. American Psychologist 55 (11): 1233–​1243. Starzak, T. 2017. Interpretations without Justification:  A General Argument against Morgan’s Canon. Synthese 194 (5): 1681–​1701. Synofzik, M., Vosgerau, G., and Newen, A. 2008:  Beyond the Comparator Model:  A Multifactorial Two-​Step Account of Agency. Consciousness and Cognition 17: 219–​239. Tolman, E. C. 1948. Cognitive Maps in Rats and Men. Psychological Review 55: 189–​208. Tolman, E. C., and Honzik, C. H. 1930. “Insight” in Rats. University of California Publications in Psychology 4: 215–​32.

212  What Are Mental Representations? Tomasello, M. 2003. Constructing a Language:  A Usage-​ Based Theory of Language Acquisition. Cambridge, MA: Harvard University Press. Tomasello, M. 2008. Origins of Human Communication. Cambridge, MA: MIT Press. Tsao, A., Sugar, J., Lu, L., Wang, C., Knierim, J. J., Moser, M. B., and Moser, E. I. 2018. Integrating Time from Experience in the Lateral Entorhinal Cortex. Nature 561 (7721): 57–​62. Vosgerau, G. 2008. Adäquatheit und Arten Mentaler Repräsentationen. Facta Philosophica 10: 67–​82. Vosgerau, G. 2009. Mental Representation and Self-​ Consciousness:  From Basic Self-​ Representation to Self-​Related Cognition. Paderborn: mentis. Vosgerau, G. 2011. Varieties of Representation. In A. Newen, A. Bartels, and E. Jung (eds.), Knowledge and Representation, 185–​209. Stanford, CA:  CSLI Publications; Paderborn: mentis. Vosgerau, G. 2018. Vehicles, Contents and Supervenience. Filozofija i drustvo (Philosophy and Society) 29: 473–​488. Vosgerau, G., and Soom, P. 2018. Reduction without Elimination: Mental Disorders as Causally Efficacious Properties. Minds and Machines 28 (2): 311–​330. Wolpert, D. M., and Flanagan, J. R. 2001. Motor Prediction. Current Biology 11 (18): R729–​R732.

7 Representational Kinds Joulia Smortchkova and Michael Murez

7.1.  Introduction

Cognitive scientists seek to explain psychological capacities and often appeal to mental representations in doing so. However, not all representations are equally useful to cognitive science, and only some cluster together to form kinds and "carve cognition at its joints." What makes some representational categories natural representational kinds in cognitive science? This is the question we explore in this chapter. Some examples of candidate representational kinds within cognitive science include essentialist representations, prototype representations, exemplar representations, object-files, mental maps, analog magnitude representations, action-representations, structural geon-based object representations, visual working memory representations, quasi-pictorial representations, face representations in the FFA (fusiform face area), mental models, body schema representations, singular terms in the language of thought, bug representations in the frog's brain, dominance hierarchy representations in baboons, naive theories, core cognition representations, syntactic tree representations, Universal Grammar, content (or address-)addressable representations in memory, edge representations, etc. The status of each of these categories as a genuine representational kind is controversial. Some will strike the reader as more plausible or as better examples of representational kinds than others. We don't wish to take a firm stand on any case listed here in particular, but merely appeal to them in order to help fix the reference of what we mean by "representational kind" within the context of cognitive science. The issue that interests us is not what a representation is, nor whether the notion of "mental representation" in cognitive science itself picks out a natural kind (Ramsey 2007; Shea 2018). Rather, assuming there are mental representations, we are interested in what it is for a particular class of them

214  What Are Mental Representations? to form a natural kind (we will often refer simply to “representational kinds,” implying that they are natural or real). We are also not interested in entering into debates about whether folk psychological concepts (such as belief) aim at or succeed in picking out natural kinds (Jackson and Pettit 1990; Lycan 1988). Nor are we concerned with kinds of representations that are not mental (for instance, kinds of external representational artifacts, such as photographs). Our concern is with the notion of representation as it figures in cognitive science. This question must also be distinguished from the question of what in general constitutes a kind in cognitive science. There are presumably many varieties of kinds within cognitive science, some of which are not representational. For example, “innateness” has been argued to be a cognitive natural kind (Khalidi 2016). While there is a class of innate representations, there might also be innate cognitive structures, processes, or mechanisms that are not representational in nature.1 There are two parts to the notion of a “representational kind.” First, its members must be mental representations. Second, they must form a kind. We argue that reflection on these two components of the notion, together with certain widely shared assumptions within cognitive science, lead to our central thesis: representational kinds are multilevel. The chapter will unfold as follows. In section 7.2 we introduce a contrast between a representational category and a natural representational kind, and argue that the difference between them has to do with a contrast in their respective explanatory depths. In section 7.3, we outline a notion of non-​classical natural kindhood which can be applied to representations. In section 7.4 we flesh out the multilevel proposal and the relevant notion of “depth” by appealing to the nature of mental representations and their role in multilevel explanations. In section 7.5 we refine the multilevel thesis. In section 7.6 we reply to objections.

7.2. A Contrast Case A first way to get at the notion of “representational kind” is by noting an intuitive contrast between two exemplary categories of representations:  the 1 Note that it wouldn’t follow merely from the fact that “innate” is a cognitive kind, that “innate representation” is a representational kind, even if it turns out that “representation” is also a cognitive kind.

class of wombat-representations, on the one hand, and the class of action-representations, on the other (Gallese et al. 2002). The two classes have much in common. Both are characterized by the domain of entities in the world which they represent: wombat-representations refer to the domain of wombats, while action-representations refer to the broader domain of actions performed by living beings. Thus, both classes are clearly characterized semantically, or by their intentional content. Based on this, one might hold that there are necessary and sufficient conditions for membership in each class. An entity is a wombat-representation just in case it correctly applies to all and only wombats. Similarly, an entity is an action-representation just in case it correctly applies to all and only (visible) actions. Yet the former is not a good candidate for representational kindhood, while the latter is. The study of wombat-representations is of little interest to cognitive science. Intuitively, such a class is not likely to correspond to a joint in (cognitive) nature, and so is not a worthwhile target of cognitive scientific inquiry. By contrast, there is a dynamic and successful research program centered around the investigation of action-representations. What distinguishes the two cases? A natural reaction might be that the difference lies entirely in the domains that each class of representations targets, as opposed to something about the representations themselves. Actions are a very general category of things, which are of evolutionary and psychological significance to humans and other animals. By contrast, wombats are an idiosyncratic domain, which is unlikely to be of much interest (unless you're an ethologist specialized in wombats). This is obviously true, but it misses the point. There are other very general and significant domains, such as food2 (Lafraire et al. 2016), or supernatural entities3 (Atran 2002), of which humans undoubtedly form representations.

2 While food is an important category for human survival, we need to distinguish a category of representations from the category they represent. In order for food representations to form a kind, it is not enough for food to be an interesting and important category; we would need to find properties that representations of food share qua representations. Yet food representations lack unity, from a cognitive perspective. For instance, young infants do not seem to possess a dedicated learning mechanism for food items (while they possess dedicated learning mechanisms for tracking agents and small quantities, for instance), and adults distinguish between edible and inedible items via a motley collection of domain-general mechanisms, which include texture, color, and smell (Shutts et al. 2009).
3 Beliefs about supernatural beings are similar to food representations. According to Atran (2002), there is no special cognitive system for thinking about religion or representing supernatural beings. Religious beliefs are instead underpinned by "a variety of cognitive and affective systems, some with separate evolutionary histories, and some with no evolutionary history to speak of. Of those with an evolutionary history, some parts plausibly have an adaptive story, while others are more likely by-products" (Atran 2002, 265). Religious beliefs are not a natural kind, and representations of supernatural entities are not kinds either, but are distributed across different systems, have different formats, and have different evolutionary histories (113).

Yet upon investigation, these classes of representations do not appear, by cognitive scientists' lights, to correspond to natural representational kinds (Atran 2002; Shutts et al. 2009). Members of these classes lack a sufficient degree of unity with respect to their representational properties, even if they are domain-specific. Moreover, not all representational kinds are characterized by their specific target domains. Many plausible candidates for representational kindhood range across multiple referential domains (e.g., essentialist representations). Other plausible candidate representational kinds might not be characterized by the domain of things they represent, but rather in terms of other properties, such as their format (e.g., analog representations). And there are debates about whether representations of the very same domain constitute a representational kind, e.g., whether the different bodies of information that pick out the same category form a representational kind, corresponding to "the concept" of that category (Machery 2009).

Consider an analogy. Dust and gold differ both in their significance for humans, and with respect to their natural kindhood. However, what makes it the case that dust is not a natural kind has little to do with its lack of significance: what prevents dust from being a natural kind is simply that what instances of dust have in common that makes them dust is something entirely superficial. Dust is a nominal kind. Whatever appears dusty, or whatever we are disposed to categorize as dust (at least in normal circumstances) just is dust. There can be fool's gold—stuff that seems like it is gold but turns out not to be upon further investigation. Yet there cannot be "fool's dust," nor "shmust" on Twin Earth (Dennett 1994). This is because dust has no underlying nature, or essence. In turn, there is no science of dust as such.

The intuition we hope to elicit, and build upon in what follows, is that wombat-representations are to action-representations what dust is to gold. The class of wombat-representations is superficial, whereas there is something deeper that action-representations have in common, and which makes them fruitful targets for cognitive scientific inquiry. As we see it, the important difference is that the only thing that wombat-representations have in common is their intentional or semantic content. Wombat-representations

Representational Kinds  217 may constitute an intentional (or semantic) category. But intentional categories are not necessarily representational kinds, which are those that matter from the perspective of cognitive science. Action-​representations’ status as a representational kind depends on it being possible to discover other sorts of properties that this particular class of representations, which we initially characterized merely in terms of their target domain, non-​accidentally share. First, there are specific psychological effects associated with the category. For instance, in the apparent body motion paradigm, when people observe a hand action moving across the body, they perceive the longer but anatomically plausible path, rather than the shorter but anatomically implausible path (Shiffrar 2008). Also, action-​ representations are stored separately in visual working memory from colors, locations, and shapes, as evidenced by action-​specific memory capacity limits (Wood 2007). Furthermore, the category exhibits unity at the neural level. Action-​representations seem to be reusing the same neural mechanisms that are used for action production (Rizzolatti and Sinigaglia 2007). We can study their neural correlates, as they elicit activations in the superior temporal sulcus (STS) and in the premotor cortex area F5 (Jellema and Perrett 2007). Note that several of the interesting, unobvious properties of action-​representations are possessed not merely in virtue of what the representations are about. None of the aforementioned properties of action-​representations can be discovered by reflecting on their content (or by considering properties directly related to their content, such as entailment-​patterns). In our view, this hints toward a central aspect of representational kinds: representational kinds in cognitive science are inherently multilevel. This connects to one of the central features of explanations in cognitive science:  merely intentional categories belong to horizontal explanations, while representational kinds belong to vertical explanations, and figure in the functional decomposition of a capacity (Cummins 1983). This is what lends certain classes of representations the “depth” which enables them to constitute genuine kinds, despite the fact that—​by contrast with gold—​representations plausibly lack classical essences. We will discuss multilevelness in section 7.4, but first, in order to set the stage for this view of representational kinds, we need to say some more about the non-​ classical view of natural kinds that it presupposes.


7.3.  Non-​classical Kindhood Our aim is not to enter into debates on natural kinds in general outside of cognitive science, nor it is to defend a particular view. It is rather to sketch an independently plausible, broader approach to natural kinds that is compatible with our specific proposal for representational kinds. Natural kinds figure in scientific explanations—​ traditionally, natural kinds are taken to be the categories which feature in natural laws—​and in scientific predictions and investigations. It is often said that natural kinds are “inductively deep” (Carey 2009; Quine 1969). This means that a natural kind supports projection: there are numerous true, non-​obvious generalizations that can be made about its members, which share scientifically interesting properties because there is an underlying factor that they have in common, rather than by accident. According to the classical account of natural kinds (Kripke 1980; Putnam 1975), this underlying factor is an essential property, which also provides necessary and sufficient conditions for kind membership, in addition to causing members to exhibit a shared syndrome of properties—​a cluster of generally more superficial features than kind-​ members tend to exhibit on average. Syndrome properties are epistemically useful in initial identifications of potential members. They may even be practically indispensable: one typically postulates a kind prior to grasping its essence, which is psychologically represented only by a “placeholder” (Gelman 2004). Yet from a metaphysical perspective, syndromes are only contingently related to kind-​membership. A member of a kind may fail to satisfy its syndrome to any significant degree, while a nonmember may satisfy it perfectly. Ordinary users of kind-​concepts tacitly know this, as shown by research on “psychological essentialism” (Gelman 2004). Kind-​membership depends on possession of the potentially hidden essence, rather than superficial resemblance. So shared kindhood is inferable from shared essence even in the absence of superficial resemblance. In turn, knowledge that superficially dissimilar entities are of a kind enables one to justifiably expect them to bear some deeper commonalities than otherwise evident—​which enables kind-​concepts to play a major heuristic role in guiding scientific inquiry. One well-​known problem with the classical account of kinds is that it fails to apply to special sciences, such as psychology (Dupre 1981; Fodor 1974; Millikan 1999). For instance, it seems implausible that there is a classical

Representational Kinds  219 essence that all and only action-​representations share. If all kinds were classical, there would presumably be no representational kinds in cognitive science whatsoever.4 Note that this is true even if membership in the category can be defined strictly, by reference to its content. Some representations can correctly apply to all and only actions (i.e., have the content: being an action) without being members of the relevant representational kind. For example, imagine someone were to design an artificial intelligence with the ability to detect actions, yet whose action-​representation capacities were based on entirely different computational mechanisms from those found in the human mind/​ brain: for instance, the artificial intelligence might detect actions thanks to a lookup table which accessed a huge database with stored templates for every single action it ever encountered (Block 1981).5 In such a case, cognitive scientists would not classify the artificial intelligence’s action-​recognition capacity as belonging to the same representational kind as our own. We will return to the issue of what—​beyond intentional content—​unites representational kinds. But whatever it is, there is little hope that it will be a classical essence. Presumably, there are no interesting necessary and sufficient conditions for being an “action-​representation” in the sense in which this expression picks out a representational kind for cognitive scientists. Luckily, there are alternatives to the classical account of kinds. Some accounts, such as Boyd’s influential homeostatic property cluster (HPC) view (Boyd 1991), preserve the idea that kinds play a privileged epistemic and scientific role without postulating essences. Many of our scientific practices tacitly rely on, as well as support, the contrast between merely nominal kinds (those for which syndrome-​possession constitutes membership), and

4 An anonymous reviewer suggested that there might be representational kinds that possess a classical essence. One such candidate kind is “perceptual demonstrative”: there might be an essence (not in the sense of a microstructure, as for chemical natural kinds, but in the sense of a sufficient and necessary condition) for how perceptual demonstratives have their reference fixed. We are not convinced there is such an essence. But even if we were to discover such an essence for the class of perceptual demonstratives, it’s unclear to what extent this class would be a representational kind in cognitive science (the focus of the chapter), rather than a kind outside the domain of cognitive science (for instance, in metasemantics or normative epistemology). 5 An anonymous reviewer suggests that the brain could use something like lookup-​based convolutional networks to categorize actions, and that this is not a possibility that we can rule out a priori. We agree that if this turns out to be the case, then we would classify the AI’s action-​recognition system as deploying the same representational kind as our own. In this example we suppose that the lookup table AI and the brain do not use the same computations, to make the point that when computations are radically different, classificatory practices in cognitive science tend to split the kind.

natural ones—even within the special sciences where classical essences are plausibly absent.6 According to HPC, natural kinds are the result of an interplay between human classificatory interests and mind-independent joints in nature. HPC thus seeks to combine a pragmatist conception of kindhood, as partially dependent on human practices and concerns, and a realist approach, which captures the traditional intuition that natural kinds mark non-accidental and non-conventional divisions in the world. As in the classical account, HPC kinds are defined by two factors: co-occurring property clusters (syndromes) that members tend to instantiate more than nonmembers; and a second factor which accounts for the non-accidentality of resemblance between members. By contrast with classical essences, however, property clustering is grounded in a collection of "homeostatic mechanisms." These mechanisms are dynamic (they may evolve over time) and context-dependent (they may cause the syndrome only in certain "normal" environments, rather than in all circumstances). For Boyd, the relevant notion of "mechanism" is a broad, inclusive one, not to be confused with the notion of "mechanism" recently popular elsewhere in philosophy of cognitive science (Machamer, Darden, and Craver 2000). Boydian (non-classical) "essences" need not be reducible to more basic interacting entities and their activities. Nor do they need to be intrinsic to their members. For instance, Boydian kinds can possess historical essences—meaning that property clustering is sustained over time by mechanisms of replication or information transfer between members, some of which may be external to kind-members. Though the classical account has its defenders (Devitt 2008), and is sometimes still taken to apply to "microstructural" kinds in the fundamental natural sciences,7 non-classical accounts like Boyd's are dominant in philosophy of the special sciences. Indeed, Boyd's view and other similar ones (Millikan 1999) are designed to allow for functional kinds (which are defined in terms of extrinsic causal interactions) to potentially count as a subspecies of natural ones, and do not require that kinds necessarily figure in law-based explanations. As a result, non-classical views are attractive to defenders of
6 While we're not discussing in detail how classical and non-classical kinds differ, we mention here some of the main dissimilarities: non-classical kinds admit of imperfections in the members of the kind (for example, Sphynx cats are still cats, even if they don't have fur), and they have fuzzy boundaries between categories and admit of overlaps between members of different kinds (such as mules, which are the outcome of a cross between a jack and a mare).
7 Though see Needham 2002.

broadly functionalist conceptions of the mind (though see Ereshefsky and Reydon 2015 for a contrary opinion). In particular, such views are compatible with the widespread assumption that psychological kinds are multiply realized (Boyd 1999).8 Henceforth, we will assume that there are representational kinds, and so that some version or other of the non-classical account of kindhood will apply to them (for our purposes, we will not commit to one specific non-classical account of kinds). However, it remains unclear how exactly to apply a non-classical account to representational kinds. Although we have contrasted intentional categories, which are merely nominal or superficial from the perspective of cognitive science, with genuine representational ones, we have yet to explain in any detail what gives representational kinds their characteristic "depth." In the next section, we suggest that the answer to this question is to be found by reflecting on the nature of mental representations, and their role in vertical, multilevel explanation—which is the paradigmatic form of explanation that takes place in cognitive science.
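Before turning to that question, the contrast drawn in this section can be made vivid with a deliberately toy sketch in Python. It is purely illustrative: the function names, the predicates, and the 0.6 threshold are placeholders, and nothing in the argument hangs on them. The point is only that classical membership is settled by a single necessary and sufficient condition, whereas cluster-based membership tolerates imperfect instantiation of the syndrome (so that, as in note 6, a furless Sphynx cat still counts as a cat).

# Toy contrast between classical and cluster-based (HPC-style) kind membership.
# Predicates, threshold, and example are illustrative placeholders only.

def classical_member(item, essence):
    """Classical kinds: possessing the essence is necessary and sufficient."""
    return essence(item)

def hpc_member(item, syndrome, threshold=0.6):
    """HPC-style kinds: membership goes with instantiating enough of the
    co-occurring property cluster; no single property is strictly required."""
    score = sum(1 for has_property in syndrome if has_property(item)) / len(syndrome)
    return score >= threshold

cat_syndrome = [
    lambda x: x.get("has_fur", False),
    lambda x: x.get("meows", False),
    lambda x: x.get("retractable_claws", False),
    lambda x: x.get("feline_ancestry", False),
]

sphynx_cat = {"has_fur": False, "meows": True, "retractable_claws": True, "feline_ancestry": True}

# An imperfect member still counts on the cluster view...
print(hpc_member(sphynx_cat, cat_syndrome))                    # True
# ...whereas treating any single syndrome property as an essence would wrongly exclude it.
print(classical_member(sphynx_cat, essence=cat_syndrome[0]))   # False

Nothing in this sketch captures the second HPC factor, the homeostatic mechanisms that make the clustering non-accidental; that is precisely the part that resists this sort of formal summary.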

7.4.  Representational Kinds as Multilevel Merely intentional categories—​like the class of wombat-​representations—​ illustrate the fact that shared intentional content is not sufficient for representational kindhood in cognitive science (even if there might be intentional kinds outside of cognitive science; see section 7.6). Of course, there will be a cluster of other intentional properties that tend to be co-​instantiated with the property of having a given intentional content. For instance, people who possess wombat-​representations probably tend to use them to make certain inferences, such as moving from the premise “X is a wombat” to the conclusion that “X is a mammal” or “X is brown.” But this is a consequence of the meaning of “wombat”—​or at least of widely shared world-​knowledge about wombats. Other sorts of properties of the representational category are hard to derive from its content. Indeed, it is plausible that there are simply no non-​ intentional properties that non-​accidentally cluster with the aforementioned intentional ones. 8 The classical account of natural kinds is incompatible with the multiple realizability thesis, because psychological kinds are defined not in terms of their internal structure (what they are made of), but in terms of their functional properties (what they do). These properties can be implemented in different microstructures and materials.

For action-representations, on the other hand, what we can learn about the category is not exhausted by its intentional content. Rather, intentional content plays a role akin to ostension in the case of physical kinds, like gold: it fixes the reference of the potential representational kind and guides the (largely empirical) search for other properties which are not obvious consequences of its having the content that it has. What, then, makes a cluster of representations in cognitive science a kind rather than a mere category? Taking the example of action-representations in section 7.2 to be representative, we suggest that talk of "representational kinds" is justified when the co-clustering of properties is multilevel, that is, when clusters of properties associated with the kind are discovered at different levels and are used in vertical, decompositional, explanations. Thus, our master-argument in favor of the multilevelness of representational kinds stems from the role played by natural kinds in scientific endeavors, and from a mainstream approach to explanation in cognitive science:
1. Natural kinds play a central role in the epistemic project of scientific disciplines: they are used to project properties when making predictions, and for explaining systematic observations.
2. The epistemic project of cognitive science is to decompose a complex cognitive capacity into simpler components and their relations. This decomposition is articulated at multiple levels of explanation, and is iterative: a complex cognitive capacity is composed of subsystems that can in turn be broken up into further sub-subsystems until no more decomposition is possible.
3. From 1 and 2, the best account of natural representational kinds is one that allows them to play a role in these multilevel decompositional explanations, with different aspects of representations lining up at different levels of explanation.
The picture of the mind suggested by such an approach should be familiar to the reader: according to the computational-representational theory of mind (CRTM), the mind is a computational and representational system (Fodor 1975), which accounts for thinking by viewing mental processes as computational processes operating over representations, with "formal" or "syntactic" properties lining up with semantic properties. At the semantic level, the (logical) thinker moves rationally from premises to conclusion in

a truth-preserving way. This semantic relation between mental states corresponds, at the syntactic level, to a causal transition, ensured by appropriate formal relations between thought-vehicles. Token mental representations are thus concrete, though potentially spatiotemporally distributed, particulars with a dual nature: on the one hand, they have intentional, semantic properties; on the other hand, they possess non-intentional, non-semantic, vehicular properties. Mental representations' vehicular properties include not only syntactic ("formal") properties, but also physical properties which correspond to the physical structures that realize (or "implement") the representations (for humans, neurons and neural configurations). We do not wish to endorse any specific approach to CRTM, such as the classic approach put forward by Fodor and Pylyshyn: our account is compatible with connectionist as well as classical views of computations. What we do endorse is the idea that both semantic and vehicular properties of mental representations are in the domain of cognitive science, an idea at the core of CRTM (in any form). In CRTM mental representations are individuated by an appeal to both contents and vehicles: classes of representations that are alike with respect to their contents might not be alike with respect to the properties of their vehicles, and thus play different roles in processing (Shea 2007). This is why merely intentional categories do not necessarily correspond to representational categories of use in cognitive science. Understanding how cognitive scientists explore non-observable vehicular properties of mental representations throws light on the process by which representational kinds are uncovered. Representations' vehicular properties cannot be discovered a priori, nor (usually)9 directly observed. But in many cases they can be inferred from characteristic psychological effects. Following Cummins (1983, 2000, 2010), we distinguish between psychology's primary explananda, that is, cognitive capacities (such as language acquisition or depth perception), which need not be discovered empirically, and secondary explananda, that is, psychological effects, which have
9 An anonymous reviewer casts doubt on this claim. On the one hand, the reviewer notes that there are recent claims suggesting that representations can be directly observed (Thomson and Piccinini 2018) or causally manipulated (Widhalm and Rose 2019). On the other hand, the reviewer doubts that psychological effects are directly observed. Regarding the first point, the proposal in this chapter is compatible with representations sometimes being observed: if and when they are, this would facilitate the discovery of representational kinds. However, direct observation of representations does not seem to be frequent, given the present state of our knowledge. Concerning the second point, we're not arguing that effects are directly observed: effects are uncovered through the usual tools of cognitive science (reaction time, priming, etc.).

to be discovered empirically (for example, the effects of the algorithms and representations employed for language acquisition or for depth perception). The discovered effects are not the endpoints of exploration. Instead, effects need explanations in turn, which appeal to constituents of the cognitive systems (including functional and physical components) and their organization. Again drawing on Cummins, we can illustrate the notion of "psychological effect" as follows. Multiplications can be performed via two types of mechanisms: the first type of mechanism multiplies each digit of one factor by each digit of the other, then adds the results; the second mechanism multiplies by repeated addition, and it represents 3 × 3 as 3 + 3 + 3. The "linearity effect" (i.e., computational cost as a linear function of multiplier size) is only shown by the second type of mechanism: for example, 3 × 6 requires twice as many operations as 3 × 3, and as a result takes twice the time (all other things equal). The linearity effect is characteristic of the particular algorithm through which the semantically specified task of multiplication is accomplished, but it is tangential to what the mechanism does. What it shows is something about how the mechanism works, and what kind of algorithms and representational vehicles it uses.
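The contrast can be made concrete with a toy sketch in Python. It is purely illustrative: the operation counters are a crude stand-in for processing time, and neither routine is offered as a model of how people actually multiply. Both mechanisms compute the same semantically specified function, but only the repeated-addition mechanism exhibits the linearity effect.

# Two toy multiplication mechanisms that compute the same function.
# Operation counts stand in, crudely, for processing cost (e.g., reaction time).

def multiply_partial_products(a, b):
    """Multiply each digit of one factor by each digit of the other, then add."""
    ops, total = 0, 0
    for i, da in enumerate(reversed(str(a))):
        for j, db in enumerate(reversed(str(b))):
            total += int(da) * int(db) * 10 ** (i + j)
            ops += 1  # one single-digit multiplication per digit pair
    return total, ops

def multiply_repeated_addition(a, b):
    """Multiply by repeated addition: 3 x 3 is represented as 3 + 3 + 3."""
    ops, total = 0, 0
    for _ in range(b):
        total += a
        ops += 1  # one addition per unit of the multiplier
    return total, ops

# Only the second mechanism shows the linearity effect: 3 x 6 costs twice as
# much as 3 x 3 under repeated addition, while the digit-by-digit mechanism
# costs the same for both.
for multiplier in (3, 6):
    _, ops_digits = multiply_partial_products(3, multiplier)
    _, ops_repeat = multiply_repeated_addition(3, multiplier)
    print(f"3 x {multiplier}: digit-by-digit = {ops_digits} ops, repeated addition = {ops_repeat} ops")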

Much valuable work in psychology simply demonstrates a certain effect. Ultimately, however, psychology appeals to effects not in their own right, but in order to study processes and properties of the representations involved in various cognitive capacities. One example is the study of memory capacity limits defined in terms of the cost of processing a given number of stimuli. Limits on working memory depend on the number of representations to be retained (typically three to five chunks of stimuli) (Cowan 2010). To return to our previous example, one of the reasons to consider "action-representations" a representational kind is precisely that distinctive capacity limits can be observed for this category of representations. How many action-representations can be stored at one time is entirely incidental to the overall semantic function of standing for actions. But it puts constraints on the type of vehicles, qua vehicles, that can be involved (whether they are stored by the number of objects or by the number of features). Another example is analog magnitude representations, representations of spatial, temporal, numerical, and related quantities (Beck 2015). The main evidence used to uncover the analog format of magnitude representations is Weber's law, according to which the ability to discriminate two magnitudes is determined by their ratio (Jordan and Brannon 2006). The closer the magnitudes are in their ratio, the more difficult it is to distinguish them, until the point (Weber's constant) where they are not discriminable. Again, the presence of this effect shows something about the nature of the representations involved and requires moving beyond the semantically characterized capacity (in this case the capacity to represent approximate quantities). The strategy of decomposing complex capacities into their components, which are then investigated, points toward a central feature of explanations in cognitive science: they are vertical rather than horizontal explanations (Drayson 2012). The difference between the two kinds of explanations is that horizontal explanations explain an event's occurrence by referring to a sequence of events that precede it, while vertical explanations explain a phenomenon synchronically by referring to its components and their relations. Folk psychological explanations are horizontal, because they appeal to sequences of mental events that precede the behavior to be explained. Cognitive psychological explanations are vertical, because a complex capacity is decomposed into simpler components, via functional analysis (Cummins 1975). This does not mean that there are no valuable horizontal explanations of behavior using belief-desire psychology (or more sophisticated elaborations on the belief-desire model, such as one finds in certain areas of social psychology, for example). However, not all explanatorily useful psychological categories are representational kinds. Talk of "representational kinds" occurs and is justified only when explanation takes place vertically, and projection of properties of representations (at least potentially) crosses explanatory levels.10 What is the relevant notion of levels in multilevel explanations? There are many accounts of levels (e.g., Dennett 1978; Marr 1982; Pylyshyn 1984), with Marr's being the most influential. According to Marr, we can study a cognitive system at three levels (and a "complete explanation" of a cognitive phenomenon spans all three):
10 An example is provided by the literature on object perception and cognition (Carey and Xu 2001; Scholl 2001), which led to the postulation of the representational kind "object files." Similar effects were observed both in adults and in infants, such as similar set-size capacity limits, or similar prioritization of spatiotemporal over featural information, leading to adults' and infants' object representations being "lumped" together into a common kind. Carey and Xu argue that "[y]oung infants' object representations are the same natural kind as the object files of mid-level vision [in adults]" (Carey and Xu 2001, 210).

1. At the computational level, the task of the system is identified and redescribed in terms of the information-processing problem that the system has to solve, and the constraints that the system has to satisfy to perform this task are identified (this level is concerned with what the system does and why it does it).
2. At the algorithmic-representational level, the transformation rules (algorithms) and the syntactic properties of the representations that accomplish the task identified at level 1 are specified (including the format of the representations that are the inputs, the intermediaries, and the outputs of the processes).
3. At the implementational level, the physical structures (neural mechanisms) that realize the processes described at level 2 and the neural localization are specified.
Marr's account of levels has been challenged: for example, it has been argued that these are not special levels of explanation, but merely heuristic clusters of questions (McClamrock 1991), that there are intermediate levels between the computational and the algorithmic levels (Griffiths, Lieder, and Goodman 2015), or that the implementational level is not as autonomous as the original account made it out to be (Kaplan 2017). There is also a debate about what exactly Marr's levels are: are they levels of abstraction, levels of organization, levels of analysis, or levels of realization (Craver 2014)? An exhaustive overview of Marr's levels and their different interpretations is a topic for another paper (or book). Here we endorse the view that these are levels of explanation, and that they are identified by the type of questions cognitive scientists might ask in their investigation. We side with the view that Marr's computational level is specified with reference to intentional representational contents (which makes it similar to Pylyshyn's semantic level).11 On the other hand, we do not assume that there are only three levels: our account is open to the inclusion of further levels in addition to the ones mentioned. What matters to our purpose is the close relation between levels and kinds. In many domains, it is fruitful to study cognitive systems as multilevel when trying to uncover whether they are natural kinds. For example, this is the strategy used by Michaelian (2011) for exploring whether memory is a natural kind. For memory systems this implies asking three questions:

11 On the debate about whether Marr's computational level refers to representations and their contents, or to a mathematically specified function, see Shagrir 2010.

(a) Is there an information-processing task common to the relevant memory systems? (computational level)
(b) Is there a procedure for performing that task common to the systems? (algorithmic-representational level)
(c) Is there an implementation of the procedure common to the systems? (implementational level)
A lack of unity at each of the levels, and an impossibility of projecting across levels, implies that the memory system is not a cognitive natural kind (if Michaelian is right). Though this example concerns a cognitive system, it highlights an important feature of inquiry in cognitive science more generally: natural kinds in cognitive science are plausibly necessarily multilevel. Similarly, for representational kinds, clusters of non-accidentally co-occurring properties need to be discovered at multiple levels in order for there to be kindhood. As previously mentioned, representations have a content and a vehicle with syntactic and physical properties. These properties of representations entail that they can be studied at different levels. The alignment between properties of representations and levels might not be perfect, but it provides a heuristic for the discovery of representational kinds. At the computational level we're interested in finding out what the representations represent, and how their contents are connected with other mental states and with behaviors. For example, at the computational level the action detection system's aim is to detect actions in the environment. At the algorithmic-representational level, scientists discover the features of the processes that the representations enter into, as well as some properties of the vehicles, such as their format. Here, for example, scientists bring to light the processes implicated in the differentiation between actions and non-animated movements, as well as the format of action-representations. At the implementational level, scientists look for the features of the neural structures underlying action-representations, as well as for their localization in the brain. Our claim is that for a representational category to constitute a kind, it must allow for cross-level projection of properties. This strongly suggests that the representations correspond to a joint in (cognitive) nature: they have more in common than mere content, and further resemblance between them is unlikely to be accidental. To return to the example we used earlier, the semantic syndrome of action-representations provides a first step in the positing of a potential representational kind: there is evidence

for domain-specific detection of stimuli that involve visible movements. But action-representations also show effects that are due to non-semantic properties of the representations involved, which we reviewed in section 7.1. At this point one could object (as one anonymous reviewer does) that there is no real contrast between wombat-representations and action-representations, because one can always find a cluster of properties at different levels, even for wombat-representations. As a result, our account over-generates. This objection can be formulated in two ways. In the first version of the objection, when I deploy a wombat-representation, there are some activations in my brain, as well as some psychological processes (such as categorization) that occur at the same time. For example, I can imagine a picture of a wombat and I can think "Wombats are cute." In both of these cases we can find syntactic and implementational properties. Why not say that they cluster together to form the (generic) kind "wombat-representation"? We contend that in this case the features would be too heterogeneous for natural kindhood: very few shared features would be found at the algorithmic-representational and implementational levels. Indeed, such a proposal would be subject to the same arguments Machery uses against prototypes, exemplars, and theories belonging to the same natural representational kind "concept" (Machery 2009). In the second version of the objection, the focus is only on wombat-representations that are prototypes (a similar argument could be formulated for exemplars or theories). In this case, there are certainly properties that cluster together at the algorithmic-representational and implementational levels. Yet, in this case the properties associated with wombat-prototypes are also shared with other "basic-level categories" (basic for thinkers familiar with the Australian fauna), such as kiwi-prototypes, kangaroo-prototypes, and so on. The representational kind here is not wombat-representation, but "prototype category-representation," of which wombat-prototypes are a subcategory, similarly to how Dalmatian, Pomeranian, and chow chow are all different breeds of the same kind "dog." By contrast, action-representations show a significant clustering of properties at different levels: intentional properties at the semantic level, special format and effects at the syntactic level, and characteristic patterns of brain activations at the implementational level. Moreover, this cluster of features is also distinct from other representations in the vicinity, such as representations of non-action movements, including random spatiotemporal trajectories (see Smortchkova 2018).

Thus, contrary to wombat-representations, action-representations in cognitive science are good candidates for kindhood. They exhibit commonalities which allow them to be identified at various levels of inquiry. They form a kind not merely on the basis of their intentional content, but on the basis of the discovery of psychological effects and physical-implementational properties, which provide evidence of multilevel resemblance between representations of the relevant sort—in other words, they exhibit the relevant sort of "depth" which justifies talk of "kinds," despite the absence of a classical essence.
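The contrast can be pictured schematically with a toy sketch in Python. It is purely illustrative: the property labels are placeholders loosely echoing the examples above, not empirical claims, and the two-level threshold is arbitrary. The check simply asks whether non-trivial shared properties turn up at more than one level of description.

# Schematic picture of the multilevel criterion. Property labels are
# illustrative placeholders, not empirical claims.

LEVELS = ("computational", "algorithmic-representational", "implementational")

candidates = {
    "action-representations": {
        "computational": {"content: actions", "domain-specific detection of movements"},
        "algorithmic-representational": {"distinctive capacity limits", "characteristic format"},
        "implementational": {"characteristic patterns of brain activation"},
    },
    "wombat-representations": {
        "computational": {"content: wombats", "inferences licensed by world-knowledge"},
        "algorithmic-representational": set(),  # nothing clusters beyond what concepts in general share
        "implementational": set(),
    },
}

def looks_like_representational_kind(profile, min_levels=2):
    """A category is a candidate kind only if shared properties are found
    at more than one level; mere sameness of content is not enough."""
    populated_levels = sum(1 for level in LEVELS if profile.get(level))
    return populated_levels >= min_levels

for name, profile in candidates.items():
    verdict = "candidate kind" if looks_like_representational_kind(profile) else "merely intentional category"
    print(f"{name}: {verdict}")

What such a check cannot capture, of course, is the further requirement that the cross-level clustering be non-accidental; that is an empirical matter rather than something read off a table of properties.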

7.5. How Many Levels of Explanation Are Needed? A strong version of the multilevel thesis would require non-accidental clustering of properties across all levels, i.e., that the kinds go "all the way down" to the implementational level. One reason to endorse such a view would be if one conceived of the algorithmic-representational level as a mere "sketch" of what is going on at the implementational level, and insisted that functional constituents of a system have to line up with spatiotemporal ones (Piccinini and Craver 2011). In this section we would like to explore how some representational kinds could be unified at more than one level without necessarily being unified at all levels. The first way in which not all levels might be needed is tied to the standard version of the multiple realization thesis (Putnam 1967): a kind that is unified at the computational and algorithmic-representational levels might not be unified at the implementational level. The most obvious reason to reject the all-level version of the multilevel thesis is thus to allow for representational kinds to be genuinely psychological, as opposed to necessarily neuropsychological, i.e., multiply realized at the implementational level. A second way in which a representational kind could be multilevel without encompassing all levels would be for the kind to exhibit what we could call "inverse multiple-realizability": a category of vehicles is inversely multiply realized if it forms a kind at the algorithmic-representational and implementational levels, yet has multiple different types of contents. Such a kind would be unified at the algorithmic-representational level, as well as in terms of the implementational mechanisms that realize the relevant algorithmic function; yet at the computational level, there would be few or no semantic properties in common between its members. There is clearly room in logical—and indeed empirical—space for such a kind.

One potential example is "address-addressable representations" and "content-addressable representations" used in information-processing approaches inspired by computer science (Bechtel and Abrahamsen 2002; Gallistel and King 2011). "Address-addressable representations" are used to retrieve mnemonic contents stored at different locations (addresses) via indexes that (non-semantically) identify the addresses. "Content-addressable representations" search for storage locations in memory by probing them with partial contents, which are used to retrieve the rest of the stored information. When applied to explanations of human cognition, these representations would acquire their kindhood status if properties that enabled the brain to realize these particular sorts of computational operations were discovered at the implementational level. What would be stored at the various memory addresses would, by hypothesis, be representations, endowed with content. However, there might be nothing in common to the different contents of the representations within the (broad) class of "address-addressable memory representations." By analogy with typical examples of multiple realizations, we would find the address-addressable representations to have a fairly open-ended disjunction of semantic properties—though what enables them to all be addressable in the same way might be empirically discovered to be some similar brain mechanism. For such a kind, semantic content, though present, is merely "quantified over" rather than made reference to—it does not play a role in unifying the kind. If one adopts an approach to the brain strongly driven by concepts from computational theory, à la Gallistel and King, one should expect—indeed hope—to find such kinds.
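The distinction can be pictured with a minimal sketch in Python. It is purely illustrative: the stored records, the probe, and the overlap measure are arbitrary, and nothing here is offered as a model of how brains implement either scheme.

# Minimal sketch of the two retrieval schemes described above.
# The store, the probe, and the overlap measure are arbitrary illustrations.

memory_store = {
    0: {"wombat", "marsupial", "brown"},
    1: {"reaching", "grasping", "goal-directed"},
    2: {"kiwi", "bird", "flightless"},
}

def address_addressable_retrieve(address):
    """Fetch whatever is stored at a numerical location; the address itself
    identifies the slot non-semantically."""
    return memory_store[address]

def content_addressable_retrieve(partial_content):
    """Probe the store with partial content and return the record that
    overlaps with the probe most, completing the stored information."""
    return max(memory_store.values(), key=lambda record: len(record & partial_content))

print(address_addressable_retrieve(1))             # retrieval by location
print(content_addressable_retrieve({"grasping"}))  # retrieval by partial content

Nothing in the address-addressable scheme constrains what the stored records are about, which is the point of the example: the class could be unified at the algorithmic-representational and implementational levels while its members' contents remain an open-ended disjunction.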

It is also worth noting that representational kinds are sometimes identified at the computational and implementational levels without reference to the intermediary algorithmic-representational level. This seems to be an approach that is commonly used in some areas of neuroscience to find representational kinds. When studying how the brain represents types of objects, the methodology is often based on differential activation of brain regions: for example, regions in the FFA are said to represent faces because they are active when face stimuli are present, and not when other kinds of stimuli are present, such as tools (Grill-Spector 2003). Prima facie this practice suggests that kinds could be identified on the basis of content and implementation alone, with little or no regard for "syntax." One could object that, in such cases, the gap at the algorithmic-representational level is only epistemic—it reflects temporary ignorance, or division of labor between neuroscientists and psychologists, rather than the true nature of the kind. According to such a view, in reality, face-representations will be known to constitute a representational kind only when cognitive scientists also discover shared algorithms underlying face perception, and other syntactic properties of face-representations. However, there is a stronger thesis that would suggest that we can bypass the algorithmic-representational level altogether and identify representational kinds via their semantic content and their implementational features only: such a thesis might be taken to follow from a version of the semantic view of computation in cognitive science recently defended by Rescorla (2017). According to Rescorla's view, computation individuation proceeds via semantic properties along with nonrepresentational implementational mechanisms (302), without any contribution from formal syntactic properties. Thus, Rescorla writes that "[w]e can model the mind computationally without postulating formal mental syntax" (281). Views about computation individuation do not necessarily apply to accounts of computational explanation, and the claim that we should do without mental syntax altogether is obviously highly controversial. Therefore, Rescorla's view may or may not be true of computational explanation in cognitive science in general. But the view certainly does not seem downright contradictory. As a result, it strikes us as at least coherent to suppose that some explanations could in principle take place only at the computational and implementational levels. If that is the case, it would behoove us to make room for multilevel kinds that span levels 1 and 3 of the tri-level approach: if there is no need for level 2, then level 2 properties are not required for kindhood. While we do not wish to endorse Rescorla's view, the important point for our purposes is that it is compatible with ours: it leaves room for some multilevel representational kinds that do not span all levels. Finally, we do not claim that one of the levels is more basic, in the sense of having to play the role of the sustaining mechanism for the syndrome of properties at other levels. For example, we are not claiming that the implementational level or the algorithmic-representational level is where one finds the mechanism sustaining the clustering of semantic properties at the computational level. Indeed, the sustaining mechanisms might be extrinsic to the representational kind. The most obvious argument for such a view derives from (meta)semantic externalism, the view according to which what determines the content of a mental representation are causal-historical relations to one's environment or community (Deutsch and Lau 2014). For instance, according to teleosemantic approaches to intentional content, the

mechanism which determines the content of mental representations is at least partly historical (Millikan 1984). Our multilevel view of representational kindhood is entirely compatible with externalism about content. What matters for representational kindhood is the existence of robust clustering of properties and cross-level projections, which enable the representational kind to support many generalizations and predictions. Such multilevelness is sufficient, in our view, to support the claim that the clustering is non-accidental. The multilevel thesis for kindhood need not include the claim that one of the lower levels is the sustaining mechanism for the cluster of properties that are exhibited by the kind. Even if research is guided by the tacit assumption that there is some sustaining mechanism for the kind, such a mechanism might be extrinsic. There might also be a plurality of mechanisms. This plurality might be synchronic: for instance, if a two-factor view of content determination were correct (Block 1986), the semantic level would be partially sustained by internal causes or functional role, and partially by external environmental causes. Note that not all aspects of functional roles have to be equally content-determining: incidental effects may be incidental also in the sense of being metasemantically inert. A plurality of cluster-sustaining mechanisms may also be involved diachronically: it is compatible with Boyd's view that different mechanisms play the role at different times, or on different time-scales (e.g., ontogenetically and phylogenetically).12 7.6. Objections and Replies In this section we consider some objections to our account, and offer replies that we hope will help clarify the proposal. Objection 1: Some representational kinds in cognitive science are single level. For instance, some are purely semantic (e.g., singular thought, belief), or purely based on format or syntax (e.g., analog representations, or address-addressable memory representations). Reply: Our reply to this objection is that, in reality, single-level kinds fall into two cases: either they are not kinds in cognitive science, or they are (secretly) multilevel.

7.6.  Objections and Replies In this section we consider some objections to our account, and offer replies that we hope will help clarify the proposal. Objection 1: Some representational kinds in cognitive science are single level. For instance, some are purely semantic (e.g., singular thought, belief), or purely based on format or syntax (e.g., analog representations, or address-​ addressable memory representations). Reply: Our reply to this objection is that, in reality, single-​level kinds fall into two cases: either they are not kinds in cognitive science, or they are (secretly) multilevel.

12 We hope to further explore the question about the sustaining mechanism in future work.

Representational Kinds  233 To illustrate the first case, take “singular thought” as an example: to the extent that it is a merely semantic (or merely epistemological) kind, identified by a clustering of semantic or epistemic properties, it is not relevant to cognitive science. It becomes a candidate representational kind from a cognitive scientific perspective when one engages in the project of vertical explanation by looking at properties at other levels whose existence could support kindhood (Murez, Smortchkova, and Strickland 2020). To clarify our view on this point, it is worth stressing that we do not deny that some categories of entities may constitute kinds merely by virtue of sharing semantic properties. For instance, formal semanticists debate whether adverbs or quantifiers form natural kinds (see Balcerak Jackson 2017, who defends an account of semantic kinds inspired by Evans 1985). But these are, precisely, candidate semantic kinds, which are studied by linguists, not mental representational ones studied by cognitive scientists. The latter are those that are the primary targets of investigation in cognitive science, the science which studies mental representations. What about solely format-​based kinds? “Analog” picks out a candidate representational kind only to the extent that analog representations share interesting properties at other levels, in addition to their being analog. For example, analog representations play a role both in number cognition (Dehaene 1997) and in mental modeling (Johnson-​Laird 2006), where they pick out entities in different domains. These different roles and domain-​ specificities might come with different properties at the computational and implementational levels, which might lead to the postulation of different representational kinds with analog format. This brings us to the second reply to the objection. Many purportedly single-​level kinds might actually turn out to be multilevel, if one adopts a more accurate view of levels. Indeed, various philosophers and cognitive scientists have argued that the usual tripartition between levels is insufficiently fine-​grained. For example, there might be levels between the computational and the algorithmic (Griffiths et al. 2015; Peacocke 1986). Likewise, the semantic level might usefully divide into semantic (what is the content?) and metasemantic (what determines or grounds the content?) levels (Burgess and Sherman 2014). In some cases, the appearance of a single-​level kind could be explained by an insufficiently fine-​grained account of what constitutes a level. A kind could span multiple sublevels, thereby acquiring the requisite sort of “depth.”

234  What Are Mental Representations? Objection 2: Our proposal goes against the independence of the levels and multiple realizability. Reply: The notion of “kindhood” we use relates kindhood to explanation in cognitive science. Our view is entirely compatible with a methodological or merely epistemic independence of levels: it might be useful, for various epistemic or methodological purposes, to abstract away from other levels in studying a representational kind, for example when the implementation is unknown, or when the algorithms haven’t been yet discovered. Furthermore, as we made clear, we are not committed to the claim that kinds go “all the way down” (i.e., to a “tri-​level” view of representational kinds). Nevertheless, to talk of “kinds” is to commit to there being more to the representations being studied than what is found at a single level. The view is compatible with representational kinds being multiply realizable: all it requires is that there be clusters of properties at different levels and inter-​level constraints (a requirement commonly accepted in multilevel explanations). Objection 3: Levels are not the only source of “inductive depth”; history (evolutionary history) also plays a role. For example, Ereshefsky (2007) suggests that we should study psychological categories as homologies, and not only as functions or as adaptations. Reply: We agree with this. As we noted, there are different possible views of the sustaining mechanism, that is, the factor that unifies a kind and that causes the properties in the kind-​syndrome to non-​accidentally cluster together:  intrinsic (e.g., a common neural mechanism) or extrinsic (e.g., a common developmental or evolutionary history). A representational kind might correspond to a grouping of representations that share properties in virtue of a common evolutionary ancestor, for example. This assumption is often implicit in cognitive science and guides research. Indeed, the discovery of essentialist representations in the cognition of adults and infants (Gelman 2004) guides the search for the same representational kind in apes (Cacchione et al. 2016). What is the role of historical factors for representational kinds? According to one view, some representational kinds are unified by extrinsic, historical factors. We could talk of such representational kinds having a “historical sustaining mechanism.” Another possible option is that historical properties can be counted among those that non-​accidentally cluster together to form a kind, without necessarily being the mechanism that causes the clustering. Yet another option would be that, if levels of explanation in cognitive science are properly understood, they would make room for a historical

Representational Kinds  235 or evolutionary level. Our view of representational kinds is compatible with these different options for the role of history, which might each be true of different representational kinds. Objection 4: What distinguishes representational kinds from other cognitive kinds? For instance, what distinguishes representational kinds from kinds of mental processes? Reply: This issue intersects with the more general issue “What is a mental representation?” We will not attempt a general definition. We assume that representations have semantic properties, but also non-​semantic ones. The semantic properties are “substantial” to the extent that, e.g., the distinction between representational correctness and error plays a not easily eliminated explanatory role. A kind is not representational if it makes no reference to semantic properties. For instance, a classification of brain states according to merely physiological criteria might count as a cognitive scientific kind, but not as a representational one. Any functional kind will allow for some sort of multilevel approach: one can distinguish role-​ level investigation, and realizers/​ implementational level. However, the division into levels introduced by Marr in the context of a computational theory of vision is (arguably) specific to—​or at least paradigmatic of—​representational explanation. Hence, it might be suggested that the sort of multilevel approach we describe is indeed specific to representational kinds, as opposed to other sorts of cognitive kinds. Objection 5: Can’t representational kinds be unified/​individuated merely by the sorts of processes that operate on those representations? For instance, suppose you have an independent notion of what makes a process perceptual, as opposed to cognitive. Might one not then introduce a kind “perceptual representations” that would not satisfy our criteria? Reply:  Representations play a role within processes, including computational-​inferential processes, and perhaps non-​computational and non-​inferential processes (such as, arguably, associations).13 In general, there is a close connection between representations and processes. In such cases, a kind of representation is picked out by reference to a kind of process (or to a component of cognitive architecture). Yet, kinds of processes 13 An anonymous reviewer challenges this claim by remarking that associations can be computational if they occur in artificial neural networks, and that their non-​inferential status depends on the notion of inference one endorses. Here we accept the standard contrast between associative transitions and computational transitions in the computational theory of mind. See Mandelbaum 2015 for a review of the contrast.

and kinds of representations can be orthogonal. In some cases, classification of representations into kinds crosscuts processing, for instance when one entertains the possibility that visual imagery manipulates the same representations as visual perception. In other cases, the same process might make use of different kinds of representations, for example if perceptual processes use both iconic and discursive representations (Quilty-Dunn 2020). According to our requirement of depth, for there to be a genuinely representational kind, there must be more in common to the representations in question than merely that they are processed in the same way. For instance, "visual representation" might pick out a representational kind if it turns out that such representations tend (non-accidentally) to share format-properties (e.g., they tend to be iconic) and effects at deeper levels. On this view, while "visual representation" may pick out a useful class of representations, it only picks out a genuinely representational kind if the representations non-accidentally share properties at multiple levels. The fact that representations participate in processes and are partially identified by the role they play in these doesn't undermine the existence of representational kinds, just like the fact that, in the biological domain, mitochondria participate in the process of cellular respiration doesn't diminish their status as a potential natural kind. A related objection (due to an anonymous reviewer) states that appealing to processes in the identification of representational kinds runs the risk of over-generating. All representations that can be stored in a limited working memory would be of the same kind ("working memory representations"). This suggests that the properties that make representational categories kinds are not properties of representations, but properties of the system that processes representations. If we could explain all features of representations by the type of processes to which they belong, then we wouldn't have representational kinds, but only process kinds. Our previous reply also applies to this objection: storage in working memory is only one of the features of a genuine representational kind. Objection 6: Semantic content of the sort that matters to cognitive science is internalistic, and determined by syntax. Externalistic intentional content is largely irrelevant. What you call a "semantic syndrome" plays no role in representational kinds' individuation. Reply: This objection appeals to an internalist account of mental representations that has been proposed (among others) by Egan (2014), who distinguishes representations with "mathematical content" from

representations with cognitive content (content that refers to the environment). Representations with mathematical content are characterized functionally by the role they play in the computational account of a cognitive capacity. This role is specified in mathematical terms and does not make reference to worldly entities. Yet mathematical representations are representational because they can misrepresent, relative to the "success criteria" of the cognitive capacity that has to be explained. Egan uses as an example Marr's description of the stage of early visual processing which takes light intensity values at different parts of the image as inputs and gives as outputs calculations of the change rate of the intensities in the image. This representation could be applied to an environment where light behaves very differently than it does in ours (Egan 2014, 122). Regardless of whether Egan's account of "mathematical content" is correct, Egan's representational kinds are, in fact, multilevel in our sense, because in order for the mathematical description of the capacity to explain cognition, it has to be mapped onto the physical realizer via a realization function, such that transitions in the physical system mirror transitions between symbols in the mathematical computation (Egan 2014). Furthermore, though we are sympathetic to the claim that Egan's account applies to some representational kinds (but see Sprevak 2010 for a critical discussion), it strikes us as implausible that it will extend to all representational kinds studied in cognitive science. Action-representations are first identified by their worldly extension, and the fact that they refer to actions plays a central role in the study of this representational kind. For many representational kinds, an externalist account of content (Burge 1979) seems to be more adequate.

7.7.  Conclusion: The Notion of Maximal Representational Kind To conclude, we would like to suggest that cognitive scientists operate with a (tacit) ideal of what a representational kind looks like—​a maximal representational kind. A maximal representational kind is a category of representations that are maximally “deep,” in that they non-​accidentally share properties across all levels of description. Representational kindhood can come in degrees, to the extent that an actual class of representations can match more or less closely to this ideal of a maximal kind. We think that the notion of

a maximal kind can be viewed as a heuristic device that guides research in cognitive science: to hypothesize that a certain class of representations forms a kind is to be disposed to investigate other properties that co-cluster with those that have been used to initially pick out members of the kind in the process of vertically decomposing the capacity to be explained. There are still many details to fill in to fully develop an account of representational kinds. In this chapter we have sketched the main outline of a proposal that we hope to expand and refine in the future.

Acknowledgments We are grateful to audiences in Bochum, London, Nottingham, and Oxford for insightful comments on previous versions of the chapter. In particular, we thank the participants of the Philosophy of Mind Work in Progress group in Oxford (Umut Baysan, Dominic Alford-​Duguid, Sam Clarke, Anil Gomes, Jake Quilty-​Dunn, Nick Shea, and Margot Strohminger) for helpful discussion of a previous version of this paper, two anonymous reviewers for insightful comments, and Krys Dołęga for a last round of comments. Joulia Smortchkova’s work on this chapter has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program grant agreement No. 681422.

References Atran, S. 2002. In Gods We Trust: The Evolutionary Landscape of Religion. New York: Oxford University Press. Balcerak Jackson, B. 2017. Structural Entailment and Semantic Natural Kinds. Linguistics and Philosophy 40 (3): 207–​237. Bechtel, W., and Abrahamsen, A. 2002. Connectionism and the Mind: Parallel Processing, Dynamics, and Evolution in Networks. Oxford: Blackwell Publishing. Beck, J. 2015. Analog Magnitude Representations: A Philosophical Introduction. British Journal for the Philosophy of Science 66 (4): 829–​855. Block, N. 1981. Psychologism and Behaviorism. Philosophical Review 90 (1): 5–​43. Block, N. 1986. Advertisement for a Semantics for Psychology. Midwest Studies in Philosophy 10 (1): 615–​678. Boyd, R. 1991. Realism, Anti-​foundationalism and the Enthusiasm for Natural Kinds. Philosophical Studies 61 (1): 127–​148. Boyd, R. 1999. Kinds, Complexity and Multiple Realization. Philosophical Studies 95 (1): 67–​98.


8
Functionalist Interrelations among Human Psychological States Inter Se, Ditto for Martians
Nicholas Shea

8.1. Introduction

How fine-grained or coarse-grained should functionalist specifications of mental states be? Fineness of grain is a matter of how many and various the clauses are that appear in a functionalist specification of a particular mental state type. Both common-sense functionalism and psychofunctionalism admit of more and less fine-grained variants. A very fine-grained version of functionalism could have the consequence that no actual organisms other than humans have beliefs. That would still be compatible with one functionalist intuition, namely that psychological categories are neutral about the substrate in which they are realized. However fine-grained, a functional category could in principle be realized in a substrate other than carbon. There is, however, a second prominent motivation for functionalism—a motivation that does seem to be undermined by individuating mental states in a very fine-grained way. The intuition is that we ought not to be too anthropocentric in characterizing what it takes to have psychological states. That motivates functionalism, because functional specifications are less parochial and more apt to be realized by a range of different organisms. The octopus, which engages in complex and seemingly intelligent behavior, has a very different way of organizing information and generating behavior (Godfrey-Smith 2016). The hypothetical intelligent Martian could be just as different from humans in its way of organizing information and generating behavior, while also being based on a different substrate. Nevertheless, we should not rule out that octopuses and Martians have mental states. This motivation is thought to suggest that functionalist specifications should be

Functionalist Interrelations for Humans and Martians  243 relatively coarse-​grained, so that they can apply to creatures that are very different from us. The second motivation for functionalism has played an important role in underpinning the extended mind hypothesis (Chalmers 2008; Clark 2008, 88–​89). It also has wider significance, because it raises the question of how we ought to individuate psychological states in general and beliefs in particular. Even if our primary interest is not in the possibility of cognitive extension, the broader question of how to individuate beliefs turns partly on whether it is appropriate to individuate them in a coarse-​grained or a fine-​grained way. Within the extended mind literature, Clark has argued that the claim that mental states extend into the world beyond the skin is based on functionalist specifications of cognitive states like belief being relatively coarse-​grained (Clark 2007, 167). He also argues that this way of individuating mental states is supported by common-​sense psychology (2008, 88). Sprevak has turned these arguments into a purported reductio of functionalism. He argues that functionalism implies an unacceptably liberal form of cognitive extension (Sprevak 2009). Any functionalist treatment of mental states that is coarse-​grained enough to apply to Martian psychology, as functionalists intend, will also count as cognitive the information contained in many of the artifacts with which humans interact, like the hard disk of your computer. This paper questions that argument. Section 8.2 argues that beliefs are individuated by reference to other human psychological states. Section 8.3 sets out the kind of Ramsey sentences that define human psychological states and those that define Martian psychological states. A resource that would count as psychological because of its relation to other Martian psychological states, for example an external notebook, does not thereby qualify as falling under any psychological category when a human interacts with it. The anti-​parochialist motivation for functionalism is therefore consistent with psychological states being individuated in a relatively fine-​grained way. I conclude that the argument that functionalism entails radical extension fails.

8.2. Functionalism Connects Belief with Other Human Psychological States

Some have argued that there are conditions on a state's being a belief which limit the ambit of cognitive extension.1 For example, maybe beliefs have to

1 Clark and Chalmers 1998; from the opposite perspective: Adams and Aizawa 2001, 2008.

244  What Are Mental Representations? be readily and fluidly integrated with one another so as to allow continuous checking for consistency, in a way that semantic memories are but entries in notebooks are not. Such additional constraints would foreclose the problematic consequence that every informational resource with which a human interacts is part of her extended mind (Rupert 2004, 2009). Sprevak argues that that move is unavailable to the functionalist who is motivated by the intuition that Martians too have psychological states, since such fine-​grained conditions would not be satisfied by Martian mental states. Sprevak rejects radical extension and embraces the modus tollens: functionalism about cognitive states is untenable. However, it is important here that we distinguish what it is to be a psychological state in general from what it is to be a psychological state of a particular kind, like a belief. Humans have beliefs, desires, hopes, and intentions; visual perceptions, auditory perceptions, and sensations like pain; anger, happiness, fear; and so on. One central concern of psychology is to characterize each of these psychological states; that is, to say how they are to be individuated. The subject matter, then, is the psychological states found in humans. This is not to presuppose that only humans can have them. It is an open empirical question whether other animals share any psychological states with humans—​a question which turns in part on how such states are to be individuated. It could turn out that many other animals have beliefs, and that other primates have concepts, say. The target of this enquiry is the collection of psychological states that we humans happen to have, not those (if any) that are proprietary to humans. Another important, less empirical, question asks what it takes for there to be mental or psychological states in general. There the target is not the class of psychological states that humans happen to have, but the class of possible psychologies, which may be much wider. Possible psychology includes other ways that an organism could be set up to have some psychological organization or other. In addressing that question we should not presuppose that the inventory of mental states found in humans is the only possible psychological organization. Maybe there are quite different ways of processing information so as to respond flexibly and intelligently to the environment. What does it take to have some form or other of psychology, cognition, or mental life? Considerations that seem relevant include that the organism engages in consistent and complex forms of behavior that are responsive to features of its environment; that it pursues and achieves goals that

are relevant to its interests; and that it processes information in rational or intelligent ways. The question is related to debates about the "mark" of the mental, although there the focus is the form of psychological life exemplified by humans. To disambiguate we can use "psychological" as the more general term and reserve "mental" for the categories of human psychology. Consciousness is a potential but contentious mark of the mental, but it is unlikely all states that count as psychological are conscious. Intentionality is also contentious in this respect, both because it is unclear whether all human psychological properties exhibit intentionality (e.g., moods, emotions on some views), and because many prima facie non-psychological artifacts seem to have intentionality. These are difficult questions, and it is in any event not the aim of this paper to give an account of what it takes to have psychological states. The important claim is that the class of possible psychological states may be much wider than the collection of psychological states that humans happen to have (even granting that some human psychological states may turn out to be instantiated more widely, for example in other animals on earth, with which we share common ancestors). A particular kind of mental state, like a belief, is individuated by its functional relations to stimuli, behavior, and other psychological states. A plausible constraint is that the "other psychological states" figuring in a functionalist specification should be drawn from the same kind of psychology as the state in question; for humans, from the inventory of other human psychological properties. So human beliefs will be type-identified partly in terms of their relations to human desires and human intentions. Similarly, Martian mental states will be individuated by reference to Martian mental state types. Although it is widely recognized that functionalist specifications should include relations to other mental states—that's what sets functionalism apart from behaviorism—a potential restriction to states from the same psychological system (either human or Martian, not both) is less widely noted. A functionalist can hold that a Martian has psychological states without being committed to the Martian's having beliefs (or any of the other types of mental states instantiated by humans). Human mental states are individuated by a set of functional relations between perception, memory, belief, desire, intention, language processing, and so on. Martian mental states are individuated by a set of functional relations among the (different) collection of categories that characterizes Martian psychology. In both cases the constitutive interrelations may be fine-grained.


8.3. Different Sets of Ramsey Sentences

Human psychological states are implicitly defined by sets of Ramsey sentences (Lewis 1972). So are Martian psychological states. These sets of Ramsey sentences will generally be separate. We should not have to appeal to roles in Martian psychology to implicitly define the properties that figure in human psychology. So different psychological terms will figure in each set. Human psychological states are functionally defined in terms of inputs, outputs, and beliefs, desires, intentions and other psychological states (Psy1, Psy2, . . .):

bel p = f (bel p′, des q, intention x, Psy1, . . . , stimulus1, . . . , behavior1, . . .)

des q = f′ (bel p, des q′, intention y, Psy2, . . . , stimulus2, . . . , behavior2, . . .)

. . .

We then follow the Ramsey-​Lewis method to give a functional characterization of all human psychological types at once in terms of their interrelations.2 Functionalism says the same process is available to characterize Martian psychology, but functionalism need not be committed to Martians having the same collection of interrelated mental state types. So the specifications that enter into a functionalist treatment of Martian mental states (MPsy1, MPsy2, . . .) need not mention any human psychological categories:

MPsy1 = g (MPsy2, MPsy3, . . . , stimulus1, . . . , behavior1, . . .)

MPsy2 = g′ (MPsy1, MPsy3, . . . , stimulus2, . . . , behavior2, . . .)

. . .
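To make the separateness of the two definitional systems concrete, here is a toy sketch in Python (my illustration, not Shea's; the state inventories, the defining relations, and the simple subset test are invented for the example). The point it encodes is that a kind is defined only by its relations to other states of the same psychological system, so satisfying a Martian specification says nothing about satisfying a human one:

```python
# Toy illustration (not from the chapter): functional kinds are defined by
# relations to other states of the SAME psychological system, so satisfying
# a Martian definition says nothing about satisfying a human one.
# The inventories and defining relations below are invented placeholders.

HUMAN_PSYCHOLOGY = {
    "belief":    {"desire", "intention", "perception"},
    "desire":    {"belief", "intention"},
    "intention": {"belief", "desire"},
}

MARTIAN_PSYCHOLOGY = {
    "MPsy1": {"MPsy2", "MPsy3"},
    "MPsy2": {"MPsy1", "MPsy3"},
    "MPsy3": {"MPsy1", "MPsy2"},
}

def satisfies(kind, causal_relations, psychology):
    """A candidate state falls under `kind` only if it stands in causal
    relations to all the states that define `kind` in that psychology."""
    return psychology[kind] <= causal_relations

# A state in a Martian's internal store that interacts with MPsy2 and MPsy3:
martian_internal_state = {"MPsy2", "MPsy3"}
print(satisfies("MPsy1", martian_internal_state, MARTIAN_PSYCHOLOGY))  # True

# A human's external notebook entry interacts with no Martian states and
# only loosely with beliefs, so it satisfies neither set of definitions:
human_notebook_state = {"belief"}
print(satisfies("MPsy1", human_notebook_state, MARTIAN_PSYCHOLOGY))    # False
print(satisfies("belief", human_notebook_state, HUMAN_PSYCHOLOGY))     # False
```

Nothing in this sketch settles how fine-grained the real human clauses should be; it only separates the two systems of interrelations.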

We should distinguish between two different kinds of Martian. Martians* are very like humans in the fine-​grained functional organization of their psychological processing, but that processing happens to take place in a radically

2 Wadham (2016) points out that belief may be a cluster or prototype concept, based on having a sufficient number of relevant properties. Then the definition will be more complex, where some conditions are not individually necessary but certain conjunctions of them are.

Functionalist Interrelations for Humans and Martians  247 different substrate—​silicon perhaps. Here the intuition is that such creatures would operate in all relevant respects, both internally and externally, just like humans (although perhaps on a different length-​or timescale). An intuition of substrate-​neutrality underpins the conclusion that Martians* have beliefs, desires, perceptions, working memory, and all the rest. The second anti-​ parochial functionalist intuition discussed earlier motivates a second kind of case, Martians**. Martians** have a quite different psychological setup. Here a difference in substrate is irrelevant. The functional organization of Martians** is unlike anything humans have ever known. For example, the way they organize and process information may be radically different. Nevertheless, they seem to act purposively, pursue long-​ term projects, deal effectively with changing environmental contingencies, build tools and artifacts, communicate with each other, work together in complex social organizations, and so on. Their psychology is so alien that we humans would never be able to empathize with them or see the world as they do. Still they do seem to have a psychology, to behave on the basis of inner psychological states. The octopus may be a real-​life example of our hypothetical Martian**. It shares our organic substrate, but our last common ancestor with the octopus was some relatively simple, probably wormlike organism that almost certainly did not have beliefs (Godfrey-​Smith 2017). Yet we have the strong sense that octopuses are intelligent, with some kind of interesting if very different form of psychological organization; that is, that they have psychological states. We can distinguish two potential intuitions about Martians**:  that they have psychological states and that they have beliefs. Granted that we set up the case so that Martians** seem to have a psychology of some kind, is it also plausible that they have beliefs? It may seem unlikely that a psychofunctionalist specification of what it is to believe that p would extend to Martians**, but what about common-​sense functionalism about belief? Common-​sense functionalism was designed with anti-​parochialist motivations in mind (Braddon-​Mitchell and Jackson 2007) and this is the form of functionalism relied on by Clark in the extended mind debate (Clark 2008, 88, 240). But even common-​sense functionalism appeals to interrelations between various mental states, for example between belief, desire (as opposed to drives or hedonic states), and intention. There is no reason why this interrelated collection of mental states should mark the bounds of the psychological.

248  What Are Mental Representations? So the more secure anti-​parochialist motivation for functionalism implies that Martians** have psychological states, not that they have beliefs, desires, or any other human mental state. Martians** do not then imply that the functional specification of beliefs must be drawn broadly. They show only that any functionalist characterization of what it is to be a psychological state must be sufficiently broad to include the set of fine-​grained relations among Martian psychological states exhibited by Martians**, as well as the (very different) set of fine-​grained relations among human psychological states exhibited by humans. The distinction between psychofunctionalism and common-​sense functionalism need not align with the distinction between fine-​grained and coarse-​grained functional specifications. A  Ramsey sentence that tied belief to just a single non-​common-​sense psychological category would be psychofunctionalist but coarse-​grained. A  Ramsey sentence that posited complex interrelations between beliefs, desires, intentions, perceptual states, emotions, sensations, moods, memories, and sentence meaning, for example, would be common-​sense and fine-​grained. Our question is about fineness of grain. We saw earlier that both Clark and Sprevak rely on the idea that a motivation for functionalism—​ that Martians could have mental states—​ suggests that functional specifications will be sufficiently coarse-​grained so as to apply to some actual human informational resources outside the skin. Consider the well-​known case of Otto, who uses entries in a notebook as a way of remembering addresses. Sprevak envisages a Martian with a storage and recall system inside its head that works like Otto’s notebook (2009, 508). He argues that functionalism entails that the notebook is cognitive in both the Martian and the human (Otto) case. However, the functionalist intuition that the Martian is a cognitive agent could at most entail only that the notebook falls under one of the categories of Martian psychology, e.g., MPsy1. A state in the Martian’s internal notebook of functional type MPsy1 could stand in the right relations to the other states of Martian psychology to count as a psychological state of some kind. But it does not stand in the right functional relations with human beliefs, desires, intentions, and so on to count as a human belief, since it does not stand in any causal relations with human psychological states. A fortiori, it does not stand in the right functional relations with human psychological properties to qualify as any other kind of human psychological state.

Functionalist Interrelations for Humans and Martians  249 Now consider the notebook of the human, Otto. Nothing in the case speaks against human beliefs being individuated by relatively fine-​grained functional roles, in which case the notebook does not stand in the right relations to human psychological states to satisfy the functional definition of any of them (the functions f, f′, . . . presented earlier). That would be so whether or not the notebook was internal or external to Otto’s skin. Nor does it stand in the right relations with MPsy2, MPsy3, etc., to count as a Martian cognitive state (the functions g, g′, . . . presented earlier), since it is not, by hypothesis, causally connected to any states of Martian psychology at all. So functionalist treatments of particular kinds of human psychological state (belief, desire, etc.) do not license the conclusion that the notebook contains beliefs; nor do functionalist treatments of particular Martian mental states license the conclusion that the (human) Otto’s notebook contains any Martian-​type mental state. In short, the possibility of a Martian** psychology in which interactions with a notebook were integrated with other Martian** psychological states in such a way that states of the notebook count as states of Martian** psychology would not immediately have the consequence that states of (human) Otto’s notebook are psychological states of any kind. Both Otto and the Martian** have the right kind of functional organization to fall under the very general kind, psychological system. Otto’s psychology is human psychology, which may well individuate beliefs in a sufficiently fine-​grained way that states of the notebook do not count as beliefs (e.g., because they are not sufficiently fluidly integrated with the rest of Otto’s beliefs). Martian** psychology has a different inventory of psychological states, by reference to which the Martian**’s notebook counts as an instance of state MPsy1**, say. That does not imply that human Otto’s notebook states are instances of MPsy1**. They are not causally connected to the other MPsy** states by which MPsy1** is individuated. Whether things fall out that way depends upon three issues: how human psychological states are to be individuated; how Martian** psychological states are to be individuated; and what it takes to fall under the general category of being a psychological system. Those issues are where the hard work needs to be done to decide if there are any actual examples of cognitive extension. In particular, as is widely recognized, the answer will depend on how coarse-​or fine-​grainedly human psychological states ought to be individuated. My aim here is not to argue against the extended mind hypothesis,

250  What Are Mental Representations? but to resist the argument that a major motivation for functionalism entails radical cognitive extension. That argument does not go through once we recognize that the states in a psychological system are functionally defined by reference to causal relations to other states drawn from the same kind of psychological system. Intuitions about the psychological states of organisms with a very different functional organization from ours—​with a very different psychology—​therefore get no grip on external resources with which humans interact. We could instead consider a Martian that matched the fine-​grain psychology of human belief, desire, etc., but who also had a notepad as an extra internal resource, interacting with beliefs, desires, etc. in the same ways, functionally, that Otto’s external notepad interacts with his beliefs, desires, etc. If the functionalist intuition were to require us to count the Martian’s internal notepad as cognitive, that would indeed entail that Otto’s external notepad is cognitive. But does functionalism entail that the Martian’s internal notebook is cognitive? The fact that it is located inside the Martian’s head cannot be determinative, on a functionalist treatment (Wheeler 2012). Treating him as a Martian*, we conclude (rightly) from the similarity in his internal organization that he has beliefs, desires, etc. But that says nothing about whether his notebook is a resource that meets any of these fine-​grained specifications of human psychological states. To answer that question we need an account of what it takes to be a belief (where belief is one of the psychological state types instantiated by humans, and possibly other species). Again, that question is where the action takes place. Notice that it will not be enough to have an independently motivated, locationally uncommitted account of what it takes for a state to be cognitive in general, as some have argued (Walter 2010; Wheeler 2010). That will just tell us what it is for an arbitrary collection of interrelated functionally defined states to count as cognitive—​a test that applies to a whole organism and its full inventory of states. That states of a notebook could count as psychological within some functional organization or other does not yet show that they count as psychological when causally related to the categories of human psychology (belief, desire, intention, etc.). That still depends on how coarse-​ grained or fine-​grained the right functionalist characterization of beliefs should be. A final question is whether, even if they are not beliefs, states of Otto’s notebook fall under some other psychological state type. Maybe the advent of writing brings with it the appearance of new psychological categories. The

Functionalist Interrelations for Humans and Martians  251 written artifacts people interact with in the right way would then count as psychological, in the spirit of the extended mind intuition, without falling under any of the psychological categories applicable to preliterate humans (belief, desire, intention, etc.). Whether that is so depends on what it takes to be a psychological state, which I have not attempted to answer here. But notice that the conclusion would be less radical than the standard extended-​ mind claim that Otto has beliefs. We already rely on categories like diary, portable written record, and electronic address book to explain patterns in human behavior. Treating these systematically, a category like readily accessible artifact-​based memory could turn out to qualify as psychological, taking its place alongside semantic memory and episodic memory in explanations of human behavior.3 This possibility does not entail that the category of belief must be functionally individuated in a coarse-​grained way.

3 There is an issue about whether these would be psychological states of Otto or of the hybrid system consisting of Otto-plus-notebook (Miyazono 2017).

8.4. Conclusion

So we return to the question which is widely recognized to lie at the heart of the extended mind debate: whether there are extra conditions on a state's being a belief which limit the ambit of cognitive extension (Clark and Chalmers 1998; Adams and Aizawa 2001, 2008), foreclosing the problematic consequence that every informational resource with which a human interacts is part of her extended mind (Rupert 2004, 2009; Sprevak 2009). That debate cannot be resolved by a general argument that the motivations for functionalism require coarse-grained functional specifications of categories like belief, pace Clark and Sprevak. Functionalism remains a viable ontology of mental states, and the fate of "extended functionalism" (Clark 2008) turns on whether external information resources have the kinds of connections to the rest of human psychology that would, whether they were located inside or outside the skin, make them count as a belief, a desire, an intention, or any other particular type of human psychological state. The fineness-of-grain question is important for functionalism in general, not just for those interested in the hypothesis of extended cognition. The anti-parochialist motivation for functionalism would seem, at first glance, to motivate a relatively coarse-grained way of individuating psychological states. But that does not follow. We need to differentiate between functionalism about what it takes, in general, to have a collection of psychological states that amount to a psychological system, and functionalism about what it takes to be a belief. A fine-grained way of individuating beliefs is compatible with a functionalism that counts organisms with quite a different functional organization, and without beliefs, as also being psychological systems, and hence as having psychological states of their own kind.

Acknowledgments

The author would like to thank Ellen Fridland, Mark Sprevak, and two anonymous referees for helpful comments and criticism. This research has received support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under grant agreement No. 681422 (MetCogCon).

References

Adams, F., and Aizawa, K. 2001. The Bounds of Cognition. Philosophical Psychology 14: 43–64.
Adams, F., and Aizawa, K. 2008. The Bounds of Cognition. Malden, MA: Blackwell.
Braddon-Mitchell, D., and Jackson, F. 2007. The Philosophy of Mind and Cognition. 2nd ed. Oxford: Blackwell.
Chalmers, D. 2008. Foreword to Clark 2008.
Clark, A. 2007. Curing Cognitive Hiccups: A Defense of the Extended Mind. Journal of Philosophy 104 (4): 163–192.
Clark, A. 2008. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford: Oxford University Press.
Clark, A., and Chalmers, D. 1998. The Extended Mind. Analysis 58: 7–19.
Godfrey-Smith, P. 2016. Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness. New York: Farrar, Straus and Giroux.
Godfrey-Smith, P. 2017. The Evolution of Consciousness in Phylogenetic Context. In K. Andrews and J. Beck (eds.), Routledge Handbook of Philosophy of Animal Minds, 216–226. New York: Routledge.
Lewis, D. 1972. Psychophysical and Theoretical Identification. Australasian Journal of Philosophy 50: 249–258.
Miyazono, K. 2017. Does Functionalism Entail Extended Mind? Synthese 194: 3523–3541.
Rupert, R. 2004. Challenges to the Hypothesis of Extended Cognition. Journal of Philosophy 101: 389–428.
Rupert, R. 2009. Cognitive Systems and the Extended Mind. Oxford: Oxford University Press.
Sprevak, M. 2009. Functionalism and Extended Cognition. Journal of Philosophy 106: 503–527.
Wadham, J. 2016. Common-Sense Functionalism and the Extended Mind. Philosophical Quarterly 66 (262): 136–151.
Walter, S. 2010. Cognitive Extension: The Parity Argument, Functionalism, and the Mark of the Cognitive. Synthese 177: 285–300.
Wheeler, M. 2010. In Defense of Extended Functionalism. In R. Menary (ed.), The Extended Mind, 245–270. Cambridge, MA: MIT Press.
Wheeler, M. 2012. Minds, Things, and Materiality. In J. Schulkin (ed.), Action, Perception and the Brain, 147–163. London: Palgrave Macmillan.

9
Nonnatural Mental Representation
Gualtiero Piccinini

9.1.  Intentionality Here are two things that minds can do. First, they can think about objects that may or may not exist; second, they can think about both truths and falsehoods. These are not the same capacity: the truths and falsehoods that minds can think are about things that either exist or don’t exist. Compare: “I exist,” “I don’t exist,” “There is a unicorn,” “There are no unicorns.” Some minds are capable of using a natural language. In addition to thinking, linguistically endowed minds can talk—​either truly or falsely—​ about things that may or may not exist. And talking truly or falsely is not the only kind of talking. In addition to asserting, as in “There is a unicorn,” utterances may have other kinds of force: inquiring, ordering, etc. So, utterances have both meaning—​semantic content—​and force. Thinking and talking are aspects of intentionality, which is notoriously difficult to explain. Philosophers have tried for a long time and made some progress, but they have yet to find a complete explanation. Rather than reviewing the debate, I will begin with what I take to be the most promising explanatory strategy—​the representational theory of intentionality—​and propose a way to improve over previous versions.1 In the next section, I  introduce the representational theory of intentionality. In section 9.3, I  distinguish two kinds of (descriptive) representation:  natural and nonnatural. I  argue that nonnatural representation is necessary to fully explain intentionality. In section 9.4, I briefly introduce my preferred mechanistic, neurocomputational theoretical framework—​ including the notion of structural representation and the functional role

1 See Morgan and Piccinini 2018 for an opinionated literature review, which paves the way for the present project.

Nonnatural Mental Representation  255 of structural representations, which is to simulate their target in order to guide action. In section 9.5, I  introduce a version of informational teleosemantics, which I take to be the most adequate theory of the content of (descriptive) mental representation. In section 9.6, I argue that informational teleosemantics fails to explain nonnatural representation and hence to fully explain intentionality. In section 9.7, I introduce offline simulation, which will help build an account of nonnatural representation. In section 9.8, I sketch an account of the semantic content of offline simulations. Finally, in section 9.9 I  extend traditional accounts of mental representation by sketching an account of nonnatural representation in terms of offline simulation. This is a step toward a naturalistic, mechanistic, neurocomputational account of intentionality.

9.2. The Representational Theory of Intentionality

Explaining intentionality requires explaining both how linguistic utterances (and other signs) gain their meaning and how mental states such as beliefs and desires gain their semantic content. The mentalist theory of meaning (MTM) maintains that the meaning of utterances can be explained in terms of the semantic content of mental states (cf. Speaks 2017, sec. 3). If this is right, then in order to explain intentionality one thing we have to do is explain the semantic content of mental states. More precisely, MTM claims that utterances gain both their force and their meaning in virtue of speakers possessing mental states with the relevant attitudes and semantic contents:

Utterance U has force F and means that P as a function of the speaker possessing mental state M with attitude A and content P.

There are several versions of MTM, which differ on how the meaning and force of utterances depend on the content and attitude of appropriate mental states. Here I cannot review the options, but I can illustrate with a couple of examples. An utterance of “C’è un unicorno” is an assertion that there is a unicorn because it expresses the speaker’s belief that there is a unicorn. The attitude expressed by the assertion is belief, and the belief ’s (and hence the assertion’s) content is that there is a unicorn. By contrast, an utterance of “Mi regali un unicorno?” is a request that the hearer gift a unicorn to the speaker

because it expresses the speaker's desire that the hearer gift her a unicorn. Here, desire is the attitude expressed by the request, and the desire's (and hence the request's) content is that the hearer gift a unicorn to the speaker. Assuming that some version of MTM is correct, what remains to be explained is the attitudes and semantic content of mental states. The representational theory of intentionality (RTI) claims that the attitudes and semantic contents of mental states are explained in terms of internal states, called representations, with appropriate functional roles and semantic contents.2 The explanation takes roughly the following form:

Mind M has mental state with attitude A and content P if and only if it possesses an internal representation with functional role RA and content P.

Here, RA is the functional role that corresponds to attitude type A.  For example, beliefs are explained by mental representations with an appropriate functional role RBel, such that, among other effects, under appropriate circumstances they cause the utterance of assertions with the same content that the belief has. By contrast, desires are explained by mental representations with a different functional role RDes, such that, among other effects, under appropriate circumstances they cause the utterance of requests with the same content that the desire has. In addition to beliefs and desires, RTI may posit other types of mental representations corresponding to other mental states. RTI is behind much work in the philosophy of mind and language of recent decades (Jacob 2014; Speaks 2017; Pitt 2017). RTI also fits well with the way mainstream cognitive neuroscience explains cognition: in terms of neural representations and their neurocomputational processing (Piccinini 2020). RTI conveniently ignores phenomenal consciousness. This is contrary to a long philosophical tradition stretching back at least to Husserlian phenomenology. According to this alternative tradition, a complete account of intentionality must take phenomenal consciousness into account (Horgan and Tienson 2002; Loar 2003; Kriegel 2013; Bourget and Mendelovici 2017). By contrast, RTI theorists hope to explain at least some aspects of intentionality without getting bogged down in the consciousness morass. This essay is an implicit argument that progress can be made by developing RTI further; 2 RTI is my label for that portion of the representational theory of mind that addresses intentionality.

Nonnatural Mental Representation  257 I leave sorting out the relation between intentionality and consciousness to future work. In the rest of this essay, I focus solely on descriptive mental states and the descriptive (or indicative) representations that correspond to them. I  use “descriptive” in a broad sense, according to which an agent may bear a descriptive attitude that P without being committed to P—​or, equivalently, without fully believing that P. Examples of such descriptive intentional states include believing, imagining, forming a hypothesis, pretending, etc. A full theory of intentionality must also account for desires, intentions, and other nondescriptive mental states. Nondescriptive attitudes lie beyond the scope of this essay. From now on, when I write “representation” and “intentionality,” I mean descriptive representation and descriptive intentionality.
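Before moving on, the RTI schema from this section can be put in skeletal code form. The following is a minimal sketch (mine, not Piccinini's; the role labels "R_Bel" and "R_Des" and the mapping from roles to utterance forces are placeholder assumptions) of how a representation's functional role could fix the force of the utterance it causes, while its content is passed on unchanged:

```python
# Toy sketch of the RTI schema: an internal representation pairs a functional
# role with a content; the role determines the force of the utterance it
# causes under appropriate circumstances, while the content is inherited.
# Role names and the role-to-force mapping are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Representation:
    role: str      # e.g., "R_Bel" for belief-like roles, "R_Des" for desire-like roles
    content: str   # e.g., "there is a unicorn"

ROLE_TO_FORCE = {
    "R_Bel": "assertion",  # belief-role representations cause assertions
    "R_Des": "request",    # desire-role representations cause requests
}

def utter(rep: Representation) -> str:
    """Produce an utterance whose force depends on the role and whose
    meaning is the representation's content."""
    force = ROLE_TO_FORCE.get(rep.role, "unspecified force")
    return f"{force}: {rep.content}"

print(utter(Representation("R_Bel", "there is a unicorn")))
print(utter(Representation("R_Des", "the hearer gifts the speaker a unicorn")))
```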

9.3.  Natural and Nonnatural Representation As Paul Grice (1957) points out, there are two importantly different notions of meaning: natural and nonnatural. I will use a modified version of Grice’s distinction. According to one usage of “meaning,” a sign means that P just in case it raises the probability that P. This is natural meaning. For example, smoke naturally means fire in the sense that the presence of smoke raises the probability that there is a fire; dark clouds naturally mean rain in the sense that the presence of dark clouds raises the probability that it’s raining. If we are lucky, such signs raise the probability of what they mean to 1. If so, they are factive—​they entail that what they mean is true. Grice focuses on this kind of case, so he defines natural meaning as factive. In real life, though, many signs raise the probability of what they mean to less than 1. This is the world we live in, and philosophers have to adapt to it. That’s why I stipulate the weaker condition that the natural meaning that P raises the probability that P, without necessarily raising it to 1. According to another usage of “meaning,” which Grice dubs nonnatural meaning, a sign means that P whether or not it raises the probability that P. For example, an utterance of “C’è un fiore” may or may not raise the probability that there is a flower, yet it still means that there is a flower. Similarly, “C’è un unicorno” does not raise the probability that there is a unicorn at all, yet it still means that there is a unicorn. Natural languages and other arbitrary sign systems are examples of systems whose well-​formed expressions have nonnatural meaning.
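The probability-raising condition that separates the natural from the nonnatural case can be checked numerically. Here is a small sketch (my example, with invented numbers) that tests whether a signal carries natural semantic information that P, that is, whether Pr(P given the signal) exceeds Pr(P):

```python
# Minimal numerical check of natural semantic information: a signal R
# carries natural information that P iff Pr(P | R) > Pr(P).
# The joint distribution below is invented for illustration.

def raises_probability(joint, signal, state):
    """joint maps (signal_value, state_value) pairs to probabilities summing to 1."""
    p_state = sum(p for (s, st), p in joint.items() if st == state)
    p_signal = sum(p for (s, st), p in joint.items() if s == signal)
    p_both = joint.get((signal, state), 0.0)
    return p_both / p_signal > p_state

joint = {
    ("smoke", "fire"):       0.08,
    ("smoke", "no fire"):    0.02,
    ("no smoke", "fire"):    0.02,
    ("no smoke", "no fire"): 0.88,
}

# Pr(fire | smoke) = 0.8 > Pr(fire) = 0.1, so smoke naturally means fire.
print(raises_probability(joint, "smoke", "fire"))  # True
# No such check on a joint distribution delivers the (nonnatural) meaning
# of an utterance like "C'è un unicorno."
```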

258  What Are Mental Representations? The notion of meaning is well suited to characterizing public signs, including natural language utterances. But what I’m investigating here is not, primarily, the meaning of natural language. My main target is the semantic content of the representations that, according to RTI, are expressed via natural language. Since my topic is representations and their semantic content, I need a notion—​analogous to meaning—​that applies to representations. The standard thing to say is that representations carry information. Accordingly, Andrea Scarantino and I (2010, 2011) introduced a distinction between natural and nonnatural semantic information that parallels Grice’s distinction between natural and nonnatural meaning. A  vehicle carries natural semantic information that P just in case it raises the probability that P. By contrast, nonnatural semantic information that P need not raise the probability that P. Given this notion, a vehicle may carry nonnatural information that P even though it does not raise the probability that P.  Nonnatural semantic information is a broader notion than nonnatural meaning. For nonnatural semantic information may be carried by any information bearers, including representations internal to a cognitive system, whereas nonnatural meaning is restricted to public signs such as linguistic utterances. Scarantino and I  (2010, 2011)  used the notion of representation—​ understood as a state that can misrepresent or get things wrong—​to shed light on the notion of nonnatural semantic information. We implied that to carry nonnatural semantic information that P is the same as to represent that P. That was a mistake: it is entirely possible to represent that P while carrying natural semantic information that P. This is not to say that carrying natural semantic information is sufficient for being a representation—​later we’ll discuss what else is required. The important point is that carrying nonnatural semantic information is just one way of representing; representing and carrying nonnatural semantic information are not the same thing. Instead of explicating nonnatural semantic information in terms of representation, I will now use the distinction between natural and nonnatural semantic information to introduce a parallel distinction between two types of representation. This will give us the concepts we need to make progress on intentionality. I will call a representation that carries natural semantic information a natural representation. By definition, tokening a natural representation that P—​e.g., a perceptual state carrying the natural semantic information that P—​raises the probability that P. In addition, I will call a representation that carries nonnatural semantic information a nonnatural representation. By

Nonnatural Mental Representation  259 definition, tokening a nonnatural representation that P—​e.g., a representation corresponding to the intentional act of imagining that P—​need not raise the probability that P.3 A complete account of (descriptive) intentionality in terms of representation requires both natural and nonnatural representations. Perhaps nonnatural representation can be reduced to natural representation somehow; we’ll discuss this possibility later. For now, we should accept that RTI requires nonnatural representation too. The reason is that there is a large class of broadly descriptive mental states and corresponding linguistic expressions whose tokening need not raise the probability of what they represent. Because they need not raise the probability of what they represent, natural representation—​which is just representation that raises the probability of what it represents—​cannot fully account for intentionality. For example, imagining that P need not raise the probability that P. Hypothesizing that P need not raise the probability that P. Considering whether P need not raise the probability that P. Dreaming that P need not raise the probability that P. Pretending that P need not raise the probability that P. Heck, even believing that P may or may not raise the probability that P. Explaining such mental states in terms of representation requires representations whose tokening need not raise the probability of what they represent—​which is also what they carry information about. Accordingly, such representations are nonnatural representations, which carry nonnatural semantic information. An adequate RTI needs nonnatural representations in addition to natural ones.

9.4. Functional Role

As we've seen, representations have two aspects: semantic content and functional role. For example, according to RTI, a belief that there is a flower is explained by a representational vehicle with an appropriate functional role carrying the semantic content that there is a flower. Functional role, in turn, is the causing of certain effects under certain conditions. As long as one grants that representations can play a suitable functional role within cognitive systems, some version of the argument in the rest of this essay goes

3 I call it "nonnatural" only because that is the term Grice used. I am not implying that nonnatural representation cannot be naturalized. On the contrary, by the end of this paper I will sketch a naturalistic account of nonnatural representation.

260  What Are Mental Representations? through. Nevertheless, it is worth explicating the relevant functional role in more detail. In this section, I introduce a multilevel, mechanistic, neurocomputational framework and use it to explicate representational functional role. What follows is a brief summary; a more detailed exposition and defense may be found elsewhere (Piccinini 2020). Minds are complex computational, representational, multilevel mechanisms. They are constituted by mechanisms with many levels of organization. A mechanism is a structure that performs activities. It is composed by a set of concrete components (substructures) that perform sub-​activities and are organized so that the activities of the components constitute the activities of the whole mechanism. The functional role of a well-​functioning mechanism is the activity it performs under relevant circumstances. For example, when food enters a well-​functioning stomach, the stomach digests the food. Each component of a multilevel mechanism is in turn a mechanism, and each mechanism may be embedded in a larger containing mechanism. Structures (i.e., potential components of a mechanism) and activities constrain one another:  a given structure can only perform a limited range of activities, and a given activity can only be performed by a limited range of structures. Thus, functional roles are not separable—​not distinct and autonomous from—​the structures that perform them. Nevertheless, often the same activity can be performed by different structures, different arrangements of the same structures, or different arrangements of different structures. Because of this, activities are often multiply realizable—​that is, different kinds of mechanism can exhibit the same activity. Some mechanisms and their components perform teleological functions, which are a special kind of functional role. In other words, such mechanisms don’t just perform activities simpliciter—​ they perform (teleological) functions, which are a special kind of activity. Performing their function at the appropriate rate in appropriate circumstances is what they are supposed to do. If they fail to perform their functions at the appropriate rate in the appropriate circumstances, they malfunction. Performing computations and processing representations are teleological functions of certain specialized mechanisms. Teleological functions are controversial. Some people find them ontologically suspect; others account for them in terms of the evolutionary or selection history of a system (Garson 2016). I prefer a goal-​contribution account: a teleological function is a stable contribution to a goal of organisms.

Nonnatural Mental Representation  261 Organisms have biological and nonbiological goals. For present purposes, what matters most are the biological goals, which are survival, development, reproduction, and helping others. Any regular contribution to a biological goal of an organism on the part of a biological trait or artifact is a biological (teleological) function of that trait or artifact. Some functions may be subject to plasticity and learning, meaning that some traits (e.g., the nervous system) may have the function to acquire more specific functions through developmental and cognitive selection processes. For example, a neuronal population may acquire the function of representing the values of a specific environmental variable through a process of biological development and learning (cf. Millikan 1984 on derived proper functions, Garson 2012, and Neander 2017, 21–​22). I will leave a full treatment of the relation between functional plasticity and mental representation to another occasion. Some multilevel mechanisms are computational and representational; that is, they perform computational and representational functions. Such mechanisms range from primitive computing components such as logic gates and perhaps neurons all the way up to complex networks of processors. Each level is composed of lower-​level mechanisms and partially constitutes higher-​level mechanisms, all the way up to the whole system. Higher-​level representations and computations are constituted by lower-​level ones, all the way down to atomic computations performed by primitive computing components. Computation does not require representation—​there are computations defined over vehicles that are not representations because they have no semantic content. Nevertheless, minds do perform computations over representations. That is how they keep track of what is happening around them and react appropriately, which contributes to their survival and inclusive fitness. A computation is a kind of mechanistic process defined over certain degrees of freedom of the vehicles being processed, regardless of any more specific physical properties of the vehicles (Piccinini 2015, ch. 7). Because of this, computations are not only multiply realizable, as many other activities and functions are—​they are also medium-​independent. In principle, computations may be realized using any medium with the appropriate degrees of freedom. (In practice, there are other constraints to consider, such as temporal constraints on how fast an organism needs to respond to stimuli.)

262  What Are Mental Representations? The kinds of computational systems we are interested in, such as minds, employ structural representations. Structural representations are internal states that are systematically related to one another as well as to what they represent. They guide action by modeling, or simulating, what they represent (Craik 1943). For example, a smartphone navigation app helps you find your way, in part, by modeling your environment. Thus, the functional role of a structural representation is guiding action with respect to an environment by simulating the environment or a portion thereof. More precisely, structural representations “stand in” for what they represent by being homomorphic (i.e., partially isomorphic) to their target and by guiding action with respect to it; in short, they are functioning homomorphisms (Gallistel 1990, 2008; Gallistel and King 2009). Contrary to what is sometimes claimed (Ramsey 2007), this notion of representation is the one best suited to make sense of neural representation.4 Minds may be natural, and hence biological, or artificial. The same framework I sketch here applies to both natural and artificial minds, even though the mechanisms involved may differ. Since my primary target is human minds and other natural minds, I will set artificial minds aside and focus on natural minds. The natural minds we are acquainted with are realized in neurocognitive systems, which are systems of multilevel neurocognitive mechanisms. These systems are embodied and embedded in the innocuous sense that they are tightly coupled with a body and environment and their functioning is deeply affected, in real time, by such a coupling. As we shall see, representations acquire semantic content thanks to the coupling between neurocognitive systems and their environment. Perhaps biological minds also include aspects of the body and environment beyond the neurocognitive system as their parts; whether they do makes no difference to the framework I adopt, so I will not comment on that any further.5

4 For other expositions of the notion of structural representation and, in some cases, for arguments that it’s the notion best suited for neuroscience, with some variation in the details, see, among others, Shepard and Chipman 1970; Swoyer 1991; Cummins 1996; Grush 2003, 2004; O’Brien and Opie 2004; Ryder 2004, forthcoming; Bartels 2006; Waskan 2006; Bechtel 2008; Churchland 2012; Shagrir 2012; Isaac 2013; Hohwy 2013; Clark 2013; Morgan 2014; Neander 2017, ch. 8. Some of these authors also point out that structural representations simulate what they represent, that simulations may be run offline, and that running simulations offline increases the representational power of a system. As far as I can tell, none of them develops offline simulation into an account of nonnatural representation, as I do in what follows. 5 For more on whether and how the mind is embodied and embedded, see Robbins and Aydede 2009.
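As a rough illustration of a functioning homomorphism (my sketch, not drawn from the cited authors; the grid world, the start and goal, and the breadth-first planner are invented), an internal map that mirrors the layout of the environment can be used to simulate possible moves and thereby guide action, much as the navigation app mentioned above does:

```python
# Toy structural representation: an internal grid map shares the layout of
# the environment (a functioning homomorphism) and is used to simulate
# possible moves in order to guide action. Map, start, and goal are invented.
from collections import deque

internal_map = [            # 0 = open, 1 = wall; mirrors the external layout
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def plan(grid, start, goal):
    """Breadth-first search over the internal map: a simulation of moves
    that can then guide actual locomotion."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in visited:
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route found: the simulation fails to guide action

print(plan(internal_map, (0, 0), (2, 3)))
```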


9.5. Informational Teleosemantics

In the previous section, I sketched an account of the functional role of (structural) representation: to be a functioning homomorphism of its target or, equivalently, to be a simulation that guides action. What remains to figure out is how individual representations within a structural representational system acquire semantic content and what content they have. As a first step, I will briefly sketch a version of informational teleosemantics.6 In later sections, I will extend the theory to cover nonnatural representations. Informational teleosemantics and related theories have been developed by a number of philosophers over the last few decades (e.g., Stampe 1977; Dretske 1981, 1986, 1988; Fodor 1987, 1990a, 2008; Millikan 1984, 1989, 1993, 2000; Ryder 2004, forthcoming; Neander 2017; this literature is surveyed in Adams and Aizawa 2010; Neander 2012; and Neander 2017, ch. 4). I don't have space to compare and contrast my version of informational teleosemantics with other theories; that's okay because the differences are mostly irrelevant here. For present purposes, all theories in this family share most of the same strengths and weaknesses. Informational teleosemantics has two main ingredients: natural semantic information and (teleological) function. As we have seen, a state (or signal) carries natural semantic information that P just in case it raises the probability that P. The core of informational teleosemantics is natural representation:

(NR) A state (or signal) R occurring within a system S naturally represents that P =def A function of S is tracking that P by producing R.

A system S tracks that P by producing R =def S produces R if and only if P, so as to guide the organism in response to the fact that P.7

6 Some proponents of structural representation take it to provide an alternative to informational teleosemantics by accounting for semantic content in terms of homomorphism (or structural similarity) instead of information. But homomorphism, in order to amount to representational content, must be established and sustained through a causal or informational connection with a referent. That's where informational teleosemantics comes in. Conversely, when a system of internal states systematically carries information about a system of external states, it is homomorphic to it. Thus, homomorphism and information are two sides of the same semantic coin (cf. Morgan 2014).

7 I am omitting an additional condition on natural representation: that R be one among a range of similar states that map onto a range of similar external states. In addition, neural representations are medium-independent. I am omitting these conditions because they do not affect the present argument.

In other words, a state R represents that P just in case the system that produces R has the function of producing R just in case P obtains, in order to guide the organism in response to the fact that P. That is the system's representational function. A state R will co-vary with the fact that P to the degree that the system fulfills its representational function. If the system always fulfills its representational functions, the occurrence of R will raise the probability that P to 1 and thereby be factive. More realistically, systems don't always fulfill their representational function, in part because the world is full of noise. Therefore, there is often a trade-off between false positives and false negatives. The more the system works to avoid false negatives, the more it ends up with false positives—and vice versa. Under realistic circumstances, if the system fulfills its representational function enough of the time to generate a reliable enough correlation between R and the fact that P, then R will raise the probability that P to a corresponding degree. To that degree, R carries natural semantic information that P. This is the connection between representation and natural semantic information. In general, R will be one among a range of similar states (e.g., the firing of a neuronal population at different rates) all of which have the function of tracking specific values of an environmental variable (e.g., the orientation of a line within a specific portion of the visual field). That may include values that the organism has never encountered. That is, a representational system may learn to track, say, oriented lines in the visual environment; when learning is sufficiently advanced, the system may be able to track lines with any orientation, including orientations that were never encountered during the learning period. In addition, typical representational states are holistic in the following sense: the parts of a representation mutually affect one another as the system constructs a representation of a whole perceptual field. Taken together, the natural representational states that a system produces track the actual environment. In other words, they constitute a structural representation that simulates the organism's environment. Thus, this is an account of the semantic content of natural structural representations. For simplicity, however, (NR) abstracts and idealizes these complexities away to focus on individual states in isolation from one another and from their context. The tracking function mentioned in (NR) amounts to not only producing R—which carries natural semantic information that P—just in case P, but also being appropriately connected with motor control systems so as to guide the agent with respect to the fact that P. One reason to include action guidance

Nonnatural Mental Representation  265 is to identify the (distal) fact that P—​as opposed to one of its more proximal intermediaries—​as the content of a representation.8 If a fact P obtains and a representation R is produced and appropriately connected with motor control systems, the system fulfills its representational function by correctly representing that P via R. If P is not the case but R is produced anyway, and R is connected with motor control systems so as to guide them, then R misrepresents that P. If R is produced but fails to be appropriately connected with motor control systems so as to guide the agent with respect to the fact that P, then R fails to represent that P—​in fact, it fails to represent anything.9 For example: neurons whose function is firing in the presence of cows may occasionally fire in the presence of horses, perhaps because the dark night makes it difficult to distinguish cows from horses (Fodor 1990a); in that case, such neuronal firings are still carrying natural semantic information that there is a cow, because in general such firings raise the probability that there are cows. In addition, the organism responds to such firings by making action plans as if there were cows; therefore, the function of such firings is to track cows. If such firings occur in the presence of a horse, they are now misrepresenting the presence of a horse as the presence of a cow. This is how informational teleosemantics solves this aspect of the so-​called disjunction problem: do those neurons naturally represent the disjunction cow-​or-​horse-​ on-​a-​dark-​night? No, they represent only cows, because their function is tracking cows. Many sensory, perceptual, and perceptual-​belief states in the nervous system fit this informational teleosemantic account of the semantic content of representation. Representational systems have the function of producing states that co-​vary with states in the organism’s body and environments—​ that is, of producing internal states just in case certain external states occur—​ so as to guide the organism with respect to those external states. When internal states carry such information and are connected to the organism’s motor control system in the right way, they fulfill their representational function. Otherwise, they misrepresent or fail to represent.

8 For a more detailed discussion of the problem of distal content, including a solution along similar lines, see Neander 2017, ch. 9. 9 Motor control systems may be dysfunctional or disconnected from the locomotive system, as when the organism is paralyzed. Whether that is the case is a separate question from whether the system represents things. Paralysis does not prevent representation.

Informational teleosemantics has many virtues, and it explains a very important, basic notion of representation: natural representation. It does not, however, work for nonnatural representation.
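To make the structure of (NR) vivid, here is a minimal toy model in Python. It is not part of the author's account: the scenario, the probabilities, and the function names (sample_world, cow_detector_fires) are illustrative assumptions. In the spirit of the cow/horse example above, it shows how a state can raise the probability of its target (and so carry natural semantic information) even though individual tokenings triggered by the wrong cause count as misrepresentations, that is, failures to fulfill the tracking function.

```python
import random

# Toy world: on each trial the environment contains a cow, a horse, or nothing,
# and the night is sometimes too dark to tell cows from horses.
def sample_world(rng):
    animal = rng.choices(["cow", "horse", "nothing"], weights=[0.3, 0.3, 0.4])[0]
    dark = rng.random() < 0.2
    return animal, dark

# Toy detector whose function (in the teleosemantic sense) is to fire
# just in case a cow is present, so as to guide action with respect to cows.
def cow_detector_fires(animal, dark, rng):
    if animal == "cow":
        return rng.random() < 0.9      # mostly fulfills its tracking function
    if animal == "horse" and dark:
        return rng.random() < 0.5      # dark nights cause false positives
    return rng.random() < 0.02         # occasional noise

rng = random.Random(0)
trials = [sample_world(rng) for _ in range(100_000)]
records = [(animal, dark, cow_detector_fires(animal, dark, rng)) for animal, dark in trials]

p_cow = sum(a == "cow" for a, _, _ in records) / len(records)
fired = [r for r in records if r[2]]
p_cow_given_fired = sum(a == "cow" for a, _, _ in fired) / len(fired)
print(f"P(cow) = {p_cow:.2f}, P(cow | detector fires) = {p_cow_given_fired:.2f}")
# The firing raises the probability that a cow is present: it carries natural
# semantic information about cows, even though it sometimes fires at horses.

# Classifying individual firings relative to the detector's tracking function:
correct = sum(1 for a, _, f in records if f and a == "cow")
misrepresentations = sum(1 for a, _, f in records if f and a != "cow")
print(f"correct representations: {correct}, misrepresentations (malfunctions): {misrepresentations}")
```

On this toy picture, lowering the firing threshold on dark nights would trade false negatives for false positives, which is the trade-off mentioned above.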

9.6.  Why Informational Teleosemantics Fails for Nonnatural Representation Nonnatural representations are representations that carry nonnatural semantic information, which in turn is information that need not raise the probability of what it’s about. They are the kind of state that, according to RTI, explains intentional states such as imagining, hypothesizing, pretending, entertaining possibilities, and the like. In addition, any instance of believing that P that does not raise the probability that P is a nonnatural representation. Nonnatural representations are also the kind of state that, according to RTI, causes typical natural language utterances and tokens of other arbitrary sign systems. There is no direct way to reduce nonnatural representation to natural representation. This is because the content of natural representation is natural semantic information, and natural semantic information that P consists in raising the probability that P. Since nonnatural representation that P is defined as a representation that need not raise the probability that P, the content of a nonnatural representation that P is, by definition, something other than the natural information that P. At this point, an optimistic informational teleosemanticist might hope to account for nonnatural representation by adding to (NR) a clause that employs the notion of nonnatural information: (NNR) A state (or signal) R occurring within a system S nonnaturally represents that P =def

(1) R carries nonnatural semantic information that P and (2) A function of S is tracking that P by producing R.

Clause (2)  still attributes to the representational system the function of tracking that P.  But nonnatural representations need not have a tracking function. For example, when an agent is imagining that P or entertaining the

Nonnatural Mental Representation  267 possibility that P, the function of its representational system is not at all to track that P. Therefore, clause (2) must go. Can it be substituted with a clause that appeals to the function of carrying nonnatural semantic information? Consider this proposal: (NNR*) A state (or signal) R occurring within a system S nonnaturally represents that P =def A function of S is carrying nonnatural semantic information that P so as to guide the organism in response to the nonnatural semantic information that P.

(NNR*) attempts to mirror (NR) as closely as possible by substituting nonnatural semantic information for natural semantic information. Unfortunately, (NNR*) has two fatal problems. First, unlike (NR), (NNR*) is not a naturalistic account of representational content. Unlike natural semantic information, which raises the probability that something is the case, we have not been given a naturalistic account of nonnatural information. All we know is that nonnatural semantic information need not raise the probability that P. Thus, nonnatural information is not a suitable ingredient for a naturalistic account—​at least not until it is naturalized. Second, (NNR*) does not lead to a viable account of misrepresentation. (NR) accounts for natural misrepresentation as a kind of malfunction: a natural representation that P misrepresents just in case it is produced when P, which the system has the function of tracking, is not the case. Thus, a natural representation misrepresents just in case the system fails to fulfill its representational function. This does not work for nonnatural representation. Whether a nonnatural representation misrepresents does not depend on whether it malfunctions. Conversely, whether nonnaturally misrepresenting is a malfunction does not depend on whether the system fulfills its function of carrying nonnatural information. To see this, consider an example. Suppose that, due to Cotard’s delusion, I believe I’m dead. By RTI, I believe that I’m dead by tokening a mental representation whose content is that I’m dead. If I were really dead, I couldn’t do this. Therefore, my representation that I’m dead cannot track that I’m dead. Therefore, it cannot be a natural representation—​it’s a nonnatural representation. I’m not dead but my representational system nonnaturally misrepresents me as dead. By hypothesis,

268  What Are Mental Representations? my belief is a delusion due to neurocognitive malfunction. Thus, my belief may well count as a nonnatural representation that misrepresents due to system malfunction. But now suppose that my neurocognitive system is working just fine. I  know I’m alive. I  just imagine being dead, perhaps as part of a meditative practice. As before, by RTI my state of imagining is explained by my tokening a mental representation whose content is that I’m dead. Since I’m not dead, my imaginative state misrepresents me as dead. Since my representational system cannot track that I’m dead, this is again a case of nonnatural representation. But there is no sense in which my representational system is malfunctioning. If its function is to carry the nonnatural information that I’m dead in the service of guiding action, then it is fulfilling its function. My representational system is functioning perfectly well—​there is no malfunction at all. The point generalizes beyond this somewhat idiosyncratic example. Take any nonnatural representation that P. Under many conditions, the functions of a system that is representing nonnaturally are such that the system ought to represent that P even though P is not the case. It may be entertaining hypotheses, evaluating counterfactuals, or daydreaming. Regardless of the exact attitude involved, a system that is representing nonnaturally may have the function of representing that P whether or not P is the case. If so, then nonnatural misrepresentation is not representational malfunction. Therefore, the standard informational teleosemantics account of misrepresentation as representational malfunction does not work for nonnatural representation. And without a viable account of nonnatural misrepresentation, we lack a viable account of nonnatural representation. In summary, informational teleosemantics fails for nonnatural representation for two reasons: nonnatural information has not been naturalized and no viable account of nonnatural misrepresentation has been given.

9.7. Offline Simulation To construct a viable account of nonnatural representation, I will build on some basic features of structural representation. As we’ve seen, I  am assuming that a representational system constructs internal simulations of what it represents—​that is, dynamical models that guide action. Let’s consider the case of an organism responding to its environment in real time. The

semantic content of its perceptual representational system is its current internal and external environment. For example, the semantic content of the visual system is the visual scene surrounding the organism; the semantic content of the proprioceptive system is the state of the organism's body. Environments change constantly, both on their own and in response to organisms' actions. Therefore, in order to guide an organism successfully, it isn't enough for a representational system to simulate how the environment is at a given time; such a simulation must be continuously updated in real time.

In addition, such a simulation cannot be based solely on incoming sensory information about the environment at the time it impinges on the organism, for two reasons. First, receiving and processing sensory information takes time. Any simulation that is based solely on what sensory information indicates about the present state of the environment will lag behind the state of the environment and be out of date. As the environment evolves, the organism needs a simulation of how the environment is right now, not how it was when it impinged on the organism. Second, action must be planned in advance. Even real-time responses to the current environment take time to be executed, so they may have to be programmed in response to how the environment will be at the (slightly future) time of action rather than how the environment is right now. For example, walkers must predict where their feet will meet the ground to program appropriate muscle contractions in response to contact with the ground. If they have the wrong expectation, they may lose their balance. The time difference between the present and when action takes place may be small, but under many circumstances successful action requires adjusting for even a small difference.

In short, a representational system must be able to predict how the environment is when a simulation is running, which is slightly later than the state of the environment indicated by incoming information, as well as when action is executed, which is slightly later than now. I will call the difference between the time when action is executed and the time when sensory information impinges on the organism the sensorimotor time interval. Because the sensorimotor time interval must be taken into account in order to maintain relatively accurate simulations of the actual environment in real time, internal models must be coupled to their target through a combination of sensory information and extrapolation.10

10 This point is reminiscent of some Bayesian approaches to neuroscience and psychology (e.g., Hohwy 2013; Clark 2013), though I remain neutral on the exact balance between sensory information and prediction that is at play. As far as I am concerned, the balance between sensory information and prediction may vary depending on circumstances.

So far, we are still talking about natural representation. Except that we've now built into it a fact often neglected in the informational teleosemantics literature: in order to function, natural representation must represent more than what sensory experience indicates about the environment at the time the environment impinges on the organism. Natural representation must be based on a combination of sensory information and extrapolation that simulates how the environment is—either right now or in the immediate future. A representational system may represent more than what current sensory information indicates in many other ways:

1. It may track objects and their location when they are partially or wholly occluded, and hence not perceivable. Obstacles may partially or wholly occlude objects, yet a sufficiently powerful representational system will continue to track the existence and location of some occluded objects. Object completion, object permanence, and cognitive maps are just some classic phenomena involving such tracking.
2. It may keep a trace of past events (episodic memory).
3. It may predict future states of the organism and its environment beyond the sensorimotor time interval, as when a predator intercepts a prey based on where it expects the prey to be at a future time rather than where the prey is now (or within the sensorimotor time interval).
4. It may infer unobservable causes of observable events.

These are still natural representations, or extended forms thereof, as long as their function is to track the actual environment—that is, to co-vary with the environment in order to guide action. Any misrepresentation is a representational malfunction. Nevertheless, these representations depart from standard examples of natural representations by representing more than what is indicated by incoming sensory information. Paradigmatic examples of natural representations are kept up to date by checking them against incoming sensory information from the environmental variables that are being tracked. By contrast, the kinds of simulations in the preceding list are maintained in spite of the absence of incoming sensory information from the environmental variables that are being tracked.
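The role of the sensorimotor time interval can be illustrated with a small numerical sketch. The following Python fragment is not from the text; the delays, the constant-velocity target, and the variable names are assumptions chosen only to show why an estimate based purely on the latest (delayed) sensory reading lags behind the world, whereas an estimate that combines sensory information with extrapolation can be approximately correct at the time the action lands.

```python
# Toy illustration: tracking a moving target when sensory readings arrive with
# a delay, so the internal model must extrapolate across the sensorimotor
# time interval. All numbers and names are made up.

SENSORY_DELAY = 0.10   # seconds between a world state and its sensory report
MOTOR_DELAY = 0.05     # seconds between issuing a command and its execution
DT = 0.01

def true_position(t):
    return 2.0 * t      # target moves at a constant 2 m/s

history = []            # (timestamp_of_world_state, sensed_position)
for step in range(200):
    now = step * DT
    sensed_t = now - SENSORY_DELAY   # the newest reading describes the past
    if sensed_t >= 0:
        history.append((sensed_t, true_position(sensed_t)))
    if len(history) < 2:
        continue

    (t0, x0), (t1, x1) = history[-2], history[-1]
    velocity = (x1 - x0) / (t1 - t0)

    # Coupling to sensory information plus extrapolation: estimate where the
    # target is now, and where it will be when the action is executed.
    est_at_action = x1 + velocity * (now + MOTOR_DELAY - t1)

    raw_error = abs(true_position(now) - x1)                        # lagging estimate
    ext_error = abs(true_position(now + MOTOR_DELAY) - est_at_action)

print(f"error using raw delayed reading: {raw_error:.3f} m")
print(f"error using extrapolated estimate: {ext_error:.3f} m")
```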

Nonnatural Mental Representation  271 This shows that natural representation requires the ability to simulate environments in ways that go beyond—​are partially decoupled from—​incoming sensory information. I call a simulation that is coupled to its current environment through incoming sensory information an online simulation. To the degree that a simulation represents more than what sensory information indicates about the current environment, I call it an offline simulation. I have argued that even natural representation involves a degree of offline simulation of the environment. Offline simulation comes in kinds and degrees. That is, there are different ways that a simulation can be decoupled from incoming sensory information. Each of these ways varies along a continuum: 1. The target of an offline simulation can be more or less extended in space. An offline simulation may involve as little as making the best estimate of what the environment is like where we have our blind spot or as much as forming a cognitive map, complete with landmarks, of an environment we cannot see at the moment. 2. The target of an offline simulation can be more or less extended in time. An offline simulation may involve as little as predicting how the environment will be in a few milliseconds, when our actions will be executed, or as much as anticipating where a prey will be a few seconds from now. When we add the kind of time-​tracking and planning abilities that human beings have, it may involve planning actions years in advance as well as remembering events that happened years ago. 3. An offline simulation may be more or less coordinated with online simulation. It may involve as little as filling in a few missing details from an actual visual scene (such as what’s in the blind spot or what lies behind occluding objects) or as much as imagining entire counterfactual scenarios. 4. An offline simulation may require more or less interaction between perceptual systems and motor systems. On the purely perceptual side, it may involve simply filling in visual or auditory details beyond what the incoming sensory information indicates. On the purely motor side, it may involve simply selecting an action that is affordable within the currently perceived environment. Perceptual imagination and motor imagination can also be integrated. A representation of a hypothetical action may require a representation of a currently invisible,

272  What Are Mental Representations? hypothetical, or future environment, and it can be fed back to the perceptual imagination system to predict how the environment would change in response, and then perhaps predict how other agents would respond further, and so on, so as to assess the pros and cons of various possible actions before choosing one. 5. An offline simulation may be more or less automatic. It may be as automatic (and unconscious) as filling in the blind spot or as deliberate as planning a vacation. Some of the preceding ways and degrees of offline simulation have to do with responding to the current environment in real time. Others do not—​ they have to do with understanding and communicating past events, planning long-​term action, and other cognitive functions that are not directly tied to ongoing (nonlinguistic) action. To understand the ways that offline simulation can augment the representational power of a cognitive system, we must distinguish three importantly different cases.

9.7.1.  Augmented Online Simulation Let’s begin with what I call augmented online simulation: the simulation of the current environment in real time, to guide current action. As we’ve seen, even simulations that are largely online represent more than what is indicated by incoming sensory signals about the state of the environment at the time it impinges on the system. Accordingly, augmented online simulation is built on some combination of incoming sensory information and extrapolation. Some of the ways that the simulation goes beyond sensory information need not be tracked by the system. For instance, the system need not track that it’s extrapolating the current or immediately future state of the environment or filling in the blind spot. It’s all part of building the most accurate and actionable model of the current environment and using it to guide action. Other ways in which the simulation goes beyond sensory information must be tracked. An obvious case is object completion: a good visual system represents objects in its environment as partially occluded three-​ dimensional wholes, not as two-​dimensional patches. To do this, the system must track which parts of which objects are visible and which parts are occluded, either by the object itself or by other objects. Such tracking, which is necessary to respond appropriately to the environment, is still a form of

Nonnatural Mental Representation  273 natural representation because its function is to track aspects of the current environment.
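The point that an augmented online simulation must track which of its parts are sensed and which are filled in can be made concrete with a toy sketch. The Python below is not the author's model; the data structure, the provenance tag, and the cup example are illustrative assumptions. It shows one simple way a scene model could combine incoming sensory information with schema-based object completion while keeping the two sources distinguishable for downstream action guidance.

```python
# Toy sketch of an "augmented online simulation": a scene model built from
# sensory input plus filling-in, where the system tracks which parts of the
# model are sensed and which are merely completed.

from dataclasses import dataclass

@dataclass
class Part:
    name: str
    present: bool
    provenance: str   # "sensed" or "filled_in"

def build_scene_model(visible_parts, object_schema):
    """Combine what is currently sensed with schema-based object completion."""
    model = []
    for part in object_schema:
        if part in visible_parts:
            model.append(Part(part, True, "sensed"))
        else:
            # Object completion: represent the occluded part as present,
            # but keep track that it was not sensed.
            model.append(Part(part, True, "filled_in"))
    return model

# A cup partially occluded by a book: only its rim and one side are visible.
cup_schema = ["rim", "left_side", "right_side", "handle", "base"]
visible = {"rim", "left_side"}

scene = build_scene_model(visible, cup_schema)
for part in scene:
    print(f"{part.name:12s} present={part.present} ({part.provenance})")

# Action selection can consult the provenance tags: e.g., reach for the handle
# cautiously, since its represented location is filled in rather than sensed.
occluded = [p.name for p in scene if p.provenance == "filled_in"]
print("parts represented but not currently sensed:", occluded)
```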

9.7.2. Offline Simulation of the Actual Environment Next, let's consider the offline simulation of portions of the actual environment that cannot be currently perceived. This involves remembering past events, considering what is present in portions of the environment that are not currently perceivable (e.g., non-visible parts of a maze), or predicting future states of the environment for medium- and long-term planning. In addition to tracking the environment itself, such offline simulation requires tracking that the simulation is offline. The system must track which aspects of a given simulation are online and which are offline, on pain of getting confused and responding to a non-present, non-actionable portion of the environment. Again, offline simulation and the tracking of its offline status are a form of natural representation, for their function is to track some aspect of the actual environment. But offline simulation and the tracking of its offline status are not closely coupled with the state of the environment via sensory information in the way that online simulation is. Since the simulation is entirely offline, it cannot be constantly updated and checked for accuracy by using sensory information. Therefore, it constitutes a nontrivial extension of (augmented) online natural representation.

One way to see that this is a nontrivial extension of natural representation is to consider the degree to which an offline misrepresentation of the current environment counts as a malfunction. If an offline representation represents the angle between two branches of a maze as being 90 degrees when in fact it's 60 degrees, it's tempting to say that the system is malfunctioning. But it may be more accurate to say that it's providing the best estimate of the angle based on available natural semantic information. Offline simulation provides the best estimate of an environment that the organism is capable of. As long as the organism is not neglecting more accurate natural semantic information it possesses, the representational system may be functioning correctly even if it doesn't represent everything completely accurately.11

11 Another way to see that this is a nontrivial extension of natural representation is that this extension is not even possible under causal theories of representational content, which posit that the content of a representation is its present cause. In general, the content of an offline simulation of the actual environment is not its present cause, and certainly it need not be its present cause (cf. Fodor 1990b for an argument to this effect as well as precursors to some of the present themes). By contrast, (NR) does not require that the semantic content of a representation be its present cause, so it supports the extension from augmented online simulations to offline simulations of the actual environment.


9.7.3. Offline Simulation of Nonactual Environments Finally, let's consider the simulation of a nonactual environment. Since the environment is nonactual, it cannot be simulated online—any such simulation must be conducted offline. Simulation of a nonactual environment involves forming representations that, according to RTI, explain acts such as imagining counterfactual scenarios, entertaining possibilities and hypotheses, and daydreaming.

A special case is dreaming simpliciter. During a dream, typically the subject experiences nonactual events without tracking that they are nonactual. This does not matter because dreaming occurs while the organism is asleep. While the organism is asleep, normally, the motor system is paralyzed—except for eye movements. While dreaming, motor commands may be issued in response to the experienced stimuli, but such motor commands do not leave the nervous system, so the body does not move—except in pathological cases such as sleepwalking.

In other cases of offline simulation of nonactual environments, which occur while the organism is awake, it is critical that the representational system track that such simulations are offline and the way they depart from the actual environment, on pain of acting in response to the imagined environment rather than the actual one. To illustrate, suppose a hungry organism fantasizes about having a succulent meal in front of it. If it attempts to eat such a meal, it will fail. If it represents the meal as being somewhere in its non-visible environment without representing it as a fantasy, the organism will start searching for such a meal. If it systematically acts in response to such fantasies at the expense of acting on accurate representations of the actual environment, it will eventually starve. To act effectively, the organism's representational system must keep track that its imaginary meal is not in the actual environment. It must track which of its representations are offline simulations and the way they depart from the actual environment. To perform such tracking, the representational system must possess some internal state or signal whose function is tracking the degree to which a simulation is offline. Since the function of this state or signal is tracking a

Nonnatural Mental Representation  275 current aspect of the (internal) environment, this is still a form of natural representation—​it tracks one current aspect of the representational system itself.12 Yet it provides a crucial ingredient by which the representational system achieves nonnatural representation. The most important point is that offline simulations of nonactual environments do not fit the standard informational teleosemantics story, because they are not coupled with the environment via sensory information. Where do offline simulations of nonactual environments get their semantic content, and what content do they have?

9.8. Offline Semantics To give an account of the semantic content of offline simulations, we need to say a bit more about what mental simulations consist in—​what their parts are and how they are bound together to form representations of objects and their properties. I assume that something like the mainstream view of neural representation is correct: mental simulations consist of the bound firing of different neuronal populations at appropriate rates; each portion of the simulation has the function of tracking one aspect of what is represented. For instance, the firing of different neuronal populations at certain rates may represent certain shapes, textures, colors, positions in space, directions of motion, etc. When a combination of different neuronal populations is firing together and their firing is bound via an appropriate mechanism—​e.g., their firing is synchronized—​the result is a simulation of an object and its properties. This is in line with our understanding of neural representation from mainstream neuroscience (Thomson and Piccinini 2018). Whether the details are correct matters less than that something along these lines is on the right track. Having said what a mental simulation in general consists in, I  assume that offline simulations consist in the offline deployment of representational resources—​neuronal populations whose firing is bound—​that are acquired in the course of online simulations. There is empirical evidence that this

12 That’s not to say that such internal tracking is metarepresentational. It may simply amount to constructing simulations that play an appropriately modified role in guiding action whenever representational resources are deployed in a way that departs from what the system represents to be the actual state of the world.

276  What Are Mental Representations? assumption is correct, but I will not review such evidence here (for a start, see Barsalou 1999; Kosslyn, Thompson, and Ganis 2006). When a representational resource is deployed online, in the service of (augmented) online simulation of the current environment, its accuracy is maintained and updated in light of incoming sensory information. Standard informational teleosemantics (NR) applies. That resource represents what it has the function to track in the actual environment. When a representational resource is deployed offline, in the service of offline simulations of the actual environment, its accuracy can be checked at least in principle by collecting sensory information under appropriate circumstances. An extension of informational teleosemantics (NR) still applies. That resource still represents what it has the function to track in the actual environment. Finally, when a representational resource is deployed offline, in the service of offline simulations of a nonactual environment, everything changes. What is represented is no longer actual, so the simulation is likely to be inaccurate relative to the actual environment. Worse, the actual environment can no longer be used directly to attribute semantic content to the representational resource, so informational teleosemantics (NR) does not apply. But the representational resource could be used in the service of online simulation. If it were so used, it would have semantic content thanks to standard informational teleosemantics. I propose that when a representational resource is deployed offline, in the service of offline simulations of nonactual environments, it retains the same semantic content it has when it is deployed online. Thus, offline simulations represent what they would represent according to informational teleosemantics if they were deployed within online simulations. Thus, the content of offline simulations piggybacks on the content of online simulations. Typical representations consist of many representational resources bound together in the relevant way. Accordingly, a principle of compositionality applies:  the content of the whole representation is a function of the content of the representational resources that compose it and the way they are bound together. Here is a simplified example:  suppose that a representational system constructs a representation consisting of a HORN representation and a HORSE representation, and the two are bound to one another as well as to a CONTIGUITY representation such that what is represented by the HORN representation is spatially related to what is represented by the

Nonnatural Mental Representation  277 HORSE representation in the relevant way. The result is a UNICORN representation, which is a composite representation deriving its content from the content of its constituent representational resources. Representational resources can be deployed together whether or not their targets can go together. When targets cannot go together but we represent them together, we represent the impossible. There is much more to be said about representational resources, how they can be combined, and how they give rise to various aspects of representation. Given the scope of this essay, I have to stop here and return to nonnatural representation.
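The compositional story just sketched (bound representational resources whose offline content piggybacks on their online content) can be rendered as a small data-structure sketch. The Python below is not the author's formalism; the classes, the content() function, and the string-based contents are illustrative assumptions. It only shows how a composite such as the UNICORN representation can have a determinate content even though its target has never been tracked online, because each constituent resource keeps the content it has when deployed online.

```python
# Toy sketch of compositional content for bound representational resources.
# Resources acquire their content through online tracking; offline composites
# inherit ("piggyback on") that content.

from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    online_content: str   # what the resource has the function of tracking online

@dataclass(frozen=True)
class BoundRepresentation:
    parts: tuple          # resources bound together (e.g., by synchronized firing)
    relation: Resource    # how their targets are represented as related

    def content(self):
        # Compositionality: the content of the whole is a function of the
        # contents of the bound resources and the way they are bound.
        a, b = self.parts
        return f"{a.online_content} {self.relation.online_content} {b.online_content}"

HORN = Resource("horn")
HORSE = Resource("horse")
CONTIGUITY = Resource("attached-to")   # a spatial-relation resource

# Deployed offline, the bound composite represents a unicorn, even though its
# target has never been (and may never be) tracked online.
unicorn = BoundRepresentation(parts=(HORN, HORSE), relation=CONTIGUITY)
print(unicorn.content())   # -> "horn attached-to horse"
```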

9.9. Nonnatural Representation as Offline Simulation We can finally account for nonnatural representation and misrepresentation, and then use nonnatural representation to explain nonnatural meaning. The first step is to show that offline simulations of nonactual environments represent nonnaturally. By definition, offline simulations of nonactual environments need not have the function of tracking things as they are. If anything, they often have the function of representing things as different than they are, although their precise function varies from case to case. Now suppose that an offline simulation of a nonactual environment has the function of representing things as different than they are. In order to fulfill such a function, the organism must track that the simulation is offline and the way it departs from the actual environment. Returning to an earlier example, suppose I intentionally imagine myself as dead, knowing full well that I'm alive. By RTI, I can do this by running a simulation of myself that binds to a simulation of a state of death, while my representational system tracks that representing myself as dead is a departure from reality. My simulation represents me as dead because it is composed of two bound representational resources that represent both me and the state of death, which they represent because that's what they represent when they are run online. But this is no malfunction: my simulation is functioning correctly, because it misrepresents me as dead while the system tracks that this is a departure from reality. The simulation does not raise the probability that I'm dead at all—its existence is actually incompatible with being a correct representation. Therefore, it represents nonnaturally.

278  What Are Mental Representations? Alternatively, suppose I have Cotard’s delusion and imagine myself as dead by running the same sort of simulation as before, while my representational system fails to track that representing myself as dead is a departure from reality. I actually believe I’m dead. Now my simulation is both misrepresenting and malfunctioning. In combination with the previous example, this shows that in the case of offline simulations of nonactual environments, misrepresentation and malfunction are logically independent. As I argued in section 9.6, this is a characteristic feature of nonnatural representation. More generally, one of the main functions of offline simulations of nonactual environments is representing things as different than they are in ways that the representational system tracks. When the system successfully tracks the ways in which a simulation departs from (what the system knows about) the actual environment, the representational system functions correctly—​it misrepresents because that’s what it’s supposed to do. By contrast, when the system fails to track the ways in which the simulation departs from the actual environment, the representational system malfunctions. Either way, this is a case of nonnatural representation. Its content is determined thus: (NNR**) A state (or signal) R occurring within system S nonnaturally represents that P =def

(1) R is deployed within an offline simulation of a nonactual environment. (2) A function of S is tracking the ways in which R is an offline simulation of a nonactual environment. (3) If R were deployed during an online simulation, R would naturally represent that P. We finally have an account of the content of nonnatural representations, their function, and the relation between nonnatural representational function and misrepresentation. Clause (1) is entailed by (2) so it’s redundant; I include it for clarity and explicitness. Recall that a state R naturally represents that P just in case it’s produced by a system that has a function of tracking that P by producing R. If a system produces R to fulfill some other function, while having at least the function of tracking the way in which R departs from the actual environment, then R still represents that P, but nonnaturally so. This is

Nonnatural Mental Representation  279 the case whether or not the system fulfills its function of tracking the way in which R departs from the actual environment.13 If R represents an impossible situation, or a situation that is incompatible with R being produced (such as that I’m dead), (NNR**) applies to the representational resources that constitute R. That is, each representational resource that constitutes R represents what it would represent if it were deployed during an online simulation, and R represents what the bound representational resources that constitute it would represent if they could be deployed during an online simulation. (NNR**) is an account of the semantic content of nonnatural mental representations, which is a step toward a naturalistic account of intentionality. The next challenge would be to cash out the mentalist theory of meaning by explaining the nonnatural meaning of linguistic utterances in terms of NNR**. The best I can do here is sketch how such an explanation would go. Before we get to the kind of nonnatural representation that explains linguistic intentionality and nonnatural meaning, we need one more ingredient. We need an arbitrary signaling system—​a system of signals that are arbitrarily assigned to their targets. Notice that the kind of representational resources we’ve been working with—​the ones that compose into mental simulations—​are not arbitrary. Natural representations are acquired and deployed in response to specific stimuli in order to guide action with respect to those stimuli. Their non-​arbitrariness is also the reason they have the semantic content they have. Their format is not separable from their content.14 But organisms need to communicate, and to communicate effectively they need public signals. Public signals can be associated with referents relatively arbitrarily and can become an effective means of communication so long as organisms have a way to coordinate their use of the signals (Barrett and Skyrms 2017; Skyrms 2010). 13 Morgan (2014) argues that being a structural representational system, and even being a structural representational system that conducts offline simulations, is insufficient for being a mental representation. His reason is that some plants may possess structural representational systems that conduct offline simulations (e.g., of the day-​night cycle). If plants don’t have a mind, then there must be more to mental representation than just offline simulation. One feature of mental representations that plants lack is the offline simulations of nonactual environments. There are probably others, such as medium independence (Piccinini 2020). 14 I  suspect that this is why mental states, which are at least in part constituted by mental representations, have original rather than derived intentionality. Original intentionality is intentionality that does not derive from any other intentionality. It is widely believed that natural language utterances and other tokens of public sign systems derive their intentionality from the intentionality of mental states. Mental states, in turn, are believed to have original intentionality.
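How (NNR**) separates misrepresentation from malfunction can be illustrated with a toy classifier. The Python sketch below is not part of the chapter; the record fields and the classify function are assumptions, and clause (3), which fixes what content R has, is simply taken as given. The point it encodes is the one argued for above: for offline simulations of nonactual environments, whether the system malfunctions depends on whether it fulfills its tracking-of-departure function, not on whether the content is true.

```python
# Toy sketch: how (NNR**) decouples misrepresentation from malfunction for
# offline simulations of nonactual environments.

def classify(content_true, deployed_offline_nonactual, departure_tracked):
    """Classify a tokened simulation whose content is that P.

    content_true:               whether P is actually the case
    deployed_offline_nonactual: clause (1) of (NNR**)
    departure_tracked:          whether the system fulfills its clause-(2)
                                function of tracking how the simulation
                                departs from the actual environment
    """
    nonnatural = deployed_offline_nonactual
    misrepresents = not content_true
    # For nonnatural representation, malfunction is failing to track the
    # departure from actuality, not misrepresenting per se.
    malfunctions = nonnatural and not departure_tracked
    return {
        "nonnatural representation": nonnatural,
        "misrepresents": misrepresents,
        "malfunctions": malfunctions,
    }

# Deliberately imagining that I am dead (departure tracked): misrepresentation
# without malfunction; the system does exactly what it is supposed to do.
print("imagination:", classify(content_true=False,
                               deployed_offline_nonactual=True,
                               departure_tracked=True))

# Cotard's delusion (departure not tracked): the same content now counts as
# both misrepresentation and malfunction.
print("delusion:   ", classify(content_true=False,
                               deployed_offline_nonactual=True,
                               departure_tracked=False))
```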

280  What Are Mental Representations? Arbitrary signaling systems, by themselves, are not enough to give rise to nonnatural meaning. For an arbitrary signaling system may be deployed solely to express natural representations. Consider vervet monkeys. They have a set of alarm calls for indicating to other monkeys when predators are around. They use different signals for different types of predator. As far as I know, the function of alarm calls is to track and communicate the presence of predators. If this is their only function, then vervet monkey alarm calls have only natural meaning, even though they are an arbitrary signaling system. When arbitrary signaling systems are combined with offline simulations of nonactual environments, however, they acquire the kind of nonnatural meaning that explains linguistic intentionality. If a linguistic utterance U expresses the force and semantic content of mental state R, and the representational system has the function of tracking the actual environment by producing R, then the function of U may still be to track the actual environment. If a linguistic utterance U expresses the force and semantic content of mental state R, and the representational system produces R while fulfilling a non-​tracking descriptive function—​e.g., the function of misrepresenting how things are—​then the function of U is to fulfill the relevant non-​tracking descriptive function. Since utterances may or may not have the function of representing correctly, they may or may not raise the probability of what they describe. Therefore, their semantic content is nonnatural (rather than natural) meaning. Needless to say, this is just a bare sketch of an account of linguistic intentionality. Many complexities are sidestepped. At a minimum, a more adequate account would describe the ways in which simulations with different functional roles and contents give rise to utterances with different forces and meanings.
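The appeal to arbitrary signaling systems can be made concrete with a minimal sender-receiver game in the spirit of Skyrms (2010). The Python sketch below is not from the chapter; the reinforcement rule and all parameters are illustrative assumptions. It shows only that arbitrary public signals can come to coordinate sender and receiver on states of the world; as the vervet monkey example indicates, such a system by itself carries at most natural meaning.

```python
import random

# Toy Lewis-style sender-receiver game with simple reinforcement learning.
# Which signal ends up paired with which state is arbitrary.

rng = random.Random(1)
STATES = SIGNALS = ACTS = [0, 1, 2]

# Urn-style propensities: sender maps states to signals, receiver maps signals to acts.
sender = {s: {m: 1.0 for m in SIGNALS} for s in STATES}
receiver = {m: {a: 1.0 for a in ACTS} for m in SIGNALS}

def draw(weights):
    return rng.choices(list(weights), weights=list(weights.values()))[0]

successes = 0
for trial in range(20_000):
    state = rng.choice(STATES)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:                      # communication succeeds
        sender[state][signal] += 1.0      # reinforce the associations just used
        receiver[signal][act] += 1.0
        if trial >= 19_000:
            successes += 1

print(f"success rate in the last 1,000 trials: {successes / 1000:.2f}")
# Rerunning with a different random seed typically yields a different but
# comparably effective mapping of signals to states: the convention is arbitrary.
```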

9.10.  Conclusion I have argued that an adequate explanation of intentionality needs to account for nonnatural meaning and nonnatural representation, namely, meaning and representation that need not raise the probability of what they represent. I have also argued that mainstream accounts of the semantic content of mental representations—​informational teleosemantics and related theories—​fail for nonnatural representation and a fortiori cannot account for nonnatural meaning.

Nonnatural Mental Representation  281 One reason is that nonnatural misrepresentation—​as opposed to natural misrepresentation—​is not representational malfunction. Unlike the natural representations that traditional representational theories of intentionality tend to posit and account for, the function of a nonnatural representation is not necessarily to accurately represent the actual state of an environmental variable. Rather, it may be to represent the state of an environmental variable inaccurately or even to represent something that is not in the environment at all. For example, if I want to imagine that I am dead, the function of my representation is to represent that I am dead, and my representational system would malfunction if it failed to represent me as dead. Here is another example: if I want to deceive someone by lying, what would actually be a malfunction of my cognitive-​linguistic system is telling the truth. These are examples of misrepresentations whose function is precisely to misrepresent!15 Thus, malfunction (simpliciter) cannot explain the distinction between a nonnatural representation that represents correctly and a nonnatural representation that represents incorrectly. That is, malfunction (simpliciter) cannot explain misrepresentation by nonnatural representations. A fortiori, malfunction (simpliciter) cannot explain nonnatural representation. I have sketched an account of nonnatural representation (and misrepresentation) in terms of natural representation plus offline simulation of nonactual environments plus tracking the ways in which a simulation departs from the actual environment. Nonnatural representation plus an arbitrary sign system gives rise to nonnatural meaning. To represent nonnaturally, the system must be able to decouple internal simulations from sensory information by activating representational resources offline. The system must be able to represent things that are not in the actual environment and to track that it’s doing so; i.e., there must be an internal signal or state that can indicate whether what is represented departs 15 Batesian mimicry is an example of signals that may be seen as carrying nonnatural information without being produced by a representational system in the present sense. Species that are harmful to predators often send warning signals—​such as specific sounds or coloration patterns—​to discourage predators from attacking them. Such warning signals carry the natural information that the would-​be prey is harmful. In Batesian mimicry, a harmless species imitates the warning signals that a harmful species produces. From the perspective of the predator, the mimic’s warning signal might still carry natural information in the sense that a mimic’s warning signal raises the probability that a potential prey is harmful (Skyrms 2010, 75–​77). From the perspective of the mimic, however, the function of its warning signals is to mislead predators—​that is, to cause predators to misrepresent the mimic as harmful. In this sense, a mimic’s warning signals carry nonnatural information.

282  What Are Mental Representations? from the actual environment. In addition, the system must be able to manipulate a representation independently of what happens in the actual environment and keep track that it’s doing so. The simplest case might be a system that manipulates an internal model based on potential actions without carrying out the action. This does not require an imaginative system separate from the motor system; it only requires the ability to manipulate parts of the motor system offline and redirect an efferent copy of its output to the perceptual system, while keeping track that the system is offline. To summarize: nonnatural representations are offline simulations whose departure from the actual environment the system has the function to keep track of. The follow-​up question is, what makes an offline simulation, which does not correlate with anything in the perceivable environment, represent something? I proposed that these states represent based on a compositional semantics that begins with representational resources that can be deployed online and represent what they would represent if they were deployed online. In other words, nonnatural representational content piggybacks on natural representational content. As in mainstream informational teleosemantics, natural representations get their content from performing the function of tracking a specific environmental variable. Nonnatural representations get their content by deploying natural representational resources to produce representations that do not track, and often lack the function of tracking, the actual environment. So nonnatural representations always represent what their natural counterparts have the function of representing. With this framework in place, we are in a position to explain the difference between a state that misrepresents by mistake (malfunctioning natural representation) and a state that, at least in some cases, misrepresents on purpose (correctly functioning nonnatural representation). Notice that nonnatural representations may also misrepresent by mistake, for instance in cases of delusion. The organism is mistakenly nonnaturally representing that P and believing that P, when in fact it is not the case that P. When an organism is wrong in this way, it is failing to track the decoupling between its internal models and the actual environment. The organism naturally represents both its environment and its actions. Representation of the environment can be decoupled from immediate sensory stimuli or even from the actual environment. Such representations of the environment can be manipulated. As long as the organism correctly tracks that it is manipulating its own mental simulations decoupled from the actual environment (for planning purposes, for entertainment purposes, or

Nonnatural Mental Representation  283 to deceive someone), it can represent the environment incorrectly while fulfilling the system’s representational function. When it fails to keep track of its own interventions on its own representations, it makes a mistake. So nonnatural representation is the manipulation of natural representational resources decoupled from the actual environment. Functional nonnatural representation is nonnatural representation in which the organism keeps track of its representational manipulations. Dysfunctional nonnatural representation is nonnatural representation in which the organism fails to keep track of its sensory decoupling and representational manipulations. This proposal breaks new ground and makes substantive progress toward an adequate naturalistic account of intentionality in terms of mental simulations implemented by neural representations. A  lot more must be worked out before a full account of intentionality is at hand. For starters, the preceding account of nonnatural representation must be combined with an arbitrary signaling system to account for the nonnatural meaning of natural language utterances.

Acknowledgments Thanks to Michael Barkasi, Alex Morgan, Andrea Scarantino, Joulia Smortchkova, Alberto Voltolini, and an anonymous referee for helpful comments on previous versions. For helpful feedback on presentations that led to this essay, thanks to audiences at the University of Turin, University of Missouri–​St. Louis, “Mental Representation: The Foundation of Cognitive Science?,” Bochum, Germany, September 2015, and “Information and Its Role in Science: Physics, Biology, Cognitive and Brain Sciences,” Jerusalem and Tel Aviv, Israel, June 2016. Thanks to Alex Schumm for editorial assistance. This material is based upon work supported by the National Science Foundation under grant no. SES-​1654982.

References Adams, F., and Aizawa, K. 2010. Causal Theories of Mental Content. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Spring 2010 ed. https://​plato.stanford.edu/​ archives/​spr2010/​entries/​content-​causal/​. Barrett, J. A., and Skyrms, B. 2017. Self-​Assembling Games. British Journal for the Philosophy of Science 68 (2): 329–​353.

284  What Are Mental Representations? Barsalou, L. W. 1999. Perceptual Symbol Systems. Behavioral and Brain Sciences 22: 577–​660. Bartels, A. 2006. Defending the Structural Concept of Representation. Theoria 21 (55): 7–​19. Bechtel, W. 2008. Mental Mechanisms:  Philosophical Perspectives on Cognitive Neuroscience. London: Routledge. Bourget, D., and Mendelovici, A. 2017. Phenomenal Intentionality. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Spring 2017 ed. https://​plato.stanford.edu/​ archives/​spr2017/​entries/​phenomenal-​intentionality/​. Clark, A. 2013. Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science. Behavioral and Brain Sciences 36 (3): 181–​253. Churchland, P. M. 2012. Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals. Cambridge, MA: MIT Press. Craik, K. 1943. The Nature of Explanation. Cambridge: Cambridge University Press. Cummins, R. 1996. Representations, Targets, and Attitudes. Cambridge, MA: MIT Press. Dretske, F. 1981. Knowledge and the Flow of Information. Cambridge, MA: MIT Press. Dretske, F. 1986. Misrepresentation. In R. Bogdan (ed.), Belief:  Form, Content, and Function, 157–​173. Oxford: Clarendon Press. Dretske, F. 1988. Explaining Behavior:  Reasons in a World of Causes. Cambridge, MA: MIT Press. Fodor, J. A. 1987. Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: MIT Press. Fodor, J. A. 1990a. A Theory of Content and Other Essays. Cambridge, MA: MIT Press. Fodor, J. A. 1990b. Information and Representation. In Philip P. Hanson (ed.), Information, Language and Cognition, 513–​524. Vancouver: University of British Columbia Press. Fodor, J. A. 2008. LOT 2:  The Language of Thought Revisited. Oxford:  Oxford University Press. Gallistel, C. 1990. Representations in Animal Cognition: An Introduction. Cognition 37 (1–​2): 1–​22. Gallistel, C. 2008. Learning and Representation. In J. Byrne (ed.), Learning and Memory: A Comprehensive Reference, 227–​242. Amsterdam: Elsevier. Gallistel, R. G., and King, A. P. 2009. Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. New York: Wiley/​Blackwell. Garson, J. 2012. Function, Selection, and Construction in the Brain. Synthese 189: 451–​481. Garson, J. 2016. A Critical Overview of Biological Functions. Dordrecht: Springer. Grice, P. 1957. Meaning. Philosophical Review 66: 377–​388. Grush, R. 2003. In Defense of Some “Cartesian” Assumptions Concerning the Brain and Its Operation. Biology and Philosophy 18 (1): 53–​93. Grush, R. 2004. The Emulation Theory of Representation: Motor Control, Imagery, and Perception. Behavioral and Brain Sciences 27 (3): 377–​396. Hohwy, J. 2013. The Predictive Mind. Oxford: Oxford University Press. Horgan, T., and Tienson, J. 2002. The Intentionality of Phenomenology and the Phenomenology of Intentionality. In D. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, 520–​933. Oxford: Oxford University Press. Isaac, A. 2013. Objective Similarity and Mental Representation. Australasian Journal of Philosophy 91 (4): 683–​704.

Nonnatural Mental Representation  285 Jacob, P. 2014. Intentionality. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Winter 2014 ed. https://​plato.stanford.edu/​archives/​win2014/​entries/​intentionality/​>. Kosslyn, S., Thompson, W. L., and Ganis, G. 2006. The Case for Mental Imagery. New York: Oxford University Press. Kriegel, U., ed. 2013. Phenomenal Intentionality. Oxford: Oxford University Press. Loar, B. 2003. Phenomenal Intentionality as the Basis of Mental Content. In M. Hahn and B. Ramberg (eds.), Reflections and Replies: Essays on the Philosophy of Tyler Burge, 229–​ 258. Cambridge, MA: MIT Press. Millikan, R. 1984. Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press. Millikan, R. 1989. Biosemantics. Journal of Philosophy 86 (6): 281–​297. Millikan, R. 1993. White Queen Psychology and Other Essays for Alice. Cambridge, MA: MIT Press. Millikan, R. 2000. Naturalizing Intentionality. In B. Elevitch (ed.), Philosophy of Mind: The Proceedings of the Twentieth World Congress of Philosophy, vol. 9, 83–​90. Bowling Green, OH: Philosophy Documentation Center, Bowling Green State University. Morgan, A. 2014. Representations Gone Mental. Synthese 191 (2): 213–​244. Morgan, A., and Piccinini, G. 2018. Towards a Cognitive Neuroscience of Intentionality. Minds and Machines 28 (1): 119–​139. Neander, K. 2012. Teleological Theories of Mental Content. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Spring 2012 ed. https://​plato.stanford.edu/​ archives/​spr2012/​entries/​content-​teleological/​. Neander, K. 2017. A Mark of the Mental:  In Defense of Informational Teleosemantics. Cambridge, MA: MIT Press. O’Brien, G., and Opie, J. 2004. Notes toward a Structuralist Theory of Mental Representation. In H. Clapin, P. Staines, and P. Slezac (eds.), Representation in Mind, 1–​20. Amsterdam: Elsevier. Piccinini, G. 2015. Physical Computation:  A Mechanistic Account. Oxford:  Oxford University Press. Piccinini, G. 2020. Neurocognitive Mechanisms:  Explaining Biological Cognition. Oxford: Oxford University Press. Piccinini, G., and Scarantino, A. 2011. Information Processing, Computation, and Cognition. Journal of Biological Physics 37 (1): 1–​38. Pitt, D. 2017. Mental Representation. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Spring 2017 ed. https://​plato.stanford.edu/​archives/​spr2017/​entries/​ mental-​representation/​. Ramsey, W. 2007. Representation Reconsidered. Cambridge: Cambridge University Press. Robbins, P., and Aydede, M. 2009. The Cambridge Handbook of Situated Cognition. Cambridge: Cambridge University Press. Ryder, D. 2004. SINBAD Neurosemantics: A Theory of Mental Representation. Mind & Language 19 (2): 211–​240. Ryder, D. Forthcoming. Models in the Brain. New York: Oxford University Press. Scarantino, A., and Piccinini, G. 2010. Information without Truth. Metaphilosophy 43 (3): 313–​330. Shagrir, O. 2012. Structural Representations and the Brain. British Journal for the Philosophy of Science 63 (3): 519–​545.

286  What Are Mental Representations? Shepard, R. and Chipman, S. 1970. Second-​ Order Isomorphism of Internal Representations: Shapes of States. Cognitive Psychology 1 (1): 1–​17. Skyrms, B. 2010. Signals:  Evolution, Learning and Information. Oxford:  Oxford University Press. Speaks, J. Theories of Meaning. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Spring 2017 ed. https://​plato.stanford.edu/​archives/​spr2017/​entries/​ meaning/​. Stampe, D. 1977. Toward a Causal Theory of Linguistic Representation. In P. A. French, T. E. Uehling Jr., and H. K. Wettstein (eds.), Midwest Studies in Philosophy: Studies in the Philosophy of Language, vol. 2, 81–​102. Minneapolis: University of Minnesota Press. Swoyer, C. 1991. Structural Representation and Surrogative Reasoning. Synthese 87 (3): 449–​508. Thomson, E., and Piccinini, G. 2018. Neural Representations Observed. Minds and Machines 28 (1): 191–​235. Waskan, J. 2006. Models and Cognition: Prediction and Explanation in Everyday Life and in Science. Cambridge, MA: MIT Press.

10 Error Detection and Representational Mechanisms Krystyna Bielecka and Marcin Miłkowski

10.1.  The Mechanistic Approach to Mental Representation The most vexing issue in any realistic account of mental representations is the problem of content: how could content be naturalized? The problem is to provide a thoroughly naturalized account of satisfaction (or accuracy) conditions.1 This question leads to two similar but separate issues:  what determines satisfaction conditions and what functional role they play in a cognitive system (Ramsey 2016). The latter could be understood as a question of how satisfaction conditions could be available to a cognitive system (Bickhard 1993; Eliasmith 2005). We confine ourselves to the latter question in this chapter. We seek to develop a thoroughly naturalized view of mental representation. The view makes use of the notion of a representational mechanism and an account of the role of error detection as based on coherence checking. The main point of our account is that representational mechanisms serve the biological functions of representing. These functions are related to the semantic value of representation: its truth or falsity, its being vacuous or satisfied, or its accuracy. As long as semantic value is causally relevant in cognitive explanations, the content of representation is arguably causally relevant, which vindicates the notion of mental representation in contemporary scientific research. In other words, what we offer here is a mechanistic rendering and extension of ideas inherent in teleosemantics.

1 By satisfaction conditions we mean conditions in which a given piece of information is satisfied, that is, true in the case of declarative propositional content, followed in the case of directive content, or non-vacuous in the case of non-propositional content.

288  What Are Mental Representations? To show that semantic value can be causally relevant, it is sufficient to show that representational mechanisms contain (or interact with) error-​detection mechanisms. Error detection is understood mechanistically in terms of coherence checking, which is purely computational and does not presuppose any semantic function. Thus, as long as we can show that cognitive agents rely on the feedback they receive to correct their mental representations, and thus, over time, improve their performance, we can show that the contents of mental representations are causally efficacious. We will analyze this conceptually and demonstrate that this account is descriptively adequate by citing a recent experiment on zebra finches, even though discrepancy detection is not always related to intentionality. The structure of the chapter is as follows. First, we introduce features of mental representation, and by relying on them, we sketch an account of functions of representational mechanisms. Then we focus on the special role we ascribe to error detection. We argue that error detection essentially relies on coherence checking, which implies that mental contents are causally relevant. But error detection does not require any representation. Subsequently, we describe a study of singing zebra finches (Gadagkar et al. 2016) in terms of representational error-​detecting mechanisms. Then we describe possible future applications of our account.

10.2.  From Features of Representation to Mechanisms The notion of mental representation remains elusive and problematic, mostly because of the difficulty of naturalizing the capacity of representations to be about something. Aboutness, or intentionality, is the first essential feature of representation that must be naturalized. The problem, both for naturalistic and anti-​naturalistic accounts of representation, is that the target—​what the representation is about—​may not exist. We can think of the trillions of British pounds we will earn by publishing this chapter: not only do they not exist right now (it’s future remuneration, after all) but it’s highly unlikely that we could earn trillions of pounds by writing philosophy. The amount we dream of is the extension of a mental representation that occurs at least in the mind of one of the authors (the greedy one). Nonetheless, we can still talk about and think of what we could buy with these hypothetical heaps of money. The difficulty in accounting for this is that we cannot be related to

Error Detection and Representational Mechanisms  289 anything that does not exist; the existence of a two-​place relation requires that both its relata exist. To solve this problem, it is traditionally presupposed that we could, instead, specify the characteristics of the target, were it to exist. That way, the target can be identified if found. Thus, instead of relying on a relation that may, but need not, obtain between the representation vehicle and the representation target to define intentionality, it can be assumed that properties of the representation vehicle specify, in some way, the characteristics of the target to make its identification possible. Consequently, the intension (characteristics) partially determines the extension. Even if the traditional claim that intension fully determines extension has been rightly criticized (Putnam 1975), this does not mean that the partial determination of extension cannot be at least sometimes successful, if some referents (members of the extension set) can be picked based on the representation characteristics. More biologically speaking, animals capable of detecting typical features of food might err in nonstandard environments (one does not need to talk about a Twin Earth here to notice the environmental dependence), but partial determination might be enough for them to survive. At the same time, we assume that purely extensional representations cannot exist. Thus, the second essential feature of representation is that it has an intension (or characteristics of the target). According to the present account, the characteristics of the target might be very minimal indeed, such as bearing a particular label, or being located in a “mental file” (Recanati 2012). Whether or not a given system includes a richly structured representational medium that specifies characteristics is a matter for empirical investigation, and not for armchair conceptual analysis. The two features of representation we have just adumbrated, usually called extension and intension, have long been assumed in theories of meaning, at least since John Stuart Mill (Mill 1875; Frege 1960; Twardowski 1977; Carnap 1947; Millikan 1984; Cummins 1996). We follow this tradition in assuming that one cannot strip away the notion of intension, or characteristics of the target. The third feature of representation that we consider essential for understanding the role of representation in guiding external behavior and thinking is that representation may be used. In other words, we assume that cognitive representations cannot simply “sit” inertly in the mechanism. Although it has been argued that there could be inert or junk representations in our

290  What Are Mental Representations? minds (Shapiro 1997), or that content need not be exploited (Cummins et al. 2006), the arguments are exclusively conceptual, and empirical evidence hardly supports such a view. Moreover, they do not undermine our position. According to Shapiro (1997), there may be representations that enter into no psychological processes or produce no behavior. But we submit that representations are essentially available for use in behavior or thought. The unexploitable pieces of information that Shapiro describes as junk representations are, according to us, just pieces of semantic information, and not mental representation. By semantic information we simply mean physical tokens that could be true of something, while mental (or cognitive) representation requires that they also have a functional role in the cognitive economy of an agent (cf. Morgan 2013). The content of a representation need not be actually exploited—​this is not what we claim—​but it has to be available for further exploitation. The task of this paper is to elucidate one essential kind of exploitation that renders semantic information a mental representation. Thus, we assume there are three crucial features of mental representation: (1) A representational vehicle may refer to a target of the representation (if any). (2) A representational vehicle identifies the characteristics of the target, or its satisfaction conditions, at least in a minimal fashion. (3) The satisfaction conditions are poised to be causally efficacious in the cognitive machinery of an agent. Our proposed account of representational mechanisms uses the framework of the new mechanistic explanation to elucidate mental representation. According to the new mechanistic framework, mechanisms are organized systems composed of component parts and operations or interactions, whose orchestrated activity is responsible for a phenomenon of interest (Machamer, Darden, and Craver 2000; Craver 2007; Bechtel and Richardson 2010; Glennan 2017). The mechanisms we focus on are responsible for the representational functions of cognitive agents, or subsystems of cognitive agents. Moreover, we also consider these mechanisms to be fallible: they may fail in performing their representational role. Thus, the representational mechanisms are functional (Garson 2013). (In section 10.2.1, we offer a short exposition of our hybrid account of function as applied to representational mechanisms.)
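The three features just listed can be glossed, very roughly, as a minimal data structure. The sketch below is only meant to fix ideas: all of its names (RepresentationalVehicle, characteristics, extension, satisfied_in) are invented for illustration and do not anticipate the mechanistic schema introduced next.

```python
# A toy gloss on the three features of mental representation listed above.
# Every name here is an illustrative assumption, not part of the account.

from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class RepresentationalVehicle:
    # Feature (2): the vehicle carries an intension, i.e., characteristics
    # that a candidate target would have to meet (possibly very minimal).
    characteristics: Callable[[object], bool]

    def extension(self, environment: Iterable) -> List:
        # Feature (1): the vehicle may refer to targets, if any candidate in
        # the environment fits the characteristics; the extension can be
        # empty (the trillions of pounds of future royalties, say).
        return [x for x in environment if self.characteristics(x)]

    def satisfied_in(self, environment: Iterable) -> bool:
        # Feature (3): the satisfaction conditions are poised for use; some
        # consumer process in the agent can query them and act on the result.
        return len(self.extension(environment)) > 0


# A crude "food" representation with minimal characteristics.
food = RepresentationalVehicle(characteristics=lambda x: x == "edible")
print(food.satisfied_in(["rock", "edible"]))   # True
print(food.satisfied_in(["rock", "plastic"]))  # False: an empty extension
```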

Error Detection and Representational Mechanisms  291 First, we will introduce an abstract schema of representational mechanisms: a blueprint that abstracts away from the details of their organization. Note that the schema is not in itself explanatory, but serves expository purposes by highlighting and elucidating further the role of representation in cognitive functioning. In section 10.3, we offer a conceptual analysis of error checking in non-​semantic terms. Then, in section 10.4, we describe the representational mechanism in singing zebra finches, which is no longer a mere blueprint, because it points to a particular causal structure. In previous work another instance was analyzed: anticipatory representational mechanisms in rodents (Miłkowski 2015), and in humans experiencing visual hallucinations (Miłkowski 2017a). For applications of a similar (but less systematic) mechanistic account of human cognitive representations, see Plebe and De La Cruz (2016).

10.2.1. Representational Mechanisms The function of representational mechanisms is, primarily, to make information available for the cognitive system (Miłkowski 2013). The information in this case is semantic as far as it has satisfaction or correctness conditions. The most basic notion of information, as applied to information vehicles, is that of structural information (called logon by MacKay [1969]). The structural information of a medium can be defined in terms of the degrees of freedom of the medium. If a change of the physical medium makes a difference to the system that operates within it, then this medium bears structural information, which can be quantified in terms of the degrees of its freedom. For example, a medium with one degree of freedom can be in at least two physical states detected by the system. Structural information is the minimal notion of information, which is not to be confused with Shannon information. Only by introducing senders and receivers, a channel, and the uncertainty of the receiver, could one use Shannon’s measures as well. For our purposes, this is not required, though these measures are applicable as long as one can talk about information being received by some subsystem from another one, uncertainties are known, etc. As described subsequently, we do assume that semantic information inherent in mental representation has producers and consumers, which makes such an application possible.
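A minimal numerical sketch may help to fix the notion of structural information, assuming that a medium is described simply by the number of detectable states of each of its degrees of freedom. The function names and the conversion to bits below are conveniences of the sketch, not MacKay's own formalism.

```python
# Toy quantification of structural information: a medium is described by how
# many system-detectable states each of its degrees of freedom can take.
# These names and numbers are assumptions made for illustration only.

from math import log2, prod


def degrees_of_freedom(states_per_degree: list) -> int:
    # How many independent ways the medium can vary such that the system
    # detects the difference (structural information in the sense glossed above).
    return len(states_per_degree)


def distinguishable_states(states_per_degree: list) -> int:
    # Total number of detectable configurations of the medium.
    return prod(states_per_degree)


def capacity_in_bits(states_per_degree: list) -> float:
    # A medium with one binary degree of freedom (two detectable states)
    # can carry one bit.
    return log2(distinguishable_states(states_per_degree))


medium = [2, 2, 3]  # three degrees of freedom: two binary, one ternary
print(degrees_of_freedom(medium))      # 3
print(distinguishable_states(medium))  # 12
print(capacity_in_bits(medium))        # about 3.58
```

No claim about senders, receivers, or uncertainty is built into such a count, which is why Shannon's measures are not needed at this stage.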

The function of representational mechanisms is not just to make information available. This information should display all three features of representation introduced earlier. Thus, the semantic information should be about some targets (if any) and it should specify their characteristics. It should also be causally relevant to the cognitive functioning of an agent to deserve the label of mental representation. No wonder, then, that the class of representational mechanisms can be hugely varied, just as physical computational devices can vary from analog computers such as the Perceptron through mass-produced digital smartphones, to unconventional computation using DNA strands. Moreover, the specification of a causal structure of representational mechanisms is quite abstract. However, they all share certain properties that make them members of a single class: they are mechanisms whose component parts and operations or interactions contribute causally to certain patterns of their function, which can be fruitfully seen as specifying or modifying characteristics of their targets, which can be predicted and explained as referring to targets, and which perform certain information-related operations. In addition, all norms of mechanistic explanation apply here; the explanation should offer new predictions and be general, the entities mentioned in explanatory texts should be causally relevant, and the causal model of the mechanism must include all and only the causally relevant parts and operations or interactions of a representational mechanism. Even this general description is enough to see whether a given mechanism is representational or not. So, for example, my pocket electronic calculator operates on digits when I press its buttons, but it cannot detect any error in my calculations of the word count of this chapter. This makes it a mere computational mechanism without proper representational features. But to justify this claim, we have to develop an account of error detection as related to intentionality.

10.2.2.  Error Detection and Representation Our claim is that a piece of semantic information—​or the physical bearer of information about something—​is a mental representation as long as a cognitive agent (or its part) is sensitive to the semantic value of the representation, or to its satisfaction conditions (Eliasmith 2005). If the agent is not sensitive to satisfaction conditions, an anti-​representationalist may retort

Error Detection and Representational Mechanisms  293 that representational vehicles are just parts of a merely causal story about behavior or cognition, and that there is nothing that makes them actually representational. This is, for example, the move recommended by Bill Ramsey (2007) vis-​ à-​ vis most connectionist accounts of mental representation. Ramsey poses a job description challenge for any account of mental representation that has ambitions to be useful also in cognitive science. According to him, most connectionist representations do not meet this challenge because they could be thought of merely as causal mediators of behavior. But if the vehicle is not only a causal factor in behavior but also plays a representational role, then it could be evaluated, at least sometimes, by the cognitive machinery. While satisfaction conditions may be involved in many other operations of the cognitive machinery, there is one that is especially important for us: error detection. Thus, if error is detected and causes additional behavior, for example, the search for more accurate information by an animal that is experiencing conflicting sensory information, then the vehicle cannot be merely one cause of behavior as one of the parts of the cognitive machinery. It must also be a representational part of the cognitive machinery. Error detection in X, in other words, is evidence that X has a clearly representational role, because error detection is constitutive to representation. Consequently, if junk representation, posited by Shapiro, or unexploited content, posited by Cummins, can never undergo error detection by the cognitive machinery of a cognitive agent, it is not really cognitive representation or content. Instead, it is merely an inert piece of semantic information. The stress on the importance of accounting for misrepresentation goes back to the groundbreaking paper of Fred Dretske (1986). There, he defended the claim that it should be possible for any cognitive representation to be in error. Error should not be confused with a lack of intentionality: Misrepresentations remain intentional. However, Dretske did not stress that the error should be detectable by the cognitive system in question. For Dretske, it is sufficient that an external observer is able to ascribe error, which can be seen in the example he uses: a magnetotactic bacterium is supposed to be in error when it swims in a lethal direction because it has been placed in another hemisphere of the earth. The bacterium does not notice the error, at least in Dretske’s story. Subsequently, a number of authors stressed that detection of error by a cognitive agent actually makes it clear that the agent is sensitive to the accuracy of representation, which vindicates the claim that the content is accessible

294  What Are Mental Representations? to the agent. For example, Jerry Fodor underlined that the ability to detect error in some representation was evidence that this content was part of one’s psychology. This is clear in his discussion of the difference between human conceptual abilities and the ability of frogs to detect that they have snapped at the wrong object: Sometimes Macbeth starts at mere dagger appearances; but most of the time he startles only if there’s a dagger. What Macbeth and I have in common—​and what distinguishes our case from the frog’s—​is that though he and I both make mistakes, we are both in a position to recover. By contrast, frogs have no way at all of telling flies from bee-​bees. (Fodor 1992, 107)

Fodor, however, did not develop this at length, in contrast to Mark Bickhard, who has made it his criterion of the adequacy of any naturalistic approach to intentionality (Bickhard 1993, 2009). By his account, the error is detected in basic representations that drive actions of an agent, as long as the agent can detect failures of actions anticipated by the agent to succeed. The detection of failure is, therefore, something that frogs may not have, at least about things that they snap at (if Fodor is right about them), but which we (and Macbeth) do have. Bickhard’s basic claim that error detection is constitutive for representationality may be usefully contrasted with the one defended by Davidson (2004). On one hand, Davidson says: no account of error that depends on the classifications we find most natural, and counts what deviates from such an error, will get at the essence of error, which is that the creature itself must be able to recognize error. (Davidson 2004, 141)

This idea conforms with the requirement of the system detectability of error. On the other hand, Davidson denies that “any causal story about the sequence of stimuli reaching an isolated creature can account for this feature of conceptualization or intensionality, provided the story is told in the vocabulary of the natural sciences” (141). For this reason, he turns to intersubjective standards of error: by observing the behavior of others experiencing similar stimuli, a creature can discover that it is in error. In other words, the notion of error appears in the context of the Davidsonian framework of triangulation and radical interpretation, which presumes intersubjectivity.

Error Detection and Representational Mechanisms  295 Nothing like intersubjective triangulation is assumed in the account defended at present. First, there is nothing nonnaturalistic in the notion of discrepancy detection; it seems to be on a par with other computational and information-​processing terms used in cognitive science and psychology, which we assume to be natural sciences. In addition, the error is not reducible to perceptual error, as Davidson seems to assume, and perception is not reducible to transformation of stimuli. Cognitive systems are highly organized, and perceptual activity is not to be confused with unconditional reflexes, because it is normally explorative and cyclical. Second, if a creature is able to detect discrepancy between its own behavior and the behavior of its peers, which is presumably how it manages to recognize error (Davidson is not particularly clear about this), then why shouldn’t it be able to detect discrepancy between what it sees and what it touches, for example? Macbeth was able to realize that the dagger he saw was illusory by trying to touch it. Thus, intersubjectivity is not necessary in order to recognize error; it is merely sufficient. In the next section, we provide a mechanistic account of error detection.

10.3.  The Coherence-​Based Account of Error Detection In this section, we will first present incoherence detection as the necessary condition of error detection. Then we will defend a computational account of error detection in mechanistic terms, and finally we will argue that not all kinds of incoherence detection, or discrepancy detection, are sufficient to justify the claim that they pertain to full-​blown mental representation.

10.3.1.  Incoherence Detection as Error Detection A coherence-​based account of system-​detectable error elucidates how a cognitive system can misrepresent and can detect its own errors. The model assumes that the detectability of representational adequacy is the necessary condition for a cognitive system to represent at all. Thus, it is based on the idea of system-​detectable error defended by Bickhard, extended by an inconsistency condition, which states that the adequacy of mental representation is available to the system as long as it can detect its informational inconsistency. In order to detect whether representation A is inconsistent with semantic information (which might also be a cognitive representation) B, a cognitive

296  What Are Mental Representations? system must be able to register an inconsistency of A and B, and both A and B must be available to the system. We do not require that both A and B are mental representations, for one very important reason:  if we posited this, then B would have to be checked by yet another cognitive representation, and an infinite regress might seem imminent. Of course, it could happen that B is checked for consistency with A and vice versa, which would not be, technically speaking, a regress but circularity. This kind of circularity is not necessarily vicious, although perhaps unlikely: differing pieces of information are usually assumed to be less or more reliable, depending on their source, in the cognitive economy of an agent (Deacon 2012). Nonetheless, we insist on our distinction between mere semantic information and cognitive representation: one major difference between them is that the latter must be checked for semantic error, and the first may only be used in such error-​checking procedures. Thus, while A, which is being checked, must be a representation, B need not. Moreover, the information about inconsistency found between A and B is not a mental representation. If we assumed that a piece of inconsistency information is also a mental representation, this would immediately create an infinite regress in our account:  every detected inconsistency should have been checked as well, which would create another representation that should be checked, and so on. Inconsistency information is, in our view, simply an indicator of misrepresentation, but not necessarily a full-​blown cognitive representation. Although not all misrepresentations are represented, all are registered, that is, there are pieces of semantic information about them in a cognitive system that do not have to be further checked. Furthermore, a cognitive system can correct the misrepresentation X only if it can detect misrepresentation X. To correct the error, it suffices for the system to have an ability to detect or assume the difference in the degree of reliability of information A and B (for a similar account of error correction, see Deacon 2012). The agent-​based criterion of correctness is then the internal consistency of representations, or of representation and information. Let us now turn to the functional dimension of representation. Proponents of teleosemantics attempt to account for mental representation in terms of biological teleological function (Dretske 1986; Millikan 1984; Cao 2012). The notion of function is either etiological (Millikan 1984, 1989), or based on the self-​maintenance of biological systems (Anderson and Rosenberg 2008; Bickhard 1993, 2009). Spelling out the notion of mental representation in

Error Detection and Representational Mechanisms  297 terms of teleological function allows teleosemanticists to account for misrepresentation, characterized as a representational dysfunction rather than as something that is not a representation at all (Dretske 1986). In Bickhard’s account, function is accounted for in biological and dynamic (thermodynamic) terms. In this view, a feature X is functional only if X helps a system to maintain itself as a system far from its thermodynamic equilibrium (Bickhard 2009, 555). Basic representations (emergent from the basic structure of the biological organism) are functional because they play a role in its self-​ maintenance while indicating the organism’s future possible actions (again, a similar account of intentionality is defended by Deacon [2012]). The representational function helps an organism to stay far from its thermodynamic equilibrium; thanks to such a function, the organism can anticipate its action results, as the representational function is related to anticipations of action results. These anticipations of future possible actions have satisfaction conditions. An organism or its subsystem can detect whether these are satisfied by confronting anticipations with the results of actions caused by them. Furthermore, the idea that organisms themselves recognize the adequacy of their representations is enshrined in Bickhard’s notion of system-​detectable error. So, only an organism that can detect its own errors can really anticipate its future actions. In this sense, error detectability makes representations necessary in a stability-​maintenance process. In contrast, the etiological dimension can make a mechanism responsible for a given type of mental representation if the mechanism has appeared at least once in the history of an organism and has played a role that helped it to survive. According to Millikan’s biological account of function, representations are functional only if they are products of adaptive mechanisms. To have a function, an item must also come from a lineage that has survived due to a correlation between traits that distinguish it and the effects that are functions of these traits, keeping in mind that a correlation is defined by contrasting positive with negative instances. Intuitively, these traits have been selected for reproduction over actual competitors. Because the correlation must be a result of a causal effect of the trait, the trait will not merely have been selected but will have been selected for (Sober 1984). Thus a thing’s proper functions are akin, intuitively, to what it does by design, or on purpose, rather than accidentally. (Millikan 1984, 8)

298  What Are Mental Representations? Thus, an organism has a representational function F if a certain property P was selected for F. This means that such a property is positively correlated with realization of that function and it contributes to further replication of P because of the effect that P was naturally selected for. Sober distinguishes between two differing concepts, selection for and selection of (Sober 1984, 1993). According to him, selection for a property P implies that ownership of that P causes success in survival and reproduction. Selection for is contrasted with selection of; both are used to explain why some properties increased in frequency. While the former describes causes, the latter describes effects of selection: to say that there is selection for one trait (Fast) and against another (Slow) is to make a claim about how those traits causally contribute to the organism’s survival and reproductive success. . . . One trait may be fitter than another because it confers an advantage or because it is correlated with other traits that do so. (Sober 1993, 83)

The trait of having a heart is adaptive for pumping blood in a population whose members have hearts as a result of earlier selection for having a heart, and also because having a heart contributed to their fitness because the heart pumped blood. So, having a heart was selected for in the sense that it has a role in biological adaptation, because it helped an organism to survive while pumping blood, but the heartbeat was selected of in the sense that it is a latter effect of selection, not that it is causally important for a heart to function properly. Applying Sober’s terminology to representational mechanisms, while a representational mechanism is selected for in the sense that it has a role in biological adaptation, vehicular properties are selected of in the sense that they are effects of selection. According to Christensen and Bickhard, what makes the etiological account of function different from the dynamical one is that only the latter emphasizes serving a function, which they find more vital to understand due to its importance in system dynamics: In certain respects we are simply focusing on a different issue: etiological theory takes as its task the problem of explaining what it is for a part of a system to have a function, whereas we focus on what it is to be an adaptive system, and on the nature of serving a function relative to such a system. (Christensen and Bickhard 2002, 4)

Error Detection and Representational Mechanisms  299 Moreover, Christensen and Bickhard argue that etiological function is epiphenomenal. Let us introduce an iconic thought experiment used in a discussion on causal relevance of mental representations (Davidson 1987). Imagine a lion that springs into existence and is atom-​by-​atom identical to a real lion. According to Millikan, only the real one has functions because it has evolutionary history, while the science-​fiction lion does not. Bickhard finds her claim counterintuitive: Function, in this view, is dynamically—​causally—​epiphenomenal. It makes no difference to the causal or dynamic properties of an organism whether or not its organs have functions. Etiological models thus fail to naturalize function. Etiological history explains the etiology of something, but it does not constitute any of the causal or dynamic properties of that something. Etiology cannot constitute the dynamics of what it is the etiology of. (Bickhard 2009, 558)

So, history doesn’t constitute any causally relevant property—​it is causally epiphenomenal. Although such a consequence seems to be intuitive in the artificial example of a science-​fiction lion, in biological examples it does not. In our view, a representational mechanism remains functional even if it fails, because it produces misrepresentations and the system is no longer capable of detecting error in some of its representations, leaving them uncorrected. This holds so long as the mechanism normally has a feature that allows it to correct error; this feature has been selected for detecting error even if it currently has no effect, or is causally epiphenomenal in the system, and does not serve its function anymore. We propose, in essence, to assume a hybrid account of function that includes both etiological and dynamical dimensions. The notion of etiological function is used to account for the why-​questions concerning the biological structures, while the dynamical function is best understood as answering the how-​questions that pertain to the current dynamics. Therefore, both can be used in a more complex, multidimensional account of biological functionality, which has its roots in Tinbergen’s (1963) account of explanations of behavior (Miłkowski 2016). In this account, cognitive errors themselves are currently dysfunctional (as in dynamical function) but the representational mechanism is adaptive, or etiologically functional. Hence, this mechanism also has a function to detect its own errors, even if it is not currently performed or served. Surely

300  What Are Mental Representations? Bickhard doesn’t claim that error should always be detected, so he would admit that an error detection mechanism is dysfunctional, but it isn’t possible for him to claim that it is an error detection mechanism if its damage is persistent in an organism. If such damage is inborn, there is no evidence that such a mechanism has a role to detect errors because it has never played such a role in an organism, so it didn’t contribute even once to the organism’s being far from dynamical equilibrium. In contrast, according to Millikan’s account of proper function, this mechanism can be classified as having a proper function that is now not served. That’s why supplying Bickhard’s account of function with etiological elements helps to save the basic principle of his account. In the present account, the function to represent is primarily ascribed to representational mechanisms. However, particular representational tokens also have a derived function to represent. These tokens do not remain inert in representational mechanisms. Quite the contrary; representational mechanisms are appropriately connected representation producers and consumers, which generate, transform, and respond to representational tokens. The notions of the representation consumer and producer are understood along the lines of teleosemantics: Representation consumers are devices that have been designed (in the first instances, at least) by a selection process to cooperate with a certain representation producer. The producer likewise has been designed to match the consumer. What the consumer’s function is, what it is supposed to effect in responding as it does to the representations it consumes, could be anything at all. It may have numerous alternative functions. It may also be but one of many consumer systems that use representations made by the same producer. (Millikan 1993, 126)

The original account of the consumer role in teleosemantics remains fairly unspecific. Nowadays, proponents of teleosemantics tend to stress that it should be considered more active (Butlin 2018) or competent (Cao 2012). This is usually linked to the flexibility in the way the consumer responds to a representation token. In contrast to these attempts, we aim to elucidate the consumer role in error detection, which implies that the semantic value of the representation is accessible to the representational mechanism. This also implies, as we argue, that satisfaction conditions of representation are causally relevant to success of actions of the agent, because the incoherence detected acts as a proxy for failure to satisfy the correctness conditions of

Error Detection and Representational Mechanisms  301 representation. This is the task for the next section, in which we elucidate the computational mechanism of incoherence detection.

10.3.2.  Computational Mechanisms of Incoherence Detection The preceding account of incoherence detection may be made more tangible by sketching a computational consumer mechanism responsible for error detection. As will be argued in the following subsection, however, mere computational error detection is not sufficient for semantic error detection. Computational error detection is only a necessary condition. To detect an error, the mechanism has to manipulate information vehicles, which makes it computational according to the mechanistic account of physical computation (Miłkowski 2013; Piccinini 2015). This account considers physical computation to be a process that occurs in physical mechanisms whose function is to interact with information vehicles in a way that is described by some mathematical model of computation (philosophers usually call such models rules, but models of computation need not be literally rule-​based). Thus, representational mechanisms are a subclass of computational mechanisms that have some specific representational capabilities. There is a very large class of computational mechanisms that could perform an error-​detecting role because the computational bar is set very low. Even an XOR logic gate, or the logic gate that implements exclusive disjunction by returning TRUE whenever its input values are not the same (not both TRUE or both FALSE), can perform this function. Note that XOR gates are very low in the hierarchy of computational functions in computability theory, and do not require universal, or even finite-​state, computation. From the mechanistic point of view, the XOR gate is a physical mechanism which contains a certain medium that takes at least two states, conventionally described as 0 and 1 (or TRUE and FALSE). These are, in physical reality, usually voltage levels, but they could also be implemented in Lego blocks, wood, and marbles, using pneumatic machines, or basically in any medium that supports the same causal structure: whenever the inputs A and B are not both 0 or both 1, the output is 1. This toy example is sufficient to show that computational mechanisms can detect discrepancy. There are also much more sophisticated kinds of error detection. The predictive processing (PP) framework provides a clear

302  What Are Mental Representations? example: according to proponents of PP, the point of having a mind is to minimize its predictive error (Hohwy 2013; Clark 2016). PP is associated with particular hypothesized implementations in natural neural systems; without going into details, we may simply state that predictive error detection is at the heart of this framework. Of course, detecting incoherence in one’s beliefs or, generally, detecting the logical inconsistence of a large set of propositions may be computationally infeasible. This can happen, for example, when the computational power of a given mechanism makes it subject to incompleteness theorems; in such cases, consistency simply cannot be proven using computational methods of the same mechanism. But even mere syntactic coherence detection requires non-​polynomial time, or is NP-​complete, although tractable approximations exist (cf. Thagard 2000). In less technical terms, there are computational limits to the kinds of incoherence that may be detected. But we do not require that these limits be exceeded; such absolute error detection may be simply out of reach of finite minds that usually remain in the dark about their own inconsistency (Putnam 1960). While it seems quite obvious that there could be physical mechanisms capable of error detection, as long as one is not skeptical that XOR gates exist, it is far from clear that computational error detection does not tacitly presuppose semantics. After all, error detection provides a critical part of the naturalized account of intentionality, so it must have something to do with intentionality. Are we trying to have our cake and eat it, too?
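Before answering that question in the next subsection, it may help to fix how little the discrepancy check itself involves. The following minimal sketch treats two information vehicles as bare strings of digits and adds an invented reliability-weighted correction step of the kind gestured at in section 10.3.1; the function names and the correction policy are assumptions of the sketch, and nothing in it presupposes content.

```python
# Minimal sketch: incoherence (discrepancy) detection as a purely syntactic
# operation over two information vehicles. Names and the reliability-based
# correction policy are illustrative assumptions.

def xor_gate(a: int, b: int) -> int:
    # Returns 1 exactly when the two input digits differ: the simplest
    # discrepancy detector discussed above.
    return int(a != b)


def detect_incoherence(vehicle_a: list, vehicle_b: list) -> bool:
    # A and B are just strings of digits here; no interpretation is needed.
    return any(xor_gate(a, b) for a, b in zip(vehicle_a, vehicle_b))


def correct(vehicle_a: list, vehicle_b: list, reliability_a: float, reliability_b: float):
    # When an incoherence is registered, the less reliable vehicle is revised
    # toward the more reliable one (cf. the correction step in section 10.3.1).
    if not detect_incoherence(vehicle_a, vehicle_b):
        return vehicle_a, vehicle_b
    if reliability_a >= reliability_b:
        return vehicle_a, list(vehicle_a)
    return list(vehicle_b), vehicle_b


a, b = [1, 0, 1], [1, 1, 1]
print(detect_incoherence(a, b))                             # True
print(correct(a, b, reliability_a=0.9, reliability_b=0.4))  # B is revised toward A
```

Whether the registered discrepancy also tracks satisfaction conditions, and so deserves a semantic reading, is exactly what the next subsection denies for devices such as the simple thermostat.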

10.3.3.  Not All Discrepancy Detection Is Semantic For us, it is critical that discrepancy detection, which is at the core of the coherence-​based account of semantic error, is itself non-​semantic. We deny that error information is necessarily representational. In what follows, we show that there is no need to view this error semantically. But this may sound as if we were contradicting ourselves: if error detection supports naturalized intentionality, then it may already be specified in intentional terms. Are we smuggling semantics somehow? To make this objection more precise, one could adopt the semantic view on physical computation (Shagrir 2017, 2010; Sprevak 2010; Fodor 1975). According to the semantic view, computational operations are performed over semantic representations according to semantically specifiable rules.

Error Detection and Representational Mechanisms  303 Simply, computers perform operations on items that represent values of interest to us. Then if they detect errors it is because we interpret these items as semantic. Adding computational mechanisms in our account is just a red herring and diverts attention from the trick we performed by importing an observer perspective that makes sense of physical computation. Or so it seems. But defenders of the mechanistic account of physical computation repeatedly argue that one need not know how or whether physical states of a medium (digits, to use Piccinini’s terminology) represent anything. Simply, the account of physical computation need not presuppose a substantial account of representation (Piccinini 2008; Fresco 2010; Miłkowski 2013; Dewhurst 2018). From the mechanistic point of view, it makes no difference to computational operation of the mechanism whether computation is interpreted and whether the digits are representational or not. Of course, they could represent something, especially in the case of cognitive machinery (Miłkowski 2017b), where semantic properties become causally relevant, but they need not. The case in point is the XOR gate itself: the way we interpret the physical states really makes no difference to the way it works. The causal structure of computation remains the same. In this regard, the mechanistic account is somewhat reminiscent of Egan’s (2010, 2014) treatment of contents of representation, which are treated just like a gloss on computational structure: the gloss makes no difference to computation itself. But it differs crucially in stressing that computation is only one part of the story about representation, and that contents of representation are not merely a gloss but a causally relevant property. Just like computers, other devices that can be thought of as error detectors need not be representational. One such example is the Watt governor, a control device designed for stabilizing the speed of a steam engine. There has been a lively debate over whether the states of the governor are representational or not (van Gelder 1998; Bechtel 1998; Nielsen 2010). According to the account defended here, the Watt governor is not a representational mechanism even if one admits that it is a measuring and control device whose parts carry semantic information about the engine speed. A similar point can be made about a simpler machine, a bimetallic strip thermostat, whose shape reliably reacts to changes in the ambient temperature, and, in turn, switches the thermostat’s furnace off or on to keep the temperature at a certain level. According to O’Brien (2015), because of this, it retains a resemblance relation to the temperature, which makes it representational.

Figure 10.1  The basic operating principle of a thermostat: (a) the bimetallic strip is not bent, so the furnace is on; (b) the strip is bent, which turns off the furnace.

Let us focus on the thermostat that closes an electronic circuit, which is mechanically much simpler than the Watt governor (see figure 10.1). The change in temperature alters the shape of the bimetallic strip, which closes (ON) or opens (OFF) the circuit. One could claim that this device carries information (in the semantic sense as defined by Dretske [1982]) about the surrounding environment. Moreover, one could also claim, following Dennett (1981), that thermostats have beliefs about the temperature:  the thermostat will turn off the furnace as soon as it comes to believe the room has reached the desired temperature. Now, it does not seem a stretch to say that if the desired temperature is not reached, then the thermostat detects a discrepancy between the desired state and the world. Basically, this is the essence of any negative feedback-​driven control device: it detects a discrepancy between the desired state and the current state, and then adjusts the plant (whatever it controls). Any naturalized account of intentionality which presupposes that control devices have proper functions and accepts basic notions of teleosemantics would have to claim that these devices are representational. But we deny this. They might be representational, but only if the error detected plays a semantic role; that is, only if error is consumed in a way that is still sensitive to its semantic value. In other words, a consumer is active in evaluating the

Error Detection and Representational Mechanisms  305 semantic value of information, and by influencing other processes that depend on the results of this evaluation. In such a case, the semantic value is causally relevant to these processes. Thus, what we argue is the following: in a cognitive system, a certain physical vehicle (most likely, a certain kind of physical process) plays a representational role only if the error in this kind of representation could be at least sometimes detected by the system. Semantic error detection is not just any response to a discrepancy because any negative feedback device—​or any causal process, for that matter—​might be then thought to be error-​detecting because the results are informative about their causes (Collier 1999). Error detection that constitutes representation is a response to incoherence as consumed by an appropriate mechanism that remains causally sensitive, not only to features of the physical vehicle, but also to its satisfaction conditions. Hence, incoherence detection is a syntactic process: all it takes to detect a discrepancy is to see that two physical vehicles do not have the same property. There is no content involved yet. But as long as the result of the syntactic operation tracks the satisfaction conditions of the vehicle, the vehicle also attains a semantic, or intentional, role. To see whether this role is intentional, one must analyze the causal role of the vehicle in the mechanism. This can be illustrated with the thermostat. The change of the shape of the bimetallic strip in response to the change of the temperature causes the furnace circuit to close (ON). According to the standard teleosemantic story, the strip has the proper function of responding to the temperature of the room. It may be considered a kind of pushmi-​pullyu representation (Millikan 1995):  it informs about the temperature and it also controls the furnace. Thus, one could claim that the discrepancy detected has a representational role, after all, because it has a directive function in control of the furnace and describes the temperature. But note that the “error” here is only in the eye of the beholder: in particular, the “error correction” happens only via the work of the external device, that is, the furnace that is supposed to drive the temperature to the required range. Disconnect the furnace, and the thermostat will continue to work normally. It merely carries semantic information in Dretske’s sense, for it lawfully covaries with the room temperature. When connected to the furnace, it assumes an additional control function. But there is no operation that actually tracks the accuracy of the temperature indication in this simple thermostat. The only causally relevant factor for explaining the operation of the furnace as driven by the thermostat is the shape of the

Figure 10.2  The basic causal structure of a simple thermostat: a temperature level of 33°C, and not some other level (say, 30°C), causes the movement of the bimetallic strip, which then turns the furnace off. The causal feedback loop from the furnace to the ambient operating temperature has been omitted for clarity (and how to best represent feedback loops using directed acyclic graphs is a nontrivial issue; cf. Gebharter 2016).

bimetallic strip, and not the informational relation between the strip and the ambient operating temperature (see figure 10.2). There is, for example, the wear and tear on the strip, which causes some drift in its actuation temperature after 10,000 or 20,000 usage cycles. The amount of drift is typically less than five degrees Celsius in either direction. However, the thermostat will be “oblivious” to its inaccuracy. It will simply adjust the furnace in a different way. Thus, the thermostat is merely an indicator (or detector) of the ambient operating temperature with a control function and not a representational mechanism. One could have a representational thermostat, though. There is a range of electronic thermostats that provide greater accuracy. These thermostats could contain a redundant second temperature sensor that serves to check the accuracy of the first one in order to avoid sensor drift. In such a case, inaccuracy could be detected if there is discrepancy between the sensors (see figure 10.3). Then, if the inaccuracy results in, for example, displaying a message to a user, or performing other error-​correcting operations, the thermostat is appropriately representational, even if it remains a rather simple device compared to biological organisms. To summarize, the difference between the bimetallic thermostat and the electronic one boils down to their causal structure. The first one directly responds to ambient temperature and actuates the furnace. The second also responds to the result of a discrepancy check between its sensors, which causally tracks the accuracy and reliability of their temperature indication. In other words, the first thermostat responds only to the shape of the bimetal strip, and the second to the relation of the discrepancy between the sensors,

Figure 10.3  A thermostat with built-in error detection: it issues an error message if the readings of its two sensors are not the same. As in figure 10.2, the feedback loop is omitted for clarity.

and this detected discrepancy stands for the inaccuracy of the indication. But this also means that the second thermostat responds to satisfaction conditions of the temperature indicators: as long as they are not coherent, it issues a message that the indication may be inaccurate, or semantically false. Of course, because all things can break, the discrepancy indicator or both indicators can break without the inaccuracy being detected. The bimetallic thermostat cannot detect that its indicator already suffers from drift, or that the indication is totally inaccurate. The upshot of this discussion is the following: all kinds of indicators can be understood as discrepancy detectors; after all, the indicator’s state changes as long it detects a change in the state of something else. There will be, therefore, all kinds of physiological indicators in the human body that detect a discrepancy between the desired state and the current state of the organism. But homeostatic mechanisms need not be representational, as long as the agent (or its subagent machinery) is blind to the accuracy of the indication. Not all control implies representation. This said, it is likely that biological signaling is self-​correcting; thus, we may plausibly expect that representations will abound in living organisms, even evolutionarily early ones.
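The contrast between the two devices can be fixed with a minimal sketch, assuming invented names (SimpleThermostat, RedundantThermostat) and toy drift and tolerance values; it renders figures 10.2 and 10.3 for exposition only and is not a model of any real thermostat.

```python
# Toy contrast between a mere negative-feedback controller (figure 10.2) and
# a device with built-in error detection (figure 10.3). All names, drifts,
# and thresholds are illustrative assumptions.

SETPOINT = 33.0  # desired temperature in degrees Celsius


class SimpleThermostat:
    # Responds only to the state of its single sensor (the bimetallic strip):
    # negative feedback, but no access to the accuracy of the indication.
    def __init__(self, drift: float = 0.0):
        self.drift = drift  # wear-and-tear drift the device is "oblivious" to

    def furnace_on(self, ambient: float) -> bool:
        return (ambient + self.drift) < SETPOINT


class RedundantThermostat(SimpleThermostat):
    # Adds a second sensor and a discrepancy check between the two readings;
    # the detected discrepancy stands in for possible inaccuracy.
    def __init__(self, drift1: float = 0.0, drift2: float = 0.0, tolerance: float = 1.0):
        super().__init__(drift1)
        self.drift2 = drift2
        self.tolerance = tolerance

    def step(self, ambient: float) -> dict:
        reading1 = ambient + self.drift
        reading2 = ambient + self.drift2
        return {
            "furnace_on": reading1 < SETPOINT,
            "error_detected": abs(reading1 - reading2) > self.tolerance,
        }


drifted = SimpleThermostat(drift=-4.0)
print(drifted.furnace_on(33.0))  # True: it simply keeps heating past the setpoint

checked = RedundantThermostat(drift1=-4.0, drift2=0.0)
print(checked.step(33.0))  # {'furnace_on': True, 'error_detected': True}
```

Only the second device responds to a relation between readings that tracks the accuracy of its own indication; the first responds to nothing but the state of its strip.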

Against the background of these two somewhat artificial examples, let us now discuss the application of the current conceptual framework to a real biological case: zebra finches that were forced to err.

10.4.  Performance Error Detection in Zebra Finches Zebra finches are a particularly important species for research on singing birds, especially in the context of memory-​guided error correction (Thomson and Piccinini 2018). An adult zebra finch is capable of producing a stereotyped song sequence composed of a repeated series of acoustically distinct syllables (Zann 2002). The stereotyped song makes this bird a particularly attractive model organism: any deviation from the stereotype is not novelty. It is performance error. The question is, however, how to study the performance error. In a recent experiment, Gadagkar et al. (2016) have shown that dopamine neurons encode the error. It is well known that dopamine neurons fire in reward-​related tasks, in both humans and other animals, and their activity is related to error-​ related negativity (ERN) in the EEG signal (Gehring et al. 1993). According to Shea (2014), these dopamine neurons, when activated in reward-​driven tasks, should be considered metarepresentational, as they are presumed to be encoding errors in reinforcement learning (Schultz, Dayan, and Montague 1997; Schultz 1998; Dayan and Niv 2008; Glimcher 2011; for a historical account, see Colombo 2014). To be metarepresentational, in Shea’s account, it suffices to have satisfaction or correctness conditions that concern the content of another representation. Notably, he stresses that not all discrepancy checking gives rise to metarepresentation; in this respect, his account is compatible with ours. Error signals might concern the content of other representations in reinforcement learning because they encode the divergence of the actual reward that an agent has received from its expected value. During learning, for example by using the so-​called temporal difference algorithm (Sutton and Barto 1998), the agent minimizes the error signal δ over time. Quite clearly, his account is, in a sense, much more liberal than the one presented here: not only do we require satisfaction conditions, which is what semantic information has, but we also require being exploited in the cognitive economy in a way that responds to satisfaction conditions. We do not deny, in particular, that error signals involved in reinforcement learning have

Error Detection and Representational Mechanisms  309 a metacognitive role. Thus, our difference in this case boils down to a less liberal account of what constitutes mental representation. But this is not a verbal dispute. Shea discusses this issue explicitly and argues that an error signal δ in reinforcement learning is appropriately consumed: the cognitive system is set up to process and act upon δ so as to harvest maximal rewards. In other words, δ goes beyond mere feedback (as in our simple thermostat examples) toward the use of the error signal because the value of the signal causes subsequent changes in the learning algorithm. For us, this implies that the expected value information (which deviates from the actual reward by δ) is representational in our sense. Nonetheless, while the error signal is semantic and the piece of information that encodes δ has satisfaction conditions (it is about δ, after all), the latter do not play any upstream role in this reinforcement learning setup. There are some accounts of cognition, in which error signals are processed in a highly complex manner, such as in predictive coding, where it is hypothesized that errors are part of a complex hierarchy of information processing in the brain (Friston and Kiebel 2011; Hohwy 2013; Clark 2016). In this hierarchy, lower levels are supposed to send only prediction error signals to higher levels. These produce these predictions and send them to lower levels (although the hierarchy itself remains somewhat loosely defined and controversial; cf. Williams 2018). In such a hierarchy, one level of a hierarchy may operate merely on prediction errors related to other prediction errors, which would make them representational in our sense, because satisfaction conditions of information about errors might be causally relevant in such a hierarchy. However, it remains an open question whether predictive coding may indeed subsume reinforcement learning, as its defenders claim (Friston, Daunizeau, and Kiebel 2009; for criticism, see Colombo and Wright 2017). Most importantly, reward prediction errors operating under the temporal difference algorithm, such as the one discussed by Shea, are usually posited in a simpler setup, namely actor-​critic architecture (Sutton and Barto 1998; Khamassi 2005). The actor’s learning rule specifies that the connections between state and action choices should be modified according to a reinforcement signal from a separate critic module. The role of the critic is to generate this reinforcement signal, for example, in the form of a reward prediction error. But the reward prediction error itself is not checked further. Thus, it may carry semantic information about expected value without being representational in our sense. In other words, we have little reason to believe

310  What Are Mental Representations? that error signals in reinforcement learning are always metarepresentational. They are merely pieces of semantic information in the case we discuss here, as their own accuracy is not checked any further. What is, however, surprising about the recent study of zebra finches is that it shows that dopamine neurons can encode error signals unrelated to reward prediction. The idea of the experiment is simple. Songbirds sing by using their auditory feedback, just as we do. In previous work, it was hypothesized that it becomes part of the reward-​driven reinforcement learning (Fee and Goldberg 2011). In contrast, Gadagkar et al. hypothesize that a zebra finch evaluates its own song to compute an auditory error-​based reinforcement signal that guides learning. This neural signal is sent to motor circuits to let them “know” if the recent vocalization was “good” and should be reinforced or “bad” and should be eliminated. Thus, researchers simply distorted the auditory feedback for birds placed in a sound isolation chamber and recorded the activity of ventral tegmental area (VTA) dopamine neurons. In this experimental intervention, the recorded VTA neurons operated in a significantly different way from VTA neurons in a non-​distorted feedback condition (the exact detail is not significant for our purposes). These findings suggest that finches have an innate internal goal for their learned songs. The basic causal structure of the mechanism is not very different from the one depicted for our imaginary thermostat with error messages (see figure 10.3), except that instead of triggering an error message to be displayed, the error signal is sent to the song production system. Hence, the current motor plan of the song fulfills the conditions that we consider crucial to mental representation: it is not only a pattern of information that describes a song to be performed, but it is also checked for accuracy by comparing it with the learned song template. The learned template, as the result of learning (Böhner 1983), also has all the features of mental representation. Both, thus, are present in the representational mechanisms of songbirds. Interestingly, performance error in zebra finches does not have the features of reward prediction error. It is not derived from sensory feedback of intrinsic reward, or reward-​predicting value. This is evidenced by the fact that birds which passively hear distorted or undistorted syllables do not have error responses. Hence, they do not evaluate these sounds in terms of reward. Instead, researchers suggest that performance error is derived from the evaluation of auditory feedback against internal performance benchmarks. These benchmarks require, at each time-​step of the song sequence, information about the desired outcome, the actual outcome, and also the predicted

While this much would be enough for Shea to count the performance error signals as metarepresentational (in his sense), for us the evidence is insufficient. The experiment does not support the hypothesis that the error signal is itself checked by some further process. Hence, for us, it is merely error information, and not a metarepresentation.

This zebra finch study is elegant in its simplicity. It is one of thousands of studies on error-related neural activity, which is hypothesized to be crucial to learning, in particular reinforcement learning, as well as to a plethora of tasks related to cognitive control and conflict monitoring (for a review, see Larson, Clayson, and Clawson 2014). For us, it shows not only that there are already quite extensive studies of the processes employed to encode error in the brain, but also that error plays exactly the role required by the account of representational mechanisms defended here.

The extensive processing of error signals in zebra finches shows that they represent their songs mentally. There is little doubt that songs have a biological function in songbirds and that the birds represent them, which satisfies the traditional teleosemantic criteria. Moreover, the satisfaction conditions of song templates are available to the song system, as evidenced by the existence of performance error signals in zebra finches. Consequently, our account can also substantially dispel the recent worry that there is no systematic answer to the question of what makes semantic information mental representation (Morgan 2013): its functional role includes the evaluation of its satisfaction conditions in the system that contains such information. In this way, our account meets the job description challenge by showing not only that the satisfaction conditions of cognitive representations are causally relevant to the success of agents that entertain such representations, but also that they are evaluated by specific coherence-checking consumer mechanisms. Perhaps there is no further distinctive "mental" role for representation.


10.5. Conclusion

In this paper, we have defended an account on which representational mechanisms require error-processing mechanisms. Without error processing, such mechanisms simply process semantic information that could remain unavailable to the organism. However, if errors are causally related not just to properties of the vehicle of the information, but also to the relation between the vehicle and the target as characterized by that vehicle, or simply to the satisfaction conditions partially determined by the vehicle, this shows that content has a causal role in such mechanisms.

Notice that satisfaction conditions can also be shown to be causally relevant to the organism's success in achieving its goals. For example, Gładziejewski and Miłkowski (2017) show that the satisfaction conditions of structural representations, that is, the similarity between the structural properties of the representational vehicle and properties of the target, are causally relevant to the success of organisms that entertain such representations. At the same time, we may presume, such organisms have their own error-checking mechanisms that reliably correct the structure of representations detected to be in error. Thus, the satisfaction conditions of representations play explanatory roles both for the external behavior and for the internal functioning of agents.

Needless to say, our focus here was on the possibility of misrepresentation and system-detectable error. The vehicles of information may be misrepresentational because they have the function of representation. By detecting incoherence between information vehicles, the cognitive system may also gain access to error, which is simply evidence that content is available to the system. Our project of naturalizing intentionality elucidates the representational character of content by showing how it can be causally efficacious. In particular, because we are realists about representational mechanisms, we believe they can be used to truly describe, explain, predict, and successfully manipulate cognitive processes and behaviors.

We have applied the account of representational mechanisms to nonlinguistic phenomena in animals, as they are simpler to study experimentally. We believe it is also applicable to phenomena in human beings, which will be the focus of some of our future work. We have also illustrated it with subpersonal mechanisms, which are nonetheless related to personal phenomena in zebra finches.

Note that when zebra finches make errors (without scientists around distorting their auditory feedback), they can recover. But if they cannot recover under distorted feedback, we need not conclude that zebra finches are instrumentally irrational in trying to sing out of tune.

In our account, we also elucidate the notion of intentionality. We believe that the determinants of reference and content depend on the mechanism in question. While we had no space to argue for semantic externalism here, we follow standard teleosemantics in this respect (for further details, see Bielecka 2017). We are also pluralists about various kinds of representation, for example, structural representations, symbolic representations, and indicators. In our example, we did not focus on truth; we focused only on the accuracy of the representation of the song syllable that the zebra finch detects when it is sung out of tune. Specifically, the song production system gets the signal that the song syllable was misrepresented, and thus it has to correct the "error" (induced artificially by the distorted feedback).

The account presented here remains sketchy. We have presented a shortened description of the representational mechanism in zebra finches, omitting some of the known details, in particular about the localization of various parts of the bird brain and the process of learning the initial song template. The presented mechanism is just one of many possible representational mechanisms. Another kind of representational mechanism, analyzed using the same framework in the past, is found in rodents that predict their future navigation paths; these representations are structural and anticipatory (Miłkowski 2015). Perceptual representational mechanisms may also be used to explain hallucinations (Miłkowski 2017a, 2018). Our future work, however, will focus on their role in mental pathology, both in humans and in other animals (Bielecka and Marcinów 2017).

Acknowledgments

The work on this paper was funded by the National Science Center (Poland) under research grants 2016/23/D/HS1/02205 (PI: Krystyna Bielecka) and 2014/14/E/HS1/00803 (PI: Marcin Miłkowski). The authors wish to thank Brice Bantegnie, Juraj Hvorecky, and two anonymous reviewers, as well as the editors of this volume, for their helpful comments.


References

Anderson, M. L., and Rosenberg, G. 2008. Content and Action: The Guidance Theory of Representation. Journal of Mind and Behavior 29 (1–2): 55–86.
Bechtel, W. 1998. Representations and Cognitive Explanations: Assessing the Dynamicist's Challenge in Cognitive Science. Cognitive Science 22 (3): 295–318.
Bechtel, W., and Richardson, R. C. 2010. Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. 2nd ed. Cambridge, MA: MIT Press.
Bickhard, M. H. 1993. Representational Content in Humans and Machines. Journal of Experimental & Theoretical Artificial Intelligence 5 (4): 285–333. https://doi.org/10.1080/09528139308953775.
Bickhard, M. H. 2009. The Interactivist Model. Synthese 166 (3): 547–591. https://doi.org/10.1007/s11229-008-9375-x.
Bielecka, K. 2017. Semantic Internalism Is a Mistake. Internetowy Magazyn Filozoficzny HYBRIS 38: 123–146.
Bielecka, K., and Marcinów, M. 2017. Mental Misrepresentation in Non-human Psychopathology. Biosemiotics 10 (2): 195–210. https://doi.org/10.1007/s12304-017-9299-2.
Böhner, J. 1983. Song Learning in the Zebra Finch (Taeniopygia guttata): Selectivity in the Choice of a Tutor and Accuracy of Song Copies. Animal Behaviour 31 (1): 231–237. https://doi.org/10.1016/S0003-3472(83)80193-6.
Butlin, P. 2018. Representation and the Active Consumer. Synthese, September. https://doi.org/10/gd87r3.
Cao, R. 2012. A Teleosemantic Approach to Information in the Brain. Biology & Philosophy 27 (1): 49–71. https://doi.org/10.1007/s10539-011-9292-0.
Carnap, R. 1947. Meaning and Necessity. Chicago: University of Chicago Press.
Christensen, W., and Bickhard, M. H. 2002. The Process Dynamics of Normative Function. The Monist 85 (1): 3–28.
Clark, A. 2016. Surfing Uncertainty: Prediction, Action, and the Embodied Mind. New York: Oxford University Press.
Collier, J. D. 1999. Causation Is the Transfer of Information. In H. Sankey (ed.), Causation, Natural Laws and Explanation, 279–331. Dordrecht: Kluwer.
Colombo, M. 2014. Deep and Beautiful: The Reward Prediction Error Hypothesis of Dopamine. Studies in History and Philosophy of Biological and Biomedical Sciences 45 (March): 57–67. https://doi.org/10.1016/j.shpsc.2013.10.006.
Colombo, M., and Wright, C. 2017. Explanatory Pluralism: An Unrewarding Prediction Error for Free Energy Theorists. Brain and Cognition, Perspectives on Human Probabilistic Inferences and the "Bayesian Brain," 112 (March): 3–12. https://doi.org/10.1016/j.bandc.2016.02.003.
Craver, C. F. 2007. Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Oxford University Press.
Cummins, R. 1996. Representations, Targets, and Attitudes. Cambridge, MA: MIT Press.
Cummins, R., Blackmon, D., Byrd, D., Lee, A., May, C., and Roth, M. 2006. Representation and Unexploited Content. In G. McDonald and D. Papineau (eds.), Teleosemantics, 195–207. New York: Oxford University Press.
Davidson, D. 1987. Knowing One's Own Mind. Proceedings and Addresses of the American Philosophical Association 60 (3): 441–458.

Davidson, D. 2004. Problems of Rationality. Oxford: Clarendon Press; New York: Oxford University Press.
Dayan, P., and Niv, Y. 2008. Reinforcement Learning: The Good, the Bad and the Ugly. Current Opinion in Neurobiology 18 (2): 185–196. https://doi.org/10.1016/j.conb.2008.08.003.
Deacon, T. W. 2012. Incomplete Nature: How Mind Emerged from Matter. New York: Norton.
Dennett, D. C. 1981. True Believers: The Intentional Strategy and Why It Works. In A. F. Heath (ed.), Scientific Explanation: Papers Based on Herbert Spencer Lectures Given in the University of Oxford, 150–167. Oxford: Clarendon Press.
Dewhurst, J. 2018. Individuation without Representation. British Journal for the Philosophy of Science 69 (1): 103–116. https://doi.org/10/gdvhz2.
Dretske, F. I. 1982. Knowledge and the Flow of Information. 2nd ed. Cambridge, MA: MIT Press.
Dretske, F. I. 1986. Misrepresentation. In R. Bogdan (ed.), Belief: Form, Content, and Function, 17–37. Oxford: Clarendon Press.
Egan, F. 2010. Computational Models: A Modest Role for Content. Studies in History and Philosophy of Science Part A 41 (3): 253–259. https://doi.org/10.1016/j.shpsa.2010.07.009.
Egan, F. 2014. How to Think about Mental Content. Philosophical Studies 170 (1): 115–135. https://doi.org/10/gdvhwj.
Eliasmith, C. 2005. A New Perspective on Representational Problems. Journal of Cognitive Science 6: 97–123.
Fee, M. S., and Goldberg, J. H. 2011. A Hypothesis for Basal Ganglia-Dependent Reinforcement Learning in the Songbird. Neuroscience 198 (December): 152–170. https://doi.org/10/ckvkvx.
Fodor, J. A. 1975. The Language of Thought. New York: Thomas Y. Crowell.
Fodor, J. A. 1992. A Theory of Content and Other Essays. Cambridge, MA: MIT Press.
Frege, G. 1960. Translations from the Philosophical Writings of Gottlob Frege. Edited by P. T. Geach and M. Black. Oxford: Blackwell.
Fresco, N. 2010. Explaining Computation without Semantics: Keeping It Simple. Minds and Machines 20 (2): 165–181. https://doi.org/10.1007/s11023-010-9199-6.
Friston, K. J., Daunizeau, J., and Kiebel, S. J. 2009. Reinforcement Learning or Active Inference? PLOS ONE 4 (7): e6421. https://doi.org/10.1371/journal.pone.0006421.
Friston, K. J., and Kiebel, S. J. 2011. Predictive Coding: A Free-Energy Formulation. In M. Bar (ed.), Predictions in the Brain: Using Our Past to Generate a Future, 231–246. Oxford: Oxford University Press.
Gadagkar, V., Puzerey, P. A., Chen, R., Baird-Daniel, E., Farhang, A. R., and Goldberg, J. H. 2016. Dopamine Neurons Encode Performance Error in Singing Birds. Science 354 (6317): 1278–1282. https://doi.org/10.1126/science.aah6837.
Garson, J. 2013. The Functional Sense of Mechanism. Philosophy of Science 80 (3): 317–333. https://doi.org/10.1086/671173.
Gebharter, A. 2016. Causal Nets, Interventionism, and Mechanisms: Philosophical Foundations and Applications. New York: Springer.
Gehring, W. J., Goss, B., Coles, M. G. H., Meyer, D. E., and Donchin, E. 1993. A Neural System for Error Detection and Compensation. Psychological Science 4 (6): 385–390. https://doi.org/10/fdbssq.
Gelder, T. van. 1998. The Dynamical Hypothesis in Cognitive Science. Behavioral and Brain Sciences 21 (5): 615–628; discussion 629–665.

Gładziejewski, P., and Miłkowski, M. 2017. Structural Representations: Causally Relevant and Different from Detectors. Biology & Philosophy 32 (February): 337–355. https://doi.org/10.1007/s10539-017-9562-6.
Glennan, S. 2017. The New Mechanical Philosophy. New York: Oxford University Press.
Glimcher, P. W. 2011. Understanding Dopamine and Reinforcement Learning: The Dopamine Reward Prediction Error Hypothesis. Proceedings of the National Academy of Sciences 108 (Supplement 3): 15647–15654. https://doi.org/10.1073/pnas.1014269108.
Hohwy, J. 2013. The Predictive Mind. New York: Oxford University Press.
Khamassi, M. 2005. Actor-Critic Models of Reinforcement Learning in the Basal Ganglia: From Natural to Artificial Rats. Adaptive Behavior 13 (2): 131–148. https://doi.org/10.1177/105971230501300205.
Larson, M. J., Clayson, P. E., and Clawson, A. 2014. Making Sense of All the Conflict: A Theoretical Review and Critique of Conflict-Related ERPs. International Journal of Psychophysiology 93 (3): 283–297. https://doi.org/10/f6gvhj.
Machamer, P., Darden, L., and Craver, C. F. 2000. Thinking about Mechanisms. Philosophy of Science 67 (1): 1–25.
MacKay, D. M. 1969. Information, Mechanism and Meaning. Cambridge, MA: MIT Press.
Miłkowski, M. 2013. Explaining the Computational Mind. Cambridge, MA: MIT Press.
Miłkowski, M. 2015. Satisfaction Conditions in Anticipatory Mechanisms. Biology & Philosophy 30 (5): 709–728. https://doi.org/10.1007/s10539-015-9481-3.
Miłkowski, M. 2016. Function and Causal Relevance of Content. New Ideas in Psychology 40: 94–102. https://doi.org/10.1016/j.newideapsych.2014.12.003.
Miłkowski, M. 2017a. Modelling Empty Representations: The Case of Computational Models of Hallucination. In G. Dodig-Crnkovic and R. Giovannoli (eds.), Representation and Reality in Humans, Other Living Organisms and Intelligent Machines, 17–32. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-43784-2_2.
Miłkowski, M. 2017b. The False Dichotomy between Causal Realization and Semantic Computation. Hybris, no. 38: 1–21.
Miłkowski, M. 2018. Explaining Hallucinations Computationally. In B. Brożek, Ł. Kwiatek, and J. Stelmach (eds.), Explaining the Mind, 239–260. Cracow: Copernicus Center Press.
Mill, J. S. 1875. A System of Logic, Ratiocinative and Inductive, Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation. 9th ed. London: Longmans, Green, Reader, and Dyer.
Millikan, R. G. 1984. Language, Thought, and Other Biological Categories: New Foundations for Realism. Cambridge, MA: MIT Press.
Millikan, R. G. 1989. In Defense of Proper Functions. Philosophy of Science 56 (2): 288–302.
Millikan, R. G. 1993. White Queen Psychology and Other Essays for Alice. Cambridge, MA: MIT Press.
Millikan, R. G. 1995. Pushmi-Pullyu Representations. Philosophical Perspectives 9: 185–200.
Morgan, A. 2013. Representations Gone Mental. Synthese 191 (2): 213–244. https://doi.org/10.1007/s11229-013-0328-7.
Nielsen, K. S. 2010. Representation and Dynamics. Philosophical Psychology 23 (6): 759–773. https://doi.org/10.1080/09515089.2010.529045.
O'Brien, G. 2015. How Does Mind Matter? Edited by T. Metzinger and J. M. Windt. Open MIND. https://doi.org/10/gdvhz9.

Piccinini, G. 2008. Computation without Representation. Philosophical Studies 137 (2): 205–241. https://doi.org/10.1007/s11098-005-5385-4.
Piccinini, G. 2015. Physical Computation: A Mechanistic Account. Oxford: Oxford University Press.
Plebe, A., and De La Cruz, V. M. 2016. Neurosemantics. Studies in Brain and Mind, vol. 10. Cham, Switzerland: Springer. http://link.springer.com/10.1007/978-3-319-28552-8.
Putnam, H. 1960. Minds and Machines. In S. Hook (ed.), Dimensions of Mind, 148–179. New York University Press.
Putnam, H. 1975. The Meaning of Meaning. In Philosophical Papers, vol. 2: Mind, Language, and Reality, 215–271. Cambridge: Cambridge University Press.
Ramsey, W. M. 2007. Representation Reconsidered. Cambridge: Cambridge University Press.
Ramsey, W. M. 2016. Untangling Two Questions about Mental Representation. New Ideas in Psychology, no. 40: 3–12. https://doi.org/10.1016/j.newideapsych.2015.01.004.
Recanati, F. 2012. Mental Files. Oxford: Oxford University Press.
Schultz, W. 1998. Predictive Reward Signal of Dopamine Neurons. Journal of Neurophysiology 80 (1): 1–27.
Schultz, W., Dayan, P., and Montague, P. R. 1997. A Neural Substrate of Prediction and Reward. Science 275 (5306): 1593–1599. https://doi.org/10.1126/science.275.5306.1593.
Shagrir, O. 2010. Brains as Analog-Model Computers. Studies in History and Philosophy of Science Part A 41 (3): 271–279. https://doi.org/10.1016/j.shpsa.2010.07.007.
Shagrir, O. 2017. The Brain as an Input-Output Model of the World. Minds and Machines, October, 1–23. https://doi.org/10.1007/s11023-017-9443-4.
Shapiro, L. 1997. Junk Representations. British Journal for the Philosophy of Science 48 (3): 345–362. https://doi.org/10.1093/bjps/48.3.345.
Shea, N. 2014. Reward Prediction Error Signals Are Meta-representational. Noûs 48 (2): 314–341. https://doi.org/10.1111/j.1468-0068.2012.00863.x.
Sober, E. 1984. The Nature of Selection: Evolutionary Theory in Philosophical Focus. Cambridge, MA: MIT Press.
Sober, E. 1993. Philosophy of Biology. Boulder, CO: Westview Press.
Sprevak, M. 2010. Computation, Individuation, and the Received View on Representation. Studies in History and Philosophy of Science Part A 41 (3): 260–270. https://doi.org/10.1016/j.shpsa.2010.07.008.
Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
Thagard, P. 2000. Coherence in Thought and Action. Cambridge, MA: MIT Press.
Thomson, E., and Piccinini, G. 2018. Neural Representations Observed. Minds and Machines, February, 1–45. https://doi.org/10.1007/s11023-018-9459-4.
Tinbergen, N. 1963. On Aims and Methods of Ethology. Zeitschrift für Tierpsychologie 20 (4): 410–433.
Twardowski, K. 1977. On the Content and Object of Presentations: A Psychological Investigation. The Hague: Nijhoff.
Williams, D. 2018. Predictive Coding and Thought. Synthese 197: 1749–1775. https://doi.org/10.1007/s11229-018-1768-x.
Zann, R. A. 2002. The Zebra Finch: A Synthesis of Field and Laboratory Studies. Oxford: Oxford University Press.

Index For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on occasion, appear on only one of those pages. Figures are indicated by f following the page number    abilities causal role, 4–​5, 12, 15–​16, 41, 42, 43, 47, cognitive, 46, 180–​81n.3, 181–​82 51, 58–​59, 85, 179–​80, 305, 312 conceptual, 110, 160, causal theory of content, 17, 65, 182 160–​61n.7,  293–​94 Chemero, Antony, 37–​38 detection, 219, 293–​94, 296 Chomsky, Noam, 4, 16, 55, 63–​66, 76, discrimination, 125, 224–​25 84–​85, 86, 104, 107, 180–​81, 188–​90 linguistic, 110, 187–​88 Churchland, Patricia, 13, 96n.7 aboutness, 21–​22, 28, 288–​89 Churchland, Paul, 96n.7, 139 accuracy-​conditions, 11, 82, 108–​9, Clark, Andy, 6, 7–​8, 9, 11, 37, 82–​83, 88, 137, 287 93–​95, 101, 107, 117, 124–​25, 129, action-​representation, 14, 217 243, 247, 248, 251, 301–​2, 309 adequacy conditions, 15–​16, 18–​19, 27, cognition 41, 43, 199 animal, 1, 127n.11 algorithm, 59, 93, 224, 308 basic, 182–​83, 185–​86, 187–​88,  191–​92 antirealism, 55, 56–​57, 60 core, 213, 234 anti-​representationalism, 6–​7, 16–​17, 81, enacted, 36–​37, 39, 182–​83, 185–​86 135, 139, 182–​83, 208, 292–​93 explanation of, 61–​63, 70, 82, 230, asymmetric causal dependency, 13n.3, 237, 256 29n.1, 44–​45n.18, 182n.5 high-​level, 137–​38, 141–​42, 151, autonomy of psychology, 29, 105–​6 155–​56, 157, 171    cognitive architecture, 7–​8, 207–​8, 235–​36 Bayesian approach, 15–​16, 50, 107, 269n.9 cognitive revolution, 1, 104, 135 Bayes’ theorem, 7–​8, 49–​50 coherence-​based account of error, 21–​22, Bechtel, William, 2, 37–​38, 45–​47, 230, 295, 302 262, 290, 303 compositional semantics, 5–​6, 20–​21, 155, behaviorism, 86–​87, 104, 179–​80, 157, 169, 282 194, 245 compositional syntax, 36, 38 Bickhard, Mark, 11, 287, 294, 295, 296–​97, computation, 4, 6–​7, 90, 102, 103, 118, 298–​300 120–​21, 261, 292 Block, Ned, 86–​87 analog, 89 Boyd, Richard, 219–​21, 232 mathematical, 74, 75, 76, 88, 89, Brentano, Franz, 1, 12, 104, 105, 108–​9 237, 301 Burge, Tyler, 85n.3, 128, 129, 138–​39, neural,  89–​90 143n.1, 157, 158n.6, 169n.11, 170    physical, 4, 301, 302–​3

320 Index computational level, Marr’s, 226, 227, 229,  231–​32 computational process, 3–​4, 36, 222–​23 computationalism, 81, 82–​83, 86–​87, 90, 147 contentless, 4, 97 computer, 4–​5, 35, 72–​73, 74, 87, 118, 243,  302–​3 digital, 3–​4, 5, 89 concept, 2n.1, 39–​40n.12, 79, 93, 179–​81 conditions. See accuracy-​conditions; adequacy conditions; satisfaction conditions; truth-​conditions connectionism, 6, 7–​8, 36–​37, 40–​41, 118, 223, 292–​93. See also neural network consciousness, 44, 106–​7, 110–​11, 244–​45,  256–​57 consumer-​based semantics, 13, 14 consumer system, 13, 21–​22, 112–​13, 119–​20, 291, 300–​1, 304–​5, 311 content ascription of, 32, 47 coarse-​grained, 180–​81, 195–​96, 197–​99,  204 cognitive, 16, 32–​34, 71–​72, 73–​74,  236–​37 computational, 55, 71–​72 derived, 109 distal, 16, 32–​33, 48–​49 intensional, 189–​90, 193 intentional, 13, 91, 215, 216–​17, 219, 221–​22, 226, 229, 232, 236 mathematical, 16–​17, 72–​73, 74, 75–​76, 81, 91–​92, 94–​95, 96–​97, 189–​90,  236–​37 mental, 12, 13–​14, 27, 30–​31, 35, 44–​45, 81, 96, 108, 109, 116–​17, 126, 288 original, 56–​57, 109, 110, 111, 113 pragmatic,  189–​90 propositional, 5–​6, 82, 110, 201, 204, 287n.1 semantic, 14–​16, 92, 117–​18, 216–​17, 230, 231, 236, 254–​56, 258, 259–​60, 261, 262–​63, 264, 265, 268–​69, 275, 276, 279, 280 coordination with absence, 9, 11, 17,  126–​30 co-​reference, 162, 163, 164–​66, 174

Craver, Carl, 48–​49, 220, 226, 229, 290 Cummins, Robert, 10, 30, 105–​6, 119–​20, 123, 217, 223–​24, 225, 289–​90, 293    Davidson, Donald, 110, 294–​95, 299 dead reckoning, 127, 191–​92, 204 decomposition, 5, 9, 19, 222 de-​coupleability, 129. See also detachability deflationism, 14–​15, 16–​17, 35, 41, 42, 43, 45, 47–​49, 51, 56–​57, 57n.1, 59–​60,  63 full, 81, 88, 97 partial, 81, 90–​91, 94–​95, 97 representational, 16, 41, 66, 83–​84 denotation, 17–​18, 145, 148, 155, 161, 162, 163–​64, 166, 167–​68, 169, 170–​71 detachability, 11, 129. See also de-​coupleability detector, 9–​10, 29–​30, 92, 112, 115–​16, 303, 307 Dennett, Daniel, 54–​55, 101, 110, 118, 216, 225, 304 discrepancy detection, 288, 295, 302, 305,  306–​7 disjunction problem, 92–​93, 265 Dretske, Fred, 13, 42, 58, 65, 101, 102–​3, 105, 110, 111–​12, 113–​14, 115–​16, 117, 179–​80, 200, 293, 296–​97, 304,  305–​6    Egan, Frances, 14–​15, 16–​17, 55, 64, 66, 71–​76, 85, 87n.5, 91–​93, 94, 95, 97, 167–​68, 188, 189–​90, 192, 193, 201, 208–​9, 236–​37,  303 eliminativism, 16, 54–​55, 56–​57, 66–​67, 70,  76–​77 empiricism, 1, 20–​21, 61 enactivism, 16–​17, 39, 81, 88, 90, 97,  180–​81 error, 11, 21–​22, 108–​9, 111–​12, 235, 291, 293, 294–​96, 299–​300, 302, 304–​6, 307f, 308, 309, 310–​11, 312–​13. See also misrepresentation computational, 301, 302 error detection, 11, 21–​22, 287–​88, 292, 293–​94, 295–​96, 297, 299–​303, 304–​5, 307f, 312

Index  321 error signals, 308–​11 Evans, Garreth, 160n.7, 161, 170 evolution, 13, 31–​32, 74, 94, 163, 187–​88, 195, 196, 215, 215–​16n.3, 234–​35, 260, 299, 307 explanation causal, 96, 179–​80, 199, 206n.21 computational, 96–​97, 189, 231 horizontal, 217, 225 intentional, 137, 138, 139, 146 low-​level, 192, 193, 208–​9 mechanistic, 48–​49n.23, 96, 206–​7, 290, 292 multilevel, 59, 179–​80, 192, 201, 214, 221, 222, 225, 234 psychological, 105, 137, 171, 173–​74,  225 reductive, 44, 192–​93 representational, 8, 11, 18–​19, 114–​15, 193, 235 vertical, 217, 222, 225, 233 extended mind, 243–​44, 247, 249–​51 extended mind hypothesis, 243, 249–​50 extension, 29, 165, 166, 237, 289 cognitive, 243–​44, 249–​50, 251 radical, 243–​44,  249–​50 externalism, 182–​83n.6,  231–​32 semantic, 231–​32, 313    fictionalism, 16, 41, 67, 68–​71, 85. See also neural representational fictionalism Fodor, Jerry, 4, 13n.3, 28, 36, 43–​44, 112, 128, 129, 135, 138–​39, 142, 159, 160–​61, 162, 167–​68, 173–​74, 180–​81, 187–​88, 223,  293–​94 function biological, 21–​22, 39, 92, 110, 287, 311 cognitive, 1, 6, 272 interpretation, 15–​16, 26, 35, 36, 43 mathematical, 34, 50, 74, 75–​76, 112, 189–​90,  194 natural, 13, 29–​30 realization, 15–​16, 26, 27, 36, 40, 43, 237 representational, 10, 63, 261, 264, 265, 267, 278–​79, 282–​83, 290, 297, 298 teleological, 13, 111–​12, 260, 296–​97

functionalism, 18–​19, 20, 77, 180–​81, 182–​83, 186–​87, 194–​95, 197–​99, 220–​21, 242–​44, 245, 246, 247–​50,  251–​52 commonsense, 242, 248 functional role, 8, 55–​56, 57, 63, 75–​76, 180–​81, 182–​83, 194, 195, 197–​99, 200–​11, 232, 254–​55, 256, 259–​60, 262, 263, 280, 287, 290, 310–​11    Gallagher, Shaun, 37, 182–​83 Gładziejewski, Paweł, 10, 312 Grice, Paul, 257, 258    hard problem of content, 39, 88, 94–​95 Haugeland, John, 2, 9, 12, 61–​62, 178–​79 homeostatic property cluster (HPC),  219–​20 homomorphism, 32, 122–​23, 124, 125–​26, 127, 262, 263 Hutto, Daniel, 16–​17, 39, 90, 101    icon, 121, 126 identity conditions, 165–​67 implementational level, Marr’s, 227, 228–​32, 233, 235 inaccuracy, 306–​7. See also error incoherence detection, 295, 300–​1, 305 indeterminacy, 18–​19, 29–​30, 32–​33, 44–​45, 58, 65, 70–​71, 112–​13, 114, 122. See also semantic indeterminacy indeterminacy problem, 13–​15 indicator, 111, 118–​19, 121, 123, 125, 126, 128, 306–​7, 313 individuation of representations, 8, 17–​18, 136, 165 information natural, 263, 264–​65, 273 natural/​nonnatural, 258–​59,  266 nonnatural, 259, 266, 267–​68 processing, 80, 89, 107, 182–​83, 197–​99, 225–​26, 227, 230, 244–​45, 247 semantic, 20–​22, 258–​59, 265, 290, 292–​93, 295–​96, 303, 305–​6, 308–​10,  311 sensory, 269–​73, 275, 276, 281–​82, 293 structural, 21–​22, 291

322 Index informational state, 184–​86, 187–​88,  191–​92 informational teleosemantics, 20–​21, 254–​55, 263, 265, 266, 268, 275, 276, 280, 282. See also teleosemantics innate, 214, 310 intensionality, intensional, 49–​50, 92, 189–​90, 193, 294 intentional gloss, 16, 33–​34, 35, 44–​45, 55, 73,  92–​93 intentional inexistence, 14, 104 intentionality, 1, 12, 28, 43–​45, 104, 244–​45, 254–​55, 256–​57, 258–​59, 279, 280–​81, 283, 288–​89, 293, 297 derived, 12, 279n.13 naturalization of, 3, 20–​21, 105, 294, 302, 304–​5,  312–​13 internalism, 105–​6,  236–​37 interpreter, 17, 118–​20, 121, 122, 125 isomorphism, 30, 108, 121, 122, 128    Jacob, Pierre, 12, 256 job description challenge, 8, 116–​17, 181–​82, 292–​93, 311. See also representational role    kind natural, 19, 213–​14, 216, 217–​18, 220, 222, 226, 227, 228, 233, 236 representational, 19, 213–​16, 217–​19, 221–​22, 223–​24, 225, 227–​28, 229, 230–​32, 233, 234–​35, 236, 237–​38    language of thought (LOT), 5–​6, 36, 82–​83, 108, 135, 139, 187–​88, 213. See also mentalese laws, 4–​5, 104, 128, 218 Lewis, David, 149, 246    MacKay, Donald MacCrimmon, 21–​22,  291 malfunction, 20–​21, 111–​12, 260, 267–​68, 273, 277, 278, 281, 282 representational, 20–​21, 268, 270, 281, 282 map, 10–​11, 27, 30, 38, 42, 61–​63, 101, 103, 108, 113–​14, 119, 121, 122, 123, 124–​25, 127, 135

cognitive, 135, 139, 143, 147, 151, 153, 157, 169, 270, 271 mapping, 6–​7, 18–​19, 21–​22, 26, 40, 41, 64, 113, 122, 127, 194, 195, 237. See also function Marr, David, 34, 63–​64, 66, 71–​72, 74, 75, 76, 85n.3, 96, 225, 226, 235, 236–​37 Martian, 20, 242–​44, 245, 246–​50 Martian psychology, 243, 245, 246, 248–​49 meaning, 2, 13n.3, 26–​27, 28, 30–​31, 35, 44–​45, 57, 83, 109, 158–​59, 167–​68, 172n.13, 221, 254, 255–​56, 257, 258, 280, 289 derived, 109, 111 natural, 105, 257, 258, 280 nonnatural, 20–​21, 257, 258, 277, 279, 280, 281, 283 mechanism causal, 95, 96, 179–​80, 194, 199, 201, 206n.21 cognitive, 31–​32, 91–​92, 93, 94–​95, 191–​92,  262 computational, 14–​15, 21–​22, 33, 88, 93, 189–​90, 219, 292, 300–​3 learning, 87, 214n.2 neural, 48–​49, 180–​81n.3, 190, 191–​92, 202–​3, 206–​7, 217, 226, 234 physical, 18–​19, 191, 301, 302 representational, 287–​88, 290–​91, 292, 298, 299–​301, 303, 306, 310, 311,  312–​13 rigid, 185–​86, 191–​92, 202, 206, 208 mechanism sketch, 48–​49, 301 memory, 1, 17–​18, 135, 180–​81, 185–​86, 202–​3, 204–​5, 213, 217, 224, 226, 227, 230, 232, 245, 250–​51, 308 episodic, 184–​85, 205, 207–​8, 250–​51,  270 working, 213, 217, 224, 236, 246–​47 mental causation, 4–​5, 206n.21, 288 mental demonstrative, 170–​71 mental event, 17–​18, 139, 142, 143, 144, 145–​46, 147–​48, 151, 152, 160–​61, 163, 164–​65, 168, 170–​71n.12, 171, 172–​73,  225 mental imagery, 129, 157 mentalese, 5–​6, 36, 135, 136, 142, 151, 152, 156–​57, 161, 162, 166, 167n.10,

Index  323 167–​69, 172, 172n.13, 173–​74. See also language of thought (LOT) mentalist theory of meaning, 255, 279 metarepresentational, 274–​75n.11, 308,  309–​11 Miłkowski, Marcin, 21–​22, 312 Millikan, Ruth Garrett, 13–​14, 29–​30, 94, 126, 187–​88, 197–​99n.13, 297, 299–​300 misrepresentation, 11, 20–​22, 29, 30–​31, 35, 55–​56, 65, 181–​82, 190, 195, 267, 268, 270, 273, 277, 278–​79, 281, 293, 296–​97, 299, 312. See also error model Bayesian, 47–​51,  146–​47 cognitive, 36–​37, 38, 40, 47–​48, 51, 54–​55, 60, 67, 87 computational, 26, 41, 44–​45n.19, 120, 147 connectionist, 36–​37,  40–​41 dynamical, 6–​7, 36–​37,  268–​69 generative, 93–​94, 95 modeling, 6, 62–​63, 118, 231, 233, 262 cognitive, 36–​37, 59 multiple realizability, 40, 86–​87, 172, 173, 192–​93, 202–​4, 208–​9, 220–​21, 229–​30, 234, 260, 261. See also multiple realization multiple realization, 192, 202–​4, 208–​9, 229, 230. See also multiple realizability Myin, Erik, 16–​17, 39, 90    naturalism, 13–​15, 20–​21, 28, 29–​33, 35, 39, 43–​45, 58–​59, 66–​67, 83, 86, 92–​ 93, 94–​95, 102–​3, 105, 110, 111–​12, 113–​14, 116–​17, 254–​55, 258–​59n.3, 267, 279, 283, 288–​89, 294, 295 navigation, 17–​18, 33n.7, 46, 47, 50–​51, 108, 124–​25, 127, 138, 139, 142, 143, 147, 151, 155–​56, 158n.6, 171, 172, 173, 185–​86, 204–​5n.20, 262, 313 Neander, Karen, 102–​3, 191–​92, 261, 265n.7, 273n.10 neural network, 6, 89, 129, 235–​36n.13. See also connectionism neural representational fictionalism, 16, 55, 66–​69, 71. See also fictionalism

neuroscience, 1, 4, 7–​8, 9–​10, 15–​16, 28, 29, 31, 34, 35, 43–​44, 45, 46, 50–​51, 59, 89, 106, 108, 172, 191–​92n.9, 203, 207–​8, 230, 256, 262n.4, 269n.9, 275 neuron, 9–​10, 17, 46, 58, 62–​63, 66, 67, 87, 95, 108, 111–​12, 115–​16, 119, 120, 223, 261, 264, 265, 275–​76, 308, 310 normativity, 29–​30, 48, 59–​60, 218–​19n.4    object-​files, 19, 200–​1, 213, 225n.10 ontological commitment, 16, 55–​57, 77, 83–​84, 143,  145–​46    perceptual constancies, 128 perceptual inference, 138, 146–​47, 172 perceptual state, 248, 258–​59, 265 phenomenal consciousness, 106–​7, 110–​11,  256–​57 phenomenal intentionality, 102, 110–​11 phenomenology, 106, 256–​57 physicalism, 12, 13, 172 physical system, 26, 237 Piccinini, Gualtiero, 20–​21, 79, 89–​90 plasticity, 208–​9, 261 pragmatic considerations, 14–​15, 18–​19, 32–​33, 35, 41, 73, 84, 91–​92 predictive coding See predictive processing predictive processing, 7–​8, 36–​37, 49–​50n.25, 51, 93, 94, 301–​2, 309–​10 probability, 20–​21, 49–​50, 93, 146–​47, 257, 258–​59, 263, 264, 265, 266, 267, 277, 280, 281n.14, 310–​11 probability distributions, 49–​50, 49–​50n.25,  94–​95 producer, 13, 291, 300 property causal, 5–​6, 84 distal, 128, 142, 146–​47, 151 representational, 17–​18, 85, 136, 137, 138–​39, 141, 144–​45, 148, 155–​56, 168, 169, 170–​71, 172, 174, 215–​16 semantic, 2, 4–​6, 11, 12, 17–​18, 19, 21–​22, 28, 31, 35, 96n.7, 222–​23, 229, 230, 231–​32, 233, 235, 303 syntactic, 2–​3, 4–​6, 12, 17–​18, 108, 167–​68, 174, 222–​23, 226, 230–​31 propositional attitude, 2n.1, 4, 5–​6, 54–​55

324 Index propositional content. See content propositional format, 107, 200, 201, 205–​6 prototype, 10, 213, 228, 246n.2 psychology, 28, 59, 63–​64, 104, 105–​6, 137–​38, 139, 188–​89, 218–​19, 223–​24, 244, 245, 246, 247, 249–​50, 251, 293–​94, 295 Bayesian, 50, 146–​47, 163, 171, 269n.9 cognitive, 1, 4, 50–​51 folk, 137–​38, 143, 156, 167, 189 perceptual, 50, 138, 146–​47, 163, 166, 171, 172 scientific, 104, 156, 171 social, 1, 137–​38, 225 Putnam, Hilary, 60, 172, 194, 195 Pylyshyn, Zenon, 159, 160–​61, 223    Quine, Willard Van Orman 12, 164–​65    Ramsey sentence, 243, 246, 248 Ramsey, William, 8, 9–​10, 15–​16, 27, 30, 36, 37–​38, 39–​40, 43, 46–​47n.20, 86, 102–​3, 108, 116–​17, 121, 292–​93 realism, 13, 16, 54–​56, 57, 58–​59, 60, 61–​62, 63, 66–​67, 76–​77, 85, 138–​39, 145–​46, 147–​48,  187 reasoning, 1, 5, 9, 60, 85–​86, 87, 116–​17, 119, 121 deductive, 137–​38, 147 receptor, 46–​47n.20, 64, 127, 128 reduction, 28, 43–​45, 58–​59, 192–​93 reference, 110, 163, 167n.10, 213, 218–​19n.4, 222, 230, 235–​37, 313 reification, 17–​18, 140, 143, 147, 160 representation analog, 103, 125, 213, 216, 224–​25, 232, 233 distributed, 6, 215–​16n.3, 222–​23 ersatz, 16, 64, 66, 180–​81, 188–​89 explicit, 36–​37, 39n.11, 49–​50n.25, 107, 145 implicit, 101, 106–​7, 145 inert, 289–​90, 293, 300 junk, 289–​90, 293 linguistic, 38, 119, 180–​81, 182–​83, 185–​86,  188–​89 maplike, 10, 101, 127 minimal, 14, 41n.14, 57, 289

neural, 16, 55, 66–​69, 70–​71, 256, 262, 275, 283 nonlinguistic, 180–​81,  182–​86 nonnatural, 20–​21, 254–​55, 258–​59, 263, 266–​69, 274–​75, 277, 278–​79, 280, 281–​82, 283 non-​relational, 55, 64 perceptual, 20–​21, 87n.4, 138, 142–​43, 146–​47, 151, 153, 157, 163, 166, 169, 169n.11, 235, 268–​69, 313 quasi-​linguistic, 38,  118–​19 situated, 180–​81, 193, 204, 208 structural, 124n.9, 254–​55, 262, 263, 263n.6, 264, 268–​69, 278–​79n.12, 312, 313 unconscious, 101, 102, 106–​7, 110–​11,  113 representational theory of mind, 4, 17–​18, 57n.1, 69–​70, 88, 135, 256n.2 representation-​hungry problems, 9, 10 resemblance, 11, 117–​18, 122, 218, 220, 227–​28, 229, 303 structural, 10 role causal, 4–​5, 12, 15–​16, 41, 42, 43, 46–​47, 48–​49n.21, 51, 58–​59, 64, 85, 179–​80, 194n.11, 305, 312 consumer,  300–​1 explanatory, 3, 16, 63–​64, 82, 83, 136, 179–​81, 190, 199, 206–​7, 234–​35,  312 functional, 8, 34, 55–​56, 57, 63, 75–​76, 182–​83, 194, 195, 197–​99, 200, 201, 232, 249, 254–​55, 256, 259–​60, 262, 263, 280, 287, 290, 297, 311 representational, 10, 16, 56–​57, 58–​59, 60, 62–​63, 66–​67, 75, 77, 114–​15, 116–​17, 118–​20, 123, 125, 290, 292–​93, 305 (see also job description challenge) semantic,  304–​5    satisfaction conditions, 11, 21–​22, 26, 28, 29–​30, 31, 82, 287, 290, 291, 292–​93, 297, 300–​1, 305, 306–​7, 308–​9,  311–​12 semantic indeterminacy, 167, 168. See also indeterminacy

Index  325 senses, 127, 129, 130 Shagrir, Oron, 30, 75, 89 Shea, Nicholas, 11, 20, 31, 35, 82, 124, 174–​75,  308–​11 sign, 119, 257, 266, 281 simulation offline, 20–​21, 254–​55, 262n.4, 271, 272, 273, 274, 275–​76, 277, 278, 278–​79n.12, 280,  281–​82 online, 271, 272, 273, 273n.10, 274, 275–​76, 278, 279 singular term, 166, 213 Sober, Elliott, 192–​93, 298 Sprevak, Mark, 16, 20, 35, 55, 66–​67, 68–​69, 70–​71, 243–​44, 248, 251, 252 stand-​in, 9, 17, 42, 101, 102, 103, 117, 118, 119, 120, 121, 125, 126–​27, 129, 130, 182–​83, 197–​99, 200–​1,  204–​5 standing in. See stand-​in subsystems, 16, 72–​73, 74–​76, 138, 142, 143–​44, 168, 222, 290, 291, 297

supervenience, 28, 96 systematicity, 159, 160–​61, 187–​88    targets, 16–​18, 215, 216–​17, 276–​77, 292 taxonomy, 17–​18, 108 teleosemantics, 13–​14, 20–​21, 29–​30, 31–​33, 58, 94, 112–​13, 231–​32, 287, 296–​97, 300–​1, 304–​5, 311, 313. See also informational teleosemantics truth-​conditions, 82, 137, 156–​57, 169 twin earth, 216, 289    unobservable, 61, 104, 270    vehicle representational, 2–​3, 15–​16, 26–​27, 36, 40–​41, 42, 43, 47, 48–​49, 51, 64, 74, 173–​74, 189–​90, 224, 259–​60, 290, 292–​93,  312    Watt governor, 6–​7, 37, 39–​40, 43, 303, 304 Weber’s law, 125, 224–​25