Neural Machines: A Defense of Non-Representationalism in Cognitive Neuroscience

In this book, Matej Kohár demonstrates how the new mechanistic account of explanation can be used to support a non-representationalist approach to explanation in cognitive neuroscience.

Table of Contents
Acknowledgements
Contents
List of Acronyms
Chapter 1: Introduction
References
Chapter 2: The New Mechanistic Theory of Explanation: A Primer
2.1 The Concept of Mechanism: Mechanisms, Phenomena, and Constitution
2.2 Constructing Mechanistic Models
2.3 Mechanistic Explanation
2.4 Conclusion
References
Chapter 3: Mechanistic Explanatory Texts
3.1 Craver and Kaplan's Contrastive Account of Mechanistic Explanation
3.2 Problems with Craver and Kaplan's Account
3.3 Mechanism Descriptions and Mechanistic Explanatory Texts
3.4 Constructing Mechanistic Explanatory Texts
3.5 Conclusion: Solving the Problems
References
Chapter 4: Representations and Mechanisms Do Not Mix
4.1 Representations: External and Internal
4.2 Mental vs. Neural Representations
4.3 Content-Determination and the Job-Description Challenge
4.4 Why Contents Are Not Explanatorily Relevant
References
Chapter 5: Indicator Contents
5.1 What Is Indicator Content?
5.2 Probabilities for Indicator Contents
5.3 Assessing Indicator Contents Based on Frequentist Chances
5.4 Assessing Indicator Content Based on Propensities
5.5 Conclusion
References
Chapter 6: Structural Contents
6.1 Defining Structural Representation
6.2 Mapping Relations for Structural Representation
6.3 Structural Representation and Locality
6.4 Structural Representations and Mutual Dependence
6.5 Conclusion
References
Chapter 7: Teleosemantics
7.1 Teleosemantic Analyses of Content
7.2 Teleosemantics with History-Dependent Functions
7.3 Teleosemantics with Synchronic Functions
7.4 Teleosemantics with Cybernetic Norms
7.5 Conclusion
References
Chapter 8: The Dual-Explananda Defence
8.1 The General Form of the Dual-Explananda Defence
8.2 The Fittingness Explanandum
8.3 The Success Explanandum
8.4 Conclusion
References
Chapter 9: The Pragmatic Necessity Defence
9.1 Egan's Deflationary Realism
9.2 Options for a Mechanistic Answer to Deflationary Realism
9.3 Reinterpreting Mathematical Content
9.4 Roles of the Intentional Gloss
9.5 Rejecting Cognitive Contents
9.6 Conclusion
References
Chapter 10: Conclusions and Future Directions
10.1 Consequences for Mainstream Philosophy of Cognitive Science
10.2 Consequences for Previous Non-representational Theories
10.3 Future Directions
References
Index

Studies in Brain and Mind 22

Matej Kohár

Neural Machines: A Defense of Non-Representationalism in Cognitive Neuroscience

Studies in Brain and Mind Volume 22

Series Editor
Gualtiero Piccinini, University of Missouri - St. Louis, St. Louis, MO, USA

Editorial Board Members
Berit Brogaard, University of Oslo, Norway; University of Miami, Coral Gables, FL, USA
Carl Craver, Washington University, St. Louis, MO, USA
Edouard Machery, University of Pittsburgh, Pittsburgh, PA, USA
Oron Shagrir, The Hebrew University of Jerusalem, Jerusalem, Israel
Mark Sprevak, University of Edinburgh, Scotland, UK

The series Studies in Brain and Mind provides a forum for philosophers and neuroscientists to discuss theoretical, foundational, methodological, and ethical aspects of neuroscience. It covers the following areas:

• Philosophy of Mind
• Philosophy of Neuroscience
• Philosophy of Psychology
• Philosophy of Psychiatry and Psychopathology
• Neurophilosophy
• Neuroethics

The series aims for a high level of clarity, rigor, novelty, and scientific competence. Book proposals and complete manuscripts of 200 or more pages are welcome. Original monographs will be peer reviewed. Edited volumes and conference proceedings will be considered provided that the chapters are individually refereed. This book series is indexed in SCOPUS. Initial proposals can be sent to the Editor-in-Chief, Prof. Gualtiero Piccinini, at [email protected]. Proposals should include:

• A short synopsis of the work or the introduction chapter
• The proposed Table of Contents
• The CV of the lead author(s)
• If available: one sample chapter

We aim to make a first decision within 1 month of submission. In case of a positive first decision the work will be provisionally contracted: the final decision about publication will depend upon the result of the anonymous peer review of the complete manuscript. We aim to have the complete work peer-reviewed within 3 months of submission. For more information, please contact the Series Editor at [email protected].

Matej Kohár

Neural Machines: A Defense of Non-Representationalism in Cognitive Neuroscience

Matej Kohár
Institute of History and Philosophy of Science, Technology, and Literature
Technical University of Berlin
Berlin, Germany

ISSN 1573-4536    ISSN 2468-399X (electronic)
Studies in Brain and Mind
ISBN 978-3-031-26745-1    ISBN 978-3-031-26746-8 (eBook)
https://doi.org/10.1007/978-3-031-26746-8

This work was supported by the Deutsche Forschungsgemeinschaft.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Acknowledgements

This work was completed under the auspices of the DFG Graduiertenkolleg "Situated Cognition", a collaboration between the Ruhr-Universität Bochum and the Universität Osnabrück. I would like to thank my supervisors Beate Krickel and Albert Newen for their insightful comments, many hours of formal and informal discussions about the topic, and general support. My thanks belong also to all my colleagues in the Graduiertenkolleg for their feedback at workshops. Special thanks go to Samantha Ehli, Robin Löhr, Julian Packheiser, Elmarie Venter and Julia Wolf, who have read and commented on large portions of the book, as well as all the empirical scientists in the cohort who were ready to discuss these abstract philosophical worries with limited immediate impact on their work. I would further like to thank colleagues from the Newen chair and the rest of the Institute, especially Sabrina Coninx, Francesco Marchi and Alfredo Vernazzani, for reading and commenting on parts of this work.

I completed parts of this work during a research stay at the University of Cincinnati. I would like to thank Tony Chemero for supervising my progress while at Cincinnati and the members of the UC Psychology Department for inviting me to present my work. My thanks extend to the RUB Research School, which funded my trip through its PR.INT grant programme.

I would also like to thank the guests of the "Mental Representations in a Mechanical World" workshop held in Bochum, which I had the privilege to organise together with Beate Krickel in November 2019. This workshop, which served in part to gather yet more feedback on the book, was likewise funded by the RUB Research School. My thanks also belong to Gualtiero Piccinini and an anonymous reviewer who pushed me to clarify and strengthen my arguments throughout the publication process. Part of this work (Sect. 3.4) has appeared in a joint paper with Beate Krickel.

Finally, I would like to thank my parents for supporting me in my studies, even though the best I could ever do to explain what I'm working on was along the lines of "philosophizing on how brains work".


List of Acronyms

CCC: common-sense cognitive concept (Chap. 9)
EIO: entity-involving occurrent (Chap. 2)
FFI: finite frequentist version of indicator content (Chap. 5)
HFI: hypothetico-frequentist version of indicator content (Chap. 5)
IC: indicator content (Chap. 5)
I1c–I4c: conditions on ideal interventions on constitutive mechanisms (Chap. 2)
LPI: long-run propensity version of indicator content (Chap. 5)
MET: mechanistic explanatory text (Chap. 3)
SC: Salmon-completeness (Chap. 3)
SCC: scientific cognitive concept (Chap. 9)
SPI: single-case propensity version of indicator content (Chap. 5)
SRC: structural representational content (Chap. 6)
TC: teleosemantic content (Chap. 7)
TC-ancestral: teleosemantic content with ancestral history functions (Chap. 7)
TC-cybernetic: teleosemantic content with cybernetic norms (Chap. 7)
TC-goal: teleosemantic content with organismic goal conception of function (Chap. 7)
TC-modal: teleosemantic content with modal conception of function (Chap. 7)
TC-pressure: teleosemantic content with selection pressures (Chap. 7)
TC-proper: teleosemantic content with proper functions (Chap. 7)

Chapter 1: Introduction

The goal of cognitive neuroscience is to uncover neural mechanisms responsible for intelligent behaviour in humans and animals. Intelligent behaviour has traditionally been taken to include decision-making, use of language and other high-level cognitive phenomena. Over time, however, the scope of what is meant by intelligent behaviour for the purposes of determining the proper subject matter of the cognitive sciences (including cognitive neuroscience) has expanded to include any context-dependent responses to stimuli. Cognitive neuroscience therefore engages in a search for neural mechanisms underlying sensory perception, memory, navigation, object-recognition, tracking, avoidance, etc. That is, the scope of cognitive neuroscience covers the search for neural mechanisms all the way from sensory processing, through response selection, to motor control. Importantly, the scope of the field is not confined to a single species, such as the human, but includes, at least in principle, also the study of animal cognition – either for its own sake or as a model for the human case, when ethical and/or practical considerations prohibit investigation into the human case directly.

This book is concerned with the nature of explanation in cognitive neuroscience. That is, the question I am tackling is what form (good) explanations in cognitive neuroscience have, which factors are explanatorily relevant, which are not, and how explanations in cognitive neuroscience should be evaluated. This issue is multifaceted and complex. Therefore, in my analysis I focus on a specific problem, namely, whether good explanations in cognitive neuroscience rely on identifying neural representations. This is a central issue in philosophy of cognitive neuroscience, as representational explanations are widespread. Thus, if it turned out that representational explanations are defective, this would have equally widespread repercussions on the explanatory practices of the field (or, more realistically, on the level of confidence that could be commanded by such defective explanations).

Let me now briefly introduce what representational explanations are (see Chap. 4 for more detail). In representational explanations, the cognitive system is described as manipulating representations of relevant objects or states of affairs.

A representation is a neural structure or activity which refers to, or is about, something else. For example, navigation might be explained by supposing that the cognitive system contains representations of the environment and constructs from them a representation of a path towards the target location. The organism then moves and receives sensory stimulation, which is compared to the representations of landmarks stored in memory. In this way, the cognitive system can know where it is along the path and issue appropriate motor commands to move the organism towards the next landmark.

The term representationalism has multiple uses, but in this book it is understood as the view that representational explanation is appropriate and necessary in cognitive science. The core commitment of representationalism is that mechanisms underlying intelligent behaviour manipulate representations. In other words, some components of these mechanisms are individual representations while other components produce, process, consume or modify these representations. Usually, the manipulation of representations is thought to be a computational process. The neural components, referred to in this context as representational vehicles, refer to, are about, or carry representational contents. And, importantly, the representational contents carried by the representational vehicles are explanatorily relevant for explaining intelligent behaviour.

One of the most serious problems with representationalism concerns the relation between representational vehicles and representational contents. The idea that a physical object refers to, or is about, a different object or state of affairs can be relatively straightforwardly understood when it comes to conventional signs. For instance, an arrow stands for, or refers to, a particular direction because a convention to that effect has been in use. Intelligent beings have commonly stipulated what the content of the arrow is. Similarly, I can unilaterally stipulate that a physical object, for instance, an egg-shaped chalk mark, represents the Earth while a large semi-circular mark to the left represents the Sun and a smaller circular mark to the right represents the Moon. I can further stipulate that dashed lines leading from the "Sun" towards the "Earth" represent sunlight, and that all the marks together are about lunar eclipses. However, explanations in neuroscience are usually taken to be objective. Therefore, merely stipulating which neural activity represents which external objects will not do. Instead, a naturalistic relation linking representational vehicles to their contents must be found. As we will see, there are numerous proposals for what such a relation might be. However, it is fair to say that none of the proposals has conclusively won out. Furthermore, even though the proposed relations are bona fide naturalistic, which one is the correct one cannot be decided empirically.

For this reason, among others, some researchers have grown sceptical of representational explanations in cognitive neuroscience. Instead, these researchers propose alternative non-representational explanatory frameworks.[1] Well-known non-representational frameworks include dynamicism and enactivism (which are sometimes taken to be complementary). Dynamicists (e.g., Chemero, 2009; Port & van Gelder, 1995) contend that intelligent behaviour is best explained by exploring mathematical relationships which hold between a small number of factors empirically determined to be relevant to the behaviour in question. Specifically, according to dynamicists, the mathematical relationships will take the form of sets of differential equations. The tools of dynamical systems analysis can then be used to understand how changes in relevant variables influence the type of intelligent behaviour under investigation.

[1] Non-representationalists are often also referred to as anti-representationalists. I avoid this term as it emphasises the opposition to representationalism, rather than the fact that they provide an alternative.

Another alternative, enactivism, sees cognition as perceptually guided embodied action. According to the enactivists, cognition does not involve the construction and manipulation of internal representations. Various ways of cashing out the notion of perceptually guided embodied action have arisen in the literature, including autopoietic enactivism (Varela et al., 1991), sensorimotor enactivism (O'Regan & Noë, 2001) and radical enactivism (Hutto & Myin, 2013, 2017). Autopoietic enactivists view the maintenance of organismic unity as an intrinsic goal of an organism and contend that as the organism interacts with the environment, it projects a network of values onto it, according to whether the interactions promote or endanger organismic unity. Sensorimotor enactivists focus on the role of bodily movement and so-called sensorimotor knowledge in perception. They contend that perception is not passive recovery of information from sensory stimuli but an active process in which the cognitive system attunes to the environment, bringing to bear implicit practical knowledge of sensorimotor contingencies. Radical enactivists view representational explanation as suspect because they contend that content is inherently normative and, as such, simply unavailable at sub-personal levels. Their project seeks to distinguish "basic" contentless minds, which exhibit a form of contentless directedness at stimuli, from contentful minds, in which content arose through socially embedded processes.

Non-representationalist explanatory frameworks do not face the problem of naturalising representational contents, as they do not employ representations as explanatorily relevant factors.[2] However, the non-representational alternatives available at present are significantly less well-developed in comparison to representationalism. In other words, there are more individual representational explanations of concrete cognitive phenomena than non-representational ones. Furthermore, representational explanations are generally more detailed and more targeted towards specific phenomena. Certain areas, such as higher cognition, are particularly difficult to explain using the resources of the existing non-representational frameworks.

[2] They might still need to naturalise basic (non-representational) intentionality (see Hutto & Satne, 2015), but in, e.g., autopoietic enactivism that feature is often thought to be built-in.

In this book, I aim to show that one plausible way forward for non-representationalism lies in adopting a mechanistic framework of explanation. Mechanistic explanations aim to explain phenomena by examining the system which exhibits the phenomenon in question and identifying those components of the system which are relevant to the occurrence of the phenomenon. These components, interacting to produce the phenomenon, together make up the mechanism for the phenomenon. For instance, to explain the phenomenon of rat maze navigation, we must identify those parts of the system (in this case, the rat) which are relevant to the phenomenon of navigation exhibited by the rat. Crucially, not all parts of the system are mechanism components. Only those parts which are mutually dependent on the phenomenon qualify. I will give more detail on mechanistic explanation and mutual dependence between mechanism components and phenomena in Chap. 2. For now, it will suffice to say that in a mechanistic explanatory framework explanatorily relevant factors are both (a) parts of the system exhibiting the phenomenon to be explained and (b) in a special dependence relation with the phenomenon to be explained.

The first condition, parthood, must be understood both spatially and temporally. That is, only parts of the system at the time the phenomenon occurs can be explanatorily relevant. If my finger were chopped off in a horrific accident, it cannot then be explanatorily relevant to how I prepare dinner, even though it used to be part of the system exhibiting that phenomenon (i.e., my body). The second condition guards against a number of infelicitous explanations, especially where the occurrence of a phenomenon changes something about the system, but this change is merely a by-product of the operation of the mechanism responsible for the phenomenon. For instance, my heart's beating has an influence on my writing – should the heart cease beating, the writing would likewise cease. However, the writing itself has no influence on the heart's beating. Thus, considering the heartbeat, or lack thereof, as explanatorily relevant to writing would be misguided.

My central claim is that representational contents fail to satisfy these two criteria for explanatory relevance in the mechanistic explanatory framework, and that, therefore, the mechanistic explanatory framework can serve as a basis for defending a non-representational way of thinking about explanations in cognitive neuroscience. This is particularly exciting because representationalists (i.e., my opponents) claim that their goal is to discover mechanisms underlying intelligent behaviour as well, and thus they commit themselves to the view that explanation in neuroscience is mechanistic. However, as I will show in this work, the explanations they endorse are faulty – they rely on factors which are not explanatorily relevant in the mechanistic framework. On the other hand, current non-representational explanatory frameworks usually shun mechanistic explanations because of the association between mechanisms and representationalism. Breaking this association, as I do in this work, redraws the boundaries between sides in the debate about the nature of explanation in cognitive neuroscience. Thinking of explanation as uncovering mechanisms for intelligent behaviour does not support representationalism, but non-representationalism. And mechanistic explanation is not antithetical to other non-representational approaches but can be used in conjunction with them in order to better understand the subject matter of cognitive neuroscience.

This book is divided into 10 chapters. In the following two chapters, I introduce the mechanistic theory of explanation in more detail, defining the terms and comparing it to other theories of explanation. Chapter 2 defines what phenomena and mechanisms are, how they relate, and how mechanisms are discovered. Chapter 3 gives an account of the standards to which mechanistic explanations are held. The purpose of these chapters is to show in detail how mechanistic explanations are constructed and to motivate the thought that mechanistic explanations are the right tool for explaining intelligent behaviour.

Next, I argue for the main thesis of this work – that representational contents are not an explanatorily relevant factor in good mechanistic explanations of intelligent behaviour. Chapter 4 introduces representationalism in more detail, giving concrete examples of representational explanations in neuroscience, and presents the argument formally. Chapters 5, 6, and 7 then go through a number of variants of representationalism, distinguished by the way in which they characterise the relation between representational vehicles and their contents, and prove the crucial premises of the argument for each variant.

In Chaps. 8 and 9, I consider two objections to my main thesis. One of the objections (the dual-explananda defence) argues for a separation of concerns between mechanistic explanations and representational explanations. According to this objection, mechanistic explanations are appropriate for certain questions about intelligent behaviour but cannot account for all the facts we might wish to explain about intelligent behaviour. For these additional facts, so the objection goes, representational explanations are still the best bet. I, on the contrary, show that mechanistic explanations can handle these further facts about intelligent behaviour at least as well as representational explanations. The other objection is the pragmatic necessity defence, according to which representational contents are, strictly speaking, explanatorily irrelevant, but explanations which do refer to them are pragmatically superior to pure mechanistic explanations, which justifies continuing the practice of including representational contents in explanations. In response to this objection, I argue that the supposed pragmatic benefits of including representational contents in explanations are either outweighed by pernicious effects, or do not decide the issue one way or another.

In Chap. 10, I flesh out in more detail the takeaways for philosophy of cognitive neuroscience – both for representationalism and non-representationalism. I also present a number of avenues for future research based on this work.

References

Chemero, A. (2009). Radical embodied cognitive science. MIT Press.
Hutto, D., & Myin, E. (2013). Radicalizing enactivism. MIT Press.
Hutto, D., & Myin, E. (2017). Evolving enactivism. MIT Press.
Hutto, D., & Satne, G. (2015). The natural origins of content. Philosophia, 43(3), 521–536. https://doi.org/10.1007/s11406-015-9644-0
O'Regan, K. J., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939–973. https://doi.org/10.1017/S0140525X01000115
Port, R. F., & van Gelder, T. (Eds.). (1995). Mind as motion: Explorations in the dynamics of cognition. MIT Press.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.

Chapter 2: The New Mechanistic Theory of Explanation: A Primer

Abstract: In this chapter, I introduce the new mechanistic framework of explanation, focusing in particular on constitutive mechanistic explanation. I evaluate the accounts of mechanistic constitution derived from Craver's influential mutual manipulability account. I argue that the horizontal surgicality account due to Baumgartner, Casini and Krickel is compatible with the reciprocal manipulability account due to Krickel, and that together they can be used to evaluate claims about mechanistic constitution. Furthermore, I review the established accounts of how mechanistic models are developed and compare the mechanistic framework of explanation with other theories of scientific explanation, such as the covering-law model, unificationism and older causal theories of explanation.

Keywords: Mechanistic constitution · Mutual manipulability · Horizontal surgicality · Mechanism discovery · Mechanistic explanation

In this chapter, I introduce the key aspects of the new mechanistic theory of explanation. In Sect. 2.1, I introduce the main concepts connected with the theory: mechanisms, phenomena, and mechanistic constitution. In Sect. 2.2, I briefly introduce the practices leading to the formulation of mechanistic explanations, and finally in Sect. 2.3, I compare the mechanistic theory of explanation to other established theories of explanation. This chapter, together with the following chapter, forms the necessary background for formulating the main argument of this book in Chap. 4.

Consider the following example of mechanistic explanation: How do the kidneys filter blood? First, blood enters the kidney through the renal artery. This artery branches into many smaller blood vessels, leading blood into the individual nephrons, which are the functional units of the kidney. Each nephron consists of two parts: the glomerulus, which is a cluster of tiny blood vessels with very thin walls, and the tubule. Smaller molecules pass through the thin walls of capillaries in the glomerulus into the tubule, while larger molecules continue out of the glomerulus to the renal vein and back into the body. Small capillaries run along the tubule and selectively reabsorb minerals, glucose, and most water. The selective absorption is achieved by gated receptors in the cell walls of the tubule and the peritubular capillaries. The capillaries re-join the renal vein and bring the reabsorbed nutrients back to the body. The remaining material in the tubule continues into the ureter and is excreted from the body as urine.

This is how the example explanation works: The phenomenon of interest (here blood filtration) is explained by decomposing the system (the kidney) into parts and identifying how the parts contribute to the occurrence of the phenomenon. So, we learn that the renal artery leads blood into the kidney, the glomerulus retains large molecules while allowing smaller ones to pass, and the receptors in the peritubular capillaries reabsorb minerals and nutrients. Explanations like this are ubiquitous in biology and other special sciences. The next section goes into more detail about the concepts of mechanisms, phenomena, and the relation between them.

2.1 The Concept of Mechanism: Mechanisms, Phenomena, and Constitution

The new mechanistic theory of explanation (new mechanism, or neo-mechanism, for short) aims to describe the explanatory practices in the life sciences and provide a normative framework for evaluating purported cases of explanation resulting from these practices. The core of the new mechanistic account is the concept of a mechanism. Mechanisms are sets of entities (e.g., renal artery, glomerulus . . .) and activities (filtration, reabsorption . . .) which are organised in a certain way to produce, or bring forth, a phenomenon (e.g., blood filtration). The organised operation of the mechanism is then thought to explain the phenomenon for which the mechanism is responsible. The characterisations of mechanisms in the extant literature closely resemble each other:

    Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions. (Machamer et al., 2000, p. 3)

    A mechanism is a structure performing a function in virtue of its component parts, component operations, and their organization. The orchestrated functioning of the mechanism is responsible for one or more phenomena. (Bechtel & Abrahamsen, 2005, p. 423)

    A mechanism for a phenomenon consists of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon. (Glennan, 2017, p. 18)

The exact scope of the terms entity and activity in the preceding definitions is subject to some debate in the metaphysics of mechanisms (Krickel, 2018b), but the intuitive (and working) characterisation of the two is as follows: Entities are the objects which make up the mechanism. Entities can be of varying sizes and can occupy various locations within the mechanism. Activities are what the entities do: the processes in which entities are involved. Activities take time; they occur at certain rates throughout the operation of the mechanism, or they occur at specific times or timespans. While the metaphysicians debate the exact scope of the terms, working scientists are generally quite good at identifying particular entities and activities. Some of the entities in the blood-filtration mechanism sketched above include the glomeruli, the nephrons, the tubule and the ureter. Some of the activities include entering, passing through walls, reabsorbing and selectively absorbing.

Entities and activities can intuitively be easily distinguished, and some authors argue that the duality of entities and activities is crucial to the mechanistic framework (Machamer, 2004). However, the fact remains that entities and activities are closely connected. It is the entities that perform the activities in the mechanism. This has led authors to refer to "acting entities" as mechanism components (Craver, 2007). Krickel (2018b) defends a similar position, but argues that aside from activities, we should consider the entities' states as potentially explanatorily relevant. Furthermore, any properties the entities might possess are, according to Krickel, to be analysed as similarities in state between various token entities. As such, they too may be explanatorily relevant. States together with activities fall under the heading of "occurrents". One important characteristic of occurrents is that they have distinct temporal parts. A canonical example of an occurrent is a football game. The football game begins at a particular time, it lasts for 90 min, and it can be divided into the first and the second half, as well as many finer-grained temporal parts. Occurrents of interest to neuroscience likewise have temporal parts. For example, an action potential can be divided into the depolarisation phase, the repolarisation phase, and the refraction phase, as well as more fine-grained, perhaps arbitrarily chosen temporal parts. Krickel thus arrives at so-called "entity-involving occurrents" as the sole category of mechanism components – these exhibit the characteristics of both entities and occurrents. Mechanisms are then collections of entity-involving occurrents. I will adopt Krickel's ontological view but insist that in the context of explanation the distinction between entities and occurrents should be preserved. This is because explaining particular features of the phenomenon might require highlighting either the entity-like or the occurrent-like characteristics of an entity-involving occurrent. This view has a terminological consequence which deserves highlighting: since mechanisms are composed of entity-involving occurrents, I will often speak of mechanisms occurring. I will use the term system to refer to a more or less stable structure which exhibits one or more phenomena. Systems differ from mechanisms in that they are not occurrents. Systems are made up of entities, not entity-involving occurrents. When a system exhibits a phenomenon, the mechanism for the phenomenon occurs in the system.

Organisation refers to the spatial arrangement of the entities involved in the mechanism and the temporal arrangement of the activities involved in the mechanism (Machamer et al., 2000, p. 3; Craver & Darden, 2013, p. 20). For an example of how spatial organisation is important, consider how the peritubular capillaries circle around the tubule. If the capillaries were further away from the tubule, reabsorption would cease, and blood filtration would stop. Temporal organisation is likewise crucial. During reabsorption, different sections of the peritubular capillaries are responsible for reabsorbing particular substances. If the order were switched, mechanisms responsible for subsequent steps could not function.

The reference to mechanisms responsible for subsequent steps in the blood-filtration mechanism alerts us to a further important feature of mechanisms, namely, the fact that mechanisms nest. The mechanism for blood filtration is composed of filtration in the glomerulus and reabsorption in the tubule. We can now zoom in on one of the steps – reabsorption. Reabsorption is a phenomenon worthy of explanation in its own right, and there is a mechanism for it, including the gated receptors in the walls of the capillaries mentioned above. This nesting can continue through multiple levels,[1] until it bottoms out (see also Kohár & Krickel, 2021).

[1] The concept of levels used here and in much literature on mechanisms is relatively weak. Mechanistic levels do not map onto "levels of nature" in any substantial sense. See Craver (2015).
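The nesting of mechanisms lends itself to a simple recursive picture. The following sketch is my illustration, not the author's: it encodes the blood-filtration example as a toy tree in which every component can itself be treated as an explanandum phenomenon with its own mechanism, until the decomposition bottoms out. The class name and the chosen depth of decomposition are invented for the example.

    from dataclasses import dataclass, field

    @dataclass
    class Phenomenon:
        """A phenomenon together with the components constituting its mechanism.

        Each component is itself a Phenomenon, reflecting the point that any
        component can be re-described as an explanandum and decomposed further,
        until the nesting bottoms out in components with no known sub-mechanism.
        """
        name: str
        components: list = field(default_factory=list)

        def bottoms_out(self) -> bool:
            return not self.components

    # Toy decomposition of the blood-filtration example (illustrative only).
    blood_filtration = Phenomenon("blood filtration", [
        Phenomenon("filtration in the glomerulus", [
            Phenomenon("small molecules passing through capillary walls"),
        ]),
        Phenomenon("reabsorption in the tubule", [
            Phenomenon("gated receptors admitting minerals, glucose and water"),
        ]),
    ])

    def decompose(p: Phenomenon, depth: int = 0) -> None:
        """Print the mechanistic hierarchy, one level of nesting per indent."""
        print("  " * depth + p.name)
        for c in p.components:
            decompose(c, depth + 1)

    decompose(blood_filtration)

Running decompose prints the hierarchy with reabsorption appearing both as a component of blood filtration and as a phenomenon with its own component, which is exactly the iterative decomposition described above.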

While the concept of a mechanism is relatively well understood, the related concept of the explanandum phenomenon requires some elucidation. The seminal work on phenomena as explananda in science (Bogen & Woodward, 1988) predates the first formulations of new mechanism in the early 90s. Bogen and Woodward distinguished phenomena from data and argued against the then current view that the role of scientific theories is to explain data or observations. According to Bogen and Woodward, singular observations do not call for explanation. Rather, systematic observation uncovers patterns in the data. From these patterns, investigators infer to regular occurrences, or repeatable behaviours of observed systems. These repeatable behaviours are the phenomena which need to be explained. For instance, systematic observation of the night sky uncovers, among other things, that the position of certain objects (planets) generally changes in a particular direction from night to night. Sometimes, however, the position of the planets in the night sky changes in the opposite direction. This set of observations uncovers two phenomena – prograde and retrograde motion of the planets. An adequate theory of celestial mechanics must account for these phenomena, not for each and every observation.

As we can see, Bogen and Woodward's argument is independent of any particular theory of explanation, though it does constrain the scope of permissible theories. In particular, following Bogen and Woodward, explananda must be mind-independent and repeatable. These features are generally reflected in the characterisations of mechanisms from the literature. Machamer, Darden, and Craver, for instance, define mechanisms as "productive of regular changes from start or set-up to finish or termination conditions" (Machamer et al., 2000, p. 3), which implies that the operation of the mechanism is regular, and the phenomenon repeatable. An exception is Glennan (2017), whose minimal characterisation of mechanisms does not have a regularity condition, and who explicitly allows for non-repeatable ("ephemeral") mechanisms. I will follow Glennan in dropping the repeatability condition for the following reasons: (1) singular events have traditionally been thought of as appropriate targets for explanation (Hempel, 1965); (2) multiple viable classifications of singular events into types are possible, some of which cross-cut each other. Since multiple viable classifications exist, scientists need to choose a classification scheme. This suggests that type-identification is mind-dependent. (3) It is possible to infer to singular events on the basis of multiple data-points. Thus, the possibility of singular phenomena is not excluded by Bogen and Woodward's contention that singular data-points are not explananda.

Within the new mechanistic literature, there is an orthogonal discussion about the ontological status of phenomena. Opinion is divided along several fault-lines. The first distinction can be made between theorists who view phenomena as a unified category, and those who accept that things of many different categories can be explanandum phenomena for mechanistic explanation. In the first camp, Krickel (2018b; see also Kaiser & Krickel, 2017) maintains that phenomena, like mechanism components, are entity-involving occurrents. Krickel argues that this position follows from the hierarchical nature of mechanisms. If mechanism components can be further decomposed into lower-level components, then they must be of the same ontological category as phenomena. Given that there is no privileged highest level of ultimate explananda, phenomena can also combine to produce mechanisms for a higher-level phenomenon (Bechtel, 2009). Craver (2007), on the other hand, maintains that phenomena are input-output pairings, although in some places he writes of phenomena as acting entities, which would closely resemble the view put forward by Krickel. Glennan (2017) views phenomena as behaviours of mechanisms, or perhaps behaviours of systems which contain mechanisms. The pluralist side of the debate maintains that all sorts of beings can be phenomena. The most thoroughgoing version of this view can be attributed to Bechtel (2008), who explicitly endorses Bogen and Woodward (1988). Since Bogen and Woodward's contribution does not concern the ontology of phenomena directly, they are very permissive, counting objects, events, functions, dispositions, or property instantiations as possible candidates, if it is possible to infer their presence on the basis of observational data. In this work, I will adopt Krickel's position on the ontology of phenomena, because it best captures the possibility of iterative decomposition, where a component may be considered a phenomenon for the purposes of probing lower levels of the mechanism.

Having clarified the concepts of mechanism and phenomenon, we should next consider the relation which holds between them. The definitions of mechanisms from the literature cited above are not very specific. Bechtel and Abrahamsen (2005), as well as Glennan (2017), write of mechanisms being "responsible" for the phenomena. Machamer et al. (2000) write that mechanisms are "productive of regular changes from set-up to termination conditions", where these regular changes are meant to be the explanandum phenomena. One reason for this lack of specificity in the definition is that there are two distinct ways in which mechanisms and phenomena can be related. In some paradigm cases of mechanistic explanation, the mechanism causes the phenomenon to occur. In other paradigm cases, the mechanism constitutes the phenomenon. These two options correspond to two distinct modes of mechanistic explanation.[2]

[2] A third type of mechanism – homeostatic mechanisms, which maintain a phenomenon – is sometimes distinguished (Craver & Darden, 2013).

Etiological mechanistic explanations explain by reference to the mechanism which causes the phenomenon to manifest. A standard example is the mechanistic explanation of neurotransmitter release. First, an action potential is propagated to the axon terminal. This causes depolarisation of the membrane at the axon terminal, the opening of the voltage-gated Ca2+ channels, and finally the release of the neurotransmitter into the synaptic cleft. Constitutive mechanistic explanations, on the other hand, explain phenomena by reference to mechanisms which constitute the phenomenon. The blood-filtration example from the beginning of this chapter is an example of constitutive mechanistic explanation. The difference between the two lies in the fact that etiological mechanisms occur before the phenomenon manifests, while constitutive mechanisms occur at the same time as the phenomenon manifests. Another difference concerns the hierarchical relationship between mechanisms and phenomena. Only constitutive mechanistic explanation decomposes the phenomenon into lower-level components. Because of this, it is fair to say that the notion of a constitutive mechanism is the core contribution brought about by the new mechanistic theory. Etiological explanation has been studied extensively before the advent of new mechanism by authors such as Salmon (1984) and Woodward (2003). The relation between the new mechanistic theory of explanation and the causal theory of explanation will be briefly summarised in Sect. 2.3.

In what sense does the mechanism constitute the phenomenon? Mechanistic constitution is a part-whole relation – the entity-involving occurrents making up the constitutive mechanism are proper parts of the phenomenon. However, mechanistic constitution is more demanding than simple parthood. Not all parts of the phenomenon are mechanism components. For example, blood filtration in the kidneys is a spatiotemporal part of my philosophising. Nevertheless, blood filtration is not part of the mechanism for philosophising. Mechanisms and phenomena must, in addition to proper parthood, stand in a further mutual-dependence relation of some sort. The best-developed accounts of this relation are based on Craver (2007, Ch. 4).

Craver's account of mechanistic constitution is based on two conditions: (a) proper parthood, also referred to as locality (Illari & Williamson, 2011); and (b) mutual manipulability between the phenomenon and mechanism components. In Craver's own words, a component (x's φ-ing) is constitutively relevant to the phenomenon (S's ψ-ing) iff:

    (i) (Locality): x is part of S;
    (ii) (Bottom-up manipulability): in the conditions relevant to the request for explanation there is some change to x's φ-ing that changes S's ψ-ing;
    (iii) (Top-down manipulability): in the conditions relevant to the request for explanation there is some change to S's ψ-ing that changes x's φ-ing. (Craver, 2007, p. 153, formatting mine)

Craver's locality condition has standardly been amended in the literature to read:

    (Locality*): x's φ-ing is part of S's ψ-ing,

to highlight the need for constituent mechanisms to co-occur with the explanandum phenomena (Krickel, 2018b).

As we can see, mutual manipulability is based on the idea that if a part of the system belongs to the mechanism for the explanandum phenomenon, one should be able to manipulate the phenomenon by manipulating the candidate component part and vice versa. Craver imposes further conditions on the kind of manipulation required for establishing constitutive relevance. Craver's conditions correspond closely to Woodward's (2003) conditions for ideal interventions. For bottom-up manipulability the conditions are as follows:

    (I1c) the intervention I does not change ψ directly;
    (I2c) I does not change the value of some other variable φ* that changes the value of ψ, except via the change introduced into φ;
    (I3c) I is not correlated with some other variable M that is causally independent of I and also a cause of ψ; and
    (I4c) I fixes the value of φ in such a way as to screen off the contribution of φ's other causes to the value of φ. (Craver, 2007, p. 154)

Here, I is a variable representing the intervention itself, ψ is a variable representing the phenomenon, and φ is a variable representing a candidate component.
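Putting the amended account together in a single formula may help; the notation below is mine, not Craver's or Krickel's, and it simply abbreviates the conditions quoted above (Locality* plus bottom-up and top-down manipulability via ideal interventions):

    % Compact restatement of the amended mutual manipulability account.
    % Notation mine: \sqsubset = spatiotemporal proper parthood;
    % CR = "is constitutively relevant to"; I, I' are ideal interventions.
    \[
      \mathrm{CR}(x\varphi,\ S\psi) \iff
        x\varphi \sqsubset S\psi
        \;\wedge\; \exists I \,\bigl[\, I \text{ ideally intervenes on } x\varphi
            \text{ and thereby changes } S\psi \,\bigr]
        \;\wedge\; \exists I' \,\bigl[\, I' \text{ ideally intervenes on } S\psi
            \text{ and thereby changes } x\varphi \,\bigr]
    \]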

where I is a variable representing the intervention itself, ψ is a variable representing the phenomenon, and φ is a variable representing a candidate component. An equivalent set of conditions is supposed to obtain for top-down interventions, swapping φ for ψ and vice-versa throughout. In our blood-filtration example, we can establish that, for instance, the reabsorption of water by peritubular capillaries is constitutively relevant for blood-filtration because the reabsorption takes place in the kidney at the same time as blood-filtration, and there are ways of manipulating the reabsorption by manipulating the filtration and vice versa. However, the mutual manipulability criterion as formulated by Craver has attracted a number of criticisms in recent years. Leuridan (2012) has pointed out that the mutual manipulability criterion implies that mechanistic constitution is a causal relation contra Craver’s stated view (Bechtel & Craver, 2007). This follows if we assume Woodward’s interventionist account of causation, as mechanists, including Craver, often do. On Woodward’s account, one-way manipulability through an ideal intervention establishes causal relevance. Following from this, each of conditions (i) and (ii) establishes a causal link between x’s φ-ing and S’s ψ-ing. If (i) is satisfied, then x’s φ-ing causes S’s ψ-ing. On the other hand, if (ii) is satisfied, then S’s ψ-ing causes x’s φ-ing. If both are satisfied, then there is reciprocal causation between the phenomenon and its components. However, Craver and other mechanists are committed to the view that mechanistic constitution is non-causal.3 Another problem with the original mutual manipulability account has been discussed independently by Baumgartner and Gebharter (2016) and Romero (2015). This is the so-called fat-handedness problem. The fat-handedness problem arises because the four criteria for ideal interventions cannot be satisfied when intervening on phenomena and mechanistic components. Since phenomena supervene on mechanisms, one cannot intervene on a phenomenon without thereby intervening on at least some of the components. Thus, in a top-down intervention trying to manipulate x’s φ-ing by intervening on S’s ψ-ing, conditions I1c and/or I2c are always violated. If the intervention sets S’s ψ-ing by directly setting x’s φ-ing, it violates Ic1 because in a top-down intervention x’s φ-ing should be changed mediately through changing S’s ψ-ing. If the intervention changes S’s ψ-ing by

3

Leuridan himself argues that this view should be reconsidered.

14

2 The New Mechanistic Theory of Explanation: A Primer

changing some other component x*, then I2c is violated, because it is unclear whether it is the change in S's ψ-ing or the change in x* which is responsible for the observed change in x's φ-ing. Interventions on phenomena and mechanisms are therefore always fat-handed – they target more than one relevant variable.

As an example, consider the phenomenon of a car travelling. Suppose we wish to know whether the movement of the pistons in the engine is a component in the mechanism for this phenomenon. We intervene on the car's travelling top-down by pressing the throttle and observe that as the car begins to move, so do the pistons. Since pressing the throttle changes the phenomenon, it must also change at least one component in the mechanism. One option is that it directly manipulates the movement of the pistons, in which case I1c is violated. This happens not to be the case here. The other option is that it changes some other component of the mechanism. Here, pressing the throttle increases the amount of fuel injected into the engine, which in turn leads to faster movement of the pistons. Hence, condition I2c is violated.

In response to the fat-handedness problem, defenders of interventionist approaches to mechanistic constitution have introduced variants of the original mutual manipulability account. While the locality criterion remains an integral part of all the variant accounts of mechanistic constitution, the mutual manipulability criterion is either amended or replaced by a different kind of mutual dependence relation.

The first variant is usually credited to Woodward (2015), although his paper was not concerned with mechanistic constitution directly. Woodward (2015) loosens the conditions on ideal interventions so that interventions on a variable X may also affect the supervenience base of X and still be considered ideal. Let us call these interventions ideal*. If mechanists adopt ideal* interventions as the basis of a mutual manipulability account of constitution, the fat-handedness problem is resolved. Interventions into mechanisms are fat-handed because they necessarily affect both the candidate component and the phenomenon. However, because phenomena supervene on mechanisms, this fat-handedness does not mean that the interventions are not ideal*.

The problem with this easy fix is that in applying the mutual manipulability test we cannot presuppose that the candidate component is part of the mechanism for the phenomenon. After all, this is exactly what we want to establish. But if we cannot presuppose that x's φ-ing is part of the mechanism for S's ψ-ing, then we cannot presuppose that it is part of the supervenience base for S's ψ-ing. And this presupposition is vital in determining whether the intervention into x's φ-ing is ideal*.

Another variant, due to Baumgartner and Gebharter (2016), actually leverages the pervasive fat-handedness of interventions into mechanisms. On this approach, we can defeasibly infer constitution from a set of interventions on the phenomenon, because every intervention on the phenomenon changes at least one component of the mechanism. Thus, if we see that a spatiotemporal part of the system which exhibits the phenomenon is affected by a number of different interventions on the phenomenon, we can reasonably conclude that it is one of the components. Similarly, if intervening on a phenomenon seems to require intervening
on a particular spatiotemporal part of the system exhibiting it, then we may reasonably infer that this part is one of the components. An abductive theory of constitution along these lines is elaborated by Baumgartner and Casini (2017). However, abductive inferences to constitution are always defeasible, and without further specification they cannot reliably distinguish constitution from other dependence relations, including causal relations.

In response to that problem, Kästner (2017) advocates dropping the notion of constitution in favour of a more general notion of difference-making relations in analysing mechanisms. However, this conclusion is unsatisfactory, because constitution is different from causation and other difference-making relations even on Kästner's picture. These different types of difference-making relations get lumped together only because of the epistemic difficulty of keeping them apart on the basis of fat-handed interventions alone.

In this work, I will combine two complementary accounts of the mutual dependence between mechanisms and phenomena developed in recent years. The first is due to Baumgartner, Casini and Krickel (2020). Their so-called horizontal surgicality account circumscribes a class of fat-handed interventions which do allow inference straight to constitution, as opposed to an unspecified kind of difference-making relation, as other fat-handed interventions do. On Baumgartner, Casini and Krickel's account, interventions permit inference to constitution if they fulfil two criteria. The first criterion is that the intervention changes both the phenomenon and the candidate component simultaneously. The simultaneity of the change in the phenomenon and the candidate component ensures that there is no causal relationship between the two variables, as causation takes time. The second criterion is that the intervention be horizontally surgical. An intervention is horizontally surgical iff it is the direct cause of only one variable at each level. However, care must be taken to avoid circularity: levels of the mechanistic hierarchy only relate phenomena and their components, so we cannot presume a mechanistic hierarchy in defining horizontal surgicality. This problem is avoided by defining horizontal surgicality with respect to levels of direct proper parthood (Eronen, 2013).

As an example of a horizontally surgical intervention which changes the component and the phenomenon simultaneously, consider intervening on a car's front left wheel to affect its movement. As soon as the wheel is blocked, the car undergoes deceleration; additionally, it simultaneously starts skidding to the left. Notice that this intervention is by design horizontally surgical – it targets the single wheel and no other parts of the car distinct from the wheel.

The horizontal surgicality account concedes that ideal interventions into mechanisms are impossible but aims to disqualify the same set of interventions as the original mutual manipulability account. These objectionable interventions are those which cannot distinguish between effects on a component and effects on a causally related confounding variable. Such interventions are still disqualified by horizontal surgicality, because causally related confounds are on the same level as the components which they confound. Therefore, an intervention which targets only one variable per level cannot be confounded by causally related confounding variables.

Put another way, the dependence relation between mechanisms and phenomena which makes interventions into phenomena fat-handed is not problematic – it is precisely this relation (mechanistic constitution) that we want to capture. This relation is characterised as obtaining across levels. Therefore, interventions which are fat-handed across levels are not problematic. Only interventions which are fat-handed on one level are problematic.
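The level-counting at the core of horizontal surgicality can be pictured schematically. The sketch below is purely illustrative: the variable names and level assignments are invented, and it presupposes that each directly targeted variable has already been assigned a level of direct proper parthood.

```python
# Illustrative sketch: an intervention is horizontally surgical iff it is the
# direct cause of at most one variable per level of direct proper parthood.
from collections import Counter

def horizontally_surgical(direct_targets):
    """direct_targets: (variable, level) pairs directly caused by the intervention."""
    per_level = Counter(level for _, level in direct_targets)
    return all(count == 1 for count in per_level.values())

# Blocking the front left wheel directly targets a single L-1 variable; the
# simultaneous change in the car's movement (L0) is an effect, not a direct target.
print(horizontally_surgical([("front_left_wheel", -1)]))                 # True
print(horizontally_surgical([("front_left_wheel", -1), ("brake", -1)]))  # False
```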


In this work, I will use the horizontal surgicality criterion as one way to diagnose cases of purported mechanistic constitution. However, I will make one friendly amendment to the original account. The amendment concerns Baumgartner, Casini and Krickel's requirement that interventions be surgical on all levels of the hierarchy. This condition should be changed to read: on all levels higher than or equal to that of the candidate component. The reason for this change is that the original condition implausibly restricts the set of allowed interventions. Baumgartner, Casini and Krickel subscribe to the principle of universal constitution, according to which the mechanistic hierarchy goes all the way down to elementary physical building blocks such as leptons and quarks. If we were to insist on horizontal surgicality on all levels of the hierarchy, then only those interventions which target single elementary particles would be allowed. The problem here is not just that such interventions are impracticable – after all, interventions need only be metaphysically possible in order to establish constitution. The problem is rather that such interventions are likely to be inert with respect to any macro-level phenomena. The effects of changes on the elementary level are screened off because the higher levels are multiply realisable. Only non-surgical interventions on elementary particles have any chance of affecting higher-level phenomena.

My amendment recognises that in order to intervene on a mechanism component x at level L-1, it might be necessary to simultaneously intervene on many variables representing EIOs at, for example, level L-3. However, as long as none of the affected EIOs on L-3 is a component in another L-1 EIO, this intervention should be allowed. If all the L-3 EIOs affected by the intervention are components of the candidate component x, then this intervention's eventual effect on the phenomenon is due to its effect on x. In some, perhaps all, cases, at least one of the affected L-3 EIOs will only be a background condition or a sterile effect of x. However, even that should not disqualify the intervention, unless such EIOs are at the same time components of some other EIO at L-1. In other words, if some of the affected L-3 EIOs have an effect on the phenomenon independent of their role in constituting x on L-1, this effect will be propagated by some EIO on L-1 other than x. Therefore, if the intervention is surgical on L-1 and L0, we can conclude that x partly constitutes the phenomenon. If we were instead probing for components on L-3, we would need to make sure that we are not disturbing more than one component per level on L-3 through L0.

The second complementary account of mutual dependence is due to Krickel (2018b). Krickel attempts to solve the problems with Craver's mutual manipulability account by defining constitution as a two-way causal relation between the component and some temporal part of the phenomenon. While Krickel herself considers her account a defence of Craver's mutual manipulability account with some modifications, I will refer to it as reciprocal manipulability in order to avoid confusion.


The idea that phenomena have temporal parts follows directly from their characterisation as EIOs. An occurrent, like a football game, occurs through time and has various temporal parts – the first half, the second half, the time between the first goal and the first substitution, etc. Similarly, phenomena as EIOs have temporal parts. For instance, a rat's navigating a maze consists of the time the rat is sniffing around, the time it is running along the maze's arm, and many smaller time slices besides.

On Krickel's account, spatiotemporal parts of the phenomenon are mechanism components if the following holds: (a) there is an ideal* intervention on the candidate component which causes a change in a temporal part of the phenomenon which does not occur at the same time as the candidate component, and (b) there is an ideal* intervention on a temporal part of the phenomenon which does not occur at the same time as the candidate component which causes a change in the candidate component. This is the reciprocal manipulability criterion.

As an example, consider the case of a rat navigating a maze and the activity in the rat's cerebellum. Reciprocal manipulability here requires that an intervention on a preceding temporal part of the rat's navigating affect the cerebellar activity. One such intervention could be placing the rat in the maze in the first place. Upon placing the rat in the maze, the rat's navigating behaviour will commence, and the activity in the cerebellum will increase. Furthermore, there are interventions on the cerebellar activity which affect later temporal parts of the rat's navigation. Should the cerebellum be inactivated in the course of the navigation (e.g., by optogenetic manipulation), the rat will subsequently become confused.

There are two worries with respect to Krickel's account. Firstly, notice that the account does rely on Woodward's ideal* interventions. One might worry that Krickel's account, just like the Woodward-inspired account above, must presuppose that the candidate component is part of the supervenience base of the phenomenon, which would already presuppose that the constitution relation obtains between the two. However, this is not the case. The interventions involved must be ideal* interventions because intervening on a temporal part of the phenomenon requires intervening on some component of the phenomenon. Thus, we must presuppose that constitution obtains between the phenomenon and some component, but not that it obtains between the phenomenon and the candidate component we are investigating. In fact, because the temporal part of the phenomenon and the candidate component do not overlap temporally, we can surmise that the temporal part of the phenomenon supervenes on a different component. It is this different component which is fat-handedly manipulated by the ideal* intervention.

Secondly, one might worry that Krickel's account blurs the distinction between constitution and causation. This worry arises because the relations between the components and the temporal parts of the phenomenon probed by the ideal* interventions are causal relations. So, an opponent might say, either Krickel is confusing constitution with causation, or she is denying the principle that causes and effects are wholly distinct, by allowing causation between parts and wholes. However, both of these issues can be addressed. Constitution is a relation between phenomena as wholes and components.
The causal relations relied on to establish constitution only hold between components and temporal parts of phenomena. Thus,
constitution is still different from causation, despite the fact that we use one to infer the other. Furthermore, Krickel's account does not violate the principle that causes are wholly distinct from their effects. This is because the temporal parts of the phenomenon which cause or are caused by the component occur at different times than the component. See Fig. 2.1 for an illustration.

Fig. 2.1 The phenomenon is constituted by components x1, x2, and x3. I1 shows the influence of an intervention on an earlier temporal part of the phenomenon on x1. I2 shows the influence of an intervention on x1 on a later temporal part of the phenomenon

In order to analyse the relation of mechanistic constitution, I will use both Baumgartner, Casini and Krickel's horizontal surgicality account and Krickel's own reciprocal manipulability account. Both of these accounts are intended to provide a test which, along with Locality, is jointly sufficient for constitution. So, if a local candidate component EIO passes either the simultaneous manipulability by horizontally surgical interventions test or Krickel's reciprocal manipulability test, we can conclude that it is a component. I will deviate from the intentions of the original authors and additionally require that a component must pass at least one of these tests. That is, in my view, any component is either simultaneously manipulable with the phenomenon by a horizontally surgical intervention or reciprocally manipulable with the phenomenon in Krickel's sense. The reason for adopting this view is that the two tests taken together cover all standard cases of mechanistic constitution in the literature. It seems, then, that if a candidate component passes neither test, those insisting that it is nevertheless a component should show what makes it one.

To recapitulate, the following relation holds between phenomena and components of mechanisms underlying said phenomena:

(Mechanistic Constitution) x's φ-ing is a component of S's ψ-ing iff:
(Locality) x's φ-ing is a part of S's ψ-ing, and
(Mutual dependence) x's φ-ing and S's ψ-ing are mutually dependent


Mutual dependence between phenomena and mechanisms requires that:

(Reciprocal manipulability) x's φ-ing and S's ψ-ing are reciprocally manipulable, or
(Horizontal surgicality) x's φ-ing and S's ψ-ing can be simultaneously manipulated by a horizontally surgical intervention
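Taken together, the criteria have the structure of a simple decision procedure. The following sketch (in Python, used here purely as notation) is only an illustration of that logical structure: the three predicates are stand-ins for the empirical tests described above, not implementable definitions.

```python
# Illustrative sketch only: locality, reciprocal_manipulability and
# horizontal_surgicality are placeholders for the empirical tests in the text.

def is_component(x_phi, s_psi, locality,
                 reciprocal_manipulability, horizontal_surgicality):
    """x's phi-ing is a component of S's psi-ing iff Locality holds and
    at least one of the two mutual dependence tests is passed."""
    if not locality(x_phi, s_psi):
        return False
    return (reciprocal_manipulability(x_phi, s_psi)
            or horizontal_surgicality(x_phi, s_psi))
```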

2.2 Constructing Mechanistic Models

Now that the basic conceptual framework of new mechanism is in place, let me turn to the question of discovering mechanisms for phenomena and constructing mechanistic models of phenomena. In this section, I introduce three important aspects of the process that leads to the construction of mechanistic models:

(a) decomposition and localisation (Bechtel & Richardson, 1993)
(b) schema articulation (Craver, 2007)
(c) inter-level experiments (ibid.)

These three are aspects of the same process and work in tandem. The accounts surveyed here differ in focus rather than in substance, aside from limited tensions which I will highlight in due course.

Decomposition and localisation, according to Bechtel and Richardson, are heuristic procedures which scientists use to understand the behaviour of complex systems. In short, the system is assumed to be decomposable or nearly decomposable. Decomposability is a technical term introduced by Simon (1969) and refers to the scale and character of interactions between the system's parts. The decomposability of a system determines the way in which the behaviour of the parts influences the behaviour of the entire system. In a fully decomposable system, there is no interaction between the parts – instead, the parts, if they act at all, are completely driven by their internal dynamics. A heap of sand is the paradigm example of full decomposability – individual grains of sand do not interact, they can be added or removed at will, and they only influence the properties of the system in an orderly linear fashion. For instance, the mass of the heap is simply the sum of the masses of the individual grains of sand in the heap.

In a nearly decomposable system, the parts interact, but only in a linear fashion. The activity of one part is just an input for the activity of the next part in line. There are no feedback loops or couplings, so studying a part in isolation from the system will yield reliable information on how the part behaves in the system. According to Bechtel and Richardson, most biological systems are not decomposable in this way, but assuming that they are and modelling them on that basis serves as a heuristic – even if the assumption is wrong, systematic failures of the linear model will yield understanding of the way in which the system is actually organised.
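The contrast between these two grades of decomposability can be put in miniature as follows; the numbers and stages are invented for illustration, with Python again used purely as notation.

```python
# Full decomposability: a system-level property is a plain aggregate of part
# properties; the parts do not interact.
grain_masses = [3, 2, 4, 3]              # masses of individual grains (grams)
heap_mass = sum(grain_masses)            # mass of the heap = sum of the grains

# Near-decomposability: parts interact, but only linearly - each part's output
# is simply the next part's input, with no feedback loops or couplings.
def nearly_decomposable(x, stages):
    for stage in stages:
        x = stage(x)                     # strictly feed-forward
    return x

stages = [lambda x: x + 1, lambda x: 2 * x]
print(heap_mass, nearly_decomposable(3, stages))  # 12 8
```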


The heuristic consists first of functional decomposition. The activity of the whole system (the phenomenon) is analysed into component activities, as if one were developing an algorithm for the task. A classic example is the functional analysis of protein synthesis into transcription and translation. This functional decomposition is driven by behavioural data and pre-existing knowledge about the structure of the system. The next step is localisation, where the component activities identified in the functional decomposition are paired up with parts of the system. In the case of protein synthesis, transcription of DNA into mRNA is performed by RNA polymerases, and translation of mRNA into proteins is performed by ribosomes. As is to be expected, functional decomposition and localisation are interdependent – the system might be divided into different parts depending on which activities the parts are thought to perform based on the results of the functional decomposition. Similarly, failure to localise a hypothesised activity may lead to the development of a new functional analysis. This process of decomposition and localisation continues iteratively, as the model approaches greater empirical adequacy. However, since, according to Bechtel and Richardson, the process depends on the false assumption of near-decomposability, the result must not be mistaken for a true representation of the workings of the system in question.

Craver's sketch-schema-model account of constructing mechanistic models is similar to Bechtel and Richardson's. However, Craver does not view decomposition and localisation as mere heuristics for investigating biological systems. Instead, constructing mechanistic models is the aim of inquiry into biological systems. The mechanistic models actually available to the scientific community, however, come in various degrees of articulation and command various epistemic attitudes from the scientific community. The attitude taken towards a model depends on its level of articulation and its empirical support. Generally, models move from lower to higher levels of articulation and from being put forward as mere hypotheticals to full-blown assent.

The articulation of a model varies on a scale between a mechanism sketch and a complete model, with mechanism schemata forming an intermediate category (Craver, 2006, p. 360). Mechanism sketches contain little information about the entities and activities involved in the mechanism for the phenomenon. Entities are often identified only by functional descriptions, and generic activities, whose exact nature is to be specified by further research, are involved. As details are filled in, the sketch graduates to a mechanism schema, and finally, when all the entities and activities have been filled in, and the spatial and temporal organisation of the mechanism is fully determined, we arrive at a complete mechanistic model.

Models are also distinguished, according to the attitude they command, into how-possibly and how-actually models, with an intermediate category of how-plausibly models (ibid., pp. 361–362). How-possibly models do not purport to represent the mechanism as it is in reality. Rather, they show that a mechanism of a particular type would produce the phenomenon in question. How-possibly models are often mechanism sketches. Numerous how-possibly models of the same phenomenon might be developed at the same time. How-plausibly models incorporate constraints on the admissible types of entities and activities, in order to reflect pre-existing knowledge of the system or of related systems. How-plausibly models can become how-actually models, which purport to represent the actual mechanism
for a phenomenon if they are sufficiently articulated, and if they mesh well with empirical data.

In contrast to Bechtel and Richardson, Craver does allow for the possibility that the mechanistic model accurately represents the mechanism responsible for the phenomenon, largely because he does not think that determining which activities and entities are responsible for the phenomenon depends on the assumption of near-decomposability or on the assumption that the interactions between the components are linear. On the contrary, Craver and Darden (2013) offer numerous techniques for probing more complex kinds of organisation, including causal loops and couplings. This follows from the insight that only the behaviour of parts within a system explains the phenomenon, not the behaviour of parts when decoupled from the system. This holds even if the behaviour of decoupled parts is used to infer their possible behaviours within the system.

Finally, Craver also provides an account of the kinds of empirical data relevant for the articulation of mechanistic models, namely data obtained from inter-level experiments (Craver, 2002). Inter-level experiments are used to probe the constitution relation between the mechanism and the phenomenon. They are divided into two categories corresponding to the two prongs of mutual manipulability in Craver's account of mechanistic constitution. Top-down experiments manipulate the phenomenon by changing the background conditions in which the system operates. The effects of these manipulations on the various parts of the system are then measured. Bottom-up experiments, on the other hand, manipulate specific parts of the system directly. The effects of these manipulations on the phenomenon are then recorded. Orthogonally, inter-level experiments can be subdivided into excitatory and inhibitory experiments. Excitatory experiments increase the activity of a component, or of the system as a whole. Inhibitory experiments, on the other hand, decrease the activity of a component or block its influence on the rest of the system. Putting the two axes together yields a complete classification in Table 2.1.

Table 2.1 Types of inter-level experiments

                 Excitatory                  Inhibitory
  Top-down               Activation experiments
  Bottom-up      Stimulation experiments     Interference experiments

Activation experiments involve comparing the activity of the various parts of the system at rest with their activity during the performance of the phenomenon (Craver, 2002, pp. S92–S93, 2007, pp. 151–152). Many imaging paradigms are instances of this type. The subject's neural activity at rest is measured, then the subject performs a number of trials on some cognitive task (reading, rotating mental images, etc.), and the neural activity at rest is subtracted from the neural activity recorded during the performance of the task. The "leftover neural activity" is then taken to be specific to the task, i.e., it is a candidate component of that phenomenon. More sophisticated versions of such imaging paradigms compare neural activity across a range of tasks, in order to differentiate more closely between activity specific to particular aspects of a complex phenomenon.
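The subtraction logic behind such activation paradigms can be put in a toy calculation; the regions and activity values below are invented for illustration and report no actual data.

```python
# Toy illustration of the subtraction paradigm, with invented activity values.
import numpy as np

rest = np.array([1.0, 0.9, 1.1, 1.0])    # baseline activity of four regions
task = np.array([1.1, 0.9, 2.3, 1.0])    # mean activity during task trials

leftover = task - rest                    # the "leftover neural activity"
candidates = np.flatnonzero(leftover > 0.5)
print(candidates)                         # [2]: region 2 is a candidate component
```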

Top-down manipulability established through activation experiments suggests constitution but is not sufficient, since the change in activity observed in a particular part of the system might be due to downstream effects unrelated to the phenomenon. Additional bottom-up experiments must be used in order to exclude this possibility.

Stimulation experiments excite parts of the system, increasing their level of activity from the baseline (Craver, 2007, pp. 149–151; see also Craver, 2002, pp. S94–S95). If this manipulation results in changes to the phenomenon, the part intervened on is a candidate component. A paradigm case of stimulation is Penfield's work inducing phosphenes in patients undergoing brain surgery by passing current through electrodes inserted into their occipital lobes (Penfield & Jasper, 1954). Similarly, certain experimental protocols using transcranial magnetic stimulation (TMS) can be used to increase the likelihood of neural firing in a targeted part of the brain. Stimulating the motor cortex in this way results in visible twitches, indicating that this part of the brain is related to muscle movement.

Interference experiments, on the other hand, inhibit the activity of a part of the system, either by lesioning it out of the system or by compensating for its influence on the system by other means (Craver, 2002, pp. S93–S94, 2007, pp. 147–149). If this manipulation results in a decreased incidence of the phenomenon, we can conclude that the part's activity is a candidate component. Paradigmatic examples of interference experiments include lesion studies, either natural or induced, as well as many TMS studies, where the excitability of neurons in a particular brain area can be decreased, creating a temporary, reversible lesion. A famous example of this kind of experiment is the observation that patients with lesions in their fusiform gyrus have difficulty recognising faces compared to the baseline population (for a review, see Kanwisher & Yovel, 2006). This suggests that the area known nowadays as the fusiform face area is connected to face recognition. A similar deficit has been induced by TMS to the so-called occipital face area, which suggests that this area is also involved in face recognition (Solomon-Harris et al., 2013).

As in the case of activation experiments, one must be careful not to conclude too much from interference experiments. These show merely that a particular activity is necessary for the occurrence of a phenomenon. Background conditions, such as the heart's beating for most cognitive phenomena, will also exhibit bottom-up manipulability via interference experiments. The inference to constitution, and thus a case for inclusion in the full mechanistic model, can only be made via a conjunction of top-down and bottom-up experiments.

This is by no means an exhaustive list of experiment types relevant for constructing mechanistic models. For instance, in addition to inter-level experiments, which probe the mechanistic constitution relation between entities and activities on the one hand and the phenomenon on the other, experiments investigating the temporal progression of the phenomenon and the temporal activity profile of various components must be performed.
Electro-encephalographic studies investigating event-related potentials (ERPs) help determine the speed at which the
relevant processes take place and constrain the range of available models (Bechtel, 2008).

In sum, mechanistic models are constructed by an iterative process of functional analysis, localisation of the functions in parts of the system identified by inter-level experiments as candidate constituents, and revision of the models in response to the accumulation of evidence.

2.3 Mechanistic Explanation

In the previous sections of this chapter, I introduced the concept of mechanism and the process of constructing a mechanistic model of a phenomenon. In this section, I take up the question of how such models can be thought of as explaining the phenomena which they model.

In the literature on scientific explanation, the idea that explanations answer why-questions is pervasive (Bromberger, 1966; Hempel, 1965; van Fraassen, 1980). The explanation of a phenomenon P, on this view, answers the question "Why did/does P occur?" It is especially fruitful to characterise explanations of singular events in this way. For instance, to explain a solar flare is plausibly to answer the question why the solar flare occurred. On face value, mechanistic models do not answer why-questions, but rather how-questions. The mechanistic model of solar flares or of blood-filtration does not show why one or the other occurred, but rather how the lower-level components acting together exhibit these phenomena. Intuitively, appropriate answers to the why-questions in these cases will not refer to mechanisms, but to causal antecedents in the solar flare example, and to biological function in the blood-filtration example. Even if some explananda are how-questions as opposed to why-questions (Jaworski, 2009), the scope of the mechanistic theory of explanation would be severely limited if the connection to why-explanation were not made clear.

There are three ways the connection can be made. The first and standard way in the literature asserts that mechanistic models "explain why, by explaining how". This means that the occurrence of the mechanism in specific background conditions is sufficient for the occurrence of the phenomenon.4 So, if one knows the mechanism for the phenomenon, one also knows why the phenomenon occurred. Secondly, it can be argued that mechanistic models satisfy many of the desiderata expected from a successful explanation. Therefore, they serve the same function as a would-be why-explanation.

4 Strictly speaking, many type-level mechanisms sometimes fail to produce the phenomena they are responsible for. Nevertheless, the connection between the occurrence of the mechanism and the occurrence of the phenomenon can support counterfactuals. See Krickel (2018a) for a more nuanced treatment.


Hochstein (2017, p. 1106) lists five "explanatory goals", i.e., goals expected from a successful explanation of a phenomenon:

1. Understanding: "Successfully conveying understanding about the target phenomenon, or making it intelligible, to an audience or inquirer." (ibid.)
2. Prediction: "Determining when a given phenomenon is expected to occur, and under what conditions." (ibid.)
3. Generalisation: "Identifying general principles or patterns that all instances of the explanandum phenomenon adhere to and/or constraints that the phenomenon must conform to." (ibid.)
4. Knowledge of mechanism: "Identifying the particular physical mechanisms that generate and sustain the target phenomenon." (ibid.)
5. Manipulation: "Providing information sufficient to control, manipulate, and reproduce the target phenomenon." (ibid.)

Pace Hochstein's contention that no single model may satisfy all of these goals at once, mechanistic models do quite well in providing the relevant information. I will defer discussion of whether mechanistic explanations can provide understanding until Chap. 9, in which I deal with the supposed pragmatic benefits of referring to representational contents in explanations over a purely mechanistic approach to explanation. With respect to prediction, many mechanistic models contain information about the set-up conditions of the mechanism (Machamer et al., 2000), and thus allow one to predict the occurrence of the phenomenon. Mechanistic type-explanations also support generalisations, since aspects of the phenomenon depend on the properties of the components. For instance, the rate of blood-filtration in the kidney depends on the capillary density of the glomeruli, the thickness of the glomerulus walls and other properties of the components. Mechanistic explanations naturally do provide knowledge of mechanisms. Finally, because the constitutive relevance of components for phenomena is established through interventions, mechanistic explanations provide a wealth of information about the possibility of manipulating the explanandum phenomenon through manipulating the components of the mechanism. We can thus see that mechanistic explanations are capable of meeting these explanatory goals.

Thirdly, a strategy of considering different aspects of the phenomenon may be adopted. This leads to the view that mechanistic models explain why the phenomenon occurred the way it did, not just why the phenomenon occurred simpliciter. This last way of showing that mechanistic explanations can serve as why-explanations requires introducing a contrastive element into the picture.

Craver and Kaplan (2020) argue that explanandum phenomena should be viewed as contrasts between the actual behaviour of a system and some counterfactual possibility. There would be different mechanistic models for each such contrast. According to Craver and Kaplan, it is then strictly speaking false to think of a mechanistic model of blood-filtration as such. Instead, there might be numerous models, each containing only components relevant for a particular aspect of the blood-filtration – rate, substances filtered out, efficiency, etc. The mechanistic model for each of these contrasts would show why the blood-filtration phenomenon occurs as it does with
respect to the relevant parameter and not in some other way. The mechanistic model showing the entities and activities responsible for the rate of blood-filtration shows why the blood-filtration rate is such-and-such and not different. While Craver and Kaplan are correct in their argument that mechanistic explanation relies on a contrastive element, there are a number of problems with the details of their account, which I will discuss and resolve in Chap. 3. For now, it is sufficient to note that mechanistic explanations can be shown to explain why phenomena occur and why they have the particular characteristics they do.

Let me now briefly compare mechanistic explanations to other leading accounts of scientific explanation. The mechanistic model of explanation was formulated directly in opposition to the covering-law model of explanation. Covering-law explanations explain singular explananda by deriving them from general laws and background conditions (Hempel, 1965). The main reason for opposing the use of covering-law explanations in biology stems from the paucity of law-like generalisations in the domain (Woodward, 2001, 2002). Even when generalities can be found, such as certain statistical regularities in classical genetics, these generalities lack the appropriate counterfactual-supporting force. The scope of these generalities also comes into question, since in complex biological systems many factors are at play in determining the outcome of any set of background conditions. Thus, laws in biology, such as there may be, hold only ceteris paribus.

Similarly, in psychology and cognitive science, generalities do not seem to play an explanatory role. Cummins (2000) argues that general statements in psychology do not explain the singular instances thereof. For example, consider the McGurk effect, whereby incongruous phonemes presented to a human subject visually and auditorily result in a percept of a third phoneme, qualitatively between the two stimuli. One can derive singular occurrences of the McGurk effect from its general statement as follows:

1. A human subject presented auditorily with the sound /ba/ and visually with a recording of a speaker pronouncing /ga/ will report perceiving the sound /da/.
2. John Doe was presented auditorily with the sound /ba/ and visually with a recording of a speaker pronouncing /ga/.
3. Therefore, John Doe reported perceiving the sound /da/.

However, this derivation does not seem to provide any insight into the McGurk effect. It amounts merely to restating it. Although we can, with the help of the generalisation, predict the occurrence of the McGurk effect and induce it, other explanatory goals are not fulfilled. In particular, there is no understanding, nor any insight into the causes of the McGurk effect, nor any knowledge of how to manipulate the effect (inhibit it, for example). Notice that one reason the explanation above is unsuccessful is that the general law's scope is restricted to human subjects. The question which remains unanswered is why humans, as opposed to other subjects, show the effect. Answering that question requires showing the mechanism responsible for the production of the McGurk effect in humans. Glennan (1996, 2002) argues that lawful relationships are normally explained by exposing the mechanism connecting the background conditions to the effect of interest.


According to Kitcher's (1989) theory of explanation as unification, phenomena within a domain are explained to the extent that common principles can be used to account for (derive) the diverse phenomena. On Kitcher's theory, explanatory work consists in the search for argument schemata, in which some non-logical terms are replaced by variables. These variables can then be filled in according to specified rules, yielding complete arguments whose conclusions describe the various explananda. Kitcher's theory thus fixes one issue with the covering-law model: if we want to claim explanatory success, then the general laws from which explananda are derived must be systematically related within a discipline. Cummins' problem with the McGurk effect can therefore be avoided by proponents of Kitcher's unificationism, at least in the sense that a mere derivation of the effect from the general law does not count as explanatory without the ability to relate that effect to other phenomena through an argument schema. Nevertheless, the problem of the paucity of general laws or principles in biology, and especially in the cognitive sciences, remains. If anything, since unificationism requires there to be a systematic relationship between the general laws, the problem becomes more acute in the life sciences (but see Kitcher's attempt at showing that selection phenomena within evolutionary biology can be thought of as unified; Kitcher, 1989, pp. 442–445).

Although Kitcher's unificationism is more closely related to the covering-law theory than to the mechanistic theory, a variant due to Skipper (1999) suggests the possibility of replacing argument schemata with mechanism schemata for unifying disparate phenomena. Recently, Green (2015) has argued that the renewed interest in constraint-based explanation in biological systems can be situated within the unificationist framework. Green argues that certain explanations in biology rely on recognising constraints applicable to wide classes of biological systems. For instance, Gould (1977) shows that the size of insects is constrained by the load-bearing capacity of the exoskeleton and the fact that open respiration systems (such as those in insects) cannot operate from a certain volume upwards. Thus, to explain why insects are small, we do not need to look for mechanisms bringing about their small size or even for the adaptive value the small size might confer on insects. It is simply impossible for insects to grow much larger than they are. According to Green, similar constraints can be shown to dictate which network architectures are suitable for performing particular functions in neurobiology, immunology or other regulatory networks. The networks investigated in these various fields differ substantially in their material substrates (the entity and activity types involved) but show the same patterns of organisation. Although Green takes her account to be opposed to the mechanistic theory of explanation, we should note that the network architectures she examines can be thought of as mechanism schemata, exhibiting the organisational elements of the various mechanisms found in the various systems. Organisational elements have long been understood to play an essential role in mechanistic explanation, so the supposed conflict between the two perspectives turns out to be more a matter of emphasis than a real disagreement.


Mechanistic explanation is closely related to causal explanation. The main feature of causal theories of explanation is that phenomena are explained by their causes. The explanatory endeavour is therefore aimed at "situating phenomena in the causal nexus" (Salmon, 1984). In mechanistic explanation, causation also plays an important role, since mechanism components interact causally. As we saw above (Sect. 2.1), the term "etiological mechanism" is sometimes used to denote a mechanism which causally produces the explanandum phenomenon. Mechanistic explanation in terms of etiological mechanisms is the same thing as causal explanation.5

The most fruitful way of situating constitutive mechanistic explanation with respect to the older theory of causal explanation is to think of it as extending the causal theory. Whereas the causal theory only deals with etiological mechanisms, the new mechanistic theory also investigates the explanatory role of part-whole and constitutive relations on top of causal relations between parts. Purely causal theories of explanation run into problems when dealing with such cases because it is often held that causes and effects must be wholly distinct (Lewis, 1973). Since wholes are not wholly distinct from their parts, and simultaneous events overlap temporally, causal explanation cannot account for the explanation of system-activity in terms of the simultaneous interaction of the system's parts. The new mechanistic theory of explanation fills this gap.

5 Perhaps, for some authors, using the term mechanism might connote a degree of regularity which needn't feature in causal explanations in general.

2.4 Conclusion

In the previous three sections, I gave an overview of the ontology of mechanisms, the process of creating mechanistic models and the way in which mechanisms are thought to explain how phenomena for which they are responsible occur. I also highlighted three ways in which mechanisms may be used to explain why a phenomenon occurs. In the next chapter, I will focus on one of the suggestions above, namely, the role of contrastive explananda in the mechanistic theory of explanation.

References

Baumgartner, M., & Casini, L. (2017). An abductive theory of constitution. Philosophy of Science, 84(2), 214–233. https://doi.org/10.1086/690716
Baumgartner, M., & Gebharter, A. (2016). Constitutive relevance, mutual manipulability and fat-handedness. British Journal for the Philosophy of Science, 67(3), 731–756. https://doi.org/10.1093/bjps/axv003



Baumgartner, M., Casini, L., & Krickel, B. (2020). Horizontal surgicality and mechanistic constitution. Erkenntnis, 85(3), 417–430. https://doi.org/10.1007/s10670-018-0033-5
Bechtel, W. (2008). Mental mechanisms. MIT Press.
Bechtel, W. (2009). Looking down, around, and up: Mechanistic explanation in psychology. Philosophical Psychology, 22(5), 543–564. https://doi.org/10.1080/09515080903238948
Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanist alternative. Studies in History and Philosophy of Science Part C, 36(2), 421–441. https://doi.org/10.1016/j.shpsc.2005.03.010
Bechtel, W., & Craver, C. F. (2007). Top-down causation without top-down causes. Biology and Philosophy, 22(4), 547–563. https://doi.org/10.1007/s10539-006-9028-8
Bechtel, W., & Richardson, R. C. (1993). Discovering complexity: Decomposition and localization as strategies in scientific research. Princeton University Press.
Bogen, J., & Woodward, J. (1988). Saving the phenomena. The Philosophical Review, 97(3), 303–352. https://doi.org/10.2307/2185445
Bromberger, S. (1966). Why-questions. In R. G. Colodny (Ed.), Mind and cosmos (pp. 86–111). University of Pittsburgh Press.
Craver, C. F. (2002). Interlevel experiments and multilevel mechanisms in the neuroscience of memory. Philosophy of Science, 69(S3), S83–S97. https://doi.org/10.1086/341836
Craver, C. F. (2006). When mechanistic models explain. Synthese, 153(3), 355–376. https://doi.org/10.1007/s11229-006-9097-x
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford University Press.
Craver, C. F. (2015). Levels. In T. K. Metzinger & J. M. Windt (Eds.), Open MIND. MIND Group. https://doi.org/10.15502/9783958570498
Craver, C. F., & Darden, L. (2013). In search of mechanisms: Discoveries across the life sciences. University of Chicago Press.
Craver, C. F., & Kaplan, D. M. (2020). Are more details better? On the norms of completeness for mechanistic explanation. British Journal for the Philosophy of Science, 71(1), 287–319. https://doi.org/10.1093/bjps/axy015
Cummins, R. (2000). "How does it work?" versus "what are the laws?": Two conceptions of psychological explanation. In F. C. Keil & R. A. Wilson (Eds.), Explanation and cognition (pp. 117–145). MIT Press.
Eronen, M. I. (2013). No levels, no problems: Downward causation in neuroscience. Philosophy of Science, 80(5), 1042–1052. https://doi.org/10.1086/673898
Glennan, S. (1996). Mechanisms and the nature of causation. Erkenntnis, 44(1), 49–71. https://doi.org/10.1007/BF00172853
Glennan, S. (2002). Rethinking mechanistic explanation. Philosophy of Science, 69(S3), S342–S353. https://doi.org/10.1086/341857
Glennan, S. (2017). The new mechanical philosophy. Oxford University Press.
Gould, S. J. (1977). Ontogeny and phylogeny. Harvard University Press.
Green, S. (2015). Revisiting generality in biology: Systems biology and the quest for design principles. Biology and Philosophy, 30(5), 629–652. https://doi.org/10.1007/s10539-015-9496-9
Hempel, C. G. (1965). Aspects of scientific explanation: And other essays in the philosophy of science. Free Press.
Hochstein, E. (2017). Why one model is never enough: A defence of explanatory holism. Biology and Philosophy, 32(6), 1105–1125. https://doi.org/10.1007/s10539-017-9595-x
Illari, P. M., & Williamson, J. (2011). Mechanisms are real and local. In P. M. Illari, F. Russo, & J. Williamson (Eds.), Causality in the sciences (pp. 818–844). Oxford University Press.
Jaworski, W. (2009). The logic of how-questions. Synthese, 166(1), 133–155. https://doi.org/10.1007/s11229-007-9269-3
Kaiser, M., & Krickel, B. (2017). The metaphysics of constitutive mechanistic phenomena. British Journal for the Philosophy of Science, 68(3), 745–779. https://doi.org/10.1093/bjps/axv058
Kanwisher, N., & Yovel, G. (2006). The fusiform face area: A cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society B, 361(1476), 2109–2128. https://doi.org/10.1098/rstb.2006.1934


Kästner, L. (2017). Philosophy of cognitive neuroscience: Causal explanations, mechanisms and experimental manipulations. De Gruyter.
Kitcher, P. (1989). Explanatory unification and the causal structure of the world. In P. Kitcher & W. C. Salmon (Eds.), Scientific explanation (pp. 410–505). University of Minnesota Press.
Kohár, M., & Krickel, B. (2021). Contrast and compare: How to choose the relevant details for a mechanistic explanation. In F. Calzavarini & M. Viola (Eds.), Neural mechanisms: New challenges in the philosophy of neuroscience (pp. 395–424). Springer.
Krickel, B. (2018a). A regularist approach to mechanistic type-level explanation. British Journal for the Philosophy of Science, 69(4), 1123–1153. https://doi.org/10.1093/bjps/axx011
Krickel, B. (2018b). The mechanical world. Springer.
Leuridan, B. (2012). Three problems for the mutual manipulability account of constitutive relevance in mechanisms. British Journal for the Philosophy of Science, 63(2), 399–427. https://doi.org/10.1093/bjps/axr036
Lewis, D. K. (1973). Causation. Journal of Philosophy, 70(17), 556–567. https://doi.org/10.2307/2025310
Machamer, P. (2004). Activities and causation: The metaphysics and epistemology of mechanisms. International Studies in the Philosophy of Science, 18(1), 27–39. https://doi.org/10.1080/02698590412331289242
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25. https://doi.org/10.1086/392759
Penfield, W., & Jasper, H. H. (1954). Epilepsy and the functional anatomy of the human brain (Vol. 47, p. 704). J. & A. Churchill.
Romero, F. (2015). Why there isn't inter-level causation in mechanisms. Synthese, 192(11), 3731–3755. https://doi.org/10.1007/s11229-015-0718-0
Salmon, W. C. (1984). Scientific explanation and the causal structure of the world. Princeton University Press.
Simon, H. A. (1969). The sciences of the artificial. MIT Press.
Skipper, R. A. (1999). Selection and the extent of explanatory unification. Philosophy of Science, 66(S3), S196–S209. https://doi.org/10.1086/392725
Solomon-Harris, L. M., Mullin, C. R., & Steeves, J. K. E. (2013). TMS to "occipital face area" affects recognition but not categorization of faces. Brain and Cognition, 83(3), 245–251. https://doi.org/10.1016/j.bandc.2013.08.007
van Fraassen, B. (1980). The scientific image. Clarendon.
Woodward, J. (2001). Law and explanation in biology: Invariance is the kind of stability that matters. Philosophy of Science, 68(1), 1–20. https://doi.org/10.1086/392863
Woodward, J. (2002). There is no such thing as a ceteris paribus law. Erkenntnis, 57(3), 303–328. https://doi.org/10.1007/978-94-017-1009-1_2
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford University Press.
Woodward, J. (2015). Interventionism and causal exclusion. Philosophy and Phenomenological Research, 91(2), 303–347. https://doi.org/10.1111/phpr.12095

Chapter 3: Mechanistic Explanatory Texts

Abstract In this chapter, I address the question of how mechanistic models provide why-explanations. I argue for a contrastive approach to mechanistic explanation along the lines of Craver and Kaplan. However, I show that Craver and Kaplan's view leads to four problems: the Contrasts in Ontology problem, the problem of Identifying Relevant Switch-Points, the problem of Single-Model Explanation and the problem of Empirical Adequacy. I show that these problems can be resolved by adopting a distinction between ontic mechanisms, mechanism descriptions and mechanistic explanatory texts. Mechanistic explanatory texts are answers to contrastive why-questions, formulated by comparing the actual mechanism for a phenomenon with a counterfactual mechanism which would underlie the closest contrast phenomenon.

Keywords Mechanistic explanation · Contrastive explanation · Ontic explanation · Epistemic explanation · Edit distance

In this chapter, I will show how why-explanations can be derived from knowledge of mechanisms. As we saw in Chap. 2, the standard view is that mechanisms "explain why by explaining how". Though this is a useful slogan for the mechanistic program, more detail is required in order to substantiate the claim. Otherwise, detractors might prefer to speak of explaining how instead of explaining why. To bridge the gap, I will refine Craver and Kaplan's idea that mechanistic explanation concerns contrastive explananda. In Sect. 3.1, I will summarise and critique Craver and Kaplan's account. In Sect. 3.2, I will develop a distinction between mechanisms, mechanism descriptions and mechanistic explanatory texts that will be crucial for my account.1 In Sect. 3.3, I will argue that, in the mechanistic theory of explanation, explaining why is a matter of constructing mechanistic explanatory texts which highlight the set of minimal common differences between the mechanism for the actual phenomenon and the hypothetical mechanisms for a contrast phenomenon. Section 3.4 provides more detail on how such mechanistic explanatory texts are constructed. Section 3.5
concludes by showing how my account resolves the problems with Craver and Kaplan's original proposal.

1 The underlying idea has already been published in Kohár and Krickel (2021).
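The edit distance mentioned among the keywords gives a rough picture of the kind of comparison at issue: counting the smallest number of changes separating two structures. The sketch below is only an illustrative analogy, with invented component labels; the notion actually doing the work is developed in Sects. 3.3 and 3.4.

```python
# Toy Levenshtein distance between two sequences of component labels, purely
# as an analogy for "minimal common differences" between mechanisms.

def edit_distance(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

actual = ["glomerulus", "thin_wall", "filtration_slit"]
contrast = ["glomerulus", "thick_wall", "filtration_slit"]
print(edit_distance(actual, contrast))  # 1: one substitution separates the two
```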

3.1 Craver and Kaplan's Contrastive Account of Mechanistic Explanation

Craver and Kaplan (2020) use the idea that mechanistic explanation concerns contrasts in response to the so-called More Details Are Better objection put forward by, e.g., Batterman and Rice (2014), Levy (2014) and Chirimuuta (2014). The More Details Are Better objection alleges that mechanists are committed to the claim that mechanistic models with more details always provide better explanations than mechanistic models with fewer details. However, so the objection goes, models and explanations are frequently improved by omitting and abstracting from some details. For instance, a model of a car travelling down the road should abstract from the colour of the car's bodywork. This detail is irrelevant to the car's travelling, and its inclusion in the model is therefore infelicitous. It decreases the informational and pragmatic value of the explanation provided by the model. But, according to the objectors, mechanists are committed to the view that even this irrelevant detail makes the model better, because they interpret the mechanists as holding that mechanistic models must be complete – or at least are better if they are complete. Analogously, the objectors charge the mechanists with the view that interactions of quarks should be explanatorily relevant to macroscopic phenomena, because quarks are lower-level components of the macroscopic phenomena, and models going deeper in the hierarchy contain more details about the overall mechanism than shallower ones.

Craver and Kaplan agree with the objectors that adding any and all detail to a mechanistic model need not make the model more explanatory. However, they contend that the objectors mischaracterise the mechanistic account. Mechanists are not committed to the view that the best mechanistic model will contain all details. Instead, only relevant details should be included. The set of relevant details is determined by the criterion of Salmon-Completeness (SC):

(SC): The Salmon-complete constitutive mechanism for P versus P′ is the set of all and only the factors constitutively relevant to P versus P′. (Craver & Kaplan, 2020, p. 300)

A mechanistic model is more or less explanatory to the extent that it reveals the Salmon-complete mechanism for its target phenomenon.

Two crucial observations should be made about the SC criterion. Firstly, the criterion explicitly endorses the view that explananda are contrastive. Craver and Kaplan write that this view has implicitly been endorsed by mechanists before, but they do not offer any textual evidence for this claim. Regardless, I consider endorsing a contrastive view of explananda to be a welcome step for new mechanists, because it allows us to show how mechanistic models can provide why-explanations, as we will see later in this chapter.

This step towards contrastive formulations of the phenomenon must, however, be reconciled in some way with the prevalent practice of characterising phenomena
without specifying explicit or even implicit contrasts. To address this issue, Craver and Kaplan write:

Mechanists often characterize phenomena such as 'working memory' or 'the action potential' as multifaceted. These construct-terms are shorthand for a host of features, each of which must be explained to explain the multifaceted phenomenon in its entirety. (ibid., pp. 296–297)

We thus arrive at a view according to which a phenomenon, strictly speaking, is contrastively defined, but, loosely speaking, a conjunction of different contrast phenomena can be viewed as a single multi-faceted phenomenon. The SC criterion determines which factors should be included in the best explanatory model for a particular contrast phenomenon. Where a model aims to explain a multi-faceted phenomenon, its explanatory power will depend on the extent to which it includes factors relevant to the contrasts which make up the multi-faceted phenomenon. For example, the phenomenon of working memory is multi-faceted: some of its features include its capacity, its retrieval speed, its rate of retention, the primacy and recency effects, etc. A model of working memory which accounts for its capacity but not for recency effects would be less explanatorily powerful than one which accounts for both of these factors.

Secondly, the SC criterion is formulated in terms of mechanisms and phenomena, not in terms of models, descriptions or other scientific representations. This is in keeping with Craver's considered view of the nature of explanation as a real-world relation holding between real-world phenomena and real-world mechanisms. Craver and Kaplan reaffirm their commitment to this so-called ontic conception of scientific explanation, writing:

English speakers also use the word 'explanation' to refer not to a model but to the thing in the world that 'explains' (in an ontic sense) the phenomenon in question. The claim that global warming explains the rise in sea levels is not about a model or a representation; it is about the rise in mean temperatures and its causal relationship to sea levels. From this ontic perspective, models are tools for representing objective explanations (Craver [2014]). The complete constitutive explanation for a given explanandum, in this ontic sense, includes everything relevant (that is, everything that makes a difference) to the precise phenomenon in question. (ibid., pp. 299–300)

The ontic view contrasts, according to Craver (2014), with the so-called epistemic conception of explanation, which relates descriptions of phenomena to models or descriptions of mechanisms (e.g., Bechtel & Abrahamsen, 2005). It is these descriptions and models which are the explananda and explanantia on the epistemic conception.2 The fact that the SC criterion concerns phenomena and mechanisms, as opposed to descriptions thereof, can be further confirmed by Craver and Kaplan’s appeal to


2 Craver's usage is slightly divergent from the way Salmon (1984) uses the terms. In Salmon's usage, ontic conceptions judge the quality of explanation by reference to objective facts about the world, while epistemic conceptions judge the quality of explanations based on their capacity to provide understanding; but see Wright (2015).


Constitutive relevance, as we saw in Chap. 2, is a relation holding between phenomena and components of the mechanisms for those phenomena. In Craver and Kaplan's current account, it is to be determined by mutual manipulability. Components of the mechanism for P vs. P′ are those entities, activities and organisational properties which act as "switch-points" between P and P′ (Craver & Kaplan, 2020, p. 304).

Craver and Kaplan's adherence to the ontic conception of explanation must be reconciled with their adoption of a contrastive view of phenomena. According to Craver and Kaplan, a contrastive phenomenon is "characterised in terms of the instantiation of an input-output mapping versus some other input-output mapping" (ibid., p. 298). This is a departure from Craver's earlier views of phenomena as acting entities or as simple input-output mappings, which do not admit a contrastive characterisation. An acting entity, e.g., a rat's navigating, is not in itself contrastive. It may be contrasted with other, perhaps counterfactual, acting entities, but none of these contrasts are intrinsic to the acting entity. Similarly, a single input-output pairing may be contrasted with other input-output pairings, but none of these contrasts are intrinsic to it.

One final detail of Craver and Kaplan's account worth mentioning is the fact that, in their view, even explaining a single contrastive phenomenon may require multiple models. The different models, presumably, highlight different components of the SC mechanism for the phenomenon (ibid., pp. 309–310). The dictum that models with more relevant details about the SC mechanism have more explanatory power than ones with fewer relevant details thus does not imply that a model must contain all the relevant details to be explanatory. Furthermore, the dictum must be applied only ceteris paribus, because direct comparison between models can only be made if one model refers to a proper subset of the relevant details contained in the other model. Where this is not the case, the two models' explanatory power may only be compared if some relevance measure is applied. The SC criterion, however, does not provide a relevance measure but merely a yes-no relevance test.

3.2 Problems with Craver and Kaplan's Account

As formulated in Sect. 3.1 above, Craver and Kaplan's account of contrastive mechanistic explanation has the potential to solve the issue of mechanistic why-explanation introduced in Chap. 2. The guiding idea is that models which highlight the factors relevant for the occurrence of P vs. P′ thereby show why P occurred as opposed to P′. However, Craver and Kaplan's formulation of this contrastive strategy leads to a number of problems. I will concentrate on four (for a similar list see Kohár & Krickel, 2021): (1) the problem of Contrasts in Ontology, (2) the problem of Identifying Relevant Switch-Points, (3) the problem of Single-Model Explanation and (4) the problem of Empirical Adequacy.


Contrasts in Ontology The first problem for Craver and Kaplan's account is that it defines phenomena as inherently contrastive while at the same time maintaining that explanation is ontic, and therefore that explanatory relations obtain between real-world entities. This suggests that contrasts are part and parcel of the ontology of the mechanistic framework. We should note that this way of individuating phenomena is inconsistent with several well-established principles of mechanistic theory.

Firstly, phenomena and mechanisms are generally thought of as forming a hierarchy, in which components of a phenomenon are themselves constituted by lower-level mechanisms. In this case, the original mechanistic components play a double role in the model: they are the components of the original phenomenon, but they are themselves phenomena constituted by lower-level mechanisms. As I argued in Chap. 2, this role is well filled by acting entities or entity-involving occurrents (EIOs). However, contrastive phenomena as defined by Craver and Kaplan, i.e., instantiations of input-output mappings as opposed to other input-output mappings, do not stack in this way. This is because mechanism components are not themselves contrastive.

Furthermore, it is unclear how Craver and Kaplan's contrastive phenomena could enter into constitution relations at all, given that contrasts are not spatiotemporally located. A contrast relates the actually occurring EIO with some counterfactual EIO which does not occur but could have done. Only the actual EIO is spatiotemporally located; the counterfactual EIO which makes up the other half of the contrast is not. Therefore, the contrast as a whole is not spatiotemporally located either. Spatiotemporal location, however, is one of the preconditions for constitution, as constitutive relevance, according to Craver and Kaplan, is determined by the mutual-manipulability criterion, which requires proper parthood.

Instantiations of input-output mappings without contrasts might be spatiotemporally located. By an instantiation of an input-output mapping, I mean a single causal process connecting an input and an output, which happens to fall under a general description. While a mapping is an abstract relation between input types and output types, its instantiations are singular causal processes. If this is right, Craver and Kaplan could claim that it is the simple input-output mapping instantiated in one place at a time which enters into the constitution relation. However, if that is the route they choose, then they cannot use principle SC to differentiate between relevant and irrelevant factors, because entities, activities and organisational factors relevant for any contrast will enter into the constitution relation with such a non-contrastive phenomenon.

In any case, according to the view of phenomena I adopt in this book, phenomena are concrete entity-involving occurrents. These are physical entities and therefore not individuated based on contrasts either. Thus, even if I were wrong about the incongruence of Craver and Kaplan's view of phenomena with their account of explanation as ontic, I would still need to resolve the tension for my own account, given that I want to commit to some variant of contrastive mechanistic explanation.

However, even if we granted to Craver and Kaplan the possibility of phenomena-as-contrasts entering into hierarchies of mechanisms related by constitution, there would still be further problems related to the supposition that ontic phenomena are contrasts.
The first of these related issues for Craver and Kaplan is the excessive multiplication of phenomena which results from adopting the view.


If phenomena are individuated as instantiations of input-output mappings as opposed to some other mapping, then for every input-output mapping actually instantiated there will be innumerable contrastive phenomena. This is because for every input-output mapping P we can coherently ask why it is instantiated rather than any other mapping P′. In other words, the class of contrast mappings P′ for any actual mapping P is unbounded. Since, following Craver and Kaplan, we are to think of each of these contrasts P vs. P′ as a single phenomenon (though multiple contrasts may be conjoined into a multi-faceted phenomenon for convenience's sake), and each of these single phenomena has its own SC mechanism underlying it, there are in effect innumerable phenomena and mechanisms associated with any actually instantiated input-output mapping.

As an example, consider a car travelling in a straight line with its speed measured at 90 km/h. In this case, following the spirit if not the letter of Craver and Kaplan's account, there would be one phenomenon "car travelling straight vs. along a circle", another "car travelling straight vs. along a parabola", a third "car travelling at 90 km/h vs. standing still", etc., for any conceivable contrast. Indeed, the first of these supposed phenomena can be further subdivided into "car travelling straight vs. along a circle with radius r", for any value of r. This explosion of ontic phenomena should be rejected; our theory of explanation might require contrastive explananda, but our ontology should not follow suit with innumerable contrastive phenomena. The ontological explosion is all the more repugnant since all of these contrastive phenomena are, from an empirical point of view, one and the same phenomenon (the car travelling in a straight line at 90 km/h). The only reason they are counted as separate is Craver and Kaplan's contrastive criterion of phenomenon-individuation.

Corresponding to the problem of the multiplication of phenomena is the problem of the multiplication of mechanisms. Like the former, the latter is a consequence of admitting contrastively individuated phenomena into our ontology. Mechanisms are, according to the new mechanists, mechanisms for a particular phenomenon. In other words, mechanisms are individuated by the phenomena they bring about (Illari & Williamson, 2010, p. 282). If P vs. P′ and P vs. P″ are two different phenomena, then their mechanisms are two different SC mechanisms. Hence, another consequence of Craver and Kaplan's account is a multiplication of mechanisms.

My diagnosis is that all these problems arise because, in Craver and Kaplan's view, phenomena play a double role. On the one hand, they are real things in the world. On the other hand, these very real things in the world are explananda. Since explananda need to be defined contrastively, Craver and Kaplan land in trouble, because these same contrastive explananda must also be real things out there in the world. The solution, as we will see in the next section, is to separate ontic phenomena from explananda. That way, contrasts can be safely sequestered in the realm of explanation without causing ontological issues.

Identifying Relevant Switch-Points In Craver and Kaplan's view, the SC mechanism for a contrast phenomenon P vs. P′ contains all and only those factors constitutively relevant for the contrast P vs. P′.


Specifically discussing the role of abstraction in mechanistic models, Craver and Kaplan write that the constitutively relevant factors must be "switch-points" for P vs. P′. For instance, one of the relevant factors for the contrast P 'water is liquid' vs. P′ 'water is solid' is the fact that the water's temperature is above 0 °C. Suppose the water's temperature is in fact 17 °C. According to Craver and Kaplan, it is wrong to consider the water's exact temperature a component of the SC mechanism underlying the water's liquid state, because changing the exact temperature to, say, 16 °C or 4 °C does not change P into P′. In order to identify the constitutively relevant factors, one must identify switch-points, that is, factors which change P into P′. This is well taken, and the account I develop here will also use a similar strategy in order to allow for abstraction.

However, Craver and Kaplan's account misses an important issue. There might be multiple ways in which a system could exhibit some P′. For instance, consider the phenomenon P 'car is travelling forward' vs. P′ 'car is immobile'. There are multiple ways in which a car may be immobile. Perhaps its engine is shut off. Or its brakes are engaged. Or it is simply idling and has come to a stop through the influence of friction and air resistance. Or it has encountered a barrier impeding its travel. In each of these cases, different changes in the components will be required to make the switch, even though they all fall under the description of P′ with which the actual phenomenon is contrasted. I will refer to all these different ways in which P′ might be exhibited as the contrast class. Often, the contrast class has more than one member. The fact that each member of the contrast class goes along with a different set of constitutively relevant factors indicates that there are multiple competing explanations for why the car is travelling as opposed to immobile. Craver and Kaplan's account lacks the resources to decide which of these competing explanations is the correct one.

Note that a similar problem applies to Craver and Kaplan's temperature example. The class P′ 'water is solid' contains individual cases where the temperature of the water is 0 °C, −1 °C, −3 °C, all the way down to absolute zero. Each of these different members of the class P′ can be brought about by some intervention on P 'water is liquid'. Each of these interventions (setting the temperature from 17 °C to 0 °C, from 17 °C to −1 °C, or from 17 °C to −3 °C) would act as a switch to bring about P′ as opposed to P. How then do we decide that one of them should be selected over any other as the appropriate one to abstract towards?

The problem is thus two-fold. Craver and Kaplan lack a means of deciding on the correct explanation when different members of the contrast class require changes in different components, as in the car example, and of justifying the relevance of any one component over any other. They also lack a way of identifying the correct switch-point with respect to a single component, as in the freezing water example. Craver and Kaplan do gesture at a role for pragmatic considerations in deciding which components identified by SC are more or less relevant (Craver & Kaplan, 2020, p. 305), but this cannot do the trick, for two reasons. Firstly, in cases like the car example, the set of constitutively relevant factors will differ depending on what we choose as a representative example of the contrast class. Secondly, if pragmatic factors are the only way to decide which constitutively relevant factors are to be considered the correct explanation, then we cannot maintain the view that the quality of explanation, or the explanatory power of a model, is a matter of objective fact.


Single-Model Explanation According to Craver and Kaplan, even models aiming to explain only a single contrastive phenomenon P vs. P′ do not, as a rule, provide a complete explanation of that contrastive phenomenon (Craver & Kaplan, 2020, p. 309ff). Instead, multiple models are needed, which highlight different constitutively relevant factors, in effect exhibiting subsets of the SC mechanism for the contrast phenomenon. Craver and Kaplan argue that in principle these different models could be conjoined into an "über-model" containing all the constitutively relevant factors, because they each model the same real-world mechanism. Since reality is internally consistent, such an über-model would be a simple conjunction of all the partial models. However, they also argue that this über-model is a mere philosophical fiction. In real sciences, models are limited to only partial coverage of the SC mechanism.

This view is problematic because it in effect bars scientists from ever accessing complete explanations of any phenomenon. Craver and Kaplan define stores of explanatory knowledge as sets of models possessed by a scientific community or by an individual scientist (ibid.). The explanatory power of a whole store of explanatory knowledge depends on the amount of relevant detail it contains. In other words, the more details about the SC mechanism for P vs. P′ a store of explanatory knowledge as a whole contains, the more explanatory power with respect to P vs. P′ it has. This holds, according to Craver and Kaplan, without regard to the number and character of the models making up the store of explanatory knowledge. Thus, a store of explanatory knowledge containing a large number of partial models is on an equal footing with a store containing only a handful of models, or even a single model, as long as the relevant detail contained in the more unified store does not exceed the relevant detail contained in the disconnected store. However, a scientist or a scientific community whose store of explanatory knowledge about the contrastive phenomenon P vs. P′ contains a single model with all the relevant factors is the only one which can actually produce a full explanation of why P occurred rather than P′. In contrast, if only a set of disparate models is available, each highlighting only some relevant details (and by hypothesis not conjoined), then only partial explanations are available at any given time.3 In this sense, then, the unified store of explanatory knowledge is preferable to the disconnected one.


3 Note that even though the disparate models might contain all of the information in the hypothetical über-model, the information cannot be accessed at the same time, since this would amount to conjoining the disparate models. However, Craver and Kaplan explicitly reject the possibility of actually constructing such a model (Craver & Kaplan, 2020, p. 314).


Empirical Adequacy One further problem with Craver and Kaplan's account has to do with an avowed goal of the new mechanists: to develop an account of explanation faithful to scientific practice. Models purporting to provide contrastive explanations are undoubtedly part of scientific practice. However, many models do not seem to be of this sort. Scientists often take themselves to be describing a single mechanism for a single phenomenon, without regard for any particular contrast. As we saw above, Craver and Kaplan do allow for the existence of "multi-faceted phenomena" made up of a set of related contrast phenomena. They also allow that a single model may be used to highlight factors constitutively relevant to a multi-faceted phenomenon. It seems, then, that Craver and Kaplan's account has adequate resources to account for the apparent prevalence of models which do not appear to relate to any particular contrast, but to a non-contrastive phenomenon.

However, it still appears that Craver and Kaplan have the relationship between models of non-contrastive (multi-faceted) phenomena and explanations of contrasts the wrong way around. Most models do not aim at explaining one or even multiple contrasts. They aim at describing the mechanism underlying a phenomenon (where a phenomenon is conceived as an acting entity). The information contained in these models may be used to explain one or more contrasts, but the models themselves do not wear the contrasts on their sleeves. This point is related, in a way, to the Contrasts in Ontology problem. Scientists model or describe what is out there. Since contrasts are not an ontological category, they are not targeted by models. It is acting entities, entity-involving occurrents or systems that are modelled, because these are the sorts of things that exist in the world. The models of these things (i.e., of old-fashioned non-contrastive phenomena) can then be used to answer contrastively formulated explanation-seeking questions.

In the next section, I will present a variation on Craver and Kaplan's contrastive account of mechanistic explanation which resolves these four issues. That account relegates contrasts from ontology to the realm of explanation. It provides a standard for uniquely deciding which factors are relevant for explaining a particular contrast. It allows complete explanations of particular contrasts to be accessible to individual scientists, and it is empirically adequate to scientific practice.

3.3 Mechanism Descriptions and Mechanistic Explanatory Texts

As we saw above, one of the crucial problems with Craver and Kaplan's account of contrastive mechanistic explanation is its conflation of phenomena as things existing out there in the world with phenomena as explananda. Avoiding this problem is impossible if we insist on Craver's ontic conception of explanation. My own account of contrastive mechanistic explanation is therefore not ontic but epistemic in Craver's sense, i.e., both explananda and explanantia are "knowledge-items", such as questions and descriptions. Nevertheless, the account is about things in the world and the causal pattern of the world, which would make it ontic according to Salmon's earlier classification.


The core of the account is a three-way distinction between ontic phenomena/mechanisms, mechanism descriptions, and mechanistic explanatory texts (METs).

Ontic Phenomena are entity-involving occurrents which exist out there in the world. They are constituted by equally real ontic mechanisms. These ontic mechanisms are sets of lower-level entity-involving occurrents organised in such a way that they produce the phenomenon. The constitution relation between ontic phenomena and ontic mechanisms can be tested by the reciprocal manipulability or the horizontal surgicality test introduced in Chap. 2. In short, ontic phenomena are simply phenomena as they are known from the rest of the mechanistic literature, and ontic mechanisms are mechanisms as they are known from that literature. Neither is individuated with the help of contrasts. Importantly, ontic phenomena are not explananda, and ontic mechanisms are not explanantia. For example, a car's travelling in a straight line, a neuron's firing or a body of water's freezing are all ontic phenomena. Each of these is constituted by an ontic mechanism, which includes, e.g., an engine's running, sodium channels' opening or molecules' slowing down.

Mechanism Descriptions are texts or graphical representations of a mechanism responsible for a particular ontic phenomenon. A complete mechanism description would describe all the components of the mechanism for a phenomenon, that is, all parts of the phenomenon which are either reciprocally manipulable with the phenomenon, or which can be manipulated simultaneously with the phenomenon by horizontally surgical interventions. It would also describe all of these components in maximal detail, that is, describe all of the constitutively relevant facts about each component. Needless to say, complete mechanism descriptions are not available for any phenomenon. However, partial mechanism descriptions, in the form of textual, graphical or computational models, are the bread and butter of scientific work. These can typically be found in journals, textbooks or other media through which scientists communicate. For example, a diagram showing the transfer of rotational movement from the engine drum through the gearbox to the wheels is a partial description of the mechanism for the car's travelling. A (partial) mechanism description answers the explanation-seeking how-question "How does the phenomenon come about?". In other words, a (partial) mechanism description is a (partial) how-explanation for the occurrence of the phenomenon.

Mechanistic Explanatory Texts (METs) are why-explanations. They are formulated as answers to contrastively formulated explanation-seeking why-questions (i.e., explanation requests). The explanation request contrasts a phenomenon P with a contrast class of phenomena P′. The appropriate MET lists those differences between the mechanism description for P and the mechanism description for the closest member of P′ which are shared by all members of P′. There is in principle a single correct complete MET for any contrastive explanandum. For example, an explanation request such as "Why is the car travelling straight as opposed to turning right?" contrasts the phenomenon P 'car travelling straight' with a class of contrast phenomena P′ 'car turning right'.


The MET for this contrast will not mention the fact that the car's engine is running, even though the engine's running is a component of the mechanism for P. This is because in the closest member of P′ the engine is also running. However, it might mention the fact that the car's front wheels are parallel to the car's rear wheels, as this is one of the differences between the mechanism for P and the mechanism for the closest member of P′.

The distinction between ontic phenomena/mechanisms, mechanism descriptions and METs directly solves the Contrasts in Ontology problem encountered by Craver and Kaplan's account. Ontic phenomena and mechanisms are, on the present account, not contrasts, and consequently none of the ontological worries apply. Furthermore, the distinction also solves the Empirical Adequacy problem, because it allows for the fact that models are often concerned with phenomena and mechanisms individuated non-contrastively. In my schema, these models count as mechanism descriptions. Mechanism descriptions are simply descriptions of ontic mechanisms for ontic phenomena and therefore, just like ontic phenomena, do not refer explicitly or implicitly to any contrasts. However, my account also provides for the possibility that some models are concerned only with particular contrasts. Such models are METs: explanations formulated in response to a particular contrastive explanation request.

However, two of the four problems remain as yet unsolved. In order to address the problem of Identifying Relevant Switch-Points and the problem of Single-Model Explanation, I must go into more detail about the way in which the contents of METs are selected and evaluated.

3.4 Constructing Mechanistic Explanatory Texts4

The question of which details from mechanism descriptions are selected as contents of METs is equivalent to the question of the relation between constitutive relevance and explanatory relevance. Mechanism descriptions describe all the constitutively relevant factors for P, whereas METs contain only the explanatorily relevant factors for P vs. P′. Mechanists typically equate explanatory relevance with constitutive relevance. However, this option is not open to any account of mechanistic explanation which views explananda as contrastive. Craver and Kaplan persist in viewing explanatory relevance and constitutive relevance as equivalent, but the mutual manipulability account of constitutive relevance has never been explicitly combined with a contrastive account of the phenomenon.


4 Reprinted/adapted by permission from Springer Nature: Neural Mechanisms: New Challenges in the Philosophy of Neuroscience by Marco Viola & Fabrizio Calzavarini (Eds.), "Chapter 17: Compare and contrast: how to assess the completeness of mechanistic explanation" by Matej Kohár and Beate Krickel, 2021 (Kohár & Krickel, 2021, § 5.1).


Indeed, as presented earlier, the mutual manipulability account of constitutive relevance defines under which conditions an acting entity (X's ϕ-ing) is constitutively relevant for another acting entity (S's ψ-ing). In their paper, Craver and Kaplan do not explain how a combination of the mutual manipulability account and a contrastive phenomenon is supposed to work.

According to the view I defend here, explanatory relevance is distinct from constitutive relevance. Still, constitutive relevance is a necessary condition for explanatory relevance: all explanatorily relevant details concern mechanistic components, i.e., entities, activities and their organisational features that are constitutively relevant to the explanandum phenomenon.5 However, not all constitutively relevant detail is explanatorily relevant for a given contrast. Therefore, I must provide a further criterion for identifying explanatorily relevant details.

As we saw in Sect. 3.3, mechanistic explanatory texts answer contrastive why-questions. The why-question, or explanation request, determines the class of contrast phenomena P′ with which the actual phenomenon is compared. For example, one might ask "Why is the water in the glass frozen?". This explanation request is incomplete, because it does not specify a contrast class explicitly. Implicitly, however, we could read the contrast in as "Why is the water in the glass frozen, rather than thawed?".6 This picks out the phenomenon P 'water in the glass is frozen' and the contrast class P′ 'water in the glass is thawed'. Other why-questions could be posed which pick out different contrast classes. For instance, one could ask: "Why is the water in the glass frozen rather than evaporating?". This why-question picks out the same phenomenon P, but a different contrast class P′ 'water in the glass is evaporating'.

Mechanists and interventionists have held that one of the crucial purposes of explanation is finding so-called crucial points of intervention, i.e., variables which allow us to control the phenomenon (Hochstein, 2017; Woodward, 2003). This is the spirit behind Craver and Kaplan's focus on "switch-points", i.e., values of such crucial intervention variables which transform the phenomenon in the way specified by the contrast. If explanation is about finding crucial points of intervention, then explanatory texts should identify them. Crucial points of intervention with respect to the contrast P vs. P′ are those which allow one to change P into P′ with minimal effort.


5 Background conditions and the context in which the mechanism occurs are therefore explanatorily irrelevant in a constitutive mechanistic explanation. However, they might be relevant in an etiological explanation for the occurrence of the phenomenon (see Craver, 2007, Ch. 3).
6 Note that strictly speaking any contrast can be filled in, leading to the worries discussed in Sect. 3.2. Only pragmatic factors decide that the contrast between frozen and thawed water is perceived as salient in this case.


A preliminary characterisation of the contents of mechanistic explanatory texts, thus, is the following:

(Contents of METs – preliminary) A mechanistic explanatory text T explaining a contrast "P vs. P′" has the form "because C rather than C′", where C is a set of components of the description of the actual mechanism Mactual for P and C′ is a set of components of a description of a possible mechanism Mpossible, where the following holds:
(i) Mpossible is a member of a set S of possible mechanisms each sufficient to bring about P′,
(ii) C and C′ contain all and only components that differ between the description of Mactual and the description of Mpossible and that are also differences between Mactual and all other members of S,
(iii) Mpossible could in principle be created by intervening into Mactual with minimal effort, compared to the effort that would be necessary for the creation of each other member of S.

Condition (i) in the definition above should be self-explanatory. The mechanistic explanatory text explains why P occurred rather than P′ by exhibiting the components which were crucial to the occurrence of P and not P′. Therefore, we must compare the mechanism for P with some mechanism for P′ in order to isolate these crucial components. There are usually many ways in which a phenomenon from the contrast class P′ could be constituted. Consider the example of the action potential. If we want to explain why the neuron fired as opposed to maintaining resting membrane potential, we must contend with the fact that there are innumerable differences within the class of neurons maintaining resting potential (e.g., in the number of ion channels open, etc.). We must select one mechanism for P′ from the set of possible mechanisms to compare to the actual mechanism.

Condition (ii) further specifies that once we have selected one particular possible mechanism for comparison, we are interested only in those differences between this contrast mechanism and the actual mechanism for the phenomenon that apply to all the other mechanisms sufficient to bring about the contrast phenomenon. This is what enables us to say that the correct explanation for "Why is the water in the glass frozen rather than thawed?" is that the temperature of the water is below 0 °C, or more exactly, that the temperature of the water is −17 °C rather than above 0 °C. The difference between −17 °C in the actual mechanism and above 0 °C holds for all instances of the contrast class (all other things being equal).

Condition (iii) identifies which of the many possible mechanisms sufficient for bringing about the contrast phenomenon should serve as the basis of the comparison. 'Minimal effort' can be defined as a function of (a) the number of required interventions and (b) the similarity of the required interventions (the more similar the required interventions are, the less effort for the scientists). Hence, we have to know how to count and compare interventions in order to identify the mechanism Mpossible that is to figure in our comparison.

Woodward (2003, p. 98) defines interventions with the help of three variables {I, X, Y}, where I is the intervention variable whose assuming a particular value sets the value of X; X is the putative cause; and Y the putative effect. Since I am here concerned with constitutive rather than causal explanation, I do not take X and Y to be putative causes and effects. Instead, X is a variable representing the mechanism component, and Y is a variable representing the phenomenon P (or temporal parts thereof, see Krickel, 2018). According to this view, two interventions are identical if and only if their defining I-variables, X-variables, and Y-variables are the same.
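To keep this identity condition vivid, the following minimal sketch represents intervention triples as simple tuples; the variable names are hypothetical illustrations of mine, not drawn from the text:

```python
from typing import NamedTuple

class Intervention(NamedTuple):
    """A Woodward-style intervention triple {I, X, Y}."""
    I: str  # intervention variable whose value sets the value of X
    X: str  # target variable: here, a mechanism component
    Y: str  # variable representing the phenomenon (taking values for P and P')

# Two interventions are identical iff their I-, X- and Y-variables coincide;
# NamedTuple equality mirrors this identity condition directly.
a = Intervention(I="freezer setting", X="water temperature", Y="state of water")
b = Intervention(I="freezer setting", X="water temperature", Y="state of water")
c = Intervention(I="salt added", X="water solute content", Y="state of water")
assert a == b and a != c
```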


Since we are interested in mechanism descriptions that describe mechanisms sufficient to bring about the contrast phenomenon P′, we can assume that Y is the same for all relevant interventions (Y represents the phenomenon and can take the values that represent P and P′). Hence, in order to determine which description of a counterfactual mechanism that brings about P′ is most similar to the description of the actual mechanism, we have to determine which counterfactual mechanism requires the least effortful interventions, individuated by their Is and their Xs, giving us a set of required interventions {[I1, X1], ..., [In, Xn]} for each possible mechanism for P′. In order to determine which of these sets of interventions involves the least effort, we have to know how to count Is and Xs and how to determine their similarity.

There is a practical problem for the counting and comparing of the intervention variables I1–In. In many cases, scientists know what would have to be changed in order to build a counterfactual mechanism M* from an actual mechanism M, but they do not know how this change could be brought about. For example, scientists often know which component of a pathological mechanism is responsible for the symptoms, and therefore know that changing this component would lead to an improvement of the symptoms; they simply do not know how to change it. Much effort in medical research goes into inventing better drugs precisely in order to be able to change mechanisms in the right way. In the present context, the consequence is that, in practice, when formulating the explanatory text for a given contrastive request for explanation, we often do not know what an intervention variable represents, or how similar or different it will be compared to other interventions. We therefore need to decouple the measure of minimal changes from the count of intervention variables. However, the measure we ultimately choose should respect the interventionist insight that the number of intervention variables matters. This can be achieved if we make our measure sensitive to similarities and differences between the Xs: if the targets of interventions are similar in specific ways, it is likely that they can be intervened on with just a single intervention variable I.

The practical problem does not arise for the counting and comparison of the Xs. Counting and comparing Xs means counting and comparing mechanistic components, and these components are described in the mechanism descriptions. Hence, in the end, in order to determine which interventions require the least effort, we have to count and compare the differences between the description of the actual mechanism and the descriptions of all (nomologically) possible mechanisms for P′. This results in the following characterisation of the contents of mechanistic explanatory texts:


(Contents of METs) A mechanistic explanatory text T explaining a contrast "P vs. P′" has the form "because C rather than C′", where C is a set of components of the description of the actual mechanism for P – Mactual – and C′ is a set of components of a description of a possible mechanism Mpossible, where the following holds:
(i) Mpossible is a member of a set S of possible mechanisms each sufficient to bring about P′,
(ii) C and C′ contain all and only components that differ between the description of Mactual and the description of Mpossible and that are also differences between Mactual and all other members of S,
(iii) the description of Mpossible is more similar to the description of Mactual than the description of any other member of S.

Thus, we have to be able to determine when a mechanism description D is more similar to another mechanism description D′ than some further mechanism descriptions D*, D**, etc. Let us look at two more informal examples of the kind of comparison that figures in constructing METs.

First, suppose we are trying to construct an explanatory text answering the question: "Why did the car go straight, as opposed to turning right?". The mechanism description for the ontic mechanism in which the car is going straight at 90 km/h with rattling bumpers includes a number of components describing the activity of the spark plugs. However, the mechanism description of the most similar mechanism Mpossible, which would underlie the car's turning right, includes the very same components describing the activity of the spark plugs. Therefore, when answering the question about going straight in contrast to turning right, this information will not be included in the explanatory text. Spark plug activity is not different across the two cases. To make a car turn right rather than go straight, one should intervene on the wheels, not on the spark plugs.

As already mentioned, the problem of constructing the correct MET is compounded by the fact that there may be numerous ways of exhibiting the contrast phenomenon. For instance, let us look at explaining why the car goes straight rather than standing still. Will the explanatory text mention spark-plug activity? Perhaps surprisingly, the answer is still no. Although the paradigm case in which the car stands still is one where the engine does not run and the spark plugs do not spark, there is another class of situations in which cars stand still, i.e., when they are idling with the engine running in neutral gear, or when the brakes have been applied. In these cases, the spark plugs spark in the same way as when the car goes straight. The mechanism description for the idling case or the braking case will be closer to the description of the actual mechanism, because all the (many) engine parts will work in the same way and thus receive the same description as in the actual mechanism. In fact, the contrast class might be too heterogeneous to admit any set of differences satisfying condition (ii) of the definition. This would suggest that the contrast must be explained piecemeal.

The matter of comparing mechanism descriptions is complicated by the fact that mechanism descriptions can be given in various forms, such as spoken word, written text, diagram, etc., and two mechanism descriptions can contain the same information about the same mechanism even though they superficially differ.


In order to resolve this issue, I stipulate that mechanism descriptions can be transformed into a canonical form:

(Canonical Form of MDs) A mechanism description in its canonical form is a set of 4-tuples ⟨E, A, S, T⟩, where E stands for a set of one or more entities, A for the activity these entities are engaged in, S for the (relative) spatial region in which this activity is performed, and T for the time during which the activity is performed.

A single mechanism description will consist of many such 4-tuples strung together. In the rest of this section, we will need to distinguish between 'components', i.e., 4-tuples in a mechanism description, and 'elements', which are any of the four parts which make up a component. Note that components in a mechanism description describe ontic components. When I refer to components of ontic mechanisms, this will always be specified in full. Mechanism descriptions in sentential or diagrammatic form can, at least in principle, be converted to this canonical form.

The first two elements of a component in the canonical form correspond to what is standardly taken as the components of a mechanism, namely acting entities, or entity-involving occurrents. The latter two elements (space and time) are meant to cover the spatial and temporal organisation of the mechanism. More complex forms of organisation are implicit in the mechanism description. Feedback loops, for instance, would be described as a series of components with alternating complementary activities.7 Connection between components is a matter of overlap between their spatial regions.

7 One example of such a feedback loop is the interaction between pyruvate kinase, pyruvate, and acetyl-CoA, where the first helps to produce the second; the second produces the third; the third inhibits the first, and so on (example from Bechtel & Abrahamsen, 2005, p. 429). In the canonical form, the mechanism description would contain components along the lines of ⟨pyruvate kinase, producing pyruvate, s1, t1⟩, ⟨pyruvate, producing acetyl-CoA, s2, t2⟩, ⟨acetyl-CoA, inhibiting pyruvate kinase, s3, t3⟩, etc., with s and t standing in for the relevant places and times.
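Purely as an illustration, the canonical form lends itself to a simple data-structure rendering. The sketch below encodes the feedback loop from footnote 7; the spatial and temporal labels (s1, t1, etc.) are placeholders of my own:

```python
from typing import NamedTuple

class Component(NamedTuple):
    """One 4-tuple <E, A, S, T> of a mechanism description in canonical form."""
    entities: frozenset[str]  # E: a set of one or more entities
    activity: str             # A: the activity the entities are engaged in
    space: str                # S: (relative) spatial region of the activity
    time: str                 # T: time during which the activity is performed

# A mechanism description is a set of such components. The feedback loop
# from footnote 7, with placeholder space/time labels:
feedback_loop = {
    Component(frozenset({"pyruvate kinase"}), "producing pyruvate", "s1", "t1"),
    Component(frozenset({"pyruvate"}), "producing acetyl-CoA", "s2", "t2"),
    Component(frozenset({"acetyl-CoA"}), "inhibiting pyruvate kinase", "s3", "t3"),
}
```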


Two further questions arise with respect to mechanism descriptions: the question of grain and the question of sameness. The question of grain asks how detailed mechanism descriptions are. In practice, the answer varies, because different particular mechanism descriptions will be exhibited with varying detail. However, we can at least specify the conditions under which one mechanism description is better than another characterising the same ontic mechanism. The answer is: the more detailed a mechanism description, the better. The best mechanism description describes all ontic components, and it describes all of them to maximum constitutively relevant detail. A scientific community which has more fine-grained mechanism descriptions at its disposal is better off than a scientific community with only coarse-grained mechanism descriptions. This is because the former scientific community can explain more contrasts than the latter. Note further that, in practice, scientific communities have descriptions at various levels of grain available to them, and they can construct coarser descriptions (if need be) by substituting less determinate denotations for entities, activities, places and times. Thus, the scientific community with the more fine-grained description will always also be in possession of the coarse-grained description. This clarifies one sense in which more details are indeed better, although see Kohár and Krickel (2021) for a more complete consideration of the More Details Are Better objection.

Secondly, when are two mechanism descriptions equivalent? For mechanism descriptions in non-canonical forms, the answer is simple: two such descriptions are equivalent if and only if they can be transformed into the same canonical form without adding or leaving out any empirical content. Two mechanism descriptions in the canonical form are the same if they contain the same components.

Can the comparison between mechanism descriptions be formalised such that a general recipe for comparing mechanisms can be made available? My proposal is that the minimal set of differences between mechanism descriptions D and D* can be computed by adapting the concept of generalised edit distance. This measure is frequently used in computer science to reason about string matching and, indirectly, about graph matching. Adopting this framework is licensed by the fact that mechanism descriptions can be transformed into the canonical form.

In computer science, the edit distance of a string s from s* is based on the number of steps required to transform s into s*. Each step consists of applying one of a set of permitted edit operations to one character of s. Different applications sanction different sets of permitted edit operations. Additionally, a cost or weight is associated with each permitted edit operation. The edit distance is the sum of these costs (Cohen et al., 2003; see also the papers cited therein). The best-known version of edit distance for strings is the so-called Levenshtein distance (Levenshtein, 1966). Levenshtein distance permits the following operations: character insertion, character deletion and character replacement, all of which have equal cost 1. The Levenshtein distance from the string 'dogged' to 'froggy' is 4: 1 insertion, 2 replacements, and 1 deletion.8 Other related measures use a more restrictive set of edit operations (e.g., disallowing direct replacement; Wagner & Fischer, 1974) or, on the contrary, a more permissive set (e.g., allowing direct transposition; Damerau, 1964). For some applications, weights different from 1 are used, so that some operations are costlier to perform than others (Monge & Elkan, 1997, cited in Cohen et al., 2003). Although the edit-distance framework was originally devised for imprecise string matching, similar measures have since been used for comparing graphs, such as semantic networks (Bunke & Shearer, 1998).

A version of edit distance can be straightforwardly applied to mechanism descriptions in their canonical forms. Instead of performing edit operations on characters in a string, we can define edit operations on components in mechanism descriptions. Two mechanism descriptions D and D* are the same if the edit distance from D to D* is 0.9 Alternative ways of exhibiting contrast phenomena, described by contrast mechanism descriptions D*, D**, D***, etc., can be ranked according to their edit distance from the actual mechanism description D. The one with the lowest edit distance from D is the appropriate contrast.

8 'dogged' → 'fdogged' → 'frogged' → 'froggyd' → 'froggy'.
9 This criterion is equivalent to the one introduced informally above: the edit distance from D to D* is 0 iff D and D* have the same components.


At this point, we are left to specify the appropriate set of edit operations for mechanism descriptions. Firstly, there are straightforward equivalents of insertion and deletion: component-addition and component-deletion are equivalent to character-insertion and character-deletion respectively. There is also an operation roughly equivalent to character-replacement. This is element-replacement, which consists in replacing one of the four elements in a component with a different element of the same category. Thus, one is permitted to change ⟨E, A, S, T⟩ to any of the following: ⟨E′, A, S, T⟩, ⟨E, A′, S, T⟩, ⟨E, A, S′, T⟩ and ⟨E, A, S, T′⟩.

Component-insertion and component-deletion are both weighted 1. Element-replacement, on the other hand, is weighted 0.5. This is to ensure that the cost of changing ⟨E, A, S, T⟩ into ⟨E′, A′, S′, T′⟩ is higher than the cost of changing fewer elements in a component. The distance from ⟨E, A, S, T⟩ to ⟨E′, A′, S′, T⟩ is 1.5, by 3 element-replacements. The distance from ⟨E, A, S, T⟩ to ⟨E′, A′, S′, T′⟩ is 2 – either 4 element-replacements at 0.5 each, or 1 component-deletion and 1 component-insertion at 1 each. This also ensures that this version of edit distance satisfies the requirements for being a metric.

Apart from equivalents of the standard string edit operations, I introduce an edit operation unique to comparing mechanism descriptions, called mass element-replacement. Mass element-replacement is my attempt at discounting systematic changes to multiple components for which a single intervention variable is likely to be responsible. Such systematic changes should be discounted because, in formulating mechanistic explanatory texts, we are interested in finding crucial points of intervention, where one can intervene with minimal effort. In mass element-replacement, applying the same change to a group of relevantly similar components has the same cost as a simple element-replacement: 0.5. By relevant similarity, I mean that:

(a) The entity elements E of these components can be subsumed under the same type description. Thus, we can apply a change to, e.g., all components whose entity element is an electron.
(b) The activity elements A of these components can be subsumed under the same type description. Thus, we can apply a change to, e.g., all components whose activity element is a fission.
(c) The space elements S of these components fall in a specific range, say a sphere with a defined centre and radius.
(d) The time elements T of these components are synchronous or fall into a determined interval.

The range of components targeted by a single mass element-replacement can be narrowed down by specifying that they be similar according to two or more of these similarity criteria. For example, we can specify that we want to change components involving electrons, but only those within 20 centimetres of an electric coil.


The change applied to a group of components specified in this way need not concern the element on which their similarity depends. We can specify a group of components by noting that they involve electrons, and then systematically change their locations in space or slow them down, for instance.

The notion of applying the same change to a group of components also requires elucidation. Mass element-replacements are element-replacements on every component in the specified group. However, only certain types of replacement should be discounted. Specifically, I propose that:

(a) The activity elements of a group of relevantly similar components can be replaced at once with cost 0.5 if all the activity elements A are replaced by elements A* which can be subsumed under the same type description. Replacing all fissions in the mechanism description with fusions, for example, is an edit operation with cost 0.5.
(b) The space elements of a group of relevantly similar components can be replaced at once with cost 0.5 by scaling (changing the size of components by a constant ratio), translation (moving the components in a uniform fashion, e.g., all 30 cm to the right) and rotation (tilting the components over).
(c) The time elements of a group of relevantly similar components can be replaced at once with cost 0.5 by scaling (changing the duration of components by the same ratio).

The mechanism description edit distance with these edit operations (component-insertion, component-deletion, element-replacement and mass element-replacement) is meant to guide judgments about the minimal differences between mechanism descriptions in a way that parallels the results one would obtain by counting interventions, without requiring us to know the intervention variables but only the target variables X1–Xn. In particular, the rules for mass element-replacement are founded on the intuition that similar things can be changed by a single intervention in a systematic way. At the same time, however, this proposal is provisional and subject to amendment as the framework is further developed. I include it to demonstrate the possibility of developing sophisticated semi-formal modes of reasoning about mechanistic explanation.
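In the same provisional spirit, the basic operations (component-insertion and component-deletion at cost 1, element-replacement at cost 0.5) can be sketched in code. The sketch below omits the mass element-replacement discount, which would require type descriptions for entities and activities, and its example components are invented for illustration:

```python
from itertools import permutations

def component_cost(c1, c2):
    """Cost of turning component c1 into c2.

    Pairing a component with None counts as a component-insertion or
    component-deletion (cost 1); otherwise each differing element of the
    4-tuple <E, A, S, T> is one element-replacement (cost 0.5), so a
    component differing in all four elements costs 2, the same as
    deleting it and inserting its replacement.
    """
    if c1 is None or c2 is None:
        return 0.0 if c1 is c2 else 1.0
    return 0.5 * sum(e1 != e2 for e1, e2 in zip(c1, c2))

def md_edit_distance(d1, d2):
    """Edit distance between two mechanism descriptions (sets of 4-tuples).

    Pads the shorter description with None and minimises total cost over
    all pairings. Brute force: suitable for toy examples only.
    """
    n = max(len(d1), len(d2))
    a = list(d1) + [None] * (n - len(d1))
    b = list(d2) + [None] * (n - len(d2))
    return min(
        sum(component_cost(x, y) for x, y in zip(a, perm))
        for perm in permutations(b)
    )

# Toy version of the contrast from Sect. 3.4: spark plugs are no
# switch-point for "going straight vs. turning right"; the wheel angle is.
straight = {("spark plugs", "sparking", "engine", "t1"),
            ("front wheels", "pointing straight", "axle", "t1")}
turning  = {("spark plugs", "sparking", "engine", "t1"),
            ("front wheels", "pointing right", "axle", "t1")}
braking  = {("spark plugs", "sparking", "engine", "t1"),
            ("front wheels", "pointing straight", "axle", "t1"),
            ("brakes", "engaged", "axle", "t1")}

assert md_edit_distance(straight, turning) == 0.5  # one element-replacement
assert md_edit_distance(straight, braking) == 1.0  # one component-insertion
```

Consistently with the braking example in the text, the spark plugs appear unchanged in both descriptions and so would not figure in the MET, while the engaged brakes would.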

3.5 Conclusion: Solving the Problems

In this chapter, I argued that mechanistic why-explanation is best thought of as contrastive explanation. I analysed the best developed account of contrastive mechanistic explanation, due to Craver and Kaplan, and showed that it faces four problems: the Contrasts in Ontology problem, the Empirical Adequacy problem, the Identifying Relevant Switch-Points problem, and the Single-Model Explanation problem.

The Contrasts in Ontology problem and the Empirical Adequacy problem are solved by distinguishing between ontic mechanisms, which are (collections of) EIOs, and therefore not contrastive, mechanism descriptions, which describe ontic mechanisms, and mechanistic explanatory texts (METs), which answer contrastive explanation requests by comparing mechanism descriptions.


The Empirical Adequacy problem does not arise in my account, because models of mechanisms which apparently do not focus on any particular contrast can be treated as mechanism descriptions. Thus, both contrastive and non-contrastive models of mechanisms are accounted for.

The two remaining problems identified in Sect. 3.2 are also solved. The Identifying Relevant Switch-Points problem is solved by the procedure for constructing METs from mechanism descriptions using the adapted edit-distance framework. This procedure allows us to compare and contrast mechanism descriptions in order to obtain a unique MET for any contrast we might be interested in explaining. There is only one wrinkle, namely situations where the contrast class is so heterogeneous that no differences between the mechanism description for P and the most similar description for P′ are shared by all other members of the contrast class. In these cases, the explanandum may need to be further specified in such a way that a cohesive contrast class results. The original explanandum, however, must then be explained piecemeal, by splitting the contrast class into subclasses. This point will become important in Chap. 8.

The Single-Model Explanation problem is also solved by the above procedure, because the procedure identifies a unique set of components which are explanatorily relevant for a given contrast. A single MET might refer to these components and no others. This MET would then be the single-model explanation for this contrast.

References

Batterman, R. W., & Rice, C. (2014). Minimal model explanations. Philosophy of Science, 81(3), 349–376. https://doi.org/10.1086/676677
Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanist alternative. Studies in History and Philosophy of Science Part C, 36(2), 421–441. https://doi.org/10.1016/j.shpsc.2005.03.010
Bunke, H., & Shearer, K. (1998). A graph distance metric based on the maximal common subgraph. Pattern Recognition Letters, 19(3–4), 255–259. https://doi.org/10.1016/S0167-8655(97)00179-7
Chirimuuta, M. (2014). Minimal models and canonical neural computations: The distinctness of computational explanation in neuroscience. Synthese, 191(1), 127–153. https://doi.org/10.1007/s11229-013-0369-y
Cohen, W., Ravikumar, P., & Fienberg, S. (2003). A comparison of string distance metrics for name-matching tasks [Paper presentation]. IIWeb 2003 (IJCAI 2003 Workshop), Acapulco, Mexico. http://dc-pubs.dbs.uni-leipzig.de/files/Cohen2003Acomparisonofstringdistance.pdf
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford University Press.
Craver, C. F. (2014). The ontic account of scientific explanation. In M. I. Kaiser, O. R. Scholz, D. Plenge, & A. Hüttemann (Eds.), Explanation in the special sciences: The case of biology and history (pp. 27–52). Springer.


Craver, C. F., & Kaplan, D. M. (2020). Are more details better? On the norms of completeness for mechanistic explanation. British Journal for the Philosophy of Science, 71(1), 287–319. https://doi.org/10.1093/bjps/axy015
Damerau, F. J. (1964). A technique for computer detection and correction of spelling errors. Communications of the ACM, 7(3), 171–176. https://doi.org/10.1145/363958.363994
Hochstein, E. (2017). Why one model is never enough: A defence of explanatory holism. Biology and Philosophy, 32(6), 1105–1125. https://doi.org/10.1007/s10539-017-9595-x
Illari, P. M., & Williamson, J. (2010). Function and organization: Comparing the mechanisms of protein synthesis and natural selection. Studies in History and Philosophy of Biological and Biomedical Sciences, 41(3), 279–291. https://doi.org/10.1016/j.shpsc.2010.07.001
Kohár, M., & Krickel, B. (2021). Contrast and compare: How to choose the relevant details for a mechanistic explanation. In F. Calzavarini & M. Viola (Eds.), Neural mechanisms: New challenges in the philosophy of neuroscience (pp. 395–424). Springer.
Krickel, B. (2018). The mechanical world. Springer.
Levenshtein, V. I. (1966). Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8), 707–710.
Levy, A. (2014). What was Hodgkin and Huxley's achievement? British Journal for the Philosophy of Science, 65(3), 469–492. https://doi.org/10.1093/bjps/axs043
Monge, A., & Elkan, C. (1997, May). An efficient domain-independent algorithm for detecting approximately duplicate database records [Paper presentation]. SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (DMKD), Tucson, Arizona.
Salmon, W. C. (1984). Scientific explanation: Three basic conceptions. In PSA: Proceedings of the biennial meeting of the Philosophy of Science Association, 1984 (pp. 293–305). https://doi.org/10.1086/psaprocbienmeetp.1984.2.192510
Wagner, R. A., & Fischer, M. J. (1974). The string-to-string correction problem. Journal of the ACM, 21(1), 168–173. https://doi.org/10.1145/321796.321811
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford University Press.
Wright, C. D. (2015). The ontic conception of scientific explanation. Studies in History and Philosophy of Science, 54(1), 20–30. https://doi.org/10.1016/j.shpsa.2015.06.001

Chapter 4

Representations and Mechanisms Do Not Mix

Abstract In this chapter, I outline an argument against the explanatory relevance of neural representations. After clarifying the concept of neural representation and distinguishing it from person-level mental representation, I argue that a truly representational explanation requires that the content of neural representations be explanatorily relevant. I then argue that this is inconsistent with the requirements for mechanistic constitution, and consequently a constitutive mechanistic explanation cannot be representational. The inconsistency stems from the fact that naturalistic analyses of representational contents render them either non-local to, or not mutually dependent with, cognitive phenomena (the details of this argument are expounded in Chaps. 5, 6, and 7). This argument is then compared with its forerunners in the literature – the arguments about causal exclusion of contents; the arguments about internalism/externalism about meanings and mental state individuation; and the argument from methodological solipsism. I also defend this argument from prima facie objections.

Keywords Neural representation · Content indeterminacy · Job-description challenge · Causal exclusion · Locality

The core argument of this book is that the contents of neural representations cannot be mechanism components and therefore cannot appear in mechanistic explanatory texts. This means that neural representations, qua representations, are not explanatorily relevant in constitutive mechanistic explanations of cognitive phenomena. In this chapter, I introduce the notion of neural representation, provide the general outline of the argument, discuss its proper scope, and compare it with various related arguments known from the literature.

4.1 Representations: External and Internal

The theoretical concept of mental representation gained popularity in philosophy and cognitive science between 1950 and 1970. The adoption of the representational theory of mind was a crucial step in the rejection of behaviourism. Whereas logical and psychological behaviourists viewed inference to internal states of the cognitive system as suspect, cognitivists had no qualms with explaining intelligent behaviours by positing internal states, specifically internal representational states (Burge, 1992).

An internal state is a representation only if it carries content about something else.1 The idea of content-carrying is easier to understand in terms of the many external representations we know from our daily life. One particular group of external representations is linguistic representations, such as words or sentences (Boghossian, 1995; Fodor, 1975; Jacob, 2019). Consider the sentence "Geneva is the capital city of Switzerland." This sentence is physically realised as a pattern composed of ink-marks. This pattern of ink-marks, however, differs from a random ink-blot because, unlike a random ink-blot, the sentence bears a certain relation to the city of Geneva. Namely, the sentence refers to, or is about, the city. The city is its intentional object. The pattern of ink-marks that physically composes the sentence is technically called the representational vehicle. The sentence represents Geneva to be a certain way – i.e., as the capital of Switzerland. Its meaning, or representational content, can be identified with the conditions under which the sentence is true (Frege, 1893). This particular sentence is, as it happens, false. Geneva is not in fact the capital city of Switzerland. This illustrates an important point about representations – their representational content can differ from the states of affairs actually obtaining in the world. In other words, representations can misrepresent their intentional objects, or even represent non-existent objects (Brentano, 1874/1973).

Footnote 1: Content-carrying is a necessary condition of being a representation but might not be sufficient. See Chap. 6 for some examples of theories which require representations to fulfil additional conditions.

Another everyday example of an external representation is a map (Rescorla, 2009). A map of Bochum is about the city of Bochum and its various streets and landmarks. It represents these streets and landmarks as being located in a particular direction and distance from one another. Here the representational vehicle is again a set of ink-marks. The representational content, in this case, is identified as a set of accuracy conditions. The map can be more or less accurate, depending on the ratio of the represented relations between places in Bochum it gets right to the total number of represented relations.

The basic tenet of representationalism in cognitive science is that explaining intelligent behaviour requires positing or discovering internal representational states – states which are like sentences or maps, but which form part of the cognitive system itself. The idea is that just as we can use external representations to reason about the world, the cognitive system manipulates internal representations in order to extract information through the sensory system and transform them into plans of action to be executed by the motor system (Pitt, 2020). Some examples of such internal representations, or of theories positing internal representations, include the following:

Universal Grammar Chomsky (1959) argues that the human cognitive system must contain innate representations of universal grammar in order to explain the rapid acquisition of language by infants. The universal grammar is a set of constraints on possible grammatical rules in any language. From these constraints and the values of a number of parameters, one can construct the grammar of any language. If the infant has such a universal grammar, they can determine the appropriate values of the parameters from the limited linguistic input they receive from their caretakers and determine the correct grammar for their first language. Chomsky uses evidence from comparative linguistics to deduce the set of universal grammar rules, constraints and parameters represented in the infant mind, thereby spawning the discipline of generative syntax (Chomsky, 1965).

Short-Term Memory Miller (1956) used behavioural data to deduce the capacity of short-term memory, that is, the number of items subjects were able to recall from a list a short time after the list had been presented to them. Miller et al. (1960) explained the ability to recall the items by positing that representations of the items were held in a short-term memory buffer. The limit on the number of items available for recall was then explained by the limited capacity of said buffer.

Concepts Fodor (1975) argues that thought is best accounted for by positing a limited set of basic concepts, which are representations of objects, properties, relations, etc. These basic concepts can be composed into sentence-like units, just like words in a language of thought. The processes responsible for composing concepts into sentences of the language of thought exhibit productivity and systematicity. This accounts for the productivity and systematicity of thought.

Sensory and Motor Homunculi (Penfield & Boldrey, 1937; Schott, 1993) The sensory homunculus is a section of the cerebral cortex in the parietal lobe where neurons activate in response to tactile stimulation of particular body parts. Portions of the sensory homunculus are topographically organised in parallel with the layout of the body parts. E.g., the sets of neurons which activate in response to the touch of each finger are next to each other, but far away from the neurons which activate in response to stimulation of the toes (Sutherling et al., 1992). The sensory homunculus thus represents the body parts, and activity in a portion of the homunculus represents tactile stimulation of the corresponding body part. A parallel structure in the motor cortex in the frontal lobe is the motor homunculus, where groups of topographically organised neurons activate when a particular body part is moved. The motor homunculus represents the body parts, and activity of a set of neurons in the motor homunculus represents movement of the corresponding body part.


Edge Detectors Hubel and Wiesel (1962) found a type of neuron in the cat primary visual cortex whose activity correlates with sharp changes in illumination in particular portions of the visual field. Hence, exposing the cats to a bar-shaped light source on a dark background would cause the edge detectors whose receptive fields the bar passes through to activate. The activity of the edge-detector cells is taken to represent the presence of an edge in a particular location in the visual field, since sharp changes of illumination correspond to edges in a naturalistic environment. Similar results were found in monkey striate cortex (Hubel & Wiesel, 1968).

Place Cells O'Keefe and Dostrovsky (1971; see also O'Keefe & Nadel, 1978) reported finding a type of cell in the rat hippocampus which was most active when the rat was located in a specific location in an enclosure or a maze. These "place cells", together with "head direction cells", whose activity correlates with specific directions in which the animal's head is turned, are thought to be involved in spatial navigation. Place cells represent location and thus enable the rat to "know where it is".

Like external representations, internal representations also exhibit the vehicle/content distinction and the ability to misrepresent. For instance, in the case of Fodor's language of thought, the vehicles have a sentence-like structure, while their contents are propositions referring to states of affairs. Demonstrably, we can think false thoughts. When we do, our cognitive system tokens sentences in the language of thought which misrepresent the world. For another example, take edge detectors. These likewise exhibit the vehicle/content distinction. The vehicle here is neural activity – the frequency of action potentials released by the cell. The content is something like "horizontal edge in upper left of the visual field" or similar. Edge detectors can misrepresent, for instance, if the scene is poorly illuminated, or if the visual system is under the influence of drugs, among other reasons.

There is one important difference between external representations and internal representations. External representations, such as sentences of a language and maps, are interpreted by intelligent users who are able to understand the content carried by the vehicles. Internal representations, however, are not interpreted in this way.2 Manipulation of internal representations is supposed to explain the intelligent behaviour of humans and animals. Therefore, positing internal interpreters able to understand the representation's content would lead to regress – the intelligence of these internal interpreters would have to be explained. This problem is known as the homuncular fallacy (Dennett, 1975).3

Footnote 2: In Dretske's (1988, Ch. 3) terms, external representations form a Type I representational system, while neural representations form a Type III representational system.

Footnote 3: Note that the homunculi involved in the homuncular fallacy are not the sensory or motor homunculi mentioned above.

In order to avoid the homuncular fallacy, cognitive scientists take their cue from computer science. A computer is often regarded as a device able to manipulate representations of data without interpreting them.4 The computer hardware is set up in such a way that it manipulates representational vehicles purely according to their syntactic properties (Piccinini, 2015). Syntactic properties are a subset of physical properties to which the computer differentially responds. For instance, a computer processor might differentially respond to high and low voltage on the pins of a chip. Setting the voltage high on a particular pin or a set of pins is then a way to get the processor to perform a different action. This means that the voltage levels are syntactic properties. Each possible setting of the voltage levels across the pins is a different representational vehicle.

Footnote 4: Whether manipulating representations with semantic contents is essential to computation is a matter of some debate. See, e.g., Shagrir (2001), Piccinini (2015) or Dewhurst (2018). However, to make the computer analogy plausible, it is only necessary that computers be capable of manipulating representations, not that this be essential to them.

There is a mapping between the syntactic properties of the representational vehicles processed by a computer and their contents. Particularly, each of the pins might encode a binary digit, and a set of the pins might be used together to encode a number. Likewise, there is a mapping between the manipulations performed by the computer on the representational vehicles and the transformations of contents carried by the vehicles. The computer chip can be, for instance, set up in such a way that, through the operation of its internal components, it will always set the voltage on a set of designated output pins to voltage levels that encode the increment of the number encoded by the input pins. The computer's sensitivity to syntactic properties of the vehicles can thus mimic sensitivity to their semantic contents. This is because the syntax and the semantics march in step. In Haugeland's words: "Given an appropriate formal system and interpretation, the semantics takes care of itself" (Haugeland, 1981, p. 24).

Cognitive science applies the insight from computer science to the investigation of the mind by considering the cognitive system a kind of natural computer (Clark, 2014; Rescorla, 2020). The mind is taken to be the "software" – a set of algorithms for transforming representational states running on neural "hardware". Cognition consists in neural computation – a process in which internal representations are manipulated because the brain is set up in such a way that causal sequences of brain states taken as representational vehicles correspond to steps in an algorithm for performing some calculation defined over the contents carried by those vehicles. This dispenses with the need for interpreting the representational contents of the vehicles.

One difference that remains between artificial computers and the brain is that the contents of the representations in an artificial computer are conventionally determined, whereas the contents of internal representations in the mind/brain should be determined by objective naturalistic properties. I say more on this in Sect. 4.3 and in Chaps. 5, 6, and 7.
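Haugeland's slogan can be made concrete with a toy sketch. The following is purely illustrative: the four-pin vehicle, the encoding, and all names are assumptions of my own, not a description of any actual chip. A purely syntactic operation on voltage patterns tracks a semantic operation (incrementing the encoded number) because the two have been set up to march in step:

# A toy formal system plus interpretation, invented for illustration.
Vehicle = tuple  # a representational vehicle: pin voltages, high = 1, low = 0

def interpret(vehicle):
    """The content mapping: read the pins as an unsigned binary number."""
    value = 0
    for bit in vehicle:  # most significant pin first
        value = value * 2 + bit
    return value

def manipulate(vehicle):
    """A purely syntactic operation: flip trailing high pins to low and the
    next low pin to high. No step consults the number the pins encode."""
    bits = list(vehicle)
    for i in range(len(bits) - 1, -1, -1):
        if bits[i] == 0:
            bits[i] = 1
            return tuple(bits)
        bits[i] = 0  # carry: keep flipping
    return tuple(bits)  # overflow wraps around to all-low

# Syntax and semantics march in step: manipulating the vehicle by its
# physical properties alone amounts to incrementing the encoded content.
for n in range(15):
    vehicle = tuple(int(b) for b in format(n, "04b"))
    assert interpret(manipulate(vehicle)) == interpret(vehicle) + 1

Nothing in manipulate consults interpret; the semantics "takes care of itself" only because the syntactic operation was chosen to mirror the content-level one.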

Representationalism as I characterise it here is a fairly broad position – any position that views representational contents of internal states as explanatorily relevant will qualify, provided that the vehicle/content distinction is observed and the possibility of misrepresentation is affirmed. However, the term "representation" and, correspondingly, "representationalism" is also sometimes used to pick out more demanding positions. In particular, some theorists may require that representations have a particular format (e.g., that they all be language-like; see Pylyshyn, 2002). There might also be additional requirements on representational vehicles (perhaps that vehicles cannot be distributed; see Fodor & Pylyshyn, 1988). Finally, additional requirements might be placed on the functional role of representations in the cognitive system (such as that they mediate perceptual constancies; see Burge, 2010). I view these more circumscribed uses as species of the more general position which I target. Indeed, the debates between those endorsing one of these circumscribed characterisations and those opposed are usually understood by their participants as internal debates about the character of representations understood in the more general sense. When non-representationalists adopt such a demanding view of representationalism (as, e.g., Hutto & Myin, 2013), they put themselves at risk of being accused of strawmanning. I consider this risk not worth taking.

Finally, there are also some uses of the term "representationalism" which are fully orthogonal to mine. Especially in philosophical debates about consciousness, representationalism is identified with the view that conscious experience has representational content (i.e., that it is a representation) or that the phenomenal character of conscious experience reduces to its representational content (see, e.g., Tye, 1995). I have nothing to say about these issues in this work.

4.2 Mental vs. Neural Representations

Up until this point, I have used the term internal representation in order to cover two distinct, but related, sub-classes: a) mental representation and b) neural representation. In older literature, these concepts are not always distinguished, which may lead to confusion. Instead, the term mental representation is used for all internal representations. However, since I only intend my argument to apply to neural representations, I will from now on distinguish the two.

Universal grammar, short-term memory as investigated by Miller, as well as concepts and thoughts, are examples of mental representations. On the other hand, sensory/motor homunculi, edge detectors and place cells are examples of neural representations. Paradigmatic examples of mental and neural representations differ along the following dimensions:

1. Conscious accessibility: The contents of mental representations are usually consciously accessible, or at least potentially consciously accessible in response to an appropriate stimulus. For instance, the contents of short-term memory can be recalled when an experimenter asks about them. The contents of thoughts and perceptual states are conscious, and we can introspect them at will. The contents of neural representations are typically not consciously accessible. For instance, we cannot introspect the sensory homunculus, or the representations of edges in our visual cortex.


2. Personal/sub-personal attribution: Mental representations are usually person-level representations. The rules of universal grammar, for example, are possessed by the learning infant, as are the contents of short-term memory, and thoughts. We use locutions such as "They think...", "They remember...", etc. when attributing mental representations. Neural representations are sub-personal. Their contents are not attributed to the person (or to the cognitive system as a whole in the case of non-person cognitive systems). Instead, they are possessed by a limited part of the cognitive system. For instance, the contents of place cells are not usually thought of as belonging to the rat in any way, and they are not available to the whole of the rat's cognitive system.5

Footnote 5: See Dennett (1969) and Drayson (2014) for more detail on the personal/sub-personal distinction.

3. Importance of implementation: For research centred on mental representation, implementational details do not play a significant role and can be abstracted from. Usually there is a background commitment to neural implementation of some sort, but the way this implementation works is considered of little importance for the central explanatory goals to be achieved. There might be a one-to-one mapping between the contents of the mental representations posited by the theory and their neural realisers. Or the realisers might be neural representations whose contents only correspond loosely, or in an aggregate way, to the contents of the posited mental representations. Or perhaps mental representations are realised by contentless, and therefore non-representational, neural happenings. All of these options are open to theorists committed to mental representations, and none of them has any bearing on the types of models they develop.6 On the other hand, the concept of neural representation is directly concerned with implementational details. For all paradigm cases of neural representations, we know the way in which the representation is implemented in the brain – by groups of cortical neurons, as in the case of sensory/motor homunculi, by individual cells in the early visual system, like the edge detectors, or by pyramidal cells in the hippocampus. When a neuroscientist claims that they have discovered a neural representation of something, they are attributing content to a particular neural structure or state. The cognitive psychologist concerned with mental representations can be much more liberal with the positing of various representational states.

Footnote 6: Cf. the literature on the autonomy of psychology, e.g., Dennett (1969), Feest (2003), or Aizawa and Gillett (2011).

4. Behaviour-based/neurophysiology-based epistemology: The difference in the way mental and neural representations are investigated is also connected to the difference in focus on implementation details. The contents of mental representations are usually determined based on behavioural data. For example, the structure of universal grammar was posited based on a comparison between linguistic features existing in the various world languages. The contents of working memory are investigated by report, or by behavioural measures. The contents of thoughts are either established by report, or attributed by interpreting the behaviour of the subject. On the other hand, contents of neural representations are discovered by measuring correlations between neural activity and various candidate stimuli, such as in the case of edge detectors, which are most active when edges are presented in particular portions of the visual field, or in the case of sensory homunculi, in which cells are selectively sensitive to touch of a particular body part of the subject.

5. Entailment/mathematical relations between contents: Another difference between mental and neural representations is the type of relations into which their contents typically enter. The contents of mental representations usually enter into relations of entailment or logical dependence. This is because the paradigm cases of mental representations, such as thoughts, are viewed as having compositional structure, and their contents are thought of as conceptual in nature. Therefore, the processes in which these representations play a role are thought of as sensitive to this structure and as performing inferences, for instance. On the other hand, the contents of neural representations are usually thought of as non-propositional and cannot enter into entailment relations. Instead, paradigmatic neural representations often represent some quantitative feature of the environment, such as the current location in a tessellated hexagonal grid, as with place cells, or the location of an edge in the visual field, in the case of edge detectors. The vehicles which carry these contents are typically the frequency of action potentials, or the timing of action potentials produced by the cell. The nature of the contents and the vehicles is co-responsible for the fact that the processes thought to manipulate these representations are usually thought of as mathematical processes – integration, addition, or other mathematical transformations (e.g., Marr, 1982; Shadmehr & Wise, 2004).

Note that not all cases of mental/neural representation exhibit all the features on the list. For instance, the rules of universal grammar are not consciously accessible and cannot be introspected. However, attributions of representational content will typically fall into one or the other camp by exhibiting most of the features typical of neural or mental representations. Universal grammar, though not consciously accessible, is predicated of the whole person, investigated by behavioural measures, abstracted from implementation, and its contents are propositional, interacting with hypotheses about the grammar of the infant's first language.

4.3 Content-Determination and the Job-Description Challenge

What sets neural representations apart from other neural states is the fact that neural representations supposedly carry contents, whereas other neural states do not. But what content does a neural vehicle carry, and in virtue of what? External representations often get their contents by fiat, as when we decide that the points on a map will represent cities and lines will represent roads, or that red will represent stop and green go. Often the conventions establishing the contents of external representations are chosen in such a way that their vehicles are especially suited to carry the contents they do on account of their physical properties. For instance, maps are often made such that blue lines represent rivers, because large bodies of water do look blue, and rivers do look like lines from above. But the conventions might also be selected purely arbitrarily. For instance, many maps use a red cross to symbolise hospitals, which is a product of pure historical accident. In either case, what determines the symbol used is just a convention. One could make a map where rivers are represented by green dots at the source and mouth, or a map where the symbol for a hospital is a model of the Charité in Berlin.

Neural representations are supposed to help explain intelligent behaviour, and so their contents must not be determined by fiat by the researchers themselves. There must be an objective way to identify the contents of any particular neural representation. However, just as we cannot posit internal interpreters, we cannot posit internal attributors either, because doing so would lead to the homuncular fallacy. These considerations, among others, have led to the search for a naturalised account of content (Ryder, 2009; Shea, 2018). In fact, multiple such naturalised accounts are available. The most prominent of these are: indicator theories (e.g., Dretske, 1981; Eliasmith, 2000, 2013; Usher, 2001), structural resemblance theories (e.g., O'Brien & Opie, 2004; Shea, 2014; Swoyer, 1991), and teleosemantics (e.g., Millikan, 1984; Neander, 2017). These will be introduced in more detail in the following three chapters. At present, all we need to know is that for indicator theorists, the content of a representation is determined by the probability with which the representation co-occurs with various stimuli. For structural resemblance theorists, the content is established by the existence of a mapping between the physical properties of the representational vehicle and the physical properties of the intentional object. For teleosemanticists, the content of a representation is determined by the biological function of the representation's producer, or of its consumer, or of both.

There is a set of problems with which any theory trying to naturalise representational content has to contend, which can be summarily referred to as the problem of content indeterminacy (Martinez, 2013; Rowlands, 1997; Summerfield & Manfredi, 1998). The gist of the issue is that the properties used to naturalise content do not determine the contents in a way which would allow us to distinguish cases of misrepresentation from cases of veridical representation, because the content is not determinate between two or more options, some of which are appropriate to the situation, others not.

For example, consider the so-called bug-detector cells discovered by Lettvin et al. (1959). These cells in the frog retina are selectively responsive to small black moving objects in the frog's visual field. Biologists consider the function of these cells to be detecting bugs, which the frog could eat, hence the name. This would suggest that the content of the bug-detector's activation is something like "bug at x, y, z", where x, y, and z are coordinates in the frog's egocentric space, at least according to (some versions of) teleosemantics. However, biological function is usually taken to be whatever the trait with the function was selected for. In the frog's natural environment, almost all black moving dots are bugs. Therefore, it is indeterminate whether the function of the bug-detector activation is to detect specifically bugs, or small black moving objects, since both confer adaptive advantages on the frog, given the correlation. The problem occurs when a wily experimenter starts flinging small black beads of plastic at the frog. When the bug-detector neurons activate, it is indeterminate whether they are misrepresenting the presence of small black beads as the presence of bugs, or whether they are correctly representing the presence of small black beads.7 Since the ability to misrepresent is crucial to representationalism, the content indeterminacy problem is an important worry for all representationalists.

Another, particularly pernicious variant of the content-indeterminacy problem is the so-called disjunction problem (Adams & Aizawa, 2017; Fodor, 1984; Roche & Sober, 2021). We can illustrate the disjunction problem using the same bug-detector example. Suppose, as before, that the bug-detector cells activate and the frog starts snapping its tongue at the small black beads flicked by the experimenter. The representationalist wants to say that, in this case, the frog misrepresents black beads as flying insects, based on the fact that, so far, the bug-detector's activation has correlated with instances of flying insects. However, an objection can be made: the activation of bug-detector cells has correlated not just with the presence of flying insects, but also with the presence of the disjunctive type ⟨flying insect or small black bead⟩. In fact, it seems that the correlation with this disjunctive type is more robust than the correlation with just flying insects. Sometimes when the bug-detector activates, it is presented with an instance of the disjunctive type that is not an instance of a flying insect after all.

Importantly, this point generalises beyond the bug-detector example. By carefully restricting the scope of individual disjuncts while increasing their number, it is possible to craft a perfectly fitting disjunctive content for each representational vehicle in such a way that all tokens of the vehicle would correlate with an instance of the disjunctive content. Consider a vehicle which correlates with the presence of the colour red. Its content, assuming indicator semantics, is then just the presence of red. Now suppose that through some fluke the vehicle is tokened in the presence of green on one occasion. Even if the fluke never occurs again, there is still a possibility to assign the content ⟨red, or green at t1⟩, where t1 is the time of the one occasion on which the vehicle was tokened in the presence of green. This content correlates with the tokening of the vehicle more strongly than the content "red". Thus, what seemed like an obvious case of misrepresentation turns out to be a case of correct representation of the disjunctive content. This, of course, generalises. If the disjunction problem is not addressed, the difference between correct representation and misrepresentation cannot be drawn, and hence representationalism fails.

These and other varieties of content-indeterminacy problems are well known in the literature on representationalism, and the best theories have proposed solutions to the problem. Sometimes new theories have arisen specifically in order to address one or another variant of the content-indeterminacy problem (e.g., Fodor, 1990).

Footnote 7: See, e.g., Neander (2017) or Artiga (2021) for other analyses of this case.
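The gerrymandering recipe behind the disjunction problem is mechanical enough to be run as a toy computation. The sketch below assumes a bare-bones indicator semantics on which the content is whatever condition best correlates with the vehicle's tokenings; the data, times, and names are invented for illustration:

# Observed tokenings of a putative "red detector" vehicle. One fluke
# tokening (t = 7) occurred in the presence of green.
tokenings = [
    {"t": 1, "stimulus": "red"},
    {"t": 4, "stimulus": "red"},
    {"t": 7, "stimulus": "green"},  # the fluke
    {"t": 9, "stimulus": "red"},
]

def content_red(event):
    """Candidate content: red is present."""
    return event["stimulus"] == "red"

def content_disjunctive(event):
    """Gerrymandered candidate: red, or green at t1 (here t1 = 7)."""
    return event["stimulus"] == "red" or (
        event["stimulus"] == "green" and event["t"] == 7
    )

def fit(content):
    """Fraction of tokenings on which the candidate content obtained."""
    return sum(content(e) for e in tokenings) / len(tokenings)

print(fit(content_red))          # 0.75 - one apparent misrepresentation
print(fit(content_disjunctive))  # 1.0  - the disjunctive content never misses

Since a fresh disjunct can always be tailored to each awkward tokening, the disjunctive candidate wins by construction, and the apparent misrepresentation disappears; that is exactly the problem.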


However, in this book I will not address the content-indeterminacy problem, or any of its many offshoots. The content-indeterminacy challenge is an ontological worry for representationalism. If it cannot be resolved, then there are no representations, because neuronal activity cannot be thought of as misrepresenting anything. The argument I run is not meant to show that there are no neural representations, only that they cannot play any explanatory role in constitutive mechanistic explanations in cognitive neuroscience.8

Footnote 8: An anonymous referee has pointed out that lack of explanatory relevance could be viewed as evidence of non-existence. However, I do not wish to push this angle here. Focusing on explanatory relevance allows me to sidestep the issue of whether "representation" is a theoretical or an observational predicate, raised by Thomson and Piccinini (2018).

Arguments questioning the explanatory utility of neural representations are related to a problem referred to by Ramsey (2007) as the job-description challenge. The job-description challenge for the proponents of representationalism is to show that the internal representations posited by representational theories of cognition play a "recognizably representational" (ibid., p. 28) functional role in the cognitive system. Should such a functional role not be found, argues Ramsey, calling these states "internal representations" is at best misleading. The term "job-description challenge" suggests that the argument is not supposed to establish whether representationalism or non-representationalism is better supported. Instead, the idea is that the representationalist should elaborate their position before it can be accepted. Indeed, Ramsey follows up his formulation of the job-description challenge by examining whether some such specifically representational functional role is attributed by certain theories in the cognitive sciences, and ultimately answers in the affirmative.

The argument I am about to introduce could be labelled an explanatory job-description challenge, analogously to Ramsey's functional job-description challenge. While Ramsey views showing that "there [is] some explanatory benefit in describing an internal element of a physical or computational process in representational terms" (ibid., p. 34) as part of the job-description challenge, his focus is primarily on characterising what it is for an internal state to function as a representation. My main concern, on the other hand, is how (and whether) neural representations play any explanatory role qua neural representations. The framework of constitutive mechanistic explanation provides an explanatory job-description, which comes with well-specified preconditions. Explanatory relevance requires being a component in the constitutive mechanism for the phenomenon. For neural representations to fulfil this description qua representations, they must fulfil it in virtue of their contents. As I argue in the following, they cannot do this (with the exception of certain classes of neural emulators discussed in Chaps. 6 and 7).

4.4 Why Contents Are Not Explanatorily Relevant

The core argument of this book goes as follows:

1. All and only explanatorily relevant factors are mentioned in mechanistic explanatory texts.
2. Only mechanism components are mentioned in mechanistic explanatory texts.
3. All mechanism components are local to the phenomenon.
4. All mechanism components satisfy either the horizontal surgicality or the reciprocal manipulability criterion for mutual dependence with the phenomenon.
5. All representational contents are non-local to any cognitive phenomenon, or they fail both criteria for mutual dependence with the phenomenon.9
6. Therefore, no representational contents are mechanism components. (3, 4, 5)
7. Therefore, no representational contents are mentioned in mechanistic explanatory texts. (2, 6)
8. Therefore, no representational contents are explanatorily relevant factors. (1, 7)

Footnote 9: See Chaps. 6 and 7 for the case of neural emulators. Since only some (not all) neural emulators form an exception to premise 5, I am still confident that the argument holds in principle. The extent to which this detracts from my conclusion is for the reader to judge.

Premises 1 and 2 are consequences of adopting the mechanistic framework of explanation. Of course, one might instead adopt a pluralist position, whereby mechanistic explanations would be appropriate for some explananda, and other types of explanations for other explananda. However, even the pluralist would accept that when dealing with explananda for which mechanistic explanations are appropriate, one should accept premises 1 and 2. As a corollary, one should judge purportedly mechanistic explanations against the standards set by premises 1 and 2, so that a purportedly mechanistic explanation which mentions non-components in the mechanistic explanatory text is defective.

Premises 3 and 4 were defended in Chap. 2, where I pointed out that locality, together with the disjunction of simultaneous manipulability with horizontally surgical interventions (Baumgartner et al., 2020) and reciprocal manipulability along the lines of Krickel's (2018) treatment, is the best way to analyse mechanistic constitution in light of the problems with the original mutual manipulability account raised by Baumgartner and Gebharter (2016) and Romero (2015).

The crucial step of the argument is premise 5. The locality of contents to cognitive phenomena depends on which particular theory of content determination is adopted by the representationalist. Similarly, whether representational contents can be simultaneously manipulated with cognitive phenomena by a horizontally surgical intervention, or whether they can be reciprocally manipulated with cognitive phenomena, also depends on particular theories of content determination. This is because the representationalist position implies that it is the particular content carried by the representational vehicle which plays an explanatory role. The mere fact that the vehicle carries any content, as opposed to no content at all, cannot be used to explain any intelligent behaviour. Consider the example of a coded message which looks on the surface like a pasta recipe but can be decoded as a message concerning the movements of enemy submarines. If I take this message and decide to cook the pasta, my intelligent behaviour is presumably explained by the fact that the message carried content about how to cook pasta, not by the fact that the message also carried content about submarines, or by the fact that the message carried content as opposed to being gibberish.

This shows that it is the vehicle's property of carrying a particular content which would have to be a mechanism component in order for representational contents to be explanatorily relevant. But the locality of, and the mutual dependence between, this property and the phenomenon crucially depend on the way in which the contents are fixed. Chapters 5, 6, and 7 will argue for premise 5 of the argument by evaluating the complex properties fixing contents according to indicator, structural resemblance, and teleosemantic theories of content against the norms of locality, horizontal surgicality and reciprocal manipulability. In each of these chapters I will attempt to establish that the evaluated property is both non-local and fails both criteria for mutual dependence, even though, technically, failure of either locality or mutual dependence suffices to show that contents are not components. The conclusions then follow straightforwardly.

One might think that the locality of properties is derivative on the locality of the bearers of these properties and, therefore, wonder what point there is in investigating the locality of representational contents. But this is the wrong way to look at the issue. The problem is that some property ascriptions act as a covert way of asserting that certain states of affairs obtain. For instance, ascribing the property widowed to someone is equivalent to asserting that this person's spouse has died. As we will see in Chaps. 5, 6, and 7, all prominent theories of representational content take content to be a property analogous to widowhood in the sense that ascribing contents is a way of asserting that a complex state of affairs obtains, has obtained, or would obtain in some counterfactual situations. Therefore, I contend that the locality of contents is independent of the locality of vehicles.

One way to see why we should worry about the locality of contents is to suppose that contents turn out to be mechanism components. If contents are mechanism components, then it should be possible to identify lower-level mechanisms which constitute the contents. Without prejudging the issue of locality, we can straightaway note that since content ascriptions are equivalent to asserting the obtaining of certain states of affairs, the only way to manipulate contents is by changing these states of affairs. This suggests that the only candidate components for such a mechanism are the states of affairs covertly asserted to obtain by the content-ascription according to a particular theory of content. Since these are the only candidate components for a lower-level mechanism constituting the representational content, if they are all non-local to the original explanandum phenomenon, then the content must also be non-local to the phenomenon, because the components of the mechanism responsible for content must be local to the content. See Fig. 4.1 for a graphical representation of this idea.


Fig. 4.1 The phenomenon is constituted by x1, representational content (RC) and x3. The components of x3 are spatiotemporally located in the space-time region of the phenomenon. The supervenience base of RC – e.g., the environment or evolutionary history – is non-local to the phenomenon. Hence, the mechanistic hierarchy does not line up

If a subset of the states of affairs covertly asserted to obtain by the property ascription happens to be local to the phenomenon, while another subset is non-local, then only the local subset can be components in the mechanism for the phenomenon. The property ascription, however, asserts the obtaining of both the local and non-local states of affairs. Suppose we decided to consider the property on the whole local to the phenomenon and included it in mechanism descriptions and explanatory texts. Then the mechanism descriptions and explanatory texts would include a covert assertion of non-local, and hence not explanatorily relevant, states of affairs. Such a description or explanatory text would be defective. Therefore, a property whose ascription covertly asserts the obtaining of a state of affairs is local only if the state of affairs as a whole is local to the phenomenon.

In arguing against simultaneous manipulability of representational contents with cognitive phenomena by horizontally surgical interventions, I consider firstly experimental interventions which can actually be performed in the lab, where such interventions exist. However, some of the theories of content I survey identify representational contents with properties on which no actual interventions can be performed. In those cases, I attempt to construct a merely possible intervention which would change the representational contents and the phenomenon. There is no one general way in which the different theories of content fail the test. Some ways in which the test can be failed include, but are not limited to, the following:

(a) there are no interventions on representational contents;
(b) interventions cannot manipulate the phenomenon and representational contents simultaneously;
(c) interventions cannot be horizontally surgical on the level of the phenomenon.
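The flavour of these failures can be previewed with a deliberately simple toy model in the interventionist idiom; it anticipates the bottom-up strategy described just below. Every variable and dependency in the sketch is invented for illustration and is not a claim about any particular cognitive system:

# Toy causal setup: the phenomenon P depends only on the vehicle's physical
# state V; the content C supervenes on V together with a distal variable E
# (e.g., what V co-occurs with, or its selection history).

def phenomenon(vehicle):
    """P is produced by the vehicle's physical properties alone."""
    return vehicle % 2  # toy dependence of P on V

def content(vehicle, environment):
    """C is fixed by the vehicle together with distal facts."""
    return f"indicates-{environment}" if vehicle else "silent"

vehicle = 1
environment = 42
c_before = content(vehicle, environment)
p_before = phenomenon(vehicle)

# An ideal intervention on C with respect to P must hold V fixed, since V
# lies on a separate causal path to P. So only the distal facts change.
environment = 7
c_after = content(vehicle, environment)
p_after = phenomenon(vehicle)

assert c_before != c_after  # the intervention changed the content...
assert p_before == p_after  # ...but left the phenomenon untouched

In this toy model, holding the vehicle fixed is what ideal interventions on content with regard to the phenomenon require, and under that constraint the content variable makes no difference to the phenomenon.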


The general strategy I use when arguing that the properties fixing representational contents are not reciprocally manipulable with the phenomenon is as follows: top-down manipulability usually fails because it would require backward causation. An intervention would have to change the contents by changing the cognitive phenomenon in some way. However, changing the content requires either changing the past, or changing the truth value of some counterfactuals (more details in Chaps. 5, 6, and 7). For bottom-up manipulability, the argument is that for any intervention which changes the cognitive phenomenon there is a separate causal pathway from the intervention variable to the variable representing the cognitive phenomenon via the physical properties of the representational vehicle. This causal pathway is independent of the pathway through the variable representing the representational contents. As such, the vehicle variable must be held fixed in ideal interventions on contents with regard to the cognitive phenomenon. But as we will see, such ideal interventions do not affect the cognitive phenomenon.

While this argument is meant to hold for representational contents in general, there is one exception. Representational contents of some neural emulators (Grush, 1997, 2004), namely those which emulate internal components of the cognitive system or of the organism, can be mechanism components. I will say more on this exception in Chap. 6, when I deal with structural representations. There I will also show why this exception does not vindicate the entire representationalist enterprise.

There are a number of arguments in the literature which look superficially like the argument I advance here. Let me therefore spend the rest of this section differentiating this argument from those forerunners.

Forerunner 1 (Causal Exclusion) One might think of the causal-exclusion arguments against the possibility of mental causation, and by extension against forms of dualism and non-reductive physicalism (see e.g., Bennett, 2008; Kim, 1998; Kroedel, 2008). These arguments are based on the observation that the physical realm is causally closed. This means that for every event there is a sufficient physical cause. Any mental events or properties must either be epiphenomenal, having no causal influence on the physical, or they must overdetermine their physical effects, since there is already a perfectly good sufficient physical cause.

These arguments might resemble the general spirit of my argument. The causal exclusion arguments assume that special weight should be placed on causal efficacy, so that properties that lack causal efficacy can be safely discounted. Similarly, I place particular weight on mechanistic constitution and explanatory relevance and argue that representational content can be discounted based on lacking those properties.

There is another connection between my argument and the causal exclusion arguments. Namely, part of my argument uses the reciprocal manipulability test for mechanistic constitution. The account of reciprocal manipulability I use requires mechanism components to be causally relevant to some temporal parts of the phenomenon constituted by the mechanism. One might therefore wonder whether my argument against reciprocal manipulability between representational contents and cognitive phenomena is not just a rehashing of well-known causal exclusion arguments.


However, the interventionist account of constitution and causation has been suggested as a way of allowing for mental causation (Krickel, 2018, Ch. 8; Strevens, 2008; Woodward, 2015). This is because, on this account, higher-level variables may be causally relevant in producing some effects. While only their low-level realisers are causally efficacious, higher-level variables, including, e.g., mental/neural representational contents, may nevertheless be causally relevant provided an appropriate dependence relation obtains between the higher-level variable and the lower-level causally efficacious realiser. Thus, one could have mental causes in the picture if the variables representing these mental causes in the causal model do make a difference to the effect variables. If one thinks that the problem of the causal efficacy of the mental is analogous to the problem of mechanistic componency for representational content, one should therefore expect the mechanistic framework of explanation to vindicate representational contents. After all, it is based on the interventionist account of causation.

However, as I show, even assuming the mechanistic theory of constitutive explanation, which is generally amenable to the explanatory relevance of higher-level variables, representational contents cannot be explanatorily relevant for cognitive phenomena. Moreover, the criteria for mechanistic constitution rely only in part on analysing the causal relevance relations between candidate components and (temporal parts of) phenomena. Recall that some components need not be reciprocally manipulable with the phenomenon, if they instead satisfy the criterion of simultaneous manipulability with the phenomenon by horizontally surgical interventions. Also recall that mutual dependence is not on its own sufficient for mechanistic constitution. A mutually dependent EIO will still fail to be a component if it fails the locality criterion. Therefore, the issue of causal exclusion does not prejudge the result of this investigation one way or another.

Forerunner 2 (Internalism) One might think of the issue of meaning internalism/externalism and the related issue of the causal efficacy of externalist contents. These debates go back to seminal papers by Putnam (1975) and Burge (1979). Putnam argues that meanings of terms and concepts are partly determined by how the world is, not just by their relations to other terms in the language. For example, the meaning of the term "water" is at least partly determined by the fact that water in our world has the molecular structure H2O. Had the molecular structure of water been different, the meaning of the word "water" would have been different as well. If we were to find a substance with the surface properties of water, i.e., a transparent tasteless liquid which quenches thirst, we might call it water, but this would be an error, according to Putnam. Since our word "water" refers to the stuff with structure H2O, using it to refer to any other substance leads to strictly false utterances.

Burge (1979) provides a similar argument for social externalism regarding technical terms, such as arthritis. The meaning of this term is governed by norms established by experts in the community, in this case by physicians. This means that certain folk uses of the term are infelicitous, because they break these norms, even though the person making such utterances might be unaware of the norms governing the use of the term.


The scope of this argument was extended to cover mental states and the contents of subpersonal representational states by Burge (1986), with significant responses from Segal (1989, 1991) and rejoinders by Davies (1991, 1992). Burge (1986) argues that mental states and subpersonal representational states should be individuated by their contents, which partly supervene on the environment, and thus are externalist. Segal (1989) counters that the contents assigned in Marr's celebrated theory of vision do not supervene on the environment.

The argument I advance differs from the arguments on offer in the externalism/internalism debate in the following respects:

Firstly, arguments advanced in the externalism/internalism debate use causation as a criterion to establish whether contents supervene narrowly or broadly. Considerations of whether transplanting the cognitive system from its usual environment into a different one changes the behaviour of the system are routinely used in order to argue for either externalism or internalism. I view the criterion of locality and the criterion of mutual dependence as independent. Locality does not entail mutual dependence, nor does lack of one entail lack of the other.

Secondly, the externalism/internalism debate as it currently stands pays little attention to the differences between various theories of content. As we will see, the internalist claim that broad contents are generally causally inert is false.

Thirdly, within the context of constitutive mechanistic explanation, dependence between the phenomenon and its components must be mutual. The externalism/internalism debate does not address the possibility that the phenomenon and representational contents could be non-causally dependent in accordance with the horizontal surgicality criterion. Likewise, it does not address top-down manipulability at all.

Fourthly, the internalists' commitment to content essentialism leaves them saddled with the idea of "narrow contents", which are notoriously plagued with indeterminacy (Fodor & Lepore, 1992; Shea, 2013). The stronger non-representational conclusion advanced here avoids this problem.

Forerunner 3 (Methodological Solipsism) Fodor's methodological solipsism (Fodor, 1980), which argues against individuating mental states according to their semantic properties, is tied to the internalism/externalism debate. According to Fodor, all psychologically relevant differences between mental states are formal (approximately syntactic). Semantic properties, like the truth of beliefs or the meaning of mental symbols, do not play a role in individuating mental states. In denying that truth plays an individuating role for mental states, Fodor opposes disjunctivism and naive realism. This aspect of methodological solipsism has no relation to my view on the explanatory irrelevance of representational content. However, in denying that meaning individuates mental states, methodological solipsism can be seen as a forerunner. Nevertheless, Fodor's view differs from mine in two important respects:

happy to count individuation by content as formal. In such a case, a formal mentalstate type is referred to using the corresponding content. I argue against a similar view in Chap. 9. Furthermore, Fodor’s arguments against individuating mental states by meaning/ content is based on analysing the Twin-Earth scenarios above, as well as considerations having to do with individuating referents. To individuate mental states by content, one would need to know how to individuate their referents. However, argues Fodor, in order to do that, one would need to engage not in psychology (or neuroscience), but in physics, and other branches of natural science. Fodor even argues that one would need to have a complete natural science in order to properly individuate referent types and semantically individuated mental state types. While such an argument is interesting, it has little in common with the argument I advance here. I take neuroscientists’ attempts to identify the content of neural states at face value – the problem I focus on is not that content cannot be reliably identified. Rather, it is that even where content is identified, it has no explanatory role to play in constitutive mechanistic explanations. In addition to disambiguating my argument from other related arguments, let me also answer four general objections. Objection 1 (Relational Properties) One might wonder whether the focus on locality does not exclude all relational properties from mechanistic explanations. If this were the case, then many explanations in, e.g., biochemistry would become problematic. For instance, biocatalysis often relies on matching structure between the substrate and the enzyme. The substrate molecule must be the right shape to fit the enzyme molecule and bind to its active site. Explanations involving relational properties like these are some of the paradigmatic examples of mechanistic explanation (Bechtel & Richardson, 1993). If they turned out not to pass the locality criterion, it would suggest that the criterion must be relaxed. Fortunately, the locality criterion does not exclude all relational properties. Relations between two or more entities all of which are local to the explanandum phenomenon are not against the locality criterion. Thus, in an explanation of digestion, relations between the enzymes and substances found in the digested food can be mechanism components, because both the food substances and the enzymes are local to the digestive system. Similarly, in the mechanisms for active transport, the relational properties governing the opening of gated channels or the operation of cellular pumps are allowable, since these are local to the cell-membrane through which the substances are transported at the time of transport. There is also another way for relational properties to enter the explanation. Recall that mechanistic explanatory texts are constructed by finding such changes to the mechanism-description for a phenomenon that would lead to a description of a mechanism for a contrast phenomenon via the shortest possible edit-distance. However, those changes only make it to the mechanistic explanatory text if they hold in general for all the various mechanisms which could bring about the contrast phenomenon. One way in which this is achieved is by abstracting to a less determinate description of the contrast phenomenon. And here, it can be the case that all the


various ways to bring about the contrast phenomenon differ from the actual phenomenon in some way best summarised as a relational property. For instance, in the case of paper combustion, the actual mechanism might differ from all the ways in which the paper could have failed to catch fire in that the temperature was greater than or equal to 451 °F.

Objection 2 (Dual-Explananda) One might charge me with strawmanning the representationalist position. Maybe representationalists do not think representational contents are mechanism components. Perhaps they play another role in the explanation of cognition – maybe they explain how the mechanism fits with the task demands of the environment, or why certain episodes of intelligent behaviour are successes or failures. I call this the dual-explananda defence and deal with it in Chap. 8. Here let me just note that, though there might be some philosophers who take this route, in the general neuroscientific literature these niceties are obscured. Entities in supposed mechanism descriptions are often labelled as, e.g., representations of distance, suggesting that the representational content is supposed to play a role in the functioning of the mechanism. This occurs even where there are other ways of characterising the entities in question, such as, e.g., a burst of action potentials with such-and-such frequency emitted by such-and-such cells.

Objection 3 (Pluralism) Related to the issue of dual-explananda is the worry that my argument presupposes mechanistic explanatory monism – the view that only mechanistic explanations are acceptable. However, one might argue that explanatory pluralism holds – that multiple different types of explanation are acceptable in addition to mechanistic explanation. Representational explanations might not be good constitutive mechanistic explanations, but they could still be good explanations for all that.

In response I must first clarify that my argument presupposes neither explanatory monism nor explanatory pluralism. The argument concludes that representational contents play no explanatory role in constitutive mechanistic explanations of cognition. That conclusion holds whether monism or pluralism holds. Under monism, this conclusion also entails that representational contents play no explanatory role at all. What further consequences the conclusion has under pluralism depends on how exactly pluralism is to be understood, and on how representational explanation is conceptualised in a pluralist framework. There are multiple existing typologies of explanatory pluralism (see Muszynski & Malaterre, 2021). Since I do not wish to enter into the debate on pluralism, I will not commit myself to a set of labels. Instead, in what follows I characterise three ways in which pluralism could be understood and show the implications of my argument under each of them.

Pluralism can be understood as the thesis that different scientific disciplines use different types of explanation. This restricted pluralism poses no problems for my argument – if I am correct in my assessment that cognitive neuroscience uses constitutive mechanistic explanation, then my argument shows that representational contents play no explanatory role in cognitive neuroscience. Note that a discipline can be committed to constitutive mechanistic explanation even if many explanations produced by that discipline are defective.


Alternatively, pluralism might be understood intradisciplinarily. This would mean that even within one discipline, multiple different types of explanation are acceptable. A weaker version of this view would allow different types of explanation for different explananda. Under this definition, the issue of pluralism collapses into the dual-explananda defence. Are there any explananda in cognitive neuroscience which cannot be explained mechanistically? In Chap. 8, I answer negatively. Furthermore, when representational contents are invoked in explanations of those explananda that do call for constitutive mechanistic explanation, my argument shows this to be a mistake.

Lastly, there might be a stronger version of intradisciplinary pluralism. According to this view, multiple types of explanation might be acceptable even for a single explanandum. If this view is adopted, then the success of my argument hinges on the question whether representational explanations in cognitive neuroscience are put forward as constitutive mechanistic explanations. If they are, then the mechanistic criteria for explanatory relevance apply, and the explanations are defective. If representational explanations are put forward as some other, different type of explanation, then my conclusion shows only that the distinction between representational and constitutive mechanistic explanations cannot be ignored.

Objection 4 (Representation Observed) One might argue with Thomson and Piccinini (2018) that the existence of neural representations has been empirically verified, and that therefore non-representationalism cannot be maintained. However, this misses the mark. The question for this book is not whether there are neural representations or not, but rather whether representational contents play any explanatory role in mechanistic cognitive neuroscience. Even if we suppose, for the sake of argument, that Thomson and Piccinini are correct, this still does not mean that representational explanations succeed – the mere existence of neural representations does not automatically confer explanatory relevance on them as representations. Additionally, my mechanistic view is consistent with assigning an explanatory role to representational vehicles, and can therefore accommodate Thomson and Piccinini's empirical evidence.

In conclusion, the argument schema I have presented here is a novel way of showing the explanatory irrelevance of the representational contents of neural representations for cognitive phenomena. In the next three chapters I fill in the details, arguing that versions of this argument can be constructed for indicator theories of content, structural resemblance theories of content, as well as for teleosemantics.

References

Adams, F., & Aizawa, K. (2017). Causal theories of mental content. In E. Zalta (Ed.), Stanford encyclopedia of philosophy (Summer 2017). https://plato.stanford.edu/archives/sum2017/entries/content-causal/
Aizawa, K., & Gillett, C. (2011). The autonomy of psychology in the age of neuroscience. In P. M. Illari, F. Russo, & J. Williamson (Eds.), Causality in the sciences (pp. 202–223). Oxford University Press.


Artiga, M. (2021). Beyond black dots and nutritious things: A solution to the indeterminacy problem. Mind & Language, 36(3), 471–490. https://doi.org/10.1111/mila.12284
Baumgartner, M., & Gebharter, A. (2016). Constitutive relevance, mutual manipulability and fat-handedness. British Journal for the Philosophy of Science, 67(3), 731–756. https://doi.org/10.1093/bjps/axv003
Baumgartner, M., Casini, L., & Krickel, B. (2020). Horizontal surgicality and mechanistic constitution. Erkenntnis, 85(3), 417–430. https://doi.org/10.1007/s10670-018-0033-5
Bechtel, W., & Richardson, R. C. (1993). Discovering complexity: Decomposition and localization as strategies in scientific research. Princeton University Press.
Bennett, K. (2008). Exclusion again? In J. Hohwy & J. Kallestrup (Eds.), Being reduced: New essays on reduction, explanation and causation (pp. 280–306). Oxford University Press.
Boghossian, P. A. (1995). Content. In J. Kim & E. Sosa (Eds.), A companion to metaphysics (pp. 94–96). Blackwell.
Brentano, F. (1874/1973). Psychology from an empirical standpoint. Routledge and Kegan Paul.
Burge, T. (1979). Individualism and the mental. Midwest Studies in Philosophy, 4(1), 73–122. https://doi.org/10.1111/j.1475-4975.1979.tb00374.x
Burge, T. (1986). Individualism and psychology. The Philosophical Review, 95(1), 3–45. https://doi.org/10.2307/2185131
Burge, T. (1992). Philosophy of language and mind: 1950–1990. The Philosophical Review, 101(1), 3–51. https://doi.org/10.2307/2185043
Burge, T. (2010). Origins of objectivity. Oxford University Press.
Chomsky, N. (1959). A review of B. F. Skinner's Verbal behavior. Language, 35(1), 26–58. https://doi.org/10.2307/411334
Chomsky, N. (1965). Aspects of the theory of syntax. MIT Press.
Clark, A. (2014). Mindware: An introduction to the philosophy of cognitive science. Oxford University Press.
Davies, M. (1991). Individualism and perceptual content. Mind, 100(4), 461–484. https://doi.org/10.1093/mind/C.400.461
Davies, M. (1992). Perceptual content and local supervenience. Proceedings of the Aristotelian Society, 92(1), 21–45. https://doi.org/10.1093/aristotelian/92.1.21
Dennett, D. (1969). Content and consciousness. Routledge.
Dennett, D. (1975). Why the law of effect will not go away. Journal for the Theory of Social Behavior, 5(2), 169–188. https://doi.org/10.1111/j.1468-5914.1975.tb00350.x
Dewhurst, J. (2018). Individuation without representation. British Journal for the Philosophy of Science, 69(1), 103–116. https://doi.org/10.1093/bjps/axw018
Drayson, Z. (2014). The personal/subpersonal distinction. Philosophy Compass, 9(5), 338–346. https://doi.org/10.1111/phc3.12124
Dretske, F. I. (1981). Knowledge and the flow of information. MIT Press.
Dretske, F. I. (1988). Explaining behavior. MIT Press.
Eliasmith, C. D. (2000). How neurons mean: A neurocomputational theory of representational content. Unpublished doctoral dissertation, Washington University, St. Louis.
Eliasmith, C. D. (2013). How to build a brain: A neural architecture for biological cognition. Oxford University Press.
Feest, U. (2003). Functional analysis and the autonomy of psychology. Philosophy of Science, 70(5), 937–948. https://doi.org/10.1086/377379
Fodor, J. A. (1975). The language of thought. Thomas Y. Crowell.
Fodor, J. A. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3(1), 63–73. https://doi.org/10.1017/S0140525X00001771
Fodor, J. A. (1984). Semantics, Wisconsin style. Synthese, 59(3), 231–250. https://doi.org/10.1007/BF00869335
Fodor, J. A. (1990). A theory of content and other essays. MIT Press.
Fodor, J. A., & Lepore, E. (1992). Holism: A shopper's guide. Wiley-Blackwell.


Fodor, J. A., & Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1–2), 3–71. https://doi.org/10.1016/0010-0277(88)90031-5
Frege, G. (1893). Grundgesetze der Arithmetik (Vol. 1). Hermann Pohle.
Grush, R. (1997). The architecture of representation. Philosophical Psychology, 10(1), 5–23. https://doi.org/10.1080/09515089708573201
Grush, R. (2004). The emulation theory of representation: Motor control, imagery and perception. Behavioral and Brain Sciences, 27(3), 377–396. https://doi.org/10.1017/S0140525X04000093
Haugeland, J. (1981). What is mind design. In J. Haugeland (Ed.), Mind design (pp. 1–29). MIT Press.
Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1), 106–154. https://doi.org/10.1113/jphysiol.1962.sp006837
Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1), 215–243. https://doi.org/10.1113/jphysiol.1968.sp008455
Hutto, D., & Myin, E. (2013). Radicalizing enactivism. MIT Press.
Jacob, P. (2019). Intentionality. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2019). https://plato.stanford.edu/archives/win2019/entries/intentionality/
Kim, J. (1998). Mind in a physical world: An essay on the mind-body problem and mental causation. MIT Press.
Krickel, B. (2018). The mechanical world. Springer.
Kroedel, T. (2008). Mental causation as multiple causation. Philosophical Studies, 139(1), 125–143. https://doi.org/10.1007/s11098-007-9106-z
Lettvin, J. Y., Maturana, H. R., McCulloch, W. S., & Pitts, W. H. (1959). What the frog's eye tells the frog's brain. Proceedings of the IRE, 47(11), 1940–1951. https://doi.org/10.1109/JRPROC.1959.287207
Marr, D. (1982). Vision. MIT Press.
Martinez, M. (2013). Teleosemantics and indeterminacy. Dialectica, 67(4), 427–453. https://doi.org/10.1111/1746-8361.12039
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2), 81–97. https://doi.org/10.1037/h0043158
Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the structure of behavior. Henry Holt and Company.
Millikan, R. G. (1984). Language, thought, and other biological categories. MIT Press.
Muszynski, E., & Malaterre, C. (2021). A roadmap to explanatory pluralism: Introduction to the topical collection "The biology of behaviour". Synthese, 199(1–2), 1777–1789. https://doi.org/10.1007/s11229-020-02856-0
Neander, K. (2017). A mark of the mental: In defense of informational teleosemantics. MIT Press.
O'Brien, G., & Opie, J. (2004). Notes toward a structuralist theory of mental representation. In H. Clapin, P. Staines, & P. Slezak (Eds.), Representation in mind: New approaches to mental representation (pp. 1–20). Elsevier.
O'Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1), 171–175. https://doi.org/10.1016/0006-8993(71)90358-1
O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Clarendon.
Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389–443. https://doi.org/10.1093/brain/60.4.389
Piccinini, G. (2015). Physical computation: A mechanistic account. Oxford University Press.
Pitt, D. (2020). Mental representation. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2020). https://plato.stanford.edu/archives/spr2020/entries/mental-representation/


Putnam, H. (1975). The meaning of 'meaning'. Minnesota Studies in the Philosophy of Science, 7, 131–193.
Pylyshyn, Z. (2002). Mental imagery: In search of a theory. Behavioral and Brain Sciences, 25(2), 157–182. https://doi.org/10.1017/S0140525X02000043
Ramsey, W. (2007). Representation reconsidered. Cambridge University Press.
Rescorla, M. (2009). Predication and cartographic representation. Synthese, 169(1), 175–200. https://doi.org/10.1007/s11229-008-9343-5
Rescorla, M. (2020). The computational theory of mind. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2020). https://plato.stanford.edu/archives/spr2020/entries/computational-mind/
Roche, W., & Sober, E. (2021). Disjunction and distality: The hard problem for purely probabilistic causal theories of content. Synthese, 198(8), 7197–7230. https://doi.org/10.1007/s11229-019-02516-y
Romero, F. (2015). Why there isn't inter-level causation in mechanisms. Synthese, 192(11), 3731–3755. https://doi.org/10.1007/s11229-015-0718-0
Rowlands, M. (1997). Teleological semantics. Mind, 106(422), 279–303. https://doi.org/10.1093/mind/106.422.279
Ryder, D. (2009). Problems of representation II: Naturalizing content. In F. Garzon & J. Symons (Eds.), The Routledge companion to philosophy of psychology (pp. 251–279). Routledge.
Schott, G. D. (1993). Penfield's homunculus: A note on cerebral cartography. Journal of Neurology, Neurosurgery and Psychiatry, 56(4), 329–333. https://doi.org/10.1136/jnnp.56.4.329
Segal, G. (1989). Seeing what is not there. The Philosophical Review, 98(2), 189–214. https://doi.org/10.2307/2185282
Segal, G. (1991). Defence of a reasonable individualism. Mind, 100(4), 485–494. https://doi.org/10.1093/mind/C.400.485
Shadmehr, R., & Wise, S. P. (2004). The computational neurobiology of reaching and pointing: A foundation for motor learning. MIT Press.
Shagrir, O. (2001). Content, computation and externalism. Mind, 110(438), 369–400. https://doi.org/10.1093/mind/110.438.369
Shea, N. (2013). Naturalising representational content. Philosophy Compass, 8(5), 496–509. https://doi.org/10.1111/phc3.12033
Shea, N. (2014). Exploitable isomorphism and structural representation. Proceedings of the Aristotelian Society, 114(2), 123–144. https://doi.org/10.1111/j.1467-9264.2014.00367.x
Shea, N. (2018). Representation in cognitive science. Oxford University Press.
Strevens, M. (2008). Depth: An account of scientific explanation. Harvard University Press.
Summerfield, D. A., & Manfredi, P. A. (1998). Indeterminacy in recent theories of content. Minds and Machines, 8(2), 181–202. https://doi.org/10.1023/A:1008243329833
Sutherling, W. W., Levesque, M. F., & Baumgartner, C. (1992). Cortical sensory representation of the human hand: Size of finger regions and non-overlapping digit somatotopy. Neurology, 42(5), 1020. https://doi.org/10.1212/WNL.42.5.1020
Swoyer, C. (1991). Structural representation and surrogative reasoning. Synthese, 87(3), 449–508. https://doi.org/10.1007/BF00499820
Thomson, E., & Piccinini, G. (2018). Neural representations observed. Minds and Machines, 28(1), 191–235. https://doi.org/10.1007/s11023-018-9459-4
Tye, M. (1995). Ten problems of consciousness. MIT Press.
Usher, M. (2001). A statistical referential theory of content: Using information theory to account for misrepresentation. Mind & Language, 16(3), 311–334. https://doi.org/10.1111/1468-0017.00172
Woodward, J. (2015). Interventionism and causal exclusion. Philosophy and Phenomenological Research, 91(2), 303–347. https://doi.org/10.1111/phpr.12095

Chapter 5: Indicator Contents

Abstract In this chapter I argue that indicator contents cannot be explanatorily relevant in constitutive mechanistic explanations of cognitive phenomena. The argument relies on the observation that indicator contents are based in conditional probabilities. I first argue that these probabilities must be viewed as physical chances rather than epistemic probabilities or credences. Then I examine the most prominent views on the nature of physical chances to figure out which states of affairs are described by the claim that a neural vehicle carries a particular content. I show that, under both frequentist and propensity interpretations of chances, the property of carrying representational content X is not local to any cognitive phenomena. Furthermore, I show that this property is only mutually dependent with cognitive phenomena under a long-run propensity interpretation of physical chances. However, owing to the failure of locality, I conclude that indicator contents are never explanatorily relevant in constitutive mechanistic explanations in neuroscience.

Keywords Informational semantics · Covariance · Physical probability · Frequentism · Propensity theory

In this chapter I apply the argument sketched in Chap. 4 to the covariance account of content as applied to neural representations. According to the covariance theory of content, neural representations indicate, or covary with, the states of the environment about which they carry contents. This indication or covariance relation is what fixes the content of neural representations.

I will first provide an up-to-date canonical version of the covariance account of content, synthesizing developments from Dretske's (1981) original proposal to date. This target account must be sensitive to the way in which covariance relations are appealed to in modern cognitive neuroscience. This is because indicator accounts are arguably doing the work behind representation-talk in empirical investigations (Eliasmith, 2000, 2003, 2005; but see Boone & Piccinini, 2016; Williams & Colling, 2018; this book, Chap. 6). For now, I will exclude from my analysis accounts with combined indicator and teleosemantic features (Dretske, 1988; Millikan, 2004; Neander, 2017). Combined teleosemantic-indicator accounts ultimately fix the


contents of representations by reference to teleofunctions, and thus inherit the problems of both indicator and teleosemantic accounts discussed in Chap. 7.

Next, I will show that indicator theories of representational content are ambiguous with respect to the property they take to constitute the contents of representational vehicles. This is because covariance (mutual information) is calculated from probabilities. The property of representational vehicles which covariance measures therefore depends on the interpretation of the probabilities involved in calculating it. Relatively little attention has been paid to the interpretation of these probabilities in the literature (for an exception see Kraemer, 2015). I will show, however, that finding an adequate interpretation for the probabilities on which mutual information, and thus covariant contents, depend is important for assessing whether or not covariant contents can play an explanatory role in a mechanistic account of cognition.

Finally, I will show that, under any plausible interpretation of the probabilities involved, the resulting notion of covariant contents fails either the norm of locality or both norms of mutual dependence introduced in Chap. 2. This means that the contents of indicator representations play no explanatory role in any mechanistic explanation of cognition.

5.1 What Is Indicator Content?

Fred Dretske's classic Knowledge and the Flow of Information (1981) develops a theory of representational content rooted in the notion of information as understood in communication engineering (Shannon information; Shannon, 1948). Shannon's theory is exclusively concerned with the quantity of information transmitted on average through a communication channel (Lombardi et al., 2016). For the purposes of grounding representation, however, the theory must be expanded so as not just to handle the amount of information transmitted, but also to identify what the information transmitted through the channel is.

For ease of exposition, let us first consider a simple proposal for determining the information content of a signal (Shea, 2013, p. 499): a signal (R) carries information about a state of affairs (S) just in case the probability of S given R is higher than the prior probability of S. For example, take R to be some neural state of a simple creature. R carries information about the presence of deer (S) just in case the probability that deer are present given that R occurs is higher than the prior probability that deer are present.

For Dretske's purposes, however, this simple proposal is not sufficient. This is because the information-carrying relation it defines is rather weak – there are many states of affairs whose probability given a particular signal is higher than the prior probability in the absence of the signal (Shea, 2013, ibid.). For example, the same R may raise the probability of the presence of deer, as well as the probability of the presence of elk, or the probability that the creature is in the northern hemisphere. Dretske therefore proposes a stronger version of the information-carrying relation, which he takes to be sufficient to ground


representation. According to Dretske, a signal only carries information about those states of affairs with which it covaries absolutely – that is, those states of affairs whose probability given the signal is 1 (Dretske, 1981, p. 65). According to Dretske, then, the neural state R only represents the presence of deer if deer are around whenever R occurs.

Dretske's requirement that the covariance between a representation and its referent be absolute means that his theory is not suitable for making sense of representational contents in cognitive neuroscience. Real-life biological systems are inherently noisy, so signals do not raise the probability of their referents to 1. Dretske's attempts to solve this issue do not make the theory more adequate for the purposes of empirical research in cognitive neuroscience. He relativizes content to channel conditions and the background knowledge (K) of the signal consumer. On the revised account, R represents S if P(S|R&K) would be 1 in a noiseless channel (Dretske, 1981, Ch. 5). But, absent a noncircular way of identifying noiseless channels and evaluating counterfactuals about them, the revised account cannot provide neuroscientists with the tools to empirically determine the contents of neural representations. Furthermore, absent a non-semantically-laden notion of background knowledge, the revised account does not offer a fully naturalised account of neural representation.

Fodor (1990) proposes a theory where the decisive factor in ascribing content to a signal is not its statistical, but rather its nomological, dependence on the stimulus. That is, according to Fodor, R represents S iff R is lawfully caused by instances of S. We should treat this theory together with covariance theories proper, because the same kind of statistical dependence which for Dretske determines the content of a signal is for Fodor a good indicator of the content – the lawful connection between the signal and its content is reflected in the statistical dependence that comes to obtain as a result of this lawful causal connection.

I will not incorporate lawful connection between stimuli and representations into the target account of indicator representations I am about to evaluate in this section. There are three reasons for this. Firstly, the notion that laws are explanatorily relevant is contentious in the life sciences (Cummins, 2000), and some of the strongest challenges against law-involving explanations come from the causal/mechanistic camp (e.g., Salmon, 1984; for a summary see Craver, 2007, pp. 34–40). Secondly, the laws Fodor invokes hold only in situ, that is, contingently and for a limited time in a single cognitive system.1 Thus, the explanatory force of the appeal to laws is not greater than the explanatory force of appealing to the covariance which Fodor takes as evidence for a lawful connection. This is because the supposed laws do not readily generalise across individuals. Thirdly, Fodor's appeal to lawful connection was introduced as a way to solve the problems of misrepresentation and disjunction, but the later empirically informed approaches of Eliasmith and Usher have found mathematical tools to

1 Individual differences in learning and developmental plasticity likely equip each subject with a unique set of physical vehicles. This means that the laws in question do not readily generalise to other individuals, or to other token phenomena, as the vehicles involved there will be different.


accommodate misrepresentation and avoid the disjunction problem with just statistical dependence, without appeal to laws.

Both Dretske's and Fodor's theories are inspired by specifically philosophical concerns, and this hampers their usefulness as theories of content in cognitive neuroscience. However, the core notion of covariance, mutual information, or statistical dependence does play a role in cognitive neuroscience. Eliasmith (2000), Usher (2001), and Skyrms (2010) use it to provide a theory of content fit for cognitive neuroscience. Eliasmith notes that his theory is an elaboration on methods used with some degree of success to identify neural contents in experimental practice (Bialek et al., 1996). Although the point of this chapter is to show that neuroscientists' reliance on representational contents is ultimately a mistake, taking a cue from neuroscience itself in the matter of identifying contents is a step in the right direction, since the notion of content thus identified will be the one of interest to practicing empirical researchers in the field. Thus, adopting this notion of content for my further discussion in this section will make the argument all the more relevant.

For both Eliasmith and Usher, the content of a signal Ra is the stimulus Sa with which Ra has the highest pointwise mutual information from the set of stimuli Sa . . . Sn. Conversely, a stimulus Sa is represented by that signal Ra which has the highest pointwise mutual information with Sa from the set of signals Ra . . . Rn. These conditions are equivalent, but distinct experimental practices make use of either one or the other depending on whether the set of signals R or the set of stimuli S is taken as a given (Eliasmith, 2000, p. 34; Usher, 2001, pp. 319–323).

Pointwise mutual information (p.m.i.) is a measure of statistical dependence between values of two random variables. It is given by the equation:

$$\mathrm{p.m.i.}(x, y) = \log \frac{p(x, y)}{p(x)\,p(y)} = \log \frac{p(x \mid y)}{p(x)} = \log \frac{p(y \mid x)}{p(y)} \tag{5.1}$$

Ignoring the logarithms and substituting R and S for x and y, the quantity to be maximised in order to find the content of Ra is:

$$\frac{p(R, S)}{p(R)\,p(S)} = \frac{p(R \mid S)}{p(R)} = \frac{p(S \mid R)}{p(S)} \tag{5.2}$$

for each S. Similarly, to find which vehicle (R) represents Sa, we maximise the same quantity keeping S constant and cycling through the available values of R (Usher, 2001, p. 319).

As we can see, p.m.i. can be expressed in three equivalent ways. The first is based on the joint probabilities p(R, S) of stimuli and responses. The second is based on forward conditional probabilities p(R|S), and the third on backward conditional probabilities p(S|R). While the formulation based on joint probabilities is the basic one, it is not directly used in research practice. Both the forward and the backward conditional probability formulations are used, however (Eliasmith, 2000, pp. 29–31; Usher, 2001, p. 320).
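To fix ideas, here is a minimal sketch (my own illustration, not drawn from the text; the joint counts are invented) of how the pointwise mutual information of Eq. 5.1 can be estimated from stimulus–response data, and how the Eliasmith/Usher maximisation rule then assigns a content to each response:

```python
import math

# Invented joint counts n(S, R) for two stimuli and two neural responses.
# Rows: stimuli; columns: responses. The numbers are purely illustrative.
counts = {
    ("deer",  "R_deer"):  40, ("deer",  "R_tiger"): 10,
    ("tiger", "R_deer"):   5, ("tiger", "R_tiger"): 45,
}
total = sum(counts.values())

def p_joint(s, r):
    return counts[(s, r)] / total

def p_s(s):
    # Marginal probability of stimulus s.
    return sum(v for (si, _), v in counts.items() if si == s) / total

def p_r(r):
    # Marginal probability of response r.
    return sum(v for (_, ri), v in counts.items() if ri == r) / total

def pmi(s, r):
    # Eq. 5.1: log p(s, r) / (p(s) p(r)).
    return math.log(p_joint(s, r) / (p_s(s) * p_r(r)))

def content_of(r, stimuli=("deer", "tiger")):
    # Eliasmith/Usher rule: the content of r is the stimulus with which
    # r has the highest pointwise mutual information.
    return max(stimuli, key=lambda s: pmi(s, r))

print(content_of("R_deer"))   # -> 'deer' with these counts
print(content_of("R_tiger"))  # -> 'tiger' with these counts
```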


Determining representational contents from the forward conditional probabilities of a response Ra given Sa. . .n is called taking the observer perspective (Eliasmith) or using the external schema (Usher). In this procedure, researchers record action potentials from the neuron and vary the stimuli presented. When the activity of the neuron rises above its resting rate, the researcher notes what stimulus is being presented. From data like this, the researcher can construct a probability distribution of the neuron's activity conditional on the stimulus presented, p(R|S). Given that the range of stimuli presented in the experiment was large enough, the researcher concludes that the neuron's activity represents the stimulus S for which P(R|S) is maximal (Eliasmith, 2000, p. 29; Usher, 2001, pp. 320–322).

Determining representational contents from backward conditional probabilities is called taking the animal's perspective (Eliasmith) or using the internal schema (Usher). Here the researchers observe patterns of neural activity and attempt to predict (or retrodict) which stimuli were presented to the animal. This is a more complicated procedure, requiring as a first step the construction of the probability distribution p(S|R) for a range of responses Ra. . .n. Then, once sufficient data has been gathered, the researcher can observe neural activity and, based on the response R observed, select the fitting stimulus S (Eliasmith, 2000, p. 30; Usher, 2001, pp. 322–323).

Both Eliasmith and Usher encourage the use of the animal perspective/internal-based schema, because it is more similar to the task faced by the cognitive system as such – to discern the identity of stimuli and react appropriately, given the signal received from the sensory periphery (ibid.). However, both Eliasmith and Usher offer proofs that, given that a sufficiently diverse and ecologically plausible set of stimuli is used in experiments employing the external-based schema, the two procedures will agree on content determination (Eliasmith, 2000, p. 30; Usher, 2001, p. 331). This is to be expected, as the forward and backward conditional probabilities can be related via Bayes' rule.

Note that Eliasmith and Usher both add a further condition for Ra's carrying the content Sa, namely that Sa must be able to cause Ra. This condition excludes the possibility that neural representations get their content just in virtue of co-occurring with some states of affairs through mere coincidence. The causal connection between represented objects and representational vehicles furthermore mirrors a generally accepted schema of perception (Eliasmith, 2000, pp. 36–39, 67; Usher, 2001, p. 316).2 Note, however, that Eliasmith and Usher do not claim that the content of a vehicle is whatever caused the vehicle. Instead, merely the possibility that S caused R is required. The content is instead the most likely cause of R as determined by the mutual information criterion (Eliasmith, 2000, p. 67; Usher, 2001, p. 316).

2 Eliasmith's and Usher's accounts are both formulated in terms of contents with a mind-to-world direction of fit. For imperative representations, the causal requirement would run in the opposite direction, i.e., Ra causing Sa.
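The conversion between the two schemas mentioned above is worth making explicit. Assuming nothing beyond the standard probability calculus (this gloss is mine, not the book's), the external schema's forward probabilities and the internal schema's backward probabilities determine one another via Bayes' rule:

$$p(S \mid R) = \frac{p(R \mid S)\,p(S)}{p(R)}$$

so once the marginals p(S) and p(R) are fixed, a table of p(R|S) values yields the corresponding p(S|R) values, and vice versa.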


To sum up, I take it that the indicator account of neural content stipulates that:

(IC) Sa is the content of Ra, if
(a) Sa can cause Ra,
(b) the conditional probability P(Sa|Ra) is higher than any conditional probability P(Sa|Rb) for all b ≠ a,
(c) the conditional probability P(Ra|Sa) is higher than any conditional probability P(Ra|Sb) for all b ≠ a.3
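For concreteness, conditions (a)–(c) can be read as a simple decision procedure over estimated conditional probabilities. The following sketch is my own illustration, not the book's: the probability tables and the can_cause test are placeholders for whatever experimental estimates and causal background knowledge are available.

```python
def satisfies_ic(s_a, r_a, p_s_given_r, p_r_given_s, can_cause):
    """Check conditions (a)-(c) of IC for candidate content s_a of vehicle r_a.

    p_s_given_r[(s, r)] and p_r_given_s[(r, s)] are full tables of estimated
    conditional probabilities; can_cause(s, r) encodes condition (a).
    """
    responses = {r for (_, r) in p_s_given_r}
    stimuli = {s for (_, s) in p_r_given_s}
    if not can_cause(s_a, r_a):                      # condition (a)
        return False
    b = all(p_s_given_r[(s_a, r_a)] > p_s_given_r[(s_a, r_b)]
            for r_b in responses if r_b != r_a)      # condition (b)
    c = all(p_r_given_s[(r_a, s_a)] > p_r_given_s[(r_a, s_b)]
            for s_b in stimuli if s_b != s_a)        # condition (c)
    return b and c

# Toy usage with invented estimates:
p_sr = {("deer", "R_deer"): 0.89, ("deer", "R_tiger"): 0.18,
        ("tiger", "R_deer"): 0.11, ("tiger", "R_tiger"): 0.82}
p_rs = {("R_deer", "deer"): 0.80, ("R_deer", "tiger"): 0.10,
        ("R_tiger", "deer"): 0.20, ("R_tiger", "tiger"): 0.90}
print(satisfies_ic("deer", "R_deer", p_sr, p_rs, lambda s, r: True))  # True
```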

5.2 Probabilities for Indicator Contents

In this section I argue that the probabilities P(R|S) and P(S|R) involved in IC must be interpreted as physical probabilities, that is, as objective properties of the world. I will briefly introduce the leading accounts of physical probabilities, namely frequentism and the propensity theory, and their variants. This exposition is a necessary background for testing indicator contents against the standards of locality and mutual dependence. As we will see below, a neural vehicle's carrying the content a deer is present is analysed as a different property under variants of frequentism than under variants of propensity theory. Pinpointing the content-determining property exactly is necessary in order to apply the tests.

I argue that the probabilities involved in indicator representation must be physical probabilities. According to Hájek (2011), there are three separate notions of probability in use in everyday life and in the sciences, namely:

(a) Physical probabilities
(b) Measures of evidential support
(c) Subjective measures of confidence

Mellor (2005) proposes a roughly equivalent division of probability concepts into chances, epistemic probabilities, and credences.

Physical probabilities (chances) are "objective properties of some systems producing nondeterministic outcomes" (Hájek, 2011).4 An intuitive example of such physical probabilities could be the probability that the throw of a fair die results in a six. Probabilities as measures of evidential support (epistemic probabilities) are used

3 An account in the spirit of IC has been hinted at by Godfrey-Smith (1991). Rupert's (1999) "best-test theory" combines IC's requirements (a) and (c) but omits (b). In Skyrms' theory (2010), pointwise mutual information is also used to fix the contents of signals, but instead of restricting the content of a signal to the stimulus with which the signal most covaries, Skyrms views signals as carrying (informational) content about all states with which they covary at all. Since none of these views differ from Eliasmith's and Usher's views in terms of compatibility with the mechanistic framework of explanation, I will not consider them further.
4 Where the strength of 'non-deterministic' might depend on the grain with which we type events – both setups and outcomes.


in legal contexts and in statistical hypothesis testing. Juries in civil cases are asked, for instance, to judge whether, on the preponderance of evidence, a valid contract was breached, i.e., whether the probability that a contract was breached is higher than 50% given the evidence (Mellor, 2005, pp. 11–12). This is different from the third notion of probability, as a subjective measure of confidence, which measures the individual confidence of a particular subject in the truth of a certain proposition. Probabilities as subjective measures of confidence (credences) capture statements like "I think it's unlikely that Jones embezzled the money." Note that one might think it unlikely that Jones embezzled the money even if the evidence clearly favours the opposite conclusion. A juror who confused the two notions would not be fulfilling their duties properly (Mellor, 2005, pp. 12–13).

It should be clear that the probabilities involved in IC cannot be credences unless one abandons realism about representational contents in favour of some form of pragmatism. If representational contents are objective properties of neural vehicles, they cannot depend on a subjective probability assignment. Individual differences in degrees of belief would result in a lack of agreement among researchers about the content of neural signals.

An argument can be made for the conclusion that IC involves epistemic probabilities. The notion of indicator representation has its origin in communication theory and builds upon the concept of Shannon information. Epistemic probabilities are involved in standard communication theory, where signals are thought to provide evidence about the state of their sources (Lombardi et al., 2016). The informal gloss used to talk about neural firings raising the probability of certain states of affairs also seems to point in this direction.

However, epistemic probabilities are, just like credences, unsuitable for a naturalistic theory of representational content, because they are always conditional on evidence. Different evidence confers different epistemic probability on the same event (Mellor, 2005, p. 81). This raises the issue of differences in the evidence available to different subjects, as well as in their prior knowledge, i.e., the subject's knowledge-base. It is then unclear whose knowledge-base should be considered for the purposes of determining content. Is it the investigator's knowledge-base, or the experimental subject's? Perhaps we should follow Usher's and Eliasmith's advice to use the animal's perspective, i.e., the animal's knowledge-base. But note that this makes the procedure for converting between the animal perspective, where P(S|R) is used, and the observer perspective, where P(R|S) is used, much more difficult. This is because to determine P(S|R) in the animal perspective, we must relativize to P(S|R&Ka), where Ka is the knowledge-base of the animal. But when determining content from the observer's perspective with P(R|S), we relativize to P(R|S&Ko), where Ko is the knowledge-base of the observer. Thus, the easy Bayes' rule conversion is no longer available. Furthermore, it is difficult to aggregate results from across different experimental subjects, or even from across different trials with the same subject, given the possibility that the knowledge-bases to which the conditional probabilities must be relativized may differ. Lastly, for a large group of content-determination studies, the notion of epistemic uncertainty, or of a knowledge-base, may not even apply. It is difficult to make sense of ascribing knowledge, or epistemic uncertainty, to isolated


retinal neurons, or even to neural populations, e.g., cortical columns. What it would mean for such entities to be in possession of evidence, or to consider evidence, is at best unclear (but see Figdor, 2018). The other option would be to consider only the experimenter's epistemic probabilities. But the experimenter's epistemic situation has no influence on the working of the cognitive mechanism under investigation, save indirectly, through influencing the observer's decisions about what experiments to run.

Since neither credences nor epistemic probabilities are suitable, the probabilities used in IC must be chances. Therefore, IC should be amended for the sake of clarity to:

(IC*) Sa is the content of Ra, if
(a) Sa can cause Ra,
(b) the conditional chance P(Sa|Ra) is higher than any conditional chance P(Sa|Rb) for all b ≠ a,
(c) the conditional chance P(Ra|Sa) is higher than any conditional chance P(Ra|Sb) for all b ≠ a.

The next step in determining whether indicator contents can be involved in mechanistic explanations of cognition is to identify the objective properties which underlie the conditional chances invoked in IC*. The two principal options for interpreting the concept of chance are (a) frequentism and (b) propensity theories (Hájek, 2011).

Frequentism identifies the chance P(A) that an outcome A occurs with the frequency f(A) with which the outcome A occurs (Mellor, 2005, p. 39). So, the chance that a die comes up six is the frequency with which the die comes up six. Frequentism further divides into finite frequentism and hypothetical frequentism (Hájek, 2011). In finite frequentism, the frequency f(A), which is identical to the probability P(A), is the ratio of outcomes A in a reference set σ of outcomes of relevant actual trials (Hájek, 1996). The chance P(6) that the result of a die roll is 6 is the frequency of rolls resulting in 6 in the set of all actual die rolls. Conditional chances P(A|B) are frequencies of A outcomes in the subset of all B outcomes in σ (Gillies, 2000a, p. 111). The chance that a die roll results in 6 given that it results in an even number, P(6|even), is the frequency of die rolls resulting in 6 among all die rolls which resulted in an even number. The converse chance P(even|6) is the frequency of even results among all die rolls which resulted in 6.

If the conditional chances in IC* are interpreted in line with finite frequentism, we get:

(FFI) Sa is the content of Ra, if
(a) Sa can cause Ra,
(b) the frequency of Sa occurrences in conditions characterised by the occurrence of Ra is higher than the frequency of Sa occurrences in conditions characterised by the occurrence of Rb for any b ≠ a,
(c) the frequency of Ra occurrences in conditions characterised by the occurrence of Sa is higher than the frequency of Ra occurrences in conditions characterised by the occurrence of Sb for any b ≠ a.
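On the finite frequentist reading, the conditional chances in FFI are straightforwardly computable from the reference set of actual trials. The following is a minimal sketch under that reading (my own illustration; the trial data are invented):

```python
# A finite reference set of actual trials, each a (stimulus, response) pair.
trials = [("deer", "R_deer")] * 8 + [("deer", "R_tiger")] * 2 \
       + [("tiger", "R_tiger")] * 9 + [("tiger", "R_deer")] * 1

def freq_s_given_r(s, r, trials):
    # Finite frequentism: P(S|R) is the frequency of S among the trials
    # in which R actually occurred.
    with_r = [si for (si, ri) in trials if ri == r]
    return with_r.count(s) / len(with_r)

def freq_r_given_s(r, s, trials):
    # Likewise, P(R|S) is the frequency of R among the trials with S.
    with_s = [ri for (si, ri) in trials if si == s]
    return with_s.count(r) / len(with_s)

# FFI condition (b): frequency of deer given R_deer vs. given R_tiger.
print(freq_s_given_r("deer", "R_deer", trials))   # 8/9
print(freq_s_given_r("deer", "R_tiger", trials))  # 2/11
# FFI condition (c): frequency of R_deer given deer vs. given tiger.
print(freq_r_given_s("R_deer", "deer", trials))   # 8/10
print(freq_r_given_s("R_deer", "tiger", trials))  # 1/10
```

With these invented trials, both comparisons favour deer, so FFI would fix Sdeer as the content of R'deer'.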


In hypothetical frequentism, on the other hand, the frequency f(A) = P(A) is the limiting frequency of outcomes A in a hypothetical infinite collective of outcomes ω which would be realised if the relevant experiment were repeated indefinitely (Gillies, 2000a, p. 90). The chance P(6) that a die roll results in a 6 is then the limiting frequency with which 6 occurs in the hypothetical collective of infinitely many die-roll results. The treatment of conditional chances in hypothetical frequentism is analogous to the finite frequentist approach. Conditional chances P(A|B) are the limiting frequencies of A outcomes among all B outcomes in ω (Gillies, 2000a, p. 111). The conditional chance P(6|even) is the limiting frequency of rolls resulting in 6 that would be obtained by rolling a die indefinitely and noting only the results of even rolls. The converse conditional chance P(even|6) is the limiting frequency of even numbers which would be obtained if a die were rolled indefinitely and the results of all rolls coming up with 6 were checked for parity.

Applying this analysis to IC*, we obtain this characterisation of indicator contents:

(HFI) Sa is the content of Ra, if
(a) Sa can cause Ra,
(b) the frequency with which Ra would be accompanied by Sa if Ra were to occur infinitely many times would be higher than the frequency with which any Rb would be accompanied by Sa if Rb were to occur infinitely many times,
(c) the frequency with which Sa would be accompanied by Ra if Sa were to occur infinitely many times would be higher than the frequency with which any Sb would be accompanied by Ra if Sb were to occur infinitely many times.

Propensity theories, as opposed to frequentism, identify chances with dispositions of non-deterministic systems to produce outcomes (Gillies, 2000a, b). I will refer to these non-deterministic systems as experimental setups, or as chance setups. Propensity theories also come in two broad categories: (a) long-run propensity theories and (b) single-case propensity theories (Gillies, 2000a, p. 126, 2000b).

In long-run propensity theories, if the chance that an outcome A occurs is x, this means that the chance setup under investigation has the propensity to produce outcome A with a relative frequency that approaches x in the limit (Gillies, 2000a, p. 127; Popper, 1959, p. 35). Note that the chance itself is not identified with the frequency x, as in frequentism. Instead, the chance is identified with the propensity or disposition to produce outcomes with frequency x. In other words, according to long-run propensity theories of chance, the chance of A is a disposition of experimental setups to produce sequences of outcomes in which A-type outcomes occur with some frequency. This frequency is the numerical value associated with the chance that A. The chance that a die roll results in 6 [P(6)] is the propensity of the experimental setup in which a die is thrown to produce series of outcomes in which 6 appears with the frequency f equal to the numerical value of P(6). Conditional chances P(A|B) are propensities of experimental setups to produce sequences of outcomes in which outcomes of type A accompany outcomes of type B with a frequency f equal to the numerical value of P(A|B) (Gillies, 2000a, p. 132). The conditional chance P(6|even) is the propensity of the dice-throwing experimental setup to produce sequences of outcomes in which a certain proportion of even results


are 6s. This proportion tends to 1/3 for a fair die, but could be higher or lower if the die is weighted.

Using a long-run propensity interpretation of chance, IC* turns into:

(LPI) Sa is the content of Ra, if
(a) Sa can cause Ra,
(b) relevant experimental setups have a propensity to produce series of outcomes in which Sa occurs in a higher proportion of trials in which the setup produces5 Ra than of trials in which the setup produces any Rb,
(c) relevant experimental setups have a propensity to produce series of outcomes in which Ra occurs in a higher proportion of trials in which the setup produces Sa than of trials in which the setup produces any Sb.

In single-case propensity theories, the chance that A occurs is the strength of the relevant chance setup's propensity to produce outcome A on a particular occasion (Gillies, 2000a, p. 126). According to single-case propensity theories, chance setups possess propensities of varying strengths (one for each outcome type) to produce single outcomes on each run of the chance setup. These propensities are also time-dependent (McCurdy, 1996).6 This means that the chance that A occurs at time tx is the propensity at some time ty, earlier than tx, of the relevant chance setup to produce A at tx.7 If the propensity is stronger, the chance is higher, and conversely, if the propensity is weaker, the chance is smaller. The chance that a die roll at tx results in 6 is the propensity of the relevant chance setup at ty to produce 6 at tx. Note that this propensity may vary with the values of both tx and ty.

To demonstrate this, let us amend our die-roll example slightly. Suppose there is a famous magician who has developed a way to roll a 6 with a very high chance thanks to a special wrist movement she performs. Since successful execution of this technique requires high precision, the chance that the magician rolls a 6 on her regular die at tx = 1 when fully concentrated on the task might be greater than the chance that she rolls a 6 at tx = 2 while severely inebriated. But even the magician's chance of rolling a 6 at tx = 2 while inebriated could be different at ty = a, before she has executed the crucial wrist movement, than at ty = b, after the wrist movement has been successfully accomplished.

The magician example can be used to illustrate the nature of conditional chances as envisaged by single-case propensity theories. The chance at t1 that the magician rolls a 6 at t3 given that she executes the crucial wrist movement at t2 [Pt1(6t3|wristt2)] is the propensity of the experimental setup leading up to the magician's die roll to produce a 6 at t3 given that it evolves in such a way as to produce the magician's

5 Instead of "produces", one might opt for "includes".
6 The time-dependence is introduced to resolve Humphreys' paradox – the inconsistency between the fact that probability theory requires backward conditional probabilities, while propensity is a causal notion, so that backward conditional propensities are metaphysically suspect. Other solutions would view backward propensities as mathematical fictions (see Humphreys, 1985; Milne, 1986; Salmon, 1979).
7 ty must be earlier than tx, because by tx the result has already been produced, and thus the system no longer has a propensity to produce it at that time.


wrist-movement at t2. The converse conditional chance at t1 that the magician executes the crucial wrist-movement at t2 given that she rolls a 6 at t3 is the propensity of the same experimental setup leading to the magician's die roll to produce the wrist-movement at t2 if it then goes on to produce the 6 at t3.

Using this analysis of conditional chances to interpret IC*, we get:

(SPI) Sa is the content of Ra, if
(a) Sa can cause Ra,
(b) the propensity at t1 of the relevant experimental setup to produce Sa at t2 given that it then goes on to produce Ra at t3 is higher than its propensity at t1 to produce Sa at t2 given that it then goes on to produce any Rb at t3,
(c) the propensity at t1 of the relevant experimental setup to produce Ra at t3 given that it also produces Sa at t2 is higher than its propensity at t1 to produce Ra at t3 given that it also produces any Sb at t2.

See Fig. 5.1 for a summary of the different concepts of probability. The four major candidate interpretations of the concept of chance thus combine with the general notion of indicator content IC to give us four high-level relational properties: FFI, HFI, LPI and SPI. Each of these four can conceivably be used to ground the notion of representational content, and they are determinate enough to allow us to investigate whether they can be constitutively relevant to cognitive phenomena. In the rest of the chapter, I show that they fail either the locality test or the mutual dependence test, and thus that the answer is negative.

Fig. 5.1 Interpretations of probability
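As a numerical aside (my own toy simulation, not from the text), the limiting conditional frequencies that hypothetical frequentism appeals to can be approximated by simulating ever longer runs of the die-roll experiment; the conditional frequency of 6 among even results drifts toward the limiting value 1/3 that would be identified with P(6|even):

```python
import random

random.seed(1)  # make the toy run reproducible

def cond_freq_six_given_even(n_rolls):
    # Frequency of 6s among even results in n_rolls simulated fair-die rolls -
    # a finite stand-in for the hypothetical infinite collective.
    evens = sixes = 0
    for _ in range(n_rolls):
        roll = random.randint(1, 6)
        if roll % 2 == 0:
            evens += 1
            if roll == 6:
                sixes += 1
    return sixes / evens

for n in (100, 10_000, 1_000_000):
    print(n, round(cond_freq_six_given_even(n), 4))
# The printed frequencies approach 1/3 as n grows; only the infinite limit,
# which no finite simulation reaches, is the HFI chance itself.
```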

5.3 Assessing Indicator Contents Based on Frequentist Chances

In this section, I show that the properties described by FFI and HFI (the finite frequentist and hypothetico-frequentist versions of indicator contents) cannot be elements of mechanism components. This is because both FFI and HFI fail both the locality test


and the mutual dependence test for mechanistic constitution. Since FFI and HFI cannot be elements of mechanism components, they play no explanatory role in mechanistic explanations of cognitive phenomena. If these properties underlie representational content, then representational content likewise plays no explanatory role in mechanistic explanations of cognitive phenomena.

In order to illustrate these issues, I will expand the deer perception example from Sect. 5.1 so that it aligns with Ruth Millikan's time-honoured deer/tiger example (Millikan, 1989; Usher, 2001). Imagine our simple creature responds differentially to the presence of deer and the presence of tigers. Deer and tigers, in other words, are the stimuli Sdeer and Stiger. The creature detects deer and tigers visually and tokens a characteristic neural pattern in response to each. These neural patterns are the responses R'deer' and R'tiger'. Suppose that IC* holds of Sdeer and R'deer' as well as of Stiger and R'tiger'. This creature then has two representational states, one representing the presence of deer and the other representing the presence of tigers. Now suppose that the system encounters a deer, tokens R'deer', moves towards the deer and hunts it down. I will demonstrate that in this episode of correct deer perception by our toy creature the representational content of R'deer' is non-local on either of the frequentist interpretations FFI and HFI. I will also show that the representational content of R'deer' is not mutually dependent with the deer perception episode. Therefore, it is not explanatorily relevant in the mechanistic explanation of this token cognitive phenomenon.

Let us first concentrate on the norm of locality. Recall that, as explained in Chap. 2, mechanism components are located in the spatiotemporal region of the phenomenon which they underlie. Thus, all entities, activities and property instantiations which make up the mechanism must be located where the phenomenon is located and take place when the phenomenon takes place. Let us stipulate that, in the example above, the phenomenon takes place at or around the location of the creature perceiving the deer. Furthermore, the phenomenon takes place in the period between the time the photons reflected from the deer hit the perceiver's eyes and the time the perceiver finishes taking the appropriate action.

As we saw, indicator content is a complex property partly determined by the pointwise mutual information between the representational vehicle and whatever is represented. Pointwise mutual information depends on marginal probabilities and conditional probabilities according to Eq. 5.2 above. On the frequentist interpretation of probabilities, IC* is further specified as FFI or HFI. Let us consider whether the finite frequentist version FFI is local when applied to the toy example. According to FFI, the content of R'deer' is constituted by the frequency with which R'deer' occurs in conditions characterised by the presence of deer in front of the perceiver. Let us consider the set of all the perceiver's perceptual encounters with deer, and with tigers. This set consists of pairs such as ⟨Sdeer, R'deer'⟩ or ⟨Stiger, R'tiger'⟩. There is one pair for each token episode of perception. FFI then tells us that Sdeer is the content of R'deer' if, in this set of many pairs, those pairs containing Sdeer also contain R'deer' more often than R'tiger', and the pairs containing R'deer' also contain Sdeer more often than Stiger.
But remember that only one of these pairs corresponds to the token perceptual episode with which our example is concerned.
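The dependence of the token's content on the rest of the set can be made vivid with a small sketch (my own illustration; the function and the two episode histories are invented). The token episode is held fixed while the surrounding episodes vary, and the FFI-determined content of the token changes with them:

```python
def ffi_content(r, trials, stimuli=("deer", "tiger")):
    # FFI condition (b) only, for brevity: the content of r is the stimulus
    # occurring most frequently among the trials in which r occurs.
    with_r = [s for (s, ri) in trials if ri == r]
    return max(stimuli, key=with_r.count)

token_episode = ("deer", "R_deer")  # the single episode under explanation

# History A: elsewhere in the creature's life, R_deer mostly co-occurs with deer.
history_a = [token_episode] + [("deer", "R_deer")] * 9 + [("tiger", "R_deer")] * 2
# History B: the very same token episode, but elsewhere R_deer mostly
# co-occurs with tigers.
history_b = [token_episode] + [("deer", "R_deer")] * 2 + [("tiger", "R_deer")] * 9

print(ffi_content("R_deer", history_a))  # 'deer'
print(ffi_content("R_deer", history_b))  # 'tiger'
# Nothing in the token episode changed, yet its FFI-content did: the
# content-fixing facts lie outside the episode's spatiotemporal region.
```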


All the other pairings are occurrences of past or future perceptions. This means that the content of R'deer' is constituted partly by happenings outside the spatiotemporal region of the token explanandum phenomenon. For instance, the content of the R'deer' token which occurs during the single episode of deer perception in our example depends constitutively on whether Sdeer is accompanied by R'deer' more reliably than Stiger is accompanied by R'deer'. But this does not depend just on what happens on the single occasion whose mechanism we are looking for. Instead, it depends on what happens on many other occasions. It is therefore a property non-local to the phenomenon under investigation.

Indicator content in the hypothetico-frequentist interpretation HFI is also non-local to cognitive phenomena. According to HFI, the content of a token R'deer' depends constitutively on the frequency of ⟨Sdeer, R'deer'⟩ pairings in an infinite collective of similar pairings. This infinite collective is hypothetical – it would be realised if the creature from our example were to perform infinitely many deer/tiger differentiations. Therefore, the collective consists not just of actual stimulus-response pairings, but also of many counterfactual pairings – the results of all the indefinitely many hypothetical perceptual episodes mandated by HFI. These counterfactual pairings do not take place anywhere or anytime. As such, they are clearly non-local to the explanandum phenomenon. This means that the content of the R'deer' token determined by HFI is also non-local to the explanandum phenomenon.

The frequentist versions of indicator content also fail the mutual dependence test for mechanistic constitution. Recall that a candidate component is mutually dependent with the phenomenon if at least one of the following obtains: (a) the phenomenon and the candidate component can be simultaneously manipulated by a horizontally surgical intervention, or (b) the phenomenon and the candidate component are reciprocally manipulable (see Chap. 2). As I will show, the finite frequentist version of representational content FFI fails this test, because no intervention on representational content which changes the phenomenon is horizontally surgical, failing (a). Furthermore, any intervention on representational content which changes a subsequent temporal part of the phenomenon must also change an unrelated variable, namely some syntactic properties of the representational vehicle. Thus, FFI also fails (b). The hypothetico-frequentist version HFI fails because there are no available interventions on representational content at all.

Let us look at the finite frequentist alternative first, starting with manipulability with horizontal surgicality. In order to pass this test, there must be at least one intervention which changes the candidate component and the phenomenon simultaneously, and this intervention must not directly change more than one EIO on any level higher than or equal to the level of the candidate component. Returning to the deer-perception example: the creature is perceiving a deer and R'deer', its deer representation, activates. According to FFI, the content of R'deer' is determined by the proportion of pairs in the set of trials [⟨Sdeer, R'deer'⟩; ⟨Sdeer, R'deer'⟩; ⟨Stiger, R'tiger'⟩; ⟨Stiger, R'deer'⟩; . . .]. Researchers can change this proportion directly by stimulating the creature's neural firing so that R'tiger' is tokened while a deer is present. If this is done often enough


If this is done often enough and in combination with other interventions, they can succeed in changing the content of R‘deer’. Then they can observe any changes in the creature’s behaviour. Perhaps, on its next encounter with a deer, the creature will flee rather than approach. An analogous intervention could be performed indirectly, perhaps by placing the creature in an anomalous situation, where non-deer stimuli consistently elicit the tokening of R‘deer’, thus changing the frequency.

Now the question is whether any changes in the animal’s behaviour will be observed if the content of R‘deer’ is the only variable changed on its level of the part-whole hierarchy making up the phenomenon. The phenomenon in this case is the creature’s differential response to deer. Which other variables can we identify on the same level as the content of R‘deer’? Some further variables will be upstream neural processes which cause R‘deer’ and downstream neural processes caused in part by R‘deer’. Furthermore, the physical properties of R‘deer’ can also be modelled as if they were on the same level as its content. To see this, consider whether the content of R‘deer’ supervenes on its physical properties. We can see that according to FFI it does not. Therefore, one of three possibilities obtains: (a) the physical properties of R‘deer’ are on the same level as its content; or (b) the physical properties of R‘deer’ are on a lower level than its content, but are components in some unspecified EIO on the same level as the content of R‘deer’; or (c) the physical properties of R‘deer’ are on a higher level than its content, but they have components on the same level as the content of R‘deer’. These possibilities follow from the notion of universal constitution inherent to the horizontal surgicality account of mechanistic constitution. Naturally, it might be the case that for different physical properties of R‘deer’ different options from among (a), (b), or (c) are appropriate. The important point is that for each physical property of R‘deer’ one of the options will hold. And whichever one it is, there will be at least one variable on the same level as the content of R‘deer’ which must change if any of the physical properties of R‘deer’ change.

Given this, it is easy to show that horizontally surgical interventions into the content of R‘deer’ will not produce any changes in the phenomenon, i.e., in the creature’s differential response to deer. In order to change the phenomenon, either some of the physical properties of R‘deer’ or some downstream neural processes must adapt to the change in content. Either way, interventions which only change the content of R‘deer’ will not be enough, and any interventions which do change the phenomenon also change at least one other variable on the same level as the content of R‘deer’. Thus, the content of R‘deer’ and the phenomenon cannot be simultaneously manipulated by a horizontally surgical intervention.

It can also be shown that, assuming FFI, the representational content of R‘deer’ is not reciprocally manipulable with the phenomenon. Reciprocal manipulability requires that there be both an ideal* intervention on an earlier temporal part of the phenomenon which changes the representational content of R‘deer’ and an ideal* intervention on the representational content of R‘deer’ which changes a later temporal part of the phenomenon. As we have seen above, the content of R‘deer’ is changed by changing the frequency with which R‘deer’ co-occurs with Sdeer.


Furthermore, as we have seen above, these interventions on the representational content of R‘deer’ do not change the phenomenon unless they also change some physical properties of the vehicle R‘deer’. As changes in physical properties of R‘deer’ are necessary to change the phenomenon in general, they are also necessary to change any one temporal part of the phenomenon. However, this means that the interventions are not ideal* as required by the reciprocal manipulability criterion. The change in the later temporal part of the phenomenon is due to the fact that the interventions change the physical properties of the vehicle, not due to the fact that they also change the frequency of co-occurrence between the vehicle and stimulus types.

Note that the change in the frequency with which R‘deer’ and Sdeer co-occur is independent from the change in physical properties of R‘deer’. The physical properties of R‘deer’ do not cause the frequency with which it co-occurs with Sdeer, and vice versa. Thus, the intervention changing both the representational content of R‘deer’ and its physical properties manipulates two separate difference-making paths. The fact that merely changing the representational content without changing the physical properties of the vehicle has no effect on the phenomenon tells us which of these two independent paths is responsible for the effect. In sum, there are no bottom-up ideal* interventions on the representational content which change the phenomenon. Therefore, there is no reciprocal manipulability between representational content and the phenomenon.

The hypothetico-frequentist alternative HFI fares even worse, because it does not allow for any interventions on representational contents at all. This is because, according to the hypothetico-frequentist interpretation of chance, chances are limiting frequencies of outcomes in a hypothetical infinite collective. According to HFI, the representational content of R‘deer’ is determined partly by comparing the limiting frequency of Sdeer among an infinite collective of R‘deer’ trials to the limiting frequency of Sdeer among an infinite collective of R‘tiger’ trials. So, changing the representational content of R‘deer’ would require changing one of these limiting frequencies. But these collectives are infinitely large, and so changing the limiting frequencies of outcomes would require adding or subtracting an infinite number of particular outcomes. Adding any finite number of outcomes to an infinite collective of outcomes will not change the relative frequency of outcomes in this collective. Compare: how many 1s must I add to the infinite sequence [0, 1, 0, 1, 0, 1, . . .] in order to change the limiting frequency f(1) = 1/2 to 2/3? Clearly no finite number will do, since every 1 added will be offset by one of the infinite 0s (the worked limit below makes this explicit). In the same way, no matter how finitely many ⟨R‘deer’, Stiger⟩ outcomes I add to the infinite collective of outcomes, the limiting frequencies of all outcome types will remain unchanged. This is why there is no intervention of any kind on representational contents determined by HFI.

In summary, representational contents analysed with either of the frequentist accounts of probability are non-local to cognitive phenomena and cannot be manipulated simultaneously with cognitive phenomena by horizontally surgical interventions.
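To make the limiting-frequency argument fully explicit, here is the arithmetic behind the alternating-sequence example, in my own notation. Among the first n members of [0, 1, 0, 1, . . .] there are n/2 ones; suppose we add k further 1s to the collective. The relative frequency of 1s is then

\[
f_{n+k}(1) \;=\; \frac{n/2 + k}{n + k}, \qquad \lim_{n \to \infty} \frac{n/2 + k}{n + k} \;=\; \frac{1}{2} \quad \text{for any fixed finite } k.
\]

The added outcomes make up a vanishing fraction of the collective, so the limit stays at 1/2. Shifting the limiting frequency to 2/3 would require the added 1s to constitute a non-vanishing fraction of the collective, i.e., infinitely many of them.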

5.4 Assessing Indicator Content Based on Propensities

Let us now turn to variants of indicator content obtained by analysing chances as propensities. There are two such variants. One, LPI, results from analysing chances as long-run propensities. The other, SPI, results from analysing chances as single-case propensities. In this section, I will show that both LPI and SPI are non-local to cognitive phenomena. I will also show that SPI fails both versions of the mutual dependence test. Since both locality and mutual dependence are required for mechanism components, I conclude that neither LPI nor SPI contents can be mechanism components. Therefore, they are not explanatorily relevant in mechanistic explanations of cognition. I will again use the deer perception example to illustrate the issues, as in the previous section.

Let us first apply the locality test to LPI and SPI. I argue that both fail the test, because propensities are properties of experimental setups, and experimental setups are non-local to the organism whose cognitive behaviour we want to explain. As we saw in Sect. 5.2, both LPI and SPI make representational content depend on a relation between certain propensities of relevant experimental setups. Experimental setups (or chance-setups) are indeterministic systems which can produce a range of outcomes. Representational content according to LPI and SPI constitutively depends on conditional propensities. Therefore, we must determine whether the relevant experimental setups mentioned in LPI and SPI are local to cognitive phenomena.

Both long-run propensity theories and single-case propensity theories require that experimental setups relevant to P(A|B) produce both A and B outcomes (for long-run propensity theories see Gillies, 2000a, pp. 131–132; for single-case propensity theories see e.g., McCurdy, 1996). Looking back at how indicator representations get cashed out in LPI and SPI, we see that the relevant experimental setups produce, or include, both responses and stimuli. Using the deer perception example for illustration: according to LPI, the neural pattern R‘deer’ carries the content Sdeer if an experimental setup has the propensity to produce Sdeer more frequently in cases when it produces (or includes) R‘deer’ than in cases where it produces (or includes) R‘tiger’. By definition then, the relevant experimental setup produces not just the response, but also some causes of the stimulus. This is a result that is difficult to square with representational content’s locality to cognitive phenomena.

When representational contents are interpreted in accordance with SPI, the locality test comes out much the same. Firstly, the experimental setup whose propensities at t1 determine representational contents produces both stimuli Sdeer at t2 and neural responses R‘deer’ at t3. Therefore, it must include at least a common cause of the two. In fact, prominent single-case propensity theories identify experimental setups with the state of the whole universe, or the whole light-cone (Miller, 1994, pp. 185–186, cited in Gillies, 2000a, p. 127), or the whole set of causally and nomologically relevant facts (Fetzer, 1982).


Experimental setups for single-case propensities are thus at least as bloated as experimental setups for long-run propensities, if not more. If representational contents are properties of these experimental setups, then they are surely non-local to cognitive phenomena.

Now I will assess whether propensity-based interpretations of indicator representational contents are mutually dependent with cognitive phenomena. Recall that mechanistic constitution can be inferred based on interventions which simultaneously change both the phenomenon and the candidate component, provided that these interventions are horizontally surgical. Horizontally surgical interventions are those which directly change only one EIO on each level higher than or equal to that of the candidate component. I argue that in the case of the long-run propensity version of indicator representation LPI, it is an empirical question whether there are such horizontally surgical interventions. In the case of the single-case propensity version SPI, however, such interventions cannot exist, because representational content cannot be changed simultaneously with the phenomenon.

According to LPI, the conditional propensity P(R‘deer’|Sdeer) is, together with other conditional propensities, constitutively responsible for the representational content of R‘deer’. To change the representational content, one would need to change this conditional propensity, or one of the other conditional propensities which stand in the relations specified by LPI. The conditional propensities are properties of the relevant experimental setups. Since they are closely related to dispositions, it is reasonable to assume that they supervene on the structural properties of the relevant experimental setup. As we saw earlier in this section, this experimental setup includes, at least partly, the perceiver-creature and its environment. Interventions which change representational content based on LPI will thus change some of the structural properties of the experimental setup relevant for P(R‘deer’|Sdeer). In this case, the structural properties of the experimental setup (including the perceiver-creature) are on a lower level than the representational content of R‘deer’, because the content supervenes on the propensities of the experimental setup, which in turn supervene on the physical properties of the experimental setup. Therefore, interventions which change the physical properties of the experimental setup, for instance by stimulating the creature’s neurons in such a way that R‘deer’ is tokened in the presence of a tiger, do not break the horizontal surgicality requirement. If such interventions change the creature’s behaviour, then they should count as constitution-probing.8

This contrasts with the case of the finite frequentist interpretation of P(R‘deer’|Sdeer). As we saw in Sect. 5.3, interventions on the conditional chance P(R‘deer’|Sdeer) interpreted as a conditional frequency are not horizontally surgical. This is because the frequency f(R‘deer’|Sdeer) depends on the structural properties of, e.g., the perceiver’s brain only causally, not constitutively. Hence, interventions which change these structural properties are not horizontally surgical for the finite frequentist, but permissible for the long-run propensity theorist. Whether there are in fact any interventions on representational content with respect to cognitive phenomena is an empirical question.

8 Note, however, that representational content still fails the locality criterion, and so cannot be considered a component of the mechanism for the phenomenon.
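For reference, the two propensity-based content conditions discussed in this section can be put schematically as follows. The notation is mine and abstracts from the full statements of LPI and SPI in Sect. 5.2:

\[
\text{(LPI)} \quad P\!\left(S_{\text{deer}} \mid R_{\text{deer}}\right) \;>\; P\!\left(S_{\text{deer}} \mid R_{\text{tiger}}\right)
\]
\[
\text{(SPI)} \quad P_{t_1}\!\left(S_{\text{deer}}\ \text{at}\ t_2 \mid R_{\text{deer}}\ \text{at}\ t_3\right) \;>\; P_{t_1}\!\left(S_{\text{deer}}\ \text{at}\ t_2 \mid R_{\text{tiger}}\ \text{at}\ t_3\right), \qquad t_1 < t_2 < t_3,
\]

where the LPI propensities are long-run properties of the experimental setup, and the SPI propensities are indexed to a time t1 which, as argued below, must precede stimulus onset t2.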


A similar line of reasoning applies when we consider whether representational contents analysed with the help of LPI are reciprocally manipulable with cognitive phenomena. Since the physical properties of representational vehicles are part of the experimental setup which determines the values of long-run propensities, constitution-probing interventions on representational content can change the physical properties of representational vehicles. Reciprocal manipulability requires that there be an ideal* intervention on an earlier temporal part of the phenomenon with respect to representational content. It is likely that at least in some phenomena this may be the case. If the phenomenon takes long enough for the creation of new connections between neurons, and this change in neural connectivity changes the propensities on which the representational content supervenes, then an activation experiment will count as a top-down intervention on the phenomenon with respect to representational content. In addition, reciprocal manipulability also requires that there be ideal* interventions on representational content with respect to a later temporal part of the phenomenon. Here it seems plausible that an intervention which changes the physical properties of the representational vehicle can thereby also change its representational content, and this change in physical properties of the vehicle can have consequences for the way in which later temporal parts of the phenomenon unfold.

According to SPI, the single-case propensity version of IC*, representational content also constitutively depends on conditional propensities. But single-case propensities differ from long-run propensities in that they are time-relative, in the way described in Sect. 5.2. The time-relativity carries through to representational content. Since representational content depends on Pt1(Rt3|St2), the representational content of R‘deer’ can change as different times are substituted for t1. As the reader will notice, it is unclear which moment t1 is decisive for determining representational content during a token cognitive phenomenon. It is, however, clear that t1 must be earlier than the moment of stimulus onset t2, because the propensity Pt1(Sdeer|R‘deer’) is the propensity of the experimental setup at t1 to produce or bring about Sdeer at t2. At t2 itself and after, there is no such propensity, since Sdeer will have already been produced or brought about. Given that the propensities on which the representational content of R‘deer’ depends are all indexed to t1 [. . .]

[. . .] Mi); it is the cause of that movement or result (Mi) that is the product of the behavior. This is not to deny that beliefs and desires figure in the explanation of behavior. Quite the contrary. As we shall see, it is what we believe and desire, the fact that we believe this and desire that, that explains our behavior by explaining, not why Mi occurs (for Mi is not the behavior), but why C causes Mi. (Dretske, 1988, p. 38; original emphasis)

I reproduce Dretske’s visualisation of the relationships between mechanism components, bodily movements and indicated states-of-affairs as Fig. 8.1 below.

2 In the case of artefacts, another type of task-function, i.e., design function, can be considered. However, for neural representation in natural cognitive systems, design functions play no role.


Fig. 8.1 Dretske’s account of the relations between a representational vehicle, movement of the organism and the intentional object of the representation. (Adapted from Dretske (1988, p. 84). With permission from MIT Press)

Dretske’s account is pitched in terms of a particular view on the nature of behaviour, and in terms of causal rather than constitutive explanation. However, abstracting from these particularities of exposition, it is clear that this is a version of the Fittingness explanandum defence. We can easily amend the account so that the internal state C is a component of a mechanism for a cognitive phenomenon rather than a cause of a movement Mi. The indicator relation between C and F would then be taken to explain the constitutive relevance of C to the cognitive phenomenon, i.e., why component C is fitting for the cognitive phenomenon. Instead of asking why C is appropriate for causing Mi, we may just as well ask: “Why is C an appropriate component in a constitutive mechanism for Mi?” Dretske’s answer in this case would be: because C indicates F. In other words, Dretske’s account is a version of the Fittingness explanandum defence.

Having introduced both Shea’s and Dretske’s versions of the Fittingness explanandum defence, I will now demonstrate that the defence fails. As a first step to defeating the defence, we must define an explanation-seeking why-question corresponding to the Fittingness explanandum. For ease of exposition, let us take up the case of spatial navigation and its connection to the activity of the rat hippocampus from Chap. 6. Here, the primary explanandum is the rat’s navigating a maze, and the mechanism for any of the phenomena falling under this description will be partly constituted by the activity of place-cells in the hippocampus, grid-cells, head-direction cells, and other neural assemblies. I will explore two ways in which one might read the Fittingness explanandum as an explanation-seeking why-question consistent with the account of explanation requests from Chap. 3. Firstly, the Fittingness explanandum could be rendered as follows:

(Fittingness I) “Why do interactions of neurons in the hippocampus subserve the rat’s navigating a maze, rather than subserving X?” (where X can stand for any number of cognitive phenomena which do not involve hippocampus activity)

The second option reads as follows:

(Fittingness II) “Why does a rat’s navigating a maze rely on the interactions of neurons in the hippocampus, rather than on Y?” (where Y can stand for any other pattern of interactions, perhaps in other neural populations)

Shea’s statement of the Fittingness explanandum is ambiguous between Fittingness I and Fittingness II. In the case of Dretske, it is clear that Fittingness II is meant.


Neither Fittingness I nor Fittingness II is a straightforward mechanistic explanation request, since they do not pit a phenomenon against a contrast class of other phenomena directly. However, for both Fittingness I and Fittingness II a related mechanistic explanatory request can be found such that the answer to this mechanistic explanatory request will also answer Fittingness I or Fittingness II.

For Fittingness I, the proper mechanistic explanation request is simply “Why, in the presence of hippocampus activity, does the rat navigate a maze rather than doing X?” (where X can once again stand for any cognitive phenomenon which does not actually involve hippocampus activity). Answering this explanation request requires some knowledge of what the mechanisms for tokens belonging to X would be. With this knowledge we can show that hippocampus activity of the sort found in maze navigation is not required for X. In other words, hippocampus activity is not a component of the mechanisms for token phenomena falling under X. Furthermore, we can reason about how X would change if the hippocampus activity observed during the rat’s maze navigation were added to the mechanism for X. This allows us to conclude that if hippocampus activity were somehow added to the mechanism for X, it would be either redundant, or the resulting mechanism would no longer produce X, but some other phenomenon.

Generally speaking, the concern motivating Fittingness I is that we do not know enough about actual mechanisms for various cognitive phenomena, and so neural activity in one part of the system seems as good a candidate for being a component in any of the mechanisms as any other. But as we learn more about the actual range of mechanisms occurring in cognitive systems, the thought of swapping a component into an unrelated mechanism will seem less plausible, since we will know what effects such an alteration would have.

Fittingness II is the reverse of Fittingness I. Whereas Fittingness I asks why a mechanism is involved in a particular phenomenon (rather than any other phenomenon), Fittingness II asks why a particular phenomenon relies on a particular mechanism (rather than any other mechanism). Fittingness II is similar to Fittingness I in that it does not conform to the usual pattern of mechanistic explanation requests. As with Fittingness I, here too the actual phenomenon is not contrasted with a class of other phenomena. Rather, Fittingness II contrasts the mechanism for the phenomenon with a class of contrast mechanisms Y, with the implication that these could just as well have produced the phenomenon. Instead of the hippocampal neurons interacting as they do during the rat’s maze navigation, it could have been brainstem neurons, interacting in some other way.

Rephrasing Fittingness II into the usual form of explanation request yields: “Why, in the presence of rat maze navigation, do the hippocampal neurons interact as they do, rather than Y?” (where Y can be replaced by any other pattern of neural activity). The most straightforward way to answer that question is to notice that the explanandum phenomenon has shifted a level down. Then we can investigate the mechanism for the interactions between the hippocampal neurons and compare this mechanism to the mechanisms for the contrast pattern of neural activity Y. However, this answer is unlikely to appease the defenders of representationalism, because they are not so much interested in why the interactions of hippocampal neurons occur, but rather in why the interactions of hippocampal neurons partly constitute rat maze navigation.


Dipping a level down will not help to clarify why the constitution relation between the phenomenon and the mechanism holds. Instead, we can note that, as with Fittingness I, Fittingness II derives part of its urgency from the fact that the class of contrast mechanisms Y is left unbounded. For many members of Y, we do not know what phenomena they would produce if they occurred instead of the actual mechanism for rat maze navigation. Nevertheless, many subclasses of Y would clearly not produce rat maze navigation had they occurred instead of the interactions of hippocampal neurons which figure in the mechanism for rat maze navigation. For instance, a mechanism in which the cortical neurons followed a regular oscillatory pattern of activity would produce a seizure and accompanying behaviours instead. Thus, the answer to “Why does rat navigation rely on interactions of neurons in the hippocampus rather than on oscillatory activity in the cortex?” is clear. It is because the latter pattern of activity produces a different phenomenon.

Here, my opponents might protest that the answer they were after should point out some relevant properties of the neurons in the hippocampus which make these neurons suitable for maze navigation. Instead, the answer I provide merely restates the previously known fact that Y produces a different phenomenon. However, a complete mechanism description of the mechanism for rat maze navigation will already include all and only those properties of the neurons in the hippocampus which are relevant for maze navigation. And furthermore, comparing that mechanism description to descriptions of other mechanisms (which produce different phenomena) tells us which properties in the mechanism description make the neurons in the hippocampus suitable for maze navigation and not suitable for these other phenomena. Fittingness II, like Fittingness I, dissolves with increasing knowledge of the range of possible mechanisms and the phenomena associated with them.

Unfortunately, there is one further problem. Most of the mechanisms in contrast class Y would not produce anything like maze navigation behaviour. However, there might be some subclasses of Y which do. Such alternative ways of achieving maze navigation – different algorithms for the maze navigation task – might be available. Since these different algorithms all implement maze navigation, one cannot explain why the actual mechanism occurs and not these alternatives in the same way as with other members of Y which do not implement maze navigation.

To solve this problem, one may point to more fine-grained individuation criteria for phenomena. It might be that the phenomena produced by the alternative mechanisms would also fall under the concept maze navigation. However, there would likely still be differences in some aspects of the behaviour from the actual explanandum. The rat might navigate faster, or slower. Or it might navigate on a different path. If we identify phenomena in a more fine-grained manner, we can once more answer that the actual mechanism produces the explanandum phenomenon, while the alternative mechanism Y produces a slightly different phenomenon. Nevertheless, in practice such fine-grained differentiation of phenomena is unusual, and potentially unproductive for scientific research. Explanations should offer some level of generality, instead of splitting phenomena into innumerable classes.


The defender of representationalism can offer the following explanation for Fittingness II when considering an implementation of a different algorithm as a contrast: the components of the actual mechanism represent the position of the animal, the orientation of its head and a number of landmarks in the maze. A later component of the mechanism represents the path from the animal’s location to the location where food is hidden. In order for the alternative mechanism Y to occur, components which represent, e.g., the position of the sun in the sky would have to be available. Since all the representations needed for the occurrence of the actual mechanism are available, while some components needed for the occurrence of the alternative mechanism are not, the alternative mechanism does not occur.

Can we find a mechanistic non-representational alternative to this explanation? Yes, but it is a causal, not a constitutive, mechanistic explanation. The mechanism description for rat maze navigation does not contain the information necessary to account for this contrast. However, that should not be a surprise. The question at its core concerns the constitution relation itself. Mechanism descriptions tell us which parts of the system constitute the explanandum phenomenon in which the system engages. They do not tell us why the system has these parts instead of some other parts. For this we need causal explanations. In the case of biological cognitive systems such as rats, the correct way to answer Fittingness II is by uncovering the ontogenetic and developmental mechanisms responsible for the structure of the rat’s cognitive system, the range of mechanisms which occur in it and the phenomena they produce. The ontogeny and development of the rat will explain why the actual mechanism rather than the alternative mechanism Y occurs. For the mechanism Y to occur, the ontogeny would have to differ from the actual way the rat developed. For example, connections between the neurons and whole brain regions would have to be organised in ways enabling the occurrence of Y. How exactly the ontogeny would have to differ naturally depends on the exact specification of Y.

This albeit sketchy answer does much better at explaining Fittingness II than the representational alternative. This is because in the representational explanation the representational relation between, e.g., place cell activity and a location in the maze is treated as a given. That is, the explanation uses the representational relation as an explanans, but does not offer any reason as to why the representational relation might hold.3 Looking back at Fig. 8.1 illustrating Dretske’s view: the indicator relation between C and F explains why C causes M, but there is no explanation as to why the indicator relation holds in Dretske’s schema. In this, the representational explanatory schema is similar to the constitutive mechanistic one. The indicator relation is a given for Dretske, just as constitution is a given for the constitutive mechanist.4

3 Note that various theories of representation tell us only what the representational relation is, not why it holds between particular entities.
4 This does not mean that these relations are epistemic givens. Both must be empirically discovered. However, both act as explanatory primitives in their respective explanatory schemas.


The advantage of the appeal to ontogenetic mechanisms is that these mechanisms can causally explain both Fittingness II and the indicator relations appealed to by Dretske. Rat maze navigation relies on interactions of neurons in the hippocampus rather than on some other activity Y because rats undergo ontogenetic development O, which brings about the mechanisms including interactions of the neurons in the hippocampus rather than Y. The referent of O depends on the exact specification of Y and can be fully determined only on the basis of empirical research into rat ontogenesis and development. Crucially, however, the ontogenesis and development of the rat will also causally explain the indicator relations holding between the place cells in the hippocampus and the locations in the maze. The indicator relation is not given but arises as the rat develops, matures, and learns.5 Interventions on the development of the rat will result in changes to the indication relation.

Dretske is right in saying that there is some connection between the constitution relation and the indicator relation. However, it is preferable to construe this connection as a common cause/explanation by ontogenesis rather than privileging the indicator relation over the constitution relation. The resulting scheme can be seen in Fig. 8.2. Here O (the ontogenetic mechanism) causes and explains both the fact that C constitutes Mi and the fact that C indicates F.

Dretske might object that even if both the indicator relation between C and F and the constitutive relation between C and Mi are commonly caused by the same ontogenetic mechanisms, the indicator relation could still be explanatorily more fundamental. However, it is unclear on what grounds this could be supported. The indicator relation can be viewed as explanatorily more fundamental on Dretske’s original picture, because it is taken to hold prior to the constitutive relation. In Dretske’s view, fully-fledged indicators are merely recruited by learning mechanisms to play specific causal roles in cognitive mechanisms. But this aspect of Dretske’s view is notoriously untenable – the indicator relation arises during development at the same time as the mechanism for the phenomenon. There is no sense in which it is temporally, causally, or ontically prior to the constitutive relation. Therefore, there is no prima facie justification for viewing it as explanatorily prior to the constitutive relation either – especially since, on my view, these relations share a common cause.

Fig. 8.2 Relations between the representational vehicle, movement of the organism and the intentional object once ontogenetic mechanisms are considered

5 In Dretske’s view, we can explain the causal relationship between C and Mi by selection and learning, but the indicator relation between C and F is the reason why C is selected (“recruited”) as a cause of Mi. Selection thus does not explain the indicator relation itself.

8.3 The Success Explanandum

Together with the Fittingness explanandum, Shea also endorses the dual-explananda defence with the Success explanandum: “[Semantic contents] are properties of internal vehicles that allow for the explanation of successful and unsuccessful behaviour of an organism in terms of correct and incorrect representation.” (Shea, 2018, p. 31). This supports my contention that various kinds of secondary explananda are often run together. It is unclear to what extent Shea believes that the two different secondary explananda as I have individuated them here are separate concerns. However, the clearest statement of the dual-explananda defence with the Success explanandum comes from Gładziejewski and Miłkowski (2017). In their version, the success or failure of a cognitive process is in part causally explained by the accuracy of neural representation (specifically structural representations):

Explanations that invoke S-representations should thus be construed as causal explanations that feature facts regarding similarity [between the representational vehicle and the environment] as an explanans and success or failure as an explanandum. (Gładziejewski & Miłkowski, 2017, p. 340)

The fact that the explanation is supposed to be causal is in itself interesting, because Gładziejewski and Miłkowski use a new mechanistic analysis of causal explanation together with Woodward’s (2003) interventionist account of causality, which also inspired the mutual manipulability criterion for constitutive relevance:

The structural similarity between the representational vehicle and the target is causally relevant for success by virtue of the fact that interventions in similarity would be associated with changes in the success of whatever capacity that is based on, or guided by the representation in question. (Gładziejewski & Miłkowski, 2017, p. 341)

I will therefore devote a few paragraphs to showing that Gładziejewski and Miłkowski’s argument fails to establish the causal explanatory relevance of representational content to cognitive phenomena, before moving to the main business of this section, which deals with constitutive relevance.

Gładziejewski and Miłkowski claim that if a cognitive phenomenon is explained by structural representations, there is a causal relationship between similarity and success, represented by variables X and Y respectively. X is “capable of taking a range of values {X1, X2, . . ., Xn}, where each increasing value corresponds to an increased degree of similarity between the vehicle and the target” (ibid., p. 342) and “[i]ncreasing values of Y = {Y1, Y2, . . ., Yn} would correspond to increasing degrees of success” (ibid., p. 343, my formatting). Interventions on X, according to Gładziejewski and Miłkowski, result in changes in Y. Furthermore, the relation between values of X and values of Y is orderly: for a range of values of X, interventions setting X to a higher value result in higher values of Y.

Importantly, they claim that the structure of the representational vehicle itself is not related to degrees of success in this way.


Interventions on the structure of representational vehicles do not regularly result in changes in degrees of success, because compensatory interventions on the target of representation are available: “it is impossible to say how manipulating the vehicle’s structure . . . will change success independently of facts about the target; . . . interventions on the vehicle’s structure change the success only insofar as they change the degree of similarity between the vehicle and the target” (ibid., p. 346, original emphasis). In other words, intervening on the structure of the vehicle only changes success if the environment is not changed to fit the new structure of the vehicle.

In order for Gładziejewski and Miłkowski’s argument to succeed, they need to have established (a) that there is a causal relationship between structural similarity (of the vehicle and the target) and success, and (b) that there is no such relation between the structure of the vehicle itself and success. Point (b) is vital, because if there is a causal relation between the structure of the vehicle and success, the structure of the vehicle can (partly) causally explain success. And since structural similarity supervenes on the vehicle’s structure, Gładziejewski and Miłkowski’s opponents could reasonably argue that the causal relation between structural similarity and success is parasitic on the causal relation between the vehicle’s structure and success. This would make an explanation of success purely in terms of structure available, defeating the purpose of the dual-explananda defence.

The problem with Gładziejewski and Miłkowski’s argument is that, contrary to their analysis, the structure of the representational vehicle does have an orderly causal relationship with the degree of success. According to Woodward’s definition of intervention, which Gładziejewski and Miłkowski accept, an intervention I on X with respect to Y indicates causation only if I changes the value of X and Y alone, keeping fixed all other variables which could influence the value of Y and are not on the causal path between X and Y (Woodward, 2003, p. 98; and see Gładziejewski & Miłkowski, 2017, p. 342). The various states of the target of the representation in the environment are clearly independent from the structure of the representational vehicle. Therefore, interventions on the structure of the representational vehicle aimed at assessing its causal connection to the success of cognitive phenomena must keep the target of the representation constant. The target’s state is a confounding variable, because it provides a separate causal path to changing the degree-of-success variable Y. Once the state of the representational target is kept constant, as required by Woodward’s account, changes in the structure of the representational vehicle lead to orderly changes in the degree of success. This means that, contrary to Gładziejewski and Miłkowski’s position, the structure of the representational vehicle does stand in a causal relation to success/failure, and a non-representational mechanistic explanation of success/failure is available.

This point will be clearer if we consider a causal model for the situation, including variables for behaviour B, the structure of the target of representation T, the structure of the representational vehicle V, and the structural similarity between the vehicle and the target S. There is a non-causal dependence relation between S and the pair of variables T and V, suggesting that we need to use Woodward’s (2015) amended account of mutual manipulability with ideal* interventions to judge whether V, S, or T cause B.
Under this standard, interventions on S count as ideal with respect to B even if they do not screen off the influence of V and T (see Fig. 8.3a).


Fig. 8.3a Non-causal dependence relations in dashed lines. The intervention variable I1 causally affects both T and S. Since T is in the supervenience basis of S, this intervention can probe the causal relation between S and B

Fig. 8.3b The intervention variable I2 is meant to probe the causal relation between V and B. In order to do this, it must not at the same time intervene on T, as there may be an independent causal route from T to B

Interventions on V with respect to B, meanwhile, need not screen off S, but must not at the same time change the value of T, because there is no dependence relation between V and T (see Fig. 8.3b). An intervention which does change both V and T breaches Woodward’s requirement I2 – all variables other than the putative cause variable which are causally connected with the effect must be kept undisturbed by the intervention. And, as I hope is clear, at least some interventions on V which do not at the same time change T do change B. This establishes the causal relation between V and B denied by Gładziejewski and Miłkowski.

All that being said, one might still reasonably worry whether structural similarity might not be a more reliable guide to success/failure than the vehicle’s structure on its own. After all, so far I have only shown that the vehicle’s structure is causally relevant to success. And while that point is expressly denied by Gładziejewski and Miłkowski, admitting it might not be a decisive blow for the representationalist. If the only cases in which changes to the vehicle’s structure influence success/failure are those in which structural similarity is also changed, we are at an impasse. The non-representationalist considers the vehicle’s structure explanatory; the representationalist, on the other hand, deals with structural similarity. However, there is a good reason to privilege the vehicle’s structure. It turns out that there are ways to intervene on success/failure via the vehicle’s structure which do not at the same time change structural similarity. Specifically, in the case of the maze-navigating rat I have in mind scaling or mirroring the hippocampal cognitive map. Scaling and mirroring preserve the degree of structural similarity between the hippocampus and the maze – what changes is only the representing relation. It also changes the degree of success – a rat whose hippocampal map gets mirrored would turn left instead of right in a T-maze.
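The mirroring point can be made concrete with a minimal toy sketch. The model below is my illustration, not Gładziejewski and Miłkowski’s formalism: “structure” is cashed out as a profile of pairwise distances and “success” as heading for the arm the map marks as baited; both choices are assumptions made for the sake of the example.

```python
# A toy model of the V/T/S/B variables discussed above. The distance-profile
# notion of "structure" and the food-site notion of "success" are my own
# illustrative assumptions, not Gładziejewski and Miłkowski's formalism.
from itertools import combinations

def distance_profile(points):
    """A crude stand-in for 'structure': the multiset of pairwise distances."""
    return sorted(abs(ax - bx) + abs(ay - by)
                  for (ax, ay), (bx, by) in combinations(points, 2))

def similarity(vehicle, target):
    """S: maximal when vehicle and target share a distance profile."""
    return 1.0 if distance_profile(vehicle) == distance_profile(target) else 0.0

def success(vehicle, target):
    """B: the rat heads for the arm its map marks as baited; it succeeds
    iff that coincides with where the target actually places the food."""
    return vehicle[-1] == target[-1]      # last point = (remembered) food site

maze = [(0, 0), (-1, 1), (1, 1)]          # T: junction, left arm, right arm (food)
map_ok = [(0, 0), (-1, 1), (1, 1)]        # V: a veridical cognitive map
map_mirrored = [(-x, y) for (x, y) in map_ok]   # intervention on V: mirror the map

# Intervening on V while holding T fixed changes B ...
print(success(map_ok, maze), success(map_mirrored, maze))        # True False
# ... yet S is untouched, because mirroring preserves the distance profile:
print(similarity(map_ok, maze), similarity(map_mirrored, maze))  # 1.0 1.0
```

Mirroring the map leaves the distance profile – and hence S – untouched while flipping the rat’s turn, so V changes B without any change in S.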


Gładziejewski and Miłkowski’s argument concerns causal relevance. Having considered it, let me now move to the Success explanandum in the context of constitutive mechanistic explanation. The mechanistic framework of explanation as presented here is also sufficiently equipped to deal with the Success explanandum. This case is more straightforward than the Fittingness explanandum, in that there is only one version of the Success explanandum, and it is closer to a standard explanation request. The Success explanandum contrasts the actual cognitive phenomenon with a class of contrast phenomena F taken to be failures. A mirror case, which we might call the Failure explanandum, contrasts an actual phenomenon considered to be a failure with a contrast class of successes. In what follows, I will deal primarily with the Success case, as the treatment of the Failure case is equivalent. An explanation request corresponding to the Success explanandum for our rat spatial navigation example reads as follows:

(Success) Why does the rat navigate the maze successfully rather than unsuccessfully?

In order to explain this contrast, we must first identify the class of failures F. Clearly, there are many ways in which a rat may be said to have failed to navigate the maze. Perhaps the experimenter has hidden food in branch 6 of the radial maze, and the animal runs into branch 5. Alternatively, the rat may stand still in the middle of the maze, not choosing any single branch. Or the rat finds the hidden food, but only after randomly sampling each and every branch of the maze. The rat could even run around in a circle, seemingly trying to bite its own tail. What counts as a failure seems to be largely dependent on task specification, and failure is often ascribed externally by the investigator.6 However, the cases mentioned above would likely count as failures to navigate the maze, and therefore are members of class F.7

Clearly, the mechanisms underlying each of these different failure types would differ from each other considerably. For instance, most of the cases feature running behaviours, and corresponding mechanism components, but the case in which the rat stands still in the middle of the maze does not. Differences in brain activity among the various cases are likely just as pronounced. The fact that the class of failures F is extremely heterogeneous suggests that a single mechanistic explanatory text cannot be formulated for the contrast between the phenomenon and class F as a whole. This is because the differences between the mechanism for the phenomenon and the most similar mechanism for a member of F will not be shared by all members of F, as required by the procedure for constructing METs introduced in Chap. 3. Therefore, as in other cases where the contrast class is heterogeneous, the contrast class must be broken up into several subclasses to allow for meaningful comparison between the phenomenon and the (multiple) contrasts.

6 While some task specifications coincide with what might be viewed as objective goals of the organism, like getting food, in other paradigms, such as operant conditioning, the organism’s response can be extensively shaped by the experimenter. Here, the animal gets food because it successfully performs the task set by the investigator, not the other way around.
7 The case in which a rat finds the food after randomly sampling each branch of the maze shows that the role of the investigator in assessing failure cannot be easily discounted, for in a certain sense the rat has successfully located the food, and thereby fulfilled its nutritional needs.


Comparing the mechanism for the phenomenon with the most similar mechanism from each homogeneous subclass delivers a piecemeal answer to the original Success explanandum. In the rat navigation case, appropriate subclasses could include cases where the rat stays still, cases where the rat runs into a different branch of the maze, cases where the rat takes too long to decide on the branch, etc.

The defender of representationalism might point out that, by dividing the contrast class up, we are losing the generality that comes with the simple representationalist explanation. But dividing the class up is not an arbitrary decision on the part of the mechanist. Rather, the class is divided up because its members are dissimilar. This is not surprising, given that the class of failures is circumscribed by researchers’ fiat. The generality that is lost is only apparent. The mechanistic analysis reveals the underlying messiness.8

Here, in order to show that there is a unified explanandum missed by the mechanist, the representationalist might argue that only some types of failures are of interest to cognitive science. These types of failures should then form the class of relevant failures F. And as it happens, these types of failures all involve misrepresentation. However, this approach critically turns on whether such failures of special interest to cognitive science can be identified independently from the concept of misrepresentation. If the only way we can distinguish interesting failures from accidental ones is by noting that the former involve incorrect representation, then the argument fails. Under this supposition, misrepresentation is what makes something a relevant failure. This means that the presence of correct/incorrect representations is a shared difference between the cases of success and the cases of failure. So far, so good for the representationalist – correct representation seems to play exactly the kind of explanatory role I envisaged in Chap. 3. The problem is that the statement “The rat succeeded in navigating the maze rather than failed, because it represented correctly rather than incorrectly” turns out to be analytic, and vacuous. One might as well say “The rat succeeded in navigating the maze rather than failed because it succeeded in navigating the maze rather than failed”.

The representationalist might now protest that there are different ways in which the rat could have represented incorrectly, and these explain the different ways in which the rat could have failed. So, for example, the fact that the rat represented the food pellet as in branch 5 and not in branch 6 explains why the rat found the pellet (success) rather than running into branch 6 (failure).9 But if that is the answer, then the representationalist is partitioning the class of failures, just like the mechanist. Hence, there is no increase in generality over the mechanistic alternative after all.

Perhaps there is another way of demarcating the relevant class of failures, namely by enumeration or by exclusion. If empirical investigation then showed that all these cases involve incorrect representation, this finding would be interesting and relevant for the purposes of explaining success.

8 Additionally, the ultimate goal of the representationalist analysis is to uncover patterns of failure (Shea, 2018, p. 38), which requires partitioning the class of failures in the end anyway.
9 The expressions “branch 5” and “branch 6” should be read de re here.


However, as it turns out, cases of failure investigated in cognitive science do not neatly map onto cases of putative incorrect representation. There are important cases both of failure with correct representation and of success despite incorrect representation.

For a case of failure with correct representation, consider the classic frame problem, studied extensively in theoretical artificial intelligence research (Dennett, 1978, Ch. 7; Dreyfus, 1992; Pylyshyn, 1987). In this case, an imagined artificial cognitive system is placed under time constraints and asked to solve a problem without pre-existing relevance values attributed to features of the environment. If the cognitive system attempts to build an accurate representation of the task domain, it will run out of time, because the number of parameters which could be relevant to the task is unbounded (a rough quantitative illustration follows below). This is a case of failure which is clearly interesting to cognitive science and has in fact spawned an extensive literature. Yet failure in this case is not due to incorrect representation, but rather, apparently, to “over-correct” representation.

For success with incorrect representation, consider the case of illusory superiority, well known from cognitive psychology (e.g., Alicke, 1985; Schmidt et al., 1999). Humans tend to overestimate their competence at various activities in comparison to others. This incorrect evaluation of one’s own competence can, however, lead to long-term success, because it may contribute to affect regulation (Roese & Olson, 2007).10 Some Gettier-like cases can also be conceptualised as success despite incorrect representation. Suppose a rat’s perceptual system produces a cat representation in response to a black plushie toy. This clear misrepresentation leads the rat to scurry away, thus successfully avoiding the cat hiding behind the plushie.

The cases of failure with correct representation and success with incorrect representation show that Shea’s dictum about success being explained by correct representation and failure by incorrect representation cannot hold. These cases also put pressure on Gładziejewski and Miłkowski’s analysis, in which there is an orderly relation between correctness and success. This is especially true for the frame problem, in which the effort to produce and manipulate a maximally correct representation leads to failure. Gładziejewski and Miłkowski could retort that the relationship between correctness and success need only hold in certain narrow boundary conditions (Woodward, 2003, p. 203) and yet count as explanatory. Nevertheless, the problem cases still fall outside the explanatory scope of the representational paradigm. The mechanist, on the other hand, can in principle explain success in contrast with any homogeneous class of failures. The correctness or incorrectness of representation does not play any explanatory role, but neither does it hinder the explanatory effort. After all, it is possible to contrast the mechanism underlying the success cases with mechanisms which would produce failures independent of the correctness or incorrectness of representational content. At the end of the day, then, the mechanist can explain more than the representationalist.

10 This case admittedly concerns mental representations rather than neural representations. However, the principle should hold for neural representations as well.
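Returning briefly to the frame problem: a back-of-the-envelope calculation (my illustration, not drawn from the frame-problem literature itself) gives a sense of why unbounded relevance assessment defeats a time-constrained system. Even if only n binary features of the environment are candidates for relevance, exhaustively checking which combinations matter means examining on the order of 2^n subsets:

\[
2^{50} \approx 1.13 \times 10^{15}, \qquad \frac{2^{50}}{10^{9}\ \text{checks/s}} \approx 1.13 \times 10^{6}\ \text{s} \approx 13\ \text{days},
\]

so at a generous billion checks per second, n = 50 already costs roughly 13 days – and n is, by hypothesis, unbounded.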

8.4 Conclusion

The dual-explananda defence tries to show that mechanistic explanations of cognitive phenomena are incomplete because they can neither account for the fit between mechanisms and the environment nor isolate the factors relevant for the success of cognitive systems in performing particular tasks. In this chapter, I showed that this is not the case.

By accumulating contrastive mechanistic explanations, we can learn why particular mechanisms produce the particular phenomena they do, because contrastive mechanistic explanations contain information about the crucial points of intervention – the components which would have to change in order to transform a phenomenon into a different one (see Chap. 3). Knowing how changes to the mechanism would impact the phenomenon means that we know why these particular components produce this particular phenomenon. Furthermore, the mechanistic framework can also explain why a particular phenomenon is produced by a particular set of components as opposed to some other compatible set of components. The best answer to that question can be obtained by examining the ontogenetic mechanisms which caused the cognitive system to assemble the way it did and enabled the occurrence of particular mechanisms in the system. These ontogenetic mechanisms can also causally explain the presence of the indicator and structural resemblance relations on which putative representationalist explanations rely. The mechanistic explanatory strategy therefore results in deeper explanations.

The mechanistic framework can also account for the success of cognitive systems in particular tasks. Contrastive mechanistic explanations of success vs. particular subclasses of failure can be achieved by simply comparing the actual mechanism leading to success with the mechanism which would result in a failure of that particular class. The representationalist cannot claim that their explanatory strategy is more general than the mechanistic one, because the only way to get the class of successes to correlate with cases of correct representation is by defining it to be so. Therefore, the representationalist too can only explain success in contrast to particular classes of failure. However, since the representationalist is committed to the view that correct representation explains success and incorrect representation explains failure, they cannot account for the cases in which failure occurs despite correct representation and success occurs despite incorrect representation. The mechanist, meanwhile, can account for all classes of failure regardless of representational correctness. Therefore, the Success explanandum is also better accounted for in the mechanistic framework.

References

Alicke, M. D. (1985). Global self-evaluation as determined by the desirability and controllability of trait adjectives. Journal of Personality and Social Psychology, 49(6), 1621–1630. https://doi.org/10.1037/0022-3514.49.6.1621


Dennett, D. (1978). Brainstorms. MIT Press.
Dretske, F. I. (1988). Explaining behavior. MIT Press.
Dreyfus, H. (1992). What computers still can’t do. MIT Press.
Egan, F. (1995). Computation and content. Philosophical Review, 104(2), 181–203. https://doi.org/10.2307/2185977
Egan, F. (2014). How to think about mental content. Philosophical Studies, 170(1), 115–135. https://doi.org/10.1007/s11098-013-0172-0
Gładziejewski, P., & Miłkowski, M. (2017). Structural representations: Causally relevant and different from detectors. Biology and Philosophy, 32(3), 337–355. https://doi.org/10.1007/s10539-017-9562-6
Pylyshyn, Z. (Ed.). (1987). The robot’s dilemma: The frame problem in artificial intelligence. Ablex.
Roese, N. J., & Olson, J. M. (2007). Better, stronger, faster: Self-serving judgment, affect regulation, and the optimal vigilance hypothesis. Perspectives on Psychological Science, 2(2), 124–141. https://doi.org/10.1111/j.1745-6916.2007.00033.x
Schmidt, I. W., Berg, I. J., & Deelman, B. G. (1999). Illusory superiority in self-reported memory of older adults. Aging, Neuropsychology, and Cognition, 6(4), 288–301. https://doi.org/10.1076/1382-5585(199912)06:04;1-B;FT288
Shea, N. (2018). Representation in cognitive science. Oxford University Press.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford University Press.
Woodward, J. (2015). Interventionism and causal exclusion. Philosophy and Phenomenological Research, 91(2), 303–347. https://doi.org/10.1111/phpr.12095

Chapter 9

The Pragmatic Necessity Defence

Abstract In this chapter, I discuss the view that referring to representational contents is necessary in non-explanatory/extratheoretical contexts. I concentrate on Egan's account of the cognitive/intentional gloss and its functions. On my reading, the functions of the intentional gloss include connecting scientific theory with pre-theoretical explananda, acting as a heuristic, tracking the progress of neural computations, and playing a didactic role. I argue that the use of representational contents to connect cognitive theory with pre-theoretical explananda obscures a series of complex mappings between common-sense concepts, scientific concepts, tasks, and neural activity, which is made all the more relevant by the increasing evidence of neural reuse. I further argue that the use of representational contents for didactic purposes may have pernicious effects for this very reason. I accept that representational contents might have a heuristic and tracking role in the context of discovery, but contend that there are other usable heuristics, and that, should representational contents be used in this way, care must be taken to clearly differentiate between the context of discovery and the context of explanation. I also take the opportunity to discuss whether Egan's mathematical contents can be explanatorily relevant in mechanistic explanations of cognition. I conclude that mathematical contents are not constitutively relevant to cognitive phenomena. However, describing components as computing certain mathematical functions might be appropriate where such a description adequately expresses the difference between the mechanism for a phenomenon and the mechanisms underlying the phenomena in a contrast class, in accordance with the procedure introduced in Chap. 3.

Keywords Egan · Cognitive contents · Mathematical contents · Cognitive concepts · Pragmatics of explanation

In this chapter I consider what I call the "pragmatic necessity defence" of representationalism. The pragmatic necessity defence holds that purely mechanistic explanations of cognitive phenomena fail to fulfil some of the pragmatic functions of explanation. In order to fulfil these pragmatic functions, mechanism components must be considered as representational vehicles carrying particular content.


Explanations which refer to representational content are then ceteris paribus better than non-representational mechanistic ones, because they can serve some purposes which non-representational explanations cannot. The best-developed example of the pragmatic necessity defence in the literature is due to Egan (1995, 2010, 2014). Her so-called deflationary realism about representational contents also deserves attention because it is not committed to any naturalistic theory of representational contents and is thus immune from the arguments I presented in Part II.

In this chapter I will first introduce Egan's deflationary realism in Sect. 9.1. This includes her crucial distinction between mathematical contents and cognitive contents. Only cognitive contents are the subject of the pragmatic necessity defence. In Sect. 9.2 I will list the options for answering Egan's challenge from the mechanistic point of view. In Sect. 9.3 I discuss the role of mathematical contents in mechanistic explanations of cognitive phenomena. In Sect. 9.4 I list the supposed pragmatic benefits of ascribing cognitive contents. Finally, in Sect. 9.5 I argue that the pragmatic necessity defence fails, because the supposed pragmatic benefits of cognitive contents either do not obtain, or are pernicious, or are irrelevant to explanation.

9.1 Egan's Deflationary Realism

Frances Egan defends a deflationary variant of representationalism. According to Egan, two types of representational contents are ascribed to representational vehicles in cognitive science: mathematical contents and cognitive contents.

Mathematical contents are ascribed in the course of constructing function-theoretic explanations of cognitive capacities. A function-theoretic explanation explains a cognitive capacity by interpreting parts of the organism as computing certain mathematical functions. States of the system are then ascribed mathematical contents corresponding to the inputs and outputs of said functions (Egan, 2014). One of Egan's preferred examples is Marr's characterisation of the edge detector as computing the Laplacian of a Gaussian in the course of visual perception. The activity of neurons feeding into the edge detector is here taken as representing the inputs to the function computed by the edge detector, i.e., an array of numerical values. The edge detector's activity represents the output of the function – a transformation of the input numerical array.

According to Egan, cognitive components carry particular mathematical contents essentially. In other words, mathematical contents type-identify cognitive components (Egan, 2010, p. 255ff, 2014, p. 122, cf. ibid., p. 117). That is, the edge detector is, from the perspective of cognitive science, essentially a Laplacian-of-a-Gaussian computer. It is only incidentally an edge detector, owing to the kind of environment it is embedded in. The mathematical contents thus provide a canonical, environment-independent characterisation of the cognitive components involved in the performance of some capacity (ibid., p. 123). Crucially, according to Egan, changing the mathematical contents carried by a cognitive component is equivalent to changing the component's type. If the edge detector stopped computing the Laplacian of a Gaussian and computed a different function instead, it would become a different device.

Importantly, mathematical contents cannot be naturalised. Ascribing mathematical contents to the inputs and outputs of a component requires a mapping between the inputs and outputs on the one hand, and a set of abstract entities corresponding respectively to the domain and the range of the computed function on the other. Because the domain and the codomain are sets of abstracta, there are no naturalistic mind-independent relations holding between them and the representational vehicles (Egan, 2014, p. 123).

Though Egan does not explicitly specify how appropriate mathematical contents are chosen, it can be surmised that some of the factors at play include the extent to which the behaviour of the component corresponds to the candidate functions, the simplicity of the candidate functions, and the ease with which the functions can be computed and transformed by researchers. These factors directly impact the extent to which the function-theoretic explanation can deliver its goals. As Egan writes:

    This [function-theoretic] description characterizes the mechanism as a member of a well-understood class of mathematical devices. This is an essential feature of [function-theoretic] accounts: they allow us to bring to bear knowledge about how such functions can be executed. This [function-theoretic characterisation] gives us a deeper understanding of the device; we already understand such mathematical functions as vector subtraction, Laplacian of Gaussian filters, integration, etc. (Egan, 2014, p. 122)
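To make the shape of a function-theoretic characterisation concrete, consider the following minimal Python sketch. It is my own illustration, not Egan's or Marr's: the kernel size, the value of sigma, and the function names are assumptions made for the example. The point is that nothing in the code mentions edges or any other distal property – the "mathematical content" is exhausted by the numerical input-output mapping.

```python
# A sketch of a function-theoretic characterisation: the "edge detector"
# is typed by the mathematical function it computes -- convolution with a
# Laplacian-of-Gaussian (LoG) kernel. Inputs and outputs are bare
# numerical arrays; no distal environment appears anywhere in the code.
import numpy as np
from scipy.signal import convolve2d

def log_kernel(sigma: float, size: int) -> np.ndarray:
    """Discretised kernel proportional to the Laplacian of a Gaussian."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 / (2 * sigma ** 2) - 1) * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # zero-mean, so a uniform input yields zero response

def detector(activity: np.ndarray, sigma: float = 1.4) -> np.ndarray:
    """Mathematical content: map an input array to its LoG transform."""
    return convolve2d(activity, log_kernel(sigma, size=9),
                      mode="same", boundary="symm")

# A step in the numerical array produces a strong response at the step,
# whatever that array happens to covary with in some environment.
inp = np.zeros((32, 32))
inp[:, 16:] = 1.0
out = detector(inp)
print(np.abs(out).sum(axis=0).argmax())  # column of peak response, ~15-16
```

On Egan's picture, this same characterisation applies to Visua and Twin-Visua alike (introduced below), which is exactly what makes it environment-independent.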

As we saw in Chap. 4, the possibility of misrepresentation is an important desideratum for any representationalist theory. According to Egan, mathematical contents can be non-veridical if the component malfunctions (Egan, 2014, p. 127, fn. 18). However, she does not offer any detail about what constitutes malfunctioning and how to distinguish malfunctioning components from non-malfunctioning components that simply realize a different mathematical function. Since, according to Egan, components are type-identified according to the function they compute, and a mathematical function is an input-output mapping, a "malfunctioning" component may simply be a functioning component that realizes some other mathematical function. Thus, a "malfunctioning" component cannot be objectively distinguished from a "well-functioning" component of a different type.

The second type of content ascribed to representational vehicles in Egan's view is what she calls cognitive content (Egan, 2014, p. 124). Cognitive contents, according to Egan, are distal contents ascribed to the vehicles on the basis of covariance or structural correspondence between the vehicles and the referents. In other words, cognitive contents are the kind of contents generally posited by non-deflationary representationalist accounts of cognition (Egan, 2010, p. 256). However, pragmatic considerations are decisive in ascribing cognitive contents. Importantly, according to Egan, cognitive contents do not play any role in cognitive-scientific theories, which are limited to function-theoretic characterisations using mathematical contents (ibid., Egan, 2014, pp. 127–128). Instead, cognitive contents are part of an informal or explanatory gloss, which is an add-on to the theory and serves a number of pragmatic functions.

Furthermore, cognitive contents are not essential to the vehicles, i.e., they do not serve to type-identify the vehicles (ibid., p. 127). This means that one and the same type of vehicle can have different contents, depending on the environment in which it is located, but also on other considerations, such as ease of tracking, ease of exposition, etc. Because of this, in Egan's view, cognitive contents are not naturalistic – there is no privileged way to fix cognitive contents, even though covariance and structural correspondence are prominent (ibid., p. 124). Pragmatic considerations may, and often do, influence which contents are ascribed, so that contents may be ascribed even in the absence of covariance or structural correspondence. This also means that most content-indeterminacy worries do not apply to Egan's deflationary representationalism. When contents seem indeterminate, pragmatics determine which alternative is used (ibid., p. 125). Consequently, a single theory can have multiple glosses with disparate cognitive content assignments, each appropriate for different purposes (ibid., p. 131).

Egan continues her analysis of the edge detector case by arguing that the location of edges in the visual field is the cognitive content of the edge detector's output, at least in Marr's theory of vision. This content is ascribed to the vehicle for pragmatic purposes and not because of any relation holding uniquely and necessarily between the edge detector and edges. If the edge detectors were found in other environments, or hooked up to different sensory peripheries, the content would have been different. Egan illustrates this point with the example of two systems, Visua and Twin-Visua (ibid., pp. 125–127). Visua evolved on Earth and its activity correlates with the presence of edges in the visual field. Furthermore, it seems that its detecting edges makes it adaptive in this environment. Consequently, it is glossed as an edge detector. Twin-Visua, on the other hand, is mathematically the same system as Visua, computing the same function. However, it evolved in a different environment, in which light behaves weirdly, so that there is no correlation between Twin-Visua's activity and edges. There is, however, a correlation between Twin-Visua's activity and shadows. Furthermore, detecting shadows makes the system adaptive in this environment. Egan argues that Twin-Visua would be glossed as a shadow detector, not an edge detector.

Furthermore, neither Visua nor Twin-Visua would be glossed as a detector of some higher-level disjunctive property encompassing both shadows and edges, "EDGEDOW". This is because such higher-level properties are less intelligible to students and researchers. Practical and didactic purposes would require supplementing the high-level gloss with a way to translate EDGEDOW into the lower-level predicates when talking about Visua or Twin-Visua. The higher-level gloss would be superfluous, and the role of drawing out the relation between the two systems is better achieved by the function-theoretic characterisation with mathematical contents.

9.2 Options for a Mechanistic Answer to Deflationary Realism

Egan’s deflationary representationalism certainly omits some of the more objectionable features of representationalism. Distal contents, which are at the forefront of classical representationalist accounts, are relegated to a less important role. In fact,

9.2

Options for a Mechanistic Answer to Deflationary Realism

165

within the realm of computational cognitive science, distal contents, i.e., cognitive contents, play no role at all. However, Egan still insists that these cognitive contents play a host of important pragmatic functions and are therefore indispensable. She rejects both Stich’s (1983) and Chomsky’s (1995) non-representational interpretations of cognitive science as inadequate. Because Egan’s view improves on classical representationalist interpretations, there is a number of stances a non-representationalist mechanist could take towards it. The first option is to accept Egan’s view on both mathematical and cognitive contents. This option is open because cognitive contents only play a pragmatic role in Egan’s view. Since I adopt the mechanistic theory of explanation, my primary concern is about the epistemic dimension of explanation. I could therefore grant Egan that ascribing cognitive contents might be pragmatically necessary while denying that it adds anything epistemically significant to any given explanation. All that matters for evaluating the explanatory power of a mechanistic explanation is the extent to which it identifies the relevant differences between the mechanism for the phenomenon and the contrast class, regardless of whether they are identified by computational description, or by the intentional gloss. One could even argue that the intentional gloss is, even on Egan’s own view, confined to use in non-explanatory contexts – in teaching, discovery, popular science etc. In an explanatory context, one would use the canonical mathematical characterisations of the components. In mechanism descriptions, these canonical function-theoretic characterisations would serve to type-identify the entity. In METs they would be used in order to show that components computing these functions and not some alternative functions are responsible for the occurrence of the phenomenon over a contrast. This is compatible with the fact that Egan views cognitive content ascription as relevant for an explanatory gloss, because Egan’s view of explanation seems to be pragmatic rather than epistemic. The second option is to reject both mathematical and cognitive contents. This view amounts to claiming that even mathematical contents cannot feature in mechanistic explanatory texts. If we accept Egan’s analysis of what it takes to have mathematical content as bearing a certain relation to abstracta, one could argue that this relation is non-local to cognitive phenomena, because abstracta such as numbers and other mathematical objects do not have a location in space or time. Hence, this option seems preferable to the first one. However, taking this position would effectively banish all mathematical descriptions from mechanistic explanation. For instance, a neuron’s property of firing with a frequency of 80 Hz would also turn out to be a relation to an abstract entity, and therefore likewise unacceptable. Most paradigm cases of mechanistic explanation would fall into the same problem. Therefore, this option must be avoided. However, it points to the fact that mathematical content must be reinterpreted in some way, such that it fits with the mechanistic theory of explanation. The third option is just that – reinterpreting mathematical content and integrating it into the mechanistic framework, but at the same time rejecting cognitive contents as explanatorily irrelevant—even pragmatically. This is the line I take. In order to justify this position, I will show that the supposed pragmatic benefits of cognitive

166

9 The Pragmatic Necessity Defence

contents (a) do not obtain, (b) can be achieved in alternative ways within the mechanistic framework, or (c) are immaterial to explanatory practices. The next 3 sections articulate and defend this third option.

9.3 Reinterpreting Mathematical Content

Egan’s mathematical contents can be integrated into the mechanical framework for explaining cognition if we look at the function-theoretic characterisation as a way of describing a component activity of the mechanism for a cognitive phenomenon. A straightforward way in which this description could play a role in mechanistic explanation is as part of a mechanism description. Take, for example, a neural mechanism responsible for visual discrimination. According to the present suggestion, the mechanism description of this mechanism would contain components such as . However, this cannot work for the following reason: a token instance of the component would only instantiate one input-output pair from the unbounded number of input-output pairs which determine the Laplacian-of-a-Gaussian function. This single input-output pairing is insufficient to determine that the function computed is the Laplacian-of-a-Gaussian, and not a different function which happens to share this single input-output pairing with the Laplacian-of-a-Gaussian. This means that describing the edge-detector’s activity as computing the Laplacian of a Gaussian and not some other function in this particular instance requires tacit reference to other instances where the edge-detector also produced input-output pairings consistent with the Laplacian of a Gaussian, but inconsistent with other candidates. In fact, the exact function would not be fully determined without counterfactual cases, in which the edge-detector would produce the characteristic outputs, if certain inputs were received. An appropriate description for the single instance would therefore be something like . This would be preceded by descriptions of the activity of the input neurons at t0. The function-theoretic description might appear in a mechanism sketch or a schema, but would, in that case, be a mere place-holder for the more specific description in the complete mechanism description. Thus, mathematical contents do not figure in explanations via the mechanism descriptions. The function-theoretic characterisation could, however, play a role in certain mechanistic explanatory texts. Recall that mechanistic explanatory texts contain the differences between the mechanism for the explanandum phenomenon and the mechanism for the closest member of the class of contrast phenomena, but only those ones shared by all the mechanisms for the various phenomena in the contrast class. As we have seen in Chap. 3 (see also Kohár & Krickel, 2021), one way in which differences can be shared by all members of the contrast class is because all the differences fall under a single description. For instance, Kohár and Krickel consider the contrast between the phenomenon of a neuron’s firing and the contrast

9.3

Reinterpreting Mathematical Content

167

class of its staying at rest. The mechanism for the explanandum phenomenon includes the membrane voltage depolarizing to -45 mV. The mechanism for the closest member of the contrast class includes the membrane voltage being at 46 mV. However, the mechanistic explanatory text does not contain the difference between -45 and -46 mV, but rather the difference between -45 mV and less than -45 mV, because the mechanisms for the various other members of the contrast class do not share a single value of the membrane potential. However, the membrane potential in each of the mechanisms falls under the same higher-level property description “less than -45mV”. Therefore, the mechanistic explanatory text contains the contrast between “-45 mV” and “less than -45 mV”. Similarly, the function-theoretic description “computing the Laplacian of a Gaussian” can be used to highlight the difference between the explanandum phenomenon of visual discrimination and a contrast class: all the mechanisms for members of the contrast class differ from the mechanism for the contrast phenomenon because the edge-detectors produce levels of activity inconsistent with computing this function. In such cases, describing the relevant difference as “computing the Laplacian of a Gaussian as opposed to not computing it” appears to be the correct way of picking out the switch-point between the phenomenon and the contrast class. This shows that the mechanistic framework for explaining cognition can accommodate mathematical contents. However, there is an important difference between Egan’s view and the mechanistic perspective. As we saw in Sect. 9.1, Egan thinks that the function-theoretic characterisation of a component is a privileged, canonical way of describing what the component does. This means that, for Egan, the functiontheoretic characterisation is the only way of identifying the component relevant to cognitive science. However, in the mechanistic framework, this is not the case. Although the function-theoretic characterisation is relevant for the assembly of some mechanistic explanatory texts, there might be other contrastive explananda in which it plays no explanatory role. Using the function-theoretic description in mechanistic explanatory texts for these explananda would be infelicitous, adding unnecessary detail to the explanation. Next, we must not discount the option that two or more function-theoretic descriptions might be available for the same component. Consider an example discussed by Sprevak (2010), and Dewhurst (2018). Imagine an electronic device wired to receive two input voltages. If both of the input voltages are above 5 V, the device also outputs 5 V. If, however, any of the input voltages are below 5 V, the device outputs 0 V. Sprevak points out that such a physical device behaves in a way consistent with two different logical functions. If voltages above 5 V are interpreted as binary digits “1” and voltages below 5 V as binary “0”s, then the component computes the Boolean AND function. However, if the assignment is reversed so that voltages below 5 V are “1”s and voltages above 5 Vs are “0”s, the component appears to compute the Boolean OR function. Similar examples are given by Shagrir (2001). Sprevak (2010) argues that this ambiguity about the function computed by the component uncovers a central flaw in Egan’s contention that mathematical contents

168

9 The Pragmatic Necessity Defence

are essential to their vehicles. For Sprevak, the differentiation between AND-computers and OR-computers can only be made on the basis of what Egan would call cognitive contents. Naturally, I cannot accept Sprevak's conclusion, as this would essentially collapse the distinction between mathematical and cognitive contents. Another way to look at the situation, however, is to conclude that the device can be viewed as computing either the OR function or the AND function for the purposes of constructing a mechanistic explanatory text. Which interpretation is correct depends not on interpreting the voltages in accordance with some mind-dependent gloss, but on the explanandum contrast. For some contrasts, the differences between the mechanism for the phenomenon and all the members of the contrast class might be summarised by pointing out that the phenomenon contains an AND-computer as opposed to some other component. For other contrasts, the relevant way of summarising the difference between the mechanism for the phenomenon and all the mechanisms for the contrast class might be to point out the presence or lack of an OR gate.

However, this is still inadequate. As Dewhurst (2018) points out, there is a third way of describing the computation performed by the component, which is neutral on whether the computation is an AND or an OR. Instead, the component simply maps the voltages in accordance with the function f: f(0 V, 0 V) = 0 V, f(0 V, 5 V) = 0 V, f(5 V, 0 V) = 0 V, and f(5 V, 5 V) = 5 V. This function-theoretic description is much closer to the hardware; in fact, it is quite comparable to my example with firing frequencies above. But even then, there is still potential for one component to satisfy multiple function-theoretic descriptions. The token input-output pairing computed by the component on a single occasion underdetermines the mathematical function which is to be ascribed to the component. Therefore, it is possible that for some contrasts it is the fact that the input-output pairing is consistent with some function f that matters, while for other contrasts the decisive difference might be the fact that the input-output pairing is consistent with another function g.

In Dewhurst's example, consider the situation where the first of the inputs is 5 V, while the second one is 0 V. In this situation, the output will be 0 V. In this case, the behaviour of the component is consistent with a function g which takes only one input and flips it – g: g(5 V) = 0 V, g(0 V) = 5 V. Naturally, the two functions f and g diverge under other argument assignments, but given this specific input, the functions are equivalent. Now consider a phenomenon partly constituted by the occurrence of this particular input-output mapping. It seems conceivable that some contrast explananda concerning the phenomenon are explained partly by the fact that the input-output pairing conforms to f, while others are explained by the fact that the input-output pairing conforms to g. Therefore, there is no single canonical description of the component activity which would be uniquely poised to type-identify the component and highlight its explanatory relevance. Different contrasts may require different ways of describing the activities; the sketch below makes the point concrete.
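The following minimal Python sketch is my own illustration (the function and variable names are assumptions for the example, not Sprevak's or Dewhurst's notation). The same hardware-level mapping verifies the AND reading, the OR reading, and, on the token input (5 V, 0 V), the one-place flip function g:

```python
# The Sprevak/Dewhurst device, modelled over two voltage levels.
HIGH, LOW = 5.0, 0.0  # volts

def device(v1: float, v2: float) -> float:
    """Dewhurst's neutral mapping f: 5 V out iff both inputs are high."""
    return HIGH if (v1 >= HIGH and v2 >= HIGH) else LOW

def as_and(b1: bool, b2: bool) -> bool:
    """Interpretation 1: high voltage reads as '1'; the device computes AND."""
    return device(HIGH if b1 else LOW, HIGH if b2 else LOW) == HIGH

def as_or(b1: bool, b2: bool) -> bool:
    """Interpretation 2: the assignment is reversed (low reads as '1');
    the very same device now computes OR."""
    return device(LOW if b1 else HIGH, LOW if b2 else HIGH) == LOW

def g(v: float) -> float:
    """A one-place 'flip' function: g(5 V) = 0 V, g(0 V) = 5 V."""
    return LOW if v >= HIGH else HIGH

# Both logical readings are consistent with the single physical mapping.
for b1 in (False, True):
    for b2 in (False, True):
        assert as_and(b1, b2) == (b1 and b2)
        assert as_or(b1, b2) == (b1 or b2)

# The token pairing (5 V, 0 V) -> 0 V is consistent both with f and with
# g applied to the first input: one instance underdetermines the function.
assert device(HIGH, LOW) == LOW == g(HIGH)
```

On the view defended here, which of f, g, AND, or OR figures in a mechanistic explanatory text depends on the explanandum contrast, not on any privileged interpretation of the voltages.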

9.4 Roles of the Intentional Gloss

In the above characterisation of Egan’s view, I have omitted discussion of the roles played by the intentional, or explanatory gloss beyond noting that it is supposed to concern pragmatic aspects of explanation. Here I will expand on the various uses Egan ascribes to the gloss and the cognitive contents included in it. I should note here that Egan does not provide a definitive list of all purposes for the gloss. I have identified a number of concerns she returns to in her writings, but it is unclear to what extent she thinks of them as separate worries. Nevertheless, clearly identifying and separating the uses of the gloss will help develop an alternative account. Firstly, Egan argues that the gloss plays a role in showing how the computations performed by the cognitive system are appropriate for the performance of a task, or a capacity (Egan, 2010, p. 258, 2014, p. 123). This is a version of the dual-explananda defence I covered in the previous chapter, specifically the Fittingness explanandum defence. I will not cover it here again. Interestingly, Egan does not endorse the companion Success explanandum defence. In her view, computational theories in cognitive science presuppose success, and cognitive contents are ascribed with this in mind (Egan, 2010, p. 257). The researchers’ assumptions about what counts as success in performing a task serve to disambiguate between various available content ascriptions. Cognitive contents are assigned so that representations are correct in cases of success and incorrect in cases of failure. Therefore, the correctness of the contents cannot show why the system succeeds, because such explanation would be circular (see Chap. 8).1 Secondly, Egan argues that the gloss and cognitive contents are needed to connect the computational theory with the original pre-theoretic explananda which motivated the formulation of the theory to begin with (Egan, 2010, pp. 256–257). These explananda are often formulated in “intentional terms” (Egan, 2014, p. 119), and therefore intentional vocabulary of the gloss is appropriate for the explanantia. For example, Marr’s computational theory of vision is a theory of, well, vision – the ability of humans and animals to “see what is where”. By ascribing the content EDGE to the activity of the edge-detector, we can connect the computation it is performing to the original aim of the theory. The edge-detector’s computing the Laplacian of a Gaussian contributes to the process of seeing what is where by allowing the system to see where the edges are. Thirdly, Egan argues that the gloss is important for heuristic purposes (Egan, 2014, p. 128). A heuristic is a rule or a procedure which guides problem-solving in those cases where the problem is ill-defined, or where the space of possible solutions is too large (Simon & Newell, 1958). The role of a heuristic is to limit the number of possible solutions to be examined and thus assist the problem-solver in arriving at the correct solution faster. However, heuristics contrast with algorithms in that good 1

However, the cognitive gloss does, according to Egan, show how the system succeeds by showing why the mechanism is appropriate for performing a particular task. This is consistent with the Fittingness explanandum.

170

9 The Pragmatic Necessity Defence

heuristics, unlike good algorithms, do not always lead to the correct solution. Occasionally the correct solution is among those eliminated from consideration by the heuristic. On average, however, the heuristic leads to the correct answer more often than not. The heuristic usefulness of the gloss is supposed to stem from the fact that by thinking of the cognitive system as manipulating distal cognitive contents we can extrapolate from its behaviour in one test condition to other test conditions. By noting which properties of its environment, the system represents in performing some cognitive task in conditions A, and considering how those properties change in different conditions B, we can hypothesise how the system will behave, if we place it in conditions B. The supposed advantage of this approach is that we need not consider the physical properties of the system at all to formulate the hypothesis based on cognitive contents. Fourthly, Egan mentions in passing that the gloss is useful for keeping track of the various steps taken by the cognitive system in performance of the cognitive task (Egan, 2014, p. 125). We can, for instance, note that when the edge-detectors have fired, the system has extracted information about edges from the optical array, but it has not yet extracted the information about colour, because colour is represented only later in the visual processing. This allows us to predict patterns of failure. If we lesion the colour-representing neurons, the cognitive system will still be able to distinguish squares from triangles. However, if we lesion the edge-detectors, the system will also lose the ability to tell colours apart (unless there is a separate pathway from the retina to the colour-processing areas). This tracking function is connected to the previously mentioned heuristic function. Fifth, Egan hints at a didactic purpose of the cognitive gloss (Egan, 2010, p. 258, 2014, pp. 126–127). However, this argument is perhaps best taken as my own interpretation inspired by Egan’s work than as a part of her own position. Explaining the computational theory to students who are not yet in the know, or to lay-people would be difficult if there was no shared vocabulary. The gloss provides such a shared vocabulary because it relates the computational processes taking place in the cognitive system to the environment. The student or lay person knows how to talk and reason about the environment. Furthermore, they may have ideas about how they would go about solving the problem posed by the cognitive task using information about the environment. Therefore, the gloss acts as an interface, allowing the uninitiated to interact with the computational theory. Importantly, the intended audience influences which contents are ascribed to activities of the cognitive systems. The utility of ascribing the content EDGE to the activity of the edge detector for didactic purposes lies in the fact that the concept EDGE is familiar to the people to whom the gloss is directed. Even if it turned out that in radically different environments the edge-detector would also respond to e.g., shadows, we would still persist in ascribing the content EDGE rather than a disjunctive content, like say EDGEDOW (EDGE or SHADOW), because the latter content would be unwieldy, confusing and mostly useless. In normal conditions, it is edges that are detected, and in the radically different environment it is shadows. 
So long as we do not need to talk about the radically different environments, this nicety can be overlooked in giving an exposition to the computational theory of vision.
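To picture the tracking function mentioned under point four, here is a minimal Python sketch. It is my own toy illustration: the stage names and the dependency structure are assumptions for the example, not a model drawn from Egan or Marr. Tracking which information has been extracted at which stage amounts to reasoning over a dependency graph, which is also what licenses the lesion predictions above:

```python
from typing import Dict, Set

# Hypothetical processing stages and the stages each depends on.
DEPENDS: Dict[str, Set[str]] = {
    "retina": set(),
    "edge_detection": {"retina"},
    "shape_discrimination": {"edge_detection"},
    "colour_processing": {"edge_detection"},  # assume no separate retinal pathway
}

def works(stage: str, lesioned: Set[str]) -> bool:
    """A stage works iff it is intact and everything it depends on works."""
    return stage not in lesioned and all(
        works(dep, lesioned) for dep in DEPENDS[stage]
    )

# Lesioning colour processing spares shape discrimination...
assert works("shape_discrimination", {"colour_processing"})
# ...but lesioning the edge detectors also knocks out colour processing.
assert not works("colour_processing", {"edge_detection"})
```

Notably, nothing in this bookkeeping mentions distal contents; the predictions fall out of the dependency structure alone, which anticipates the worry about the tracking function raised in the next section.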


To summarise, the gloss performs five functions:

1. Showing fittingness
2. Connecting to pre-theoretical explananda
3. Heuristic function
4. Tracking function
5. Didactic function

9.5 Rejecting Cognitive Contents

In this section I show that a mechanistic theory of cognition has sufficient resources at its disposal to deliver the benefits of Egan's cognitive gloss, to the extent that these may be regarded as benefits. In other words, some uses of the cognitive gloss can be discharged by other means, while other uses are pernicious and therefore should not be pursued.

Let me first address the issue of connecting theory with the pre-theoretical explanandum. Egan argues that Marr's computational theory can only be regarded as a theory of vision if the computations performed by the components of the visual system are understood as operating on representations of external objects, i.e., edges, shadows, etc. Vision, here, is the pre-theoretical explanandum, glossed by Marr as "seeing what is where". The problem of relating the results of cognitive neuroscience to pre-theoretical explananda has recently been studied under the label "translation problem" by Francken and Slors (2014, 2018). According to Francken and Slors, the translation problem is the problem of relating so-called common-sense cognitive concepts (CCCs) to brain activity (Francken & Slors, 2018, p. 68). Common-sense cognitive concepts are folk-psychological (i.e., pre-theoretical) categories, such as attention, memory, emotions, etc. (Francken & Slors, 2014, p. 249). Vision as glossed by Marr and Egan would also qualify as a CCC. Francken and Slors argue that experiments and experimental results in neuroscience are only indirectly related to the CCCs through a number of translation steps.

First, the CCCs as they exist pre-theoretically are not suitable for scientific inquiry, because they are often "too coarse-grained and unspecific" (Francken & Slors, 2018, p. 68). As a consequence, the extension of a CCC, such as memory or vision, cannot be fully determined. Take the case of vision. Using the pre-theoretical concept of vision, can we determine whether a blindsight patient sees their way around a maze, or not? They certainly move around the environment as if they were able to see what is where, to some extent. However, they report no visual awareness of their surroundings. It seems that on some interpretations of the term "seeing" the blindsight patient sees, but on other interpretations they do not. This is exactly what makes the blindsight case interesting. Another example: the pre-theoretical concept of memory can be used to refer to phenomena as different as remembering the smell of strawberries and remembering that it is your mother's birthday. For these reasons, the CCCs cannot be directly investigated before a clearer definition is adopted. These definitions form the so-called scientific cognitive concepts (SCCs).


A scientific cognitive concept is a clearly defined version of a common-sense cognitive concept, used in psychological and neuroscientific research (ibid.). However, the mapping between CCCs and SCCs is not one-to-one, but many-to-many. The pre-theoretical ontology of cognitive concepts can be partitioned into scientific cognitive concepts in multiple cross-cutting and incompatible ways (ibid.). For instance, memory can be classified into short-term and long-term memory; procedural, semantic, and autobiographical memory; modal and amodal memory; etc. Different researchers routinely use different classifications and classify the same phenomena as instances of different types (Roediger et al., 2007). Thus, the first instance of the translation problem arises in the conversion of CCCs into SCCs.

But having a clearly defined SCC is not yet sufficient for the purposes of scientific research. After all, SCCs, just like CCCs, cover a wide range of phenomena and cannot be investigated directly. Rather, researchers use certain standardised experimental paradigms, or tasks, in order to study the SCCs. These tasks are taken to operationalise the SCC, so that it can be detected or measured through the performance of test subjects on the task. According to Francken and Slors, however, the translation from SCCs to tasks is not straightforward either. Any given SCC has a number of tasks associated with it, and vice versa, tasks are associated with more than one SCC (Francken & Slors, 2014, pp. 250–251). For instance, consider the classic Stroop task. In this task, a test participant is presented with variously coloured words and asked to report the colour of each word. The stimuli themselves are colour names, which results in two conditions (Stroop, 1935): the colour in which the word is written can be the same as the colour the word refers to, or it can be different. Participants make more mistakes and react more slowly in the non-matching condition than in the matching condition. This task can be interpreted as a response-inhibition task, but it can also be interpreted as a colour-naming task (Francken & Slors, 2014, p. 251). Thus, it is associated with at least two distinct SCCs. On the other hand, there are many other response-inhibition tasks, as well as many other colour-naming tasks. The translation problem thus also exists in the transition between SCCs and tasks. There is a many-to-many mapping between SCCs and tasks, and so regarding the results from a single task as uniquely detecting or measuring an SCC is unwarranted.

Finally, the translation problem also exists in the transition between tasks and brain activity (ibid.). Patterns of brain activity are not uniquely correlated with certain tasks. Rather, many brain regions participate in a single task, and a single brain region participates in many different tasks. Anderson (2010) has found that this is true even for quite finely localised brain areas. Depressingly, it is not the case that a brain area only participates in performing tasks related to the same SCC. Rather, the same brain area may be active in tasks measuring quite distinct SCCs – perception and attention, for instance. Note that this issue is not limited to functional brain imaging studies. Event-related potentials in electroencephalography can likewise be observed in unrelated tasks, and even single neurons show activity in varied circumstances during the performance of different tasks.
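Since the Stroop paradigm does some work in this argument, it may help to see its two conditions concretely. The following Python sketch is my own illustration; the colour set, trial count, and function names are assumptions, not Stroop's (1935) materials or Francken and Slors's:

```python
# Generating Stroop trials: each trial pairs a colour word with an ink
# colour, yielding congruent and incongruent conditions whose reaction
# times and error rates are compared.
import itertools
import random

COLOURS = ["red", "green", "blue"]

def stroop_trials(n: int, seed: int = 0):
    """Generate n trials as (word, ink_colour, condition) triples."""
    rng = random.Random(seed)
    pairs = list(itertools.product(COLOURS, COLOURS))
    trials = []
    for _ in range(n):
        word, ink = rng.choice(pairs)
        condition = "congruent" if word == ink else "incongruent"
        trials.append((word, ink, condition))
    return trials

for word, ink, condition in stroop_trials(5):
    # The participant must name the ink colour; the word is the distractor.
    print(f"{word:>6} shown in {ink:<6} -> {condition}")
```

The point in the main text is precisely that nothing in this operationalisation settles whether the task measures response inhibition or colour naming: the same trial structure serves both interpretations.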
The translation problem casts doubt on the idea that brain activity can be straightforwardly correlated with the pre-theoretical explananda. This is a problem for Egan's view of the utility of cognitive contents, since cognitive contents are supposed to perform exactly this function. One might think that this is a particular virtue of Egan's account: Egan's cognitive contents help us bypass the translation problem and relate brain activity to pre-theoretical explananda on the basis of pragmatic considerations. Unfortunately, there are two problems with this approach.

Firstly, it underestimates the complexity of the translation problem by supposing that there is a direct connection between brain activity and common-sense cognitive concepts, even if this connection is established on the basis of pragmatic factors. The prior translation steps from CCCs to SCCs and onwards to tasks are obscured by a supposedly direct relation between brain activity and the pre-theoretical explanandum. Furthermore, the first translation step, from CCCs to SCCs, presents a special difficulty for Egan. The function-theoretic or mechanistic theory of a cognitive capacity begins to take form only after the first translation step has been taken. Take Marr's (1982) theory of vision as an example. Even though Marr does seek to understand the CCC "vision", i.e., seeing what is where, his theorising is aimed at the SCC "vision", i.e., extracting information from the environment, and more specifically at early vision, a subtype of vision which does not involve conceptualising the visual objects, thereby omitting a large part of the CCC gloss "seeing what is where". The theory, therefore, is not a theory of the CCC "vision", but of the SCC "early vision". How these two are related is a legitimate question, but as the general form of the translation problem suggests, they will not map neatly onto one another. Egan's approach instead proceeds as if Marr's theory were a theory of CCC vision, obscuring the crucial translation step.

Obscuring this step, however, is problematic, because it presupposes that there is a privileged mapping of CCCs onto SCCs. One issue this leads to is that points of conflict between rival theories are misidentified. For example, so-called ecological theories of visual perception (Gibson, 1966, 1979) are ostensibly also theories of CCC vision. Ecological theories, however, study a different SCC vision, because they do not consider vision to be a matter of extracting information, but rather of detecting invariants in the optical flow. Thus, the conflict between Marrian and ecological theories cannot be resolved solely by considering the claims they make with regard to mathematical or cognitive contents. The situation is relatively simple when we compare Marrian and ecological theories, since ecological theories are so different from Marrian theories that the difference in the first translation step is difficult to miss. However, when two more outwardly similar theories supposedly aiming to explain the same CCC are analysed according to Egan's view, we can only learn that they assign different importance to different types of brain activity. The theories might, for all we know, concern different SCCs and/or different tasks. The cognitive gloss encourages overlooking these options and is in that sense pernicious for its supposed goal of connecting theories to pre-theoretical explananda.

Another problem for the idea that cognitive contents relate theories to pre-theoretical explananda is presented by the above-mentioned issue that brain activity cannot be uniquely correlated with tasks fitting neatly into a single CCC or even SCC. This suggests that CCCs such as attention, vision, language
processing, etc., do not map onto natural kinds, or else there would be a relatively well-identifiable network of brain regions responsible for performing the tasks associated with these categories. To the extent that these categories do not map onto anything the brain is sensitive to, some conceptual revision of the SCCs and even CCCs might be warranted (Anderson, 2010, 2015; Francken & Slors, 2018). This calls into question whether pre-theoretical explananda even are a legitimate concern for cognitive neuroscience. Certainly, we cannot build up a conceptual scheme without taking our pre-theoretical categories as the basis, but researchers should be cautious in presupposing that the pre-theoretical explananda handed over by folk psychology correspond to actual explananda. In light of this, defending cognitive contents on the basis that they can secure a bridge between theories and pre-theoretical explananda seems wrong-headed.

With respect to Egan's third point, on the heuristic uses of the intentional gloss, I note that there is no reason to argue for or against any particular heuristic. Ultimately, researchers should be free to use any heuristics they wish, if the heuristics help them formulate adequate theories or construct adequate models. This means that using cognitive contents as a heuristic for formulating function-theoretic or mechanistic accounts of cognitive phenomena is permissible. What is not permissible, however, is confusing the heuristic with an explanation. Representationalists argue that content-carrying helps explain cognitive phenomena. The fact that cognitive contents may be used as a heuristic in constructing mechanistic explanations does not vindicate this claim. It is to be noted that even Egan steers dangerously close to this error, since she often calls the intentional gloss an "explanatory" gloss. As we have seen, Egan's notion of explanatoriness is deeply connected with pragmatic aspects of research. However, even pragmatic accounts of explanation distinguish (implicitly or explicitly) between explanatory contexts and contexts of discovery. Heuristics play a role in the discovery of explanatory information, but they do not supplement explanatory information.

The fourth use of the intentional gloss, according to Egan – the tracking function – is also put into question by the evidence for neural reuse. Since the same parts of the brain are active in different networks subserving very different types of cognitive processes, it is unlikely that the set of operations defined over cognitive contents maps onto the operations actually performed by the brain. Because the same neurons routinely activate in service of very different cognitive processes with different environmental inputs and outputs, the contributions they make to any particular process cannot be easily described in terms of cognitive contents. Cognitive contents, after all, are domain-specific and environment-dependent. Anderson (2010) proposes that functional units of the brain do contribute specific low-level "workings" to every cognitive process in which they are involved, and that through recombination of the various workings the brain accomplishes a variety of tasks with the same components. However, these workings can, of necessity, only be described at an abstract level, in a language of something like Egan's mathematical contents.

With respect to the argument that the intentional gloss and cognitive contents fulfil an important didactic function, there are reasons to be sceptical.
The reasoning is likewise based on the fact that brain activity does not neatly map onto pre-theoretical explananda. This suggests further that the mechanisms employed by the brain to solve a particular task need not map onto strategies which a human agent would consider appropriate for accomplishing the task, were they to deliberate about it. In light of the translation problem and the evidence for neural reuse, the cognitive gloss can even be didactically pernicious, because it perpetuates folk-psychological categories with uncertain grounding in the workings of the brain.

Here one might object that if Egan's deflationary view of representationalism is adopted, no harm will be done, since the student of cognitive neuroscience will realise that the gloss is just a gloss and will therefore not attach undue weight to it. Nevertheless, in actual didactic practice, the difference between the gloss and the theory is obscured, and cognitive contents are assigned to vehicles as if they were possessed by the vehicles essentially. This essentialism must then either be corrected at a later point in scientific training or, more worryingly, remain uncorrected and perpetuate a strictly inadequate understanding of the workings of the brain (strictly inadequate even according to Egan's own view). Ultimately, however, the question of whether introducing the workings of cognitive mechanisms via the gloss is useful in didactic contexts is an empirical one. My hypothesis is that the gloss imparts unwarranted and pernicious essentialism about cognitive contents. The contrary hypothesis is that the gloss is necessary to impart any insight into the workings of the mechanisms. Which hypothesis is correct is best addressed by education scientists rather than philosophers.

There is one further complication. In Sect. 9.3 I argued that mathematical contents may turn out to be explanatorily relevant for certain contrastive explananda, because they can bring differences between the components in the mechanism for the phenomenon and the mechanisms for token members of the contrast class under a single description. Similar reasoning can be applied to cognitive contents. Just as the description "computing the Laplacian of a Gaussian" can be used to describe a component activity in the mechanism in order to contrast it with a contrast class in which the component computes a different function, so can the description "detecting edges" be used to describe the same activity for the purposes of creating a mechanistic explanatory text. That is, if all the members of the contrast class differ from the phenomenon in that they lack an edge detector, and the phenomenon has one, then this will be part of the mechanistic explanatory text.

However, we should resist this conclusion. The reasoning works for mathematical contents because the description of a component activity as computing a function is independent of the environment, or the context, in which the mechanism operates. Recall that a mechanistic explanatory text is created from mechanism descriptions, and mechanism descriptions only describe constitutively relevant factors. Because of this, facts about the environment in which the mechanism occurs are absent from the mechanism description. After all, they are not local to the phenomenon. Mathematical contents do not depend on the environment in any way. The fact that, e.g., a neuron's firing rate is consistent with the output of some mathematical function holds regardless of what environment the mechanism is in. Equally, the fact that
some other firing rate would be inconsistent with the output of said function is also independent of the environment in which the hypothetical contrast mechanism occurs. This is why we may describe the neuron as computing a function based solely on what is contained in the mechanism description.

The situation is different for cognitive contents. Here, the fact that the neuron's firing may be described as "detecting edges" cannot be derived solely from the mechanism description. Likewise, the fact that the neurons in the contrast mechanisms are, e.g., "detecting shadows" cannot be determined solely on the basis of the neurons' differing firing patterns as recorded in the mechanism descriptions. Facts about the environment would have to be brought in to justify those descriptions. Because facts which cannot be derived solely from the mechanism descriptions are needed to justify referring to the component as detecting edges, this strategy requires bringing constitutively irrelevant factors to bear on the mechanistic explanatory text. Because of this, cognitive contents, unlike mathematical contents, cannot be part of mechanistic explanatory texts.

9.6 Conclusion

In conclusion, the Pragmatic Necessity defence fails. While mathematical contents may play a role in explaining some contrastive explananda in cognitive science, cognitive contents do not. Importantly, even the supposed pragmatic benefits of cognitive contents fail to obtain in any explanatory context. Connecting the explanans to pre-theoretical explananda with cognitive contents is pernicious because it obscures several layers of complexity in pairing up common-sense cognitive concepts with neural mechanisms. The use of cognitive contents for didactic and tracking purposes encounters a similar issue. Heuristic use of cognitive contents may be beneficial, but ultimately does not bear on explanation. Even though mathematical contents may be needed in explaining some contrastive phenomena, they do not play a privileged role in identifying components, or in explaining cognition. They serve merely as a way to collectively describe a difference between a component activity and a set of alternative activities in the mechanisms for member phenomena of the contrast class.

References

Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33(4), 245–266. https://doi.org/10.1017/S0140525X10000853
Anderson, M. L. (2015). After phrenology: Neural reuse and the interactive brain. MIT Press.
Chomsky, N. (1995). Language and nature. Mind, 104(413), 1–61. https://doi.org/10.1093/mind/104.413.1
Dewhurst, J. (2018). Individuation without representation. British Journal for the Philosophy of Science, 69(1), 103–116. https://doi.org/10.1093/bjps/axw018


Egan, F. (1995). Computation and content. Philosophical Review, 104(2), 181–203. https://doi.org/10.2307/2185977
Egan, F. (2010). Computational models: A modest role for content. Studies in History and Philosophy of Science, 41(3), 253–259. https://doi.org/10.1016/j.shpsa.2010.07.009
Egan, F. (2014). How to think about mental content. Philosophical Studies, 170(1), 115–135. https://doi.org/10.1007/s11098-013-0172-0
Francken, J. C., & Slors, M. (2014). From commonsense to science and back: The use of cognitive concepts in neuroscience. Consciousness and Cognition, 29, 248–258. https://doi.org/10.1016/j.concog.2014.08.019
Francken, J. C., & Slors, M. (2018). Neuroscience and everyday life: Facing the translation problem. Brain and Cognition, 120, 67–74. https://doi.org/10.1016/j.bandc.2017.09.004
Gibson, J. J. (1966). The senses considered as perceptual systems. Houghton Mifflin.
Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin.
Kohár, M., & Krickel, B. (2021). Contrast and compare: How to choose the relevant details for a mechanistic explanation. In F. Calzavarini & M. Viola (Eds.), Neural mechanisms: New challenges in the philosophy of neuroscience (pp. 395–424). Springer.
Marr, D. (1982). Vision. MIT Press.
Roediger, H., Dudai, Y., & Fitzpatrick, S. (Eds.). (2007). Science of memory: Concepts. Oxford University Press.
Shagrir, O. (2001). Content, computation and externalism. Mind, 110(438), 369–400. https://doi.org/10.1093/mind/110.438.369
Simon, H. A., & Newell, A. (1958). Heuristic problem solving: The next advance in operations research. Operations Research, 6(1), 1–10. https://doi.org/10.1287/opre.6.1.1
Sprevak, M. (2010). Computation, individuation, and the received view on representations. Studies in History and Philosophy of Science, 41(3), 260–270. https://doi.org/10.1016/j.shpsa.2010.07.008
Stich, S. (1983). From folk psychology to cognitive science: The case against belief. MIT Press.
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6), 643–662. https://doi.org/10.1037/h0054651

Chapter 10

Conclusions and Future Directions

Abstract In this chapter, I highlight the consequences of my central thesis – that representational contents cannot be explanatorily relevant in mechanistic explanations – for both mainstream cognitive neuroscience and for pre-existing non-representational alternatives. I argue that for mainstream cognitive neuroscience, this book has shown representations to be of little importance, and that mechanistic explanatory texts should not mention representational contents. I further argue that the non-representational mechanistic approach enables a better understanding of some experimental practices in cognitive neuroscience. The consequences for pre-existing non-representational approaches consist mainly in enabling hybrid dynamical/mechanistic and enactive/mechanistic explanations. I discuss in more detail how dynamical descriptions of mechanism components may be used in mechanistic explanatory texts. Finally, I discuss two avenues for future research – the possibility of using mechanistic compositionality to resolve the scaling-up/representation-hunger problem and the possibility of generalising the account of mechanistic non-representational explanation from neuroscience to psychology.

Keywords Explanatory relevance · GO/NOGO paradigm · Dynamicism · Scaling-up · Compositionality

In this final chapter, I will present some implications for mainstream philosophy of cognitive science (Sect. 10.1) and for the leading non-representational alternatives (Sect. 10.2). Finally, I will present some open questions and avenues for further research (Sect. 10.3).

10.1 Consequences for Mainstream Philosophy of Cognitive Science

Let me briefly summarise the consequences which follow from adopting the non-representational mechanistic position as argued for in this book.


Representations are of little importance (if any) We should note that the importance of representational contents in explaining cognitive phenomena is much narrower than usually supposed. Firstly, representational contents do not have a privileged position among all the potentially explanatory factors in cognitive neuroscience. This runs contrary to many proponents of representationalism (e.g., Fodor, Shea, Dretske), who claim that cognitive science is characteristically concerned with representational explanation. As representational contents will appear, if at all, only in a limited subset of mechanistic explanatory texts concerned with cognitive phenomena, the strong representationalist claim can be rejected. Secondly, distal representational contents should not appear in mechanism descriptions. This is because mechanism descriptions are supposed to contain only mechanism components, and, as we saw in Chaps. 4, 5, 6, and 7, distal representational contents are not components.

The two caveats to this general non-representationalist conclusion do not detract from its importance. As we saw in Chaps. 6 and 7, the representational contents of some neural emulators may be considered mechanism components. Thus, they should be included in mechanism descriptions where applicable and in METs as appropriate. However, since this only applies to those emulators which represent parts of the organism/cognitive system, the range of applicable cases will be limited. In any case, Grush's (2004) general emulator theory of representation is thereby not vindicated: on Grush's view, neural emulators of distal states of affairs also serve as neural representations, but the contents of any neural emulators which represent distal states of affairs are still precluded from appearing in mechanism descriptions.

The other caveat is that Egan-style mathematical contents may be required in some METs as a way of summarily describing the differences between the mechanism for the phenomenon and all the mechanisms in the contrast class. This, however, is merely an extension of the general scientific practice of using mathematics to describe properties and activities involved in various mechanisms. The only substantial difference between Egan's view and this general practice is her contention that mathematical contents canonically determine the type of a component, and this feature of her view is specifically rejected in Chap. 9.

Mechanism descriptions should not mention content Secondly, we should note a couple of problems which result from the prevalence of content ascriptions even in nominally mechanistic models of cognitive phenomena. These problems stem from the abovementioned fact that representational contents are not components and therefore should not be included in mechanism descriptions. Nevertheless, representational models of cognitive phenomena, or models combining mechanistic and representational elements, are regularly presented as if they were (partial) mechanism descriptions. Such models are not just incomplete – mechanism descriptions are generally incomplete whether or not they include representational contents. There are two problems resulting specifically from including


representational contents in purported mechanism descriptions. The first problem occurs when mechanism descriptions include representational contents without specifying the identity of the mechanism components carrying these contents. In this case, the inclusion of representational contents obscures a gap in the mechanism description: the relevant activity or state of the vehicle is glossed merely as "representing X", and so the gap persists. Furthermore, if mechanistic explanatory texts are created on the basis of such descriptions, they will invariably also refer to representational contents, because the relevant components are not known. These mechanistic explanatory texts will be incomplete because the mechanism description on which they are based is itself defective. The gap persists because researchers committed to representationalism are confident that representational content is an explanatorily relevant property. Adopting my view on the matter would then require re-examining available representational explanations and searching for the constitutively relevant properties of the neural entities involved in cognitive phenomena.

A similar problem occurs in cases where mechanistic models do specify the component neural entities and their relevant properties but gratuitously add further details concerning the representational contents carried by the vehicles. Even though the component states and activities are included, so are non-component representational contents. A mechanism description like this is less accurate than one lacking the representational gloss. This follows from general norms governing descriptions of any kind: a description of a dish naming every ingredient plus some ingredients not contained in the dish is inferior to one which refers to just the actual ingredients. I conjecture that mechanism descriptions gratuitously mentioning representational contents persist for two reasons. One option is that the researchers constructing these mechanism descriptions are convinced that representational contents are constitutively relevant. If this is the case, then my analysis in Chaps. 4, 5, 6, and 7 shows why this view is mistaken. Alternatively, these mixed descriptions might result from conflating explanatory and pragmatic factors. In that case, my arguments from Chap. 9 serve as a caution against using representational content for pragmatic purposes. But even if those arguments fall flat, the lack of differentiation between the pragmatic aspect of the description and its explanatory aspect remains problematic.

Non-representationalism allows a better understanding of some experimental practices Aside from highlighting the problems with representational explanation, the account I have presented here also provides additional resources for understanding certain research practices common in cognitive neuroscience. In particular, a mechanistic non-representational account of explanation can easily account for what I call (a) directly constitution-probing experiments and (b) task-based contrasts in experimental design. These practices fit poorly with the representational model of explanation because their direct results cannot easily be construed as discovering representations, yet they nevertheless seem to provide explanatorily relevant information – namely, information about the components of mechanisms involved in the phenomena elicited by the experiments.


By directly constitution-probing experiments I mean the paradigm cases of what Craver would call "stimulation" and "activation experiments" (Craver, 2007, pp. 149–152). Many functional imaging experiments are of this sort. Researchers here are attempting to figure out which parts of the brain are active while a cognitive phenomenon occurs. These experiments are not sufficiently discerning to permit determining representational contents, and they should therefore appear puzzling from a representationalist perspective.

For example, consider the experiments utilising binocular rivalry to study the mechanism underlying perceptual awareness (e.g., Tong et al., 2006). In these experiments, a different visual stimulus is projected into each of the subject's eyes. The subjects report only ever experiencing one of the stimuli at a time, and their awareness involuntarily switches from one stimulus to the other and back. Combining this paradigm with functional imaging, one can distinguish parts of the brain active while the subject is aware of stimulus A and of stimulus B. However, one cannot conclude from this experiment that the parts of the brain which are differentially more active while the subject is aware of A represent A, or mutatis mutandis B.[1] This is because the range of stimuli presented is too narrow, which prohibits the proper application of the mutual information criteria I surveyed in Chap. 5.[2] Nevertheless, the information about the parts of the brain selectively active during awareness of A or B, as well as information about which parts of the brain are active during awareness of both A and B, seems relevant to explaining how perceptual awareness works, why the subject is aware of anything at all, and why they are aware of a particular stimulus.

Information about particular representational vehicles, let alone contents, cannot be obtained in this case. This applies to most fMRI studies, because the number of conditions and trials necessary to present a sufficient variety of stimuli far exceeds any practicable measures. Even though individual researchers might claim to be discovering neural representations this way, the fact that the stimulus set is extremely limited suggests that what is meant is a neural correlate of the stimulus recognisable by an observer familiar with the constraints of the experiment. As we saw in Chap. 5, however, determining representational contents from the observer perspective without a representative set of stimuli is unreliable (a toy illustration of this point follows at the end of this section). Hence, it seems more plausible to view these experiments as straightforward attempts to localise the components of the mechanism for the phenomenon elicited by the experiment, such as visual perception.

[1] Although neurons are described as representing, for example, "separate regions of visual space" (Tong et al., 2006, p. 503), this is best understood as concerning a neural correlate of conscious experience and seems to concern the presumed target of the representation, as opposed to its content.
[2] I presume indicator semantics is the most likely to work in this case, since teleofunction or structural similarity cannot be established by this experiment at all.

By "task-based contrasts" I mean experiments in which two or more conditions are contrasted based on what the experimental subject does. A typical example of such an experiment is the GO/NOGO paradigm. In this paradigm, the subject performs


some action on a typical trial. For instance, the subject might push a button or raise their arm. However, on certain trials the subject is required to refrain from the cued action. Which trials are NOGO trials is, naturally, signalled to the subject by a cue. However, the identity of the cue is of no consequence to the way in which the results of these experiments are interpreted. The differences in neural activity between GO and NOGO trials are typically interpreted as due to the operation of inhibitory mechanisms triggered by the NOGO cue (e.g., Simmonds et al., 2008; Smith et al., 2013), not as neural representation of the NOGO cue. The conclusions gathered from these experiments therefore gain little from the supposition that the brain is in the business of representing and manipulating representations. One might insist that the fact that the subject is able to react differentially to the NOGO cue shows that there must be some neural representation of the cue, but this does not illuminate the subject matter of the experiment, namely, inhibitory mechanisms. In fact, certain GO/NOGO studies use different NOGO cues between trials based on context (e.g., Fassbender et al., 2004; Wager et al., 2005). Hence, these studies specifically control for brain activation attributable to cue identification and look only for activation due to inhibitory mechanisms.

This generalises to all cases where the contrast between the experimental conditions is primarily given by what the experimental subject does. Part of the difference between the conditions is due to the fact that the cue signifying what the subject is supposed to do on a particular trial must be identified by the subject. However, the point of the experiments is to identify the differences between the mechanisms involved in the different tasks. Neural activation due to the specific cue is a confound in these cases.
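To see concretely why a two-stimulus design underdetermines indicator content, here is a toy calculation of my own (the set-up and the function name mutual_information are illustrative assumptions, not drawn from the book or from the studies cited above). A response that fires exactly when stimulus A is shown carries maximal mutual information about the stimulus, yet the same data fit "A", "not-B", or any property that happens to separate the two stimuli:

```python
# Toy sketch: mutual information between a two-valued stimulus and a
# binary response. High information, but the content stays underdetermined.
import numpy as np
from collections import Counter

def mutual_information(pairs):
    # Plug-in estimate of I(S; R) in bits from (stimulus, response) samples.
    n = len(pairs)
    joint = Counter(pairs)
    p_s = Counter(s for s, _ in pairs)
    p_r = Counter(r for _, r in pairs)
    return sum((c / n) * np.log2((c / n) / ((p_s[s] / n) * (p_r[r] / n)))
               for (s, r), c in joint.items())

rng = np.random.default_rng(0)
stimuli = rng.choice(["A", "B"], size=1000)
pairs = [(str(s), int(s == "A")) for s in stimuli]  # response fires iff A is shown
print(round(float(mutual_information(pairs)), 3))   # ~1.0 bit
```

Only a representative range of stimuli could pull these candidate contents apart, and that is precisely what rivalry-style designs cannot provide.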

10.2 Consequences for Previous Non-representational Theories

Apart from mainstream philosophy of cognitive neuroscience, the work I undertook here also impacts earlier non-representational theories, such as dynamicism and enactivism.

Dynamicism (see e.g., Chemero, 2009) models cognitive systems (or subsystems) as dynamical systems. According to dynamicists, studying the behaviour of cognitive systems in real time is crucial for explaining cognitive phenomena; abstracting time away, or idealising it as a discrete quantity, leads to a false picture of the functioning of the cognitive system (van Gelder & Port, 1995). Dynamical models identify the rules governing the real-time progression of states through which the behaving system passes while exhibiting a phenomenon. These rules take the form of a system of differential equations.

The variables in these equations can be divided into two classes. State variables correspond to properties of the target system, and the state of the system is fully determined by the values of its state variables. Typically, state variables occur in the equations under a derivative, so that


the equations do not determine the value of the variables directly, but rather the rate of change in the variables over time (Norton, 1995, pp. 45–47). In addition to state variables, the equations describing the dynamical system may contain one or more control parameters. The value of the control parameters influences the evolution of the state variables but is itself unaffected by the state of the system (van Gelder, 1995, p. 356).

The set of all states which the system can occupy is called the state space. Each state of the system is a point in the state space, and the state space has one dimension for each state variable. As the system evolves through time, it follows a trajectory in the state space. The equations in the dynamical model specify all trajectories available in the state space. By examining the structure of the state space, the dynamical modeller is able to predict how the system will behave given certain values of the control parameters and initial conditions (Weiskopf, 2004, p. 92).

A large number of variables would be needed to describe a cognitive system in a dynamical model (think of the number of neurons involved in any cognitive phenomenon). To avoid this complexity, dynamical models instead search for order parameters. Order parameters are variables which correspond to relations between parts of the system, or to summary properties of the system. A complex system with a large number of variables can be modelled as an equivalent lower-dimensional system if order parameters are used as state variables. Importantly, the model employing order parameters still covers all possible states of the target system despite the reduced dimensionality of the state space (Weiskopf, 2004, p. 92).

The equations making up the model are often coupled. Coupling in this context means that in a model with two state variables a and b, the equation governing the evolution of a contains b in at least one of its terms and vice versa (e.g., Saltzman, 1995, p. 152). The prevalence of coupling and the frequent use of non-linear differential equations in dynamical models mean that exact solutions to the equations can only rarely be determined. Dynamicists therefore utilise the tools of dynamical systems theory to study the structure of the state space of the system and draw conclusions about the system's behaviour.

Dynamical systems theory (DST) is concerned with studying dynamical systems qualitatively, rather than attempting to find solutions to the equations describing the system. DST identifies classes of dynamical systems with similar properties and patterns of behaviour (Norton, 1995, p. 46), and it has identified some features commonly seen in the state spaces of various dynamical systems. Frequently, points or sets of points in the state space act as attractors. Attractors are points or sets of points in the state space with two defining properties. Firstly, they are stable, so that when the system reaches a state belonging to the attractor, it will remain in the attractor unless perturbed by an external force. Secondly, all trajectories leading from points near the attractor will bring the system to the attractor (Norton, 1995, p. 56). Changing the values of the control parameters can radically alter the landscape of attractors in the state space.

A well-known example of dynamical modelling is the so-called HKB model. Haken et al. (1985) asked experimental subjects to move both their index fingers left and right on the surface of a table with a frequency determined by a metronome. They observed that at low frequencies the subjects quickly settled into one of two patterns of movement. Either they moved both fingers synchronously in the same direction, so that the distance between the fingertips remained constant (out-of-phase), or they moved each finger in the opposite direction, so that the distance between the fingertips rose and fell with every other beat of the metronome (in-phase). As the frequency of movement was increased, all subjects starting out in the out-of-phase pattern spontaneously shifted to the in-phase pattern following a brief period of instability. The subjects who moved their fingers in-phase at slow speeds remained in this pattern after the frequency was increased. When the frequency was decreased again, none of the subjects changed back to out-of-phase movement. Haken, Kelso and Bunz modelled this behaviour with the following equations:
patterns of movement. Either they moved both fingers synchronously in the same direction, so that the distance between the fingertips remained constant (out-ofphase), or they moved each finger in the opposite direction, so that the distance between the fingertips rose and fell with every other beat of the metronome (in-phase). As the frequency of movement was increased, all subjects starting out in the out-of-phase pattern spontaneously shifted to the in-phase pattern following a brief period of instability. The subjects who moved their fingers in-phase at slow speeds remained in this pattern after the frequency was increased. When the frequency was decreased again, none of the subjects changed back to out-of-phase movement. Haken, Kelso and Bunz modelled this behaviour with the following equations: φ_ =

∂V ∂φ

V ðφÞ = - acosφ - bcos2φ

ð10:1Þ ð10:2Þ

In this equation, φ is the order parameter. φ corresponds to the difference between the angular displacement of each index finger from its position when pointing forward. φ can assume unique values between 0 (in-phase) and π (out-of-phase). The two parameters b and a are related in such a way that the ratio b/a depends on the movement frequency and serves as the control parameter of the system. In later developments of the model, parameters b and a were replaced by a single parameter k = b/a, leading to the following revised evolution equation (Kelso, 2008): φ_ = - sinφ - 2ksin2φ

ð10:3Þ

A dynamical system obeying this equation has two attractors if k is above 1/4. When k decreases below 1/4, one of the attractors disappears, and the entire state space lies in the attraction basin of the attractor corresponding to in-phase movement. When the frequency of movement is decreased again (leading to an increase in k), the second attractor corresponding to out-of-phase movement reappears, but the system will have by then settled into the in-phase attractor. Therefore, unless the system is perturbed externally, it will not spontaneously shift back (ibid.). Two arguments for the incompatibility between mechanistic and dynamicist explanation can be found in the literature. One of them has to do with the fact that the two programmes employ a different standard for judging explanations. Explanations using dynamical models can be arguably thought of as covering-law explanations (Bechtel, 1998; but see Zednik, 2011). The phenomenon is explained because the system exhibiting it can be shown to behave in accordance with the model, and the phenomenon can be derived from the model. Thus, dynamical explanations involve subsuming particular phenomena under general principles. Furthermore, dynamical models inherently involve abstracting from components in favour of higher-level relations modelled by order parameters (Kaplan & Craver, 2011). Note that it is not just that mechanistic explanations and dynamical

186

10

Conclusions and Future Directions

explanations employ different standards. The problem is that the standards are contradictory, so that to a mechanist, dynamicist explanations just look like bad explanations, and vice versa. Thus, mechanists (particularly Kaplan and Craver) have insisted that dynamical models are only explanatory to the extent that the variables in the equations correspond to mechanism components – so called modelto-mechanism mapping, or 3M, constraint (ibid.). The other argument for the incompatibility between dynamicism and mechanism stems from the assumption that mechanistic explanations involve representations. Dynamicism has a long history of rejecting representationalism, because it rejects the underlying computer metaphor as inadequate for capturing the contextsensitivity of cognitive systems, and as obscuring the role of time in intelligent behaviour (van Gelder, 1995). Because mechanistic explanation in cognitive science has usually run alongside representational explanation, it too became a target for the dynamicist. On the other hand, representationalists supposedly advocating for mechanistic explanation would reject dynamicism based on the fact that dynamicism was non-representational. We should, however, note that mechanistic explanation, unlike traditional computationalism, does consider the time in which various component EIOs occur to be an explanatory relevant factor. The way in which the phenomenon evolves over time is also one of the explananda. In this book, I have redrawn the lines, so that mechanism can now be considered separately from representationalism, and thus I have shown this latter argument to be based on confusion on both sides of the debate. Nevertheless, a question remains as to whether dynamicism and mechanism are opposed to one another, despite their shared opposition to representationalism. My work, particularly in Chaps. 2 and 3, and in Chap. 9, suggests a way in which dynamicism and mechanism can work together. The idea is that dynamical models can be used to provide descriptions of mechanism components for use in mechanistic explanatory texts. The argument for this conclusion is analogous to the one I presented in Chap. 9 with regards to Egan’s mathematical contents. Like function-theoretic explanations, dynamical models can be used to characterise the behaviour of mechanism components. Therefore, the argument for explanatory relevance of dynamical models in mechanistic explanatory texts goes forward. Recall that an MET consists of the set of differences between the mechanism for the phenomenon and the mechanism for the closest member of the contrast class, which are shared by all members of the contrast class. A dynamical model of a component can be explanatorily relevant if: (a) one of the differences between the mechanism for the phenomenon and the mechanism for the closest member of the contrast class is the state of a particular component, and (b) all members of the contrast class also differ in the state of this component, and (c) these differences can be summarised as the component in the mechanisms for the class of phenomena satisfying a different dynamical model. However, unlike mathematical contents, dynamical models of components can also be parts of mechanism descriptions. This requires more argument. At first blush, it may seem that the argument against including mathematical contents in mechanism descriptions should also analogously apply to dynamical models. A component

10.2

Consequences for Previous Non-representational Theories

187

will, in general, only occupy a limited number of states during the occurrence of a given phenomenon. Whether or not the component satisfies the equations of the dynamical model depends on the sum of all states which the component can occupy, not just on the subset of states exhibited while the phenomenon takes place. Thus, satisfying a dynamical model is not a local property, and cannot be part of the mechanism description. However, there is a difference between mathematical contents and dynamical models. Recall that dynamical systems theory enables researchers to identify stable trajectories through state-space, so-called attractors. An entity‘s being in a point attractor state or going through a trajectory of states which form an attractor in the state space can be a mechanism component provided all the states on the attractor are local and mutual dependence obtains as usual. This is the case because being on an attractor just means going through a set of states repeatedly. It does not matter what the system would do at any other times, or in any counterfactual situations. Functiontheoretic descriptions cannot be part of mechanism descriptions because mathematical functions are always underdetermined by a finite set of input-output pairings. Dynamical models share this problem because their equations also describe state variables as functions of each other. However, an attractor is fully defined by the set of states which it contains and the trajectory connecting them. The same attractor can be shared by an indefinite number of different dynamical models. Hence, the fact that temporally local states of affairs underdetermine the particular dynamical model appropriate for describing the component does not mean that the attractor in which the component is found is likewise underdetermined. To sum up, dynamical descriptions of components can be explanatorily relevant, and an entity‘s following an attractor can be a mechanism component. Note that this specifically and deliberately concerns dynamical models of components rather than dynamical models of phenomena. A dynamical model, or a component description formed on the basis of dynamical systems theory can be embedded within a mechanistic explanation. However, this argument shows nothing about whether dynamical modelling of a phenomenon can supplant or complement mechanistic explanation. This issue is more general and ultimately turns on whether coveringlaw explanations are appropriate for cognitive neuroscience and whether pluralism about theories of explanations prevails. Note that this is a different and independent role for dynamical models from Kaplan and Craver’s (2011) model-to-mechanism mapping constraint. While Kaplan and Craver consider only cases where the dynamical model concerns the phenomenon as a whole, I focus on dynamical models of components. Therefore, in these cases, Kaplan and Craver’s constraint is broken, because the model as a whole describes a component, and there’s no correspondence between its state-variables and the mechanism for the phenomenon. The account I presented here may also benefit the enactivist theory of cognition (Thompson, 2007; Varela et al., 1991). According to the enactivists, cognition is a basic process in which all living organisms are engaged. This wide conception of cognition is based on a specific understanding of life as involving autopoiesis (Maturana & Varela, 1980). Autopoiesis is a process of self-creation and self-

188

10

Conclusions and Future Directions

maintenance in which some systems are engaged. Autopoietic systems create boundaries between themselves and the environment outside and regulate their interactions with the environment. For instance, cells contain organelles which create proteins and assemble the cell membrane. The cell membrane delineates the cell from its environment and the cell regulates which substances pass through the membrane in either direction. This continued maintenance of the membrane and exchange of nutritious and waste products through the membrane constitute the existence and identity of the cell as living. When either ceases, the cell dies. The enactivists theorise that the cell’s autopoiesis automatically bestows normative valence onto parts of the environment. Nutritious substances are “good”, noxious ones “bad” for the cell. The cell’s engagement with the environment can therefore be reasonably described as cognition, though of a rudimentary kind. The cell has preferences and acts in certain ways to realise them. The enactivist program has long faced issues with accounting for cognition in more complex organisms. Clearly, even if we do subscribe to the basic idea of enactivism and contemplate single-celled cognition, there is a long way from the kinds of phenomena single cells are engaged in to the kinds of paradigm cases of cognition exhibited by humans and some other multi-cellular animals. Here the non-representational mechanistic model of explanation can help by looking for the mechanisms implementing the organism’s differential responses to environmental conditions, as well as for the mechanisms underlying autopoiesis on a cellular and organismic level. However, this suggestion requires much more work to fully flesh out, particularly since the search for autopoietic mechanism is in one sense the prerogative of cellular and molecular biology. This raises the question whether there is anything mechanistic cognitive science can do for enactivism that mechanistic biology does not already do.

10.3

Future Directions

In this last section, I will gesture at two open questions left unanswered by this work and avenues for future research opened up by the material contained herein. These are both concerned with expanding the account to cover psychological explanation in addition to neuroscientific explanation on which this work focused. Firstly, there is the question of scaling-up. The scaling-up problem (also known as the representation-hunger problem) is a staple of the representationalist/ non-representationalist debate in philosophy of cognitive science (see Clark & Toribio, 1994; Downey, 2020). Typically, it is worded as a challenge from representationalists to non-representationalists. The representationalist concedes that non-representational accounts of simple cognitive phenomena might be available or even preferable to overintellectualized representationalist accounts. However, the representationalist posits that cognition involves more than these simple phenomena.

10.3

Future Directions

189

Cognitive systems regularly adapt their behaviour to states of entities with which they are not currently sensorily coupled. Furthermore, human and animal cognition involves, in some cases, weighing up alternatives, considering counterfactuals, denying propositions etc. The representationalist accounts for these “representationhungry” phenomena by positing representations by means of which the cognitive system keeps track of the states of absent entities, and which it manipulates instead of the states of non-actual states of affairs in order to decide what to do. Although paradigm cases of “representation-hungry” cognition involve consciously considering propositions, etc., the issue is orthogonal to the problem of consciousness. “Representation-hungry” phenomena may occur on the subpersonal level and need not be conscious. The challenge for the non-representationalist is to explain how a cognitive system can perform “representation-hungry” cognition without representation. Or in other words, how the non-representational theory, which may explain simple cognitive phenomena, may be scaled-up to account for the full gamut of cognitive phenomena including all the representation-hungry ones. Here an intriguing possibility opens up for the mechanistic non-representational account of cognition I advocated in this work. This account can use the same conceptual resources as the representational accounts typically use in order to scale up their own representationalist theories of cognition. As we saw in Chaps. 5, 6, and 7, representationalist accounts of content usually rely on indication or structural similarity relations in order to fix contents of representations. Recently, there has been a growing awareness even among representationalists that these resources cannot fully account for all representation-hungry phenomena (e.g., Neander, 2017, pp. 205–215). For instance, there can be no indication of non-existent objects (Eliasmith, 2000, p. 79), ditto for structural similarity. However, representationalists can overcome the problem by (among other options) positing that the vehicles carrying some basic sensory-derived contents can recombine and compose in such a way that the resulting more complex vehicles carry contents which could not be fixed by indication, structural similarity or function directly (Fodor, 1975; see papers in Werning et al., 2012). For example, the content “unicorn” could be represented by composing vehicles carrying the content “horse” and “horn” in some appropriate way. Additionally, vehicles carrying perceptual contents can be decoupled from perception and reused in higher cognition, for instance to support thought or imagination. So long as the system is sufficiently connected to the world, these decoupled vehicles will continue to carry the appropriate contents. The mechanistic non-representational strategy for scaling-up can be based in a similar fashion on mechanism composition and component reuse. According to the mechanistic view, there are mechanisms subserving the various phenomena in which the cognitive system is engaged. These phenomena often are triggered by or in other ways differentially responsive to sensory stimuli. Therefore, the mechanisms subserving these phenomena are likewise differentially responsive to various sensory stimuli or states of affairs. The idea that simpler mechanisms can be composed to create more complex ones is at the heart of mechanical engineering as a science and technological art. 
Mechanical engineers are experts in setting up systems in such


a way that complex mechanisms composed of simpler mechanisms occur in them. Similarly, biological systems are set up in such a way that mechanisms form hierarchies and causal chains. The hypothesis here is just that higher-order cognition, or representation-hungry phenomena, can be explained in terms of composing mechanisms. For example, a mechanism differentially responsive to horses might interface with a mechanism differentially sensitive to, e.g., spikes. These two mechanisms might interact with a mechanism controlling hand movements, resulting in drawing the shape of a unicorn. Naturally, the details are completely missing at this stage. However, two observations about this strategy should be made.

Firstly, it is more promising than other non-representational alternatives. For instance, in dynamical systems explanations there is no compositionality at all. Dynamical models of higher-order cognition therefore cannot be extrapolated from models of lower-level cognition; in fact, there is rarely any congruence between the parameters used in dynamical models of even related phenomena. The mechanistic alternative, on the other hand, does provide a heuristic for investigating the mechanisms responsible for higher-level cognitive phenomena given our knowledge of the mechanisms responsible for lower-level phenomena. Secondly, the mechanisms differentially responsive to some stimulus need not be representations of that stimulus. For instance, the mechanism controlling flight behaviour might be differentially responsive to horses even though it is not a representation of horses. This mechanism or its parts could be composed with other mechanisms in order to support higher-order cognition about horses even if it is not a representation thereof, and even if it continues to subserve flight behaviour at the same time. Thus, the first avenue of future research opened up by this work is mechanistic compositionality.

A second avenue of further research concerns expanding the account presented here to explanations in psychology. Representational explanations in psychology are commonplace, perhaps even more so than in neuroscience (Fodor, 1968). Even though the concept of mental representation relevant for psychology differs from that of neural representation (see Chap. 4), some of the same considerations should apply. In particular, as long as the theories of content used to fix the contents of mental representations in psychology are the same as those used to fix the contents of neural representations, mental representations cannot be explanatorily relevant in mechanistic explanations. One might, therefore, worry that accepting the current account of mechanistic explanation in neuroscience leads to an untenable version of eliminativism about mental representations in psychology.

However, there are a number of issues to be considered in connection with this line of reasoning. Firstly, one can reasonably ask whether explanations in psychology are mechanistic or not (Shapiro, 2017). One alternative account of explanation in psychology holds that psychological explanations uncover reasons for action, rather than mechanisms underlying phenomena (Davidson, 1963; Iorio, 2015; Mackay, 1999). If this


is the case, representations might be permissible in psychological explanations even if they do not carry any explanatory power in mechanistic explanations in neuroscience. But in order to flesh out this option, one would need to say more about the way in which representations enter into reason-giving explanations, and about the way in which such reason-giving explanations are derived from the experimental practices and theoretical resources of psychological science. On the face of it, much psychological experimentation does seem to be concerned with uncovering mechanisms, and models utilised in psychology often look like they are supposed to portray mechanisms (Bechtel & Wright, 2009).

Another way to avoid the eliminativist conclusion is to investigate the extent to which a realist interpretation of representation language in psychology is warranted. Here there are two separate ways to explain away the representationalist leaning of some psychological theorising. One could hold that reinterpreting the representation talk in psychological explanations along interpretivist lines is sufficient to account for the kinds of explanatory or other epistemic benefits conferred by the explanations (Dennett, 1987). In other words, treating the representation talk as a fiction or idealisation might not diminish the benefits we reap by engaging in it. Conversely, one might hold that the explanations in which representation talk occurs are not the kind of explanations which can support an inference to the best explanation. If, for instance, representational explanations are non-causal (e.g., Ginet, 2008), it might be argued that the entities or states they refer to need not exist in order to be explanatorily relevant (whereas in causal explanations, only existing states and entities may be causally, and therefore explanatorily, relevant).

Naturally, one might think that a fictionalist or interpretivist interpretation of mental representations is as bad as full-blown eliminativism. However, fictionalism and interpretivism do have at least the advantage of being able to co-opt the existing conceptual framework of the psychological sciences, which eliminativism must discard. It is therefore a worthy option to examine in its own right.

References

Bechtel, W. (1998). Representations and cognitive explanations: Assessing the dynamicist's challenge in cognitive science. Cognitive Science, 22(3), 295–318. https://doi.org/10.1016/S0364-0213(99)80042-1
Bechtel, W., & Wright, C. (2009). What is psychological explanation? In P. Calvo & J. Symons (Eds.), Routledge companion to philosophy of psychology (pp. 113–130). Routledge.
Chemero, A. (2009). Radical embodied cognitive science. MIT Press.
Clark, A., & Toribio, J. (1994). Doing without representing? Synthese, 101(3), 401–431. https://doi.org/10.1007/BF01063896
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford University Press.
Davidson, D. (1963). Actions, reasons, and causes. The Journal of Philosophy, 60(23), 685–700. https://doi.org/10.2307/2023177
Dennett, D. (1987). The intentional stance. MIT Press.


Downey, A. (2020). It just doesn't feel right: OCD and the 'scaling up' problem. Phenomenology and the Cognitive Sciences, 19(4), 705–727. https://doi.org/10.1007/s11097-019-09644-3
Eliasmith, C. D. (2000). How neurons mean: A neurocomputational theory of representational content. Unpublished doctoral dissertation, Washington University, St. Louis.
Fassbender, C., Murphy, K., Foxe, J. J., Wylie, G. R., Javitt, D. C., Robertson, I. H., et al. (2004). A topography of executive functions and their interactions revealed by functional magnetic resonance imaging. Cognitive Brain Research, 20(2), 132–143. https://doi.org/10.1016/j.cogbrainres.2004.02.007
Fodor, J. A. (1968). Psychological explanation: An introduction to the philosophy of psychology. Random House.
Fodor, J. A. (1975). The language of thought. Thomas Y. Crowell.
Ginet, C. (2008). In defense of a non-causal account of reasons explanations. The Journal of Ethics, 12(3–4), 229–237. https://doi.org/10.1007/s10892-008-9033-z
Grush, R. (2004). The emulation theory of representation: Motor control, imagery and perception. Behavioral and Brain Sciences, 27(3), 377–396. https://doi.org/10.1017/S0140525X04000093
Haken, H., Kelso, J. A. S., & Bunz, H. (1985). A theoretical model of phase transitions in human hand movements. Biological Cybernetics, 51(5), 347–356. https://doi.org/10.1007/BF00336922
Iorio, M. (2015). Reasons, reason-giving and explanation. In R. Stoecker & M. Iorio (Eds.), Actions, reasons and reason (pp. 61–76). De Gruyter.
Kaplan, D. M., & Craver, C. F. (2011). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science, 78(4), 601–627. https://doi.org/10.1086/661755
Kelso, J. A. S. (2008). Haken-Kelso-Bunz model. Scholarpedia, 3(10), 1612. https://doi.org/10.4249/scholarpedia.1612
Mackay, N. (1999). Reason, cause, and rationality in psychological explanation. Journal of Theoretical and Philosophical Psychology, 19(1), 1–21. https://doi.org/10.1037/h0091186
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. D. Reidel Publishing Company.
Neander, K. (2017). A mark of the mental: In defense of informational teleosemantics. MIT Press.
Norton, A. (1995). Dynamics: An introduction. In R. F. Port & T. van Gelder (Eds.), Mind as motion: Explorations in the dynamics of cognition (pp. 45–68). MIT Press.
Saltzman, E. L. (1995). Dynamics and coordinate systems in skilled sensorimotor activity. In R. F. Port & T. van Gelder (Eds.), Mind as motion: Explorations in the dynamics of cognition (pp. 149–174). MIT Press.
Shapiro, L. A. (2017). Mechanisms or bust? Explanation in psychology. British Journal for the Philosophy of Science, 68(4), 1037–1059. https://doi.org/10.1093/bjps/axv062
Simmonds, D. J., Pekar, J. J., & Mostofsky, S. H. (2008). Meta-analysis of Go/No-go tasks demonstrating that fMRI activation associated with response inhibition is task-dependent. Neuropsychologia, 46(1), 224–232. https://doi.org/10.1016/j.neuropsychologia.2007.07.015
Smith, J. L., Jamadar, S., Provost, A. L., & Michie, P. T. (2013). Motor and non-motor inhibition in the Go/NoGo task: An ERP and fMRI study. International Journal of Psychophysiology, 87(3), 244–253. https://doi.org/10.1016/j.ijpsycho.2012.07.185
Thompson, E. (2007). Mind in life: Biology, phenomenology and the sciences of mind. The Belknap Press.
Tong, F., Meng, M., & Blake, R. (2006). Neural bases of binocular rivalry. Trends in Cognitive Sciences, 10(11), 502–511. https://doi.org/10.1016/j.tics.2006.09.003
van Gelder, T. (1995). What might cognition be, if not computation? The Journal of Philosophy, 92(7), 345–381. https://doi.org/10.2307/2941061
van Gelder, T., & Port, R. F. (1995). It's about time. In R. F. Port & T. van Gelder (Eds.), Mind as motion: Explorations in the dynamics of cognition (pp. 1–43). MIT Press.
Varela, F., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.


Wager, T. D., Sylvester, C. Y., Lacey, S. C., Nee, D. E., Franklin, M., & Jonides, J. (2005). Common and unique components of response inhibition revealed by fMRI. NeuroImage, 27(2), 323–340. https://doi.org/10.1016/j.neuroimage.2005.01.054
Weiskopf, D. (2004). The place of time in cognition. British Journal for the Philosophy of Science, 55(1), 87–105. https://doi.org/10.1093/bjps/55.1.87
Werning, M., Hinzen, W., & Machery, E. (Eds.). (2012). The Oxford handbook of compositionality. Oxford University Press.
Zednik, C. (2011). The nature of dynamical explanation. Philosophy of Science, 78(2), 238–263. https://doi.org/10.1086/659221

Index

A
Abrahamsen, A., 8, 11, 33, 46
Abstraction, 37, 109
Activities, 2, 8, 9, 19–22, 25, 26, 34, 35, 42, 45, 46, 48, 49, 55, 56, 60, 63, 81, 88, 104, 124, 135, 139, 152, 162, 164, 166–176, 180, 181, 183
Algorithm, 20, 57, 145, 150, 151, 169, 170
Analogue representation, 102
Autopoiesis, 187, 188

B
Baumgartner, C., 55
Baumgartner, M., 13–16, 18, 64, 94
Bechtel, W., 8, 11, 13, 19–21, 23, 33, 46, 70, 185, 191

C
Cartographic representation, 100
Causal exclusion, 67, 68
Chemero, A., 3, 183
Cognitive concept
  common sense, 172
  scientific, 172
Cognitive gloss, 170, 171, 173, 175
Constitutive relevance, of relational properties, 87
Content
  cognitive, 161, 163–165, 169
  mathematical, 163, 165
Content indeterminacy, 62, 63
Covariance, 77–80, 163, 164
Craver, C.F., 8–13, 16, 19–22, 24, 25, 31–39, 41, 42, 49, 79, 182, 185–187

D
Decomposition, 11, 19, 20
Disjunction problem, 62, 80
Dretske, F.I., 56, 61, 77–80, 123, 126, 127, 144, 145, 147, 148, 151, 152, 180
Dual explananda, 5, 71, 72, 143–146, 153, 154, 159, 169
Dynamical models in METs, 186
Dynamical systems theory (DST), 184, 187
Dynamicism, 2, 183, 186
  and representationalism, 186

E
Edit distance, 47–50, 70
Egan, F., 144, 145, 162–171, 173–175, 186
Eliasmith, C.D., 61, 77, 80–83, 189
Eliminativism, 190, 191
Entities, 8, 9, 11, 20–22, 25, 26, 34, 35, 39, 42, 46, 48, 70, 71, 88, 105, 111, 112, 124, 151, 163, 165, 181, 187, 189, 191
Entity-involving occurrent, 9, 11, 12, 35, 39, 40, 46, 110
Explananda, 10, 11, 23, 25–27, 31–33, 36, 39–41, 64, 72, 143–145, 153, 167–169, 171–176, 186
  primary vs. secondary, 144, 145


Explanation
  completeness of, 41
  contrastive, 25, 31, 34–36, 38–42, 44, 49, 144
  covering-law, 25, 26, 185
  epistemic, 33
  function-theoretic, 162, 165, 166, 174
  goals of, 24, 38, 39, 163
  ontic, 33, 35, 39
  psychological, 188, 190
  of successes, 154, 169
  as unification, 26
Explanatory
  pluralism, 71
  relevance, 4, 41, 42, 63, 67, 68, 72, 153, 168, 186
  relevance, and constitutive relevance, 41

F
fMRI, 182
Fodor, J.A., 54–56, 58, 62, 69, 70, 79, 180, 189, 190
Francken, J.C., 171, 172, 174
Frequentism, 82, 84, 85
  finite, 84
  hypothetical, 84, 85
  and locality, 87
  and mutual dependence, 88, 89
Function
  ancestral, 124, 127
  Cummins, 122
  cybernetics, 125
  proper, 123, 126, 127
  synchronic, 125, 129, 134, 139
Functional imaging, 182

G
Gebharter, A., 13, 14, 64
Gładziejewski, P., 108, 116, 125, 136, 144, 145, 153–155, 158
Glennan, S., 8, 10, 11, 25
GO/NOGO paradigm, 182
Grush, R., 67, 104, 110, 111, 180

H
Haken-Kelso-Bunz (HKB) model, 184
Heuristics, 19, 20, 169–171, 174, 176, 190
Hippocampus, 56, 59, 104, 108, 109, 113–115, 136, 148–150, 152, 155
Homomorphism, 106–109
Homuncular fallacy, 56, 61
Horizontal surgicality, 15, 16, 18, 19, 40, 64, 65, 69, 89, 90, 93–95, 114, 116
How questions, 23, 40, 144

I
Indicator semantics, 62, 122, 182
Inter-level experiments, 19, 21–23
Internalism, 68, 69
Interpretivism, 191
Interventions, 13–19, 24, 37, 42–44, 48, 49, 64, 66–68, 89–91, 93–95, 100, 113–116, 120, 127–136, 138–140, 152–155, 159
  fat-handed, 13–16
  horizontally surgical, 15, 64, 66, 90, 94, 113–115, 120, 129, 132, 133, 135
  ideal, 13–15, 17, 67, 90, 91, 94, 115, 116, 128, 131, 134, 135, 154
Isomorphism, 106–109

J
Job-description challenge, 60–63

K
Kaplan, D.M., 24, 25, 31–39, 41, 42, 49, 185, 187
Kitcher, P., 26, 125, 126, 134
Krickel, B., 8–12, 15–18, 23, 31, 34, 41, 44, 47, 64, 68, 94, 166

L
Localisation, 19, 20, 23
Locality, 12, 14, 18, 64, 65, 68–70, 78, 82, 87, 88, 92, 95, 100, 109–113, 126, 127, 129, 131, 136, 139, 140

M
Maze navigation, 4, 17, 111, 113, 114, 144, 148–152, 156
Mechanism
  definition of, 11
  descriptions, canonical form, 46, 47
  descriptions, comparison of, 47
  development of, 151
  levels of, 11
  ontic, 35, 40, 41, 45, 46, 49
  ontogenetic, 151, 152
  schema, 20, 26
  sketch, 20, 166
Mechanism descriptions, 31, 34–41, 44–50, 66, 70, 71, 144, 150, 151, 165, 166, 175, 176, 180, 181, 186, 187
Mechanistic compositionality, 190
Mechanistic constitution, 12
  and causation, 13
  criteria for, 136
Mechanistic explanation
  constitutive, 12, 27, 53, 63, 69–72, 109, 110, 143, 151, 155
  etiological, 11
Mechanistic explanatory texts, 31, 39–50, 53, 64, 70, 131, 144, 146, 156, 165–168, 175, 176, 180, 181, 186
  contents of, 42, 44
Mechanistic models, 19–25, 27, 32, 37, 180, 181, 188
  how-plausibly, 20
  how-possibly, 20
Methodological solipsism, 69
Miłkowski, M., 108, 116, 144, 145, 153–155, 158
Millikan, R.G., 61, 77, 88, 120, 121, 123, 128
Misrepresentation, 57, 61, 62, 79, 80, 112, 113, 119, 120, 122, 131, 139, 157, 158, 163
Model-to-mechanism mapping (3M) constraint, 186, 187
Mutual dependence, 4, 12, 14–16, 18, 19, 64, 65, 68, 69, 78, 82, 87–89, 92, 95, 100, 113–116, 120, 127, 129, 130, 132, 134, 135, 139, 140, 187
Mutual information, 78, 80, 81, 88, 95, 182
Mutual manipulability, 12–16, 21, 34, 35, 41, 42, 64, 126, 139, 153, 154
  criticisms of, 13

N
Nanay, B., 124, 126, 132
Neander, K., 61, 62, 77, 120, 121, 123, 128, 189
Neural computation, 57
Neural emulators, 63, 64, 67, 100, 110–112, 116, 136, 140, 180
Neural reuse, 174, 175

O
O'Brien, G., 61, 99, 102, 105–109
Opie, J., 61, 99, 102, 105–109
Organisation, 9, 20, 21, 26, 46, 147

P
Phenomena
  as entity-involving occurrents, 9, 11, 12, 35, 40, 111
  as input-output pairs, 166
Phenomenon, 1, 3–5, 7–27, 31–45, 47, 53, 63–72, 87–95, 99, 100, 105, 109–116, 120, 126–140, 143–146, 148–154, 156, 159, 161, 162, 165–168, 171, 172, 174–176, 180–190
Piccinini, G., 57, 63, 72, 77, 124–126, 129, 130, 134
Place cells, 56, 58–60, 104, 105, 108–110, 113–115, 148, 151, 152
Probability
  conditional, 81–84
  varieties of, 82–83
Propensities, 82, 84–87, 92–95
  and locality, 92, 95
  long-run, 85, 92, 93
  and mutual dependence, 92, 95
  single-case, 85, 86, 92, 94, 95

R
Reciprocal manipulability, 16–19, 40, 64, 65, 90, 91, 94, 95, 113, 115, 116, 120, 127–130, 133–135, 138, 140
Representational
  contents, 2–5, 24, 54, 57, 58, 60, 61, 64–69, 71, 72, 78–81, 83, 87–95, 99, 100, 102, 105–107, 110, 112–116, 119–129, 132, 133, 135–140, 143–146, 153, 158, 162, 180–182
  status, 106–108, 135
  vehicles, 2, 5, 54, 57, 58, 61, 62, 64, 67, 72, 78, 81, 88, 89, 94, 107, 111, 114, 121, 122, 127–139, 146, 153, 154, 161–163, 182
Representationalism, 2–5, 54, 57, 58, 62, 63, 149, 151, 157, 161, 162, 164, 175, 180, 181, 186
Representation-hunger, 188
Representations
  external vs. internal, 54–58
  mental vs. neural, 58–60

S
Scaling-up, 188, 189
Schroeder, T., 125, 126
Second-order resemblance, 105–109, 112, 116
Semantic convention, 102
Sender-receiver model, 120, 121
Shea, N., 61, 69, 78, 99, 104, 107, 108, 144, 146–148, 153, 157, 158, 180
Structural representation, 67, 99–116, 119, 121, 134, 135, 153
Structural resemblance, 61, 65, 72, 100, 106, 107, 109, 112, 113, 115, 119, 159
  and locality, 65
  and mutual dependence, 65
Surrogative reasoning, 104, 110
Swoyer, C., 61, 99, 103, 104, 108, 110

T
Teleosemantics, 61, 65, 72, 77, 78, 99, 107, 119–140
Transcranial magnetic stimulation, 22
Translation problem, 171–173, 175

U
Usher, M., 61, 80–83, 88

V
Van Gelder, T., 3, 183, 184, 186
Varela, F.J., 3, 187

W
Why questions, 23, 40, 42, 144, 145, 148
Woodward, J., 10–13, 25, 42, 43, 68, 137, 153, 154, 158