The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment (Resources for Ecological Psychology Series) [1 ed.] 0367444712, 9780367444716


English Pages 196 [212] Year 2024


Table of contents:
Cover
Endorsements
Half Title
Series
Title
Copyright
Dedication
Contents
Acknowledgments
Preface
1 Making everybody upset
1.1 The scope of the issue
1.2 Why think ecological psychology and neuroscience are irreconcilable?
1.2.1 Neuroscience approach to visually-guided action
1.2.2 Ecological psychology approach to visually-guided action
1.2.3 Radically different scientific worldviews
1.3 What’s to come
2 Why “ecological” psychology?
2.1 From computers to environments
2.1.1 Why “cognitive” psychology?
2.1.2 Why “ecological” psychology?
2.2 Conclusion
3 The sins of cognitivism visited upon neuroscience
3.1 From cognitivism to neuroscience
3.2 From Hippocrates and Aristotle to Mike the Headless Chicken
3.3 From Ramón y Cajal to McCulloch and Pitts
3.3.1 Hodgkin-Huxley model
3.3.2 McCulloch-Pitts model
3.3.3 The McCulloch-Pitts tradition versus the Hodgkin-Huxley tradition
3.4 Cognitivism finds a new home
3.4.1 Is neuroscience really cognitivist?
3.4.2 Yes, contemporary neuroscience is very much cognitivist
3.5 Conclusion
4 The varieties of ecological neuroscience
4.1 The story thus far
4.2 Preliminaries to an ecological neuroscience
4.2.1 Neuroscience and (titular) affordances
4.2.2 Ecological psychology and resonance
4.2.3 Playing nice with others
4.3 Reed and Edelman
4.4 Neural reuse
4.5 Bayesianism
4.6 Conclusion
5 Foundations of complexity science for the mind sciences
5.1 A way forward
5.2 What is complexity science?
5.3 The roots of complexity science
5.3.1 Systems theory
5.3.2 Nonlinear dynamical systems theory
5.3.3 Synergetics
5.4 Key concepts and ways to get a grip on them
5.5 Putting complexity science to work
5.6 Conclusion
6 What is NExT? NeuroEcological Nexus Theory
6.1 Assembling the pieces
6.2 What is NExT?
6.3 Six hypotheses
6.4 Conclusion
7 Putting the NeuroEcological Nexus Theory to work
7.1 Investigating an affordance via NExT
7.2 The affordance of pass-through-able
7.3 Making everybody happy? NExT = complexity science, ecological psychology, and neuroscience
7.3.1 NExT and complexity science
7.3.2 NExT and ecological psychology
7.3.3 NExT and neuroscience
7.4 What comes NExT?
7.5 Conclusion
8 Conclusion
8.1 The ecological brain
8.2 Challenges
8.3 So is everybody upset?
Index

“The brain is just a brain. It has no function without the body and the environmental niche it occupies. The Ecological Brain is an attempt to explain this trivial yet often neglected embeddedness, integrating recent knowledge from psychology and neuroscience research.” —György Buzsáki, M.D., Ph.D., Biggs Professor of Neural Sciences, NYU Neuroscience Institute, New York University, USA

“After decades of asking what your head’s inside of, ecological psychologists are beginning to ask, “what’s inside your head?” In The Ecological Brain, Luis Favela takes seriously the claim that mind is low-dimensional dynamics in a brain-body-environment system. By synthesizing complexity theory, nonlinear dynamics, and recent work on neural manifolds, he points a way forward for understanding perception and action at neural, organism, and ecological scales.” —William H. Warren, Chancellor’s Professor of Cognitive Science, Brown University, USA

The Ecological Brain

The Ecological Brain is the first book of its kind, using complexity science to integrate the seemingly disparate fields of ecological psychology and neuroscience. The book develops a unique framework for unifying investigations and explanations of mind that span brain, body, and environment: the NeuroEcological Nexus Theory (NExT). Beginning with an introduction to the history of the fields, the author provides an assessment of why ecological psychology and neuroscience are commonly viewed as irreconcilable methods for investigating and explaining cognition, intelligent behavior, and the systems that realize them. The book then progresses to its central aim: presenting a unified investigative and explanatory framework offering concepts, methods, and theories applicable across neural and ecological scales of investigation. By combining the core principles of ecological psychology, neural population dynamics, and synergetics under a unified complexity science approach, NExT offers a comprehensive investigative framework to explain and understand neural, bodily, and environmental contributions to perception-action and other forms of intelligent behavior and thought. The book progresses the conversation around the role of brains in ecological psychology, as well as bodies and environments in neuroscience. It is essential reading for all students of ecological psychology, perception, cognitive sciences, and neuroscience, as well as anyone interested in the history and philosophy of the brain/mind sciences and their state-of-the-art methods and theories.

Luis H. Favela is Associate Professor of Philosophy and Cognitive Sciences at the University of Central Florida, USA, and is a fellow with the Research Corporation for Science Advancement. His research is interdisciplinary, situated at the intersections of the cognitive sciences, experimental psychology, and the philosophies of mind and science.

Resources for Ecological Psychology
A Series of Volumes Edited By Jeffrey B. Wagman & Julia J. C. Blau
[Robert E. Shaw, William M. Mace, and Michael Turvey, Series Editors Emeriti]

Dexterity and Its Development
Edited by Nicholai A. Bernstein, Mark L. Latash and Michael T. Turvey

Ecological Psychology in Context: James Gibson, Roger Barker, and the Legacy of William James's Radical Empiricism
Harry Heft

Perception as Information Detection: Reflections on Gibson's Ecological Approach to Visual Perception
Jeffrey B. Wagman and Julia J. C. Blau

A Meaning Processing Approach to Cognition: What Matters?
John Flach and Fred Voorhorst

Behavior and Culture in One Dimension: Sequences, Affordances, and the Evolution of Complexity
Dennis P. Waters

Affective Gibsonian Psychology
Rob Withagen

Introduction to Ecological Psychology: A Lawful Approach to Perceiving, Acting, and Cognizing
Julia J. C. Blau and Jeffrey B. Wagman

Intellectual Journeys in Ecological Psychology: Interviews and Reflections from Pioneers in the Field
Edited by Agnes Szokolszky, Catherine Read and Zsolt Palatinus

Places, Sociality, and Ecological Psychology: Essays in Honor of Harry Heft
Edited by Miguel Segundo-Ortin, Manuel Heras-Escribano and Vicente Raja

The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment
Luis H. Favela

The Ecological Brain Unifying the Sciences of Brain, Body, and Environment

Luis H. Favela

Designed cover image: Pranayama © Greg A. Dunn Design

First published 2024 by Routledge, 605 Third Avenue, New York, NY 10158 and by Routledge, 4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN. Routledge is an imprint of the Taylor & Francis Group, an informa business.

© 2024 Luis H. Favela

The right of Luis H. Favela to be identified as author of this work has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Names: Favela, Luis H., author.
Title: The ecological brain : unifying the sciences of brain, body, and environment / Luis H. Favela.
Description: New York, NY : Routledge, 2024. | Series: Resources for ecological psychology series | Includes bibliographical references and index.
Identifiers: LCCN 2023037791 (print) | LCCN 2023037792 (ebook) | ISBN 9780367444716 (hardback) | ISBN 9780367444723 (paperback) | ISBN 9781003009955 (ebook)
Subjects: LCSH: Environmental psychology. | Neurosciences.
Classification: LCC BF353 .F384 2024 (print) | LCC BF353 (ebook) | DDC 155.9—dc23/eng/20231106
LC record available at https://lccn.loc.gov/2023037791
LC ebook record available at https://lccn.loc.gov/2023037792

ISBN: 978-0-367-44471-6 (hbk)
ISBN: 978-0-367-44472-3 (pbk)
ISBN: 978-1-003-00995-5 (ebk)

DOI: 10.4324/9781003009955

Typeset in Sabon by Apex CoVantage, LLC

Para mi Nona.

Contents

Acknowledgments xi
Preface xiii
1 Making everybody upset 1
1.1 The scope of the issue 2
1.2 Why think ecological psychology and neuroscience are irreconcilable? 5
1.2.1 Neuroscience approach to visually-guided action 6
1.2.2 Ecological psychology approach to visually-guided action 9
1.2.3 Radically different scientific worldviews 11
1.3 What's to come 12
2 Why "ecological" psychology? 19
2.1 From computers to environments 19
2.1.1 Why "cognitive" psychology? 19
2.1.2 Why "ecological" psychology? 24
2.2 Conclusion 36
3 The sins of cognitivism visited upon neuroscience 42
3.1 From cognitivism to neuroscience 42
3.2 From Hippocrates and Aristotle to Mike the Headless Chicken 43
3.3 From Ramón y Cajal to McCulloch and Pitts 45
3.3.1 Hodgkin-Huxley model 48
3.3.2 McCulloch-Pitts model 50
3.3.3 The McCulloch-Pitts tradition versus the Hodgkin-Huxley tradition 54
3.4 Cognitivism finds a new home 56
3.4.1 Is neuroscience really cognitivist? 56
3.4.2 Yes, contemporary neuroscience is very much cognitivist 58
3.5 Conclusion 61
4 The varieties of ecological neuroscience 69
4.1 The story thus far 69
4.2 Preliminaries to an ecological neuroscience 70
4.2.1 Neuroscience and (titular) affordances 70
4.2.2 Ecological psychology and resonance 72
4.2.3 Playing nice with others 75
4.3 Reed and Edelman 76
4.4 Neural reuse 78
4.5 Bayesianism 80
4.6 Conclusion 82
5 Foundations of complexity science for the mind sciences 87
5.1 A way forward 87
5.2 What is complexity science? 88
5.3 The roots of complexity science 90
5.3.1 Systems theory 90
5.3.2 Nonlinear dynamical systems theory 91
5.3.3 Synergetics 96
5.4 Key concepts and ways to get a grip on them 99
5.5 Putting complexity science to work 106
5.6 Conclusion 110
6 What is NExT? NeuroEcological Nexus Theory 120
6.1 Assembling the pieces 120
6.2 What is NExT? 121
6.3 Six hypotheses 122
6.4 Conclusion 147
7 Putting the NeuroEcological Nexus Theory to work 157
7.1 Investigating an affordance via NExT 157
7.2 The affordance of pass-through-able 157
7.3 Making everybody happy? NExT = complexity science, ecological psychology, and neuroscience 163
7.3.1 NExT and complexity science 163
7.3.2 NExT and ecological psychology 164
7.3.3 NExT and neuroscience 165
7.4 What comes NExT? 165
7.5 Conclusion 168
8 Conclusion 170
8.1 The ecological brain 170
8.2 Challenges 173
8.3 So is everybody upset? 177
Index 187

Acknowledgments

This book took too long to complete. But it's not my fault! Well, it is, but there was a lot going on when the contract was first signed and the time soon after. You know, little stuff, like a pandemic, promotion and tenure, and first-time homeownership.

I would like to start by thanking Julia J. C. Blau and Jeff B. Wagman, editors of the Resources for Ecological Psychology Series, for their interest in the book and support along the way. A big thanks to Ceri McLardy and Emilie Coin, Routledge publisher and editor, for their patience, kindness, and support during those tough years. They never made me feel bad for delaying my submission—though they could have!

There are too many colleagues to thank who provided feedback on various parts of the book that were presented at conferences and the like over the years. There are two that deserve to be explicitly thanked. First, thanks to Tony Chemero for comments on various parts of earlier drafts and chatting about material. Second, thanks to Mary Jean Amon for reading the entire manuscript and providing helpful edits and comments. Their efforts made the book much better. Any remaining errors or limitations are mine alone.

This book drew from several earlier works of mine:

• Favela, L. H. (2021). The dynamical renaissance in neuroscience. Synthese, 199(1–2), 2103–2127.
• Favela, L. H. (2020). Dynamical systems theory in cognitive science and neuroscience. Philosophy Compass, 15(8), e12695, 1–16.
• Favela, L. H. (2020). Cognitive science as complexity science. Wiley Interdisciplinary Reviews: Cognitive Science, 11(4), e1525, 1–24.

I would like to thank the editors and reviewers at these venues for their helpful critiques and suggestions. Thanks also to Springer and Wiley for permission to reproduce text and figures. For permission to use images, thanks to Troy Waters for his grandfather's photo of Mike the Headless Chicken (Figure 3.1a), Jeffrey Goldstein for the origins of complexity science image (Figure 5.1), and Aaron Lou and Christopher Matthew De Sa for the dynamic chart method image (Figure 7.2a). Thanks to Dr. Greg Dunn for allowing me to license his beautiful artwork for the book cover (Pranayama: Breath of Fire) and producing one more of that out-of-print image for my home.

An immeasurable amount of thanks to my family: Nona, Mom, and Tia. Your neverending support helped keep me going during the challenging years that overlapped with much of the period of time spent writing this book. It was even helpful to hear you constantly asking me, "Is the book done yet?!" I love you, mi familia.

I am incredibly grateful for my life partner and wife, MJ. Thanks for the love, support, and laughs. When things got toughest, you always reminded me to, "Watch your step, kid!" Thank you. MNLW.

Preface

Dear to me is Plato, dearer still is truth.1 (Aristotle)

James J. Gibson was no Plato, and I certainly am no Aristotle. Still, to paraphrase, "Gibson is dear to me, but dearer still is truth." It is important to honor our sources of inspiration. But it is prudent that we not do so—to paraphrase another inspiration of mine—"come what may" (Quine, 1951).

At the time I first learned about Gibson, I was quite committed to mainstream understandings of mind: brain-centric, computations and representations, and reductionism. Exposure to ecological psychologists during graduate school changed all that.2 The ecological approach to perception that I learned about, along with ideas from complex and dynamical systems theory, undermined my confidence that perception and/or action could be reduced to brain activity, that cognition was fruitfully understood in terms of computations and representations, and that a full account of mind could be provided without substantial contributions from the body and environment. Although the path was painful at times (e.g., Socratic sessions during graduate seminars), I came out the other side confident that ecological psychology, dynamical systems theory, and embodiment were the proper ways to understand mind in its various forms.

While fruitful lines of philosophical and experimental research grew from these newfound and appreciated approaches, a substantial part of me remained quite interested in the brain. So I supplemented my philosophical and experimental psychology education and training with neuroscience-focused fellowships and visiting positions. Following on those later experiences, I realized something potentially paradoxical in my thinking: Even though I view much mainstream neuroscience as adhering to mistaken core commitments (e.g., computations and representations), I think ecological psychology will remain incomplete without providing serious accounts of the brain's contributions to its phenomena of investigative interest. The bottom-line is that the brain is ecological—it is not a special, isolated entity that can be properly understood in ways that disregard its situatedness in bodies

1 Though this statement—or a version of it—is widely recited (e.g., Durant, 1933; Shorey, 1938), there is no specific reference to its actual use. The real quote that it is based on is as follows: "Yet it would perhaps be thought to be better, indeed to be our duty, for the sake of maintaining the truth even to destroy what touches us closely, especially as we are philosophers or lovers of wisdom; for, while both are dear, piety requires us to honour truth above our friends." (Aristotle, Nicomachean Ethics, 1096a15, in Aristotle, 1908).
2 Thanks Tony Chemero, Mike Richardson, Mike Riley, and Kevin Shockley!


and environments. So too must explanations of interactions of bodies and environments by organisms like us include a substantial account of the brain's causal and constitutive contributions. So while Gibson's lessons are dear to me, I cannot ignore the truth that is the significance of brains, even when investigating perception-action. What would an account look like that integrates lessons from ecological psychology and neuroscience? From that question, a book was born.

Two aims guided my early forays into the book's subject matter: First, to try to understand why ecological psychology has traditionally not provided substantial descriptions of the brain's contributions and why neuroscience has not embraced lessons about embodiment and environmental embeddedness so compellingly motivated by ecological psychology. Second, to provide an investigative framework that can incorporate the best both fields have to offer. I soon found that addressing these required doing quite a bit of history (the first aim) and identifying a broadly-applicable set of methods and theories (the second aim).

Since I am engaging with big issues that have vast literatures, Chapter 1 starts by setting the scope of the discussion (e.g., limiting the discussion to certain areas of neuroscience). The next part of the book (Chapters 2–4) is primarily historical and conceptual in nature. I explain why ecological psychology and neuroscience are typically understood as being incommensurable and irreconcilable by telling a story that traces both of their lines of development as originating during the World War II era. Twentieth century neuroscience can be understood in light of two traditions: one that lays emphasis on biological features of neurons (i.e., Hodgkin-Huxley tradition) and one that lays emphasis on abstract features of neurons (i.e., McCulloch-Pitts tradition). The division from ecological psychology was evident early as neuroscience seemed to embrace the tradition (i.e., McCulloch-Pitts) that accepted much of what ecological psychology rejected in other areas of psychology where cognitivism (i.e., information processing understanding of mind) came to be overrepresented. With a plausible story about the origins of the division provided, the first part of the book concludes with an overview of prior attempts to integrate ecological psychology and neuroscience.

The second part of the book offers a solution. In Chapter 5, complexity science is described as offering concepts, methods, and theories that are broadly applicable to phenomena of investigative interest by both ecological psychology and neuroscience. Chapter 6 offers a specific framework for integration, namely, the NeuroEcological Nexus Theory (NExT). As a complexity science, NExT leverages concepts, methods, and theories that are broadly applicable to the neural, bodily, and environmental scales. In particular, NExT offers a framework that integrates neural population dynamics, body organization, and environmental information into an account of perception-action events. Chapter 7 demonstrates the potential of this framework by application to the case of mice navigating an environment of opportunities for action, or affordances.

The book concludes with the third part in Chapter 8. Here, I present some challenges likely to be hurled at NExT (e.g., "real cognition") and offer responses. After, I summarize aspects of ecological psychology and neuroscience that each will need to compromise for the sake of integration (e.g., eliminating core concepts like resonance and representation, respectively). I conclude the book on a hopeful note and offer reasons to think something like NExT is not just possible but that there are signs of it happening now. In this way, the book is a bit of a meditation. I am trying to understand a problem (i.e., the division between ecological psychology and neuroscience), what a solution needs (i.e., complexity science), test out a solution (i.e., NExT), and conclude with the hope that the story is correct and that the resolution can happen. Did I present ecological psychology and neuroscience in ways that will make everybody happy? Most likely not. But that is okay. I am content with what I learned during the journey and hope some folks out there learn a little and—just maybe—agree with some points made. Either way, readers might be entertained by my attempts to sprinkle in some humor here and there.

References

Aristotle. (1908). The works of Aristotle (W. D. Ross, Trans.). London, UK: Oxford University Press.
Durant, W. (1933). The story of philosophy: The lives and opinions of the greater philosophers. New York, NY: Time Incorporated.
Quine, W. V. (1951). Two dogmas of empiricism. The Philosophical Review, 60, 20–43. https://doi.org/10.2307/2181906
Shorey, P. (1938). Platonism: Ancient and modern. Berkeley, CA: University of California Press.

1 Making everybody upset

If you’re making everybody happy, then you’re doing something wrong. (Anonymous, n.d.)

The primary aim of this book is to reconcile two mind sciences that are often viewed as being at odds with each other: ecological psychology and neuroscience. In order to bring them together, I argue that complexity science provides a unifying set of concepts, methods, and theories for ecological psychology and neuroscience to adopt. The secondary aim is to provide an investigative framework that demonstrates how ecological psychology and neuroscience can be unified as a complexity science. That framework is the NeuroEcological Nexus Theory (NExT). The third aim is to show that reconciling ecological psychology and neuroscience by way of a complexity science-based framework like NExT has lessons with broad applicability to other mind sciences and related issues. Achieving these three aims will facilitate progress toward a unified framework for investigating, explaining, and understanding the various forms of mind and the systems those features are realized in.1

Before diving in, a word of caution. Big reward can come with big cost. As I will argue, the path to reconciliation will require both ecological psychology and neuroscience to loosen their grip on some of their core conceptual, methodological, and theoretical commitments. Due to that fact, my current proposal is bound to make everybody upset. There should be no doubt that both ecological psychology and neuroscience have theoretical sophistication and a wide range of empirical support and have greatly contributed to our understanding of mind and its physical realizers. Nevertheless, they have each only shed light upon a limited range of phenomena.

1 Throughout the book I will intentionally refer to investigation, explanation, and understanding as different features of a scientific framework. In brief, "investigation" refers to the work scientists do, such as experimental setup, hypothesis testing, data analyses, etc., as they attempt to achieve an explanation and/or understanding of a target of inquiry. "Explanation" is a controversial term, especially in the philosophy of science (e.g., Kitcher & Salmon, 1989; Woodward, 2019). With the hope of providing as basic a sense of "explanation" as I can, I simply refer to the successful outcome of an investigation. A "successful investigation" results in the achievement of such goals as illuminating relevant details about a phenomenon (e.g., mechanisms), its causes, the ways it exemplifies a law or theory, and other scientific virtues as supporting counterfactual reasoning. "Understanding" is also a controversial term (e.g., de Regt, 2017; Grimm, Baumberger, & Ammon, 2017; Khalifa, 2018). Scientific understanding overlaps with explanation a great deal but is not equivalent. It requires an additional level of comprehension, such as the ability to generalize features of explanations to other cases and answer deeper "why" questions.



Having limited scope is not a shortcoming in itself. In a variety of ways, specialization is what facilitates many of science's successes. Imagine a social psychologist investigating online chatroom behaviors but not being able to make any conclusions until they incorporated quantum-level effects and the influence of the moon's gravitational pull. The point is that scientific disciplines benefit from focusing on certain parts of nature. At the same time, specialization can be a double-edged sword. By focusing on certain phenomena, a discipline needs to develop particular investigative frameworks to best conduct research and provide explanations. Scientists both within and outside a discipline can become perturbed—to put it mildly—when an investigative framework is applied to phenomena outside its originally intended purview. Those within may worry that their concepts and methods are being misapplied. Those from outside may worry that a field is overstepping its boundaries. In order for reconciliation to occur in the current context, discipline insiders will need to be open to such occurrences as redefining concepts and new applications of their methods. Discipline outsiders will need to be open to incorporating new theories and reconsidering the adequacy of typical methods. Though doing so is, as I argue, necessary for progress, it is certainly going to make many practitioners upset—at least, at first. Only time will tell if this situation ends up being an instance of Planck's principle.2

Before providing an overview of what is to come, in the next section I define the scope of the issue. After, I present an example phenomenon that illustrates the tensions between ecological psychology and neuroscience. These two sections are aimed at helping readers who are unfamiliar with the topics. Additionally, they are for those who know the relevant literature and are interested in the extent to which the current arguments are intended to apply.

1.1 The scope of the issue

The purpose of this section is to set the scope of the issue. First, although more will be introduced throughout the upcoming chapters, it is helpful to present some basic terminology at this early stage. Note that my aim is not to present the necessary and sufficient conditions for these terms. I aim to provide definitions that I think are minimally controversial and could reasonably be understood by multidisciplinary audiences.

Here, "mind sciences" refers to those disciplines that investigate mind in a scientific manner, such as the cognitive sciences, neuroscience, psychology, and their many subfields. This could also include the philosophies of those sciences (e.g., philosophy of cognitive science, philosophy of neuroscience, and philosophy of psychology), as well as other disciplines such as computer science, education, and engineering. The term "mind" denotes those capacities of organisms commonly referred to as cognition, consciousness, goal-directed behavior, and mental processes. Examples of mind include decision making, perception, prospective control, and subjective experience. Connoting 'mind' with the aforementioned words may seem like defining by synonym and covering quite a wide range of phenomena. Since my current aim is not to defend a philosophical or theoretical thesis about what "mind" is, I am willing to accept

2 Often paraphrased as "science progresses one funeral at a time," the actual quote from Max Planck is, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up familiar with it" (Planck, 1950, pp. 33–34; cf. Azoulay, Fons-Rosen, & Graff Zivin, 2019; Hull, Tessner, & Diamond, 1978).


those critiques. I hope that in lieu of a straightforward definition, paradigmatic examples of phenomena investigated by the mind sciences will suffice for achieving my overall goal. In addition, what things have minds is not essential for current purposes. My current goals are not affected by what stance I take on whether computers, extraterrestrials, or nonhuman organisms have minds. Related, I take no stance on the metaphysics of mind other than the naturalist ontology of most contemporary science, namely, that science studies natural phenomena and that natural phenomena are physical. In view of that, "physical realizers" are those structural and functional features of an organism that instantiate the various facets of mind, such as neurophysiology and bodily dynamics. It is worth noting that the literature on physical realization—including multiple realization, physicalism, supervenience, etc.—is enormous (e.g., Fodor, 1974; Francescotti, 2014; Polger & Shapiro, 2016; Putnam, 1975; Shoemaker, 2007; Stoljar, 2017). I utilize the term here in as harmless a way as I can, with an aim toward capturing what a practicing scientist would mean, such as those phenomena accountable in the natural sciences.

Second, it will help the discussion to know what I mean by "ecological psychology" and "neuroscience." By "ecological psychology," I refer to a research program and theoretical approach in experimental psychology that was originally developed by James J. Gibson in the 1950s and 1960s to investigate perception and action (e.g., Gibson, 1966). In its early days, Eleanor J. Gibson expanded the approach to include developmental psychology (e.g., Gibson, 1969). The details of ecological psychology as an investigative framework are outlined in the next chapter. For now, it is helpful to know that ecological psychology began as—and continues to be—an alternative to the more cognitivist approach to psychology. In the current context, cognitivism defines mind as computational and representational in nature, and localizes it primarily in the central nervous system, especially the brain (e.g., Von Eckardt, 1995; Thagard, 2020). Ecological psychology rejects those commitments: Perception, for example, is in no literal way like a computer, does not involve indirect representations of the world, and is not realized primarily in neural systems. Instead, perception is essentially dynamic, direct (i.e., antirepresentational), and realized across organism-environment systems (e.g., Gibson, 1986/2015). The predominant form of ecological psychology to stem directly from Gibson was that of Neo-Gibsonians (Heft & Richardson, 2017), such as William M. Mace, Robert E. Shaw, and Michael Turvey (e.g., Turvey, Shaw, Reed, & Mace, 1981; Chemero, 2009, p. 109). Ecological psychology as a whole has evolved and been incorporated into other frameworks (e.g., radical embodied cognitive science; Chemero, 2009). Specific features, such as the term 'affordances,' have been applied in ways not always directly compatible with Gibsonian or Neo-Gibsonian ecological psychology (e.g., Bruineberg, Kiverstein, & Rietveld, 2018; Cisek, 2007; Friston et al., 2012). For current purposes, I refer to "ecological psychology" in a basic sense, which I believe overlaps with many contemporary practitioners who call themselves "ecological psychologists." To that end, I focus on three core principles commonly adhered to by ecological psychologists: perception is direct, perception and action are continuous, and affordances are the meaningful facets of perception (Chemero, 2013; Favela & Chemero, 2016; cf. Michaels & Palatinus, 2014; Lobo, Heras-Escribano, & Travieso, 2018; Richardson, Shockley, Fajen, Riley, & Turvey, 2008; Turvey, 2019).

Explaining what I mean by "neuroscience" is not as straightforward. Unlike ecological psychology, neuroscience does not stem from a single school of thought, and the majority of its practitioners do not share a widespread set of concepts, methods, or theories. Consequently, it would be unreasonable for me to say by "neuroscience" that I mean the entire field. A list of themes to select from when submitting an abstract to a recent meeting of the Society for Neuroscience makes this point evident, with ten broad category options (e.g., cognition, development, and neurodegenerative disorders and injury) and well over 100 topics that span scales from the molecular to society (Society for Neuroscience, 2021). That being said, there are common commitments shared by those who identify their work as falling under the heading of "neuroscience." From a Kuhnian perspective, a reasonable place to locate such commitments is textbooks used in the training of neuroscientists.3 Kandel's, Schwartz's, Jessell's, Siegelbaum's, and Hudspeth's (2013) "Principles of Neural Science" is exemplary in this regard. As they state, "The current challenge . . . we outline in this book, is the unification of the study of behavior—the science of the mind—and neural science—the science of the brain" (Kandel et al., 2013, p. 5). To that end, they lay out the core commitments of neuroscience, which I paraphrase as: First, the neuron is the elementary building block of brains and minds; also known as the "neuron doctrine" (Crick, 1994; Gold & Stoljar, 1999; Golgi, 1906/2021). Second, brains have distinct functional regions; that is, localization and modularity. Third, "mental processes" and behavior are the end product of brain processes (Kandel et al., 2013, p. 17). Either explicitly or implicitly, these commitments are central to other standard neuroscience textbooks as well (e.g., Bear, Connors, & Paradiso, 2016).

While it may be fair to say that neuroscientists generally adhere to these three commitments, their targets of inquiry and methods can vary quite considerably. Examples include, but are not limited to, cellular and molecular neuroscientists mapping the mouse brain at the synaptic level via gene targeting techniques (Zhu et al., 2018), cognitive neuroscientists studying frontal lobes during decision making via neuroimaging tools like magnetic resonance imaging (MRI; Gage & Baars, 2018), computational neuroscientists utilizing network theory to reveal structural connections in the brain (Sporns, 2011), and theoretical neuroscientists making use of formal theories like Bayesian inference (Abbott, 2008) and criticality (Beggs, 2008) to develop hypotheses and models of neuronal activity. From this very small sample, neuroscientists look like everything from geneticists and physiologists to mathematicians and physicists. It is for those reasons that I limit what I mean by "neuroscientists" to a particular range of practitioners. With an eye toward making clear differences from ecological psychology, I will utilize the word 'neuroscience' as generally referring to that subset of disciplines that investigate the same range of phenomena as ecological psychologists typically do. For now, that limits the discussion to those neuroscientists who conduct research on those features of mind involving perception and action. As a result, moving forward "neuroscience" will primarily signify research in

• Behavioral neuroscience: Aims to classify general forms of behavior and reduce them to physiological processes in the nervous system (Carlson, 2014).
• Cognitive neuroscience: Utilize various recording techniques (e.g., diffusion tensor imaging [DTI], electroencephalography [EEG], functional MRI [fMRI], etc.) to reveal neural

3 Thomas Kuhn (1962/1998) argued that the kind of scientific practice commonly thought of can be referred to as "normal science." In short, normal science is group problem solving by individuals who share a core set of beliefs, agree what problems are to be solved, and share methodologies for investigating and solving those problems. These beliefs and practices are encapsulated in textbooks: "These textbooks expound the body of accepted theory, illustrate many or all of its successful applications, and compare these applications with exemplary observations and experiments" (Kuhn, 1962/1998, p. 10; see also pp. 137–143).


bases of cognition, with emphasis on identifying functional specialization in the brain (Gazzaniga, Ivry, & Mangun, 2014).
• Computational neuroscience: Two general aims, which are not necessarily exclusive: (1) Leveraging mathematical models and abstract and formal theories to guide experiments, and model and interpret data (Trappenberg, 2014). (2) Investigate the nervous system in terms of its processing and computing information (Piccinini & Shagrir, 2014).
• Sensory neuroscience: Aims for functional localization of sensory processes by mapping stimuli to the somatosensory cortex (Barwich, 2020; Reid & Usrey, 2013).

Although neuroscience as a whole may not be unified (e.g., methods used, scales of investigation, etc.), as I argue in Chapter 3, the afore-listed subdisciplines come close to a common set of conceptual and theoretical commitments. What is more, I argue that those commitments are directly connected to a shared source, namely, the cognitivism that arose out of the cognitive revolution of the 1950s and 1960s (e.g., Cobb, 2020). Granting that it is clearer what I mean by ecological psychology and neuroscience, it is still likely—at least for those unfamiliar with these topics—that it is unclear why they are at odds. As such, I turn to the next section and present an example target of inquiry shared by both that illustrates their—at least at this point in the story—potentially irreconcilable differences.

1.2 Why think ecological psychology and neuroscience are irreconcilable?

Thus far, I have asserted that ecological psychology and neuroscience each hold vastly different conceptual, methodological, and theoretical commitments. I mentioned some of these main differences in the previous section. Neuroscience—remember, as stated earlier, I refer to particular subfields—is committed to a computational, representational, and brain-focused conception of mind. Ecological psychology is committed to a dynamic, antirepresentational, and organism-environment system-level understanding of mind. In this section, I provide a straightforward example that makes clear how such commitments lead to ecological psychology and neuroscience being understood as irreconcilable.

What, though, is meant here by "irreconcilable?" By "irreconcilable," I refer to four conflicting features of the general investigative approaches of ecological psychology and neuroscience. First, each one has concepts that are not acknowledged as existing in the other or are contradictory. For example, mental representations are not accepted within ecological psychology, though they play indispensable roles in many neuroscience hypotheses and explanations. Second, there are methods not used by the other or that lead to conclusions not acceptable as proper explanations. For example, it is common for ecological psychologists to utilize "scaled" data—e.g., body-scaled units of measurement instead of absolute units—and methods that facilitate nonreductive dynamical explanations, whereas neuroscience does not utilize such scaled data and appeals to methods that facilitate reductive and mechanistic explanations. Third, each one appeals to theories that provide radically different responses to scientific problems and goals. For example, an ecological psychologist explains the ability of somebody to pass through apertures of various widths as an affordance (i.e., pass-through-able) defined in terms of the aperture-to-shoulder-width ratio (A/S, where A is width of the aperture and S is shoulder width at the broadest point) that separates the transition from passing through the aperture while walking normally to needing to turn shoulders to fit through (Favela, Riley, Shockley, & Chemero, 2018; Warren & Whang, 1987). In this way, the perceptual-motor (note the single concept signifying the continuous ontology) task of walking through apertures requires accounting for the affordance at the point of organism-environment interaction. On the other hand, a neuroscientist explains this perceptual and motor (note the separate concepts signifying the distinct ontology) task in terms of identifying the obtaining of the representation of the environment and how it is computed in the visual system, and then how that information sends motor commands to the body to successfully guide walking through the aperture (e.g., Edelman, 1999; Jeannerod, 1994; Jordan & Wolpert, 2000; Marr, 1982/2010; Pylyshyn, 1980). Finally, by irreconcilable I mean that each adheres to distinct guides to discovery. In science, guides to discovery are sources of new hypotheses that can assist in the development of new experiments (Chemero, 2009, 2013), and provide theory to constrain explanations of experimental results (Golonka & Wilson, 2012). For the ecological psychologist, affordances regularly serve as the guide to discovery. For neuroscientists, there is no single dominant guide to discovery as affordances are in ecological psychology. However, there are various contenders, such as the Bayesian brain (Knill & Pouget, 2004), coordination dynamics (Bressler & Kelso, 2016), criticality (Beggs, 2008), free-energy principle (Ramstead, Badcock, & Friston, 2018), Neural Darwinism (Edelman, 1987), and neural reuse (Anderson, 2014). In sum, the investigative frameworks of ecological psychology and neuroscience are irreconcilable in crucial ways with regard to each of the four aforementioned features.

Visually-guided action provides a clear example of these differences. At its most general, visually-guided action can be understood as the voluntary control a perceptual system can exert to ensure successful movements while integrating and utilizing various sources of information, such as distance and shape of objects and location of limbs (Westwood, 2010). For organisms that rely on visual information (e.g., light, or the portion of the electromagnetic spectrum perceivable by a perceptual apparatus, like a mammalian eye), visually-guided action is as commonplace as looking at a bug bite on one's leg and then scratching it or reaching for a cup of coffee on a desk, to as specialized as dribbling a ball past an opponent's team toward a goal or surgically removing a brain tumor. This general description of visually-guided action can be accepted by both ecological psychology and neuroscience. But that is where the agreement stops. Their respective investigative approaches are radically different and are framed in what follows in terms of contrasting concepts, methods, theories, and guides to discovery.4

1.2.1 Neuroscience approach to visually-guided action

The neuroscience approach to investigating visually-guided action is presented here via an attempt to address the following question: "How does a mammal (e.g., orangutan) grab a piece of fruit from a tree?" One way to understand the neuroscientific approach is by breaking the issue into three subproblems, known as Marr's three levels of analysis (Figure 1.1; Marr, 1982/2010, pp. 24–27). For the sake of ease of discussion, I refer to them as Levels 1, 2, and 3. Though arguments could be made about which level is most important, the

4 Admittedly, the ecological psychology and neuroscience accounts of visually-guided action I describe here are quite simplistic. I request that the reader suspend judgment of the details in favor of the overall points being made, which are to highlight the differences of these approaches in a succinct and introductory-level manner.


Figure 1.1 Neuroscience account of visually-guided action via Marr's three levels of analysis. (a) Level 1: Computational theory: What is that which is being computed for? Here, the problem of obtaining a piece of fruit from a tree branch. (b) Level 2: Representation and algorithm: How do we understand the computation in terms of inputs and outputs? Here, a model-based view of sensorimotor control, with a schematic of the processes (left) and outline of anatomical pathways (right). (c) Level 3: Hardware implementation: Identify how the algorithms and representations are realized. Here, mapping of sensorimotor cortex via fMRI and intraoperative optic microscope (bottom right). Source: (a) Modified and reprinted with permission from Pixabay; (b) Modified and reprinted with permission from Lan, Niu, Hao, Chou, and Dai (2019). Copyright 2019 IEEE CC BY 4.0; (c) Reprinted with permission from Rosazza et al. (2014). Copyright 2014 Public Library of Science; CC BY 4.0.


number is not intended to signify a hierarchy (see Eliasmith & Kolbeck, 2015 for discussion of Marr intending the three levels to be integrated). The most general level is Level 1, the computational theory, which asks, "What is being computed; what is it for?" In this case, the computation is the orangutan trying to figure out how to traverse tree branches to bring fruit within grabbing distance. Level 2, representation and algorithm, aims to provide an account for how the computation is implemented; that is, how do we understand the computation in terms of inputs (e.g., indirect mental representation of fruit hanging off a branch) and outputs (e.g., control and coordination of limbs). Last, Level 3 is hardware implementation, which seeks to provide an account for how the algorithms and representations are realized, in this case, in the orangutan's brain.

By taking the three levels of analysis approach, a number of investigative commitments are revealed. First, with regard to concepts, it is clear that "computations" and "representations" are crucial to all three levels. Computations play the role both of formalizing the target phenomenon (Levels 1 and 2) and of providing the means to understanding the nature of the physical realization, for example, neuronal activity (Level 3). Representations also play the role of understanding what the algorithm processes (Level 2) and what the physical realizers should be doing (Level 3). Second, in terms of methods, the approach reveals the importance of decomposing phenomena to their parts (Level 2) and reducing them to neuroanatomy (Level 3). The third, theory, and fourth, guide to discovery, are combined. In many ways, this general approach is "theory broad," though not quite "theory neutral." It is theory broad in the sense that several theories can serve as guides to discovery under this general three levels approach, for example, Bayesian brain, free-energy principle, and Neural Darwinism. As long as some form of computationalism and representationalism is involved, hypotheses can be generated from a variety of approaches and experimental results explained as broadly.

Another way to understand the neuroscience investigative approach to visually-guided action is as an instance of the outside-in framework (Buzsáki, 2019). Buzsáki presents the generally-accepted outside-in approach of neuroscience as centering on the theoretical commitment of understanding the brain as an information-processing system that receives information (e.g., light through the eyes), computes the information (e.g., acts on representations), and then responds (e.g., sends motor commands to limbs).5 The ecological psychology approach is not an outside-in approach and does not share any of the earlier concepts, methods, or guides to discovery, especially with regard to visually-guided actions.6
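As a rough, purely illustrative sketch (not taken from the book, Marr, or Buzsáki), the outside-in, model-based picture just described can be caricatured as a short pipeline: sense the world, build an internal representation, compute a motor command over that representation, and send it to the body. All function names and numbers below are invented for illustration.

```python
# Toy caricature of the "outside-in", model-based control loop sketched above:
# input -> internal representation -> computation -> motor output.
# All names and numbers are invented placeholders, not an actual model from the literature.
import numpy as np

def sense(world_state: np.ndarray) -> np.ndarray:
    """Receive information from the world (e.g., a noisy, retina-like measurement)."""
    return world_state + np.random.normal(0.0, 0.01, size=world_state.shape)

def build_representation(measurement: np.ndarray) -> dict:
    """Construct an internal representation (here, just an estimated target location)."""
    return {"target_position": measurement}

def compute_motor_command(representation: dict, hand_position: np.ndarray) -> np.ndarray:
    """Compute over the representation to produce an output (a reach vector toward the target)."""
    gain = 0.5  # arbitrary controller gain
    return gain * (representation["target_position"] - hand_position)

# One pass through the pipeline: hypothetical fruit location in, motor command out.
world = np.array([0.40, 0.10, 0.25])   # hypothetical fruit position (meters)
hand = np.zeros(3)                      # hypothetical hand position
command = compute_motor_command(build_representation(sense(world)), hand)
print("toy motor command:", command)
```

The point of the sketch is only the shape of the explanation: the environment enters as input, the interesting work happens over internal representations, and the body appears at the end as the recipient of commands.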

5 One need not look far to find confirmation of Buzsáki's claim. As a recent neuroscience textbook states, "The brain's representation of the visual world is dictated by the optics of the eye and, in particular, where light from the visual scene falls on the retina" (Postle, 2020, p. 90).
6 In Marr's immensely influential "Vision" (1982/2010), where he lays out the three levels approach and sets the foundation for investigating visual perception in terms of information processing, he briefly discusses Gibson's ecological approach. Marr stated that Gibson made an "important contribution" (p. 29) to the field by highlighting the significance of the structure of environmental information when attempting to explain visual perception. However, Marr thought Gibson "had a much oversimplified view" of how to do that, which led to two "fatal shortcomings" (pp. 29–30). First, not understanding that perception of environmental information is information-processing and, second, the detection of environmental information is difficult and complicated for systems.

1.2.2 Ecological psychology approach to visually-guided action

The ecological psychology approach to investigating visually-guided action is presented here via an attempt to address the following question: "How do mammals (e.g., dogs and humans) avoid obstacles?" When investigating such perceptual-motor occurrences, the ecological psychologist does not appeal to features of the neuroscientific approach just presented. As stated earlier, the three core principles of ecological psychology are that perception is direct (i.e., not involving indirect mental representations), perception and action are continuous, and affordances are the meaningful facets of perception. Building on these core commitments, Neo-Gibsonians have incorporated various formal tools (e.g., dynamical systems theory) to facilitate the development of mathematically-rigorous explanations.

A typical starting point for ecological psychologists is to identify the affordance of interest. Remember, affordances are the meaningful opportunities for behavior that an organism perceives (more on affordances in Chapter 2). Here, the affordance is obstacle avoidance (Figure 1.2a). In other words, if there are obstacles, then do they afford avoiding based on the environmental conditions and the bodily dimensions and capacities of the acting organism? The next task is to identify the relevant variables that capture features of the environment and organism during the target of investigation. That is to say, those features most critical to the perceptual-motor event. This step can involve creating a schematic of the environment based on body-scaled information, which are environmental dimensions as they relate to bodily dimensions of the organism (Figure 1.2b). A simple example of such body-scaled variables is the A/S ratio—aperture width (A) to shoulder width (S)—discussed previously, which captures the critical point that an aperture affords passing through or not for a human. In addition to more static features like body and object size, dynamic features such as locomotor capabilities must be identified, such as approach speed and time-to-crossing (Figure 1.2c). With all relevant optical, spatial, and temporal variables identified (Figure 1.2d), the model fully accounts for the obstacle-avoidance affordance. Explanations like these exhibit such scientific virtues as controlled manipulations, predictions, and simplicity.

Such work as Fajen's (2013) affordance-based model of visually-guided action in the presence of moving objects—from which much of the current discussion is borrowed—is exemplary of an ecological psychology investigative approach. It adheres to the three main principles customary to ecological psychology since its earliest days and leverages the formalisms incorporated by Neo-Gibsonians. Moreover, it reveals a number of investigative commitments. First, with regard to both concepts and methods, it is clear that the ecological psychologist considers both features of the organism and environment as equally important to explaining perceptual-motor events. Accordingly, concepts like "body-scaled information" are utilized instead of absolute units. For example, distance between organism and environment is not in terms of an absolute space measured by a Cartesian coordinate system. Instead, distance is in terms of features like relative eye-height. Next, in terms of theory, this work reveals the crucial roles that theoretical commitments play in ecological psychology research. Specifically, the continuity of perception and action (e.g., action-scaled information), as well as other nondualities like the organism-environment system. Such emphases on theory reveal the source of many hypotheses in ecological psychology research, namely, affordances. Affordances are the primary guide to discovery in ecological psychology and, as such, both inspire and inform hypotheses as well as provide a resource for explaining experimental results.
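To make these body-scaled and action-scaled quantities concrete, here is a minimal sketch of my own (it is not Fajen's model): the A/S ratio, distance rescaled by eye height, and a simple first-pass time-to-crossing. The critical A/S value of roughly 1.3 is the approximate transition reported by Warren and Whang (1987); all other numbers are illustrative.

```python
# Minimal sketch of body-scaled variables; illustrative only, not Fajen's (2013) model.

def affords_passing_through(aperture_width_m: float, shoulder_width_m: float,
                            critical_ratio: float = 1.3) -> bool:
    """Pass-through-able without shoulder rotation when A/S exceeds the critical ratio
    (approximately 1.3 per Warren & Whang, 1987)."""
    return (aperture_width_m / shoulder_width_m) > critical_ratio

def eye_height_scaled_distance(distance_m: float, eye_height_m: float) -> float:
    """Distance expressed in body-scaled units (multiples of eye height) rather than absolute meters."""
    return distance_m / eye_height_m

def time_to_crossing(distance_m: float, approach_speed_mps: float) -> float:
    """First-pass temporal variable: time remaining until the aperture/obstacle is reached."""
    return distance_m / approach_speed_mps

# Example: a 0.75 m doorway, 0.45 m shoulders, 1.6 m eye height, walking at 1.2 m/s from 3 m away.
print(affords_passing_through(0.75, 0.45))              # True (A/S is about 1.67, above 1.3)
print(round(eye_height_scaled_distance(3.0, 1.6), 2))   # about 1.88 eye heights
print(round(time_to_crossing(3.0, 1.2), 2))             # 2.5 seconds
```

Note that every quantity in the sketch is defined over the organism-environment pairing (aperture relative to shoulders, distance relative to eye height, space relative to approach speed), which is the sense in which the explanation stays at the ecological scale rather than inside the head.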


Figure 1.2 Ecological psychology account of visually-guided action. (a) Obstacle avoidance; a common visually-guided action. Here, a dog avoiding a series of pillar-like obstacles. (b) An example schematic of the target phenomenon. Here, a human is moving toward an obstacle. (c) Model of visually-guided locomotion in terms of affordances specified by information in the optic array and accounting for physical size of both perceiver and obstacle. (d) Table of definitions of symbols designating spatial, temporal, and optical variables. Source: (a) Modified and reprinted with permission from Pixabay; (b–d) Modified and reprinted with permission from Fajen (2013). Copyright 2013 Frontiers Media CC BY 3.0.

1.2.3 Radically different scientific worldviews

The aim of the previous two sections was to provide a general overview of ways ecological psychology and neuroscience investigate, explain, and understand a common natural phenomenon, namely, visually-guided action. A summary of their diferences is presented in Table 1.1. At the risk of making everybody upset, I think each approach can be summed up as follows: Ecological psychology is a systems-level approach, in that phenomena of interest (i.e., afordances) emerge primarily at the level of systems, such as the perceptionaction system and organism-environment system (i.e., not the type of “system” of systems neuroscience, e.g., auditory system). Neuroscience is an information-processing approach, in that phenomena of interest (e.g., motor control) occur primarily in the central nervous system (CNS), especially brains (i.e., not the kind of “information” of ecological psychology, namely, invariant ecological information available in, for example, the optic array). Of course, no ecological psychologist would deny that the CNS plays necessary roles in mammalian perception-action. But they would not say the CNS is the primary target of investigation. Similarly, no neuroscientist would deny that the CNS cannot do its job without a body or environment. But they would not say embodiment or situatedness are the prime factors in their explanations. Granting that each acknowledges the other’s preferred investigative purview, the fact is that in many ways each conducts research from within radically diferent worldviews. The distinct concepts, methods, theories, and guides to discovery presented in Table 1.1 are not merely diferent aspects of the same world. Though a physicist studies subatomic particles, a chemist studies compounds and elements, and a biologist studies cells and organs, all of their targets of investigation exist within the same world. That is to say, compounds and elements, cells and organs are all constituted by subatomic particles—each is just investigated for various epistemic and practical reasons. The same is not necessarily true of the diferences between ecological psychologists and neuroscientists, which, in a number of crucial ways, have entirely diferent ontologies—or things that exist. The kind of “representations” posited by neuroscience are not merely outside the investigative purview of ecological psychology. For the ecological psychologist’s ontology, “representations” do not exist. In the same way, though neuroscientists may use the word ‘afordance’ loosely to refer to “potential actions” (e.g., Cisek, 2007, p. 1586; cf. Frey & Grafton, 2014), they do not use it in the stricter ecological sense, namely, directly perceivable opportunities for behavior based on properties of both organism and environment, and specifed by environmental information (e.g., Chemero, 2009; Stofregen, 2003). Consequently, the “afordances” of ecological psychology do not exist in neuroscientist’s ontology. Table 1.1 Key investigative diferences between ecological psychology and neuroscience  

Concepts
  Ecological Psychology: body-scaled information, action-scaled information
  Neuroscience: computation; representation

Methods
  Ecological Psychology: mathematical modeling; dynamical explanations
  Neuroscience: neural imaging; reductionist and mechanistic explanations

Theories
  Ecological Psychology: continuities of perception-action and organism-environment
  Neuroscience: information processing; outside-in

Guides to Discovery
  Ecological Psychology: affordances
  Neuroscience: Bayesian brain; free-energy principle; Neural Darwinism; neural reuse

Note: These examples are not intended to be exhaustive.


The dissimilarities are also epistemic—or the ways things are known, e.g., deductions and inferences made based on data—especially with regard to explanations. As discussed earlier, a model of visually-guided action that captures all relevant variables and their interactions can serve as an explanation for an ecological psychologist (Figure 1.2c; cf. Chemero & Silberstein, 2008; Dale, 2008; Silberstein, 2021; Stepp, Chemero, & Turvey, 2011). However, a neuroscientist is unlikely to accept such a model alone as explanatory (e.g., Bechtel & Abrahamsen, 2010; Craver, 2007; Kaplan & Craver, 2011; Piccinini & Craver, 2011; cf. Favela, 2021; Izhikevich, 2007; Ross, 2015). In the same way, identifying the sequence of neuronal firing patterns and imaging corresponding neuronal activity underlying representations contributing to visually-guided action can serve as an explanation for a neuroscientist. However, an ecological psychologist is unlikely to accept such an explanation as long as, among other reasons, it does not provide a compelling account of what "representations" are (Richardson et al., 2008; Riley & Holden, 2012; Turvey, 2019; Turvey & Carello, 1981; see Dennett, 1971 on "loans of intelligence").

It is for these reasons and more (e.g., sociohistorical) that ecological psychology and neuroscience can be considered irreconcilable scientific frameworks for investigating, explaining, and understanding mind. As I have argued previously, the situation is not one of merely different preferred concepts, methods, theories, and guides to discovery. Those components of their investigative frameworks carry radically different ontological commitments (e.g., affordances and representations), as well as conflicting epistemic standards for justified explanations. With all that said, there are reasons to think that a bridge can be built between them.

1.3 What's to come

Though the details of the earlier argument can be debated, it is uncontroversial to accept the claim that ecological psychology and neuroscience are considered irreconcilable scientific frameworks for investigating, explaining, and understanding mind. In response to that current state of affairs, the primary aim of this book is to defend the following thesis: ecological psychology and neuroscience can be reconciled via complexity science. This thesis will be motivated by demonstrating the ability of a specific complexity science framework to integrate ecological psychology and neuroscience: the NeuroEcological Nexus Theory (NExT). The successful reconciliation of ecological psychology and neuroscience via NExT will exhibit a number of lessons with broad applicability to other mind sciences and related issues.

There are at least three reasons to think such reconciliation is possible. First, and as a basic point, since at least the 1980s, ecological psychologists and neuroscientists have collaborated on various projects (e.g., Chemero & Heyser, 2009; Favela, Coey, Griff, & Richardson, 2016; Kugler, Kelso, & Turvey, 1980). Second, and of more significance, there is a recent trend in the ecological psychology literature to explore ways to make stronger connections between the CNS and perception-action. Such work endeavors to supplement ecological psychology explanations of perceptual-motor events with explicit discussion of the contributions the brain and other facets of the nervous system make to those events.
This work has been described as "Gibsonian neuroscience" and "ecological neuroscience" (e.g., de Wit & Withagen, 2019; Golonka & Wilson, 2019; van der Meer & van der Weel, 2020; van Dijk & Myin, 2019; though not by those names, attempts at this approach are found in Anderson, 2014; Raja, 2018; Raja & Anderson, 2019). Third, and most important for my purposes, various features of complexity science have already been successfully imported in a range of ecological psychology (e.g., Richardson & Chemero, 2014; Stephen & Van
Orden, 2012; Van Orden, Holden, & Turvey, 2005) and neuroscience (e.g., Beggs & Plenz, 2003; Favela, 2014, 2019; Kozma, Wang, & Zeng, 2015; Plenz & Niebur, 2014; Popiel et al., 2020; Timme et al., 2016; Van Orden, Hollis, & Wallot, 2012; Yaghoubi et al., 2018) research. I believe the possibility of successfully defending my thesis—ecological psychology and neuroscience can be reconciled via complexity science—is encouraged by these three points. If you are still on board and open to the chance that you could agree with my thesis—in spite of the possibility of becoming upset by one of my claims at some point—then here is a preview of what is to come.

The following part of the book centers on the relevant history of ecological psychology and neuroscience, with a focus on those parts of their respective backgrounds that lead to their being incompatible research agendas. In Chapter 2, I discuss the motivations for an "ecological" psychology, with emphasis on the way it is an alternative to the dominant cognitivism that emerged from the cognitive revolution of the 1950s and 1960s. Chapter 3 argues that neuroscience inherited those very features of cognitivism that ecological psychology dissented from, namely, the information-processing view of mind, with its brain-centric focus and emphasis on computations and representations. Chapter 4 presents overviews and critical assessments of prior attempts to reconcile ecological psychology and neuroscience. The next part lays out the means for reconciliation and offers a successful demonstration of its application. Chapter 5 presents the foundations of complexity science and discusses how complexity science serves as an investigative framework in the mind sciences. Chapter 6 presents a way ecological psychology and neuroscience can be reconciled via complexity science in the form of the NeuroEcological Nexus Theory (NExT). Chapter 7 puts NExT to work by applying it to a specific case. The book concludes with Chapter 8, which discusses challenges to the approach and offers reasons for optimism. Successfully making the case for reconciliation by way of a complexity science-based framework like NExT will make a positive contribution to our understanding of mind.

References

Abbott, L. F. (2008). Theoretical neuroscience rising. Neuron, 60, 489–495. https://doi.org/10.1016/j.neuron.2008.10.019
Anderson, M. L. (2014). After phrenology: Neural reuse and the interactive brain. Cambridge, MA: The MIT Press.
Azoulay, P., Fons-Rosen, C., & Graff Zivin, J. S. (2019). Does science advance one funeral at a time? American Economic Review, 109(8), 2889–2920. https://doi.org/10.1257/aer.20161574
Barwich, A. S. (2020). Smellosophy: What the nose tells the mind. Cambridge, MA: Harvard University Press.
Bear, M. F., Connors, B. W., & Paradiso, M. A. (2016). Neuroscience: Exploring the brain (4th ed.). New York, NY: Wolters Kluwer.
Bechtel, W., & Abrahamsen, A. (2010). Dynamic mechanistic explanation: Computational modeling of circadian rhythms as an exemplar for cognitive science. Studies in History and Philosophy of Science, 41, 321–333. https://doi.org/10.1016/j.shpsa.2010.07.003
Beggs, J. M. (2008). The criticality hypothesis: How local cortical networks might optimize information processing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1864), 329–343. https://doi.org/10.1098/rsta.2007.2092
Beggs, J. M., & Plenz, D. (2003). Neuronal avalanches in neocortical circuits.
The Journal of Neuroscience, 23, 11167–11177. https://doi.org/10.1523/JNEUROSCI.23-35-11167.2003
Bressler, S. L., & Kelso, J. A. S. (2016). Coordination dynamics in cognitive neuroscience. Frontiers in Neuroscience, 10(397). https://doi.org/10.3389/fnins.2016.00397


Bruineberg, J., Kiverstein, J., & Rietveld, E. (2018). The anticipating brain is not a scientist: The free-energy principle from an ecological-enactive perspective. Synthese, 195(6), 2417–2444. https://doi.org/10.1007/s11229-016-1239-1
Buzsáki, G. (2019). The brain from inside out. New York, NY: Oxford University Press.
Carlson, N. R. (2014). Foundations of behavioral neuroscience (9th ed.). Essex, UK: Pearson.
Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: The MIT Press.
Chemero, A. (2013). Radical embodied cognitive science. Review of General Psychology, 17(2), 145–150. https://doi.org/10.1037/a0032923
Chemero, A., & Heyser, C. (2009). Methodology and reduction in the behavioral neurosciences: Object exploration as a case study. In J. Bickle (Ed.), The Oxford handbook of philosophy and neuroscience (pp. 68–90). New York, NY: Oxford University Press.
Chemero, A., & Silberstein, M. (2008). After the philosophy of mind: Replacing scholasticism with science. Philosophy of Science, 75, 1–27. https://doi.org/10.1086/587820
Cisek, P. (2007). Cortical mechanisms of action selection: The affordance competition hypothesis. Philosophical Transactions of the Royal Society B, 362, 1585–1599. https://doi.org/10.1098/rstb.2007.2054
Cobb, M. (2020). The idea of the brain: The past and future of neuroscience. New York, NY: Basic Books.
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. New York, NY: Oxford University Press.
Crick, F. (1994). The astonishing hypothesis: The scientific search for the soul. New York, NY: Macmillan Publishing Company.
Dale, R. (2008). The possibility of a pluralist cognitive science. Journal of Experimental and Theoretical Artificial Intelligence, 20, 155–179. https://doi.org/10.1080/09528130802319078
Dennett, D. C. (1971). Intentional systems. The Journal of Philosophy, 68(4), 87–106. https://doi.org/10.2307/2025382
de Regt, H. (2017). Understanding scientific understanding. New York, NY: Oxford University Press.
de Wit, M. M., & Withagen, R. (2019). What should a "Gibsonian neuroscience" look like? Introduction to the special issue. Ecological Psychology, 31, 147–151. https://doi.org/10.1080/10407413.2019.1615203
Edelman, G. M. (1987). Neural Darwinism: The theory of neuronal group selection. New York, NY: Basic Books.
Edelman, S. (1999). Representation and recognition in vision. Cambridge, MA: The MIT Press.
Eliasmith, C., & Kolbeck, C. (2015). Marr's attacks: On reductionism and vagueness. Topics in Cognitive Science, 7, 323–335. https://doi.org/10.1111/tops.12133
Fajen, B. R. (2013). Guiding locomotion in complex, dynamic environments. Frontiers in Behavioral Neuroscience: Individual and Social Behaviors, 7(85). https://doi.org/10.3389/fnbeh.2013.00085
Favela, L. H. (2014). Radical embodied cognitive neuroscience: Addressing "grand challenges" of the mind sciences. Frontiers in Human Neuroscience, 8(796), 1–10. https://doi.org/10.3389/fnhum.2014.00796
Favela, L. H. (2019). Integrated information theory as a complexity science approach to consciousness. Journal of Consciousness Studies, 26(1–2), 21–47.
Favela, L. H. (2021). The dynamical renaissance in neuroscience. Synthese, 199(1–2), 2103–2127. https://doi.org/10.1007/s11229-020-02874-y
Favela, L. H., & Chemero, A. (2016). An ecological account of visual "illusions." Florida Philosophical Review, 16(1), 68–93.
Favela, L. H., Coey, C. A., Griff, E. R., & Richardson, M. J. (2016).
Fractal analysis reveals subclasses of neurons and suggests an explanation of their spontaneous activity. Neuroscience Letters, 626, 54–58. https://doi.org/10.1016/j.neulet.2016.05.017
Favela, L. H., Riley, M. A., Shockley, K., & Chemero, A. (2018). Perceptually equivalent judgments made visually and via haptic sensory-substitution devices. Ecological Psychology, 30(4), 326–345. https://doi.org/10.1080/10407413.2018.1473712


Fodor, J. A. (1974). Special sciences (or: The disunity of science as a working hypothesis). Synthese, 28, 97–115. https://doi.org/10.1007/BF00485230
Francescotti, R. (2014). Physicalism and the mind. New York, NY: Springer.
Frey, S. H., & Grafton, S. T. (2014). Finding the actor in reciprocal affordance. In M. S. Gazzaniga & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 513–520). Cambridge, MA: The MIT Press.
Friston, K. J., Shiner, T., FitzGerald, T., Galea, J. M., Adams, R., Brown, H., . . . Bestmann, S. (2012). Dopamine, affordance and active inference. PLoS Computational Biology, 8(1), e1002327. https://doi.org/10.1371/journal.pcbi.1002327
Gage, N. M., & Baars, B. J. (2018). Fundamentals of cognitive neuroscience: A beginner's guide (2nd ed.). San Diego, CA: Academic Press.
Gazzaniga, M. S., Ivry, R. B., & Mangun, G. R. (2014). Cognitive neuroscience: The biology of mind (4th ed.). New York, NY: W. W. Norton & Company Ltd.
Gibson, E. J. (1969). Principles of perceptual learning and development. New York, NY: Appleton-Century-Crofts.
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston, MA: Houghton Mifflin.
Gibson, J. J. (1986/2015). The ecological approach to visual perception (classic ed.). New York, NY: Psychology Press.
Gold, I., & Stoljar, D. (1999). A neuron doctrine in the philosophy of neuroscience. Behavioral and Brain Sciences, 22, 809–869. https://doi.org/10.1017/S0140525X99002198
Golgi, C. (1906/2021). The neuron doctrine: Theory and facts. Nobel Lecture (pp. 189–217). NobelPrize.org. Nobel Media AB 2021. Retrieved January 15, 2021 from www.nobelprize.org/prizes/medicine/1906/golgi/lecture/
Golonka, S., & Wilson, A. D. (2012). Gibson's ecological approach: A model for the benefits of theory driven psychology. AVANT, 3(2), 40–53.
Golonka, S., & Wilson, A. D. (2019). Ecological representations. Ecological Psychology, 31(3), 235–253. https://doi.org/10.1080/10407413.2019.1615224
Grimm, S. R., Baumberger, C., & Ammon, S. (Eds.). (2017). Explaining understanding: New perspectives from epistemology and philosophy of science. New York, NY: Routledge.
Heft, H., & Richardson, M. (2017). Ecological psychology. Oxford Bibliographies in Psychology. https://doi.org/10.1093/OBO/9780199828340-0072
Hull, D. L., Tessner, P. D., & Diamond, A. M. (1978). Planck's principle. Science, 202(4369), 717–723. https://doi.org/10.1126/science.202.4369.717
Izhikevich, E. M. (2007). Dynamical systems in neuroscience: The geometry of excitability and bursting. Cambridge, MA: The MIT Press.
Jeannerod, M. (1994). The representing brain: Neural correlates of motor intention and imagery. Behavioral and Brain Sciences, 17, 187–245. https://doi.org/10.1017/S0140525X00034026
Jordan, M. I., & Wolpert, D. M. (2000). Computational motor control. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 601–618). Cambridge, MA: The MIT Press.
Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., & Hudspeth, A. J. (Eds.). (2013). Principles of neural science (5th ed.). New York, NY: McGraw-Hill.
Kaplan, D. M., & Craver, C. F. (2011). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science, 78, 601–627. https://doi.org/10.1086/661755
Khalifa, K. (2018). Understanding, explanation, and scientific knowledge. New York, NY: Cambridge University Press.
Kitcher, P., & Salmon, W. C. (Eds.). (1989). Scientific explanation (Minnesota Studies in the Philosophy of Science, Vol. 13).
Minneapolis, MN: University of Minnesota Press.
Knill, D. C., & Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712–719. https://doi.org/10.1016/j.tins.2004.10.007
Kozma, R., Wang, J., & Zeng, Z. (2015). Neurodynamics. In J. Kacprzyk & W. Pedrycz (Eds.), Springer handbook of computational intelligence (pp. 607–648). Berlin: Springer.


Kugler, P. N., Kelso, J. A. S., & Turvey, M. T. (1980). On the concept of coordinative structures as dissipative structures: I: Theoretical lines of convergence. In G. E. Stelmach & J. Requin (Eds.), Tutorials in motor behavior (pp. 3–47). New York, NY: North-Holland Publishing Company.
Kuhn, T. S. (1962/1996). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press.
Lan, N., Niu, C. M., Hao, M., Chou, C. H., & Dai, C. (2019). Achieving neural compatibility with human sensorimotor control in prosthetic and therapeutic devices. IEEE Transactions on Medical Robotics and Bionics, 1(3), 122–134. https://doi.org/10.1109/TMRB.2019.2930356
Lobo, L., Heras-Escribano, M., & Travieso, D. (2018). The history and philosophy of ecological psychology. Frontiers in Psychology: Cognitive Science, 9(2228). https://doi.org/10.3389/fpsyg.2018.02228
Marr, D. (1982/2010). Vision: A computational investigation into the human representation and processing of visual information. Cambridge, MA: The MIT Press.
Michaels, C. F., & Palatinus, Z. (2014). A ten commandments for ecological psychology. In L. Shapiro (Ed.), The Routledge handbook of embodied cognition (pp. 19–28). New York, NY: Routledge.
Piccinini, G., & Craver, C. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183, 283–311. https://doi.org/10.1007/s11229-011-9898-4
Piccinini, G., & Shagrir, O. (2014). Foundations of computational neuroscience. Current Opinion in Neurobiology, 25, 25–30. https://doi.org/10.1016/j.conb.2013.10.005
Planck, M. (1950). Scientific autobiography and other papers. London, UK: Williams & Norgate Ltd.
Plenz, D., & Niebur, E. (Eds.). (2014). Criticality in neural systems. Weinheim, Germany: Wiley-VCH.
Polger, T. W., & Shapiro, L. A. (2016). The multiple realization book. New York, NY: Oxford University Press.
Popiel, N. J., Khajehabdollahi, S., Abeyasinghe, P. M., Riganello, F., Nichols, E., Owen, A. M., & Soddu, A. (2020). The emergence of integrated information, complexity, and "consciousness" at criticality. Entropy, 22(3), 339. https://doi.org/10.3390/e22030339
Postle, B. R. (2020). Essentials of cognitive neuroscience (2nd ed.). Hoboken, NJ: Wiley.
Putnam, H. (1975). The nature of mental states. In Mind, language, and reality: Philosophical papers (Vol. 2, pp. 429–440). Cambridge, UK: Cambridge University Press.
Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3, 111–169. https://doi.org/10.1017/S0140525X00002053
Raja, V. (2018). A theory of resonance: Towards an ecological cognitive architecture. Minds and Machines, 28(1), 29–51. https://doi.org/10.1007/s11023-017-9431-8
Raja, V., & Anderson, M. L. (2019). Radical embodied cognitive neuroscience. Ecological Psychology, 31, 166–181. https://doi.org/10.1080/10407413.2019.1615213
Ramstead, M. J. D., Badcock, P. B., & Friston, K. J. (2018). Answering Schrödinger's question: A free-energy formulation. Physics of Life Reviews, 24, 1–16. https://doi.org/10.1016/j.plrev.2017.09.001
Reid, R. C., & Usrey, W. M. (2013). Vision. In L. R. Squire, D. Berg, F. E. Bloom, S. Du Lac, A. Ghosh, & N. C. Spitzer (Eds.), Fundamentals of neuroscience (4th ed., pp. 577–598). Waltham, MA: Academic Press.
Richardson, M. J., & Chemero, A. (2014). Complex dynamical systems and embodiment. In L. Shapiro (Ed.), The Routledge handbook of embodied cognition (pp. 39–50). New York, NY: Routledge.
Richardson, M. J., Shockley, K., Fajen, B.
R., Riley, M. R., & Turvey, M. T. (2008). Ecological psychology: Six principles for an embodied-embedded approach to behavior. In R. Calvo & T. Gomila (Eds.), Handbook of cognitive science: An embodied approach (pp. 161–187). Amsterdam: Elsevier Science.
Riley, M. A., & Holden, J. G. (2012). Dynamics of cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 3, 593–606. https://doi.org/10.1002/wcs.1200
Rosazza, C., Aquino, D., D'Incerti, L., Cordella, R., Andronache, A., Zacà, D., . . . Minati, L. (2014). Preoperative mapping of the sensorimotor cortex: Comparative assessment of task-based and resting-state fMRI. PLoS One, 9(6), e98860. https://doi.org/10.1371/journal.pone.0098860


Ross, L. N. (2015). Dynamical models and explanation in neuroscience. Philosophy of Science, 82, 32–54. https://doi.org/10.1086/679038
Shoemaker, S. (2007). Physical realization. New York, NY: Oxford University Press.
Silberstein, M. (2021). Constraints on localization and decomposition as explanatory strategies in the biological sciences 2.0. In F. Calzavarini & M. Viola (Eds.), Neural mechanisms: New challenges in the philosophy of neuroscience (pp. 363–393). Cham, Switzerland: Springer.
Society for Neuroscience. (2021). Themes and topics. SfN Global Connectome. Retrieved January 15, 2021 from www.sfn.org/meetings/virtual-events/sfn-global-connectome-a-virtual-event/abstracts/themes-and-topics
Sporns, O. (2011). Networks of the brain. Cambridge, MA: The MIT Press.
Stephen, D. G., & Van Orden, G. (2012). Searching for general principles in cognitive performance: Reply to commentators. Topics in Cognitive Science, 4(1), 94–102. https://doi.org/10.1111/j.1756-8765.2011.01171.x
Stepp, N., Chemero, A., & Turvey, M. T. (2011). Philosophy for the rest of cognitive science. Topics in Cognitive Science, 3, 425–437. https://doi.org/10.1111/j.1756-8765.2011.01143.x
Stoffregen, T. A. (2003). Affordances as properties of the animal-environment system. Ecological Psychology, 15, 115–134. https://doi.org/10.1207/S15326969ECO1502_2
Stoljar, D. (2017). Physicalism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (winter 2017 ed.). Stanford, CA: Stanford University. Retrieved January 14, 2021 from https://plato.stanford.edu/archives/win2017/entries/physicalism/
Thagard, P. (2020). Cognitive science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (winter 2020 ed.). Stanford, CA: Stanford University. Retrieved January 15, 2021 from https://plato.stanford.edu/archives/win2020/entries/cognitive-science/
Timme, N. M., Marshall, N. J., Bennett, N., Ripp, M., Lautzenhiser, E., & Beggs, J. M. (2016). Criticality maximizes complexity in neural tissue. Frontiers in Physiology: Fractal and Network Physiology, 7(425). https://doi.org/10.3389/fphys.2016.00425
Trappenberg, T. P. (2014). Fundamentals of computational neuroscience (2nd ed.). New York, NY: Oxford University Press.
Turvey, M. T. (2019). Lectures on perception: An ecological perspective. New York, NY: Routledge.
Turvey, M. T., & Carello, C. (1981). Cognition: The view from ecological realism. Cognition, 10(1–3), 313–321. https://doi.org/10.1016/0010-0277(81)90063-9
Turvey, M. T., Shaw, R. E., Reed, E. S., & Mace, W. M. (1981). Ecological laws of perceiving and acting: In reply to Fodor and Pylyshyn (1981). Cognition, 9, 237–304. https://doi.org/10.1016/0010-0277(81)90002-0
van der Meer, A. L. H., & van der Weel, F. R. R. (2020). The optical information for self-perception in development. In J. B. Wagman & J. J. C. Blau (Eds.), Perception as information detection: Reflections on Gibson's ecological approach to visual perception (pp. 110–129). New York, NY: Routledge.
van Dijk, L., & Myin, E. (2019). Ecological neuroscience: From reduction to proliferation of our resources. Ecological Psychology, 31(3), 254–268. https://doi.org/10.1080/10407413.2019.1615221
Van Orden, G. C., Holden, J. G., & Turvey, M. T. (2005). Human cognition and 1/f scaling. Journal of Experimental Psychology: General, 134(1), 117–123. https://doi.org/10.1037/0096-3445.134.1.117
Van Orden, G., Hollis, G., & Wallot, S. (2012). The blue-collar brain. Frontiers in Physiology: Fractal and Network Physiology, 3(207).
https://doi.org/10.3389/fphys.2012.00207
Von Eckardt, B. (1995). What is cognitive science? Cambridge, MA: The MIT Press.
Warren, Jr., W. H., & Whang, S. (1987). Visual guidance of walking through apertures: Body-scaled information for affordances. Journal of Experimental Psychology: Human Perception and Performance, 13, 371–383. https://doi.org/10.1037/0096-1523.13.3.371
Westwood, D. A. (2010). Visually guided actions. In E. B. Goldstein (Ed.), Encyclopedia of perception (Vol. 1–2, pp. 1089–1092). Thousand Oaks, CA: SAGE.


Woodward, J. (2019). Scientific explanation. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (winter 2019 ed.). Stanford, CA: Stanford University. https://plato.stanford.edu/archives/win2019/entries/scientific-explanation/
Yaghoubi, M., de Graaf, T., Orlandi, J. G., Girotto, F., Colicos, M. A., & Davidsen, J. (2018). Neuronal avalanche dynamics indicates different universality classes in neuronal cultures. Scientific Reports, 8(3417). https://doi.org/10.1038/s41598-018-21730-1
Zhu, F., Cizeron, M., Qiu, Z., Benavides-Piccione, R., Kopanitsa, M. V., Skene, N. G., . . . Grant, S. G. (2018). Architecture of the mouse brain synaptome. Neuron, 99(4), 781–799. https://doi.org/10.1016/j.neuron.2018.07.007

2 Why "ecological" psychology?

The past is a foreign country: they do things differently there. (Hartley, 1953, p. 9)

2.1 From computers to environments

One of the aims of Chapter 1 was to begin to motivate the claim that ecological psychology and neuroscience are irreconcilable. Doing so required providing overviews of some of the concepts, methods, and theories that constitute their conflicting investigative frameworks. When applied to the case of visually-guided action, the frameworks demonstrate apparently incommensurable explanations. How did these two sciences come to have such radically different understandings of the same—at least presumably—targets of inquiry? Chapters 2 and 3 answer that question by way of a little bit of history. The current chapter (Chapter 2) presents the development of ecological psychology as, in many ways, a reaction against the increasingly dominant cognitivism that emerged from the cognitive revolution. Once that cognitivist approach is grasped, it is easier to understand why an ecological alternative was thought necessary. As will be discussed in Chapter 3, the core features of cognitivism that Gibsonians rallied against in psychology would come to be central to neuroscience as well.

2.1.1 Why "cognitive" psychology?

Appreciating the motivations for an ecological approach to psychology requires answering another question first: Why "cognitive" psychology? Here, I do not limit discussion to cognitive psychology as a specific field. The question refers to the subfields of psychology that are committed to a broadly cognitivist set of commitments. This includes much cognitive psychology research, but also certain forms of developmental, evolutionary, and experimental psychology, among others. Cognitive science, which many understand to overlap considerably with cognitive psychology, is also included here. Yet whereas cognitive psychology relies more on human-subjects research methodology, cognitive science appeals more to computational modeling and simulation (Anderson, 2015; Thagard, 2020). By "cognitivism/cognitivist," I mean the approach to understanding psychology's targets of investigation in terms of information processing. Whereas I could have used the simpler wording of "cognition," I am purposely vague with the phrase "psychology's targets of investigation" in order to be inclusive of common definitions of cognition and also behavior, brain states, mental states, and perception, which are treated by some—but not all—as features of cognition or its physical realization.


Ulric Neisser (1967/2014) introduced the core elements of cognitivism: constructive processing and information processing.1 Constructive processing refers to the reconstruction or transformation of stimuli (i.e., from outside or within the system) into units for the cognitive system to act on. In more contemporary parlance, these units are called representations. Information processing refers to the ways those units are manipulated in order to produce behaviors, memories, perceptions, etc. In more contemporary parlance, these manipulations are called computations. This is, of course, a simplistic description of Neisser's approach. For example, the two processes are not always distinct, as information processing can be a form of constructive processing and vice versa. Whether these two processes are distinct, whether one is more important, or whether one reduces to the other are questions that continue to be debated in terms of computations and representations (e.g., "no computation without representation;" Fodor, 1975, p. 34). The outcomes of these debates are not essential for my current purposes. Moving forward, I will refer to cognitivism as being committed to an information-processing conception of psychology's targets of investigation, which understands phenomena like behavior, brain states, mental states, and perception in terms of computations operating on (or, acting on, manipulating, performing over, etc.) representations (e.g., Piccinini, 2020; Pylyshyn, 1980; Rescorla, 2020; Thagard, 2005).

Returning to the question that started this section: Why a "cognitive" psychology? That is to say, why did cognitivism become psychology's core paradigm? Cognitivism emerged from the cognitive revolution of the mid-1900s. As George Miller (2003) claims, the cognitive revolution was actually a counter-revolution. The prior revolution centered on reconceptualizing psychology as the science of behavior. This behaviorist approach became increasingly influential in psychology—especially in the United States—during the latter part of the 1800s and early 1900s, with research by Ivan Pavlov, John B. Watson, and B. F. Skinner, among others (Hergenhahn & Henley, 2014). Granting there are various kinds of "behaviorism" (e.g., analytical/logical and methodological), psychological behaviorism is most relevant here. As a scientific doctrine, psychological behaviorism can be understood as committed to the following three primary claims (paraphrased from Graham, 2019)2:

1. "Psychology is the science of behavior. Psychology is not the science of mind," where "mind" refers to something that is not behavior, like mental states.
2. Behavior can be investigated and explained without necessitating reference to mind, that is, to internal processes like mental states. The sources of behavior are external/environmental and not internal/mental.
3. If mental-state terms are used during investigations or explanations of psychological phenomena, then they should be replaced by behavioral terms or translated into behavioral concepts.

It is uncontroversial to identify the cognitive revolution as a reaction against psychology's scientific framework being guided by behaviorism's primary claims.

1 As with many topics discussed in this book, the history and sources of cognitivism and cognitive psychology are debatable. Some argue it goes as far back as Thomas Hobbes in the 1600s (e.g., Stillings et al., 1995) and others to contemporaries of Neisser such as John von Neumann (1958/2012).
2 Psychological behaviorism is described by others as having different numbers of commitments, such as Boden, who lists six assumptions (2006, pp. 238–239).


While there are various interpretations of its primary causes, the cognitive revolution can be understood as having three main influences (Anderson, 2015; cf. Abrahamsen & Bechtel, 2012; Boden, 2006; Miller, 2003). First was Claude Shannon and Warren Weaver's information theory (Shannon & Weaver, 1949/1964), which provided psychologists both with a direct way to formalize and analyze their data and with inspiration for other formal methods.3 The motivation for such formalisms originated during World War II and the demands to quantify performance in order to improve training of soldiers. Paul Fitts' work on the speed-accuracy tradeoff was an early attempt at incorporating information-theoretic measures with the aim of understanding human performance in terms of information processing (Fitts, 1954), such as pilots' control of instruments during stressful situations (Georgopoulos, 1986). Models based on Fitts' law were seen as providing descriptions and explanations of human performance that behaviorist approaches could not.

The second main influence was research in artificial intelligence. The name "artificial intelligence" (AI) originates around 1955 with computer and cognitive scientist John McCarthy and colleagues (McCarthy, Minsky, Rochester, & Shannon, 2006; Wooldridge, 2021). In many ways, the modern form of AI originated with Alan Turing's pioneering work on the theoretical foundations of computers (Turing, 1950), which was further developed and formalized by people such as Alonzo Church, Allen Newell, Herbert Simon, and many others (Boden, 2006; Buchanan, 2005). Since its earliest years, AI research has been broadly approached in one of two ways (Wooldridge, 2021). The first is the symbolic AI strategy, which aims to base AI on the mind (e.g., McCarthy and Turing)—that is to say, identify types of mental processes (e.g., problem solving) and formalize them (e.g., logic and rules) to be implemented in AI systems. The second is the neural networks strategy (e.g., McCulloch & Pitts, 1943), which aims to base AI on systems we already know are intelligent, specifically, brains. Both research strategies provided the foundations and inspiration for understanding the "internal" architecture of cognitive systems (i.e., mental states; Figure 2.1). Cognitive architectures are those general frameworks that illustrate the various stages and relationships involved in the information processing of a system's various capacities, such as memory and perception (Reed, 2012). There are currently estimated to be over 300 such cognitive architectures (Kotseruba & Tsotsos, 2020). Examples include ACT-R (Adaptive Control of Thought-Rational; Anderson et al., 2004), CLARION (Connectionist Learning with Adaptive Rule Induction On-line; Sun, Merrill, & Peterson, 2001), LIDA (Learning Intelligent Distribution Agent; Franklin & Patterson, 2006), and Spaun (Semantic Pointer Architecture Unified Network; Eliasmith et al., 2012). Strictly speaking, in terms of their historical trajectory, the relationship between AI and cognitive architectures is not as straightforward as the former providing the foundations and theory for the latter. In practice, the relationship can reasonably be understood as one of mutual constraint. Investigators take inspiration from AI research to develop a cognitive architecture to explain human cognitive performance, such as parsing sentences. Then, the cognitive architecture should be able to be implemented in an AI.
3 An interesting discrepancy is worth noting here. Whereas Anderson states that the application of Shannon and Weaver information theory in psychology was developed by George Miller, among others, and "such analyses soon pervaded all of cognitive psychology" (2015, p. 8), Miller himself states that he "personally became frustrated in my attempts to apply Claude Shannon's theory of information to psychology" (Miller, 2003, p. 141). Regardless, I think it is safe to say that information theory and other formal methods were indeed an influence on the cognitive revolution regardless of who claims to have utilized them or not.
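For readers who want the quantitative flavor of the Fitts (1954) work discussed above, the relation is standardly written as follows. This formulation is a textbook gloss rather than a quotation from the chapter; the symbols and coefficients are generic.

```latex
\mathrm{ID} = \log_{2}\!\left(\frac{2D}{W}\right)\ \text{bits}, \qquad \mathrm{MT} = a + b\,\mathrm{ID}
```

Here D is the distance (amplitude) of the aimed movement, W is the width of the target, MT is the movement time, and a and b are empirically fitted constants. The logarithmic "index of difficulty" is what invited the reading of aimed movement as transmitting information at a measurable rate, which is precisely the information-processing framing the cognitive revolution embraced.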


Figure 2.1 Cognitive architectures. The hypothesized architectures of cognitive systems can be depicted as abstractly as labeled boxes and process arrows (a and b, bottom) or identified with neurophysiological regions (b, top). (a) ACT (Adaptive Control of Thought; Anderson et al., 2004). The ACT cognitive architecture has gone through various modifications since this early version, with the latest version being ACT-R 7.0 (2013). (b) Spaun (Semantic Pointer Architecture Unified Network; Eliasmith et al., 2012). Spaun is a large-scale cognitive neural model (2.3 million spiking neurons), consisting of multiple modules that implement various operations (Stewart, Choo, & Eliasmith, 2012). Source: (a) Recreated and based on Figure 1.2 in Anderson, 1983; (b) Modified and reprinted with permission from Eliasmith (2013). Copyright public domain.


This implementation is then tested to see if the AI can parse sentences as well as humans. If there are discrepancies between the AI-implemented cognitive architecture and human performance, then the cognitive architecture can be adjusted and reimplemented in an AI system for further testing.4 Since their earliest days, research in both AI and cognitive architectures has fallen along two dimensions: One axis captures the degree of abstraction—e.g., treating cognition in terms of formal rules (symbolic AI strategy), or the "top-down" approach. The other axis captures the degree of biological inspiration or realism—e.g., attempting to account for cognition by focusing on the biological systems that realize it (neural networks strategy), or the "bottom-up" approach (Vernon, 2014). For the current discussion, the main point is that research in both AI and cognitive architectures was thought to provide tools for opening the black box of the brain/mind and explaining its operations and contributions to psychology's targets of investigation in ways that behaviorism could not.

Linguistics was the third main influence, especially the theory of Universal Grammar (UG). Primarily attributed to the work of Noam Chomsky (1965/2015), UG is a theory centering on the claim that humans are born with an innate (e.g., genetic) capacity for language (Pesetsky, 1999; cf. Evans & Levinson, 2009; Everett, 2012). An ("Hegelian"5) argument presented in favor of UG is what has come to be known as the "poverty of stimulus" (PoS) argument (Berwick, Pietroski, Yankama, & Chomsky, 2011; Marcus, 1999). Regarding human-language acquisition, the PoS argument deduces that children must have an innate language capacity because they learn language in a manner not dependent on stimulus, such as listening to their parents' speech. Language could not be learned primarily by listening alone because that stimulus is purported to be impoverished. That is to say, listening to people speak does not provide enough information for children to learn how to use language in the seemingly infinite variety of ways that humans do. Chomsky and UG played an important role in the cognitive revolution in at least two ways. One is that, like AI, UG provided a way to think about the internal/mental aspect of language beyond just external behaviors, such as speaking and listening. The other way is related and was the direct assault Chomsky made on behaviorism by way of a critical assessment of Skinner's book on explaining language via the functional analysis of verbal behavior (Chomsky, 1959). In an application of the PoS argument, Chomsky claimed that internal, mental operations were needed to effectively explain phenomena like language. Accordingly, stimulus-response and learning alone could not do the work.

Taken together, these three main influences—information theory, AI, and linguistics—provided the sparks that lit the fire of the cognitive revolution against behaviorism. But what about the question that started this discussion? We are now in a position to answer the question: Why "cognitive" psychology? The appeal of the cognitivism that emerged from the cognitive revolution can be viewed as stemming from two crucial conclusions. First, cognitivism was believed to provide convincing arguments against behaviorism. For example, the empirical successes of research that incorporated and was influenced by information-theoretic methods (e.g., Fitts' law) and the rational successes of arguments from linguists (e.g., PoS).
Second, many started to believe that cognitivism provided a superior alternative to behaviorism. For example, AI research (e.g., cognitive architectures) and linguistics (e.g., UG) provided concepts and theories that could open and shine light in the black box of the mind.

4 I owe this point about mutual constraint to comments from Tony Chemero.
5 Chemero (2009, pp. 4–15) defines "Hegelian arguments" as those empirical propositions (e.g., the number of planets in our solar system) that are ruled out as false based on logical necessity. Chomsky's Universal Grammar is discussed as a highly influential Hegelian argument about the nature of language.


Such apparently damning conclusions bolstered those already critical of behaviorism. Critics had more support for viewing behaviorism as avoiding, ignoring, or being ignorant of compelling reasons and evidence to think (no pun intended) that the brain/mind was not just a black box irrelevant to explaining and understanding psychology's targets of investigation. Examples include functional localization (e.g., the case of Phineas Gage; Damasio, 1994), introspection (Titchener, 1901/1922), and mental imagery (Anderson, 1978), to name a few. Now that it is clearer why psychology (and cognitive science) embraced cognitivism, we can return to the question that started this chapter: Why "ecological" psychology?

2.1.2 Why "ecological" psychology?

There should be no doubt that the cognitivism that emerged from the cognitive revolution is one of the most influential investigative frameworks in the psychological sciences. As discussed in the previous section, there seem to be a number of strong reasons to embrace cognitivism, especially as an alternative to behaviorism. For example, it leverages ways to formalize (e.g., formal logic and rules), mathematically quantify (e.g., information theory), and model targets of inquiry in sophisticated ways. In addition, cognitivism utilizes concepts (e.g., computations and representations) that are meant to allow investigators to open the black box of the mind and begin to explain and understand its structures, functions, and contributions to behavior. Behaviorism seemed to be in no way comparable to such investigative or explanatory power. What is more, cognitivism—in one form or another, e.g., the classical computational theory of mind (Rescorla, 2020)—became a central framework outside the psychological sciences as well, such as education (e.g., learning theory), economics (e.g., prospect theory), and philosophy of mind (e.g., functionalism); as well as neuroscience, which will be discussed in detail in Chapter 3. With all that going for cognitivism, why conduct psychological research any other way?

With regard to the current discussion, there are two notable responses to that question. First, though influential, cognitivism has never been the only game in town.6 While the commonly held narrative is that behaviorism was replaced by cognitivism, the fact is that behaviorism was not eliminated and continues to be practiced in various forms to this day (for discussion of several aspects of behaviorism's continued practice during and after the cognitive revolution see Barrett, 2012, 2016; Chemero, 2003b; Staddon, 2014; Uttal, 2004; Watrin & Darwich, 2012). Limiting the discussion to research on perception and action, other investigative frameworks were practiced in the psychological sciences in addition to behaviorism and cognitivism. These included Gestalt psychology, various forms of psychobiology, and contemporary descendants of Jamesian functionalism, such as evolutionary approaches (Hergenhahn & Henley, 2014; Käufer & Chemero, 2015). Along these lines, it is false, historically speaking, to view cognitivism as ever having been psychology's paradigm, especially to the exclusion of others. As presented earlier, cognitivism seemed like a vast improvement over behaviorism and a compelling theory of mind in its own right. Nevertheless, cognitivism has been viewed as quite flawed from its beginnings, which brings us to the second response. Part of the reason cognitivism was not the only game in town—and continues not to be—was due to criticisms that led some to view it as a fatally flawed investigative framework, especially for particular targets of inquiry (e.g., perception).

6 Actually, some (e.g., Hatfield, 2002) have argued that cognitivism, especially of the symbol-processing kind (e.g., Fodor, 1975; Pylyshyn, 1980), never caught on with psychologists.


It is at this point that we can begin to answer the question: Why "ecological" psychology? My aim here is not to provide a comprehensive overview of ecological psychology. I refer readers to several other excellent introductions (Chemero, 2009; Gibson, 1986/2015; Heft, 2001/2016; Lobo, Heras-Escribano, & Travieso, 2018; Michaels & Palatinus, 2014; Reed, 1996; Richardson, Shockley, Fajen, Riley, & Turvey, 2008; Turvey, 2019). In this section, my aim is to motivate reasons for doing psychology ecologically. Doing so will demonstrate how antithetical ecological psychology is to cognitivism and, in turn, why ecological psychology has been viewed as irreconcilable with neuroscience.

It is worth stating straightaway that, in terms of history, ecological psychology has always been one of the games in town, as it was developed and practiced contemporaneously with cognitivism. Both ecological psychology and cognitivism find their origins in the 1950s and 1960s and really came into their own as rich investigative frameworks in the 1970s and 1980s, with continuing developments and refinements to this day. A historical point that I think is underappreciated is that ecological psychology and cognitivism not only originated in the same decades, but the event that encouraged their developments was the same, namely, World War II. Starting with cognitivism: First, as stated in the previous section, a major methodological advancement leveraged by cognitivists was the use of formalisms and mathematical tools such as Shannon and Weaver's (1949/1964) information theory. The roots of the information-theoretic formalisms utilized by psychologists are found in Shannon's cryptography research during World War II (Rogers, 1994), with early applications found in Fitts' (1954) research on the speed-accuracy tradeoff. Second, Turing's research on the foundations of computation—essential to the burgeoning field of AI—was both directly and indirectly influenced by his cryptography work during World War II (Boden, 2006; Turing, 1948/2004).

Similarly, World War II provided much of the impetus for ecological psychology's theory. As introduced in Chapter 1, ecological psychology is a research program and theoretical approach to perception and action originally developed by James J. Gibson in the 1950s and 1960s (e.g., Gibson, 1966). Though originally a framework for experimental psychology (especially perceptual psychology), ecological psychology would soon be employed in developmental psychology as well (e.g., Gibson, 1969). James Gibson became interested in the ecological nature of perception via work he did during World War II. As a member of the Army Air Force, Gibson conducted research to develop programs for selecting and training air crew personnel, such as bombardiers and pilots (Gibson, 2002). Gibson's experience developing these programs led him to reject the accepted "air theory" of perception. As a theory of visual perception, the "air theory" holds that perception occurs within an abstract, Cartesian three-dimensional space and the objects of perception are experienced as existing as points within that space (Gibson, 1986/2015; Warren, 2020). Moreover, the air theory treats visual perception as primarily derived from the retinal image.
Accordingly, if the air theory is true, then visual perception involves inferring three-dimensional features of the world (e.g., depth and size) based on a two-dimensional image created by retinal stimulation (Figure 2.2). As the example in Figure 2.2 demonstrates, it seems doubtful that the retinal image alone could be sufficient for informing visually-guided action, as the size and distance of objects must be known in order to, for example, navigate without colliding. As a result, if the air theory, or others like it, are to be salvaged, then additional processes must be involved in order to bolster what is provided by the retinal image. Gibson concluded that "an extra process of inference or construction" must be involved in visual perception (Gibson, 1986/2015, p. 141; see


Figure 2.2 Perception primarily based on retinal image. If visual perception is based primarily on reconstructions derived from the retina (e.g., mental representations), then a palm tree that is large and far away (a) and a palm tree that is small and closer (b) would be perceived as being the same size (c). Note also that the image cast on the retina is a concave version of the original. Source: Eye and palm tree images in the public domain, license CC0 1.0.
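To make the ambiguity depicted in Figure 2.2 concrete (the numbers here are illustrative and not taken from the text), note that an object of height h at viewing distance d subtends a visual angle of approximately

```latex
\theta \approx 2\arctan\!\left(\frac{h}{2d}\right)
```

so a 10 m palm tree at 100 m and a 1 m palm tree at 10 m both subtend roughly 5.7 degrees. The retinal image alone therefore leaves size and distance underdetermined, which is exactly why air-theory accounts must appeal to additional inferential processes.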

Neisser, 1967/2014 for an example of such "constructive processes").7 Such inferences can take the form of rules the visual system has stored. For example, something along the lines (no pun intended) of the linear perspective rule, namely, when two lines are apart at the bottom of your visual field and closer together at the top of your visual field, then that indicates depth (a basic technique when learning to draw). Unfortunately for the air theory of perception, Gibson found that evaluations based on it were unsuccessful in predicting student pilot performance and failed to inform fruitful training protocols. Consequently, he concluded that the "accepted theory of depth perception did not work" (1986/2015, p. 140). As an alternative, Gibson developed the "ground theory" of perception (1950). He proposed that visual perception is not properly understood in terms of an observer positioned within absolute three-dimensional space (i.e., the "air") making calculations about an object located somewhere in that same three-dimensional space (e.g., the distance between them; Figure 2.3a). Instead, the perception of objects is informed by the background (i.e., the "ground") of those objects (1986/2015, p. 140).

7 It is important to make clear that Neisser was not necessarily a critic of Gibson or ecological psychology. Moreover, he was aware of serious limitations faced by cognitivist approaches. Uttal (2003, p. 36) quotes Neisser as stating: "Gibson's view has certain striking advantages over the traditional [cognitivist] one. The organism is not thought of as buffeted about by stimuli, but rather as attuned to properties of its environment that are objectively present, accurately specified, and veridically perceived." Uttal then comments that "Neisser, however, did not reject or ignore cognitive processing altogether in the extreme manner that Gibson did. Rather his ultimate goal was to find a middle ground in which the two views could be reconciled" (2003, p. 36).


Figure 2.3 Air theory and ground theory of perception. (a) According to the air theory, visual perception involves an observer in an absolute three-dimensional space making calculations about an object in that same three-dimensional space. (b) According to the ground theory, visual perception does not involve calculating points in space but perceiving the layout of surfaces. For example, the distance to an airport runway and its length are revealed by the movement of the ground surface and the horizon ratio. Source: Airport runway and XYZ coordinate image in the public domain.

In other words, perception is not of points in space but of the layout of surfaces (Figure 2.3b). Features of the layout of surfaces that are informative for visual perception include recession along the ground to reveal depth and the amount of ground covered to reveal size, as with motion parallax (Gibson, 1986/2015, pp. 69, 152). Ground theory provided the foundation for Gibson's ecological approach to perception in a couple of key ways (Heft & Richardson, 2017; Warren, 2020). One way it was foundational was that it demonstrated the sense in which perception is direct. For Gibson and future ecological psychologists, direct perception is the claim that perception is "not mediated by retinal pictures, neural pictures, or mental pictures" (Gibson, 1986/2015, p. 139; italics in original). In a word, it is antirepresentational. As other ecological psychologists have put it, direct perception accepts the "richness of perceptual experience," but that richness is not a "cognitive process" (i.e., mental representations) because it is "the richness of the stimulation" (Michaels & Carello, 1981, p. 9; italics in original). Examples of rich stimulation include features of surface layouts that are informative enough to guide action, such as motion parallax revealing distance and object size.
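As a simplified, textbook-style illustration of how the layout of surfaces can be informative in this way (this gloss is mine, not the author's), consider motion parallax for an observer translating at speed v: a surface point at distance d, lying at angle theta from the direction of travel, sweeps across the optic array at an angular rate of roughly

```latex
\omega \approx \frac{v}{d}\,\sin\theta
```

For a given speed and direction of travel, nearer points flow faster than farther ones, so the gradient of flow across the ground surface itself carries information about relative distance, without appeal to stored rules or reconstructed retinal snapshots.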


Another way that ground theory was foundational was by supplying the underpinnings for Gibson's theory of ecological information. Within ecological psychology, the word 'information' is not used in any sense comparable to its use in cognitivism. Ecological information does not refer to Shannon and Weaver (1949/1964) information or measures of uncertainty (i.e., entropy) involved in communication between a signal and receiver (e.g., Gallistel & King, 2010; cf. Ross & Favela, 2019). More significantly, ecological information is not comparable to the notion of "information processing" at the heart of many cognitivist approaches. In particular, it does not involve any sense of "computation" or "representation" as being the defining nature of cognition or perception. In ecological psychology, "information" is the basis of meaningful interactions between an organism and its environment. Ecological information refers to "those patterns that uniquely specify properties of the world" (Richardson et al., 2008, p. 177; see also Gibson, 1986/2015, p. 67; Michaels & Carello, 1981). Ecological information does not refer to the kind of "stimulus information" invoked in the following statement, "Information from the world enters my perceptual system and is processed,"8 which would be the kind of information invoked in information-processing explanations of visual perception based on the retinal image (Figure 2.2; e.g., Edelman, 1999; Marr, 1982/2010; Pylyshyn, 1980).

Ecological information refers to the distributions of energy that surround an organism. They are higher-order properties in that they are necessarily spatiotemporal in nature; that is to say, an organism perceives them in their surrounding space over time (Chemero, 2009, p. 122; Fajen, Riley, & Turvey, 2008; Gibson, 1966). Examples of such higher-order environmental information include ambient light (Figure 2.4a) and optic flow (Figure 2.4b). Though there may be certain energies (e.g., ambient light) in an environment, they are informative in relation to the environmental properties and the organism doing the perceiving. Environmental properties are the characteristics of perceived surfaces and substances. For example, a surface that can reflect light has the property of high reflectance, and a substance that does not give when pressed has the property of hardness. Thus, ambient light will not be informative if it is diffused by fog, which will prevent it from reflecting from a surface that would otherwise play a role in guiding action. Additionally, ambient light is not informative for a star-nosed mole, which is blind, but it is informative for an eagle, which has excellent visual perception due to large pupils that minimize the diffraction of light.

Because of this relational nature between environmental energies and properties and the organisms that can or cannot perceive those energies, ecological information specifies meaningful features of the world for an organism. Specification refers to a correspondence relationship between an organism and the actions it can perform in an environment. Such relationships are invariant by way of their regularity (i.e., lawfulness). That is to say, given particular ecological information, environmental properties, and organism capabilities, there will always be the possibility of certain actions available. Those actions are understood in terms of affordances, which are perceivable opportunities for behavior (Chemero, 2003a, p. 182; Fajen et al., 2008, p. 79; Gibson, 1986/2015,

8 Buzsáki describes this as the “outside-in” strategy, which is “the dominant framework of mainstream neuroscience, which suggests that the brain’s task is to perceive and represent the world, process information, and decide how to respond” (2019, p. xiii).


Figure 2.4 Ecological information. (a) Optic array as perceived by an observer as they move. (b) Optic flow as perceived by an observer as they move. See also Figure 2.3b. Source: (a) Modified and reprinted with permission from Di Paolo (2020), based on Gibson (1986/2015). CC BY 4.0; (b) Modified and reprinted with permission from Wylie, Gutiérrez-Ibáñez, Gaede, Altshuler, and Iwaniuk (2018), based on Gibson (1986/2015). CC BY 4.0.


Examples of affordances abound, including

• avoid-able (e.g., if an obstacle can be dodged; Figure 1.2a)
• catch-able (e.g., catching a fly ball; Oudejans, Michaels, Bakker, & Dolne, 1996)
• cross-able (e.g., stepping over a gap in the ground; Burton, 1992)
• move-able (e.g., can an object be wielded; Shockley, Carello, & Turvey, 2004)
• pass-through-able (e.g., if an aperture can be walked through; Warren & Whang, 1987)
• reach-able (e.g., if a piece of fruit on a tree can be grasped; Figure 1.1a)
• sit-on-able (e.g., if a chair can be sat on; Mark, 1987)
• step-on-able (e.g., if stairs can be climbed; Warren, 1984).

Still, in what sense are affordances specified by ecological information? Consider the example of a dog named Dingo, who wishes to get to the other side of a wall that is too tall for him to jump over. As Dingo walks the perimeter of the wall during the day, he sees an opening in the wall—an aperture, if you will. Dingo was able to visually perceive that aperture due to the ecological information available (i.e., ambient light in his optic array) and his perceptual capacities (i.e., a visual system that can detect the wavelengths of the available ambient light). As Dingo walks toward the aperture, the structure of the optic array shifts (Figure 2.4a). Eventually, Dingo is facing the aperture. If Dingo can walk through the aperture, then that aperture affords pass-through-ability. In this scenario, the affordance pass-through-able is specified by the ecological information available to Dingo, namely, the optic array. Moreover, the affordance can only be specified if Dingo has the perceptual capacities to detect that form of ecological information. Such is the relational nature of affordances. As Gibson stated,

An affordance . . . points two ways, to the environment and to the observer. So does the information to specify an affordance . . . the information to specify the utilities of the environment is accompanied by the information to specify the observer himself, his body, legs, hands, and mouth. (Gibson, 1986/2015, pp. 132–133)

It is reasonable to understand Gibson as saying the following in the previous quote: Affordances are not explained in terms of just the features of the environment (e.g., light; perhaps as a behaviorist would emphasize) or the observer (e.g., visual system; perhaps as a cognitive psychologist would emphasize). Affordances are the result of the dynamic interplay between both. I will now summarize the main features of affordances before discussing the consequences such an understanding of perception entails.

9 There is a substantial amount of debate concerning the definition of "affordances." Here, I attempt to define affordances in as simple and innocent a way as I can, namely, as perceivable opportunities for behavior. To support this definition, I reference a range of authors and provide specific pages in their works that define affordances along these terms. Still, there are defenders of various definitions of affordances who may not be at all satisfied with my definition. I refer readers to other works that present differing treatments of affordances (e.g., Chong & Proctor, 2020; Franchak & Adolph, 2014; Greeno, 1994; Jones, 2003; Michaels, 2003; Stoffregen, 2003; Turvey, 1992).


To understand what affordances are and the role they play in ecological psychology, it is crucial to grasp the following: First, affordances are action-relevant, meaning they are always about what an organism can or cannot do in its environment. As Gibson puts it, "The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill" (1986/2015, p. 119; italics in original). Second, affordances are higher-order physical properties, meaning they are the perceivable combinatorial features of the environment, such as the haptically-experienced roughness of an object detected by a hand as it moves over its surface. As Gibson puts it, "the affordance of anything is a specific combination of the properties of its substance and its surface taken with reference to an animal" (Gibson, 1977, p. 67; italics in original). Third, affordances are relational. As Gibson stated in the earlier quote, affordances "point two ways," that is to say, affordances require contributions from both the environment and the organism.

The concept of affordances and their ontological status result in three radical consequences for the practice of psychology qua ecological psychology. First, the target of investigation in the psychological sciences is not the isolated organism that passively perceives stimuli from the environment. Instead, the target of investigation is the organism-environment system. Just as bird flight cannot be adequately explained without reference to the environment it occurs in (Figure 2.4b), so too the visually-guided action of a human cannot be adequately explained without including the features of its environment. A related consequence is the conceptualization of perception and action as continuous, that is, perception-action as a cycle, loop, or single system. Accordingly, perception is understood as informing action and actions as guiding perception.

Second, perception-action generates lawful regularities in organism-environment systems, that is, ecological laws (Gibson, 1962, p. 479; Raja, 2019; Warren, 2021, p. 3). Ecological laws are those patterns, or regularities, that emerge as an organism's actions are informed and guided by ecological information, such as the information in the optic array generated during walking. Such ecological information lawfully specifies the affordances available to an organism. This lawful regularity in the specification of affordances is the logical result of organisms' perceptual capacities converging on higher-order variables "that bear invariant relations over a wide range of conditions" (Warren, 2006, p. 382). Illustrative examples of invariant relations include the aperture-to-shoulder ratio involved in passing through apertures (Favela, Riley, Shockley, & Chemero, 2018; Warren & Whang, 1987) and the riser height-to-leg length ratio involved in climbing stairs (Warren, 1984). A related consequence is the development of new methodologies for describing and assessing such regularities. Experiments in psychology are commonly understood as evaluating participants' task performance in terms of absolute units of measurement (e.g., Euclidean distance; Figure 2.3a). Instead of such absolute, context-free units of measurement, ecological psychologists have developed measures based on action-scaled or body-scaled metrics. When evaluating affordances, a participant's task performance is assessed in terms of action-scaled units, that is, a metric based on their action capabilities, such as how high they can step. Performance can also be evaluated in body-scaled units, that is, a metric based on their body's measurements, such as their shoulder width. Employing these metrics allows an experimenter to contextualize task performance relative to participants' action capabilities and body measurements. As such, the question for affordance-based experiments is not, "Can participants walk through an aperture of X centimeters?" but rather, "What action-scaled and body-scaled metrics differentiate apertures that are pass-through-able from those that are not?" Furthermore, ecological psychologists are often interested in the critical point whereby features of the environment and characteristics of the organism no longer facilitate affordances, for example, the critical point at which an aperture affords passing through or not based on a participant's shoulder width. This critical point is described in terms of an A/S ratio, or aperture-to-shoulder ratio (Warren & Whang, 1987).
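Purely as an illustration of the body-scaled logic (my own minimal sketch, not drawn from the ecological psychology literature), the computation can be written out directly: divide aperture width by shoulder width and compare the result to a critical ratio. The function names and the default critical ratio of 1.3 below are hypothetical placeholders for the sake of the example, not the empirical values reported by Warren and Whang (1987).

```python
# Illustrative sketch: a body-scaled metric for the pass-through-able affordance.
# The critical ratio used here is a placeholder chosen for illustration,
# not the empirical value reported by Warren and Whang (1987).

def aperture_to_shoulder_ratio(aperture_width_cm: float, shoulder_width_cm: float) -> float:
    """Return the body-scaled A/S ratio for a given aperture and perceiver."""
    return aperture_width_cm / shoulder_width_cm

def affords_passing_through(aperture_width_cm: float,
                            shoulder_width_cm: float,
                            critical_ratio: float = 1.3) -> bool:
    """Classify an aperture as pass-through-able (without shoulder rotation)
    when its A/S ratio meets or exceeds a critical value."""
    return aperture_to_shoulder_ratio(aperture_width_cm, shoulder_width_cm) >= critical_ratio

# The same 70 cm aperture is pass-through-able for one perceiver but not another:
print(affords_passing_through(aperture_width_cm=70, shoulder_width_cm=42))  # True  (A/S ~ 1.67)
print(affords_passing_through(aperture_width_cm=70, shoulder_width_cm=58))  # False (A/S ~ 1.21)
```

The point of the sketch is that the classification depends on the relation between environment and perceiver rather than on the aperture's absolute width: the same 70 cm opening comes out as pass-through-able for a narrow-shouldered perceiver but not for a broad-shouldered one.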


Thus, a well-functioning perceptual system is sensitive to its own action- and body-scaled properties while perceiving what is afforded and what is not. A vast number of experiments have been conducted on the perception of affordances (for review see Turvey, 2019). Examples such as these demonstrate that affordances and their means of assessment (e.g., action-scaled units) provide a rich basis for assessing perceptual capabilities in a wide range of experimental conditions. Though organisms and environments are complicated and can be messy, that is no reason not to understand their regularities as lawful. Well aware of this, Gibson stated,

They [organism-environment regularities] do not have the elegant simplicity of the motions of celestial bodies under the influence of the force of gravity, but they are lawful and they have a kind of higher-order simplicity at their proper level of analysis [namely, the organism-environment system]. (Gibson, 1986/2015, p. 87)

The third consequence, and perhaps the most radical, is the resulting understanding of what is or is not meaningful to an organism. As discussed in the previous section, the cognitivist approach treats perception as inferential. That is to say, if vision begins with the retinal image, then the stimulus that caused that image is purported to not be informative—i.e., meaningful—enough to contribute to, for example, guiding successful actions like catching a fly ball or informing decisions like avoiding an obstacle while running. Consequently, sophisticated cognitive processes must occur in order to act on or construct representations of the world that are meaningful enough to guide actions and inform decisions. The accurate ways to construct representations and the correct ways to manipulate and process them are based upon inferences derived from the impoverished stimulation conveyed by the retinal image. Such inferences—or mental gymnastics (Chemero, 2009)—are the kinds of information processing thought to be necessary to compensate for the poverty of the environmental stimulus.

Ecological psychologists respond to the claims that perception is necessarily an inferential process and that there is a lack of meaningful environmental information by appeal to affordances. Affordances are not just perceivable opportunities for behavior, as stated earlier. Affordances are directly perceivable meaningful opportunities for behavior. But how can it be that meaning can be directly perceived and guide behavior? As Gibson questioned and concluded,

How do we go from surfaces to affordances? And if there is information in light for the perception of surfaces, is there information for the perception of what they afford? Perhaps the composition and layout of surfaces constitute what they afford. If so, to perceive them is to perceive what they afford. This is a radical hypothesis, for it implies that the "values" and "meanings" of things in the environment can be directly perceived. Moreover, it would explain the sense in which values and meanings are external to the perceiver. (Gibson, 1986/2015, p. 119; italics in original)

Here, Gibson is drawing attention to a crucial consequence of his conception of organism-environment systems exhibiting lawful perception-action properties. If ecological information specifies affordances, then ecological information also underlies what is meaningful to a perceiver. In other words, ecological information, such as patterns in ambient light, will specify in an invariant (i.e., lawful) way what an organism can and cannot do, such as pass through an aperture or not. An aperture that is wide enough for a dog to pass through is meaningful—i.e., in the sense of useful—in a way that one that is too narrow is not. It follows, then, that affordances are a source of meaning for organisms, because they are—to put it simply—those features of the world they can do things with.

Understanding the environment as being rich with meaningful (ecological) information was a radical break from cognitivism in just about every substantial way: Perception is not a process of constructing mental representations of the environment. Instead, it is about detecting the energy layout of an organism's world, which is constrained by its perceptual capacities. Action is not a process distinct from perception and is not the result of neural computations. Instead, it is continuous with perception and results from activity across the organism-environment system. Meaning is not based on internally-sourced (i.e., cognitive, mental, etc.) rules. Instead, meaning derives from affordances, which are specified by ecological information. The purpose of psychology (particularly cognitive and perceptual psychology, as well as cognitive science) is not to illuminate the brain-centered information-processing nature of cognition. Instead, it is to investigate and explain organism-environment system interactions at the scale of perception-action as understood through the lens of affordances. This can be synthesized into four primary principles at the core of ecological psychology (cf. Chemero, 2013; Gibson, 1986/2015):

1. Perception is direct. An organism's perceptual capacities can make unmediated contact with its environment in order to detect ecological information.
2. Perception and action are continuous. An organism's perceptual capabilities were selected (i.e., evolutionarily speaking) to guide action; so too were action capabilities selected to enable perception.
3. Affordances. As a consequence of perception being direct and the continuity of perception and action, it follows that detected ecological information can specify meaningful opportunities for action, namely, affordances.
4. Organism-environment system. The spatiotemporal scale of organism and environment interactions is the proper point of inquiry for investigating, explaining, and understanding the previous three principles.10

10 As stated in Chapter 1, previous literature has stated three principles of ecological psychology: perception is direct, perception and action are continuous, and affordances (e.g., Chemero, 2013; Favela, 2019; Favela & Chemero, 2016). Others, though, have identified fewer (e.g., de Laplante, 2004; Raja & Calvo, 2017), more (e.g., Heras-Escribano, 2019; Michaels & Palatinus, 2014; Richardson et al., 2008), or different sets of three (e.g., Lobo et al., 2018). Moving forward, I will adhere to four core principles throughout the rest of the book.

The question remains, "Why should anybody accept ecological psychology over cognitivism?" Here I summarize two critiques that ecological psychologists have long made of cognitivism, which are purported reasons to abandon the cognitivist investigative approach. I refer to them as the critiques from disembodiment and uncashable checks (cf. Michaels & Palatinus, 2014; Richardson et al., 2008; Turvey, 2019).

First is the critique from disembodiment. By conceptualizing cognition—inclusive of perception and action—in terms of information processing, the cognitivist has, for all intents and purposes, disembodied the mind from the body and environment (Richardson et al., 2008).


The cognitivist has disembodied the mind from the body by treating cognition as essentially an abstract informational process that is computational (e.g., rules) and representational (e.g., symbols). Consequently, the cognitivist has opened themselves up to a wide range of critiques, not least of which are the deep metaphysical issues stemming from Cartesian dualism, such as the interaction problem of explaining how two radically different substances (i.e., mind stuff, which is not extended in space, and body stuff, which is extended in space) can have effects on each other (e.g., how immaterial mind stuff can make a material body move). Notwithstanding such metaphysical issues, and historically speaking, understanding cognition as computations acting on representations had at least two things going for it. First, it seemed to overcome the shortcomings of behaviorism and other investigative frameworks that did not incorporate sufficient contributions from the mind/brain in explanations of phenomena such as decision making, mental imagery, and mental mathematics (see Section 2.1). Second, it offered a more widely-applicable understanding of cognition than the "chauvinism" of previous theories that identified cognition with brain states, especially human brain states (Smart, 2017). As a result, cognition can be multiply realized in humans and other primates, but also in any system sufficiently capable of human-level computations and representations, which includes—in principle—AI and extraterrestrials. Those benefits, however, came with major negative consequences, which leads to the second critique.

The second critique is uncashable checks. When somebody is given a check for, say, $1,000, but that check is connected to a bank account that has no money in it, then that check is said to be uncashable. At its simplest, the uncashable check critique maintains that cognitivism makes claims that it cannot support. The claim is that cognition is essentially information processing in nature (e.g., deals in computations acting on representations). The support it cannot provide—according to ecological psychologists, especially of the Neo-Gibsonian variety—includes both the empirical and the theoretical. Here, I highlight three checks cognitivists have signed that have not been cashed.

The first uncashed check is innateness. It has been argued that cognitivists are committed to the claim that core features of cognition require an innate source. Language qua Universal Grammar (UG), for example, must be innate if it is to be a solution to the very problems it accused behaviorist theories of language of suffering from (e.g., poverty of stimulus [PoS]; Chomsky, 1959). Thus far, even proponents of UG have yet to cash that check, stating that there is a "poverty of evidence, with essentially no explanation of how and why our linguistic computations and representations evolved" (Hauser et al., 2014).

The second uncashed check is meaning. As discussed in the previous subsection, a crucial feature of cognitivism is its representational structures. Representations provide the meaningful components of the cognitive system upon which its computations act (i.e., manipulate, perform over, etc.) in order to produce behavior and thought. While it is commonly accepted that representations are indispensable to the cognitivist framework (e.g., Neisser, 1967/2014; Shea, 2018; Thagard, 2020; Von Eckardt, 2003), it remains far from clear how representations provide meaning, or, in other words, how representations are meaningful. A traditional way of understanding this issue is in terms of the symbol grounding problem, which asks, "How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads" (Harnad, 1990, p. 335; italics in original)? A related issue is the frame problem, which boils down to explaining what information is relevant to solving particular problems (e.g., Dennett, 1984; Shanahan, 2016), or which information is meaningful in a given scenario.


Finally, the third uncashed check is intelligence. The previous two uncashed checks are related in that they draw attention to the absence of adequate accounts of the source of computational procedures and the manner of the meaning of representations. A related issue concerns how a system is supposed to interpret (i.e., compute, process, etc.) those procedures and symbols. In other words, how can the system be intelligent by way of information processing? Such "intelligence" is straightforward to explain in human-made artificial systems like computers via their central processing units (CPU), which are built with rules and "understand" the symbols those rules act on. However, in living organisms, like humans, it is far from clear if such a CPU exists and, if so, specifically where it is located—note that "the brain" is not specific enough. This uncashed check has been described by ecological psychologists as the loan of intelligence that is taken out by information-processing explanations of cognition (e.g., Richardson et al., 2008; Turvey, 2019; Turvey, Shaw, Reed, & Mace, 1981; cf. Dennett, 1971). Ecological psychologists have argued that cognitivist qua information-processing accounts of cognition require that something in the system understands the meaning of the symbols it manipulates. Although that something (e.g., CPU, executive controller, homunculus, etc.) has not been identified or located, it is assumed that it one day will be. In fact, it must be identified or located in order to pay back all of the uncashed checks (and loans).

So why accept ecological psychology over cognitivism? Among other reasons, ecological psychologists claim not to suffer from those two major critiques of cognitivism. First, ecological psychology does not treat cognition as disembodied. On the contrary, ecological psychology is considered one of the forerunners of the embodied cognition/mind movement that began in the 1990s (Chemero, 2013; Shapiro & Spaulding, 2021). Even on theoretical grounds, affordances, for example, do not work if the organism is not embodied—let alone if the organism is not situated in an environment. The characteristics of an organism's body inform what it can do in the world; they contribute to the way in which the world is meaningful (i.e., affordances point two ways, to the environment and to the observer; Gibson [1986/2015]). Second, ecological psychology has not written the same checks that cognitivism has been unable to cash. Regarding innateness: The source of an organism's features and the manner by which it interacts with environments is directly informed by its evolutionary history. Telling the story of how an organism has been selected to survive in a particular environment is to explain the source of its affordances. How does a tree with fruit afford grasping for an orangutan? Because the orangutan has perception-action capabilities that have facilitated its four Fs—feeding, fleeing, fighting, and reproduction (Churchland, 2005)—which include being able to eat the food in its environment. Regarding meaning: As discussed in detail earlier, ecological psychologists have argued that affordances are meaningful, and they are so based on both the features of the organism and environment, which are directly observable (e.g., an animal's leg length and the steps it can climb), not internal and unobservable. Finally, regarding intelligence: Affordances are intelligible by way of their relational nature across organism-environment systems. An organism is not thrown into an environment, forced to internally process the items in that environment in order to find meaning in them. An organism is part of the environment; its body is formed by the environment (e.g., pressures from weather selecting organisms with fur), and its body forms the environment (e.g., beavers engineering ecosystems via dams). Consequently, questions entailed by cognitivism about mentally representing the environment and computing those representations so as to give them meaning and make them intelligible are at a minimum negligible and at a maximum irrelevant to the ecological psychology framework.


2.2 Conclusion

Chapter 2 began the part of the book aimed at presenting the history that makes ecological psychology and neuroscience seem irreconcilable. In order to frame the issue, it was helpful to first present an overview of, and motivations for, doing "cognitive" psychology. Then, I presented an overview of, and motivations for, doing "ecological" psychology. The next chapter is about neuroscience. Presenting the history of neuroscience is far beyond the scope of this book. Instead, my aim is to discuss the ways neuroscience has inherited and embraced the commitments of cognitivism. As a result, I will argue that neuroscience suffers from the same shortcomings as cognitive psychology, cognitive science, and other scientific frameworks that are guided by a broadly cognitivist understanding of cognition. Consequently, neuroscience is vulnerable to the same critiques ecological psychologists have lobbed at cognitive psychology (and cognitive science, etc.). The following chapter will present and evaluate recent trends in ecological psychology. As will be demonstrated, although recent forms of ecological psychology do not suffer from the same shortcomings as cognitivist frameworks, they do lack in some areas that neuroscience does not.

References

Abrahamsen, A., & Bechtel, W. (2012). History and core themes. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of cognitive science (pp. 9–28). New York, NY: Cambridge University Press.
Anderson, J. R. (1978). Arguments concerning representations for mental imagery. Psychological Review, 85(4), 249–277. https://doi.org/10.1037/0033-295X.85.4.249
Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
Anderson, J. R. (2015). Cognitive psychology and its implications (8th ed.). New York, NY: Worth Publishers.
Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111(4), 1036–1060. https://doi.org/10.1037/0033-295X.111.4.1036
Barrett, L. (2012). Why behaviorism isn't Satanism. In J. Vonk & T. Shackelford (Eds.), The Oxford handbook of comparative evolutionary psychology (pp. 17–38). Oxford, UK: Oxford University Press.
Barrett, L. (2016). Why brains are not computers, why behaviorism is not satanism, and why dolphins are not aquatic apes. The Behavior Analyst, 39(1), 9–23. https://doi.org/10.1007/s40614-015-0047-0
Berwick, R. C., Pietroski, P., Yankama, B., & Chomsky, N. (2011). Poverty of the stimulus revisited. Cognitive Science, 35(7), 1207–1242. https://doi.org/10.1111/j.1551-6709.2011.01189.x
Boden, M. A. (2006). Mind as machine: A history of cognitive science (Vol. 1–2). New York, NY: Oxford University Press.
Buchanan, B. G. (2005). A (very) brief history of artificial intelligence. AI Magazine, 26(4), 53–60. https://doi.org/10.1609/aimag.v26i4.1848
Burton, G. (1992). Nonvisual judgment of the crossability of path gaps. Journal of Experimental Psychology: Human Perception and Performance, 18, 698–713. https://doi.org/10.1037/0096-1523.18.3.698
Buzsáki, G. (2019). The brain from inside out. New York, NY: Oxford University Press.
Chemero, A. (2003a). An outline of a theory of affordances. Ecological Psychology, 15(2), 181–195. https://doi.org/10.1207/S15326969ECO1502_5
Chemero, A. (2003b). Radical empiricism through the ages. Contemporary Psychology: APA Review of Books, 48(1), 18–20. https://doi.org/10.1037/000698


Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: The MIT Press.
Chemero, A. (2013). Radical embodied cognitive science. Review of General Psychology, 17(2), 145–150. https://doi.org/10.1037/a0032923
Chomsky, N. (1959). Review: Verbal behavior by B. F. Skinner. Language, 35(1), 26–58. https://doi.org/10.2307/411334
Chomsky, N. (1965/2015). Aspects of the theory of syntax. Cambridge, MA: The MIT Press.
Chong, I., & Proctor, R. W. (2020). On the evolution of a radical concept: Affordances according to Gibson and their subsequent use and development. Perspectives on Psychological Science, 15(1), 117–132. https://doi.org/10.1177/1745691619868207
Churchland, P. S. (2005). A neurophilosophical slant on consciousness research. Progress in Brain Research, 149, 285–293. https://doi.org/10.1016/S0079-6123(05)49020-2
Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York, NY: Avon Books.
de Laplante, K. (2004). Toward a more expansive conception of ecological science. Biology and Philosophy, 19(2), 263–281. https://doi.org/10.1023/B:BIPH.0000024410.43277.98
Dennett, D. C. (1971). Intentional systems. The Journal of Philosophy, 68(4), 87–106. https://doi.org/10.2307/2025382
Dennett, D. C. (1984). Cognitive wheels: The frame problem of AI. In C. Hookway (Ed.), Minds, machines and evolution: Philosophical studies (pp. 129–151). Cambridge, MA: Cambridge University Press.
Di Paolo, E. A. (2020). Picturing organisms and their environments: Interaction, transaction, and constitution loops. Frontiers in Psychology: Theoretical and Philosophical Psychology, 11(1912). https://doi.org/10.3389/fpsyg.2020.01912
Edelman, S. (1999). Representation and recognition in vision. Cambridge, MA: The MIT Press.
Eliasmith, C. (2013). Architecture of spaun. Wikipedia. Retrieved January 31, 2021 from https://en.wikipedia.org/wiki/File:Architecture_of_Spaun.jpeg
Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202–1205. https://doi.org/10.1126/science.1225266
Evans, N., & Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32, 429–492. https://doi.org/10.1017/S0140525X0999094X
Everett, D. L. (2012). What does Pirahã grammar have to teach us about human language and the mind? Wiley Interdisciplinary Reviews: Cognitive Science, 3(6), 555–563. https://doi.org/10.1002/wcs.1195
Fajen, B. R., Riley, M. A., & Turvey, M. T. (2008). Information, affordances, and the control of action in sport. International Journal of Sport Psychology, 40(1), 79–107.
Favela, L. H. (2019). Soft-assembled human-machine perceptual systems. Adaptive Behavior, 27(6), 423–437. https://doi.org/10.1177/1059712319847129
Favela, L. H., & Chemero, A. (2016). An ecological account of visual "illusions." Florida Philosophical Review, 16(1), 68–93.
Favela, L. H., Riley, M. A., Shockley, K., & Chemero, A. (2018). Perceptually equivalent judgments made visually and via haptic sensory-substitution devices. Ecological Psychology, 30(4), 326–345. https://doi.org/10.1080/10407413.2018.1473712
Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381–391. https://doi.org/10.1037/h0055392
Fodor, J. A. (1975). The language of thought. New York, NY: Thomas Y. Crowell Company.
Franchak, J., & Adolph, K. (2014). Affordances as probabilistic functions: Implications for development, perception, and decisions for action. Ecological Psychology, 26(1–2), 109–124. https://doi.org/10.1080/10407413.2014.874923


Franklin, S., & Patterson, Jr., F. G. (2006). The LIDA architecture: Adding new modes of learning to an intelligent, autonomous, software agent. IDPT-2006 Proceedings (Integrated Design and Process Technology). Society for Design and Process Science, San Diego, CA.
Gallistel, C. R., & King, A. P. (2010). Memory and the computational brain: Why cognitive science will transform neuroscience. Malden, MA: Wiley-Blackwell.
Georgopoulos, A. P. (1986). On reaching. Annual Review of Neuroscience, 9, 147–170. https://doi.org/10.1146/annurev.ne.09.030186.001051
Gibson, E. J. (1969). Principles of perceptual learning and development. New York, NY: Appleton-Century-Crofts.
Gibson, E. J. (2002). Perceiving the affordances: A portrait of two psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.
Gibson, J. J. (1950). The perception of the visual world. Boston, MA: Houghton Mifflin.
Gibson, J. J. (1962). Observations on active touch. Psychological Review, 69(6), 477–491.
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston, MA: Houghton Mifflin.
Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, acting, and knowing: Toward an ecological psychology (pp. 67–82). Hillsdale, NJ: Lawrence Erlbaum Associates.
Gibson, J. J. (1986/2015). The ecological approach to visual perception (classic ed.). New York, NY: Psychology Press.
Graham, G. (2019). Behaviorism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (spring 2019 ed.). Stanford, CA: Stanford University. Retrieved January 30, 2021 from https://plato.stanford.edu/archives/spr2019/entries/behaviorism/
Greeno, J. G. (1994). Gibson's affordances. Psychological Review, 101, 336–342. https://doi.org/10.1037/0033-295X.101.2.336
Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1–3), 335–346. https://doi.org/10.1016/0167-2789(90)90087-6
Hartley, L. P. (1953). The go-between. London, UK: Hamish Hamilton.
Hatfield, G. (2002). Psychology, philosophy, and cognitive science: Reflections on the history and philosophy of experimental psychology. Mind & Language, 17(3), 207–232. https://doi.org/10.1111/1468-0017.00196
Hauser, M. D., Yang, C., Berwick, R. C., Tattersall, I., Ryan, M. J., Watumull, J., . . . Lewontin, R. C. (2014). The mystery of language evolution. Frontiers in Psychology: Language Sciences, 5(401). https://doi.org/10.3389/fpsyg.2014.00401
Heft, H. (2001/2016). Ecological psychology in context: James Gibson, Roger Barker, and the legacy of William James's radical empiricism. New York, NY: Routledge.
Heft, H., & Richardson, M. (2017). Ecological psychology. Oxford Bibliographies in Psychology. https://doi.org/10.1093/OBO/9780199828340-0072
Heras-Escribano, M. (2019). The philosophy of affordances. Cham, Switzerland: Palgrave Macmillan.
Hergenhahn, B. R., & Henley, T. B. (2014). An introduction to the history of psychology (7th ed.). Belmont, CA: Wadsworth.
Jones, K. S. (2003). What is an affordance? Ecological Psychology, 15, 107–114. https://doi.org/10.1207/S15326969ECO1502_1
Käufer, S., & Chemero, A. (2015). Phenomenology: An introduction. Malden, MA: Polity Press.
Kotseruba, I., & Tsotsos, J. K. (2020). 40 years of cognitive architectures: Core cognitive abilities and practical applications. Artificial Intelligence Review, 53, 17–94. https://doi.org/10.1007/s10462-018-9646-y
Lobo, L., Heras-Escribano, M., & Travieso, D. (2018). The history and philosophy of ecological psychology. Frontiers in Psychology: Cognitive Science, 9(2228). https://doi.org/10.3389/fpsyg.2018.02228
Marcus, G. (1999). Poverty of the stimulus arguments. In R. A. Wilson & F. C. Keil (Eds.), The MIT encyclopedia of the cognitive sciences (pp. 660–661). Cambridge, MA: The MIT Press.


Mark, L. S. (1987). Eyeheight-scaled information about affordances: A study of sitting and stair climbing. Journal of Experimental Psychology: Human Perception and Performance, 13(3), 361–370. https://doi.org/10.1037/0096-1523.13.3.361
Marr, D. (1982/2010). Vision: A computational investigation into the human representation and processing of visual information. Cambridge, MA: The MIT Press.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12–14. https://doi.org/10.1609/aimag.v27i4.1904
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115–133. https://doi.org/10.1007/BF02478259
Michaels, C. F. (2003). Affordances: Four points of debate. Ecological Psychology, 15(2), 135–148. https://doi.org/10.1207/S15326969ECO1502_3
Michaels, C. F., & Carello, C. (1981). Direct perception. Englewood Cliffs, NJ: Prentice-Hall.
Michaels, C. F., & Palatinus, Z. (2014). A ten commandments for ecological psychology. In L. Shapiro (Ed.), The Routledge handbook of embodied cognition (pp. 19–28). New York, NY: Routledge.
Miller, G. A. (2003). The cognitive revolution: A historical perspective. Trends in Cognitive Sciences, 7(3), 141–144. https://doi.org/10.1016/S1364-6613(03)00029-9
Neisser, U. (1967/2014). Cognitive psychology (classic ed.). New York, NY: Psychology Press.
Oudejans, R. R. D., Michaels, C. F., Bakker, F. C., & Dolne, M. A. (1996). The relevance of action in perceiving affordances: Perception of catchableness of fly balls. Journal of Experimental Psychology: Human Perception and Performance, 22, 879–891. https://doi.org/10.1037/0096-1523.22.4.879
Pesetsky, D. (1999). Linguistic universals and universal grammar. In R. A. Wilson & F. C. Keil (Eds.), The MIT encyclopedia of the cognitive sciences (pp. 476–478). Cambridge, MA: The MIT Press.
Piccinini, G. (2020). Neurocognitive mechanisms: Explaining biological cognition. Oxford, UK: Oxford University Press.
Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3, 111–169. https://doi.org/10.1017/S0140525X00002053
Raja, V. (2019). J. J. Gibson's most radical idea: The development of a new law-based psychology. Theory & Psychology, 29(6), 789–806. https://doi.org/10.1177/0959354319855929
Raja, V., & Calvo, P. (2017). Augmented reality: An ecological blend. Cognitive Systems Research, 42, 58–72. https://doi.org/10.1016/j.cogsys.2016.11.009
Reed, E. S. (1996). Encountering the world: Toward an ecological psychology. New York, NY: Oxford University Press.
Reed, S. K. (2012). Human cognitive architecture. In N. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 1452–1455). New York, NY: Springer. https://doi.org/10.1007/978-1-4419-1428-6
Rescorla, M. (2020). The computational theory of mind. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (fall 2020 ed.). Stanford, CA: Stanford University. Retrieved January 15, 2021 from https://plato.stanford.edu/archives/fall2020/entries/computational-mind/
Richardson, M. J., Shockley, K., Fajen, B. R., Riley, M. R., & Turvey, M. T. (2008). Ecological psychology: Six principles for an embodied-embedded approach to behavior. In R. Calvo & T. Gomila (Eds.), Handbook of cognitive science: An embodied approach (pp. 161–187). Amsterdam: Elsevier Science. https://doi.org/10.1016/B978-0-08-046616-3.00009-8
Rogers, E. M. (1994). Claude Shannon's cryptography research during World War II and the mathematical theory of communication. Proceedings of IEEE International Carnahan Conference on Security Technology (pp. 1–5). Albuquerque, NM, USA. https://doi.org/10.1109/CCST.1994.363804
Ross, B. A., & Favela, L. H. (2019). A definition of memory for the cognitive sciences. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st annual conference of the Cognitive Science Society (pp. 974–980). Montreal, QB: Cognitive Science Society.
Shanahan, M. (2016). The frame problem. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (spring 2016 ed.). Stanford, CA: Stanford University. Retrieved November 5, 2021 from https://plato.stanford.edu/archives/spr2016/entries/frame-problem/


Shannon, C. E., & Weaver, W. (1949/1964). The mathematical theory of communication. Urbana, IL: The University of Illinois Press.
Shapiro, L., & Spaulding, S. (2021). Embodied cognition. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (winter 2021 ed.). Stanford, CA: Stanford University. Retrieved November 5, 2021 from https://plato.stanford.edu/archives/win2021/entries/embodied-cognition/
Shea, N. (2018). Representation in cognitive science. Oxford, UK: Oxford University Press.
Shockley, K., Carello, C., & Turvey, M. T. (2004). Metamers in the haptic perception of heaviness and moveableness. Perception & Psychophysics, 66, 731–742. https://doi.org/10.3758/BF03194968
Smart, J. J. C. (2017). The mind/brain identity theory. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (spring 2017 ed.). Stanford, CA: Stanford University. Retrieved November 5, 2021 from https://plato.stanford.edu/archives/spr2017/entries/mind-identity/
Staddon, J. (2014). The new behaviorism (2nd ed.). New York, NY: Psychology Press.
Stewart, T. C., Choo, F.-X., & Eliasmith, C. (2012). Spaun: A perception-cognition-action model using spiking neurons. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th annual conference of the Cognitive Science Society (pp. 1018–1023). Austin, TX: Cognitive Science Society.
Stillings, N. A., Weisler, S. E., Chase, C. H., Feinstein, M. H., Garfield, J. L., & Rissland, E. L. (1995). Cognitive science: An introduction (2nd ed.). Cambridge, MA: The MIT Press.
Stoffregen, T. A. (2003). Affordances as properties of the animal-environment system. Ecological Psychology, 15, 115–134. https://doi.org/10.1207/S15326969ECO1502_2
Sun, R., Merrill, E., & Peterson, T. (2001). From implicit skills to explicit knowledge: A bottom-up model of skill learning. Cognitive Science, 25(2), 203–244. https://doi.org/10.1207/s15516709cog2502_2
Thagard, P. (2005). Mind: Introduction to cognitive science (2nd ed.). Cambridge, MA: The MIT Press.
Thagard, P. (2020). Cognitive science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (winter 2020 ed.). Stanford, CA: Stanford University. Retrieved January 15, 2021 from https://plato.stanford.edu/archives/win2020/entries/cognitive-science/
Titchener, E. B. (1901/1922). Experimental psychology: A manual of laboratory practice. Vol. 1: Qualitative experiments. Part I: Students manual. New York, NY: MacMillan Co.
Turing, A. M. (1948/2004). Intelligent machinery. In B. J. Copeland (Ed.), The essential Turing: Seminal writings in computing, logic, philosophy, artificial intelligence, and artificial life plus the secrets of enigma (pp. 410–432). New York, NY: Oxford University Press.
Turing, A. M. (1950). I: Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
Turvey, M. T. (1992). Affordances and prospective control: An outline of the ontology. Ecological Psychology, 4, 173–187. https://doi.org/10.1207/s15326969eco0403_3
Turvey, M. T. (2019). Lectures on perception: An ecological perspective. New York, NY: Routledge.
Turvey, M. T., Shaw, R. E., Reed, E. S., & Mace, W. M. (1981). Ecological laws of perceiving and acting: In reply to Fodor and Pylyshyn (1981). Cognition, 9(3), 237–304. https://doi.org/10.1016/0010-0277(81)90002-0
Uttal, W. R. (2003). Psychomythics: Sources of artifacts and misconceptions in scientific psychology. Mahwah, NJ: Lawrence Erlbaum Associates.
Uttal, W. R. (2004). Dualism: The original sin of cognitivism. Mahwah, NJ: Lawrence Erlbaum Associates.
Vernon, D. (2014). Artificial cognitive systems: A primer. Cambridge, MA: The MIT Press.
Von Eckardt, B. (2003). The explanatory need for mental representations in cognitive science. Mind & Language, 18(4), 427–439. https://doi.org/10.1111/1468-0017.00235
von Neumann, J. (1958/2012). The computer and the brain (3rd printing). New Haven, CT: Yale University Press.
Warren, W. H. (1984). Perceiving affordances: Visual guidance of stair climbing. Journal of Experimental Psychology: Human Perception and Performance, 10(5), 683–703. https://doi.org/10.1037/0096-1523.10.5.683


Warren, W. H. (2006). The dynamics of perception and action. Psychological Review, 113(2), 358–389. https://doi.org/10.1037/0033-295X.113.2.358
Warren, W. H. (2020). Perceiving surface layout: Ground theory, affordances, and the objects of perception. In J. B. Wagman & J. J. C. Blau (Eds.), Perception as information detection: Reflections on Gibson's ecological approach to visual perception (pp. 151–173). New York, NY: Routledge.
Warren, W. H. (2021). Information is where you find it: Perception as an ecologically well-posed problem. i-Perception, 12(2), 1–24. https://doi.org/10.1177/20416695211000366
Warren, Jr., W. H., & Whang, S. (1987). Visual guidance of walking through apertures: Body-scaled information for affordances. Journal of Experimental Psychology: Human Perception and Performance, 13, 371–383. https://doi.org/10.1037/0096-1523.13.3.371
Watrin, J. P., & Darwich, R. (2012). On behaviorism in the cognitive revolution: Myth and reactions. Review of General Psychology, 16, 269–282. https://doi.org/10.1037/a0026766
Wooldridge, M. (2021). A brief history of artificial intelligence: What it is, where we are, and where we are going. New York, NY: Flatiron Books.
Wylie, D. R., Gutiérrez-Ibáñez, C., Gaede, A. H., Altshuler, D. L., & Iwaniuk, A. N. (2018). Visual-cerebellar pathways and their roles in the control of avian flight. Frontiers in Neuroscience: Perception Science, 12(223). https://doi.org/10.3389/fnins.2018.00223

3 The sins of cognitivism visited upon neuroscience

Those who cannot remember the past are condemned to repeat it. (Santayana, 1922, p. 284)

3.1 From cognitivism to neuroscience

The previous chapter presented a historical overview of the sources and motivations for the cognitivist (e.g., cognitive psychology and cognitive science) and ecological approaches to psychology. Cognitivism, with its origins in the 1950s and 1960s, was said to adhere to an information-processing understanding of cognition as essentially computational and representational in nature (e.g., Abrahamsen & Bechtel, 2012; Pylyshyn, 1980). Ecological psychology, also originating in the 1950s, was said to adhere to four primary principles: Perception is direct, perception and action are continuous, the theory of affordances, and the organism-environment system as the relevant spatiotemporal scale of investigation. The current chapter focuses on neuroscience and attempts to answer the question, "Why are neuroscience and ecological psychology viewed as irreconcilable?" My aim is to motivate the claim that ecological psychology and neuroscience are typically at odds because the latter inherited the central theoretical commitments of cognitivism that the former has so staunchly rejected. Specifically, neuroscience inherited cognitivism's two main commitments about the nature of cognition, namely, that it is essentially computational and representational.

Presenting a comprehensive history of neuroscience is far beyond the scope of the current chapter.1 With that said, I think there is a fairly straightforward narrative that can be told, which can illuminate the origins of the aforementioned tensions. By the end of this chapter, there will be reasons to believe that neuroscience inherited and practices the very sins that ecological psychology accused cognitivism of.

1 For those interested in more in-depth discussions of the history of neuroscience, I recommend Cobb (2020), Glickstein (2014), Gross (2009), Kandel and Squire (2000), Shepherd (2010), and Wickens (2015).

Before diving in, an important reminder: by "neuroscience" I refer to particular subdisciplines. As stated in Chapter 1, there is no single origin for all of the areas of contemporary research known as "neuroscience." Additionally, it is fair to say that most current neuroscientists do not adhere to a common set of agreed-upon concepts, methods, or theories. The meeting of the Society for Neuroscience makes this abundantly clear, as research is presented on a wide range of topics that span genetics, nervous systems, and societies (Society for Neuroscience, 2021). For the current purpose of illuminating the nature of the opposition between neuroscience and ecological psychology, when I refer to "neuroscience" I mean a subset of disciplines that—at least ostensibly—study the same phenomena that ecological psychology claims to. For that reason, I draw from research in behavioral neuroscience (e.g., Carlson, 2014), cognitive neuroscience (e.g., Gazzaniga, Ivry, & Mangun, 2014), computational neuroscience (e.g., Piccinini & Shagrir, 2014; Trappenberg, 2014), and sensory neuroscience (Barwich, 2020; Reid & Usrey, 2013).

In the next section, I discuss the importance of perspectives in scientific inquiry. After, in Section 3.3, I focus on the 1950s in order to highlight the maturing commitments of cognitivism by way of discussion of the key theories and models of early artificial intelligence research. Building from that line of thought, in Section 3.4, I argue that as early as the 1950s, neuroscience started to show signs of splitting into subdisciplines with very different emphases on the ways nervous systems—and, accordingly, cognition—ought to be investigated and understood. This directly informs the concluding discussion of this chapter, which elaborates on reasons why the same criticisms ecological psychology has of cognitivism continue to be just as applicable to neuroscience.

3.2 From Hippocrates and Aristotle to Mike the Headless Chicken

The title of this section is a tad misleading. I have no intention of providing a detailed history of neuroscience that goes as far back as the Ancient Greeks. Yet I mention that era via reference to Hippocrates (~460–370 B.C.E.) because it is common to point to that period of time as already exhibiting what would come to be the central dogma of modern neuroscience, namely, the exclusive role the brain plays in "experience," "sensation," and "intelligence," among other mental phenomena (e.g., Bear, Connors, & Paradiso, 2016, p. 5; Glickstein, 2014, pp. 1, 286; Kandel, Schwartz, Jessell, Siegelbaum, & Hudspeth, 2013, pp. 4, 1116; Seth, He, & Hohwy, 2015, p. 1; Wickens, 2015, pp. 13–26). Aristotle is mentioned in order to highlight the fact that as early as the mid-350s B.C.E., even the greatest minds could not agree that mental phenomena were exclusively brain phenomena (cf. Clarke & Stannard, 1963). As stated in his "Parts of Animals":

But it is the heart that has supreme control, exercising an additional and completing function. Hence in sanguineous animals the source both of the sensitive and the nutritive soul must be [5] in the heart, for the functions relative to nutrition exercised by the other parts are ancillary to the activity of the heart. . . . Certainly, however, all sanguineous animals have the supreme organ of the [10] sense-faculties in the heart, for it is here that we must look for the common sensorium belonging to all the sense-organs. (Aristotle, 1984, p. 1642; italics added)

This point is raised not with the goal of defending nonphysicalist (e.g., Cartesian dualism) or panpsychist theories of mind. The point is merely that it has not always been a given that the brain is exclusively where the action is, so to speak. Theories that have not claimed the brain as the location of the engine of reason and seat of the soul (cf. Churchland, 1994) are not mere relics of an unenlightened past. Even in the twenty-first century, there are highly respected research agendas—both of the philosophical and scientific variety—that do not treat the brain as sufficient for explaining cognition, goal-directed behavior, etc.


These approaches remain physicalist and empirical through and through. Yet they provide evidence and reasons for deprioritizing—at least a little bit—the neural as sufficient for explaining mind.2 It is worth emphasizing that I am in no way questioning the decades (perhaps centuries) of well-established research that makes clear the connections between brain activity and bodily activity, as well as the multitude of ways the brain is implicated in mental phenomena. Who can forget Phineas Gage, after all?3

Still, the differences between Hippocrates and Aristotle are illustrative of the fact that investigators have particular aims when trying to explain a phenomenon. At the risk of stepping out of the boundaries of my areas of expertise and doing a bit of historical interpretation, a motivation for Hippocrates viewing the brain as the most crucial organ could have been his aspirations toward developing medical treatments (cf. Wickens, 2015, pp. 1–26). For example, Hippocrates and his contemporaries were aware that damage to one side of the head could result in seizures on the opposite side of the body (Westbrook, 2013). Aristotle, though, was especially focused on elucidating the telos—or teleology, i.e., a thing's aim, end, or goal—of humans (Brennan, 2021; Leunissen, 2007). In view of that, his interests in physiology were largely in the service of understanding human telos, such as how we perceive and act, especially as they contribute to ethical behavior. Accordingly, he argued that it made sense for the heart—which actively pumps blood through the body—to play the primary role in driving such activities in organisms.

Of course, we know now in the twenty-first century that the heart is not the primary organ of perception and action. Yet an important lesson remains, namely, the perspectival nature of scientific inquiry. Perspectivism understands scientific inquiry—both in terms of data/observations and explanations/theories—as always stemming from, well, a perspective.4 That is to say, there is no supremely objective "view from nowhere" (cf. Nagel, 1986) that a scientist can base their observations or theories on. The scientist qua human will always bring to their research a variety of biological, cognitive, and social factors (Brown, 2009; Giere, 2006), which will influence how they interact with the world and the goals that motivate such interactions (including neuroscience research; e.g., Chirimuuta [2014, 2019]). Hippocrates and Aristotle took scientific perspectives on the brain: They based theories on systems founded on empirical evidence and rationality. Consequently, we can, to a reasonable degree, begin to understand why they held the views they did.

Nevertheless, perspectivism does not mean that anything goes (cf. Feyerabend, 1993). While perspectivism may allow for a greater plurality of concepts, methods, and theories to coexist even within disciplines, as a colleague of mine likes to say, at some point the rubber needs to meet the road. In other words, those concepts, methods, and theories need to make contact with empirical evidence in order to be validated.5

2 The theories I have in mind here include embodied cognition (e.g., Chemero, 2009), extended mind (e.g., Clark & Chalmers, 1998), extended cognition (e.g., Favela, Amon, Lobo, & Chemero, 2021), and integrated information theory (Tononi & Koch, 2015). I am also thinking of research involving "borderline" cases of minded organisms, such as single cells (e.g., Lyon, 2015), plants (e.g., Raja & Segundo-Ortín, 2021), and slime molds (e.g., Vallverdú et al., 2018). For additional examples and discussion, see Baluška and Levin (2016). Keep in mind, none of this work excludes the fact that some phenomena are primarily caused and constituted by brain activity (Favela, 2017).
3 Well, for those of you who do not remember Phineas Gage, or never learned about him, I recommend starting with Twomey (2010) and then checking out Damasio, Grabowski, Frank, Galaburda, and Damasio (1994) and Van Horn et al. (2012).
4 Perspectivism in the philosophy of science is relatively recent. More general applications of perspectivism can be found in the work of Friedrich Nietzsche (Hales & Welshon, 2000).


This point explains why contemporary sciences of the mind are far more consistent with Hippocrates' view of the brain than Aristotle's. That is to say, there is overwhelming evidence that the brain plays a significant role in explaining mental phenomena to a far greater extent than the heart. With that said, even in the twenty-first century, significant questions about the sufficiency of the brain to cause even sophisticated behaviors and mental activities remain unanswered (e.g., Baluška & Levin, 2016). Consider the following two examples: First is the case of Mike the Headless Chicken. In 1948, Mike the chicken's head was cut off by a farmer in order to eat Mike for dinner (Figure 3.1a). Yet Mike lived for 18 months and was able to eat, walk around, and respond to stimuli during that time.6 The second case is the 44-year-old man whose brain was approximately 25% the typical size due to fluid buildup in his skull (Feuillet, Dufour, & Pelletier, 2007). Despite having most of his brain compacted or missing, this man had an IQ of 75 and "normal" social functioning. Such cases provide reasons to at least pause and reflect upon even the most taken-for-granted assumptions in our theories about the relationship between brain and mental activity. The case presented by Feuillet and colleagues (2007) is not rare either. Even after having half of their brain removed, patients demonstrate returning to healthy language and reasoning capacities (Kliemann et al., 2019) and maintaining social and emotional functioning (Figure 3.1b; Kliemann, Adolphs, Paul, Tyszka, & Tranel, 2021).7

After centuries of debate concerning the mind's location (e.g., Hippocrates) and the puzzling cases of organisms still functioning even after part of their brains were removed (e.g., Mike the Headless Chicken), where does this leave neuroscience's theoretical commitments at the start of the twentieth century? In the next section, I present the neuron doctrine as the primary perspective of neuroscience, which started in the early 1900s and continues through to today.

3.3 From Ramón y Cajal to McCulloch and Pitts

Shepherd (2010) describes the history of neuroscience as embracing many disciplines (e.g., neurology, neurophilosophy, and pharmacology), extending across species (e.g., worms, fish, and dogs), ranging across systems (e.g., sensory, motor, and central), and spreading across levels of organization (e.g., genes, cells, and circuits)—and that was just up until the 1950s. Had Shepherd's history of neuroscience extended through the 2020s, it would surely demonstrate a continuation of being a discipline wide in scope. While there seems to have always been significant diversity across neuroscience's subdisciplines, there has been sizable variation within those subdisciplines as well. In computational neuroscience, for example, Kording, Blohm, Schrater, and Kay (2020) identify 12 modeling goals exhibited by computational neuroscientists, such as developing models that are analytically tractable, behaviorally realistic, clinically relevant, and/or inspire experiments.

5 Of course, what counts as "evidence" within one framework or another is still debatable. See Kuhn on scientific paradigms (1962/1996) and Pigliucci (2010) on the demarcation problem of identifying science from nonscience (e.g., pseudoscience).
6 As others have speculated (Van Orden, Hollis, & Wallot, 2012), Mike probably still had a brainstem. So it was not a total decapitation, but it is more exciting to say it was.
7 For additional examples of maintained brain functioning after abnormalities and damage, see Lewin (1980) and Merker (2007).

Figure 3.1 How important is the brain? (a) Mike the Headless Chicken. In spite of having his head cut off, Mike was able to eat, socialize, etc. for another 18 months. (b) Four patients post-hemispherectomy. All patients maintained social and emotional capacities. Source: (a) Reprinted with permission from Troy Waters (2021); (b) Modified and reprinted with permission from Kliemann et al. (2021). CC BY 4.0.

What can be said to unite all of these various disciplines, goals, species, and systems under one heading? If anything is a prime contender as the answer to that question, then it would be the neuron doctrine. The neuron doctrine is defined here as consisting in three parts (cf. Glickstein, 2014; Shepherd, 2016): First, there are particular types of cells found in the nervous system, called neurons. Second, neurons are connected but do not fuse together, that is, they remain individual cells. Third, neurons are the "anatomical, physiological, metabolic, and genetic unit of the nervous system" (Waldeyer paraphrased in Shepherd, 2016, p. 2; italics in original). The neuron doctrine is typically most associated with Santiago Ramón y Cajal, who in 1887 provided evidence that the nervous system was not one continuous fiber—i.e., reticular theory—but was constituted by many individually connecting cells (Figure 3.2). Ramón y Cajal's discovery built upon decades of prior research, especially Theodor Schwann's cell theory (~1839), which maintained that all of the body's tissues are composed of individual cells, and Camillo Golgi's staining technique (Shepherd, 2016).

Figure 3.2 Pyramidal neurons. (a) Ramón y Cajal's drawing of pyramidal neurons in the rabbit cortex. (b) Stain of rabbit pyramidal neurons by Ramón y Cajal using Golgi's method. Such detailed images of neurons and their connections contributed to the end of the reticular theory in favor of the neuron doctrine. Source: Modified and reprinted with permission from DeFelipe (2015). CC BY 4.0.

This minimal description of the neuron doctrine does not do justice to its intriguing history. Though there should be no doubt as to Ramón y Cajal's contributions to neuroscience (especially neurophysiology), his work was certainly a case of building on the shoulders of giants, namely, Schwann's cell theory and Golgi's staining technique. With that said, it is arguable that the neuron doctrine was in the air, so to speak. Perhaps one of the last people one would associate with the neuron doctrine and neuroscience, Sigmund Freud was well on his way to developing a "neurone" theory of his own as early as 1895 (Freud, 1950):

The intention is to furnish a psychology that shall be a natural science: that is, to represent psychical processes as quantitatively determinate states of specifiable material particles, . . . The neurones are to be taken as the material particles. (Freud in Kitcher, 1995, p. 52)

Despite the fact that there seemed to be increasing research converging on the neuron doctrine, it was not accepted even by one of the most celebrated contributors to the theory's development. The 1906 Nobel Prize in Physiology or Medicine was awarded to Ramón y Cajal and Golgi. However, Golgi rejected the neuron doctrine in favor of the reticular theory, which adhered to the view that the nervous system was one continuous fiber and not constituted by individual cells (Golgi, 1906/1999). Today, the neuron doctrine is viewed by some as a fundamental principle for neuroscience (e.g., Glickstein, 2006) and the philosophy of neuroscience (e.g., Gold & Stoljar, 1999). Others accept that it is important but needs to be enhanced by other principles (e.g., neural fields; Kozma & Freeman, 2016), while others argue that it is no longer central to advancing neuroscience (Guillery, 2005) and "no longer encompasses important aspects of neuron function" (Bullock et al., 2005, p. 791). In spite of the early controversies and contemporary criticisms, the neuron doctrine in its simplest terms—specifically, that the neuron is the primary functional unit of the nervous system—played a crucial role in the development of modern neuroscience. This was especially true in the mid-twentieth century and is exemplified in the achievements of Alan Hodgkin and Andrew Huxley, and Warren McCulloch and Walter Pitts.

3.3.1 Hodgkin-Huxley model

The canonical Hodgkin and Huxley (1952) model of action potentials in the squid giant axon is considered not just "the single most successful quantitative model in neuroscience" (Koch, 1999, p. 171), but also "one of the great success stories in biology" (Häusser, 2000, p. 1165). The fully defined Hodgkin-Huxley model is a set of differential equations defined by the following four-dimensional model (Equations 3.1–3.4):

$$I = C_M \frac{dV}{dt} + \bar{g}_K n^4 (V - V_K) + \bar{g}_{Na} m^3 h (V - V_{Na}) + \bar{g}_l (V - V_l) \tag{3.1}$$

where

$$\frac{dn}{dt} = \alpha_n (1 - n) - \beta_n n, \tag{3.2}$$

$$\frac{dm}{dt} = \alpha_m (1 - m) - \beta_m m, \tag{3.3}$$

$$\frac{dh}{dt} = \alpha_h (1 - h) - \beta_h h \tag{3.4}$$

Key elements of the model are: I (total membrane current as a function of time and voltage), C_M (cell membrane capacity per unit area), dV (change of membrane potential from resting value), dt (change over time), the g's (conductances for the sodium [Na] and potassium [K] ion currents), and l (the leak current). For detailed explanations of this model see Gerstner, Kistler, Naud, and Paninski (2014), as well as Koch (1999) for discussion and further references. For now, it is important to understand that this model treats the action potential as an "all-or-none" event (e.g., Bear et al., 2016; Churchland & Sejnowski, 1992/2017; Eagleman & Downar, 2016). The action potential is treated as a binary event that occurs within distinctly defined timescales (Figure 3.3). Moreover, those timescales have a lower boundary, specifically, 10 milliseconds (ms) in the canonical Hodgkin-Huxley model (Hodgkin & Huxley, 1952, p. 528; Koch, 1999, p. 334; Marom, 2010, p. 23). What that means is that the action potential of a neuron (i.e., its "spike" of activity) is treated within the Hodgkin-Huxley model as occurring at least 10 ms from initiation to termination of all involved processes (Marom, 2010, p. 22; Favela, 2022). Given that this work earned Hodgkin and Huxley a Nobel Prize in 1963 (shared with John Eccles), it is no surprise that the Hodgkin-Huxley model is the standard for single-neuron models and foundational to those learning about modeling in neuroscience (e.g., Gerstner et al., 2014; Koch, 1999).
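To give a concrete sense of how Equations 3.1–3.4 fit together, here is a minimal simulation sketch in Python (the figure below was produced in MATLAB; Python is used here only for illustration). The conductances, reversal potentials, and rate functions are the standard squid-axon values from Hodgkin and Huxley (1952), with voltage expressed as depolarization from rest; the forward-Euler scheme, step size, stimulus amplitude, and variable names are my own illustrative choices rather than anything prescribed by the model.

# A minimal forward-Euler integration of Equations 3.1-3.4 using the standard
# squid-axon values reported by Hodgkin and Huxley (1952). Voltage is expressed
# as depolarization from rest, following their original convention.
import math

g_K, g_Na, g_l = 36.0, 120.0, 0.3      # maximal conductances (mS/cm^2)
V_K, V_Na, V_l = -12.0, 115.0, 10.613  # reversal potentials (mV)
C_M = 1.0                              # membrane capacitance (uF/cm^2)

# Voltage-dependent opening/closing rates for the gating variables n, m, and h
def alpha_n(V): return 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
def beta_n(V):  return 0.125 * math.exp(-V / 80)
def alpha_m(V): return 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
def beta_m(V):  return 4.0 * math.exp(-V / 18)
def alpha_h(V): return 0.07 * math.exp(-V / 20)
def beta_h(V):  return 1.0 / (math.exp((30 - V) / 10) + 1)

dt, n_steps = 0.01, 2000               # 0.01 ms steps, 20 ms of simulated time
V, n, m, h = 0.0, 0.32, 0.05, 0.6      # approximate resting-state values
I_ext = 10.0                           # sustained injected current (uA/cm^2)

trace = []
for _ in range(n_steps):
    # Ionic currents from Equation 3.1, rearranged to solve for dV/dt
    I_ion = g_K * n**4 * (V - V_K) + g_Na * m**3 * h * (V - V_Na) + g_l * (V - V_l)
    V += dt * (I_ext - I_ion) / C_M
    # Gating-variable kinetics, Equations 3.2-3.4
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    trace.append(V)

print(f"Peak depolarization: {max(trace):.1f} mV")

Plotting the resulting trace recovers the spike shape shown in Figure 3.3: a sharp depolarization of roughly 100 mV, followed by repolarization and a brief refractory dip, unfolding over several milliseconds.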

Figure 3.3 Hodgkin-Huxley model. The basic shape of an action potential as produced by the Hodgkin-Huxley model (created with MATLAB [MathWorks®, Natick, MA]). The x-axis captures the entire range of time, from start to finish, over which an action potential occurs, which, according to the model, is 10 ms.

The model and its development are exemplars of how to do neuroscience: identifying an appropriate model organism (i.e., the squid) and striking a balance between mathematical simplicity and biological realism (e.g., identifying the ionic mechanisms that underlie action potentials). In addition, their work pushed technology forward (i.e., the voltage clamp) and facilitated the development of hypotheses and avenues for research (Häusser, 2000). While Hodgkin's and Huxley's work had an enormous impact on neuroscience up through the 1950s and 1960s, especially in the areas of neurobiology and neurophysiology, the same cannot be said for its influence on cognitive psychology and cognitive science.8 Another model of single neurons, however, did have a major influence on cognitivism, namely, the McCulloch-Pitts model.

3.3.2 McCulloch-Pitts model

Throughout the 1940s to 1960s, Hodgkin and Huxley conducted research on nervous systems that landed squarely within the areas of neurobiology and neurophysiology. Many of those years overlapped with others who were developing the foundations of computation, such as Alonzo Church, Alan Turing, and John von Neumann. Turing was especially focused on solving the problem of universal computability, which eventually led him to develop what would come to be known as "Turing machines" (Turing, 1937, 1938). In short, a Turing machine is a hypothetical setup that includes a tape divided into equal sections that can be marked by a symbol, a head to read the symbols on each section of the tape, and a table with a set of rules (Figure 3.4). The tape will have symbols in the form of 1, 0, or nothing (technically, as long as they are consistently implemented it could be any two symbols, e.g., a black square and a red circle, a picture of a dog and a cat, and so on). As the tape rolls by the head, depending on what state the head is in, its table of rules will instruct it to either erase the symbol, write a symbol, or move the tape. Turing proved that anything that can be computed can be computed via this simple setup. It is this setup that is the basis for modern computation. All computers—such as your Mac or PC—are essentially Turing machines that function by way of a central processing unit (head and tape) and combinations of bits (table and symbols).

While Turing made incredible contributions to various fields, many know him by way of the "Turing test," or the "imitation game." In its original formulation (Turing, 1950), the imitation game is a thought experiment whereby a human (the "interrogator") sits in a room apart from two others and, via conversation and questions, must decide whether either of them is a machine. If the human does not judge either to be a machine, but one of them in fact is, then, according to Turing, that machine has passed the test and has demonstrated that it is intelligent. Even though Turing is widely known via the imitation game and recognized as the architect of the modern computational theory of mind (Rescorla, 2020), some have argued that he did not directly discuss how thinking could be realized by Turing machines, let alone how thinking in brains could be computational (Piccinini, 2004). If Turing did not provide the foundations of the computational theory of mind, especially insofar as such work provided the impetus for artificial intelligence, then where should the credit go?

8 As a simple measurement of their respective influences, Margaret Boden's (2006) expansive and thorough book on the history of cognitive science mentions Hodgkin and Huxley on 12 pages, while McCulloch and Pitts are mentioned on 379 pages! On the other hand, it is arguable that Hodgkin's and Huxley's work has had a greater influence than McCulloch's and Pitts' on more dynamical approaches to cognitive science (e.g., Favela, 2020; Kelso, 1995; Port, 2006; Ward, 2002).

Figure 3.4 Turing machine. Alan Turing conceived of an imaginary machine consisting of a tape with spaces to mark symbols, a head to read those symbols, and a table with a set of rules for what to do with those symbols depending on what state the system is currently in. Source: Modified and recreated with permission from Acosta (2012). CC BY 3.0.
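The tape-head-rule-table setup just described can also be made concrete in a few lines of code. The following Python sketch is purely illustrative: the rule table (a toy machine that scans a block of 1s, appends one more 1, and halts), the state names, and the function itself are hypothetical choices of mine, not anything drawn from Turing's papers.

# A hypothetical, minimal Turing machine simulator illustrating the tape/head/rule-table setup.
def run_turing_machine(tape, rules, state="start", head=0, max_steps=100):
    """Run a rule table of the form {(state, symbol): (write, move, next_state)}."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)        # blank cells read as 0
        write, move, state = rules[(state, symbol)]
        tape[head] = write                # write a symbol to the current cell
        head += 1 if move == "R" else -1  # move the head one cell right or left
    return [tape[i] for i in sorted(tape)]

# Rule table: scan right over a block of 1s, then append one more 1 and halt.
rules = {
    ("start", 1): (1, "R", "start"),
    ("start", 0): (1, "R", "halt"),
}

print(run_turing_machine([1, 1, 1], rules))  # [1, 1, 1, 1]

The point is simply that reading a symbol, writing a symbol, moving the head, and switching state according to a fixed table is all there is to the machine; which problems it solves depends entirely on which table one writes down.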

It can be quite challenging to precisely assign credit for developing the foundations of the project of artificial intelligence. A number of brilliant minds worked together on various projects during the relevant periods of time, such as Church, Marvin Minsky, Turing, Shannon, Herbert Simon, von Neumann, and Weaver (Buchanan, 2005; Cobb, 2020; Wooldridge, 2021). As a result, there should be no doubt that much crosspollination of ideas occurred. Building on Turing's work on universal computability and Shannon's and Weaver's information theory (Shannon & Weaver, 1949/1964), von Neumann was certainly one of the first to directly engage with the project of artificial intelligence (Cobb, 2020; Wooldridge, 2021). A great deal of von Neumann's research that began in the 1940s was synthesized in a relatively short work, his 1958 "The Computer and the Brain," which provided a "mathematician's point of view" (1958/2012, p. 1) on the "logic" of brains and of computers, with the main aim of the former facilitating greater understanding of the latter. While von Neumann purports to take a brain-first perspective on intelligence—namely, that illuminating the logic of brains could provide better understanding of logic in computers—the fact is that his "mathematician's point of view" already smuggled in a computational interpretation of brain function. His discussion of the brain begins as follows:

The basic component of this system [i.e., the brain] is the nerve cell, the neuron, and the normal function of a neuron is to generate and to propagate a nerve impulse. . . . The nervous pulses can clearly be viewed as (two-valued) markers . . .: the absence of a pulse then represents one value (say, the binary digit 0), and the presence of one represents the other (say, the binary digit 1). . . . It is, then, to be interpreted as a marker (a binary digit 0 or 1) in a specific, logical role. (von Neumann, 1958/2012, pp. 40 and 43; italics in original)

Von Neumann clearly states here that the brain's computational powers come from its neurons (Figure 3.5a) and that neuron activity is essentially binary, like transistors (also see von Neumann, 1958/2012, pp. 45–50; Figure 3.5a, b). This binary computational understanding of neurons defended by von Neumann was likely due in large part to the work of Warren McCulloch and Walter Pitts. In the 1940s, McCulloch and Pitts (e.g., McCulloch & Pitts, 1943) put forward a model that treated the brain in computational terms. Specifically, they utilized a formalism to define the basic functional units of the brain (i.e., neurons) as logic circuits (Figure 3.5c).9 The power of the McCulloch-Pitts model of single-neuron activity provided a rich foundation for the computational theory of mind (Cobb, 2020; Rescorla, 2020; Shepherd, 2010). This was especially true of neural network approaches to artificial intelligence, such as connectionism (Farmer, 1990; Rescorla, 2020).

The McCulloch-Pitts neuron abstracts away from just about all biological features of neurons and idealizes them as simple binary units. Though the McCulloch-Pitts neuron can appear quite simple, McCulloch and Pitts were able to utilize their model to express a range of Boolean logic operators, such as AND, OR, and NOT (Cobb, 2020; McCulloch & Pitts, 1943). Others argued that a sufficiently large number of McCulloch-Pitts neurons could be capable of universal computability, like a Turing machine (Koch & Segev, 2000, p. 1171; others disagree, e.g., Piccinini, 2004). A few years after their 1943 paper, McCulloch and Pitts published another impactful paper, this time on the growing field of pattern recognition (Pitts & McCulloch, 1947). This work further substantiated the power of their neuron model by demonstrating how it could successfully produce invariant pattern recognition in spite of distortions in the stimuli.

While quite an accomplishment, the McCulloch-Pitts model would soon be modified in order both to more closely resemble the activity of biological neurons and to increase in computational power. The first major advancement was the perceptron. Originally developed by Frank Rosenblatt (1958) and later by Minsky and Papert (1969/1988), the perceptron model was more biologically realistic than the McCulloch-Pitts model, both in terms of its inputs and its output. Whereas the former model has binary inputs and a binary output after a specific threshold is reached (Figure 3.5c), perceptrons have weighted inputs and a gradual output in terms of an activation function (Figure 3.5d), which allows for more sophisticated computations. While the basic perceptron model has been enhanced over the decades—e.g., improved input weights for particular tasks and advanced network connection configurations—even the units of the most sophisticated deep neural networks of today do not differ substantially from the classical models of the 1950s to 1960s (Kuntzelman et al., 2021).
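The contrast between the two kinds of units can be sketched in a few lines of code. In the following illustration, the McCulloch-Pitts unit sums binary inputs and outputs 1 only at or above a threshold, while the perceptron takes real-valued weights and, here, a sigmoid activation in place of the hard threshold. The particular weights, thresholds, and the use of a sigmoid are my own illustrative choices, and inhibition is simplified by inverting an input rather than by McCulloch and Pitts' absolute inhibitory connections.

# Sketches of the two single-neuron models contrasted above; values are illustrative only.
import math

def mcculloch_pitts(inputs, threshold):
    """Binary unit: output 1 if and only if the sum of binary inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Boolean operators expressed as thresholds over binary inputs
AND = lambda x, y: mcculloch_pitts([x, y], threshold=2)
OR  = lambda x, y: mcculloch_pitts([x, y], threshold=1)
NOT = lambda x:    mcculloch_pitts([1 - x], threshold=1)  # inhibition simplified as input inversion

def perceptron(inputs, weights, bias):
    """Graded unit: weighted sum passed through a sigmoid activation function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

print(AND(1, 1), OR(0, 1), NOT(1))                                        # 1 1 0
print(round(perceptron([0.9, 0.2], weights=[2.0, -1.0], bias=-0.5), 3))   # graded output in (0, 1)

Even in this toy form, the McCulloch-Pitts unit's family resemblance to a transistor's two-state logic (Figure 3.5b) is plain, which is precisely the point of the tradition discussed next.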

9 For those who question the contributions philosophers have made to science in the twentieth century—especially to the brain sciences—note that McCulloch and Pitts' groundbreaking 1943 paper defined their theory in the formal symbol systems of Rudolf Carnap, Bertrand Russell, and Alfred North Whitehead (e.g., McCulloch & Pitts, 1943, p. 118). Thus, the foundations of modern artificial intelligence, especially of the neuronal network kind, have analytic philosophy and the philosophy of science to thank to a great degree.

Figure 3.5 Biological neuron, transistor, and two neuron models. (a) Pyramidal neuron from the rat prefrontal cortex imaged via photomicrography. According to the neuron doctrine, the neuron is the basic functional unit of the nervous system. Neuronal activity is typically understood as all-or-nothing action potentials, or "spikes". (b) Transistor diagram (N.B. the NPN transistor schematic symbol is rotated 90° counterclockwise to demonstrate the same input/output direction as the other figure images). Transistors are the basic component of modern electronic devices. As current enters the "collector," if it is amplified by a higher current in the "base," then the current will flow out of the emitter. Thus, the output flow of current will either be in an "off" (0) state or an "on" (1) state (image inspired by Figure 6.1 in Lytton, 2002, p. 89). (c) McCulloch-Pitts neuron model. McCulloch and Pitts (1943) aimed to understand the brain in logical terms, that is, as a collection of units that functioned as logic gates. Accordingly, they developed a model of neurons with a digital output (i.e., on or off, 1 or 0, etc.). (d) Perceptron neuron model. This model of single neurons can be viewed as an advanced version of the McCulloch-Pitts model. The perceptron model adds weighted values to its inputs, and instead of a binary output in terms of a threshold, it has a gradual output in terms of an activation function. (Images c and d inspired by Figure 3.1 in Kuntzelman et al., 2021.) Source: (a) Reprinted from Gonzalez-Islas and Hablitz (2003) with permission; copyright 2003 Society for Neuroscience.

3.3.3 The McCulloch-Pitts tradition versus the Hodgkin-Huxley tradition

Where does this discussion leave us regarding the path from cognitivism to neuroscience? It is helpful to return to an earlier part of this chapter and the discussion of perspectivism. First, it is doubtful that a neat linear timeline of the development of artificial intelligence can identify who is to be credited for what contributions and what theories and models informed who and what other work when. Still, it is safe to say that there was crosspollination among researchers engaging with the topic of intelligence (i.e., cognition, mental activity, minds, etc.). At one end, powerful sets of formalisms were being developed and proven with regard to topics on the foundations of computability (e.g., Turing). At the other end, impactful experimental work was elucidating the biological mechanisms of nervous systems (e.g., Hodgkin and Huxley). With such ideas in the air, it is not surprising—at least in retrospect—that researchers would attempt to connect those areas (i.e., computability and neurobiology) in order to make progress on issues of intelligence, artificial and otherwise. Reconstructed, the rationale looks like this:

Question 1: "What do we want to understand?"
Answer 1: "Intelligence (i.e., cognition, mental activity, minds, etc.)."

Question 2: "What is something that exhibits intelligence?"
Answer 2: "Humans."

Question 3: "Where is human intelligence located?"
Answer 3: "Their brain."

Question 4: "What is the primary functional unit of brains?"
Answer 4: "Neurons."

Question 5: "What is a clear example of intelligence?"
Answer 5: "Solving formal logic problems."

Question 6: "Is there an abstract way to subsume all problem-solving under one framework?"
Answer 6: "Yes, any problem that can be systematically represented by symbols (e.g., 1 and 0) and have those symbols acted on by consistent rules (e.g., first-order logic) can be solved by a computing (i.e., Turing) machine."

Question 7: "What is the basic function of a neuron?"
Answer 7: "Neurons are binary: they either spike (i.e., action potential) or they do not spike."

Question 8: "Can neurons be represented as being in either an 'off' (0) state or an 'on' (1) state?"
Answer 8: "Yes."

Question 9: "If neuron activity can be symbolically represented (e.g., binary 0 or 1), and if neuron activity follows consistent rules, then are brains computing machines?"
Answer 9: "Yes, brains are essentially computing machines: they involve rules acting on representations."

As this series of questions and answers attempts to make clear, a vocal group of researchers in the mid-twentieth century took a particular perspective on intelligence, that is, as essentially being a form of computation. Next, another perspective was taken on the particular case of human intelligence, that is, the neuron doctrine. It is from this combined set of perspectives that (as discussed in Chapter 2) cognitivism was born: The "cognitive" qua action, brain states, mental states, and perception are essentially forms of information processing. Cognitive information processing is defined by computations acting on representations. While such an information processing understanding of intelligence clearly informs the cognitivism at the heart of cognitive psychology and cognitive science, it did not have to be so. In many ways, this kind of information processing form of cognitivism was part of a particular lineage that I refer to as the McCulloch-Pitts tradition. As discussed earlier, the McCulloch-Pitts model abstracted away just about all biologically real features of neurons, thereby reducing their essence to mere binary switches—really, in no substantial way different from a transistor (Figure 3.5a, b, c). Consequently, it did not require a leap from biological brains to artificial computers; brains really just are wet computers comprised of billions of binary switches.

However, there was another lineage that the cognitive sciences could have been a part of, namely, the Hodgkin-Huxley tradition. Whereas the McCulloch-Pitts tradition—including perceptrons and other artificial neural network approaches to come, such as connectionism—has been guided by an information processing perspective on intelligence and its biological realizations, the Hodgkin-Huxley tradition embraced a different perspective. The Hodgkin-Huxley tradition can be understood as taking a perspective on intelligence and its biological substrates that stresses biological realism and dynamics. The McCulloch-Pitts (McCulloch & Pitts, 1943) model was the first published single-neuron model, appearing about 10 years before the Hodgkin-Huxley model (Hodgkin & Huxley, 1952). Although McCulloch was a neurophysiologist, he was more interested in modeling the neuron's abstract (or static) properties in terms of Boolean logic and propositions (Cobb, 2020). Hodgkin and Huxley were interested in modeling single-neuron dynamics with emphases on electrophysiological properties, such as ion channels (Schwiening, 2012). In fact, their classic 1952 paper does not use the terms "compute," "information," or "representation" outside of descriptions of their differential equations.

From the 1950s through the 2010s, the Hodgkin-Huxley tradition did not catch on in the cognitive and neural sciences to the degree that the McCulloch-Pitts tradition has. This is in large part due to the popularity of computers and computer science starting in the mid-twentieth century. Cognitivism, by way of cognitive psychology and cognitive science, further facilitated the McCulloch-Pitts tradition by favoring explanations of phenomena in computational terms. As noted previously, with regard to research on cognition, the Hodgkin-Huxley tradition has mostly been consigned to more fringe approaches in the cognitive sciences and psychology, especially dynamical approaches (e.g., Favela, 2020; Kelso, 1995; Port, 2006; Ward, 2002).
Notwithstanding these historical trends, the Hodgkin-Huxley tradition may be exhibiting a resurgence, as some have drawn attention to the current "dynamical renaissance" in neuroscience (Favela, 2021; Shenoy, Sahani, & Churchland, 2013). With that background in place, I am now positioned to explain why ecological psychology has viewed cognitivism as visiting its sins upon neuroscience.

3.4 Cognitivism finds a new home

Two initial points need to be made. First, it is worth stressing again that by "neuroscience" I refer to a subset of its many disciplines and topics of investigative interest. With that said, contemporary neuroscience can be viewed as in many ways a continuation of the McCulloch-Pitts tradition, and therefore, as adhering to cognitivism and its commitment to understanding cognition and its physical realizers in information processing terms. This is especially true of neuroscience research on cognition, action, and perception. Second, many areas of neuroscience have been developing at the same time as cognitive psychology and cognitive science (Miller, 2003; Watrin & Darwich, 2012). Still, as I will now argue, while there was much crosspollination of concepts and theories between computer science and neuroscience during the early days of artificial intelligence, cognitivism has come to have more impact on neuroscience than neuroscience has had on cognitivism.

3.4.1 Is neuroscience really cognitivist?

In a recent opinion piece, robotics researcher Rodney Brooks stated that

Computational neuroscience is the respectable way to approach the understanding of [intelligence, thought, cognition] in all animals, including humans. And artificial intelligence, the engineering counterpart to neuro-"science," likewise assumes that to build an intelligent system we should write computer programs. In John McCarthy's proposal for the famous 1956 Dartmouth Workshop on AI, the field's foundational event, he argued precisely this position on the very first page. (Brooks, 2021)

As I interpret Brooks—a well-known critic of cognitivism (e.g., Brooks, 1991)—he is stating that neuroscience is the way to understand cognition in animals (of course, this includes humans), that artificial intelligence aims to take findings from neuroscience and implement them in artificial systems, and that this was the original aim of artificial intelligence as stated by its founders. In the same opinion piece, Brooks goes on to make the point that although computationalism (i.e., the computational theory of mind, cognitivism, etc.) is "deeply entrenched," it has exceeded its role as the guiding set of principles for research on cognition. He argues, in short, that not everything is a computer, and it is likely that brains are not computers.

Some argue—I have heard it too many times—that cognitivism has not been central to neuroscience, especially not in the 2020s. A reviewer once commented on a recent manuscript of mine: "I'm tired of people saying that the currently dominant view is to treat the brain/mind in terms of computations acting on representations. Nobody is a Fodorian anymore!"10 It may be true that "nobody" is a "Fodorian" in many ways. For example, Fodor's notion of modularity (Fodor, 1983) is not what network neuroscientists mean by a neural network module (e.g., Sporns & Betzel, 2016). So is Brooks just setting up a straw person that nobody actually defends? No. There is ample evidence that cognitivism is at least implicit in much contemporary neuroscience research and at most explicitly stated as

10 This is not the exact quote. I paraphrased their actual comment with the hopes of preserving the reviewer’s anonymity.

playing various roles, such as generating hypotheses and interpreting experimental results. I do not intend to provide a literature review, but I think a number of examples will support my point.

First, it is clear that even "Fodorian" approaches to cognition are still in play. In particular, the language of thought hypothesis (LOT; Fodor, 1975), which, in short, is the claim that thinking occurs in a computational-representational mental language (i.e., "mentalese;" Rescorla, 2019). Recent proponents include Susan Schneider's (2011) "pragmatist" development of LOT, Steven Piantadosi's, Tenenbaum's, and Goodman's (2016) integration of Bayesian modeling and LOT, and Quilty-Dunn's, Porot's, and Mandelbaum's (2023) survey of purported evidence that while LOT is not the only game in town, it is the best one. Second, while not LOT per se, cognitivism is referred to by name as informing contemporary thinking about the brain. In their foundational work on embodied cognition, Francisco Varela, Evan Thompson, and Eleanor Rosch argued that an "important effect of cognitivism is the way it has shaped current views about the brain" (1991, p. 44), after which they provide an example from a neuroscience textbook that clearly states so (p. 263). While not referred to as cognitivism per se, defenders of computationalism abound in neuroscience. In recent work defending the computational theory of cognition, Gualtiero Piccinini asserts that "everyone is (or should be) a computational neuroscientist, at least in the general sense of embracing neural computation" (2020, p. 223) because "neural computation (simpliciter) explains cognition" (p. 297). While such a claim may seem bold in certain circles, it is quite uncontroversial in others. For example, the very first sentence of Christof Koch's "Biophysics of Computation" (1999), an influential work in neuroscience from the 1990s, begins with the statement:11

The brain computes! This is accepted as a truism by the majority of neuroscientists engaged in discovering the principles employed in the design and operation of nervous systems. What is meant here is that any brain takes the incoming sensory data, encodes them into various biophysical variables . . . and subsequently performs a very large number of ill-specified operations, frequently termed computations, on these variables to extract relevant features from the input. . . . The present book is dedicated to understanding in detail the biophysical mechanisms responsible for these computations. Its scope is the type of information processing underlying perception and motor control, occurring at the millisecond to fraction of a second time scale. (Koch, 1999, p. 1; italics added)

Cognitivism's influence has also been explicitly noted in specific neuroscience subdisciplines as well. Alfonso Caramazza and Max Coltheart state that since at least the mid-1980s, "cognitive neuropsychology awoke from its slumbers" (2006, p. 3) due to the influence of concepts and theories from the cognitivism that emerged from the cognitive revolution. Nowhere near as positive about the influence of cognitivism, Howard Cromwell and Jaak Panksepp argue that the centrality of cognitivism has been detrimental to various areas of behavioral neuroscience, including treatments of neurobehavioral disorders and psychiatric illnesses, research on nonhuman animal cognition, and research on emotion (2011). In line with Cromwell and Panksepp, Cooper and Shallice (2010)

11 I came across this quote while reading Maley and Piccinini (2015), who cite that first sentence as well.

argue that the increasing centrality of cognitivism, especially "cognitive concepts," is hindering progress in neuroscience. On the other hand, over the past 30 years, various researchers have argued that neuroscience needs the concepts (and methods and theories) of cognitivism. Concerning cognitive neuroscience, Michael Gazzaniga and colleagues (2014) claim that "the paradigms of cognitive psychology [i.e., cognitivism] have provided the tools for making more sophisticated analyses of the behavioral deficits observed following brain injury" (p. 83). Howard Gardner (1985, pp. 285–294) and George Miller (2003, p. 142) argue that because neuroscience alone will not be able to provide adequate explanations of brain and cognition, neuroscience will need to appeal to cognitivism's concepts and theories.

From this small sample of various relevant literatures—including behavioral neuroscience, cognitive neuroscience, cognitive neuropsychology, and cognitive science—it appears that cognitivism has been understood as central to neuroscience since at least the 1980s, that its primacy has been viewed as both beneficial and detrimental, and that there needs to be and/or will be more cognitivism in neuroscience's future. Whether the assessments are negative or positive, the fact seems to be that cognitivism is not just at the periphery of much neuroscience research but is playing quite significant roles. The next section concludes this chapter with discussion of what this state of affairs, namely, cognitivism's place in neuroscience, means for neuroscience's relationship with ecological psychology.

3.4.2 Yes, contemporary neuroscience is very much cognitivist

In the previous section, I presented a small—yet representative (e.g., neuroscience luminaries such as Caramazza, Gazzaniga, Koch, and Panksepp)—sample of various areas of neuroscience from the past three decades that motivate viewing cognitivism as having a substantial place in contemporary neuroscience research. In this section, I conclude Chapter 3 by saying a bit more about the place of cognitivism in contemporary neuroscience and how that presence sustains many of the reasons for ecological psychology and neuroscience's continued conflict and incompatibility. I begin by further motivating viewing neuroscience as being committed to cognitivism by calling attention to the centrality contemporary neuroscience research places on the concepts of computation and representation. From McCulloch and Pitts (1943) and von Neumann (1958/2012) in the 1940s and 1950s, to Koch (1999) and Gazzaniga (Gazzaniga et al., 2014) in the 1990s and 2010s, computation has been the fundamental concept in explaining brain function. In the 2020s alone, there are countless neuroscience research articles with the term "computation" (or a near variation) in the title, for example:

• "Computation through neural population dynamics" (Vyas, Golub, Sussillo, & Shenoy, 2020)
• "Dendritic action potentials and computation in human layer 2/3 cortical neurons" (Gidon et al., 2020)
• "Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity" (Jazayeri & Ostojic, 2021)
• "Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree?" (Jones & Kording, 2021)
• "Neural trajectories in the supplementary motor area and motor cortex exhibit distinct geometries, compatible with different classes of computation" (Russo et al., 2020).

The same is true for neuroscience research articles with the term "representation" (or a near variation) in the title, for example:

• "Representational drift in primary olfactory cortex" (Schoonover, Ohashi, Axel, & Fink, 2021)
• "Representational geometry of perceptual decisions in the monkey parietal cortex" (Okazawa, Hatch, Mancoo, Machens, & Kiani, 2021)
• "Revealing the multidimensional mental representations of natural objects underlying human similarity judgments" (Hebart, Zheng, Pereira, & Baker, 2020)
• "The representation of finger movement and force in human motor and premotor cortices" (Flint et al., 2020)
• "Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments" (Cross, Cockburn, Yue, & O'Doherty, 2021).

What are we to make of the regular use of "computation" and "representation" in neuroscience research that appears in the purported top peer-reviewed science journals in the world (e.g., Cell, Nature, Neuron, and Science)? On the one hand, we can say that the terms are used loosely, for example, "The neuron's activity is described as if it were computing." But that does not seem likely. No, I think neuroscientists literally mean that their target phenomena (e.g., neural network, single neuron, etc.) are computing—see the earlier quotes from Piccinini (2020) and Koch (1999, p. 1), "The brain computes!" The more interesting question is, if neuroscientists intend some sort of literal computation, then how do they cash out computation? Do they define a neuron's computation in terms of a binary Boolean function, like the McCulloch-Pitts neuron (McCulloch & Pitts, 1943)? Perhaps they define it in terms of coding, like the encoding and decoding processes of Shannon and Weaver information theory (Shannon & Weaver, 1949/1964)? While I am confident that neuroscientists really mean it when they describe dendrites, single neurons, and neural populations as "computing," I am not confident that many of them have a clear definition of computing. Neural coding, for example, can be cashed out in terms of Shannon-Weaver information theory; but do neuroscientists really intend that kind of coding? Romain Brette (2019) has made a compelling case against the appropriateness of appealing to "coding" when explaining neural activity. Brette concludes that even as a metaphor, coding is an incompatible concept for its various applications in neuroscience. If neurons are not like transistors and if one of the most popular concepts, "coding," is inappropriate, then we are still left wondering how "computation" is to be cashed out. Perhaps by "compute" neuroscientists just mean information processing (e.g., Koch, 1999). But that cannot work because computation is part of how information processing is explained!12

Does "representation" fare better than "computation"? No. The concept of "representation" in neuroscience is even worse off than "computation." As Von Eckardt stated over 25 years ago, the computation part of cognitivism is on solid ground as it is derived from a well-developed, widely accepted body of knowledge in computer science and mathematics (1995, p. 144). Thus, as stated earlier, the question is whether or

12 See Maley (2022) for a recent attempt at a theoretically justified and empirically tractable approach to treating the brain as literally being a computer.

not brains do that kind of computation, not what computation is. Representation, on the other hand, is far from understood or defined in a generally accepted way. A recent study by Edouard Machery and myself provides empirical evidence to support this point (Favela & Machery, 2023). Applying elicitation methodology to an international group of participants comprised of researchers including neuroscientists, we found that while the vast majority responded "Yes" to the question, "Does cognition involve representations?", the results suggest that they exhibit uncertainty about what sorts of brain activity involve representations or not, and that they prefer to characterize brain activity in causal, nonrepresentational terms. If researchers explicitly adhere to the claim that representation is central to cognition, then it seems quite concerning both that they are unsure how to apply the term and that they exhibit preferences for nonrepresentational descriptions of brain activity.

One response to such claims as those made by Favela and Machery (2023) and Von Eckardt (1995) stems from the idea that cognitive psychologists, cognitive scientists, and philosophers have argued that the representation part of information processing in cognitivism just comes for free when you have computation. As Fodor put it, there is "no computation without representation" (1975, p. 34). Be that as it may regarding formal systems, such as a Turing machine, it is far from clear that neural systems that "compute" get the same freebie. Even if brains compute, that is, even if neurons just are McCulloch-Pitts-like transistors when it comes down to it, it does not seem straightforward that the issue of representation is addressed. For example, the symbol grounding problem (discussed in Chapter 2; Harnad, 1990) of explaining how semantic content attaches to symbols remains for neurons as well.

Thus far, I have attempted to motivate the claim that contemporary neuroscience adheres to cognitivist commitments. Specifically, that cognition and the systems that realize it (i.e., brains) are essentially information processing systems that are defined in computational and representational terms. Additionally, I have argued that there is no clear and widely-accepted definition of "computation" or "representation" used across the relevant neurosciences. These issues do not need to be worked out in order to address the remaining question of this chapter, "Why are ecological psychology and neuroscience irreconcilable?" The answer is: Ecological psychology and neuroscience are irreconcilable because neuroscience has embraced the very sins that made ecological psychology irreconcilable with cognitivism.

The first sin is the critique from disembodiment (Chapter 2, Section 2.1.2). While neuroscience is not as vulnerable to this critique as cognitive psychology and cognitive science are, there remain critical shortcomings to address. Neuroscience, by way of its very name, gives the impression of an "embodied" approach to cognition, mental activity, etc. That is to say, it is the science of nervous systems, especially brains. Nervous systems are physical entities that are a part of bodies. Thus, studying nervous systems makes neuroscience embodied by default. Consequently, it is reasonable to conclude that neuroscience cannot be accused of, for example, Cartesian dualism about minds and bodies. However, by focusing on nervous systems, and to a larger extent brains, neuroscience runs the risk of adhering to "Cartesian materialism" (Solymosi, 2011).
Tibor Solymosi, building on Daniel Dennett's argument (1991), describes Cartesian materialism as the view that, like the mind/brain identity theory (Smart, 2017), pinpointing the neural substrates of mental phenomena is sufficient to explain those phenomena. Additionally, a prime target of investigation is discovering the location of consciousness, that is to say, revealing where our mind is located in our brain. Dennett calls this Cartesian materialism because it asserts that there is a Cartesian theatre in the brain from which the mind's eye perceives

the world. Solymosi emphasizes the role it plays in neurophilosophy. Here, I appeal to Cartesian materialism as a danger that ecological psychology can accuse neuroscience of running unless it takes a truly embodied approach (e.g., Chemero, 2009; Varela et al., 1991).

The next set of sins comprises the critiques from uncashed checks. The ecological psychologist can claim that neuroscience runs a stronger risk of exhibiting these three sins. First, the sin/uncashed check from innateness: Like the cognitivist (e.g., linguists; Chomsky, 1959), neuroscience runs into the challenge of explaining the source of computational rules and operations. Second, the sin/uncashed check from meaning: As mentioned previously (e.g., the symbol grounding problem), neuroscience has not provided solid explanations of the way(s) representations provide the meaningful components for the computations to act on. Finally, the third sin/uncashed check from intelligence: Like other cognitivist research programs, neuroscience appears to take out loans of intelligence regarding crucial aspects of brain and mind functioning. For example, in addition to grounding meaning in representations, there is the issue of how those representations are intelligible to the brain/cognitive/neural system. In other words, there is no uncontroversial or widely-accepted account of where it all comes together for the system.13 Consequently, though neuroscience may not be as sinful as other disciplines that adhere to cognitivist commitments (e.g., cognitive psychology and cognitive science) regarding the sin of disembodiment, neuroscience is rather sinful regarding the three uncashed checks of innateness, meaning, and intelligence. As ecological psychology is incommensurable with the cognitivist commitments of cognitive psychology and cognitive science, so too is it irreconcilable with sinful forms of neuroscience.

3.5 Conclusion

This chapter aimed at explaining in what sense the sins of cognitivism have been visited upon neuroscience. By way of Hippocrates and Aristotle, as well as Mike the Headless Chicken, I highlighted the importance perspectives play in scientific research. As I argued, even something as apparently obvious as the brain being the source of mental phenomena stems from a particular investigative perspective. From there, I discussed one of the core perspectives in contemporary neuroscience, namely, the neuron doctrine. After that, two of the most influential models in the history of neuroscience—Hodgkin-Huxley and McCulloch-Pitts—were considered in terms of their being informed by the neuron doctrine. It was also shown that those two models have served as points of divergence in the way neuroscience is practiced: The Hodgkin-Huxley tradition focuses on biology and dynamics, and the McCulloch-Pitts tradition focuses on the cognitivist commitments of computation and representation. Finally, I presented reasons for understanding neuroscience as practicing a cognitivist approach and the sins such commitments entail. As a result, neuroscience is positioned to fall victim to the same critiques ecological psychologists have made of other cognitivist approaches, like cognitive psychology and cognitive science. The next chapter presents overviews and critical assessments of prior attempts to reconcile ecological psychology and neuroscience.

13 Contender theories for explaining where it all comes together include various theories of consciousness, such as the dynamic core hypothesis (Edelman & Tononi, 2000) and global workspace theories (Baars, 1997; Mashour, Roelfsema, Changeux, & Dehaene, 2020).

References Abrahamsen, A., & Bechtel, W. (2012). History and core themes. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of cognitive science (pp.  9–28). New York, NY: Cambridge University Press. Acosta, R. (2012). Turing Machine, reconstructed by Mike Davey as seen at Go Ask ALICE at Harvard University. Wikipedia. Retrieved June 18, 2023 from https://commons.wikimedia.org/ wiki/File:Turing_Machine_Model_Davey_2012.jpg Aristotle. (1984). Parts of animals. In J. Barnes (Ed.), The complete works of Aristotle: The revised Oxford translation (Vol. 1–2, pp. 2176–2375). Princeton, NJ: Princeton University Press. (Original work published ~350 B.C.E.) Baars, B. J. (1997). In the theater of consciousness: The workspace of the mind. New York, NY: Oxford University Press. Baluška, F., & Levin, M. (2016). On having no head: Cognition throughout biological systems. Frontiers in Psychology: Cognitive Science, 7(902), 1–19. https://doi.org/10.3389/fpsyg.2016. 00902 Barwich, A. S. (2020). Smellosophy: What the nose tells the mind. Cambridge, MA: Harvard University Press. Bear, M. F., Connors, B. W., & Paradiso, M. A. (2016). Neuroscience: Exploring the brain (4th ed.). New York, NY: Wolters Kluwer. Boden, M. A. (2006). Mind as machine: A history of cognitive science (Vol. 1–2). New York, NY: Oxford University Press. Brennan, T. (2021). Telos. In Routledge encyclopedia of philosophy. United Kingdom: Taylor and Francis. https://doi.org/10.4324/9780415249126-A134-1 Brooks, R. A. (1991). Intelligence without representation. Artifcial Intelligence, 47(1–3), 139–159. https://doi.org/10.1016/0004-3702(91)90053-M Brooks, R. (2021). Cognition without computation. IEEE Spectrum. Retrieved October 29, 2021 from https://spectrum.ieee.org/computational-cognitive-science Brown, M. J. (2009). Models and perspectives on stage: Remarks on Giere’s scientifc perspectivism. Studies in History and Philosophy of Science Part A, 40(2), 213–220. https://doi.org/10.1016/j. shpsa.2009.03.001 Buchanan, B. G. (2005). A (very) brief history of artifcial intelligence. AI Magazine, 26(4), 53–60. https://doi.org/10.1609/aimag.v26i4.1848 Bullock, T. H., Bennett, M. V., Johnston, D., Josephson, R., Marder, E., & Fields, R. D. (2005). The neuron doctrine, redux. Science, 310(5749), 791–793. https://doi.org/10.1126/science.1114394 Brette, R. (2019). Is coding a relevant metaphor for the brain? Behavioral and Brain Sciences, 42, e215, 1–58. https://doi.org/10.1017/S0140525X19000049 Caramazza, A., & Coltheart, M. (2006). Cognitive neuropsychology twenty years on. Cognitive Neuropsychology, 23(1), 3–12. https://doi.org/10.1080/02643290500443250 Carlson, N. R. (2014). Foundations of behavioral neuroscience (9th ed.). Essex, UK: Pearson. Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: The MIT Press. Chirimuuta, M. (2014). Minimal models and canonical neural computations: The distinctness of computational explanation in neuroscience. Synthese, 191(2), 127–153. https://doi.org/10.1007/ s11229-013-0369-y Chirimuuta, M. (2019). Vision. In M. Sprevak & M. Colombo (Eds.), The Routledge handbook of the computational mind (pp. 397–409). New York, NY: Routledge. Chomsky, N. (1959). Review: Verbal behavior by B. F. Skinner. Language, 35(1), 26–58. https://doi. org/10.2307/411334 Churchland, P. M. (1994). The engine of reason, the seat of the soul: A philosophical journey into the brain. Cambridge, MA: The MIT Press. Churchland, P. S., & Sejnowski, T. J. (1992/2017). The computational brain (25th anniversary ed.). 
Cambridge, MA: The MIT Press.

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58, 7–19. https://doi.org/10.1093/ analys/58.1.7 Clarke, E., & Stannard, J. (1963). Aristotle on the anatomy of the brain. Journal of the History of Medicine and Allied Sciences, 18(2), 130–148. https://doi.org/10.1093/jhmas/XVIII.2.130 Cobb, M. (2020). The idea of the brain: The past and future of neuroscience. New York, NY: Basic Books. Cooper, R. P., & Shallice, T. (2010). Cognitive neuroscience: The troubled marriage of cognitive science and neuroscience. Topics in Cognitive Science, 2(3), 398–406. https://doi.org/10.1111/j.17568765.2010.01090.x Cromwell, H. C., & Panksepp, J. (2011). Rethinking the cognitive revolution from a neural perspective: How overuse/misuse of the term “cognition” and the neglect of afective controls in behavioral neuroscience could be delaying progress in understanding the BrainMind. Neuroscience & Biobehavioral Reviews, 35(9), 2026–2035. https://doi.org/10.1016/j.neubiorev.2011.02.008 Cross, L., Cockburn, J., Yue, Y., & O’Doherty, J. P. (2021). Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments. Neuron, 109(4), 724–738. https://doi.org/10.1016/j.neuron.2020.11.021 Damasio, H., Grabowski, T., Frank, R., Galaburda, A. M., & Damasio, A. R. (1994). The return of Phineas Gage: Clues about the brain from the skull of a famous patient. Science, 264(5162), 1102–1105. https://doi.org/10.1126/science.8178168 DeFelipe, J. (2015). The dendritic spine story: An intriguing process of discovery. Frontiers in Neuroanatomy, 9(14). https://doi.org/10.3389/fnana.2015.00014 Dennett, D. C. (1991). Consciousness explained. New York, NY: Back Bay Books. Eagleman, D., & Downar, J. (2016). Brain and behavior: A cognitive neuroscience perspective. New York, NY: Oxford University Press. Edelman, G. M., & Tononi, G. (2000). A universe of consciousness: How matter becomes imagination. New York, NY: Basic Books. Farmer, J. D. (1990). A Rosetta Stone for connectionism. Physica D, 42, 153–187. https://doi. org/10.1016/0167-2789(90)90072-W Favela, L. H. (2017). Consciousness is (probably) still only in the brain, even though cognition is not. Mind and Matter, 15(1), 49–69. Favela, L. H. (2020). Dynamical systems theory in cognitive science and neuroscience. Philosophy Compass, 15(8), e12695, 1–16. https://doi.org/10.1111/phc3.12695 Favela, L. H. (2021). The dynamical renaissance in neuroscience. Synthese, 199(1–2), 2103–2127. https://doi.org/10.1007/s11229-020-02874-y Favela, L. H. (2022). “It takes two to make a thing go right”: The coevolution of technological and mathematical tools in neuroscience. In J. Bickle, C. F. Craver, & A.-S. Barwich (Eds.), The tools of neuroscience experiment: Philosophical and scientifc perspectives. New York, NY: Routledge. https://doi.org/10.4324/9781003251392-18 Favela, L. H., Amon, M. J., Lobo, L., & Chemero, A. (2021). Empirical evidence for extended cognitive systems. Cognitive Science: A Multidisciplinary Journal, 45(11), e13060, 1–27. https://doi. org/10.1111/cogs.13060 Favela, L. H., & Machery, E. (2023). Investigating the concept of representation in the neural and psychological sciences. Frontiers in Psychology: Cognition. https://doi.org/10.3389/fpsyg.2023.1165622 Feuillet, L., Dufour, H., & Pelletier, J. (2007). Brain of a white-collar worker. Lancet, 370, 262. https://doi.org/10.1016/S0140-6736(07)61127-1 Feyerabend, P. (1975/1993). Against method. New York, NY: Verso. Flint, R. D., Tate, M. 
C., Li, K., Templer, J. W., Rosenow, J. M., Pandarinath, C., & Slutzky, M. W. (2020). The representation of fnger movement and force in human motor and premotor cortices. eNeuro, 7(4). https://doi.org/10.1523/ENEURO.0063-20.2020 Fodor, J. A. (1975). The language of thought. New York, NY: Thomas Y. Crowell Company. Fodor, J. A. (1983). The modularity of mind: An essay on faculty psychology. Cambridge, MA: The MIT Press.

Freud, S. (1950). Project for a scientifc psychology (1895). In The standard edition of the complete psychological works of Sigmund Freud: Volume 1: (1886–1899): Pre-psycho-analytic publications and unpublished drafts (pp. 281–391). London, England: The Hogarth Press. Gardner, H. (1985). The mind’s new science: A history of the cognitive revolution. New York, NY: Basic Books. Gazzaniga, M. S., Ivry, R. B., & Mangun, G. R. (2014). Cognitive neuroscience: The biology of mind (4th ed.). New York, NY: W. W. Norton & Company Ltd. Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014). Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge, UK: Cambridge University Press. Gidon, A., Zolnik, T. A., Fidzinski, P., Bolduan, F., Papoutsi, A., Poirazi, P., .  .  . Larkum, M. E. (2020). Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science, 367(6473), 83–87. https://doi.org/10.1126/science.aax6239 Giere, R. N. (2006). Scientifc perspectivism. Chicago, IL: University of Chicago Press. Glickstein, M. (2006). Golgi and Cajal: The neuron doctrine and the 100th anniversary of the 1906 Nobel Prize. Current Biology, 16(5), R147–R151. https://doi.org/10.1016/j.cub.2006.02.053 Glickstein, M. (2014). Neuroscience: A historical introduction. Cambridge, MA: The MIT Press. Gold, I., & Stoljar, D. (1999). A neuron doctrine in the philosophy of neuroscience. Behavioral and Brain Sciences, 22(5), 809–830. https://doi.org/10.1017/S0140525X99002198 Golgi, C. (1906/1999). The neuron doctrine: Theory and facts. In Nobel lectures: Physiology or medicine 1901–1921 (pp. 189–217). Singapore: World Scientifc. Gonzalez-Islas, C., & Hablitz, J. J. (2003). Dopamine enhances EPSCs in layer II—III pyramidal neurons in rat prefrontal cortex. The Journal of Neuroscience, 23(3), 867–875. https://doi.org/10.1523/ JNEUROSCI.23-03-00867.2003 Gross, C. G. (2009). A hole in the head: More tales in the history of neuroscience. Cambridge, MA: The MIT Press. Guillery, R. W. (2005). Observations of synaptic structures: Origins of the neuron doctrine and its current status. Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1458), 1281–1307. https://doi.org/10.1098/rstb.2003.1459 Hales, S. D., & Welshon, R. (2000). Nietzsche’s perspectivism. Chicago, IL: University of Illinois Press. Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1–3), 335– 346. https://doi.org/10.1016/0167-2789(90)90087-6 Häusser, M. (2000). The Hodgkin-Huxley theory of the action potential. Nature Neuroscience, 3(11), 1165. https://doi.org/10.1038/81426 Hebart, M. N., Zheng, C. Y., Pereira, F., & Baker, C. I. (2020). Revealing the multidimensional mental representations of natural objects underlying human similarity judgments. Nature Human Behaviour, 4, 1173–1185. https://doi.org/10.1038/s41562-020-00951-3 Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4), 500–544. https://doi.org/10.1113/jphysiol.1952.sp004764 Jazayeri, M., & Ostojic, S. (2021). Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity. Current Opinion in Neurobiology, 70, 113–120. https://doi.org/10.1016/j.conb.2021.08.002 Jones, I. S., & Kording, K. P. (2021). Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree? 
Neural Computation, 33(6), 1554– 1571. https://doi.org/10.1162/neco_a_01390 Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., & Hudspeth, A. J. (Eds.). (2013). Principles of neural science (5th ed.). New York, NY: McGraw-Hill. Kandel, E. R., & Squire, L. R. (2000). Neuroscience: Breaking down scientifc barriers to the study of brain and mind. Science, 290(5494), 1113–1120. https://doi.org/10.1126/science.290.5494.1113 Kelso, J. A. S. (1995). Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: The MIT Press.

Kitcher, P. (1995). Freud’s dream: A complete interdisciplinary science of mind. Cambridge, MA: The MIT Press. Kliemann, D., Adolphs, R., Paul, L. K., Tyszka, J. M., & Tranel, D. (2021). Reorganization of the social brain in individuals with only one intact cerebral hemisphere. Brain Sciences, 11(8), 965, 1–20. https://doi.org/10.3390/brainsci11080965 Kliemann, D., Adolphs, R., Tyszka, J. M., Fischl, B., Yeo, B. T., Nair, R., .  .  . Paul, L. K. (2019). Intrinsic functional connectivity of the brain in adults with a single cerebral hemisphere. Cell Reports, 29(8), 2398–2407. https://doi.org/10.1016/j.celrep.2019.10.067 Koch, C. (1999). Biophysics of computation: Information processing in single neurons. New York, NY: Oxford University Press. Koch, C., & Segev, I. (2000). The role of single neurons in information processing. Nature Neuroscience, 3, 1171–1177. https://doi.org/10.1038/81444 Kording, K. P., Blohm, G., Schrater, P., & Kay, K. (2020). Appreciating the variety of goals in computational neuroscience. Neurons, Behavior, Data Analysis, and Theory, 3(6). https://arxiv.org/ abs/2002.03211v1 Kozma, R., & Freeman, W. J. (2016). Cognitive phase transitions in the cerebral cortex-enhancing the neuron doctrine by modeling neural felds. Switzerland: Springer. Kuhn, T. S. (1962/1996). The structure of scientifc revolutions (2nd ed.). Chicago, IL: University of Chicago Press. Kuntzelman, K. M., Williams, J. M., Lim, P. C., Samal, A., Rao, P. K., & Johnson, M. R. (2021). Deeplearning-based multivariate pattern analysis (dMVPA): A tutorial and a toolbox. Frontiers in Human Neuroscience: Cognitive Neuroscience, 15(89). https://doi.org/10.3389/fnhum.2021.638052 Leunissen, M. E. M. P. J. (2007). The structure of teleological explanations in Aristotle: Theory and practice. In D. Sedley (Ed.), Oxford studies in ancient philosophy (Vol. 33, pp.  145–178). New York, NY: Oxford University Press. Lewin, R. (1980). Is your brain really necessary? John Lorber, a British neurologist, claims that some patients are more normal than would be inferred from their brain scans. Science, 210(4475), 1232– 1234. https://doi.org/10.1126/science.7434023 Lyon, P. (2015). The cognitive cell: Bacterial behavior reconsidered. Frontiers in Microbiology: Evolutionary and Genomic Microbiology, 6(264). https://doi.org/10.3389/fmicb.2015.00264 Lytton, W. W. (2002). From computer to brain: Foundations of computational neuroscience. New York, NY: Springer-Verlag. Maley, C. J. (2022). How (and why) to think that the brain is literally a computer. Frontiers in Computer Science: Theoretical Computer Science, 4(970396). https://doi.org/10.3389/fcomp.2022.970396 Maley, C. J., & Piccinini, G. (2015). Neural representation and computation. In J. Clausen & N. Levy (Eds.), Handbook of neuroethics (pp. 79–94). Dordrecht: Springer. https://doi.org/10.1007/ 978-94-007-4707-4_8 Marom, S. (2010). Neural timescales or lack thereof. Progress in Neurobiology, 90, 16–28. https:// doi.org/10.1016/j.pneurobio.2009.10.003 Mashour, G. A., Roelfsema, P., Changeux, J. P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105(5), 776–798. https://doi.org/10.1016/j. neuron.2020.01.026 McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115–133. https://doi.org/10.1007/BF02478259 Merker, B. (2007). Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. 
Behavioral and Brain Sciences, 30(1), 63–81. https://doi.org/10.1017/S0140525X07000891 Miller, G. A. (2003). The cognitive revolution: A historical perspective. Trends in Cognitive Sciences, 7(3), 141–144. https://doi.org/10.1016/S1364-6613(03)00029-9 Minsky, M. L., & Papert, S. A. (1969/1988). Perceptrons: An introduction to computational geometry (expanded ed.). Cambridge, MA: The MIT Press. Nagel, T. (1986). The view from nowhere. New York, NY: Oxford University Press.

Okazawa, G., Hatch, C. E., Mancoo, A., Machens, C. K., & Kiani, R. (2021). Representational geometry of perceptual decisions in the monkey parietal cortex. Cell, 184(14), 3748–3761. https:// doi.org/10.1016/j.cell.2021.05.022 Piantadosi, S. T., Tenenbaum, J. B., & Goodman, N. D. (2016). The logical primitives of thought: Empirical foundations for compositional cognitive models. Psychological Review, 123(4), 392– 424. https://doi.org/10.1037/a0039980 Piccinini, G. (2004). The frst computational theory of mind and brain: A close look at McCulloch and Pitts’s “logical calculus of ideas immanent in nervous activity.” Synthese, 141(2), 175–215. https://doi.org/10.1023/B:SYNT.0000043018.52445.3e Piccinini, G. (2020). Neurocognitive mechanisms: Explaining biological cognition. Oxford, UK: Oxford University Press. Piccinini, G., & Shagrir, O. (2014). Foundations of computational neuroscience. Current Opinion in Neurobiology, 25, 25–30. https://doi.org/10.1016/j.conb.2013.10.005 Pigliucci, M. (2010). Nonsense on stilts: How to tell science from bunk. Chicago, IL: The University of Chicago Press. Pitts, W., & McCulloch, W. S. (1947). How we know universals: The perception of auditory and visual forms. The Bulletin of Mathematical Biophysics, 9(3), 127–147. https://doi.org/10.1007/ BF02478291 Port, R. F. (2006). Dynamical systems hypothesis in cognitive science. In L. Nadel (Ed.), Encyclopedia of cognitive science. John Wiley & Sons, Ltd. https://doi.org/10.1002/0470018860.s00020 Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3, 111–169. https://doi.org/10.1017/S0140525X00002053 Quilty-Dunn, J., Porot, N., & Mandelbaum, E. (2023). The best game in town: The re-emergence of the language of thought hypothesis across the cognitive sciences. Behavioral and Brain Sciences, 46, E261, 1–75. https://doi.org/10.1017/S0140525X22002849 Raja, V., & Segundo-Ortín, M. (2021). Plant sentience: Theoretical and empirical issues: Editorial introduction. Journal of Consciousness Studies, 28(1–2), 7–16. Reid, R. C., & Usrey, W. M. (2013). Vision. In L. R. Squire, D. Berg, F. E. Bloom, S. Du Lac, A. Ghosh, & N. C. Spitzer (Eds.), Fundamentals of neuroscience (4th ed., pp. 577–598). Waltham, MA: Academic Press. Rescorla, M. (2019). The language of thought hypothesis. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (summer 2019 ed.). Stanford, CA: Stanford University. Retrieved January 15, 2021 from https://plato.stanford.edu/archives/sum2019/entries/language-thought/ Rescorla, M. (2020). The computational theory of mind. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (fall 2020 ed.). Stanford, CA: Stanford University. Retrieved January 15, 2021 from https://plato.stanford.edu/archives/fall2020/entries/computational-mind/ Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408. https://doi.org/10.1037/h0042519 Russo, A. A., Khajeh, R., Bittner, S. R., Perkins, S. M., Cunningham, J. P., Abbott, L. F., & Churchland, M. M. (2020). Neural trajectories in the supplementary motor area and motor cortex exhibit distinct geometries, compatible with diferent classes of computation. Neuron, 107(4), 745–758. https://doi.org/10.1016/j.neuron.2020.05.020 Santayana, G. (1922). The life of reason: Or the phases of human progress (2nd ed.). New York, NY: Charles Scribner’s Sons. Schneider, S. (2011). 
The language of thought: A new philosophical direction. Cambridge, MA: The MIT Press. Schoonover, C. E., Ohashi, S. N., Axel, R., & Fink, A. J. (2021). Representational drift in primary olfactory cortex. Nature, 594, 541–546. https://doi.org/10.1038/s41586-021-03628-7 Schwiening, C. J. (2012). A brief historical perspective: Hodgkin and Huxley. The Journal of Physiology, 590(11), 2571–2575. https://doi.org/10.1113/jphysiol.2012.230458 Seth, A. K., He, B. J., & Hohwy, J. (2015). Editorial. Neuroscience of Consciousness, 2015(1), niv001. https://doi.org/10.1093/nc/niv001

Shannon, C. E., & Weaver, W. (1949/1964). The mathematical theory of communication. Urbana, IL: The University of Illinois Press. Shenoy, K. V., Sahani, M., & Churchland, M. M. (2013). Cortical control of arm movements: A dynamical systems perspective. Annual Review of Neuroscience, 36, 337–359. https://doi. org/10.1146/annurev-neuro-062111-150509 Shepherd, G. M. (2010). Creating modern neuroscience: The revolutionary 1950s. New York, NY: Oxford University Press. Shepherd, G. M. (2016). Foundations of the neuron doctrine (25th anniversary ed.). New York, NY: Oxford University Press. Smart, J. J. C. (2017). The mind/brain identity theory. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (spring 2017 ed.). Stanford, CA: Stanford University. Retrieved November 5, 2021 from https://plato.stanford.edu/archives/spr2017/entries/mind-identity/ Society for Neuroscience. (2021). Themes and topics. SfN Global Connectome. Retrieved January 15, 2021 from www.sfn.org/meetings/virtual-events/sfn-global-connectome-a-virtual-event/abstracts/ themes-and-topics Solymosi, T. (2011). Neuropragmatism, old and new. Phenomenology and the Cognitive Sciences, 10, 347–368. https://doi.org/10.1007/s11097-011-9202-6 Sporns, O., & Betzel, R. F. (2016). Modular brain networks. Annual Review of Psychology, 67, 613–640. https://doi.org/10.1146/annurev-psych-122414-033634 Tononi, G., & Koch, C. (2015). Consciousness: Here, there and everywhere? Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1668). https://doi.org/10.1098/rstb.2014. 0167 Trappenberg, T. P. (2014). Fundamentals of computational neuroscience (2nd ed.). New York, NY: Oxford University Press. Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, S2–42(1), 230–265. https://doi.org/10.1112/ plms/s2-42.1.230 Turing, A. M. (1938). On computable numbers, with an application to the Entscheidungsproblem: A correction. Proceedings of the London Mathematical Society, S2–42(1), 544–546. https://doi. org/10.1112/plms/s2-43.6.544 Turing, A. M. (1950). I: Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi. org/10.1093/mind/LIX.236.433 Twomey, S. (2010). Phineas Gage: Neuroscience’s most famous patient. Smithsonian Magazine. Retrieved November 10, 2021 from www.smithsonianmag.com/history/phineas-gage-neurosciencesmost-famous-patient-11390067/ Vallverdú, J., Castro, O., Mayne, R., Talanov, M., Levin, M., Baluška, F., . . . Adamatzky, A. (2018). Slime mould: The fundamental mechanisms of biological cognition. BioSystems, 165, 57–70. https://doi.org/10.1016/j.biosystems.2017.12.011 Van Horn, J. D., Irimia, A., Torgerson, C. M., Chambers, M. C., Kikinis, R., & Toga, A. W. (2012). Mapping connectivity damage in the case of Phineas Gage. PLoS One, 7(5), e37454. https://doi. org/10.1371/journal.pone.0037454 Van Orden, G., Hollis, G., & Wallot, S. (2012). The blue-collar brain. Frontiers in Physiology: Fractal Physiology, 3(207). https://doi.org/10.3389/fphys.2012.00207 Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. Cambridge, MA: The MIT Press. Von Eckardt, B. (1995). What is cognitive science? Cambridge, MA: The MIT Press. von Neumann, J. (1958/2012). The computer and the brain (3rd printing). New Haven, CT: Yale University Press. Vyas, S., Golub, M. D., Sussillo, D., & Shenoy, K. V. (2020). Computation through neural population dynamics. 
Annual Review of Neuroscience, 43, 249–275. https://doi.org/10.1146/annurevneuro-092619-094115 Ward, L. M. (2002). Dynamical cognitive science. Cambridge, MA: The MIT Press.

Watrin, J. P., & Darwich, R. (2012). On behaviorism in the cognitive revolution: Myth and reactions. Review of General Psychology, 16(3), 269–282. https://doi.org/10.1037/a0026766 Westbrook, G. L. (2013). Seizures and epilepsy. In E. R. Kandel, J. H. Schwartz, T. M. Jessell, S. A. Siegelbaum, & A. J. Hudspeth (Eds.), Principles of neural science (5th ed., pp. 1116–1139). New York, NY: McGraw-Hill. Wickens, A. P. (2015). A history of the brain: From stone age surgery to modern neuroscience. New York, NY: Psychology Press. Wooldridge, M. (2021). A brief history of artifcial intelligence: What it is, where we are, and where we are going. New York, NY: Flatiron Books.

4 The varieties of ecological neuroscience

Wonder tissue appears in many locales. J. J. Gibson's theory of perception, for instance, seems to treat the whole visual system as a hunk of wonder tissue, for instance, resonating with marvelous sensitivity to a host of sophisticated 'affordances.' (Dennett, 1984, pp. 149–150)

[Some Neo-Gibsonians, like] Turvey's approach . . . includes specifications of organism (especially neural) variables and constraints, whereas Gibson tends to leave the organism, if not empty, apparently stuffed with foam rubber. (Pribram, 1982, p. 370; italics in original)

4.1 The story thus far

The title of this section is a double entendre: The "story thus far" refers both to what has been discussed in this book thus far and where, in general, ecological psychology and neuroscience currently stand regarding attempts at integration and reconciliation. To begin, the previous two chapters have presented a story about why ecological psychology and neuroscience have come to be viewed as incommensurable and irreconcilable approaches to the scientific investigation of mind. Chapter 2 claimed that both stem from early-twentieth century reactions to behaviorism and mid-twentieth century developments in psychology. Specifically, ecological psychology grew from attempts to overcome purported limitations of standard approaches to perceptual psychology, and neuroscience grew from the cognitivism born from the cognitive revolution. Chapter 3 claimed that the neuroscience of the mid- to late-1900s can be understood as following two main paths: one, the Hodgkin-Huxley tradition that stressed physiological properties of neurons and two, the McCulloch-Pitts tradition that stressed logical properties of neurons. I argued that much of the divide between neuroscience and ecological psychology is due to the former embracing cognitivist commitments—by way of the McCulloch-Pitts tradition—that the latter rejected in cognitive psychology and the cognitive sciences. That is the story of the book thus far. The story of attempts to reconcile ecological psychology and neuroscience up to now is presented in the current chapter.

This book is not the first attempt to reconcile ecological psychology and neuroscience (e.g., de Wit & Withagen, 2019; Raja, 2018; Reed, 1996). As such, I do not claim to provide the best or only way to tell the story of what can be called "ecological neuroscience" or "Gibsonian neuroscience." Still, given that there have been so few attempts, the area is open to interpretation and development in ways other, more established, areas are not.

To begin telling my account, Chapter 1 provided the example of visually-guided action and sketches of ecological psychology and neuroscience accounts of it in order to motivate their deep differences. One of the most crucial differences in their respective explanatory requirements is the role nervous systems—especially brains—play in each approach: for neuroscience it is essential and for ecological psychology it is not. Perhaps for a (caricatured) behaviorism (cf. Barrett, 2012, 2016), explaining intelligent behavior without any account of brains or "internal" states would be par for the course. Then again, has it not been known since at least the mid-1800s (e.g., Phineas Gage; cf. Chapters 2 and 3)—if not since ~400 B.C.E. (e.g., Hippocrates; cf. Chapter 3)—that the brain is a crucial, if not the central, organ of minds? Perhaps it is viewing that as an indubitable fact that motivated the comments from Daniel Dennett and Karl Pribram that started this chapter. At the risk of doing some mind reading, it seems that Dennett and Pribram are both astounded and exasperated by James J. Gibson's apparent lack of even a sketch of the brain's contributions to affordance perception. Dennett attributes this alleged oversight to Gibson thinking too highly of brains, that is, as made of "wonder tissue" whose magic powers cannot be explained. Pribram attributes this alleged oversight to Gibson thinking too little of brains, that is, whether an organism is filled with a nervous system or "foam rubber," both make equivalent contributions to the realization and explanation of acts of perception.

The aim of this chapter is not to defend Gibson and other ecological psychologists from the likes of Dennett and Pribram. The previous chapters have already motivated why ecological psychology does not place paramount explanatory emphasis on brains when investigating perception-action. Instead, this chapter's focus is to provide an overview of examples of attempts at integrating and reconciling ecological psychology and neuroscience, namely, the varieties of ecological neuroscience. The focus is on those approaches to providing an account of the brain's contribution to perception-action that also attempt to maintain core features of ecological psychology. The next section discusses preliminaries to an ecological neuroscience. After, summaries and critiques of prior attempts at formulating an ecological neuroscience are discussed, with particular focus on Edward Reed's approach, neural reuse, and Bayesianism.

4.2 Preliminaries to an ecological neuroscience

Before describing and assessing prior attempts to formulate an ecological neuroscience, some preliminary issues need to be addressed. First, I will briefly discuss neuroscience's attempts to integrate lessons from ecological psychology. Second, I critically assess what Gibson did say about the brain, namely, the concept of resonance. Third, while the previous two issues may paint a dire picture about the prospects for integrating ecological psychology and neuroscience, an attempt at optimism will be made by demonstrating that ecological psychology has a fruitful history of integrating concepts, methods, and theories from other approaches.

4.2.1 Neuroscience and (titular) affordances

Has neuroscience integrated lessons from ecological psychology? The short answer is, "no." By and large, neuroscience has not integrated the four primary principles of Gibsonian ecological psychology. The longer answer is, "sort of, but not really." Though some areas of neuroscience have gestured at the significance of aspects of ecological psychology, the majority have not.

Moreover, when neuroscience research does incorporate aspects of ecological psychology—for example, concepts like affordances—it does so without maintaining the essence of the concept as pertains its role in ecological psychology's overall investigative framework. The work of Paul Cisek and colleagues is illustrative in this regard. Cisek and colleagues have done excellent work in neuroscience. In particular, they are (rightfully) pushing neuroscience to place evolutionary considerations at the center of attempts to explain brain structure and function (e.g., Cisek & Hayden, 2022; also see Buzsáki, 2019). That in itself should make their neuroscience research appealing to ecological psychologists. Additionally, their work on the "affordance competition hypothesis" (Cisek, 2007) sounds like a clear path for principles of ecological psychology into neuroscience. Unfortunately, this is not so. While Gibson is cited, Gibsonian terms are used (e.g., "specification"), and though there are seemingly overlapping commitments (e.g., significance of relationship between perception and action), the concept of "affordances" is used in ways incommensurable with Gibsonian ecological psychology (e.g., Cisek, 2007; Pezzulo & Cisek, 2016). Consider the description of the affordance competition hypothesis as,

the processes of action selection and specification occur simultaneously and continue even during overt performance of movements [such that from] this perspective, behaviour is viewed as a constant competition between internal representations of the potential actions which Gibson ([1986/2015]) termed 'affordances.' (Cisek, 2007, p. 1586; italics in original)

It is clear here that the word 'affordances' is used synonymously with a generic sense of "potential actions." What is more, affordances are understood as forms of mental representations. These considerations make it evident that Cisek and colleagues are not using the concept of "affordances" in the stricter ecological sense (also see Cisek & Thura, 2019). In particular, affordances are not treated as directly perceivable (i.e., not representations) opportunities for behavior based on properties of both organism and environment and not as specified by environmental information (i.e., not neurocomputational parameters for "how to do" an action; Cisek, 2007, pp. 1585–1586). The theory of affordances and its related ideas (e.g., specificity of ecological information) are, admittedly, understood in particularly nuanced ways in Gibsonian ecological psychology. So it is perhaps not surprising that their use in outside fields is less than uniform.

It is reasonable to view other, slightly more peripheral, lessons from Gibsonian ecological psychology as faring better in neuroscience. Embodiment, for example, and other considerations of situatedness are increasingly acknowledged and playing roles in neuroscience experiments and explanations (for overview see Reason for optimism 2 in Chapter 8). Still, more core Gibsonian considerations are popping up here and there. For example, consider results from work by Philip Parker, Abe, Leonard, Martins, and Niell (2022), who recorded both neural activity and visual input in freely moving mice and showed that neural responses in visual brain areas are modulated by eye and head position. This work is relevant due to the authors' explicit appeal to Gibson and ideas about environmental cues produced by locomotion (e.g., optic flow and parallax) during experimental design and interpretation of results.
While neuroscience remains brain-centric with widespread cognitivist commitments, work by neuroscientists like Cisek, Parker, and colleagues ought to be viewed as laudable for appreciating Gibsonian lessons to some degree—even if that work does not really embrace the core principles in their intended form.

4.2.2 Ecological psychology and resonance

Has ecological psychology integrated lessons from neuroscience? The short answer is, "no." By and large, ecological psychology has not integrated neural spatiotemporal scales in experimental design or explanations. The longer answer is, well, there is no longer answer. While there have been recent attempts to explicitly integrate ecological psychology and neuroscience—as will be discussed later—those instances are small in number and certainly do not epitomize the current state of training or research by ecological psychologists. This section will provide an assessment of what Gibson himself said about neural contributions to perception-action, namely, resonance.

The notion of resonance was the closest Gibson came to addressing the role of the brain and nervous system in perceptual systems. A few examples of early uses of the word include

• "A percept is related to a stimulus invariant by the resonance of a perceptual system" (Gibson, 1966, p. 244)
• "Instead of postulating that the brain constructs information from the input of a sensory nerve, we can suppose that the centers of the nervous system, including the brain, resonate to information" (Gibson, 1966, p. 267)
• "And the action of the nervous system is conceived as a resonating to the stimulus information, not a storing of images or a connecting up of nerve cells" (Gibson, 1966, p. 271).

Gibson used the word 'tuning' interchangeably with 'resonance' (e.g., Gibson, 1966, pp. 271, 275). He drew attention to the likelihood that readers would treat these terms as analogies—such as the tuning of a "radio receiver" (Gibson, 1966, p. 271)—instead of actual descriptors of real features of nervous systems. In this earlier work, it is clear that Gibson intended the latter, with the concept of "resonance" (and "tuning") capturing something about the relationship between stimulus invariants and percept (Gibson, 1966, p. 244). In short, resonance is intended to be a real feature of perceptual systems. However, Gibson would later relegate the concept of "resonance" to the status of mere metaphor (e.g., "the metaphors used can be terms such as resonating . . .;" Gibson, 1986/2015, p. 235; italics in original); not to mention rarely using the concept "tuning" (i.e., "attune," "tuned," etc.) anymore.1 Still, as some ecological psychologists have recently claimed, "[t]he fact that Gibson's metaphors are decidedly physical in orientation signifies the direction of his research" (Fultot, Adrian Frazier, Turvey, & Carello, 2019, p. 220). I agree with the spirit of this assessment. First, Gibson was using terms (e.g., resonance) as placeholders for future developments. Second, Gibson intended these terms to refer to real, physical phenomena. That is to say, just because the concept of "resonance" was not an ironed-out part of the theory of affordances does not mean researchers ought to abandon the ecological approach and retreat to cognitivist or—dare it be said—immaterial views of mind. Where I disagree is with those who appear to be using the term in the literal sense (e.g., Bruineberg & Rietveld, 2019; Raja & Anderson, 2019)—that is, as in the early Gibson (1966)—and those who attempt to reify a concept that may be more apt for elimination (e.g., van Dijk & Myin, 2019).

1 Other eminent ecological psychologists describe resonance as a metaphor as well, such as Harry Heft (2020, p. 5).

In upcoming chapters, a more fruitful investigative framework comprised of approaches with proven successes (e.g., population dynamics, manifold theory, and synergies) for explaining and understanding mind will be offered that have less baggage than resonance. In the remainder of this section, four reasons will be presented for motivating the elimination of "resonance" as a concept that can play a fruitful role in ecological neuroscience. It is important to make clear that the problems with resonance go beyond its being a metaphor; that is, these are substantive limitations. It is also worth noting that none of these problems are necessarily dealbreakers on their own. Even so, their combination seriously undermines the ability of "resonance" to be a compelling concept and substantial part of an ecological neuroscience.

The first problem with the concept of "resonance" is its imprecision. Nowhere in Gibson's writings (that I know of) is a detailed definition of "resonance" provided. It is important to be clear that this point does not rest on the naive view that a concept can only be properly applied in scientific research if it is defined by a generally accepted set of necessary and jointly sufficient conditions. Apart from formal systems (e.g., logic and mathematics) or some concepts (e.g., the concept of aunt), few concepts can be defined in such a complete manner (Machery, 2009). This is especially true concerning concepts of entities and processes in the natural world. Thus, there is no doubt that science progresses without defining all of its terms. Additionally, the absence of definitions can be an indispensable feature of research when scientists are attempting to characterize novel and multidisciplinary targets of investigation. As neurophilosopher Patricia Churchland puts it, to "force precision by grinding out premature definitions enlightens nobody" (1986, p. 346). The kind of imprecision exhibited by uses of "resonance" is problematic for deeper reasons. One reason is that it causes confusion both within fields and across. An example of confusion within fields is when ecological psychologists use the word 'resonance' but do not make clear if it is to be taken metaphorically or literally. An example of confusion across fields is when multidisciplinary researchers come across a concept that seems to be used in the same way but really is not, such as using both Gibson's early use of "resonance" (Gibson, 1966) and cognitive and neural scientist Stephen Grossberg's use of "resonance" (Grossberg, 2021).

The second problem is that it is unnecessary. An earlier quote stated that even if Gibson utilized resonance as a metaphor, his uses were intended to signify the direction of his research. Such a "placeholder" view of resonance was understandable, at least at the time. However, the term resonance has come to be an unnecessary concept, a point that has long been evidenced even by ecological psychologists. The most common scientific use of the word 'resonance' is from dynamical systems theory (DST). Here, resonance is a dynamic feature of systems and is commonly exhibited via orbits in attractor states (Ott, 2002) and dynamics near bifurcation points (Medio & Lines, 2001; Perko, 2001). Across its many uses in DST, the system's behavior is commonly described in terms of oscillations (Hilborn, 2000; Roberts, 2015). Historically, this is exactly how resonance is cashed out by ecological psychologists (e.g., Fultot et al., 2019; Kugler, Kelso, & Turvey, 1980; Michaels & Carello, 1981; Richardson, Schmidt, & Kay, 2007; Riley & Turvey, 2002; Shockley & Turvey, 2005). Other related concepts that have been applied instead of resonance include coordination and synchronization.
In light of the fruitfulness and mathematical precision of concepts such as oscillation, it is evident that the concept of “resonance” is at best synonymous with these terms and at worst unnecessary in the sense of not referring to anything in the real world above and beyond what is already captured by the other terms.2

2 Ecological psychologist Edward Reed, who will be discussed later in this chapter, did not mention the concept of “resonance” at all when presenting his overarching treatment of ecological psychology (Reed, 1996). Additionally, when he did mention resonance, he described it as just being “covariance” of neural states and ecological information (e.g., Reed, 1989, p. 111).
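
To make the dynamical-systems reading concrete, here is a minimal sketch (an illustration of the general point, not an analysis drawn from the cited literature; the parameter values are hypothetical) of two coupled phase oscillators in the Kuramoto style. When the coupling is strong enough relative to the difference in natural frequencies, the pair entrains and the phase difference settles to a constant; when it is not, the phases drift past one another.

```python
# Minimal sketch (illustrative only): two coupled phase oscillators.
# Parameter values are hypothetical, chosen for demonstration.
import numpy as np

def phase_locking(K, w1=1.0, w2=1.3, dt=0.01, steps=20000):
    """Euler-integrate two Kuramoto-style oscillators and report how much
    their phase difference still varies near the end of the run."""
    theta1, theta2 = 0.0, np.pi / 2          # arbitrary initial phases
    diffs = np.empty(steps)
    for t in range(steps):
        d1 = w1 + (K / 2.0) * np.sin(theta2 - theta1)
        d2 = w2 + (K / 2.0) * np.sin(theta1 - theta2)
        theta1 += d1 * dt
        theta2 += d2 * dt
        diffs[t] = theta2 - theta1
    return diffs[-2000:].std()               # near 0 means the pair is entrained

for K in (0.0, 0.2, 0.8):                    # locking requires K > |w2 - w1| = 0.3
    print(f"coupling K = {K:.1f} -> late phase-difference variability: {phase_locking(K):.4f}")
```

Whatever one calls the locked regime, resonance, entrainment, or synchronization, the underlying mathematics is identical, which is the sense in which the extra term supplies a label rather than additional explanatory content.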

A third problem is tightly related to the charge of being unnecessary, and that is the issue of triviality. When ecological psychologists attempt to cash out resonance in a more mathematically precise way, it is typically in terms such as coupled energy or information flows (e.g., van der Weel, Agyei, & van der Meer, 2019), coupled oscillations (e.g., Fultot et al., 2019), or one oscillation driving another (e.g., van der Weel, Sokolovskis, Raja, & van der Meer, 2022). From those examples, the charge of triviality can be understood in two ways: One, it is trivially true that two dynamical systems that interact in some regular way will have a relationship that can be labeled; thus, resonance does not pick out anything especially informative if that is the work it is doing. Two, if 'resonance' is synonymous with terms like coordination, entrainment, oscillations, or synchronization, then it does not pick out anything unique such that the word 'resonance' is doing any work that those others already do.

The fourth problem I draw attention to is that resonance might just be wrong. That is to say, if the concept of "resonance" is picking out something real (i.e., not an unnecessary term) or unique (i.e., not trivially synonymous with other terms like oscillations), and is either currently mathematically precise or will be (i.e., not empirically imprecise), then it might still be an incorrect way to explain or understand the phenomena it is applied to. A particularly potent argument to that effect comes from Mark Bickhard's and Michael Richie's assessment of Gibsonian resonance, when they state:

The resonant frequency is a copy, a duplicate, of the original frequency. Such vestiges of picture, of image, of encoding conceptualizations are regretfully distortive of Gibson's basic interactive insight in his concept of information extraction. The pattern of an interaction need not have any particular structural correspondence whatsoever with the pattern of ambient light that it differentiates. (Bickhard & Richie, 1983, p. 15)

Their point can be understood as follows: If Gibson utilized the concept of "resonance" in a way that can be cashed out in terms of one frequency duplicating another—which seems like a reasonable interpretation given all the talk of coupling and oscillations mentioned previously—then it would be a concept that actually undermines many key lessons from ecological psychology. For example, notions like direct perception and ecological information allowed Gibson to provide a way to understand how environments can be meaningful for an organism without the organism needing to represent the environment, namely, to generate indirect representations to be computed and given meaning in the mind. But, as Bickhard and Richie argue, the idea of patterns interacting in that way makes Gibsonian resonance sound like a structural correspondence. The problem with structural correspondences is that in this context it means "structural representation," which is a form of indirect mental representation. Structural representations are those mental or cognitive representations with the following features: (1) at least partially homo-/isomorphic to its target; (2) activated by signals from target; (3) able to guide behavior with respect to target; and (4) can be decoupled from target signals (Piccinini, 2022; Ramsey, 2007).
Features one through three are consistent with Gibsonian resonance qua structural correspondence, which is consistent with the earlier discussions and examples of resonance qua coupled or driving oscillators. Feature four is not as straightforwardly applicable but does not take much massaging to work. For example, the ability to successfully reach for a cup of coffee on the table in front of you with your eyes closed can be explained as a structural representation, that is, as a representation that was first created via a resonant relationship (i.e., seeing a cup of coffee on your table), which can guide action even if the resonant relationship is severed soon after (i.e., closing your eyes).

If such arguments are correct, then it is unfortunately too easy to view Gibson's use of "resonance" as importing an uncomfortable amount of representation-like talk into ecological psychology—which is to say, given Gibson's explicit antirepresentationalism, any talk of representations is too much. Taken together with the previous three problems, I think there are more reasons to eliminate the concept of "resonance" from ecological psychology than there are compelling reasons to keep it.

4.2.3 Playing nice with others

Before providing overviews and critically evaluating prior attempts at developing an ecological neuroscience, the fnal preliminary issue to be highlighted is the view that ecological psychology has a history of playing nice with others. This statement might be surprising to some and is reasonably viewed as contrary to what has been said in the book thus far. For instance, ecological psychology has rejected behaviorism, cognitivism (as well as cognitive psychology, cognitive science, and neuroscience), and other mainstream approaches to mind (i.e., brain-centric and information processing). So who remains to play nice with? Well, actually, many scientifc approaches to the investigation of mind are outside those just listed. Two areas that have a long history of playing nice with ecological psychology are DST and synergetics. Both of those will be explained in detail in Chapter 5. Right now, it sufces to say that ecological psychology has supplemented various features of its investigative framework by importing concepts, methods, and theories from those two areas (for a small but wide-ranging sample see Favela, Amon, Lobo, & Chemero, 2021; Kugler et al., 1980; Richardson et al., 2007; Riley & Van Orden, 2005). One of the best examples of ecological psychology playing nice with other felds is Anthony Chemero’s radical embodied cognitive science (RECS; Chemero, 2009, 2013). RECS “is an interdisciplinary approach to psychology that combines ideas from the phenomenological tradition with ecological psychology and dynamical systems modeling” (Chemero, 2013, p.  145). It is radical in that it adheres to an anticomputational and antirepresentational understanding of cognition. It is embodied in that it understands cognition as necessitating a description in terms of coupled agent-environment dynamics. It is cognitive in that it investigates complex, intelligent behavior (Chemero, 2009, p. 27, 2013, p. 148). RECS is guided by the core principles of Gibsonian ecological psychology: direct perception, continuity of perception-action, theory of afordances, and agent-environment system as the unit investigation. Like other ecological psychologists mentioned before, Chemero leverages the methods of DST to describe and explain the activities of agent-environment systems over time. As Chemero makes clear, DST is especially suited for ecological psychology research in its ability to capture the coupled nature of such agent-environment systems. A DST model of such a system can take the form of a single model comprised of two equations, where one equation captures the agent’s contributions and the other the environment’s contributions (see Chapter 5, Equation 5.1). A particularly efective feature of this methodology is that it provides a mathematically precise way to describe and explain events when any changes to the agent/equation 1 results in changes to the environment/equation 2, and vice versa. A powerful quality of RECS is its ability to synthesize concepts, methods, and theories from ecological psychology and DST (not to mention phenomenology; Käufer & Chemero, 2021) into a unifed framework for the scientifc investigation of agent/organismenvironment systems, especially afordance events. While quite efective at that scale, RECS is confned due to its inability to provide an account of neural spatiotemporal scale

contributions to affordance events. Chemero is aware that RECS is limited in that way and will need to be supplemented if a more complete account of mind is to be provided. Along those lines, Chemero speculates that RECS could be integrated with a theoretically-friendly form of cognitive science that conducts research at neural scales, such as enactivism (e.g., Thompson, 2007; Varela, Thompson, & Rosch, 1991). Though friendly in some ways (e.g., RECS and enactivism both stress the importance of perception-action loops), for a successful unification to happen, "much more work is required to genuinely integrate ecological and enactive cognitive science" (Chemero, 2009, p. 154). With that said, Chemero makes the unmistakable point that, "it is perfectly respectable . . . for radical embodied cognitive scientists to acknowledge that brains are important, but insist that they are far from whole story" (2009, p. 181). Although ecological psychology plays a central role in RECS and though RECS does not provide an account of neural scale contributions to affordance events, there have been attempts to do just that. In the next sections, examples of attempts at ecological neuroscience are presented and critically evaluated.

4.3 Reed and Edelman

Edward Reed offered one of the earliest attempts at an ecological neuroscience. Reed's overarching aim was to "show that ecological psychology really does promise . . . a scientific psychology of everyday life" (Reed, 1996, p. 7). In particular, if what is called "the psychological" is a part of nature, then the appropriate way to do scientific psychology is to do it ecologically (Reed, 1996, p. 8). While not the core of his project, Reed did make explicit the need to provide some sort of an account of the neural scale of organism functioning. To that end, Reed leveraged Gerald Edelman's "Neural Darwinism" (Edelman, 1987). Neural Darwinism will be explained in detail in Chapter 6 (so, feel free to jump ahead and come back). For now, it suffices to understand that Neural Darwinism is a theory to explain brain structure and function that is guided by the Darwinian principle of selectionism. Selectionism is the idea that variance within biological populations is necessary for the process of evolution. With regard to brains, the most significant spatiotemporal scale of organization and activity is found in neural populations. As follows, selectionist pressures act on neural populations, such that those neural groups that facilitate evolutionarily desirable results (e.g., reproduction and survival) will be the ones whose connections are strengthened within the organism's lifetime and whose genetic basis is passed to future generations. Reed argued that Neural Darwinism could provide the complementary brain part of the story to the ecological part offered by ecological psychology. A major motivating reason for this is that Neural Darwinism rejected the "instructionist" approach to psychology—which ecological psychology also rejected—in favor of a selectionist one (Reed, 1989, p. 112, 1996, p. 70). In the context of the mind sciences, instructionism can be understood as synonymous or overlapping largely with cognitivism's information processing approach. Instructionist approaches stress two purported features of cognitive systems: The first one is that the nervous system operates largely on preestablished "programs," whose source is innate or genetic.
The other one is that minds are structured with a central executive system that processes those preestablished programs in order to control movement, reason, speak, and other intelligent acts. Reed agreed with Edelman that the nervous system and mind are in no way like instructionist systems. While Gibson and ecological psychologists agreed with Reed’s and Edelman’s rejection of instructionism, they did not—according to Reed—ofer a substantial alternative theory. Reed argued that the principle of selectionism

at the core of Neural Darwinism could provide that alternative theory. For example, Reed discussed the afnities among Neural Darwinism and Gibson’s work on active touch. Reed argued that selectionist processes among neural populations explains why learning and assessing object shapes requires exploration by way of the coordinated “activity across both input-output and central-central connections” (Reed, 1989, p. 111). Even though selectionism ofers ecological psychology a strong foundation for a neural scale account of the nature of perceptual systems, Reed would conclude that Edelman’s particular approach was fundamentally fawed. The main problem centered on psychological concepts. Reed claimed that Edelman did not, frst, make enough explicit connections among neural populations and psychological concepts and, second, that he emphasized categorization to his detriment (Reed, 1989, p. 112). While the frst part of the problem was more excusable—that is, future work could make those connections—the second was not. As Reed argued, categorization is not central to all psychological sciences nor psychological processes, especially perception. That in itself makes Edelman’s emphasis on categorization questionable. But there is a more critical issue: Reed claimed that Edelman’s own selectionist views undermine his reasons for treating categorization as a central psychological concept. The reason is because Edelman discussed categorization in ways that sounded very much like instructionist approaches, which are largely innate and thus unable to deal with novelty, whereas the ability to account for novelty is a strength of selectionist approaches. It seemed that Edelman’s selectionist-based approach was not selectionist enough. For that reason and others,3 Reed would conclude that Edelman’s approach would not sufce for the kind of theory needed to connect the neural scale to the ecological scale. I think Reed was correct when he concluded that Edelman’s approach was not selectionist enough and that it made weak and fawed connections with psychological phenomena. With that said, I think Neural Darwinism remains one of the most viable paths forward to integrating the neural spatiotemporal scale in an ecological neuroscience. But more on that later (i.e., Chapter 6).4 For now, this brief overview of Reed’s attempt at an ecological neuroscience concludes by underscoring the impact it had on future attempts. In particular, Reed was among the frst ecological psychologists to draw attention to the importance of population thinking to the brain part of the story. Here, “population thinking” refers both to selectionism as part of evolutionary pressures and to the scale of neural structure and function most relevant to perception-action activities, such as afordance events.

3 Reed would also highlight that Edelman’s approach did not incorporate the role of persistent ecological information in selective pressures concerning afordances (Reed, 1996, p. 82; also noted by Bruineberg & Rietveld, 2019, p. 203). 4 Mark Twain said, “There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope” (Twain in Paine, 1912, p. 1343). I experienced just that while writing this book. I have been a proponent of Neural Darwinism since at least 2008. When I started thinking about writing a book on ecological neuroscience, one of the frst ideas I had was to connect Edelman’s work to ecological psychology. I wrote what is now Chapter 6 before the current chapter. My background reading on prior attempts at ecological neuroscience ran backwards temporally, for example, reading about neural reuse before Reed. When I dove into Reed’s work more, I was both disappointed (more like horrifed) and reassured. I was disappointed because it seemed that Reed beat me by about 25 years to an attempt at integrating Edelman and ecological psychology. Be that as it may, I was reassured because if somebody like Reed was making connections that I was, then I must be onto something right. As readers will see in Chapter 6, whereas Reed stressed issues such as selectionism versus instructionism in his discussions of Edelman, I focus on other issues like the contributions Edelman made to explaining the development of and coordination among neural populations.

4.4 Neural reuse

In recent years, one of the most popular approaches to neural structure and function among ecological psychologists is Michael Anderson's neural reuse (Anderson, 2010, 2014). In a nutshell, neural reuse is an attempt to explain brain architecture that stresses the idea that local neural regions are utilized in various tasks and across many domains (Anderson, 2014, p. 4). For example, although Broca's area is classically considered the part of the brain for motor speech-production, research has highlighted that the area is associated with non-speech related activity like action-related tasks (e.g., Nishitani, Schürmann, Amunts, & Hari, 2005); in a word, that area can be understood as being "reused." In light of that, neural reuse is properly understood as a type of neuroplasticity (Anderson, 2016, p. 1), where various behavioral and cognitive capacities are achieved via neural coalitions and those neurons may contribute to other coalitions for different capacities as well.

But what is appealing about neural reuse for ecological psychologists? Neural reuse is appealing to ecological psychologists in large part due to the "massive redeployment hypothesis" (Anderson, 2007). The massive redeployment hypothesis proposes that while brains have areas with specialized activity (i.e., neural circuits), those same activities underlie a variety of behavioral and cognitive functions (Anderson, 2007, p. 330). Here comes the part ecological psychologists like: It is because a brain is in a body that is in an environment that the same brain areas can be "redeployed" for an assortment of functions. This is because different body configurations and environmental settings will provide conditions for the same brain area to contribute to different outcomes. As Anderson puts it, "neural, behavioral, and environmental resources [are] reused and redeployed in support of any newly emerging . . . capacities" (Anderson, 2014, p. 7). In view of this, neural reuse can be understood as offering a number of features appealing to ecological psychologists, including, but not necessarily limited to the significance of embodiment to constraining and structuring behavior and cognitive capacities, the significance of environmental situations in constraining and structuring behavior and cognitive capacities, and its rejection of brain localization and classic treatments of modularity.

To build on the claim ending the discussion of Reed in the previous section, neural reuse can be understood as a form of population thinking. While Anderson does not use that wording often in the major texts discussing neural reuse, he regularly uses the term "group" (e.g., Anderson, 2014, 2016), which is akin to "population" in the current context (other commentators have discussed neural reuse in terms of neural maps and populations as well, e.g., Parkinson & Wheatley, 2016). Neural reuse can be understood as a form of population thinking in both of the senses presented: one, as referring to the evolutionary pressures driving the selection of neural groups, and two, as identifying the scale of neural structure and function most relevant to perception-action activities. When viewed in that light, the popularity of neural reuse among ecological psychologists is readily understood as a continuation of the interest in population thinking exemplified earlier by Reed. The question now is: Does neural reuse offer a theory for the neuroscience part of ecological neuroscience? The short answer is that I am skeptical.
My skepticism stems from two main reasons. The first reason is that neural reuse does not provide a fundamental theory of brain structure and function (Favela, 2021). Neural reuse has many merits, such as significant empirical evidence in its favor and characteristics appealing to a broad range of mind scientists. Still, it remains a form of neuroplasticity, and neuroplasticity is not a fundamental theory for neuroscience. Like neuroplasticity, neural reuse is a descriptive account of brain architecture. However, it is not properly a general organizational principle of neural structure and function.

Neural reuse, like other forms of neuroplasticity, is subsumed by the more fundamental theory of Neural Darwinism. To be more precise, neural reuse is a property of brains, and that property falls under the general class of neural characteristics that is neuroplasticity, which falls under the selectionist theory of Neural Darwinism, which itself falls under Darwinism. Darwinism is the fundamental theory of biological organisms, with selection serving as one of its principles.5 Selectionism can serve as a fundamental theory for brain function in the form of Neural Darwinism, while preserving the more fundamental and overarching role of Darwinism. From this line of thought, though neural reuse is an empirically-supported feature of neurobiological systems, it is not a fundamental theory of brain structure and function. Thus, it cannot do the work that is needed of a neural theory in an ecological neuroscience.

The second reason for skepticism is that even if it is granted that neural reuse is a "fundamental theory" of brain structure and function, it alone does not provide enough of a neural part of the story for a complete ecological neuroscience. This point is acknowledged even by proponents of neural reuse, including those who see it as the potential neural story for ecological neuroscience. Vicente Raja's work is exemplary in this regard. Though explicitly an attempt at an "ecological cognitive architecture," the relevant areas of Raja's work are comfortably described as an ecological neuroscience (e.g., Raja, 2018). In this work, Raja aims to complement ecological psychology with an account of the central nervous system's contributions to ecological scale events. In particular, Raja attempts to explain resonance. To that end, Raja offers a framework in the form of neural reuse and a methodology in the form of multi-scale fractal DST to provide the combined neural part of the ecological neuroscience story. There is much to like about Raja's approach, not least of which is that it holds itself to a high bar—that is to say, it attempts to tell the neural scale story while "staying true to the tenets of ecological psychology" (Raja, 2018, p. 29)—and goes beyond a mere theoretical account to one that aims to be empirically assessable (i.e., multi-scale fractal DST; also see van der Weel et al., 2022). However, I do not think this approach facilitates the ability of neural reuse to provide the neural part of the ecological neuroscience story. This is for three main reasons.

First, as discussed previously, the concept of "resonance" is problematic. Thus, for neural reuse to integrate it would be to take on those criticisms as well. For example, the current account suffers from the criticism of resonance being unnecessary, namely, at times Raja discusses resonance as synonymous with oscillations (e.g., Raja, 2018, p. 41). Additionally, it suffers from the criticism of resonance being trivial, namely, that two dynamical systems that interact in some regular way will have a relationship that can be labeled (e.g., Raja, 2018, p. 40; van der Weel et al., 2022, p. 12). Second, as discussed earlier, neural reuse is not so much a fundamental theory of brain structure and function as it is a description of brain architecture. As such, neural reuse does not provide the theoretical foundation Raja needs it to.
Third, Raja's methodology does not do the work it needs to in order to connect the neural scale described by neural reuse to the ecological scale accounted for by ecological psychology. Raja claims that multi-scale fractal DST is the proper methodology for his approach because it provides a mathematical way to understand how "the higher scale [i.e., environment] always constrains the lower one [i.e., neural] and not the other way around" (2018, p. 47). This is because, as Raja claims, the structure "predicted by the fractal theory is a mark of a scalar relation, namely, a relation in which a higher scale constrains a lower one" (2018, p. 47). Unfortunately, this is not true. Fractal structure (also known as pink noise, 1/f noise, or 1/f scaling) in data (e.g., time series) does not indicate a scalar relation. The common definition of "scalars" treats them as being single quantities, typically in fields, and as directionless (e.g., Jeffreys & Jeffreys, 1972). The presence of fractals alone does not indicate unidirectional or top-down causation. In fact—quite the opposite—it is the multidirectional, mutual causation across scales within complex systems that results in a fractal structure, and this makes fractals so appealing for research in nonlinear dynamical systems theory. For example, interaction-dominant dynamics (which will be explained more in Chapter 5) occur when there is nonlinear feedback among the interactions of a system's parts across scales. One indication that a system is interaction dominant is if perturbations to one scale of the system reverberate across scales (see Figure 5.5b). Such perturbations can be at a larger scale and reverberate to smaller scales or originate at smaller scales and reverberate to larger scales—oftentimes both. Fractal analyses are typically employed to conduct assessments of interaction-dominant dynamics (for a small sample see Davis, Brooks, & Dixon, 2016; Favela, 2020; Holden, Choi, Amazeen, & Van Orden, 2011; Ihlen & Vereijken, 2010; Richardson & Chemero, 2014; Van Orden, Holden, & Turvey, 2005). Thus, it is clear that fractals are not indicators of scalar relationships nor of top-down causation; it is quite the contrary. As a result, multi-scale fractal DST cannot serve the role of providing the methodology for Raja's attempt at connecting neural reuse to ecological scale activity in a way that maintains the tenets of ecological psychology. What Raja's approach is surely correct about is that fractals and nonlinearity are proper ways to describe many organizational characteristics across scales in organism-environment systems.

5 Depending on what and how you prefer to count, evolution is commonly described as having three (variation, heredity, and selection) to five principles (variation, inheritance, selection, time, and adaptation, or VISTA). Selection is always one of the principles.

4.5 Bayesianism

The earlier work by Reed and the currently-popular neural reuse were presented as exemplary attempts at an ecological neuroscience. Both, as was argued, reveal that ecological psychologists appear to have a taste for neural populations when attempting to integrate neural scale considerations in explanations of perception-action. As will be argued in upcoming chapters, I think neural populations are the proper spatiotemporal scale for the neural part of an ecological neuroscience. With that said, identifying the proper scale is one thing; understanding the best way to handle data from that scale is another. Even though limitations in his approach were previously identified, Raja is certainly correct to highlight the significant role of data analytic methodologies in an ecological neuroscience. Ideally, the theories (e.g., neural reuse) and methods (e.g., multi-scale fractal DST) ought to be mutually supporting. For example, if a theory states that scale invariance is a definitional or predictive feature of intelligent systems, then an analytic method appropriate to assessing said features ought to be part of the investigative framework (e.g., fractal analyses).
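
To give a concrete sense of what such fractal analyses estimate, the following sketch (a simplified stand-in for the cited methods, which more often use detrended fluctuation analysis; all data here are synthetic) generates white noise and approximate 1/f "pink" noise and fits the slope of each power spectrum on log-log axes. A slope near 0 indicates uncorrelated noise, whereas a slope near -1 indicates the 1/f scaling that fractal analyses take as a signature of interaction-dominant dynamics.

```python
# Minimal sketch (illustrative only): estimating 1/f scaling from a time series
# by fitting the log-log slope of its power spectrum. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)

def pink_noise(n):
    """Generate approximate 1/f noise by shaping white noise in the frequency domain."""
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, d=1.0)
    freqs[0] = freqs[1]            # avoid dividing by zero at the DC bin
    spectrum /= np.sqrt(freqs)     # power falls off as 1/f
    return np.fft.irfft(spectrum, n)

def spectral_slope(x):
    """Fit log10(power) against log10(frequency); the slope estimates the scaling exponent."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0)
    keep = freqs > 0
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(power[keep]), 1)
    return slope

n = 2 ** 14
print("white noise slope:", round(spectral_slope(rng.standard_normal(n)), 2))  # ~ 0
print("pink noise slope:", round(spectral_slope(pink_noise(n)), 2))            # ~ -1
```

Note that the estimated exponent says nothing by itself about which scale constrains which; it only quantifies how fluctuations are distributed across timescales.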
There are times when researchers put the cart before the horse (or, as some say, "Descartes before the horse"). That is to say, there are researchers who appear to prioritize methods or build theories from methods. The recent flood of Bayesian-based work in the mind sciences is illustrative of this. Here, "Bayesianism" refers to a collection of concepts and methods stemming from various implementations of Bayes' theorem, which is a formal way to calculate


the conditional probability of a hypothesis—broadly construed—being true based on prior expectations and updating priors in the face of errors. Put formally, Bayes' theorem is as follows (Equation 4.1):

\[ P(H \mid D) = \frac{P(D \mid H) \, P(H)}{P(D)} \quad (4.1) \]

Here, H refers to a hypothesis and D to data. P(H) refers to the prior probability of belief in H before data D is obtained. P(D) refers to the probability that the data D reflects the actual state of affairs. Thus, P(H|D) captures the probability of hypothesis H being true after data D is obtained. Various forms and interpretations of Bayes' theorem (e.g., default, frequentist, and subjectivist; Mayo, 2018) have been fruitfully applied to analyze and model a wide range of neural and psychological phenomena, such as emotion regulation (Haker, Schneebeli, & Stephan, 2016), neural activity (Naselaris et al., 2009), and perception (Kersten, Mamassian, & Yuille, 2004). In addition, Bayes' theorem serves as the foundation of a variety of theories aimed at encompassing more general neural and psychological structures and functions, for example, active inference (Friston, FitzGerald, Rigoli, Schwartenbeck, & Pezzulo, 2017), Bayesian brain (Doya, Ishii, Pouget, & Rao, 2007), predictive coding (Spratling, 2016), and predictive processing (Hohwy, 2020).

While the details and kinds of commitments vary (e.g., realist versus instrumentalist; Rescorla, 2020), it is arguable that at the core of all applications of Bayesianism is the idea of error minimization. In Bayes' theorem (Equation 4.1), an ideal system strikes a balance between expectations (i.e., the hypothesis; H) and states of affairs (i.e., data; D), or a minimal error between the two. As a theory of brains and cognition, active inference builds from this core commitment to claim that "all facets of behavior and cognition in living organisms follow a unique imperative: minimizing the surprise of their sensory observations" (Parr, Pezzulo, & Friston, 2022, p. 6; italics in original). These sensory observations (or D in Bayes' theorem; Equation 4.1) are produced when agents "adaptively control their action-perception loops to solicit desired sensory observations" (Parr et al., 2022, p. 6; italics in original). As an even broader theory of brains and cognition, one that also aims to explain life, the free-energy principle (FEP) also builds from the core commitment of error minimization in Bayes' theorem to claim that "any self-organizing system that is at equilibrium with its environment must minimize free energy," or "the bounds or limits [of] the surprise on sampling some data, given a generative model" (Friston, 2010, p. 127). Here, generative models are probabilistic models of dependencies between causes and consequences (i.e., data), "specified in terms of the likelihood of data, given their causes (parameters of a model) and priors on the causes" (Friston, 2010, p. 129). Again, the FEP boils down to the core of minimizing error: Like active inference, the FEP treats cognition as fundamentally being about minimizing discrepancies between expectations (i.e., priors) and states of affairs (i.e., data). But unlike active inference, the FEP also treats living organisms as fundamentally being about minimizing discrepancies, where such "discrepancies" are defined in terms of entropy and the tendency of living organisms to resist disorder.
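To make the error-minimization reading of Equation 4.1 concrete, here is a minimal numerical sketch; it is my illustration rather than code from any of the works cited above, and the function name and probability values are arbitrary choices for the example.

```python
# Minimal sketch (not from the cited literature): one Bayesian update for a
# binary hypothesis H, spelling out the terms of Equation 4.1.
def posterior(prior_h: float, p_d_given_h: float, p_d_given_not_h: float) -> float:
    """Return P(H|D) from P(H), P(D|H), and P(D|not-H)."""
    # P(D) is obtained by marginalizing over H and not-H.
    p_d = p_d_given_h * prior_h + p_d_given_not_h * (1 - prior_h)
    return p_d_given_h * prior_h / p_d

# A weak prior (0.3) is revised upward by data that are more probable under H
# (0.8) than under not-H (0.2); the mismatch between prior and data shrinks.
print(posterior(0.3, 0.8, 0.2))  # ~0.63
```

Iterating this arithmetic, with each posterior serving as the next prior, is roughly the operation that Bayesian brain, predictive coding, and active inference accounts scale up to whole perception-action loops.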


The popularity of Bayesianism has even spread into attempts at developing an ecological neuroscience. Jelle Bruineberg, Erik Rietveld, and colleagues (e.g., Bruineberg, Kiverstein, & Rietveld, 2018; Bruineberg, Rietveld, Parr, van Maanen, & Friston, 2018) have argued that the "main explanandum for Gibsonian neuroscience is selective resonance or selective sensitivity to affordances" (Bruineberg & Rietveld, 2019, p. 200; italics in original). This claim purports to result from the following line of thought:

1. The life sciences aim to explain living organisms, especially how organisms survive.
2. Living organisms survive by pursuing what is of value to them (e.g., reproduction, safety, etc.).
3. As part of the life sciences, the neural and psychological sciences are fundamentally concerned with explaining the structure and function of living organisms as they pursue what is of value to them.
4. Ecological psychology is right to emphasize the role of affordances because affordances can be understood as values.
5. Neuroscience can explain the neural contribution to affordances by way of the free-energy principle and related theories of active inference.
6. Therefore, ecological neuroscience (or Gibsonian neuroscience) ought to combine ecological psychology's theory of affordances with the free-energy principle.

In short, proponents of this approach to ecological neuroscience prioritize two claims: one, that affordances are essentially a form of value; and two, that values are explained in terms of minimization (perceptual, metabolic, etc.) that aims toward sustaining life. There is much to find appealing about this attempt at an ecological neuroscience. For example, it potentially offers an elegant path toward unification, that is, applying the theory and formalism of the FEP across the neural and behavioral scales. Unfortunately, I do not think this approach succeeds as an ecological neuroscience. The main reason for this skepticism stems from the point made earlier about putting the cart before the horse, that is to say, when researchers appear to prioritize methods or build theories from methods. Researchers like Karl Friston have attempted to build an all-encompassing framework for explaining everything from perception-action (e.g., active inference; Friston et al., 2017) to life itself (i.e., FEP; Friston, 2010) on a Bayesian foundation. The problem is that Bayesianism, methodologically speaking, does not provide a proper account of the nature of intelligence in living organisms. As a colleague and I have argued elsewhere (Favela & Amon, 2023), Bayesianism suffers from two major shortcomings as a theory of neural and psychological phenomena. First, its calculations and models are predominantly linear. This is a problem because neural and psychological systems are predominantly nonlinear. Second, when such models attempt to incorporate nonlinearities, they typically do so in terms of noise that is defined as being random and unstructured. This is a problem because while noise is widespread in neural and psychological phenomena, it tends to be deterministic and structured, which is contrary to how it is depicted in Bayesianism. Accordingly, any framework that models or explains organism-environment systems during affordance events as linear systems that exhibit only random and unstructured noise will be false. Thus, an ecological neuroscience built on a Bayesian foundation—namely, one that puts the methodological cart before the theoretical horse—will be unsuccessful insofar as it is offered as an account of real-world systems.

4.6 Conclusion

This chapter offered a taste of the varieties of ecological neuroscience. I began by setting the stage with preliminary considerations. It was claimed that neuroscience has not


integrated the primary principles of Gibsonian ecological psychology. Moreover, the conventional attempt by ecological psychologists to account for neural activity by way of the concept of "resonance" is problematic to say the least. Chemero's radical embodied cognitive science was presented as exemplary of fruitful integrations of ecological psychology with other approaches like dynamical systems theory. Afterward, overviews of three main kinds of attempts at developing an ecological neuroscience were presented: Reed's attempt, neural reuse, and Bayesianism. I argued that while each has many merits and has certainly advanced past critiques of foam rubber and wonder tissue, they fall short insofar as they purport to offer an ecological neuroscience. It needs to be acknowledged that this chapter could have included other attempts that explicitly offer an ecological neuroscience (or Gibsonian neuroscience) and those that are implicitly related, such as some forms of radical embodied cognitive neuroscience (e.g., Dotov, 2014; Favela, 2014). Beyond chapter space limitations, the three kinds that were presented were selected for reasons such as historical importance (i.e., Reed), contemporary popularity among ecological psychologists (i.e., neural reuse), and contemporary popularity among neuroscientists (i.e., Bayesianism). They were also selected because they relate to material in upcoming chapters. For example, nonlinearity and noise of the deterministic and structured kinds (cf. Bayesianism) will be important in Chapter 5, and Neural Darwinism (cf. Reed) and neural populations (cf. neural reuse) will be important in Chapters 6 and 7. It is fitting to view this point as the conclusion of the first part of the book, which emphasized the history of relevant issues. Chapter 5 begins the second part of the book and presents the foundations (i.e., complexity science) for the ecological neuroscience offered in Chapters 6 and 7 (i.e., NeuroEcological Nexus Theory, or NExT).

References

Anderson, M. L. (2007). Massive redeployment, exaptation, and the functional integration of cognitive operations. Synthese, 159(3), 329–345. https://doi.org/10.1007/s11229-007-9233-2
Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33(4), 245–313. https://doi.org/10.1017/S0140525X10000853
Anderson, M. L. (2014). After phrenology: Neural reuse and the interactive brain. Cambridge, MA: The MIT Press.
Anderson, M. L. (2016). Précis of After phrenology: Neural reuse and the interactive brain. Behavioral and Brain Sciences, 39, e120, 1–45. https://doi.org/10.1017/S0140525X15000631
Barrett, L. (2012). Why behaviorism isn't Satanism. In J. Vonk & T. Shackelford (Eds.), The Oxford handbook of comparative evolutionary psychology (pp. 17–38). Oxford, UK: Oxford University Press.
Barrett, L. (2016). Why brains are not computers, why behaviorism is not satanism, and why dolphins are not aquatic apes. The Behavior Analyst, 39(1), 9–23. https://doi.org/10.1007/s40614-015-0047-0
Bickhard, M. H., & Richie, D. M. (1983). On the nature of representation: A case study of James Gibson's theory of perception. New York, NY: Praeger.
Bruineberg, J., Kiverstein, J., & Rietveld, E. (2018). The anticipating brain is not a scientist: The free-energy principle from an ecological-enactive perspective. Synthese, 195(6), 2417–2444. https://doi.org/10.1007/s11229-016-1239-1
Bruineberg, J., & Rietveld, E. (2019).
What's inside your head once you've figured out what your head's inside of. Ecological Psychology, 31(3), 198–217. https://doi.org/10.1080/10407413.2019.1615204


Bruineberg, J., Rietveld, E., Parr, T., van Maanen, L., & Friston, K. J. (2018). Free-energy minimization in joint agent-environment systems: A niche construction perspective. Journal of Theoretical Biology, 455, 161–178. https://doi.org/10.1016/j.jtbi.2018.07.002
Buzsáki, G. (2019). The brain from inside out. New York, NY: Oxford University Press.
Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: The MIT Press.
Chemero, A. (2013). Radical embodied cognitive science. Review of General Psychology, 17(2), 145–150. https://doi.org/10.1037/a0032923
Churchland, P. S. (1986). Neurophilosophy: Toward a unified science of the mind/brain. Cambridge, MA: The MIT Press.
Cisek, P. (2007). Cortical mechanisms of action selection: The affordance competition hypothesis. Philosophical Transactions of the Royal Society B, 362(1485), 1585–1599. https://doi.org/10.1098/rstb.2007.2054
Cisek, P., & Hayden, B. Y. (2022). Neuroscience needs evolution. Philosophical Transactions of the Royal Society B, 377(1844), 20200518. https://doi.org/10.1098/rstb.2020.0518
Cisek, P., & Thura, D. (2019). Neural circuits for action selection. In D. Corbetta & M. Santello (Eds.), Reach-to-grasp behavior: Brain, behavior, and modelling across the life span (pp. 91–117). New York, NY: Routledge.
Davis, T. J., Brooks, T. R., & Dixon, J. A. (2016). Multi-scale interactions in interpersonal coordination. Journal of Sport and Health Science, 5(1), 25–34. https://doi.org/10.1016/j.jshs.2016.01.015
Dennett, D. C. (1984). Cognitive wheels: The frame problem of AI. In C. Hookway (Ed.), Minds, machines and evolution: Philosophical studies (pp. 129–151). Cambridge, MA: Cambridge University Press.
de Wit, M. M., & Withagen, R. (2019). What should a "Gibsonian neuroscience" look like? Introduction to the special issue. Ecological Psychology, 31, 147–151. https://doi.org/10.1080/10407413.2019.1615203
Dotov, D. G. (2014). Putting reins on the brain: How the body and environment use it. Frontiers in Human Neuroscience, 8(795), 1–12. https://doi.org/10.3389/fnhum.2014.00795
Doya, K., Ishii, S., Pouget, A., & Rao, R. P. (Eds.). (2007). Bayesian brain: Probabilistic approaches to neural coding. Cambridge, MA: The MIT Press.
Edelman, G. M. (1987). Neural Darwinism: The theory of neuronal group selection. New York, NY: Basic Books.
Favela, L. H. (2014). Radical embodied cognitive neuroscience: Addressing "grand challenges" of the mind sciences. Frontiers in Human Neuroscience, 8(796), 1–10. https://doi.org/10.3389/fnhum.2014.00796
Favela, L. H. (2020). Cognitive science as complexity science. Wiley Interdisciplinary Reviews: Cognitive Science, 11(4), e1525, 1–24. https://doi.org/10.1002/wcs.1525
Favela, L. H. (2021). Fundamental theories in neuroscience: Why neural Darwinism encompasses neural reuse. In F. Calzavarini & M. Viola (Eds.), Neural mechanisms: New challenges in the philosophy of neuroscience (pp. 143–162). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-54092-0_7
Favela, L. H., & Amon, M. J. (2023). Enhancing Bayesian approaches in the cognitive and neural sciences via complex dynamical systems theory. Dynamics, 3, 115–136. https://doi.org/10.3390/dynamics3010008
Favela, L. H., Amon, M. J., Lobo, L., & Chemero, A. (2021). Empirical evidence for extended cognitive systems. Cognitive Science: A Multidisciplinary Journal, 45(11), e13060, 1–27. https://doi.org/10.1111/cogs.13060
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
https://doi.org/10.1038/nrn2787
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A process theory. Neural Computation, 29(1), 1–49. https://doi.org/10.1162/NECO_a_00912
Fultot, M., Adrian Frazier, P., Turvey, M. T., & Carello, C. (2019). What are nervous systems for? Ecological Psychology, 31(3), 218–234. https://doi.org/10.1080/10407413.2019.1615205


Gibson, J. J. (1966). The senses considered as perceptual systems. Boston, MA: Houghton Mifflin.
Gibson, J. J. (1986/2015). The ecological approach to visual perception (classic ed.). New York, NY: Psychology Press.
Grossberg, S. (2021). Conscious mind, resonant brain: How each brain makes a mind. New York, NY: Oxford University Press.
Haker, H., Schneebeli, M., & Stephan, K. E. (2016). Can Bayesian theories of autism spectrum disorder help improve clinical practice? Frontiers in Psychiatry, 7(107). https://doi.org/10.3389/fpsyt.2016.00107
Heft, H. (2020). Ecological psychology and enaction theory: Divergent groundings. Frontiers in Psychology: Theoretical and Philosophical Psychology, 11(991), 1–13. https://doi.org/10.3389/fpsyg.2020.00991
Hilborn, R. C. (2000). Chaos and nonlinear dynamics: An introduction for scientists and engineers (2nd ed.). New York, NY: Oxford University Press.
Hohwy, J. (2020). New directions in predictive processing. Mind & Language, 35(2), 209–223. https://doi.org/10.1111/mila.12281
Holden, J. G., Choi, I., Amazeen, P. G., & Van Orden, G. (2011). Fractal 1/ƒ dynamics suggest entanglement of measurement and human performance. Journal of Experimental Psychology: Human Perception and Performance, 37(3), 935–948. https://doi.org/10.1037/a0020991
Ihlen, E. A., & Vereijken, B. (2010). Interaction-dominant dynamics in human cognition: Beyond 1/ƒα fluctuation. Journal of Experimental Psychology: General, 139(3), 436–463. https://doi.org/10.1037/a0019098
Jeffreys, H., & Jeffreys, B. (1972). Methods of mathematical physics (3rd ed.). Cambridge, UK: Cambridge University Press.
Käufer, S., & Chemero, A. (2021). Phenomenology: An introduction (2nd ed.). Medford, MA: Polity Press.
Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as Bayesian inference. Annual Review of Psychology, 55, 271–304. https://doi.org/10.1146/annurev.psych.55.090902.142005
Kugler, P. N., Kelso, J. A. S., & Turvey, M. T. (1980). On the concept of coordinative structures as dissipative structures. I: Theoretical lines of convergence. In G. E. Stelmach & J. Requin (Eds.), Tutorials in motor behavior (pp. 3–47). New York, NY: North-Holland Publishing Company.
Machery, E. (2009). Doing without concepts. New York, NY: Oxford University Press.
Mayo, D. G. (2018). Statistical inference as severe testing. New York, NY: Cambridge University Press.
Medio, A., & Lines, M. (2001). Nonlinear dynamics: A primer. New York, NY: Cambridge University Press.
Michaels, C. F., & Carello, C. (1981). Direct perception. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Naselaris, T., Prenger, R. J., Kay, K. N., Oliver, M., & Gallant, J. L. (2009). Bayesian reconstruction of natural images from human brain activity. Neuron, 63(6), 902–915. https://doi.org/10.1016/j.neuron.2009.09.006
Nishitani, N., Schürmann, M., Amunts, K., & Hari, R. (2005). Broca's region: From action to language. Physiology, 20(1), 60–69. https://doi.org/10.1152/physiol.00043.2004
Ott, E. (2002). Chaos in dynamical systems (2nd ed.). New York, NY: Cambridge University Press.
Paine, A. B. (1912). Mark Twain: A biography (Vol. 4). New York, NY: Harper & Brothers.
Parker, P. R., Abe, E. T., Leonard, E. S., Martins, D. M., & Niell, C. M. (2022). Joint coding of visual input and eye/head position in V1 of freely moving mice. Neuron, 110(23), 3897–3906. https://doi.org/10.1016/j.neuron.2022.08.029
Parkinson, C., & Wheatley, T. (2016). Reason for optimism: How a shifting focus on neural population codes is moving cognitive neuroscience beyond phrenology.
Behavioral and Brain Sciences, 39, e126, 18–20. https://doi.org/10.1017/S0140525X15001600
Parr, T., Pezzulo, G., & Friston, K. J. (2022). Active inference: The free energy principle in mind, brain, and behavior. Cambridge, MA: The MIT Press.
Perko, L. (2001). Differential equations and dynamical systems (3rd ed.). New York, NY: Springer-Verlag.


Pezzulo, G., & Cisek, P. (2016). Navigating the affordance landscape: Feedback control as a process model of behavior and cognition. Trends in Cognitive Sciences, 20(6), 414–424. https://doi.org/10.1016/j.tics.2016.03.013
Piccinini, G. (2022). Situated neural representations: Solving the problems of content. Frontiers in Neurorobotics, 16(846979), 1–13. https://doi.org/10.3389/fnbot.2022.846979
Pribram, K. H. (1982). Reflections on the place of brain in ecology of mind. In W. B. Weimer & J. S. Palermo (Eds.), Cognition and the symbolic processes (Vol. 2, pp. 361–381). Hillsdale, NJ: Erlbaum.
Raja, V. (2018). A theory of resonance: Towards an ecological cognitive architecture. Minds and Machines, 28(1), 29–51. https://doi.org/10.1007/s11023-017-9431-8
Raja, V., & Anderson, M. L. (2019). Radical embodied cognitive neuroscience. Ecological Psychology, 31(3), 166–181. https://doi.org/10.1080/10407413.2019.1615213
Ramsey, W. M. (2007). Representation reconsidered. New York, NY: Cambridge University Press.
Reed, E. S. (1989). Neural regulation of adaptive behavior. Ecological Psychology, 1(1), 97–117. https://doi.org/10.1207/s15326969eco0101_5
Reed, E. S. (1996). Encountering the world: Toward an ecological psychology. New York, NY: Oxford University Press.
Rescorla, M. (2020). Bayesian modeling of the mind: From norms to neurons. Wiley Interdisciplinary Reviews: Cognitive Science, e1540, 1–15. https://doi.org/10.1002/wcs.1540
Richardson, M. J., & Chemero, A. (2014). Complex dynamical systems and embodiment. In L. Shapiro (Ed.), The Routledge handbook of embodied cognition (pp. 39–50). New York, NY: Routledge.
Richardson, M. J., Schmidt, R. C., & Kay, B. A. (2007). Distinguishing the noise and attractor strength of coordinated limb movements using recurrence analysis. Biological Cybernetics, 96(1), 59–78. https://doi.org/10.1007/s00422-006-0104-6
Riley, M. A., & Turvey, M. T. (2002). Variability and determinism in motor behavior. Journal of Motor Behavior, 34(2), 99–125. https://doi.org/10.1080/00222890209601934
Riley, M. A., & Van Orden, G. C. (Eds.). (2005). Tutorials in contemporary nonlinear methods for behavioral sciences. Arlington, VA: National Science Foundation. Retrieved May 19, 2023 from www.nsf.gov/pubs/2005/nsf05057/nmbs/nmbs.pdf
Roberts, A. J. (2015). Model emergent dynamics in complex systems. Philadelphia, PA: Society for Industrial and Applied Mathematics.
Shockley, K., & Turvey, M. T. (2005). Encoding and retrieval during bimanual rhythmic coordination. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(5), 980–990. https://doi.org/10.1037/0278-7393.31.5.980
Spratling, M. W. (2016). Predictive coding as a model of cognition. Cognitive Processing, 17(3), 279–305. https://doi.org/10.1007/s10339-016-0765-6
Thompson, E. (2007). Mind in life: Biology, phenomenology, and the sciences of the mind. Cambridge, MA: Belknap Press.
van der Weel, F. R. (Ruud), Agyei, S. B., & van der Meer, A. L. (2019). Infants' brain responses to looming danger: Degeneracy of neural connectivity patterns. Ecological Psychology, 31(3), 182–197. https://doi.org/10.1080/10407413.2019.1615210
van der Weel, F. R. (Ruud), Sokolovskis, I., Raja, V., & van der Meer, A. L. H. (2022). Neural aspects of prospective control through resonating taus in an interceptive timing task. Brain Sciences, 12(1737), 1–16. https://doi.org/10.3390/brainsci12121737
van Dijk, L., & Myin, E. (2019). Ecological neuroscience: From reduction to proliferation of our resources. Ecological Psychology, 31(3), 254–268.
https://doi.org/10.1080/10407413.2019.1615221
Van Orden, G. C., Holden, J. G., & Turvey, M. T. (2005). Human cognition and 1/f scaling. Journal of Experimental Psychology: General, 134(1), 117–123. https://doi.org/10.1037/0096-3445.134.1.117
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. Cambridge, MA: The MIT Press.

5 Foundations of complexity science for the mind sciences

If we want to understand nature, if we want to master our physical surroundings, then we must use all ideas, all methods, and not just a small selection of them. (Feyerabend, 1975, p. 173; italics in original)

5.1 A way forward

As discussed in the previous chapter, there are various attempts at integrating features of ecological psychology into neuroscience and vice versa. While there is much to appreciate and learn from that work, it was claimed that they fall short in achieving the kind of reconciliation that would integrate ecological psychology and neuroscience in a manner that surpasses what either field can do alone and makes progress toward an overarching scientific understanding of mind (e.g., cognition, goal-directed behavior, perception, etc.) and its physical realizations. The aim of this chapter is to lay the foundations for a more fruitful reconciliation between ecological psychology and neuroscience by way of complexity science. Complexity science is the interdisciplinary study of complex systems, which, as a starting point, are phenomena composed of many interacting parts that give rise to irreducible order at particular spatial and/or temporal scales (Érdi, 2008; Mitchell, 2009; Solomon & Shir, 2003). To integrate ecological psychology and neuroscience under the single framework of complexity science is to treat their targets of investigative interest as complex systems whose understanding requires an interdisciplinary strategy. Specifically, it means supplementing or replacing their current concepts, methods, and theories with those of complexity science, which is already interdisciplinary through and through.1

This chapter has two main aims: The weaker aim is to show that ecological psychology and neuroscience both benefit from incorporating the concepts, methods, and theories of complexity science. The stronger aim is to show that many phenomena investigated by ecological psychology and neuroscience are properly treated as complex systems and thus ought to be investigated via complexity science. It will be made clear that the weaker aim is uncontroversial due to the fact that many features of complexity science are already employed by those sciences, for example, sophisticated time-series analyses. The stronger aim is

1 In addition to interdisciplinary, complexity science could also be described as cross-disciplinary, multidisciplinary, and transdisciplinary. While these distinctions are relevant and significant in certain contexts (e.g., Klein, 2010), here the concept of interdisciplinary will suffice for the ends of reconciling ecological psychology and neuroscience.

DOI: 10.4324/9781003009955-5


controversial in that it requires accepting a view of their phenomena of interest that is often at odds with many of the ecological psychologist's and neuroscientist's commitments. It will be shown that a complexity science-based investigative framework that integrates ecological psychology and neuroscience includes understanding their phenomena of interest as exhibiting the following four features: emergence, nonlinearity, self-organization, and universality.

A further point that will be made in this chapter is that, like other biological and social sciences (Bar-Yam, 2016), the scientific study of mind (e.g., cognition, goal-directed behavior, nervous systems, etc.) is becoming more like a big data enterprise. This is due in part to the ever-increasing amount of data being generated about the brain (Frégnac, 2017; National Science Foundation, 2011; Sporns, 2013) and the expanding inclusion of non-neural, cognitively relevant features of the body and world into the mind sciences (Allen & Friston, 2018; Anderson, 2014; Chemero, 2009; Favela, 2014; Thompson & Varela, 2001). Such a situation is making it more evident and likely that various forms of mind and their physical realizers are complex systems and that such systems require particular epistemological approaches that can facilitate their understanding. Consequently, it is crucial that accurate understanding and more complete explanations of mind become guided by investigative frameworks based on complexity science, which are suited to making the complex more comprehensible.

5.2 What is complexity science?

Complexity science is the interdisciplinary investigation of, and attempt to explain and understand, complex systems (Allen, 2001; Ball, Kolokoltsov, & MacKay, 2013; Érdi, 2008; Favela, 2015, 2020a; Mobus & Kalton, 2015; Phelan, 2001; Vermeer, 2014). Complexity science has diverse origins (Figure 5.1), with early contributions from areas such as chaos theory, cybernetics, and Gestalt psychology, and more recent ones from big-data mining, network science, and systems biology (Castellani, 2018; Flood & Carson, 1993; Goldstein, 1999). As a result, complexity science serves as a point of integration at which a rich set of concepts, methods, and theories from previously disparate approaches can be brought together and employed for various investigative and explanatory purposes. As an investigative framework, complexity science has been applied to phenomena studied in a variety of disciplines, including, but not limited to, biology, chemistry, economics, physics, and sociology (e.g., Boccara, 2010; Fuchs, 2013; Hooker, 2011a; Mainzer, 2007; Mitchell, 2009; Müller, Plath, Radons, & Fuchs, 2018). The cognitive, neural, and psychological sciences increasingly employ various aspects of complexity science (e.g., Beggs, 2022a; Favela, 2019a; Guastello, Koopmans, & Pincus, 2011; Sherblom, 2017; Sporns, Tononi, & Edelman, 2000; Tognoli & Kelso, 2014; Tomen, Herrmann, & Ernst, 2019; Tsuda, 2001). For example, concepts such as phase transitions (Schiepek, Heinzel, Karch, Plöderl, & Strunk, 2016) and self-organization (Dale, Fusaroli, Duran, & Richardson, 2014) are utilized along with methods like agent-based modeling (Sayama, 2015) and time-series analyses (Riley & Van Orden, 2005), and then given theoretical grounding via theories such as catastrophe theory (Poston & Stewart, 1978) and universality classes (Timme et al., 2016).
The increasing popularity of complexity science has stressed the need to answer the following question: Does the concept "complexity" refer to anything real; or, put another way, is "complexity" a scientific concept? In short, yes, the concept "complexity" refers to something real. It can be helpful to understand "complexity" as an immature concept. In scientific practice, a concept is "immature" when there is no broad agreement on its definition (cf. Kuhn, 1962/1996), but there is enough


Figure 5.1 Where does complexity science originate? The complexity sciences stem from a diverse and rich set of contributing sciences that include, but are not limited to, biology, computer science, mathematics, and physics. Source: Used with permission from Jeffrey Goldstein.

family resemblance (cf. Wittgenstein, 1958/1986) across its usages that practitioners have a general sense of what others mean when they use the word. Across various literatures, the terms "complexity" and "complex systems" are utilized in assorted ways to refer to a range of characteristics. Bar-Yam (2016) focuses on such features as chaos, multiscale interactions, and universality as common to complex systems. Bishop and Silberstein (2019) list over a dozen properties often associated with complexity, with particular emphasis on feedback and strong nonlinearities. Érdi (2008) draws attention to three main characteristics: circular causality/feedback loops, small changes leading to "dramatic" effects, and emergence. Sporns (2007), following Herbert Simon, identifies three common features of complex systems: components, interactions, and emergence. Tranquillo (2019) discusses over two dozen sets of concepts, such as nonlinearity, self-organization, and simple rules leading to complex behaviors. Van Orden and Stephen (2012) discuss complexity in terms of its empirical signals, especially qualitative states that exhibit nonstationarity and phase transitions.

A pessimistic takeaway from this small sample is that "complexity" refers to a hodgepodge of properties, which is evident from authors associating different terms with the word. This apparent lack of agreed-upon characteristics has led some to question whether it is possible to give necessary and sufficient conditions for "complexity" (e.g., Ladyman, Lambert, & Wiesner, 2013), whether alleged complex systems exist only relative to an observer (i.e., are not really real; cf. Crutchfield, 1994), or whether "complexity" is even a scientific concept (Taborsky, 2014). If true, the concept would not be able to accomplish genuine experimental or explanatory roles (Hull, 1988). I think it is far too soon in the history of complexity science to draw these pessimistic conclusions. Although there is some variation in the definition of "complexity" and the properties associated with complex systems, such reasons are not grounds for abandoning belief in complex systems or practicing complexity science. The fact of the matter is that complexity


science is already practiced and complex systems are investigated in part or whole across the life, physical, and social sciences. In view of this, it is already true (to a degree) that "complexity" and "complex system" are scientific concepts. Yet, it is reasonable to view those terms as being immature, namely, they are still developing and being refined. This is not an unusual situation in science. A number of core scientific concepts across various disciplines (e.g., cognition, emotion, gene, information, innateness, mole, etc.) are broadly applied without having necessary and sufficient definitional conditions. Moreover, such terms are open to revision even when it seems there was a broadly accepted definition (e.g., the redefinition of the kilogram on May 20, 2019; Wood & Bettin, 2019). In that way, "complexity" and "complex system" are less like abstract and mathematical concepts (e.g., modus ponens, square, etc.) that can be defined via necessary and sufficient conditions and more like concepts in the natural and social sciences that are defined via family resemblance rather than strict conditions (e.g., mammal, nation, planet, etc.).

With that said, the remainder of this book asserts that "complexity" is not a vacuous term, nor is it a mere synonym for "complicated stuff" or a way of describing anything that is interesting but not understood. Instead, complexity "is a phenomenon that is deeply rooted into the laws of nature, where systems involving large numbers of interacting subunits are ubiquitous" (Nicolis & Nicolis, 2007, pp. 2–3). From that perspective, and based on the various usages and descriptions of the terms mentioned earlier, complexity science can be understood as the scientific investigation of complex systems, which are typically characterized by the following four features: emergence, nonlinearity, self-organization, and universality. In order to motivate these points, it is helpful to understand some of the key areas of research that contribute to the flavor of complexity science best suited to studying mind and to integrating ecological psychology and neuroscience.

5.3 The roots of complexity science

As previously mentioned, there is a range of disciplines—e.g., chaos theory, cybernetics, and Gestalt psychology—that have contributed to the development and practice of complexity science (Figure 5.1). Various authors have drawn attention to disciplines they take as central to complexity science, often for reasons related to their particular area of interest, such as biology or physics (Baofu, 2007; Bar-Yam, 2016; Érdi, 2008; Hooker, 2011b; Tranquillo, 2019). In that spirit, I draw attention to the three fields that I believe contribute the most to understanding how ecological psychology and neuroscience can be cast as part of an integrated complexity science: systems theory, nonlinear dynamical systems theory, and synergetics. Understanding these three fields makes it apparent why emergence, nonlinearity, self-organization, and universality are crucial to investigating and explaining mind as complex systems.

5.3.1 Systems theory

Here, "systems theory" is an umbrella term that encompasses concepts and theories from cybernetics (Wiener, 1948) and general systems theory (e.g., von Bertalanffy, 1972; cf. Hammond, 2003). At its most general, systems theory centers on the study of abstract organizational principles (Heylighen & Joslyn, 1999). Modern-day thinking about systems theory tends to begin with von Bertalanffy, who argued that because systems—especially biological ones—interact with and are open to the influence of their environments, they cannot be understood via reduction to their constituent parts. Rather, systems are wholes that emerge from the interaction of their parts. In that way, a plant cannot be defined by its cells


and biochemical processes, but by the interactions of its cells, organs, body, and environment. Like von Bertalanffy's general systems theory, cybernetics also emphasized system-level activity. However, cybernetics, which originated with Wiener, was particularly focused on the ways systems communicate and manipulate information (Adams, 1999). Central to cybernetics is the study of feedback and feedforward processes, especially for purposes of control. Biological and artificial systems can both be cybernetic, in that both involve feedback and feedforward in order to maintain homeostasis (e.g., a mammal's body temperature) or a specified state (e.g., room temperature via thermostat). It is evident that the central contributions of systems theory to complexity science were a focus on irreducible system-level activity, on component interactions as central to accounting for a phenomenon of interest (i.e., as opposed to mere additive aggregates of the components themselves), and on control theory (i.e., feedback, feedforward, and mixed forms).

5.3.2 Nonlinear dynamical systems theory

Most, if not all, systems are dynamic, namely, their behavior changes over time. Dynamical systems theory (DST) applies mathematical tools to evaluate the stability and variation of dynamic systems. As part of an investigative strategy, DST commonly exhibits two features: one quantitative and the other qualitative. For example, the quantitative part may involve assessing and accounting for variables via sets of difference or differential equations. The qualitative part can include plotting the outputs of those equations in a phase space in order to show the possible states of the system as it evolves over time. Phase space plots also allow the researcher to see how variables interact and how the whole system transitions between qualitatively different states. For definitions of terms, see Table 5.1.

Though DST applies to linear phenomena, much of the appeal and strength of its methods involves its ability to account for nonlinear phenomena via nonlinear dynamical systems theory (NDST). An activity is nonlinear when its outputs are not proportional to its inputs. This can be due to exponential and multiplicative interactions among parts (Enns, 2010), which, in turn, can give rise to unexpected shifts among qualitatively distinct, yet stable, behaviors. In addition, the parts of nonlinear systems cannot be decoupled (Fuchs, 2013, p. 13). If, for example, a system is comprised of two parts that interact nonlinearly, then neither part can be understood as separate from the other, as changes to one part have effects on the other. Attempting to do so, such as solving one part of a model in isolation, would eliminate the system-level dynamics. Consider the following example of coupled differential equations (Equation 5.1):

\[ \dot{x} = ax \times by, \qquad \dot{y} = cx \times dy \quad (5.1) \]

If the equations could be solved in isolation from each other, that is, if changes to x do not require accounting for changes to y, then the equations in Equation 5.1 would refer to two separate one-dimensional equations. But if the equations could not be solved in isolation from each other, that is, if changes to x require taking into account changes to y, then the equations would refer to a single, nondecomposable two-dimensional system (Favela & Chemero, 2016; Fuchs, 2013; also see Beer, 1995). This simple example makes clear how NDST provides numerous tools for studying system-level behavior. Three other facets of NDST are especially important to the practice of investigating mind via complexity science, and they are discussed next.
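To make the point about nondecomposability concrete, here is a minimal numerical sketch; it is my example rather than anything from Fuchs (2013) or Favela and Chemero (2016), and the parameter values, step size, and treatment of Equation 5.1's coupling as multiplicative are assumptions made only for illustration.

```python
# Minimal sketch (my example): Euler integration of a coupled pair of
# equations in the spirit of Equation 5.1, with multiplicative coupling.
def simulate(x0, y0, a=0.9, b=1.0, c=-1.0, d=1.1, dt=0.01, steps=500):
    """Integrate dx/dt = a*x*b*y and dy/dt = c*x*d*y from (x0, y0)."""
    x, y = x0, y0
    for _ in range(steps):
        dx = a * x * b * y  # x's rate of change depends on y ...
        dy = c * x * d * y  # ... and y's rate of change depends on x
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Changing only y's initial value changes where x ends up, which is the sense
# in which neither equation can be solved in isolation from the other.
print(simulate(x0=1.0, y0=0.5))
print(simulate(x0=1.0, y0=-0.5))
```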


Table 5.1 Dynamical systems theory and synergetics concept definitions

Attractor: Points in a state space toward which the trajectory of system variables moves. Examples of attractors include fixed points (the single attractor a system will eventually rest at in the absence of external influence), limit cycles (regular oscillatory behavior in a closed orbit around an attractor), and chaotic attractors (when the behavior of a system variable is deterministic in that it occurs within a set range of the state space [i.e., is globally stable], but is unpredictable in that it cannot be precisely located at any given time [i.e., is locally unstable]).

Control parameter: Variables that guide a system's dynamics, for example, the temperature of water as it undergoes phase transitions among the states of its order parameter from solid to liquid to gas.

Difference equation: A mathematical function that describes the evolution of a system over time, where variables are treated as discrete, for example, populations, such as rabbits and wolves in a particular ecosystem. The general form of such equations is x(t + 1) = F(x(t); p, t), where F is the function, t is time, x is the vector (or position of the system), and p is a fixed parameter. When modeling, difference equations can serve as approximations to differential equations.

Differential equation: A mathematical function that describes the evolution of a system over time, where variables are treated as continuous, for example, a swinging pendulum. The general form of such equations is ẋ(t) = f(x(t); p, t), where f is the function, t is time, p is a fixed parameter, and x is the vector (or position of the system), with the overdot indicating change over time (i.e., a first time derivative).

Dynamic: Behavior that changes over time.

Dynamical system: An abstract or physical entity with a configuration that can be specified at any given time by a set of numbers (i.e., system variables). Future configurations are determined by past and present configurations via rules that transform system variables.

Linear: At its most basic, this refers to additive relationships among variables. Systems comprised of linearly interacting variables can be quite predictable.

Nonlinear: At its most basic, this refers to nonadditive relationships among variables, particularly exponential and multiplicative interactions. Systems comprised of nonlinearly interacting variables tend to be computationally challenging to assess, are no more predictable than statistical probability allows, and can give rise to phase transitions.

Order parameter: The collective variable that captures the macroscopic state of a system, for example, the various states of water, i.e., solid, liquid, and gas. The order parameter holds a circular relationship with its control variables, namely, the macroscopic state of the system constrains and is constrained by the features that constitute it; for example, temperature affects the state of water, but the state of water also affects its temperature.

Phase transition: Sudden qualitative shifts in a system's states. Examples of such shifts include multistability (when a system has more than one stable state, or when a state space has more than one attractor, e.g., water's solid, liquid, and gas states) and bistability (a multistable system with two states, or the shift among two attractors, e.g., the shifting perspective when viewing a Necker cube).

State space: The range of all possible values of x (vector), which is depicted visually via a phase space plot.

Trajectory: The path that system variables are directed along in a state space, which is typically represented via arrows (e.g., Figure 5.2b).

Based on Favela (2020b).
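As a toy illustration of several of the table's terms (a difference equation, a fixed-point attractor, nonlinearity, and a sudden qualitative shift), here is a short sketch; it is my example, not one drawn from Favela (2020b), and the parameter values are arbitrary.

```python
# Minimal sketch (my example): the logistic map, a nonlinear one-dimensional
# difference equation of the general form x(t + 1) = F(x(t); r).
def logistic_map(r, x0=0.2, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.8: the trajectory settles onto a fixed-point attractor (~0.643).
print(logistic_map(2.8)[-3:])
# r = 3.2: that fixed point loses stability and the trajectory settles onto a
# two-cycle, a sudden qualitative shift in the system's long-run behavior.
print(logistic_map(3.2)[-4:])
```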


First, the application of NDST as a set of methods and research strategy tends to aim at identifying the rules (or laws) that govern how a system's state evolves over time (Riley & Holden, 2012). Such rules are presented via differential equations, or the governing equations of a system (Bongard & Lipson, 2007; Brunton, Proctor, & Kutz, 2016; Dale & Bhat, 2018; Daniels & Nemenman, 2015). These are, as mentioned earlier, the quantitative part of the explanation. Limit cycles are examples of such rules captured by differential equations. A limit cycle is a dynamical system with a closed trajectory (Strogatz, 2015). Nevertheless, differential equations alone will not provide comprehension. This is because some differential equations cannot be solved analytically, especially when, as previously discussed, multiple equations are nonlinearly coupled. Accordingly, in order to more fully comprehend a system's dynamics, computer simulations and phase space plots are often utilized. As mentioned earlier, while differential equations provide the quantitative part of the story, simulations and plots provide the qualitative part, which can facilitate a researcher's comprehension in a way that the quantitative part alone does not. A clear example of this is a DST description of pendulum dynamics. The quantitative part of the account is provided via the following differential equation (Equation 5.2):

\[ \frac{d^2\theta}{dt^2} + \frac{g}{l}\sin\theta = 0 \quad (5.2) \]

While somebody familiar with differential equations may have a sense of what is going on in this equation—for example, that the system involves angular displacement (θ) of a pendulum arm of length (l) that is acted upon by gravity (g)—they likely do not have comprehension of the qualitative aspect of the system. For that, the differential equation needs to be plotted (Figure 5.2) and potentially simulated (for an example of a simulation of a pendulum in movement based on this differential equation, see https://en.wikipedia.org/wiki/File:Oscillating_pendulum.gif).

A second key feature of NDST adopted by complexity science is a focus on phase transitions, or sudden and unexpected qualitative shifts common to nonlinear dynamical systems. NDST provides various tools for understanding phase transitions of systems, for example, by identifying universal patterns known as catastrophe flags (Isnard & Zeeman, 1976; Poston & Stewart, 1978). Eight catastrophe flags have been identified: anomalous variation, critical slowing down, divergence, divergence of linear response, hysteresis, inaccessibility, (multi)modality, and sudden jumps (Gilmore, 1981). They are characteristics of nonlinear dynamical systems that can be observed in system-level activity. To empirically observe a catastrophe flag near a qualitative phase shift is typically a strong indicator that a phenomenon is a nonlinear system. A broad range of human behavioral changes have demonstrated catastrophe flags, especially hysteresis. The most widespread application of NDST via catastrophe flags in human research is to human development (e.g., Thelen & Smith, 2006; van Geert, 1994) and action/perception (e.g., Haken, Kelso, & Bunz, 1985; Richardson, Marsh, Isenhower, Goodman, & Schmidt, 2007; van Rooij, Favela, Malone, & Richardson, 2013).

A third area of NDST that has become central to complexity science is fractal geometry. Originating with Mandelbrot in the 1970s (Mandelbrot, 1977/1983), a fractal is a scale-invariant, self-similar structure. Scale-invariance refers to exact or statistically self-similar patterns or structures at various spatial and temporal scales. Fractals can occur spatially or temporally, such that the global structure is maintained at various scales of observation (Figure 5.3).


Figure 5.2 Pendulum dynamics plots. (a) Time-series plot of pendulum differential equation. (b) Phase space plot of pendulum differential equation. Source: Reprinted with permission from Krishnavedala (2014). CC BY-SA 4.0.
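For readers who want to generate plots like those in Figure 5.2 themselves, the following is a minimal numerical sketch of Equation 5.2; it is my code, not the source of the figure, and the arm length, time step, and initial conditions are arbitrary choices.

```python
# Minimal sketch (my code): numerically integrating Equation 5.2 to trace the
# kind of trajectory shown in the Figure 5.2 phase space plot.
import numpy as np

g, l = 9.81, 1.0          # gravitational acceleration and arm length (assumed values)
dt, steps = 0.001, 20000  # integration step and number of steps

theta, omega = 0.5, 0.0   # initial angular displacement (rad) and angular velocity
trajectory = []
for _ in range(steps):
    # Equation 5.2 rewritten as two first-order equations:
    # d(omega)/dt = -(g / l) * sin(theta) and d(theta)/dt = omega
    omega += dt * (-(g / l) * np.sin(theta))
    theta += dt * omega
    trajectory.append((theta, omega))

# Plotting theta over time gives a time-series plot like panel (a); plotting
# theta against omega gives a closed-orbit phase space plot like panel (b).
print(trajectory[-1])
```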

Spatial fractals that are exactly self-similar include Koch and Sierpinski triangles (Figure 5.3a). Spatial fractals that are statistically self-similar include geographic phenomena such as coastlines and mountain ranges, as well as biological phenomena such as tree branching and cauliflower (Figure 5.3b). Temporal fractal structures that are exactly self-similar include metronomes and radio frequencies. Temporal fractal structures that are statistically self-similar include finger tapping (Kello, Beltz, Holden, & Van Orden, 2007), healthy heartbeats (Peng et al., 1995), healthy human gait patterns (Hausdorff, Peng, Ladin, Wei, & Goldberger, 1995), spontaneous single-neuron activity


Figure 5.3 Fractals. (a) The self-similarity of fractals can be perfect, as depicted by the Sierpinski triangle. (b) Fractals can be statistically self-similar, such as fractals found in nature, for example, Romanesco broccoli. (c) Physiological activity can also be statistically self-similar. Here, synthetic time series generated from detrended fluctuation analysis (Favela et al., 2016) of data recorded from spontaneous activity of single neurons in the rat main olfactory bulb (Stakic, Suchanek, Ziegler, & Griff, 2011). Statistical self-similarity shown in windows based on powers of two—(top) 8,192 seconds, (middle) 2,048 seconds, and (bottom) 1,024 seconds—with overall temporal trends repeated within each window of time. Source: (a) Modified and reprinted with permission from Stanislaus (2013). CC BY-SA 3.0; (b) Reprinted with permission from Leidus (2021). CC BY-SA 4.0.

(Favela, Coey, Griff, & Richardson, 2016; Figure 5.3c), and changes in fMRI signals (Lee et al., 2008). These examples demonstrate the ubiquity of fractals in nature. As such, it is surprising that it was not until the 1970s that such structures entered the scientific lexicon. That is remarkable in itself. But what is more fascinating are the mathematical developments that followed. Mandelbrot hit the nail on the head when he stated, "Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line" (Mandelbrot, 1977/1983, p. 1). The truth underlying that statement has caused some to say that for those natural phenomena, it is meaningless to utilize traditional mathematical concepts and methods to assess them (Falconer, 2013). It


is "meaningless" in that it is quite uninformative when the details of such phenomena are smoothed over—pun intended: Clouds are not smooth spheres. As a result, new mathematical tools have been developed—and some old ones have been applied in new ways (e.g., set theory; Brown & Liebovitch, 2010)—in order to assess such phenomena in meaningful ways. For example, Figure 5.4 depicts three types of time-series data: white (random, unstructured), pink (self-similar, structured), and brown (random and unstructured at shorter timescales, more structured at longer timescales). If all three data sets were analyzed via standard statistical methods, like calculating the mean, and if the mean is the same for all three, then a researcher could be led to believe the behavior that produced each was the same. However, if nonstandard methods, like fractal analyses, were used for all three data sets, then, even if the mean is the same, the researcher would see that the behaviors that produced the data were quite different. For example, during a visual-search task (e.g., Aks, Zelinsky, & Sprott, 2002), white noise could have been produced by a participant inefficiently looking at random locations on a screen, pink noise by a participant utilizing an efficient strategy, and brown noise by a participant whose visual-search patterns were sometimes efficient and sometimes not. In sum, Mandelbrot sought to characterize the art of roughness in nature, where the "rough" shapes of those natural objects are not random but structured.

The following methods are commonly used to assess for fractals and self-similarity: box/grid counting (Mandelbrot, 1977/1983), detrended fluctuation analysis (Peng et al., 1995), multifractal analysis (Kelty-Stephen & Wallot, 2017), multifractal detrended fluctuation analysis (Ihlen, 2012), spectral analyses (e.g., Fourier transform and periodogram; Delignieres et al., 2006; Sebastián & Navascués, 2008), and wavelets (Ihlen & Vereijken, 2010). These methods have given researchers the quantitative tools needed to assess and reveal features of natural phenomena that were not previously or properly comprehensible. Most noteworthy for present purposes, these methods have been employed in the assessment of complex systems. Specifically, they have been applied in the assessment of nonlinearity and scale-invariant structure, among other features.
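As a rough illustration of what such analyses detect, the following sketch generates the three signal types shown in Figure 5.4 and estimates the slope of each power spectrum, a crude stand-in for the fractal methods listed above; it is my example, not taken from the cited sources, and the signal length and random seed are arbitrary.

```python
# Minimal sketch (my example): white, pink, and brown noise, distinguished by
# the slope of log power versus log frequency (~0, ~-1, and ~-2, respectively).
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 12

white = rng.standard_normal(n)   # uncorrelated samples
brown = np.cumsum(white)         # integrated (accumulated) white noise
# Approximate pink (1/f) noise by shaping white noise in the frequency domain.
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = np.fft.rfft(rng.standard_normal(n))
spectrum[1:] /= np.sqrt(freqs[1:])
pink = np.fft.irfft(spectrum, n)

def spectral_slope(signal):
    """Fit a line to log power vs. log frequency and return its slope."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    f = np.fft.rfftfreq(len(signal), d=1.0)
    return np.polyfit(np.log(f[1:]), np.log(power[1:]), 1)[0]

for name, sig in [("white", white), ("pink", pink), ("brown", brown)]:
    print(name, round(spectral_slope(sig), 2))
```

Even when the three signals have similar means, their spectral slopes differ, which is the sense in which such methods reveal structure that standard descriptive statistics smooth over.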

5.3.3 Synergetics

The third contributor to complexity science, and the one most significant for understanding how complexity science can integrate ecological psychology and neuroscience, is synergetics. Synergetics is a framework for investigating systems with many parts that interact at various spatial and temporal scales (Haken, 2007). A number of features distinguish synergetics from other frameworks that investigate system-level phenomena. First, it focuses on spontaneous processes and structures, specifically, self-organization. Second, its aim is to "unearth general principles (or laws) underlying self-organization irrespective of the nature of the individual parts of the considered systems" (Haken, 2016, p. 150; italics in original). In other words, a primary goal of synergetics is to discover general laws of the ways systems self-organize. Third, it conceptualizes systems in terms of macro- and microscopic spatial and temporal scales in a contextual manner. Specifically, there is no absolute "macro-" scale that applies to all investigations; what counts as "macro-" and "microscopic" depends on the research question. This leads to the fourth and final distinguishing feature of synergetics: Research is guided by the conceptualization and application of order and control parameters.

As previously discussed, NDST-guided research often centers on identifying how a system's state evolves according to a rule. Mathematics, in the form of differential equations,


Figure 5.4 Time-series of three signal structure types. (a) Random white noise, which is unstructured over time. (b) Pink noise (also known as 1/f noise or 1/f scaling) is fractal, namely, the signal's structure is self-similar at shorter and longer timescales. (c) Brown noise, which exhibits random structure at shorter timescales and more ordered structure at longer timescales. The y-axis refers to a value (x) such as finger taps or heartbeats. The x-axis refers to temporal values (s) such as milliseconds or seconds.

is typically employed to this end. Similarly, synergetics is interested in discovering general laws of self-organization. Moreover, such laws are stated in terms of differential equations that capture the macroscopic state of a system. Within synergetics, such macroscopic states are referred to as order parameters (Haken, 1988/2006, p. 13). Order parameters are the collective variables that describe the macroscopic phenomenon under investigation (Haken, 2016, p. 151). The other half of the approach is the control parameters. Control parameters are those variables that guide the system's dynamics, such as energy or information that flows into and through the system and/or among its parts. At this point it is crucial not to make the following mistake: Although it is reasonable—for example, at the experimental design stage—to conceptualize the order parameter like a dependent variable and the control parameters like independent variables (cf. Roberton, 1993), the comparison is not one of equivalency. The crucial difference manifests in the way each set treats causation. In terms of dependent and independent variables, the latter causes the former. In terms of order and control parameters, the latter does not cause the former. Control variables do not cause the system-level behavior as a result of any sort of linear cause–effect relationship (cf. Kelso, 1997, pp. 7, 45). This is due to two commitments in synergetics: the (unfortunately named) slaving principle and circular causality (Haken, 2016). The slaving principle refers to the idea that the order parameter determines the activity of the system's parts (Haken, 1988/2006, pp. 13, 48). Note that the slaving principle is not the idea that the order parameter determines the control parameters. This significant difference leads to the second commitment: circular causation.

An example is helpful when trying to understand what circular causality means in synergetics. Consider the Haken-Kelso-Bunz (HKB; Haken et al., 1985) model of bimanual coordination. This model was an early achievement in synergetics that aimed to explain the dynamics and transitions among states while two limbs moved at different frequencies; here the limbs were the index fingers, with movements starting at in-phase or anti-phase positions. The goal was to account for the observed patterns of behavior with as few variables as possible. In terms of the synergetics framework, the order parameter is the variable that captures the coordinative states of the two fingers, and the control parameter is the variable that captures what is driving the coordinative states. The concise (and now influential) model that was developed is as follows (Equation 5.3):

\[ \dot{\phi} = -a \sin\phi - 2b \sin 2\phi \quad (5.3) \]

Here, the order parameter is φ, the relative phase between the two fingers as it evolves over time, and the control parameters are the coefficients a and b, whose ratio b/a decreases as the fingers' movement frequency increases. With this simple model, the entire range of coordinative movements is accounted for. In terms of the slaving principle, neither a nor b causes φ, because they are enslaved to φ. That is to say, while a and b do affect φ, φ also affects a and b. In that way, there is circular causality: none of the variables can be pinpointed as the starting point in a linear causal chain that explains the system-level (i.e., macroscopic) dynamics. The manner in which circular causation occurs here is also the manner in which nonlinearity is exhibited. For one, control parameters do not have an additive or linear effect on the order parameter, such that increasing control parameter a by one unit will not necessarily produce a proportional change in the order parameter φ.
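To make the relationship between order and control parameters concrete, the following sketch integrates Equation 5.3 numerically. It is my own illustration with arbitrary coefficient values, not code from Haken et al. (1985); the point is only that lowering the ratio b/a past a critical value abolishes the anti-phase pattern.

```python
# A minimal sketch of the HKB relative-phase dynamics: d(phi)/dt = -a*sin(phi) - 2b*sin(2*phi).
# Lowering b/a (which happens as movement frequency increases) makes the anti-phase
# pattern (phi = pi) unstable, leaving only in-phase coordination (phi = 0).
import numpy as np

def simulate_hkb(a, b, phi0=np.pi - 0.1, dt=0.01, steps=20000):
    """Integrate the HKB equation with forward Euler from an initial phase phi0."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * np.sin(phi) - 2 * b * np.sin(2 * phi))
    return phi % (2 * np.pi)

a = 1.0
for b in (1.0, 0.5, 0.2):                 # decreasing b/a mimics speeding up the fingers
    final_phi = simulate_hkb(a, b)
    print(f"b/a = {b / a:.2f} -> settles near phi = {final_phi:.2f} rad")
# For b/a above the critical ratio of 0.25, the run stays near anti-phase (phi ~ pi);
# below it, the trajectory switches to in-phase coordination (phi ~ 0 or 2*pi).
```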


The order and control parameter approach has proven quite successful. As a modeling strategy, it has facilitated the identification of phase transitions among various behaviors. Examples include such diverse cases as decision-making (van Rooij et al., 2013), speech categorization patterns (Tuller, Case, Ding, & Kelso, 1994), perception of ambiguous figures (Ditzinger & Haken, 1995), and synergies among neuronal ensembles (Kelso, 2012). As part of an explanatory framework, it has facilitated the identification of "laws" of self-organization, often in line with catastrophe theory. Examples include critical slowing down (Scholz, Kelso, & Schöner, 1987), hysteresis (Haken et al., 1985; van Rooij et al., 2013), multistability (Ditzinger & Haken, 1995), and sudden jumps (Thelen & Smith, 2006).
Taken together, systems theory, NDST, and synergetics have made major contributions to complexity science. They have provided sophisticated conceptual tools to understand complex systems. Moreover, they have provided methods to empirically assess those concepts. With this background in place, in the next section I present the most significant features of complexity science that can facilitate the integration of ecological psychology and neuroscience under one investigative framework.

5.4 Key concepts and ways to get a grip on them

As previously mentioned, various researchers highlight the aspects of complexity science that interest them most, which can lead some to believe there is no coherent discipline that investigates complex systems. Although I am sympathetic to the idea that complexity science is an immature science, I do think there are core concepts that capture the central features of specific domains of interest. To that end, I present the following four concepts as being crucial for understanding mind in terms of complex systems and, thereby, for integrating ecological psychology and neuroscience: emergence, nonlinearity, self-organization, and universality.
First is emergence, one of the most common concepts associated with complexity science (e.g., Érdi, 2008; Favela, 2019a; Sporns, 2007), with some claiming it is equivalent to complexity (e.g., Agazzi & Montecucco, 2002). Each of the key contributors to complexity science previously described makes some attempt to account for emergence. In particular, they attempt to do so in terms of the typical understanding of "emergence" as referring to cases when the whole is more than the sum of its parts—a description that dates as far back as Aristotle (Humphreys, 2016). For example, central to systems theory was the attempt to understand systems as wholes that result from the interactions of their parts and are not reducible to what the parts do in isolation. For reasons that will be stated shortly, I argue that emergence of the type typical to the study of mind is often cashed out in terms of interaction-dominant dynamics. While I respect the fact that there is an enormous literature on emergence (e.g., Bedau & Humphreys, 2008; Goldstein, 1999; Kim, 2006), I will attempt to provide a concise sense in which it plays a role in complexity science that is most relevant to the mind sciences. In the philosophical literature, five features are commonly considered necessary for emergence: downward causal influence, novelty, relationality, supervenience, and unpredictability (Francescotti, 2007). The scientific literature, however, typically does not use "emergence" to refer to all of those five features.
I have argued (Favela, 2019b) that in the mind sciences, especially the cognitive and psychological sciences, "emergence" is often used interchangeably with "interaction-dominant dynamics" (e.g., Davis, Brooks, & Dixon, 2016; Holden, Van Orden, & Turvey, 2009; Szary, Dale, Kello, & Rhodes, 2015; Wijnants, Bosman, Hasselman, Cox, & Van Orden, 2009). Interaction dominance contrasts with component dominance. A system's dynamics are component dominant when the system-level dynamics are reducible to the dynamics that the components would exhibit if separated and then added back together in an additive, linear fashion.


One indication that a system is component dominant is if perturbations to one part of the system stay localized, where "local" can be understood in temporal (Figure 5.5a) or spatial terms. A system's dynamics are interaction dominant when they exhibit nonlinear feedback among the interactions of their parts, such that it is the continual interactions of the parts that facilitate the system-level dynamics. One indication that a system is interaction dominant is if perturbations to one part of the system do not remain local but reverberate throughout (Figure 5.5b). As with synergetics, the kind of causation at work here is one of circularity: the system-level dynamics and the parts simultaneously structure each other's dynamics. As in cybernetics, feedback is crucial in interaction-dominant systems. Unlike in cybernetics, however, the feedback is not in the service of prescribed outcomes for purposes of control. As complex systems, interaction-dominant systems are context-dependent, such that varying contexts can alter the nature of the parts during interactions. In the complexity science literature, there are many tools for assessing emergence, with computational models doing much of the work (e.g., agent-based models and cellular automata; Floreano & Mattiussi, 2008). If, however, I am correct (Favela, 2019b), and emergence in the cognitive and psychological sciences—as well as the neurosciences—is always a kind of interaction-dominant dynamics, then many of the same tools that evaluate interaction dominance ought to be applicable to the investigation of emergence, thereby making emergence an empirically tractable phenomenon.
Second is nonlinearity, which, like emergence, is commonly cited in the complexity science literature as a key feature of complex systems (e.g., Bishop & Silberstein, 2019; Tranquillo, 2019). Nonlinearity is also central to NDST (obviously), as well as to synergetics, especially in regard to the slaving principle and the circular causality exhibited by the relationship among order and control parameters. I have already pointed out some of the features of nonlinearity, but that was primarily in the service of explaining methods. Here I focus a bit more on the definition. Though nonlinearity can be defined simply, the consequences for a system that exhibits it are more complicated. Nonlinearity refers to cases when the output is not directly proportional to the input; instead, outputs are exponential or multiplicative. In contrast, linearity refers to cases when the output is proportional to the input, such that outputs are the additive result of the inputs. Assuming that a system or data set is linear or nonlinear has numerous consequences for research, two of which I draw attention to here. One consequence involves the distinction between historical and logical variation. Historical variation refers to the notion that fluctuations in a system are influenced by its previous states (Klein, 1997). This type of variation contrasts with logical variation, the central underlying assumption of most standard data analysis methods used in the mind sciences. Methods such as standard linear statistics (e.g., analysis of variance and t tests) treat differences between measurements as discrete from each other, such that variation is not influenced by history. In this way, given enough observations (e.g., coin tosses), measurements will adhere to the central limit theorem and fall along a Gaussian distribution. There is no doubt that linear statistics are useful when assessing all sorts of phenomena.
However, when it comes to complex systems, where historical variation is the rule and not the exception (May, 1976), reliance on methods that treat their data ahistorically will surely result in distorted or false conclusions. Remember, linearity underlies logical variation in that it is assumed that enough data points will fall along a Gaussian bell-shaped distribution. Allowing for historical variation in a system does not carry the same assumption that enough data points will adhere to a Gaussian distribution. If previous states affect current and future states of a system, then an average score may ignore some of the most interesting and relevant activity of the evolving system.
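A small illustration of the difference may help. The sketch below is my own example (the AR(1) coefficient of 0.9 and the series length are arbitrary): it generates one series without memory and one whose fluctuations depend on their own past, then compares ahistorical summaries with a history-sensitive one.

```python
# A minimal sketch of historical versus logical variation. The two series below have
# similar means and standard deviations, but only one depends on its own past.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 5000

iid = rng.standard_normal(n)                       # logical variation: no memory

ar1 = np.empty(n)                                  # historical variation: AR(1) process
ar1[0] = 0.0
for t in range(1, n):
    # scale the innovations so the stationary standard deviation stays near 1
    ar1[t] = 0.9 * ar1[t - 1] + np.sqrt(1 - 0.9 ** 2) * rng.standard_normal()

def lag1_autocorr(x):
    """Lag-1 autocorrelation: how strongly each value depends on the previous one."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

for name, series in [("iid", iid), ("AR(1)", ar1), ("shuffled AR(1)", rng.permutation(ar1))]:
    print(f"{name:>15}: mean={series.mean():+.2f}  sd={series.std():.2f}  "
          f"lag-1 r={lag1_autocorr(series):+.2f}")
# The mean and sd barely distinguish the series; only the lag-1 autocorrelation
# (near 0.9 for AR(1), near 0 once its history is shuffled away) reveals the memory.
```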


Figure 5.5 Component-dominant dynamics and interaction-dominant dynamics. (a) A synthetic white noise time-series. Each section depicts the localized effect of perturbations common to systems exhibiting component-dominant dynamics. (b) A synthetic pink noise (1/f scaling) time-series. The arrows depict the distributed, nonlocalized effects of perturbations common to scale-invariant systems exhibiting interaction-dominant dynamics.

It is worth noting that there is no single kind of data analysis that will make a definitive case for a data set exhibiting historical variation. Assessing for power-law distributions, for example, is often a first step in investigating whether a system exhibits historical variation. Yet evidence of power laws alone will not arbitrate the issue, as there are many thorny issues involving the relationships among fractals, historical variation, normal distributions, and power laws. With that said, analyses such as the cocktail model (Holden et al., 2009), which expresses distributional features of data (e.g., location, scale, and shape; Amon & Holden, 2021), can contribute to building a case as to whether or not historical variation is at play in a particular phenomenon. In the case of neuronal activity, for example, since no single measure has yet been developed that can determine whether a data set exhibits complexity, Marshall et al. (2016) provide a toolset that applies maximum-likelihood estimation to four types of distributions (power law, doubly truncated power law, exponential, and lognormal) in data sets. Again, no single analysis is available to determine such features as historical variation or complexity, but cases can be motivated via methods such as cocktail models and maximum-likelihood estimations.
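As a rough sense of what such maximum-likelihood comparisons involve, the following sketch fits three candidate distributions to synthetic heavy-tailed data with SciPy and compares them by AIC. It is a simplified stand-in for, not a reproduction of, the Marshall et al. (2016) toolset; the Pareto shape parameter and sample size are my own arbitrary choices.

```python
# A minimal sketch of comparing candidate distributions via maximum likelihood.
# Synthetic "avalanche sizes" are drawn from a Pareto (power-law-like) source, and
# three candidate families are fit with SciPy and compared by AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
sizes = stats.pareto.rvs(b=1.5, size=2000, random_state=rng)   # synthetic data

candidates = {
    "power law (Pareto)": stats.pareto,
    "exponential": stats.expon,
    "lognormal": stats.lognorm,
}

for name, dist in candidates.items():
    params = dist.fit(sizes, floc=0)               # MLE with the location fixed at 0
    loglik = np.sum(dist.logpdf(sizes, *params))
    n_free = len(params) - 1                       # loc was fixed, so it is not free
    aic = 2 * n_free - 2 * loglik                  # lower AIC = better trade-off
    print(f"{name:>18}: log-likelihood = {loglik:10.1f}, AIC = {aic:10.1f}")
# On its own, such a comparison cannot "prove" a power law; like the cocktail model,
# it helps build a case by showing which distributional family fits least badly.
```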


The second consequence concerns the manner in which system perturbations are assessed. If a system's dynamics are the result of linear (i.e., additive) processes, then it follows that the effects of perturbations will be localized in its individual components (Holden et al., 2009, p. 319). This is because linear systems have minimal, if any, interactions, such that the system-level dynamics are the result of additive relationships among relatively independent parts (Figure 5.5a). On the other hand, if a system's dynamics are the result of nonlinear (e.g., multiplicative) processes, then it follows that the effects of perturbations will not be localized and will percolate throughout the system due to interaction-dominant dynamics (Figure 5.5b). There are many methods for assessing nonlinearity and the relationships among component parts, including recurrence quantification analysis (RQA; Coco & Dale, 2014; Shockley, 2005; Figure 5.6). In the context of complex systems, RQA identifies recurrent patterns of behavior that unfold over time and are visualized in a recurrence matrix (Webber & Zbilut, 2005). Recurrence plots can be used to visualize the recurrence of single features of a system, such as various wave heights of a body of water (Webber & Zbilut, 2005), or of various parts of a single system, such as wave heights and the overall depth of a body of water.
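The core object behind these plots can be computed in a few lines. The sketch below is my own simplification (real RQA toolboxes, such as the R package of Coco and Dale, 2014, add phase-space embedding, principled radius selection, and line-based measures); it simply marks which pairs of time points fall within a small radius of each other.

```python
# A minimal sketch of a recurrence matrix: two time points are "recurrent" when the
# system revisits nearly the same state.
import numpy as np

def recurrence_matrix(x, radius=0.1):
    """Return a boolean matrix R where R[i, j] is True if |x[i] - x[j]| < radius."""
    x = np.asarray(x, dtype=float)
    dists = np.abs(x[:, None] - x[None, :])        # pairwise distances between states
    return dists < radius

# a toy signal that revisits the same states periodically, so recurrences fall
# along diagonal lines in the plot
t = np.linspace(0, 20 * np.pi, 500)
signal = np.sin(t)

R = recurrence_matrix(signal, radius=0.1)
recurrence_rate = R.mean()                         # proportion of recurrent point pairs
print(f"recurrence rate = {recurrence_rate:.2f}")
# Plotting R (e.g., with matplotlib's imshow) yields a recurrence plot of the sort
# described above; quantitative RQA measures are then computed from such patterns.
```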

Figure 5.6 Recurrence quantification analysis plots. (a) Depiction of the recurrence of events across 2000 s. (b) More detailed depiction of the circled part of (a). By color coding data points—that is, red for α, green for β, and blue for γ—a recurrence plot is able to distinguish between qualitative shifts in behavior, where each color represents a distinct and repetitive pattern that unfolds over time. Source: Figure created by Hana Vrzakova based on code and data from Coco and Dale (2014).


The left plot in Figure 5.6 depicts simulated data from an experiment. The right plot depicts in greater detail the circled part of the left plot. By color coding data points—that is, red for α, green for β, and blue for γ—a recurrence plot is able to distinguish between qualitative shifts in behavior, where each color represents a distinct and repetitive pattern that unfolds over time (Amon, Vrzakova, & D'Mello, 2019; Necaise, Williams, Vrzakova, & Amon, 2021). Taken together with quantitative tools, the recurrence plot can reveal the nonlinear effects that, for example, multiple participants or specific actions have on each other as they contribute to a single task. Such an approach has been successfully applied to the experimental investigation of extended cognitive systems (Favela, Amon, Lobo, & Chemero, 2021).
Third is self-organization, which was first introduced here in discussions of systems theory and synergetics. Whereas cybernetics focused on predefined or prespecified outcomes of systems with feedback, systems theory centered on systems that organize without direct intervention or instruction from an outside source or central controller. Synergetics takes this approach a step further and explores general rules that result in self-organized behavior. Like fractals, self-organization seems to be ubiquitous in nature. Examples of the self-organization of complex structures and activity range from Rayleigh-Bénard convection cells in heated fluids and Belousov-Zhabotinsky chemical reactions, to fish bait balls and starling murmurations, as well as highway and internet traffic. Following from the influences of systems theory and synergetics, self-organization plays very specific roles in complexity science when applied to phenomena in the cognitive, neural, and psychological sciences. These can be understood via the following two goals:

1. Identify simple rules (i.e., laws or principles) to account for the complex organization and activity of the systems of investigative interest (i.e., cognitive, neural, and psychological systems).
2. If such rules (i.e., laws or principles) can be identified, then they must guide the organization and activity of a system without being executed or directed by some sort of central controller or preprogramming.

The HKB model of coordination discussed earlier is an early success in this approach (Haken et al., 1985). Based on very simple rules, the HKB model could account for the full range of bimanual coordination, including phase transitions among qualitatively different states. Kelso, an author of that work, has since expanded the HKB model and developed an investigative framework centered on coordination dynamics. Kelso and colleagues have been able to build on the simple HKB model of bimanual coordination to successfully model and explain a range of phenomena, from language—e.g., the physiology of sound production (Kelso, Tuller, Vatikiotis-Bateson, & Fowler, 1984) and speech categorization patterns (Tuller et al., 1994)—to phase transitions among coupled neurons (Kelso, Dumas, & Tognoli, 2013). In addition to coordination dynamics, research on self-organization in cognitive systems has continued in the form of complexity matching (Fine, Likens, Amazeen, & Amazeen, 2015; Marmelat & Delignières, 2012) and synergies (Müller et al., 2018).
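One way to see the two goals listed above in action is with a toy model of coupled oscillators. The sketch below uses the Kuramoto model, which is my own choice of illustration rather than a model discussed in this chapter: each oscillator follows a simple local rule, yet with sufficient coupling the population settles into collective synchrony with no central controller, and the coherence r behaves like an order parameter.

```python
# A minimal sketch of self-organization via the Kuramoto model: simple local rules,
# no central controller, and a collective rhythm that emerges and then constrains
# the very parts that produce it.
import numpy as np

rng = np.random.default_rng(seed=4)
n = 100
omega = rng.normal(0.0, 0.5, n)             # intrinsic frequencies (the parts differ)
theta = rng.uniform(0, 2 * np.pi, n)        # random initial phases

def order_parameter(theta):
    """Kuramoto coherence r: 0 = incoherent, 1 = fully synchronized."""
    return float(np.abs(np.mean(np.exp(1j * theta))))

K, dt = 2.0, 0.05                           # coupling strength and Euler time step
print(f"before: r = {order_parameter(theta):.2f}")
for _ in range(2000):
    mean_field = np.mean(np.exp(1j * theta))
    # each oscillator adjusts only to the collective rhythm it helps create
    theta += dt * (omega + K * np.abs(mean_field) * np.sin(np.angle(mean_field) - theta))
print(f"after:  r = {order_parameter(theta):.2f}")
# r rises from near 0 to near 1: a macroscopic pattern produced by, and in turn
# enslaving, the microscopic parts (the circular causality discussed earlier).
```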
The fourth is universality, which, as Batterman states, "is a technical term for something quite ordinary" (Batterman, 2019, p. 26). What is "ordinary" is the fact that in nature there are patterns of activity and organization that recur both in different substrates and in various contexts.


To put it simply: Nature seems to reuse many of the same kinds of structures. Batterman points out the example of rainbows as a universal kind of organization: Despite different conditions (e.g., planetary location, temperature, number and size of drops, etc.), rainbows exhibit the same basic pattern. Originating in statistical mechanics, universality—or "universality classes"—refers to particular kinds of behaviors of systems that are determined by few characteristics, occur across multiple spatial and temporal scales, and are substrate neutral (Batterman, 2000; Thouless, 1989). Universality classes are sets of mathematical models that have the same critical exponents (Wilson, 1983). This means that as the energy of various systems fluctuates, they will approach a fixed point (or critical point) and undergo a phase transition in the same manner, independent of the microscopic details of those systems. In other words, the numerical values of the critical exponents that describe the states of various systems as they approach a phase transition are identical across a large group of phenomena that seem to have diverse physical constitutions (Stanley, 1999). Put in terms of dynamical systems: Universality refers to the idea that there are large classes of systems that exhibit the same features mostly independently of the dynamical details of those systems (Edelman, 2018). The exemplary case of universality in physics is critical phenomena. Systems exhibit critical states when they are poised at the point of a phase transition. Examples of such critical points are exhibited by H2O undergoing phase transitions among liquid, gas, and solid states. Put in the language of synergetics, the system-level state of the molecules is an order parameter that holds relationships with various control parameters, such as pressure and temperature. What makes criticality universal is that the relationship among the spatial and/or temporal parts of a system—its correlation length—is the same across systems comprised of various kinds of fluids, which exhibit phase transitions at the same point (Batterman, 2019). For example, as water heats up, the H2O molecules begin to organize into groups of various sizes, such that there are larger groups with smaller groups, and those have smaller groups, and so on. The relationship, or correlation length, among those groupings of H2O molecules will begin to exhibit a scale-invariant relationship. As previously discussed in relation to NDST, scale-invariant structures are statistically self-similar patterns or structures at various spatial and temporal scales; the more scale-invariant a system is, the more fractal it becomes.
A very appropriate question can be raised at this point: What does universality have to do with the mind sciences? It is becoming evident that as more detailed data are obtained in the cognitive, neural, and psychological sciences—from finer spatial and temporal recordings of small-scale neuroanatomy to larger-scale social coordination—the more it appears that these systems exhibit universal features (Figure 5.7). Many natural systems ranging from the geographical to the biological exhibit fractal branching patterns and ratios, for example, rivers and neurons (Figure 5.7a, b). It is even the case that nonliving and living systems can exhibit the same universal dynamics. Sandpiles and neuronal networks, for example, can exhibit the same correlation length among their sand-based and neuron-based avalanches (Figure 5.7c, d).
One universality class that is gaining traction in the life sciences is the critical phenomenon of self-organized criticality (SOC; Bak, Tang, & Wiesenfeld, 1988; Beggs, 2022a; Favela et al., 2016; Jensen, 1998; Plenz & Niebur, 2014; Pruessner, 2012). SOC refers to the behaviors of a system at different spatial and temporal scales that tend to organize and exhibit phase transitions near critical states. SOC systems have interactions between components across scales that yield coherent global patterns of organization.


Figure 5.7 Universal spatial structures and temporal dynamics in nature. (a) Geographic structures like rivers, such as the Mississippi River Delta, exhibit fractal branching. (b) Cell structures, such as dendritic branching in neurons of the fly visual system, exhibit fractal branching. (c) Avalanches can follow power-law distributions in nonbiological systems such as sandpiles. (d) Avalanches can follow power-law distributions in biological systems such as neuronal networks in the primary motor cortex. Source: (a) Courtesy NASA/JPL-Caltech; (b) Reprinted with permission from Cuntz, Forstner, Haag, and Borst (2008). Copyright 2008 Public Library of Science, CC BY 4.0; (c) Reprinted with permission from Zachariou, Expert, Takayasu, and Christensen (2015). Copyright 2015 Public Library of Science, CC BY 4.0 and Modified from Craven (2011). CC BY 2.0; and (d) Reprinted with permission from Klaus, Yu, and Plenz (2011). Copyright 2011 Public Library of Science; CC BY 4.0.

Because these interactions are in constant flux and occur across scales, the dynamics of SOC systems occupy a wider range of temporal and spatial scales than is typical of comparable systems. As a result, research suggests that SOC is widespread in cognitive systems, from neuronal dynamics (e.g., Beggs & Plenz, 2003) to temporal estimation (e.g., Holden et al., 2009). So, in response to the question posed at the start of this paragraph: If complexity science is to explain and understand the nature of the systems investigated via the mind sciences, then it ought to at least take into consideration the role of universal organization and dynamics in those systems. Accordingly, universality can serve to inform hypotheses, guide analyses, and inform explanations.
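A minimal version of the sandpile model can convey why SOC is described as self-organized: no parameter is tuned, yet avalanches of all sizes appear. The sketch below is my own bare-bones implementation of the Bak, Tang, and Wiesenfeld (1988) rules on a small grid; the lattice size and number of grains are arbitrary.

```python
# A minimal sketch of the Bak-Tang-Wiesenfeld sandpile: grains are dropped one at a
# time, any site holding four or more grains topples onto its neighbors, and the
# resulting avalanche sizes span many scales (cf. Figure 5.7c, d).
import numpy as np

rng = np.random.default_rng(seed=5)
L = 20
grid = np.zeros((L, L), dtype=int)            # number of grains at each site
avalanche_sizes = []

for _ in range(5000):                         # drop grains one at a time at random sites
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1
    size = 0
    while np.any(grid >= 4):                  # relax until every site is stable again
        for x, y in np.argwhere(grid >= 4):
            grid[x, y] -= 4                   # the site topples...
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                # neighbors inside the grid receive one grain; grains pushed past
                # the boundary simply fall off the edge and are lost
                if 0 <= x + dx < L and 0 <= y + dy < L:
                    grid[x + dx, y + dy] += 1
    if size > 0:
        avalanche_sizes.append(size)

sizes = np.array(avalanche_sizes)
print(f"{len(sizes)} avalanches; largest involved {sizes.max()} topplings")
# A log-log histogram of these sizes is roughly linear over a wide range, i.e.,
# approximately power-law distributed, even though no parameter was tuned.
```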


Note that the role of universality in complexity science is not limited to the statistical mechanics sense of the term. There are other senses in which phenomena in the cognitive, neural, and psychological sciences exhibit "universal" features, a number of which have been mentioned here, such as fractals. Catastrophe flags can be understood as another form of universality. For example, hysteresis can be exhibited by a variety of phenomena, from magnetization to decision-making. Coordination dynamics offers another form of universality via its application of the HKB model to explain phenomena from finger wagging to neuronal coupling. Utilizing universality as a central guide to discovery (cf. Chemero, 2013) in complexity science is not a radical idea. It is already applied in the mind sciences. Accordingly, integrating ecological psychology and neuroscience under complexity science can be further facilitated via utilization of the concept of universality.

5.5 Putting complexity science to work

The aim of this section is to synthesize the previous material and sketch a coherent investigative framework that allows complexity science to subsume the targets of investigative interest commonly studied via the cognitive, neural, and psychological sciences. Doing so moves us one big step forward in reconciling ecological psychology and neuroscience. The investigative framework on offer is inspired by approaches presented by Cannon (1967) and Thelen and Smith (2006). Thus, without further ado, one way to practice ecological psychology and neuroscience as complexity sciences is to follow these steps:

1. Identify the phenomenon of interest.
2. Define the order parameter.
3. Define a mathematical model to capture the physical model.
4. Solve the mathematical model to identify system states.
5. Identify and define the control parameters.
6. Measure-the-simulation.

The first step is to identify the phenomenon of interest, or target process or system. Even at this earliest stage of the investigative approach, it is justifiable to be theoretically driven. Research questions and topics can be driven by "guides to discovery," which are sources of new hypotheses for experimental testing (Chemero, 2013). Examples of guides to discovery include affordances (i.e., in ecological psychology; Chemero, 2013), dynamical similitude (i.e., when very different systems exhibit the same behavior and are governed by the same equations; Amazeen, 2018), and universality (e.g., criticality; Beggs, 2008, 2022a, 2022b; Roli, Villani, Filisetti, & Serra, 2018).
The second step is to define the order parameter. An order parameter is the collective variable that describes the macroscopic phenomenon under investigation (Haken, 1988/2006, 2016). Order parameters capture qualitatively different behaviors and organizational structures. Here, "macroscopic" is not objective, but is relative to the research questions. Accordingly, a single neuron can be macroscopic relative to a microscopic synapse, but that same neuron can be microscopic relative to a macroscopic neuronal ensemble. The relative nature of identifying a phenomenon as macroscopic or microscopic is particularly well-suited to the application of universality classes in research, as they can be scale-invariant and demonstrated at smaller scales such as neuronal activity and larger scales such as limb movements. At this stage, the order parameter is defined based on early stages of data collection and processing, such as single-neuron recording and motion-tracking of limbs during a perception-action task.


That data is typically plotted (e.g., as a time-series) in order to generate a physical model that sufficiently matches the actual target system. Such plots can provide a general sense of the nature of the system, which then constrains the definition of the order parameter.
The third step is to define a mathematical model to capture the physical model. Although differential equations are typically the go-to for modeling nonlinear dynamical systems, difference equations can be a suitable first step because the behavior of such systems can be especially challenging to grasp (Richardson, Dale, & Marsh, 2014). Whereas differential equations model the continuous evolution of a system, difference equations model system behavior at discrete time steps. Relatively gross models exhibiting discrete data points can help a researcher home in on, for example, which catastrophe flag or universality class a system seems to be demonstrating. Eventually, however, investigations of complex systems benefit from the ability to model their behavior as continuous. Doing so can provide finer details that reveal whether a phase transition is really precipitated by hysteresis or whether it truly maintains criticality at its attractor states. With that said, because complex systems, with their numerous components nonlinearly interacting at various scales, can quickly become overwhelming for an investigator to model, methodologies from the data sciences are increasingly being applied. One goal of NDST is to identify the governing equations for a system. For the reasons just mentioned regarding the staggering complexity of some target systems, data science methods are increasingly being applied to extract governing equations from data sets. Dale and Bhat (2018) recently imported a novel data science methodology to the investigation of complex cognitive systems: the method of sparse identification of nonlinear dynamics (SINDy; Brunton et al., 2016). Methods such as SINDy can be indispensable at this step in the investigative process. SINDy, for example, provides a way to infer differential equations directly from data, thereby facilitating the ability to define mathematical models that capture the physical model.
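To convey the idea without the full machinery, the following sketch implements the core of the SINDy procedure in plain NumPy rather than with the authors' toolbox: simulate a known system (here, the HKB equation with assumed coefficients), build a library of candidate terms, and use sequentially thresholded least squares to recover a sparse governing equation. The threshold and parameter values are my own choices for illustration.

```python
# A minimal sketch of sparse identification of nonlinear dynamics (after Brunton et
# al., 2016): recover the governing equation of a simulated HKB time series.
import numpy as np

a, b, dt = 1.0, 0.3, 0.001
phi = np.empty(40000)
phi[0] = 2.0
for t in range(1, len(phi)):                       # simulate "data" from the HKB equation
    phi[t] = phi[t - 1] + dt * (-a * np.sin(phi[t - 1]) - 2 * b * np.sin(2 * phi[t - 1]))

dphi = np.gradient(phi, dt)                        # numerical derivative of the time series
library = np.column_stack([                        # candidate right-hand-side terms
    np.ones_like(phi), phi, np.sin(phi), np.cos(phi), np.sin(2 * phi), np.cos(2 * phi)
])
names = ["1", "phi", "sin(phi)", "cos(phi)", "sin(2*phi)", "cos(2*phi)"]

coefs = np.linalg.lstsq(library, dphi, rcond=None)[0]
for _ in range(10):                                # sequentially thresholded least squares
    coefs[np.abs(coefs) < 0.05] = 0.0              # prune terms with tiny coefficients
    active = coefs != 0.0
    coefs[active] = np.linalg.lstsq(library[:, active], dphi, rcond=None)[0]

print({n: round(float(c), 3) for n, c in zip(names, coefs) if c != 0.0})
# Expected output is approximately {'sin(phi)': -1.0, 'sin(2*phi)': -0.6}, i.e., the
# governing equation (with -2b = -0.6) is recovered directly from the data.
```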
At this point, an important issue must be addressed. While the SINDy method is impressive, its developers and proponents (e.g., Brunton et al., 2016; Dale & Bhat, 2018) have identified a major limitation of it and other similar approaches: high-dimensional data. As noted in previous sections, the cognitive, neural, and psychological sciences are quickly becoming big data enterprises. The influx of enormous amounts of data is resulting in "grand challenges" (Favela, 2014), for example, logistical challenges concerning efficiently sharing data and epistemological challenges concerning how to understand those data. This situation is the result of a technological double-edged sword: As recording technologies bring us ever closer to greater detail and quality, they push us further away from being able to get an epistemic grasp on target phenomena. To address such challenges, new data analytic tools are necessary, especially those relating to dimensionality reduction. In statistics and other forms of data analysis (e.g., machine learning), dimensionality (or dimensions) refers to the informative features of a dataset. For example, medical data such as blood pressure, temperature, white blood cell count, etc., are all features—or inputs—of a dataset obtained for the purpose of diagnosing an illness—or output. High-dimensional data refers to datasets with a "high" number (a relative amount) of features, such that determining their relationships to each other and to the phenomenon of interest can be computationally exceedingly demanding. For example, datasets comprised of gene expressions are paradigmatic cases of high-dimensional data, as there are seemingly innumerable relationships among genes, different temporal scales, etc.


In a more technical sense, data are high dimensional when the number of features, or variables observed (p), is larger than the number of observations, or data points (n) (Bühlmann, Kalisch, & Meier, 2014; Bühlmann & Van De Geer, 2011). This contrasts with low-dimensional data, that is, when the number of observations (n) far outnumbers the features (p). Dimensionality reduction, in the simplest terms, is a data processing strategy that attempts to cut down on the number of a dataset's features without losing valuable information (Hinton & Salakhutdinov, 2006; Nguyen & Holmes, 2019; Sorzano, Vargas, & Pascual-Montano, 2014). This is typically done in two general ways: filtering variables from the original dataset to keep only what is most relevant, or exploiting redundancy in the input data to find fewer new variables that contain the same information (Cohen, 2017; Sorzano et al., 2014). As with any data processing or analysis technique, one must be aware of the limitations of dimensionality reduction (Carlson, Goddard, Kaplan, Klein, & Ritchie, 2018; Jonas & Kording, 2017). Yet there are many virtues to employing dimensionality reduction on high-dimensional datasets, including its ability to filter out meaningless noise (Cohen, 2017), help control for incorrect intuitions about relationships among variables (Holmes & Huber, 2018), increase a dataset's statistical power (Nguyen & Holmes, 2019), and reveal deeper organizational relationships and structures (Batista, 2014).
Dimensionality reduction is employed broadly in the data sciences, especially in computer sciences such as machine learning. But for the aims of the current work, the most important way dimensionality reduction intersects with the mind sciences is when it is integrated with the tools of dynamical systems theory (DST), which is at the heart of much work in complexity science. In particular, dimensionality reduction intersects with DST for the purpose of reducing the number of variables needed to account for even the most complex data from behavioral and cognitive tasks, as well as the underlying neural processes. With simple—usually human-made—systems, it can be relatively straightforward to identify the most relevant variables to account for the phenomenon of interest. As discussed earlier, the full range of pendulum dynamics can be understood via three variables: angular arm displacement (q), gravitational acceleration (g), and pendulum length (l). When it comes to neural systems and their related behaviors, however, variable identification is typically nowhere near as straightforward (Churchland & Abbott, 2016; Churchland et al., 2012; Cunningham & Byron, 2014; Frégnac, 2017; Williamson, Doiron, Smith, & Byron, 2019). Since it is crucial to identify the relevant dimensions (i.e., features, variables) when developing models and equations of dynamical systems, various dimensionality reduction analyses can be employed. These methods include, but are not limited to, linear methods such as correspondence analysis and nonlinear methods such as diffusion maps (Nguyen & Holmes, 2019). A popular method of dimensionality reduction in the mind sciences, and one that will come up in a later section, is principal component analysis. The following is a brief and conceptually focused introduction to principal component analysis (PCA; see Jolliffe & Cadima [2016] for a more technical introduction). While PCA has been around since the early 1900s, it was not until much more recently that the computational resources were available to leverage its techniques on high-dimensional datasets.
The basic idea underlying PCA is to reduce a dataset's dimensionality while preserving variability. Here, preserving variability means discovering new variables—principal components (PCs)—that are linear functions of those in the original input data. Moreover, those new variables should maximize variance and be uncorrelated with each other (Jolliffe & Cadima, 2016). An illustrative and simple example of PCA is found in the work of Werner and colleagues on fish body measurements (Werner, Rink, Riedel-Kruse, & Friedrich, 2014).


Figure 5.8 Principal component analysis (PCA) example. (a) Fish body measurements provide the input dataset, with height and length obtained from N individuals. (b) Height and length are assumed to be strongly correlated. PCA defines a change of coordinate system from the original (height, length)-axes (here, the x- and y-axes) to new axes (B1 and B2), which depict the principal axes of the dimensions that covary. The process of defining a new coordinate system (V) corresponds to a reduction of the dimensionality of the data space, which also retains most of the data's variability. Source: Modified and reprinted with permission from Werner et al. (2014). CC BY 4.0.

In this example, the input dataset contains height and length measurements of various fish (Figure 5.8a). As those two dimensions are assumed to be strongly correlated, the PCA defines a change of coordinate systems from the original two-dimensional (height, length) data space to a single-dimension (first shape score) data space (Figure 5.8b). This reduction from two dimensions to one retains the maximum amount of the original dataset's variability. While the example of fish measurements is quite simple (Figure 5.8a), when the various dimensionality reduction methods intersect with dynamical systems approaches, it is usually for the purpose of helping investigators get an epistemological grip on far more unwieldy data. This is achieved by contributing to the identification of the most relevant variables among multivariate datasets, or the "first shape score" in the case of fish body dimensions. Although dimensionality reduction in its contemporary form (e.g., via neural networks and other kinds of machine learning) is relatively new (~early 2000s), issues concerning how to cope with high-dimensional data have been explicitly discussed in computer science (e.g., the "curse of dimensionality"; Bellman, 1961) and statistics (Finney, 1977) since the mid-1900s. It was around that time (give or take a decade or two) that both dynamical systems approaches and forms of dimensionality reduction were contributing to some of the most significant research in neuroscience, namely, the Hodgkin-Huxley and FitzHugh-Nagumo models (Favela, 2021). Returning to the third step of the current investigative framework, researchers will likely find themselves facing challenges with defining a mathematical model to capture the physical model. This is in large part due to the fact that cognitive, neural, and psychological systems produce high-dimensional data. Accordingly, part of the step of modeling the phenomenon involves leveraging tools of dimensionality reduction in order to identify the most relevant variables. By the end of this step, the model ought to describe the behavior and dynamic trajectory of the order parameter identified in step two.
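A minimal numerical version of the fish example can make the procedure explicit. The numbers below are hypothetical stand-ins for the Werner et al. (2014) measurements; the sketch simply builds two correlated variables and extracts the first principal component, the "first shape score."

```python
# A minimal sketch of PCA on synthetic "fish" data: two correlated measurements,
# height and length, are reduced to a single shape score along the first principal axis.
import numpy as np

rng = np.random.default_rng(seed=6)
length = rng.normal(10.0, 2.0, 200)                    # body length of 200 fish
height = 0.4 * length + rng.normal(0.0, 0.3, 200)      # height tracks length, plus noise

data = np.column_stack([height, length])
centered = data - data.mean(axis=0)

# eigenvectors of the covariance matrix are the principal axes (B1 and B2 in Figure 5.8)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
order = np.argsort(eigvals)[::-1]                      # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

first_shape_score = centered @ eigvecs[:, 0]           # project onto the first component
explained = eigvals[0] / eigvals.sum()
print(f"first principal component explains {explained:.0%} of the variance")
# Keeping only this one score trades two correlated measurements for a single new
# variable while retaining most of the dataset's variability.
```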


The fourth step is to solve the mathematical model to identify system states. Once the model has been developed in the previous step, it is essential that it be solved so as to verify that it accurately captures the target system. Currently, methods like SINDy have not been applied to highly complex cognitive systems, such as those involving social dynamics. However, as Dale and Bhat (2018) demonstrated, there is proof of concept for the method, as it can be successfully applied to well-studied dynamical systems, such as a bistable attractor model of human choice behavior, the logistic map, and the Lorenz system. With that said, it is at this step in the investigative process that the qualitative features of the framework become essential. As discussed with the model of pendulum dynamics, even in simple cases such as those it can be challenging to solve the equations analytically. For that reason, it becomes necessary to plot the equation in order to assess the model's ability to capture the transition points of the order parameter. Phase space plots (Figure 5.2b), time-series (Figure 5.4), and recurrence plots (Figure 5.6) can be especially useful in this regard. Moreover, appealing to dimensionality reduction can facilitate this step by assisting in identifying such attractor states and transition points.
The fifth step is to identify and define the control parameters. As a reminder, control parameters are variables that guide the system's dynamics. Due to the slaving relationship with the order parameter and circular causality, control parameters do not cause the order parameter's behavior. For that reason, nonlinear methods are needed to assess the relationships among variables. NDST has accurately identified control parameters for a range of phenomena, such as decision-making (van Rooij et al., 2013), the Hénon map (Levi, Schanz, Kornienko, & Kornienko, 1999), perceptual judgments (Frank, Profeta, & Harrison, 2015), and Rayleigh-Bénard convection (Newell, Passot, & Lega, 1993). Correctly identifying control parameters allows researchers to practice other scientific explanatory virtues, such as control, intervention/manipulation, prediction, and augmentation.
The sixth step is to measure-the-simulation. As should be clear by now, modeling is an indispensable part of understanding cognitive, neural, and psychological systems as complex systems to be investigated via complexity science. This is at least in part due to the large amount of data typically involved in complex systems research. Be that as it may, modeling virtues such as prediction and simplicity must be balanced against the actual behavior and organization of the target of investigation. Spivey (2018) points out the danger of inferring, from having a model that seems to accurately simulate a cognitive phenomenon, that the model reveals the actual ontological structure of that phenomenon. Spivey presents measure-the-simulation as a needed step to address this challenge. At its most basic, to measure-the-simulation is to examine the simulations proposed by the models for ways in which the simulation itself might distort or transform the empirical data it is based on. Revealing such distortions or transformations is crucial if researchers doing complexity science want to be confident that their models are true to the systems being investigated.
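Steps four and five can be tied together with the HKB model used earlier in the chapter. The sketch below is my own illustration, with arbitrary integration settings: it "solves" the model numerically by finding which phases nearby initial conditions settle into, then sweeps the control parameter ratio b/a to locate the transition at which the anti-phase state disappears.

```python
# A minimal sketch of identifying system states (step four) and sweeping a control
# parameter (step five) for the HKB model, d(phi)/dt = -a*sin(phi) - 2b*sin(2*phi).
import numpy as np

def stable_states(a, b, n_grid=75, dt=0.01, steps=4000):
    """Return the rounded phases (radians) that nearby initial conditions settle into."""
    endpoints = set()
    # offset the starting grid slightly so no run begins exactly on a fixed point
    for phi0 in np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False) + 0.01:
        phi = phi0
        for _ in range(steps):                     # crude forward-Euler integration
            phi += dt * (-a * np.sin(phi) - 2 * b * np.sin(2 * phi))
        # wrap to [0, pi] so in-phase reads as 0.0 and anti-phase as 3.1
        endpoints.add(round(abs(float(np.angle(np.exp(1j * phi)))), 1))
    return sorted(endpoints)

a = 1.0
for ratio in (1.0, 0.5, 0.3, 0.2, 0.1):            # sweep the control parameter b/a
    print(f"b/a = {ratio:.1f}: stable phases ~ {stable_states(a, ratio * a)}")
# Above b/a = 0.25, both phi ~ 0.0 (in-phase) and phi ~ 3.1 (anti-phase) appear as
# attractors; below it, only the in-phase state survives. Locating this transition is
# the kind of qualitative check that steps four and five call for.
```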
5.6 Conclusion

Complexity science has been presented as an interdisciplinary framework that is appropriate for the investigation of phenomena studied via ecological psychology and neuroscience. Within this framework, cognitive, neural, and psychological phenomena are understood as complex systems that exhibit the following four key features: emergence, nonlinearity, self-organization, and universality. In order to fruitfully conduct research on systems with those properties, complexity science employs a range of concepts, methods, and theories that are integrated from systems theory, NDST, and synergetics.


In the next chapter, I assemble the pieces that have been laid out in previous chapters in order to provide an approach that reconciles ecological psychology and neuroscience under a unified complexity science framework. That approach is the NeuroEcological Nexus Theory (NExT).

References

Adams, F. (1999). Cybernetics. In R. Audi (Ed.), The Cambridge dictionary of philosophy (2nd ed., pp. 199–200). New York, NY: Cambridge University Press. Agazzi, E., & Montecucco, L. (Eds.). (2002). Complexity and emergence. River Edge, NJ: World Scientific. Aks, D. J., Zelinsky, G. J., & Sprott, J. C. (2002). Memory across eye-movements: 1/f dynamic in visual search. Nonlinear Dynamics, Psychology, and Life Sciences, 6, 1–25. Allen, M., & Friston, K. J. (2018). From cognitivism to autopoiesis: Towards a computational framework for the embodied mind. Synthese, 195(6), 2459–2482. https://doi.org/10.1007/s11229-0161288-5 Allen, P. (2001). What is complexity science? Knowledge of the limits to knowledge. Emergence: A Journal of Complexity Issues in Organizations and Management, 3(1), 24–42. https://doi.org/10.1207/S15327000EM0301_03 Amazeen, P. G. (2018). From physics to social interactions: Scientific unification via dynamics. Cognitive Systems Research, 52, 640–657. https://doi.org/10.1016/j.cogsys.2018.07.033 Amon, M. J., & Holden, J. G. (2021). The mismatch of intrinsic fluctuations and the static assumptions of linear statistics. Review of Philosophy and Psychology, 12, 149–173. https://doi.org/10.1007/s13164-018-0428-x Amon, M. J., Vrzakova, H., & D'Mello, S. K. (2019). Beyond dyadic synchrony: Multimodal behavioral irregularity predicts quality of triadic problem solving. Cognitive Science, 43(10), e12787, 1–22. https://doi.org/10.1111/cogs.12787 Anderson, M. L. (2014). After phrenology: Neural reuse and the interactive brain. Cambridge, MA: The MIT Press. Bak, P., Tang, C., & Wiesenfeld, K. (1988). Self-organized criticality. Physical Review A, 38, 364–374. https://doi.org/10.1103/PhysRevA.38.364 Ball, R., Kolokoltsov, V., & MacKay, R. S. (Eds.). (2013). Complexity science: The Warwick master's course (Vol. 408). Cambridge, MA: Cambridge University Press. Baofu, P. (2007). The future of complexity: Conceiving a better way to understand order and chaos. Hackensack, NJ: World Scientific. Bar-Yam, Y. (2016). From big data to important information. Complexity, 21(S2), 73–98. https://doi.org/10.1002/cplx.21785 Batista, A. (2014). Multineuronal views of information processing. In M. S. Gazzaniga & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 477–489). Cambridge, MA: The MIT Press. Batterman, R. W. (2000). Multiple realizability and universality. British Journal for the Philosophy of Science, 51, 115–145. https://doi.org/10.1093/bjps/51.1.115 Batterman, R. W. (2019). Universality and RG explanations. Perspectives on Science, 27(1), 26–45. https://doi.org/10.1162/posc_a_00298 Bedau, M. A., & Humphreys, P. (2008). Emergence: Contemporary readings in philosophy and science. Cambridge, MA: The MIT Press. Beer, R. D. (1995). A dynamical systems perspective on agent-environment interactions. Artificial Intelligence, 72, 173–215. https://doi.org/10.1016/0004-3702(94)00005-L Beggs, J. M. (2008). The criticality hypothesis: How local cortical networks might optimize information processing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1864), 329–343.
https://doi.org/10.1098/rsta.2007.2092


Beggs, J. M. (2022a). The cortex and the critical point: Understanding the power of emergence. Cambridge, MA: The MIT Press. Beggs, J. M. (2022b). Addressing skepticism of the critical brain hypothesis. Frontiers in Computational Neuroscience, 16(703865), 1–23. https://doi.org/10.3389/fncom.2022.703865 Beggs, J. M., & Plenz, D. (2003). Neuronal avalanches in neocortical circuits. The Journal of Neuroscience, 23, 11167–11177. https://doi.org/10.1523/JNEUROSCI.23-35-11167.2003 Bellman, R. E. (1961). Adaptive control processes: A guided tour. Princeton, NJ: Princeton University Press. Bishop, R., & Silberstein, M. (2019). Complexity and feedback. In S. Gibb, R. F. Hendry, & T. Lancaster (Eds.), The Routledge handbook of emergence (pp. 145–156). New York, NY: Routledge. Boccara, N. (2010). Modeling complex systems (2nd ed.). New York, NY: Springer Science + Business Media, LLC. Bongard, J., & Lipson, H. (2007). Automated reverse engineering of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 104(24), 9943–9948. https://doi.org/10.1073/ pnas.0609476104 Brown, C., & Liebovitch, L. (2010). Fractal analysis. Los Angeles, CA: SAGE. Brunton, S. L., Proctor, J. L., & Kutz, J. N. (2016). Discovering governing equations from data by sparse identifcation of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15), 3932–3937. https://doi.org/10.1073/pnas.1517384113 Bühlmann, P., Kalisch, M., & Meier, L. (2014). High-dimensional statistics with a view toward applications in biology. Annual Review of Statistics and Its Application, 1(1), 255–278. https://doi. org/10.1146/annurev-statistics-022513-115545 Bühlmann, P., & Van De Geer, S. (2011). Statistics for high-dimensional data: Methods, theory and applications. New York, NY: Springer. Cannon, Jr., R. H. (1967). Dynamics of physical systems. Mineola, NY: Dover Publications, Inc. Carlson, T., Goddard, E., Kaplan, D. M., Klein, C., & Ritchie, J. B. (2018). Ghosts in machine learning for cognitive neuroscience: Moving from data to theory. NeuroImage, 180, 88–100. https://doi. org/10.1016/j.neuroimage.2017.08.019 Castellani, B. (2018). Map of the complexity sciences. Art & Science Factory. Retrieved November 16, 2019 from www.art-sciencefactory.com/complexity-map_feb09.html Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: The MIT Press. Chemero, A. (2013). Radical embodied cognitive science. Review of General Psychology, 17(2), 145–150. https://doi.org/10.1037/a0032923 Churchland, A. K., & Abbott, L. F. (2016). Conceptual and technical advances defne a key moment for theoretical neuroscience. Nature Neuroscience, 19(3), 348–349. https://doi.org/10.1038/nn.4255 Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster, J. D., Nuyujukian, P., Ryu, S. I., & Shenoy, K. V. (2012). Neural population dynamics during reaching. Nature, 487(7405), 51–56. https://doi.org/10.1038/nature11129 Coco, M. I., & Dale, R. (2014). Cross-recurrence quantifcation analysis of categorical and continuous time series: An R package. Frontiers in Psychology: Quantitative Psychology and Measurement, 5(510). https://doi.org/10.3389/fpsyg.2014.00510 Cohen, M. X. (2017). MATLAB for brain and cognitive scientists. Cambridge, MA: The MIT Press. Craven, P. (2011). Washed recycled sand. Wikipedia. Retrieved September 18, 2022 from https://commons.wikimedia.org/wiki/File:Washed_recycled_sand_%285529799518%29.jpg Crutchfeld, J. P. (1994). Observing complexity and the complexity of observation. In H. 
Atmanspacher & G. J. Dalenoort (Eds.), Inside versus outside: Endo- and exo-concepts of observation and knowledge in physics, philosophy and cognitive science (pp. 235–272). Berlin, Heidelberg: Springer-Verlag. Cunningham, J. P., & Byron, M. Y. (2014). Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11), 1500–1509. https://doi.org/10.1038/nn.3776 Cuntz, H., Forstner, F., Haag, J., & Borst, A. (2008). The morphological identity of insect dendrites. PLoS Computational Biology, 4(12), e1000251. https://doi.org/10.1371/journal.pcbi.1000251


Dale, R., & Bhat, H. (2018). Equations of mind: Data science for inferring nonlinear dynamics of socio-cognitive systems. Cognitive Systems Research, 52, 275–290. https://doi.org/10.1016/j. cogsys.2018.06.020 Dale, R., Fusaroli, R., Duran, N. D., & Richardson, D. C. (2014). The self-organization of human interaction. In B. H. Ross (Ed.), The psychology of learning and motivation (Vol. 59, pp. 43–95). Waltham, MA: Academic Press. Daniels, B. C., & Nemenman, I. (2015). Automated adaptive inference of phenomenological dynamical models. Nature Communications, 6, 8133. https://doi.org/10.1038/ncomms9133 Davis, T. J., Brooks, T. R., & Dixon, J. A. (2016). Multi-scale interactions in interpersonal coordination. Journal of Sport and Health Science, 5(1), 25–34. https://doi.org/10.1016/j.jshs.2016.01.015 Delignières, D., Ramdani, S., Lemoine, L., Torre, K., Fortes, M., & Ninot, G. (2006). Fractal analyses for “short” time series: A re-assessment of classical methods. Journal of Mathematical Psychology, 50(6), 525–544. https://doi.org/10.1016/j.jmp.2006.07.004 Ditzinger, T., & Haken, H. (1995). A synergetic model of multistability in perception. In P. Kruse & M. Stadler (Eds.), Ambiguity in mind and nature: Multistable cognitive phenomena (pp. 255–274). Berlin, Heidelberg: Springer-Verlag. Edelman, M. (2018). Universality in systems with power-law memory and fractional dynamics. In M. Edelman, E. E. N. Macau, & M. A. F. Sanjuan (Eds.), Chaotic, fractional, and complex dynamics: New insights and perspectives (pp. 147–171). Cham, Switzerland: Springer. Enns, R. H. (2010). It’s a nonlinear world. New York, NY: Springer Science+Business Media. Érdi, P. (2008). Complexity explained. Berlin: Springer-Verlag. Falconer, K. (2013). Fractals: A very short introduction. Oxford: Oxford University Press. Favela, L. H. (2014). Radical embodied cognitive neuroscience: Addressing “grand challenges” of the mind sciences. Frontiers in Human Neuroscience, 8(796), 1–10. https://doi.org/10.3389/ fnhum.2014.00796 Favela, L. H. (2015). Understanding cognition via complexity science [Doctoral dissertation, University of Cincinnati]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohio link.edu/etdc/view?acc_num=ucin1427981743 Favela, L. H. (2019a). Integrated information theory as a complexity science approach to consciousness. Journal of Consciousness Studies, 26(1–2), 21–47. Favela, L. H. (2019b). Emergence by way of dynamic interactions. Southwest Philosophy Review, 35(1), 47–57. https://doi.org/10.5840/swphilreview20193515 Favela, L. H. (2020a). Cognitive science as complexity science. Wiley Interdisciplinary Reviews: Cognitive Science, 11(4), e1525, 1–24. https://doi.org/10.1002/wcs.1525 Favela, L. H. (2020b). Dynamical systems theory in cognitive science and neuroscience. Philosophy Compass, 15(8), e12695, 1–16. https://doi.org/10.1111/phc3.12695 Favela, L. H. (2021). The dynamical renaissance in neuroscience. Synthese, 199(1–2), 2103–2127. https://doi.org/10.1007/s11229-020-02874-y Favela, L. H., Amon, M. J., Lobo, L., & Chemero, A. (2021). Empirical evidence for extended cognitive systems. Cognitive Science: A Multidisciplinary Journal, 45(11), e13060, 1–27. https://doi. org/10.1111/cogs.13060 Favela, L. H., & Chemero, A. (2016). The animal-environment system. In Y. Coelllo & M. H. Fischer (Eds.), Foundations of embodied cognition: Volume 1: Perceptual and emotional embodiment (pp. 59–74). New York, NY: Routledge. https://doi.org/10.4324/9781315751979 Favela, L. H., Coey, C. A., Grif, E. 
R., & Richardson, M. J. (2016). Fractal analysis reveals subclasses of neurons and suggests an explanation of their spontaneous activity. Neuroscience Letters, 626, 54–58. https://doi.org/10.1016/j.neulet.2016.05.017 Feyerabend, P. (1975). ‘Science’: The myth and its role in society. Inquiry: An Interdisciplinary Journal of Philosophy, 18(2), 167–181. https://doi.org/10.1080/00201747508601758 Fine, J. M., Likens, A. D., Amazeen, E. L., & Amazeen, P. G. (2015). Emergent complexity matching in interpersonal coordination: Local dynamics and global variability. Journal of Experimental


Psychology: Human Perception and Performance, 41(3), 723–737. https://doi.org/10.1037/ xhp0000046 Finney, D. J. (1977). Dimensions of statistics. Journal of the Royal Statistical Society: Series C (Applied Statistics), 26(3), 285–289. https://doi.org/10.2307/2346969 Flood, R. L., & Carson, E. R. (1993). Dealing with complexity: An introduction to the theory and application of systems science (2nd ed.). New York, NY: Springer Science & Business Media. Floreano, D., & Mattiussi, C. (2008). Bio-inspired artifcial intelligence: Theories, methods, and technologies. Cambridge, MA: The MIT Press. Francescotti, R. M. (2007). Emergence. Erkenntnis, 67, 47–63. https://doi.org/10.1007/s10670-0079047-0 Frank, T. D., Profeta, V. L. S., & Harrison, H. S. (2015). Interplay between order-parameter and system parameter dynamics: Considerations on perceptual-cognitive-behavioral mode-mode transitions exhibiting positive and negative hysteresis and on response times. Journal of Biological Physics, 41(3), 257–292. https://doi.org/10.1007/s10867-015-9378-z Frégnac, Y. (2017). Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain? Science, 358(6362), 470–477. https://doi.org/10.1126/science.aan8866 Fuchs, A. (2013). Nonlinear dynamics in complex systems: Theory and applications for the life-, neuro-, and natural sciences. New York, NY: Springer-Verlag. Gilmore, R. (1981). Catastrophe theory for scientists and engineers. New York, NY: Dover. Goldstein, J. (1999). Emergence as a construct: History and issues. Emergence, 1(1), 49–72. https:// doi.org/10.1207/s15327000em0101_4 Guastello, S. J., Koopmans, M., & Pincus, D. (Eds.). (2011). Chaos and complexity in psychology: The theory of nonlinear dynamical systems. Cambridge, MA: Cambridge University Press. Haken, H. (1988/2006). Information and self-organization: A macroscopic approach to complex systems (3rd ed.). New York, NY: Springer. Haken, H. (2007). Synergetics. Scholarpedia, 2(1), 1400. https://doi.org/10.4249/scholarpedia.1400 Haken, H. (2016). The brain as a synergetic and physical system. In A. Pelster & G. Wunner (Eds.), Self-organization in complex systems: The past, present, and future of synergetics (pp. 147–163). Cham, Switzerland: Springer. Haken, H., Kelso, J. S., & Bunz, H. (1985). A theoretical model of phase transitions in human hand movements. Biological Cybernetics, 51, 347–356. https://doi.org/10.1007/BF00336922 Hammond, D. (2003). The science of synthesis: Exploring the social implications of general systems theory. Boulder, CO: University Press of Colorado. Hausdorf, J. M., Peng, C.- K., Ladin, Z., Wei, J. Y., & Goldberger, A. L. (1995). Is walking a random walk? Evidence for long-range correlations in stride interval of human gait. Journal of Applied Physiology, 78, 349–358. https://doi.org/10.1152/jappl.1995.78.1.349 Heylighen, F., & Joslyn, C. (1999). Systems theory. In R. Audi (Ed.), The Cambridge dictionary of philosophy (2nd ed., pp. 898–899). New York, NY: Cambridge University Press. Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507. https://doi.org/10.1126/science.1127647 Holden, J. G., Van Orden, G. C., & Turvey, M. T. (2009). Dispersion of response times reveals cognitive dynamics. Psychological Review, 116(2), 318–342. https://doi.org/10.1037/a0014849 Holmes, S., & Huber, W. (2018). Modern statistics for modern biology. New York, NY: Cambridge University Press. Hooker, C. (Ed.). (2011a). 
Philosophy of complex systems. Waltham, MA: Elsevier. Hooker, C. (2011b). Introduction to philosophy of complex systems: A. In C. Hooker (Ed.), Philosophy of complex systems (pp. 3–90). Waltham, MA: Elsevier. Hull, D. L. (1988). Science as a process: An evolutionary account of the social and conceptual development of science. Chicago, IL: University of Chicago Press. Humphreys, P. (2016). Emergence: A philosophical account. New York, NY: Oxford University Press. Ihlen, E. A. F. (2012). Introduction to multifractal detrended fuctuation analysis in Matlab. Frontiers in Physiology, 3(141). https://doi.org/10.3389/fphys.2012.00141

Ihlen, E. A. F., & Vereijken, B. (2010). Interaction-dominant dynamics in human cognition: Beyond 1/ƒα fuctuation. Journal of Experimental Psychology: General, 139(3), 436–463. https://doi. org/10.1037/a0019098 Isnard, C. A., & Zeeman, E. C. (1976/2013). Some models from catastrophe theory in the social sciences. In L. Collins (Ed.), The use of models in the social sciences (pp. 44–100). Chicago, IL: Routledge. Jensen, H. J. (1998). Self-organized criticality: Emergent complex behavior in physical and biological systems. Cambridge, MA: Cambridge University Press. Jollife, I. T., & Cadima, J. (2016). Principal component analysis: A review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2065), 20150202. https://doi.org/10.1098/rsta.2015.0202 Jonas, E., & Kording, K. P. (2017). Could a neuroscientist understand a microprocessor? PLoS Computational Biology, 13(1), e1005268. https://doi.org/10.1371/journal.pcbi.1005268 Kello, C. T., Beltz, B. C., Holden, J. G., & Van Orden, G. C. (2007). The emergent coordination of cognitive function. Journal of Experimental Psychology: General, 136, 551–568. https://doi. org/10.1037/0096-3445.136.4.551 Kelso, J. A. S. (1997). Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: The MIT Press. Kelso, J. A. S. (2012). Multistability and metastability: Understanding dynamic coordination in the brain. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1591), 906–918. https://doi.org/10.1098/rstb.2011.0351 Kelso, J. A. S., Dumas, G., & Tognoli, E. (2013). Outline of a general theory of behavior and brain coordination. Neural Networks, 37, 120–131. https://doi.org/10.1016/j.neunet.2012.09.003 Kelso, J. A. S., Tuller, B., Vatikiotis-Bateson, E., & Fowler, C. A. (1984). Functionally specifc articulatory cooperation following jaw perturbations during speech: Evidence for coordinative structures. Journal of Experimental Psychology: Human Perception and Performance, 10(6), 812–832. https:// doi.org/10.1037/0096-1523.10.6.812 Kelty-Stephen, D. G., & Wallot, S. (2017). Multifractality versus (mono-) fractality as evidence of nonlinear interactions across timescales: Disentangling the belief in nonlinearity from the diagnosis of nonlinearity in empirical data. Ecological Psychology, 29(4), 259–299. https://doi.org/10.1080/ 10407413.2017.1368355 Kim, J. (2006). Emergence: Core ideas and issues. Synthese, 151, 547–559. https://doi.org/10.1007/ s11229-006-9025-0 Klaus, A., Yu, S., & Plenz, D. (2011). Statistical analyses support power law distributions found in neuronal avalanches. PLoS One, 6(5), e19779. https://doi.org/10.1371/journal.pone.0019779 Klein, J. L. (1997). Statistical visions in time: A history of time series analysis, 1662–1938. New York, NY: Cambridge University Press. Klein, J. T. (2010). A taxonomy of interdisciplinarity. In R. Frodeman, J. T. Klein, & C. Mitcham (Eds.), The Oxford handbook of interdisciplinarity (pp. 15–30). New York, NY: Oxford University Press. Krishnavedala. (2014). Pendulum phase portrait. Wikipedia. Retrieved August 16, 2019 from https:// commons.wikimedia.org/wiki/File:Pendulum_phase_portrait.svg Kuhn, T. S. (1962/1996). The structure of scientifc revolutions (2nd ed.). Chicago, IL: University of Chicago Press. Ladyman, J., Lambert, J., & Wiesner, K. (2013). What is a complex system? European Journal for Philosophy of Science, 3(1), 33–67. 
https://doi.org/10.1007/s13194-012-0056-8 Lee, J.-M., Hu, J., Gao, J., Crosson, B., Peck, K. K., Wierenga, C. E., .  .  . White, K. D. (2008). Discriminating brain activity from task-related artifacts in functional MRI: Fractal scaling analysis simulation and application. NeuroImage, 40, 197–212. https://doi.org/10.1016/j.neuroimage. 2007.11.016 Leidus, I. (2021). Romanesco broccoli (Brassica oleracea). Wikipedia. Retrieved February 3, 2022 from https://en.wikipedia.org/wiki/File:Romanesco_broccoli_(Brassica_oleracea).jpg Levi, P., Schanz, M., Kornienko, S., & Kornienko, O. (1999). Application of order parameter equations for the analysis and the control of nonlinear time discrete dynamical systems.

International Journal of Bifurcation and Chaos, 9(8), 1618–1634. https://doi.org/10.1142/ S0218127499001127 Mainzer, K. (2007). Thinking in complexity: The computational dynamics of matter, mind, and mankind (5th ed.). New York, NY: Springer-Verlag. Mandelbrot, B. B. (1977/1983). The fractal geometry of nature. New York, NY: W.H. Freeman and Company. Marmelat, V., & Delignières, D. (2012). Strong anticipation: Complexity matching in interpersonal coordination. Experimental Brain Research, 222(1–2), 137–148. https://doi.org/10.1007/ s00221-012-3202-9 Marshall, N., Timme, N. M., Bennett, N., Ripp, M., Lautzenhiser, E., & Beggs, J. M. (2016). Analysis of power laws, shape collapses, and neural complexity: New techniques and MATLAB support via the NC toolbox. Frontiers in Physiology: Fractal and Network Physiology, 7(250). https://doi. org/10.3389/fphys.2016.00250 May, R. M. (1976). Simple mathematical models with very complicated dynamics. Nature, 261, 459–467. https://doi.org/10.1038/261459a0 Mitchell, M. (2009). Complexity: A guided tour. New York, NY: Oxford University Press. Mobus, G. E., & Kalton, M. C. (2015). Principles of systems science. New York, NY: Springer. Müller, S. C., Plath, P. J., Radons, G., & Fuchs, A. (Eds.). (2018). Complexity and synergetics. Cham, Switzerland: Springer. National Science Foundation. (2011). Empowering the nation through discovery and innovation: NSF strategic plan for fscal years (FY) 2011–2016. Retrieved June 16, 2014 from www.nsf.gov/ news/strategicplan/nsfstrategicplan_2011_2016.pdf Necaise, A., Williams, A., Vrzakova, H., & Amon, M. J. (2021). Regularity versus novelty of users’ multimodal comment patterns and dynamics as markers of social media radicalization. Proceedings of the 32nd ACM Conference on Hypertext and Social Media (pp.  237–243). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3465336.3475095 Newell, A. C., Passot, T., & Lega, J. (1993). Order parameter equations for patterns. Annual Review of Fluid Mechanics, 25(1), 399–453. https://doi.org/10.1146/annurev.f.25.010193.002151 Nguyen, L. H., & Holmes, S. (2019). Ten quick tips for efective dimensionality reduction. PLoS Computational Biology, 15(6), e1006907. https://doi.org/10.1371/journal.pcbi.1006907 Nicolis, G., & Nicolis, C. (2007). Foundations of complex systems: Nonlinear dynamics, statistical physics, information and prediction. Hackensack, NJ: World Scientifc. Peng, C.- K., Havlin, S., Hausdorf, J. M., Mietus, J. E., Stanley, H. E., & Goldberger, A. L. (1995). Fractal mechanisms and heart rate dynamics: Long-range correlations and their breakdown with disease. Journal of Electrocardiology, 28, 59–65. https://doi.org/10.1016/S0022-0736(95)80017-4 Phelan, S. E. (2001). What is complexity science, really? Emergence: A Journal of Complexity Issues in Organizations and Management, 3(1), 120–136. https://doi.org/10.1207/S15327000 EM0301_08 Plenz, D., & Niebur, E. (Eds.). (2014). Criticality in neural systems. Weinheim, Germany: Wiley-VCH. Poston, T., & Stewart, I. (1978). Catastrophe theory and its applications. London: Pitman Publishing Limited. Pruessner, G. (2012). Self-organised criticality: Theory, models and characterisation. New York, NY: Cambridge University Press. Richardson, M. J., Dale, R., & Marsh, K. L. (2014). Complex dynamical systems in social and personality psychology. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (2nd ed., pp. 253–282). New York, NY: Cambridge University Press. 
Richardson, M. J., Marsh, K. L., Isenhower, R. W., Goodman, J. R., & Schmidt, R. C. (2007). Rocking together: Dynamics of intentional and unintentional interpersonal coordination. Human Movement Science, 26, 867–891. https://doi.org/10.1016/j.humov.2007.07.002 Riley, M. A., & Holden, J. G. (2012). Dynamics of cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 3, 593–606. https://doi.org/10.1002/wcs.1200

Riley, M. A., & Van Orden, G. C. (Eds.). (2005). Tutorials in contemporary nonlinear methods for the behavioral sciences. United States: National Science Foundation. www.nsf.gov/pubs/2005/ nsf05057/nmbs/nmbs.pdf Roberton, M. A. (1993). New ways to think about old questions. In L. B. Smith & E. Thelen (Eds.), A dynamic systems approach to development: Applications (pp.  95–117). Cambridge, MA: The MIT Press. Roli, A., Villani, M., Filisetti, A., & Serra, R. (2018). Dynamical criticality: Overview and open questions. Journal of Systems Science and Complexity, 31(3), 647–663. https://doi.org/10.1007/s11424-0176117-5 Sayama, H. (2015). Introduction to the modeling and analysis of complex systems. Geneseo, NY: Open SUNY Textbooks, Milne Library. Schiepek, G., Heinzel, S., Karch, S., Plöderl, M., & Strunk, G. (2016). Synergetics in psychology: Patterns and pattern transitions in human change processes. In A. Pelster & G. Wunner (Eds.), Selforganization in complex systems: The past, present, and future of synergetics (pp. 181–208). Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-27635-9_12 Scholz, J. P., Kelso, J. A. S., & Schöner, G. (1987). Nonequilibrium phase transitions in coordinated biological motion: Critical slowing down and switching time. Physics Letters A, 123(8), 390–394. https://doi.org/10.1016/0375-9601(87)90038-7 Sebastián, M. V., & Navascués, M. A. (2008). A relation between fractal dimension and Fourier transform: Electroencephalographic study using spectral and fractal parameters. International Journal of Computer Mathematics, 85(3–4), 657–665. https://doi.org/10.1080/00207160701286141 Sherblom, S. A. (2017). Complexity-thinking and social science: Self-organization involving human consciousness. New Ideas in Psychology, 47, 10–15. https://doi.org/10.1016/j.newideapsych.2017. 03.003 Shockley, K. (2005). Cross recurrence quantifcation of interpersonal postural activity. In M. A. Riley & G. C. Van Orden (Eds.), Tutorials in contemporary nonlinear methods for the behavioral sciences (pp. 142–177). Arlington, VA: National Science Foundation. www.nsf.gov/pubs/2005/ nsf05057/nmbs/nmbs.pdf Solomon, S., & Shir, B. (2003). Complexity: A science at 30. Europhysics News, 34(2), 54–57. https:// doi.org/10.1051/epn:2003204 Sorzano, C. O. S., Vargas, J., & Pascual-Montano, A. (2014). A survey of dimensionality reduction techniques. arXiv preprint. https://doi.org/10.48550/arXiv.1403.2877 Spivey, M. J. (2018). Discovery in complex adaptive systems. Cognitive Systems Research, 51, 40–55. https://doi.org/10.1016/j.cogsys.2018.05.001 Sporns, O. (2007). Complexity. Scholarpedia, 2(10), 1623. https://doi.org/10.4249/scholarpedia. 1623 Sporns, O. (2013). Making sense of brain network data. Nature Methods, 10(6), 491–493. https:// doi.org/10.1038/nmeth.2485 Sporns, O., Tononi, G., & Edelman, G. M. (2000). Connectivity and complexity: The relationship between neuroanatomy and brain dynamics. Neural Networks, 13(8–9), 909–922. https://doi. org/10.1016/S0893-6080(00)00053-8 Stakic, J., Suchanek, J. M., Ziegler, G. P., & Grif, E. R. (2011). The source of spontaneous activity in the main olfactory bulb of the rat. PLoS One, 6(8), e23990. https://doi.org/10.1371/journal.pone. 0023990 Stanislaus, B. (2013). Sierpinski triangle. Wikipedia. Retrieved February 3, 2022 from https://commons. wikimedia.org/wiki/File:Sierpinski_triangle.svg Stanley, H. E. (1999). Scaling, universality, and renormalization: Three pillars of modern critical phenomena. 
Reviews of Modern Physics, 71(2), S358–S366. https://doi.org/10.1103/RevModPhys.71. S358 Strogatz, S. H. (2015). Nonlinear dynamics and chaos: With applications to physics, biology, chemistry, and engineering (2nd ed.). New York, NY: CRC Press.

Szary, J., Dale, R., Kello, C. T., & Rhodes, T. (2015). Patterns of interaction-dominant dynamics in individual versus collaborative memory foraging. Cognitive Processing, 16(4), 389–399. https:// doi.org/10.1007/s10339-015-0731-8 Taborsky, P. (2014). Is complexity a scientifc concept? Studies in History and Philosophy of Science Part A, 47, 51–59. https://doi.org/10.1016/j.shpsa.2014.06.003 Thelen, E., & Smith, L. B. (2006). Dynamic systems theories. In W. Damon (Ed.), Handbook of child psychology: Theoretical models of human development (Vol. 1, 5th ed., pp. 563–634). New York, NY: Wiley. Thompson, E., & Varela, F. J. (2001). Radical embodiment: Neural dynamics and consciousness. Trends in Cognitive Sciences, 5(10), 418–425. https://doi.org/10.1016/S1364-6613(00)01750-2 Thouless, D. (1989). Condensed matter physics in less than three dimensions. In P. Davies (Ed.), The new physics (pp. 209–235). Cambridge, MA: Cambridge University Press. Timme, N. M., Marshall, N. J., Bennett, N., Ripp, M., Lautzenhiser, E., & Beggs, J. M. (2016). Criticality maximizes complexity in neural tissue. Frontiers in Physiology, 7(425). https://doi. org/10.3389/fphys.2016.00425 Tognoli, E., & Kelso, J. A. S. (2014). The metastable brain. Neuron, 81(1), 35–48. https://doi. org/10.1016/j.neuron.2013.12.022 Tomen, N., Herrmann, J. M., & Ernst, U. (Eds.). (2019). The functional role of critical dynamics in neural systems. Cham, Switzerland: Springer. Tranquillo, J. (2019). An introduction to complex systems: Making sense of a changing world. Cham, Switzerland: Springer Nature. Tsuda, I. (2001). Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behavioral and Brain Sciences, 24(5), 793–810. https://doi.org/10.1017/ S0140525X01000097 Tuller, B., Case, P., Ding, M., & Kelso, J. A. S. (1994). The nonlinear dynamics of speech categorization. Journal of Experimental Psychology: Human Perception and Performance, 20, 3–16. https:// doi.org/10.1037/0096-1523.20.1.3 van Geert, P. (1994). Dynamic systems of development: Change between complexity and chaos. New York, NY: Harvester Wheatsheaf. Van Orden, G., & Stephen, D. G. (2012). Is cognitive science usefully cast as complexity science? Topics in Cognitive Science, 4, 3–6. https://doi.org/10.1111/j.1756-8765.2011.01165.x van Rooij, M. M. J. W., Favela, L. H., Malone, M., & Richardson, M. J. (2013). Modeling the dynamics of risky choice. Ecological Psychology, 25(3), 293–303. https://doi.org/10.1080/104074 13.2013.810502 Vermeer, B. (Ed.). (2014). Grip on complexity: How manageable are complex systems? Directions for future complexity research. The Hague, The Netherlands: Netherlands Organisation for Scientifc Research (NWO). von Bertalanfy, L. (1972). The history and status of general systems theory. Academy of Management Journal, 15(4), 407–426. https://doi.org/10.5465/255139 Webber, Jr., C. L., & Zbilut, J. P. (2005). Recurrence quantifcation analysis of nonlinear dynamical systems. In M. A. Riley & G. C. Van Orden (Eds.), Tutorials in contemporary nonlinear methods for the behavioral sciences (pp. 26–94). Arlington, VA: National Science Foundation. www.nsf.gov/ pubs/2005/nsf05057/nmbs/nmbs.pdf Werner, S., Rink, J. C., Riedel-Kruse, I. H., & Friedrich, B. M. (2014). Shape mode analysis exposes movement patterns in biology: Flagella and fatworms as case studies. PLoS One, 9(11), e113083. https://doi.org/10.1371/journal.pone.0113083 Wiener, N. (1948). Cybernetics. Scientifc American, 179(5), 14–19. 
https://www.jstor.org/stable/ 24945913 Wijnants, M. L., Bosman, A. M., Hasselman, F. W., Cox, R. F., & Van Orden, G. C. (2009). 1/f scaling in movement time changes with practice in precision. Nonlinear Dynamics, Psychology, and Life Sciences, 13, 75–94.

Williamson, R. C., Doiron, B., Smith, M. A., & Byron, M. Y. (2019). Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction. Current Opinion in Neurobiology, 55, 40–47. https://doi.org/10.1016/j.conb.2018.12.009 Wilson, K. G. (1983). The renormalization group and critical phenomena. Reviews of Modern Physics, 55(3), 583–600. https://doi.org/10.1103/RevModPhys.55.583 Wittgenstein, L. (1958/1986). Philosophical investigations (G. E. M. Anscombe, Trans., 3rd ed.). Oxford, UK: Basil Blackwell. Wood, B., & Bettin, H. (2019). The Planck constant for the defnition and realization of the kilogram. Annalen der Physik, 531(5), 1800308. https://doi.org/10.1002/andp.201800308 Zachariou, N., Expert, P., Takayasu, M., & Christensen, K. (2015). Generalised sandpile dynamics on artifcial and real-world directed networks. PLoS One, 10(11), e0142685. https://doi.org/10.1371/ journal.pone.0142685

6 What is NExT? NeuroEcological Nexus Theory

When there are disputes among persons, we can simply say: Let us calculate, without further ado, and see who is right. (Leibniz, 1685/1898, p. 86)

6.1 Assembling the pieces

Up to this point, I appreciate that you, the reader, have been taken on a whirlwind tour. First, I motivated the claim that ecological psychology and neuroscience are commonly viewed as irreconcilable investigative frameworks via the example of visually-guided action. Second, an overview of Gibsonian ecological psychology was presented, as well as its motivations for doing scientific psychology in that way. Third, a case was made for viewing contemporary neuroscience as inheriting core investigative commitments that ecological psychology rejects. While ecological psychology and neuroscience attempt to investigate some of the same phenomena, isolated from each other they provide incomplete explanations and understanding. For that reason, fourth, overviews of various attempts to do ecological neuroscience were presented, as well as their limitations. Fifth, complexity science was presented as providing concepts, methods, and theories that can integrate ecological psychology and neuroscience under a unified investigative framework. Still, what would such a framework look like in practice? The aim of this chapter is to assemble the pieces that have been laid out in previous chapters in order to answer the aforementioned question as follows: When treated as part of complexity science, an integrated ecological psychology and neuroscience gives rise to the NeuroEcological Nexus Theory, or NExT. To be considered successful—or at least heading in the right direction—NExT must provide solutions to the following problems: First, enhance ecological psychology by telling a theoretically compelling and empirically supported story about neural contributions to phenomena of interest, such as perception-action events at the spatial and temporal scales of organism-environment systems. Second, enhance neuroscience by telling a theoretically compelling and empirically supported story about the situated1 nature of organism-environment systems that includes neural processes but does not privilege such contributions in explanations of phenomena of interest, such as perception-action events. Solutions to these problems have constraints.

1 The term 'situated' is used here to refer to the general claim that intelligent systems are always embodied—i.e., are constituted, in part, by a physical body—and embedded—i.e., are constituted, in part, by their environments (e.g., Robbins & Aydede, 2009; Roth & Jornet, 2013; Walter, 2014). When intelligent systems are situated, it is also common for them to be distributed (e.g., an airplane-pilot-co-pilot system), extended (e.g., a human and their iPhone), and enactive (e.g., an animal's sensorimotor system). Thus, here "situatedness" refers to these clusters of intelligent system-level constitutions and organizations.

As discussed in Chapter 4, for an "ecological neuroscience," broadly construed, to be properly ecological, it must maintain the four primary principles at the core of (Gibsonian) ecological psychology (see Chapter 2 for explanations of these principles):

1. Perception is direct.
2. Perception and action are continuous.
3. Affordances.
4. Organism-environment system.

Additionally, to properly incorporate the neural scale, it must provide testable hypotheses about the nature of neural activity related to those principles. Such neural activity cannot be merely correlational; that is, it is not enough to identify neural activity that happens at the same time as, for example, the perception of an affordance. A proper account needs to identify ways the relevant neural activity causes and constitutes phenomena of interest, such as affordance perception. The remainder of this chapter aims to motivate the claim that NExT provides a solution to these problems.

6.2 What is NExT?

nexus, n. 1.a. A bond, link, or junction; a means of connection between things or parts; (also) the state of being connected or linked. . . . 3. A central point or point of convergence; a focus; a meeting-place. (OED Online, 2022)

The NeuroEcological Nexus Theory (NExT) is both a theory about the nature of minded systems and a theoretical program for the scientific investigation of such phenomena. As stated in Chapter 1, by "mind" I refer to those capacities of organisms commonly referred to as cognition, consciousness, goal-directed behavior, and mental processes. As a theoretical program, NExT is a complexity science that provides concepts, methods, and theories to achieve the aims of ecological psychology and neuroscience. In view of that, NExT is properly applied to the investigation of minded organism-environment systems that have neural physiology, that is, nervous systems of some sort. Since, as claimed earlier, minded systems are always situated, NExT is committed to the position that such systems necessarily include organisms that are, at a minimum, tightly coupled with or, at a maximum, constituted by their ecologies. Here, "ecologies" is broadly construed to include the ecology of the body (i.e., embodiment) and the ecology of the environment (i.e., embeddedness), with systems commonly being distributed and extended as well. That, in short, is the NeuroEcological part of NExT.2 But what does "Nexus" refer to?

2 It is important to acknowledge that the term 'neuroecology' ('neuroecological,' etc.) is used in other ways as well. For example, neuroecology is defined as the "study of adaptive variation in cognition and the brain" (Sherry, 2006, p. 167), a field that combines neuroscience with community and population ecology (Zimmer & Derby, 2011), and the attempt to study brain functions with varying evolutionary histories by taking into account their ecological niches (Mars & Bryant, 2022). I think my use of "neuroecological" in the NeuroEcological Nexus Theory (NExT) is consistent in many ways with the aforementioned other uses of the term. For instance, NExT puts the ecological niche of organisms at the forefront of understanding their perception-action capabilities. In that way, the ecological psychology part of NExT overlaps with other conceptions of "neuroecology." Additionally, NExT takes seriously the necessity of considering the role of natural selection when understanding the nervous system. In that way, the neuroscience part of NExT overlaps with other conceptions of "neuroecology." As such, it is my hope that NExT is viewed by other "neuroecological" researchers as supporting or being consistent with—albeit in varying ways and emphases—many of their core commitments.

The Nexus part of NExT plays both rhetorical and explanatory roles. Rhetorically, NExT is a "nexus" in terms of being a point of convergence for ecological psychology and neuroscience, as well as complexity science. Explanatorily, NExT appeals to the notion of a "nexus" in terms of the primary causal and/or constitutive links or junctions that connect the various parts of a minded, organism-environment system. When attempting to reconcile ecological psychology and neuroscience, the most significant nexus is that which connects brains, bodies, and environments. In view of that, the most crucial part of NExT is identifying those connections and identifying how they serve to underlie the physical realizers of minded systems.

The Theory part of NExT refers to the fact that it ought to be understood as being a scientific theory and should be evaluated as one. Specifically, as a scientific theory, NExT provides understanding of natural phenomena, in part, by achieving three goals (Favela, 2021a; cf. U.S. National Academy of Sciences, 2018). First, it provides explanations, that is to say, it answers "How?" and "Why?" questions such as, "How does vision work?" and "Why are certain memories faster to recall?" Second, it is plausible. That is, the explanation must reasonably follow from acceptable commitments, for example, produce data that facilitate explanations consistent with those produced by experimental work involving auxiliary hypotheses. Third, it is predictive, namely, it must lead to the generation of testable hypotheses with expected outcomes.3

In summary, NExT is an attempt to understand minded organism-environment systems. Its phenomena of interest are organisms with nervous systems (Neuro-) that are situated in bodies and environments (-Ecological). Mind, such as cognition and goal-directed behavior, is caused and constituted by particular types of connections among the landscape of parts (Nexus). It aims to be reasonable and empirically verifiable (Theory). NExT provides a set of six hypotheses that comprise its theoretical program.4 What follows is a detailed explication of these hypotheses.

3 For more on theories in the mind sciences, see Bordens and Abbott (2014), Newell (1990), and Uttal (2005).
4 Understanding NExT as a "theoretical program" that is fleshed out via a set of hypotheses is directly inspired by Schöner (2020). In that work, six hypotheses are presented as the bases of a theoretical program for investigating behavior and cognition. While there are some shared concepts, methods, and theories among Schöner's approach and mine (e.g., an emphasis on dynamics and appreciation of lessons from Gibsonian ecological psychology), there are critical and fundamental differences (e.g., his use of representations and my attempt to maintain ecological principles).

6.3 Six hypotheses

NExT can be put into practice via a theoretical program for empirically investigating and understanding minded organism-environment systems by means of six hypotheses. Taken together, the hypotheses provide a way to integrate ecological psychology (i.e., the four primary principles) and neuroscience (i.e., neural spatiotemporal scales), while not treating the purview of one (e.g., neurons via neuroscience) as being epistemically or metaphysically entitled over others during investigative work. The six hypotheses are as follows:

• Hypothesis 1: The organism-environment system is the privileged spatiotemporal scale of description to understand mind.
• Hypothesis 2: Neural population dynamics generate relevant states.
• Hypothesis 3: Mind is based on low-dimensional neural dynamics.
• Hypothesis 4: Body organizes into low-dimensional synergies to generate relevant states.
• Hypothesis 5: Mind fundamentally emerges at low-dimensional scales of organism(neural, body)-environment activity.
• Hypothesis 6: The NeuroEcological Nexus Theory explains the architecture of the mind by means of a finite set of universal principles.

The following subsections are dedicated to explicating each hypothesis. As the reader will see, much of the work in explicating the hypotheses is frontloaded in the first few hypotheses.

Hypothesis 1: The organism-environment system is the privileged spatiotemporal scale of description to understand mind.

As discussed in Chapter 1, the current work utilizes the term "mind" to denote those capacities of organisms commonly referred to as cognition, consciousness, goal-directed behavior, and mental processes. Typical examples of mind include decision making, perception, and prospective control. NExT provides a scientific framework that aims to explain and understand overlapping phenomena studied by ecological psychology and neuroscience, while not privileging the commonly investigated spatiotemporal scales of one over the other. Consequently, a comprehensive investigation of minds is not achieved by doing neuroscience alone, that is to say, by attempting to explain, for example, visual perception solely or primarily in terms of neuronal activity. Additionally, a comprehensive investigation is not achieved by doing ecological psychology alone, that is to say, by attempting to explain, for example, visual perception solely or primarily in terms of ambient arrays. With that said, neither neuronal activity nor ambient arrays can be excluded either. In order to understand mind, the spatiotemporal scale of organism-environment systems must be privileged above the features of each alone.

A few crucial points must be made here. One is that privileging the scale of organism-environment systems does not entail a rejection of neuroscience's tried and tested methods. Methodological reductionism, decomposition and localization, and modeling target phenomena as computational systems, to name a few, have borne much fruit. However, NExT requires that such methods be utilized with an aim toward shedding light on the spatiotemporal scales of organism-environment system phenomena. Another crucial point is that such privileging does not entail acceptance of explanations that would be satisfactory within an ecological psychology framework alone. While the target of investigation of much ecological psychology research is at the scale of organism-environment interactions, especially affordance events, much of that work accepts forms of explanatory completeness that come at the cost of excluding smaller-scale physiological contributions. For example, an explanation of the affordance pass-through-able is justifiably complete in an ecological psychology study by focusing on the gross physiological features of an animal (e.g., a human's shoulder width) and the environment (e.g., a door's width; e.g., Warren & Whang, 1987). Despite the fact that such work is "complete" in its own way—namely, in terms of ecological psychology expectations—it is incomplete insofar as it does not incorporate physiological contributions at finer scales that surely enable such events to occur, such as the visual system's exploitation of the optic array to guide walking.
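
Before turning to the third point, the body-scaled analysis just cited can be made concrete with a minimal sketch. The critical aperture-to-shoulder-width ratio of roughly 1.3 is the value commonly associated with the Warren and Whang (1987) line of work; the function name, units, and exact threshold used here are illustrative assumptions rather than a reproduction of their analysis.

# Minimal sketch of a body-scaled affordance analysis for "pass-through-able".
# The ~1.3 critical ratio (aperture width / shoulder width) follows the
# Warren & Whang (1987) line of work; treat the exact value and this function
# as illustrative assumptions, not a reproduction of the published analysis.

def affords_passing_through(aperture_width_m, shoulder_width_m, critical_ratio=1.3):
    """Return True if the aperture affords frontal (no shoulder rotation) walking through."""
    body_scaled_ratio = aperture_width_m / shoulder_width_m  # intrinsic, animal-referenced measure
    return body_scaled_ratio >= critical_ratio

print(affords_passing_through(0.75, 0.45))  # ratio ~1.67: pass-through-able
print(affords_passing_through(0.55, 0.45))  # ratio ~1.22: shoulder rotation expected

The point of the sketch is that the affordance is specified by the relation between environment and body, not by either measurement alone; what it leaves out, and what NExT demands, is the finer-grained story of how such relations are detected and exploited by the nervous system.
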
The third crucial point concerns the possibility of misinterpreting the phrase "organism-environment system" as referring to a single spatial and/or temporal scale of investigation. The fact is that while the target of investigative interest of NExT is the "privileged" scale of organism-environment systems, such phenomena are multiscale in nature. Thus, it is "privileged" insofar as it offers the advantage of being committed from the start to the view that such phenomena require multiple spatial and temporal scales to be accounted for.

An immediate challenge facing a commitment to treating the organism-environment system as the privileged scale of investigation is the seemingly unmanageable amount of data that an investigator must reckon with in order to explain even the simplest of target phenomena. While this is true of interdisciplinary projects (e.g., those incorporating the cognitive, neural, and psychological sciences), it is happening within individual disciplines as well, especially contemporary neuroscience. As the neurosciences have become a big data enterprise (Amunts et al., 2019; Frégnac, 2017), they have increasingly been faced with a "data deluge challenge" (Favela, 2014, 2023; cf. "big behavior," von Ziegler, Sturman, & Bohacek, 2021). Here, "big data" refers to two characteristics of contemporary neuroscience: One is the quality and quantity of data and the other is the relevant sources of data. With regard to the first, with the advent of increasingly sophisticated recording methods, there has been an explosion of data obtained about neural systems and related behaviors. Examples of such research abound in the neural decoding literature, where recorded brain activity is used to make predictions about features in the world, increasingly in combination with machine learning tools like support vector machines (Glaser et al., 2020), for example, to decode visual percepts from electroencephalography (EEG) data. With regard to the second, this explosion of data has been in part due to the increasing acknowledgment of the contributions of nonneuronal causally and constitutively relevant factors to phenomena once thought explanatorily and metaphysically reducible to the brain. The idea that not only brains, but also the body and environments they are situated in, play significant roles in both constituting and causing behavior, cognition, and consciousness has resulted in researchers broadening the investigative purview of what is considered relevant to explaining said phenomena (e.g., Anderson, 2014; Barrett, 2022; Buzsáki & Tingley, 2023; Spivey, 2020; Sporns, 2003; Westlin et al., 2023). The challenges stemming from big data are further compounded when target phenomena are investigated from interdisciplinary and multidisciplinary approaches, such as explaining visual perception via both ecological psychology and neuroscience. Does NExT have a way to address challenges such as those stemming from the attempt to handle the potentially enormous amount of relevant data to explain phenomena at the organism-environment system scale, while both maintaining biological realism and epistemic comprehensibility?

NExT provides a response to the data deluge challenge in three primary ways. The first is by identifying the "smallest" spatiotemporal scale that is causally and constitutively relevant to organism-environment system minded events, namely, neural population dynamics (Hypothesis 2). Second, NExT posits that dimensionality reduction techniques reveal that this scale organizes into low-dimensional structures (Hypothesis 3). As well, the most causally and constitutively relevant contributors from all relevant scales—that is, neural, body, and environment—will reveal organizational tendencies toward low-dimensional structures (Hypotheses 4 and 5). Third, these organizational tendencies of relevant constituents of organism-environment systems exhibit a finite set of universal principles (Hypothesis 6). It is as a complexity science that these hypotheses allow NExT to provide responses to the data deluge challenge. By quantitatively employing concepts such as emergence, nonlinearity, self-organization, and universality, NExT offers a way to understand the structures and operations of organism-environment systems in a manner that is conceptually compelling and mathematically rigorous (cf. Favela, 2020; Favela & Amon, 2023).

Hypothesis 2: Neural population dynamics generate relevant states.

NExT aims to account for the overlapping phenomena of interest investigated by ecological psychology and neuroscience. As such, it must provide a plausible hypothesis about the nature of neural-scale contributions to ecological-scale activity of organism-environment systems. In view of that, NExT hypothesizes that the relevant spatiotemporal scale of neural activity related to, for example, affordance events is that of population dynamics at the mesoscale. Two questions must be addressed: What are neural population dynamics, and what is the mesoscale of neural activity?

First, what are neural population dynamics? The primary part of that question requires understanding what a neural population is. That is no easy question in itself, as it is underlaid by a foundational issue in neuroscience, namely, identifying the primary unit of neural systems. As discussed in Chapter 3, following Ramón y Cajal's seminal work in the late 1800s demonstrating that the nervous system is not one continuous fiber—i.e., reticular theory—but is constituted by many individually connecting cells, neurons have been defined as the primary unit of study for neuroscience—i.e., the neuron doctrine (Glickstein, 2006). It was argued in Chapter 3 that the single neuron has continued to be the primary unit of investigation in cognitive and computational neuroscience as well, as demonstrated by the widespread influence of McCulloch and Pitts' (1943) work and the continued application and development of neural networks.5 While the single neuron continues to be a significant aspect of understanding neural systems, much work in the neural sciences has come to investigate "populations" as the primary unit for explaining various kinds of activity. Some have even argued that the neuron doctrine ought to be replaced by the "neural population doctrine" (Ebitz & Hayden, 2021; Saxena & Cunningham, 2019). Again, challenges arise, as defining a neural population is not straightforward. Cortical columns are regularly offered as candidates for the title of neural population (e.g., Gerstner, Kistler, Naud, & Paninski, 2014, p. 293). However, this just pushes the problem back one step, as it is unclear what defines a "cortical column"; for example, is it cells of the same type, do the cells need to reside in one layer (Gerstner et al., 2014), or do they need to share the same tuning attributes (Horton & Adams, 2005)? As a path forward to addressing these challenges, I follow Kohn and colleagues in their claim that descriptions of populations "should be motivated by and linked to well-specified function" (Kohn, Coen-Cagli, Kanitscheider, & Pouget, 2016, p. 237). In light of Kohn et al.'s prescription (2016), and in connection with earlier literature in ecological psychology that attempted to make connections with neuroscience (Reed, 1996), NExT treats neural populations in terms of Gerald Edelman's theory of neuronal group selection, also known as Neural Darwinism (Edelman, 1987). Neural Darwinism is a theory to explain brain structure and function that is guided by the Darwinian principle

5 In the 2020s, there should be no doubt that it remains truer than ever that the single neuron continues to be not just the primary unit of neuroscience but also of what has come to be labeled "artificial intelligence." The popularity of large language models (LLMs) in 2023 alone is testament to this, with exploding interest in neural network systems like ChatGPT (https://openai.com/blog/chatgpt).

of population thinking, or selectionism. Population thinking is the idea that variance within biological populations is necessary for the process of evolution, such that those individuals who can best cope with their environment will reproduce more successfully (Edelman, 1988). In order to appreciate the sense in which Neural Darwinism is key to the hypothesis that neural populations generate the most relevant dynamics for organism-environment system activity—particularly with regard to the Kohn et al. (2016) point that how neural populations are defined should be motivated by and linked to well-specified function—it is worth going into some detail about the three foundational points of Neural Darwinism (Edelman, 1987; Edelman, Gally, & Baars, 2011; Favela, 2009, 2021a):

1. Developmental selection leads to the primary repertoire.
2. Experiential selection yields the secondary repertoire.
3. Reentry.

Neural Darwinism stresses that neural systems are part of organisms that learn and must cope with their environmental niche. As such, the theory starts at the beginning of an organism's development. The primary repertoires are those morphological features that develop early in an organism's life, such as the general layout of the body and early outgrowths of neural networks. Genetic constraints are at their most potent at this stage. However, epigenetic influences are present even then. The main message from Edelman's weighty discussion of embryonic development is that cells of all kinds form groups based upon genetic information that is expressed in a controlled manner (1988, 1989). This step is important because it demonstrates that the notions of environmental influence and selectionism-guided development express influences at the very beginning of an organism's development. As Edelman notes, "The internal environment [e.g., mammalian womb] during development can exert as great a selective force . . . as the external environment" (1988, p. 52). Gravity, temperature, and toxins are a few examples of the environmental conditions that affect the development of cell groups. If the environment were not novel, then selectionism would not be necessary. The fact is that the environment is novel and occurrences such as temperature and toxicity must be accounted for even at the early stages of development. It is at these earliest of developmental stages that the most minimal degrees of variation are demonstrated among species. Increased novelty in the environment of the organism occurs after the earlier stages of development and once the organism leaves the controlled environment of the womb or egg. Accordingly, with the ability to successfully cope with environments of increased complication come decreases in inherited morphology. This is especially true for neurons, for the more inherited a capacity is, the less that capacity can cope with a novel factor.

Once the basic genotype has been expressed in a particular environment (i.e., primary repertoire), the secondary repertoire goes into effect. The secondary repertoire accounts for experiential selection via changes in synaptic strength and network organization (i.e., neural plasticity). Based upon morphologically constrained behavioral experiences, the corresponding neural activity will be strengthened or weakened. Once an organism's primary repertoires are in the process of expression in an environment, epigenetic development and alterations take place as a result of the experiences had by the organism.
A human who plays the piano for many years, for example, will strengthen connectivity in neuronal groups associated with finger dexterity (Gaser & Schlaug, 2003). The behavior resulting from interactions with the environment induces effects upon neuronal coordination and organization. Interacting with the environment does not cause changes to larger anatomical

structures of the brain (e.g., cerebellum, frontal lobes, etc.), but it does cause changes of varying strengths and weaknesses at smaller scales (e.g., neural networks and synapses). These connections begin to develop into neuronal groups called maps. These maps are groupings of populations whose signals have been strengthened by environment-influenced behaviors (Edelman, 1989, p. 45). This development is not confined to preestablished, domain-specific modules, such as those entailed by evolutionary psychology. In comparing primary and secondary repertoires, it helps to think of the primary repertoires as the product of genotype, weakly influenced by the environment, and pre-experiential morphological development. Secondary repertoires are the epigenetic, strongly environment-influenced, experience-based selection and alteration of the fine structures of morphology such as the synaptic connectivity of brains.

The third foundation of Neural Darwinism is reentry, or reentrant signaling. Reentry is the dynamic process whereby an organism's cognitive and behavioral capacities (resulting from the primary and secondary repertoires) are supported by anatomically distant maps in the brain, which are linked by reciprocal signals that coordinate (via synchronization and integration) with each other and the physical dimensions of the body and world with a high degree of spatiotemporal accuracy (Favela, 2021a; cf. Edelman, 1989; Edelman & Tononi, 2000). It is important to note that reentry is not feedback (Edelman & Gally, 2013). As a term from control theory, "feedback" is a process that requires control and correction of signals based on prespecified paths and desired outputs, or prescribed relationships among variables (Mayr, 1970). Although the primary repertoire is genetically inherited and can be thought of as "prespecified," its expression and the secondary repertoire are not. Organisms are selectionist systems that develop via experience. For that reason, reentry is a process that synchronizes and integrates signals simultaneously from multiple neuronal populations, which are themselves receiving signals from the body and world.

Another crucial feature of reentry is that it contributes to the degenerate nature of the components and activities that underlie behavior and cognition. With regard to the brain, degeneracy refers to the ability of structurally different neuronal circuits and maps to give rise to the same function or output (Edelman, 2003; Edelman & Gally, 2001).6 Degeneracy is constant throughout an organism's life because experiential selection (i.e., secondary repertoires) is ongoing. This means that during an organism's lifetime, various structures will give rise to similar capacities, for example, consciousness (Edelman, 2003), motor movements (Sporns & Edelman, 1993),7 and visual perception (Sporns, Tononi, & Edelman, 2000). As is made evident by the preceding examples, 'structures' is used broadly to include neuronal as well as behavioral and bodily configurations. A consequence of treating all of those structures as degenerate is that their various combinations can underlie the same or similar capacities. For example, different neuronal structures in the same environment could give rise to the same capacity. This is a desirable capability for an organism to have because it means that those capacities that facilitate success in various environments can be achieved by a range of neuronal configurations.
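
To give the idea of degeneracy a concrete, if highly schematic, form: in the toy sketch below, two structurally different readouts of a three-neuron population produce identical outputs for every population state the system actually visits. The scenario, the weights, and the constraint on population states are my own illustrative assumptions, not anything drawn from Edelman's models.

# Toy illustration of degeneracy: two structurally different readout
# "circuits" over a three-neuron population yield the same function.
# The constraint on population states and the weights are illustrative
# assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Suppose shared driving input constrains the population so that
# neuron 3 always fires at the summed rate of neurons 1 and 2.
r12 = rng.random((1000, 2))                 # rates of neurons 1 and 2
rates = np.column_stack([r12, r12.sum(1)])  # neuron 3 = neuron 1 + neuron 2

w_a = np.array([1.0, 1.0, 0.0])  # circuit A reads out neurons 1 and 2
w_b = np.array([0.0, 0.0, 1.0])  # circuit B reads out neuron 3 only

print(np.allclose(rates @ w_a, rates @ w_b))  # True: different structures, same output

Note that the same constraint that makes the two readouts equivalent also confines the population states to a two-dimensional subspace of the three-dimensional rate space, which anticipates the low-dimensional theme of Hypothesis 3.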

6 While I only use the term "degeneracy" (i.e., the same function emerges from different structures), most of these points apply to "redundancy" (i.e., the same function emerges from the same or similar structures) and "pluripotentiality" (i.e., different functions emerge from the same structure) as well (Edelman & Gally, 2001; Figdor, 2010; Friston & Price, 2003; Pessoa, 2014).
7 Ecological psychologists who are interested in motor control may be especially interested in this work that purports to solve Bernstein's problem, or the challenge of redundant degrees of freedom in motor control (Sporns & Edelman, 1993).

With this introduction to Neural Darwinism at an end, I am poised to state what a neural population is within the NExT framework. A neural population is a neuronal group that has been selected for over species development timescales (i.e., primary repertoire), individual organism lifetimes (i.e., secondary repertoire), and moment-to-moment experience timescales (i.e., reentry), owing to its contributions to synchronized and integrated brain-body-environment structures that have facilitated evolutionary advantages.8 In that way, neural populations are defined by their links to well-specified function (cf. Kohn et al., 2016). Such a treatment of neural populations and neural structure-function relationships in general ought not to strike one as controversial as it may once have. Recent years have seen calls from various neuroscientists to shift away from understanding brains in terms of traditional psychological categories (e.g., mental terms like attention, memory, perception, etc.) to treating the brain on its own terms. What this means is focusing research upon categories whose definitions are constrained by the fact that brains (and organisms in total) are the product of evolutionary pressures (e.g., Buzsáki, 2019; Cisek & Hayden, 2022; Pessoa, Medina, & Desfilis, 2022). Such an understanding of brains is quite suited for a framework like NExT that aims at integrating neuroscience and ecological psychology. A major reason is that it encourages drawing connections through organism-environment systems that can be drawn from the neural scale to perception-action events like those involving affordances.

This brings me to the second question of this subsection: What is the mesoscale of neural activity for NExT? As with neural populations, there is no universally agreed upon definition or application of the terms microscale, mesoscale, and macroscale.9 There are some more common applications of the terms, such as applying 'mesoscale' to cortical circuits (e.g., Mitra, 2014). Then again, it is easy to find other respectable uses of the same terms applied to what others may think of as different scales. In connectome research, for example, circuits are labeled as microscale (e.g., Sporns, 2016). One reason for this seemingly incongruent application of such concepts is that these scales are not necessarily defined in precise anatomical or functional terms. Instead, these terms are typically defined within a larger research strategy, where their definition is constrained by recording, modeling, and analysis techniques, such as the connectome definition of "microscale" in terms of high-resolution images of wiring patterns of circuits (Sporns, 2016). Without a field-wide precise and generally agreed upon definition to appeal to, NExT defines mesoscale as describing the organizational characteristics and dynamics of those neural populations that contribute to organism-environment system functions, like affordance events. This sense of mesoscopic is consistent with Walter Freeman's use of the term:

The microscopic contributions of the individuals support the formation of the mesoscopic state of the population, which Haken (1983) calls an "order parameter", because it shapes, regulates and constrains or "enslaves" the activity of the individuals into an ordered pattern with the degrees of freedom reduced far below the magnitude given by

8 Evolutionary advantages, of course, being "the four F's: feeding, fleeing, fighting, and reproduction" (Churchland, 1994, p. 31).
9 William Uttal (2017) raises a number of issues with current definitions of micro-, meso-, and macroscale, not least of which is whether those definitions ought to be taken literally or as metaphors.

the number of otherwise autonomous members. Therefore, a neurodynamic model of brain activity must have three kinds of state variables: microscopic, representing the activities of the neurons as individual members of a population; mesoscopic, representing the activities of the neural ensembles comprising the parts of the brains; and macroscopic, representing the cognitive actions of the brains under study. (Freeman, 2000, p. 5)

This passage is illuminated by the discussion of complexity science in Chapter 5. As a reminder, an order parameter is the collective variable that describes the global scale phenomenon under investigation, where "global" is always contextually defined. Putting Freeman's ideas to use in NExT, "mesoscopic" scales can be understood as referring to neural populations, or maps in Neural Darwinian terms. These neural populations (or "ensembles" in Freeman's language) are comprised of the microscopic activities of their constituent neurons. The macroscopic refers to those target phenomena of interest, such as the brain's contributions to intelligence and goal-directed behavior. Two particular features of Freeman's use of mesoscopic will be significant in my discussions of the additional hypotheses.

One, which also refers back to Chapter 5, is the notion of circular causality. While neural populations (or the mesoscale) are treated as the scale of neural organization generating the relevant activity for NExT, the contextual nature of that scale cannot be stressed enough. In short, the mesoscale is always defined in a context. For Freeman and NExT, that context involves a "microscale" of neurons and a "macroscale" of body and body-environment activity. Thus, while the mesoscale can be treated as, for example, a well-delineated variable in a model of organism-environment system activity, the neural populations comprising that scale cannot actually be isolated from their smaller and larger scale constituents and effects. It is in that sense that circular causality is evident: While the mesoscale (neural populations) is constrained by the microscale (individual neurons) activity, so too is the microscale constrained by the mesoscale activity. Freeman states that the "definition of mesoscopic and macroscopic state variables requires consideration of circular causality" (2000, p. 6) for this very reason that each scale constrains and is constrained by the others.10 The significance of circular causality and its contribution to the contextual nature of organism-environment systems and the definition of their structure-function relationships cannot be stressed enough. There should be no doubt that linear causation has been taken for granted throughout the history of the life sciences.11 Neuroscience is no exception to this, as the nervous system has usually been understood via linear cause/effect or input/output causal structures. For example, when executing motor control, the line of thought is typically: stimulus received by nervous system → stimulus transformed into representations → representations computed → motor command sent to limb → and so on (e.g., Jordan & Wolpert, 2000; Pouget & Snyder, 2000). Such linearities can be exhibited in both bottom-up and top-down form, where the significant causal activity stems from neurons ("upward causation") or from populations ("downward causation"). While "circular causation" is not the phrase that is always employed (e.g., cross-scale interactions; Le Van Quyen, 2011),

10 In connecting again with material in Chapter 5, it is due to this sense of circular causality that the brain can be understood as an interaction-dominant system. 11 For helpful discussion of this point, with emphasis on the cognitive and psychological sciences, see Amon and Holden (2021).

there is increasing acknowledgement in the neurosciences of its crucial role in neural activity (e.g., Buzsáki, 2019; Kelso, 2021; Yin, 2020).

The other feature of Freeman's use of mesoscopic that is significant in my discussions of the additional hypotheses (especially Hypothesis 3), and that is congruent with Neural Darwinism, is his discussion of the importance of topology to mesoscopic dynamics (e.g., Freeman, 2000, pp. 99, 372). This is especially true of Freeman's discussion of topology and its relation to understanding complex systems. As I argued earlier, Freeman's notion of mesoscopic is compatible with the way Neural Darwinism defines neural populations. For Neural Darwinism, neural populations are the scale at which the relevant neural dynamics occur for organism-environment system activity. As such, it is the population-level relationships among neurons that matter most to said activity. A proper way to understand the characteristics of neural populations at this scale is via their topological properties. As it so happens, topological properties are one of Freeman's key defining characteristics of the mesoscale (Freeman, 1975/2004, p. 10). There are multiple virtues to emphasizing the topological features at the mesoscale—one is that it facilitates the ability to identify neural network properties by abstracting away—to a certain degree—from the details of individual neurons that are not difference makers to targets of investigation (e.g., Freeman, 1975/2004, pp. 25, 179). By providing a means to bypass unnecessary details for particular explanatory ends, another virtue of topological methods is that they help researchers to understand complex systems phenomena that are comprised of a potentially overwhelming number of variables (e.g., Freeman, 1975/2004, p. 440). Topological methods in neuroscience have come a long way since Freeman's application of them in the 1970s. The most popular contemporary developments are certainly those seen in network neuroscience (e.g., Sporns, 2011). With improved neural imaging tools and increased computational power, the past two decades have seen an explosion of work mapping and analyzing brain structural and functional connectivity. For current purposes, the most significant developments have stemmed from progress in manifold theory, especially the neural manifold hypothesis. With neural population dynamics hypothesized as being those that generate relevant states for NExT, I now turn to the next hypothesis. I will argue that manifold theory provides a way to identify the relevant states of neural population activity that cause and constitute the organism-environment system activity of interest.

Hypothesis 3: Mind is based on low-dimensional neural dynamics.

Though Hypothesis 1 claims that the privileged spatiotemporal scale of description to understand mind is the organism-environment system, doing so does not mean excluding significant contributions from the neural scale. Doing so would merely repeat the shortcoming of doing ecological psychology alone. Following Edelman and Freeman, Hypothesis 2 claims that the relevant scale of neural activity for NExT is found at the mesoscale of neural populations. While focusing on the mesoscale certainly helps constrain the scope of investigation somewhat, data deluge challenges remain at the scale of neural population dynamics.
The challenge of identifying meaningful causal and constitutive relationships among neural population contributions to environmental scale activity is further compounded by issues such as the degenerate nature of neural networks (cf. Edelman & Gally, 2001), degrees of freedom problems in body coordination (cf. Turvey, 1990), and identifying effective ecological information to guide perception-action (cf. Araújo, Davids, & Hristovski, 2006).


Hypothesis 3 claims that the relevant neural population dynamics are low-dimensional. Specifically, they are low dimensional in a way that is identifiable via manifold theory. There are various kinds of "manifolds" across mathematics, for example, analytic manifolds and complex manifolds (Tu, 2011). Topological manifolds are the applicable kind with respect to Hypothesis 3. Topology is the mathematical study of the properties of objects that are maintained despite changing the shape of the object (e.g., stretching or twisting), without compromising the object's integrity (e.g., cutting or ripping; Weisstein, 2022). As such, topology is concerned with the shape of objects (e.g., relative position and shape), as opposed to geometry, which is concerned with the set measurements of objects (e.g., angle and distance; Earl, 2019). The "objects" of topology are called topological spaces. When two or more topological spaces have the same properties (e.g., shape), then they are called homeomorphic. These concepts underlie the idea of manifolds.

In topology, a manifold is a mathematical object that looks locally like Euclidean space but globally may have a more complicated structure. Specifically, a manifold is a topological space that is locally homeomorphic to Euclidean space of a given dimension. This means that if a small portion of a manifold is zoomed in upon, it will appear to be flat and Euclidean. But if the entire space of the manifold is viewed by zooming out, it may have more complicated shape features, such as curves and holes. Basic shapes have an associated topological dimension, or n-dimensionality: points = 0-dimensional, curves = 1-dimensional, surfaces = 2-dimensional, and solids = 3-dimensional (Edgar, 2008). For example, a sphere is a 3-dimensional body, but is 2-dimensional as a topological manifold. As a manifold, a sphere looks like a flat plane locally but has a global curvature.

The classic example of homeomorphic topological spaces is the coffee mug and the donut, or torus (Figure 6.1). While at first glance these two objects seem quite different—i.e., one has a handle and an opening that can hold fluids, the other has a hole that goes through it—they are equivalent in terms of their surface properties. That is to say, a coffee mug can be reshaped into a donut without compromising its continuous integrity (e.g., without ripping it), and so too can a donut be reshaped into a coffee mug. The reason is that they share the same topological dimension. Specifically, they are both 2-dimensional manifolds that are continuous shapes with one hole. As such, they look locally like a flat plane with a hole, while they have a global structure that is different from a flat plane. Contrast the coffee mug and donut with

Figure 6.1 Topologically equivalent objects. A coffee mug (leftmost object) and a donut (rightmost object) are topologically equivalent. They are both 2-dimensional in that their surfaces are equivalent, namely, they have one hole. As such, a coffee mug can be deformed into a donut (images left to right) and a donut can be deformed into a coffee mug (images right to left).


a ball, which is topologically 2-dimensional as well but has no hole. Thus, the ball cannot be reshaped into a coffee mug or donut.

Still, what is manifold theory? Manifold theory is a branch of mathematics that studies the properties and behavior of manifolds, which are mathematical objects that look locally like Euclidean space but can have a more complicated global structure (Edgar, 2008; Ma & Fu, 2012; Rowland, 2022; Tu, 2011). The main focus of manifold theory is the study of the local and global properties of manifolds, such as curvature and differentiability. As discussed earlier, manifolds can be classified according to their dimension and topological properties. Key topics in manifold theory include algebraic topology, which studies the topological properties of manifolds and their invariants; differential geometry, which studies the behavior of smooth functions and differential equations on manifolds; and geometric topology, which studies the global structure and properties of manifolds, and the classification of manifolds. Additionally, manifold theory is increasingly being combined with dynamical systems theory and machine learning in neuroscience research (e.g., Duncker & Sahani, 2021; Floryan & Graham, 2022; Linot & Graham, 2022; McCullough & Goodhill, 2021).

The concepts and tools from manifold theory, and topology more broadly, have come to be increasingly utilized in data science. In particular, this is evident in widening applications of the manifold hypothesis. In short, the manifold hypothesis is the claim that very high dimensional datasets have much lower dimensional manifolds that capture their principal structure (Fefferman, Mitter, & Narayanan, 2016). Here, "very high dimensional datasets" can be understood as referring to any large amount of recorded data from sources that include, but are not limited to, deep learning (Brahma, Wu, & She, 2016), images and speech (Fefferman et al., 2016), and neural activity (Cunningham & Yu, 2014). In this way, the manifold hypothesis can be understood as a form of dimensionality reduction. As discussed in Chapter 5, in data analyses (e.g., machine learning and statistics), dimensionality refers to the informative features of a dataset. High-dimensional data are those datasets with a large number of features whose relationships to each other and to phenomena of interest are computationally demanding to determine. Dimensionality reduction is a data processing strategy for trimming the number of a dataset's features without losing valuable information.

Why is the manifold hypothesis relevant to NExT? In light of the data deluge challenges previously discussed, attempts to explain and understand multiscale phenomena that include the spatiotemporal scale of organism-environment systems quickly become epistemically overwhelming. When attempting to explain an affordance event, such as the pass-through-ability of a door, for NExT to be an advance on ecological psychology and neuroscience alone, it must provide an account that not only integrates relational features of the body and environment (e.g., aperture-to-shoulder ratio), but also includes the causal and constitutive contributions of neural activity. This is an incredibly daunting task, as accounting for the data from specific contributors—e.g., brain, body, or environment—can quickly become overwhelming on its own.
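To make the manifold hypothesis concrete, here is a minimal sketch, not drawn from any of the studies cited in this chapter, that assumes the NumPy and scikit-learn libraries are available. Two thousand points recorded in three dimensions in fact lie on a two-dimensional "swiss roll" surface; a nonlinear manifold-learning method (Isomap) recovers the intrinsic coordinates, whereas a linear method (principal component analysis) spreads the variance across all three recorded dimensions.

```python
# Minimal sketch of the manifold hypothesis (illustrative only): 3-D data
# points that actually lie on a 2-D manifold (a "swiss roll"), whose intrinsic
# coordinates a nonlinear method (Isomap) can recover. Assumes NumPy and
# scikit-learn are installed.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# 2,000 points embedded in 3 dimensions; t is the hidden position along the roll
X, t = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)

# Linear dimensionality reduction: variance is spread over all three components
pca = PCA(n_components=3).fit(X)
print("PCA explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))

# Nonlinear (manifold) dimensionality reduction: unrolls the surface into 2-D
embedding = Isomap(n_neighbors=12, n_components=2).fit_transform(X)

# One recovered coordinate should track the latent roll position t
# (the sign and ordering of Isomap dimensions are arbitrary)
r = abs(np.corrcoef(embedding[:, 0], t)[0, 1])
print(f"|correlation| between Isomap dimension 1 and latent position: {r:.2f}")
```

The point of the sketch is only the logic: apparent dimensionality (three recorded coordinates) can exceed intrinsic dimensionality (two manifold coordinates), and manifold methods are designed to expose that difference.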
As discussed in Chapter 5, this deluge is in part why researchers in the neural and psychological sciences have increasingly appealed to data science methods like dimensionality reduction. While justifiably categorized as a form of dimensionality reduction, the manifold hypothesis offers something more. Common dimensionality reduction methods like principal component analysis merely identify statistical correlations among variables. Such analyses can be powerful and useful in their own ways. However, they do not serve as guides to discovery that supply a priori reasons for testing particular hypotheses about the nature of the dataset


under examination.12 The neural manifold hypothesis provides a richer guide to discovery than more common dimensionality reduction methods. The neural manifold hypothesis is the claim that very high dimensional datasets—specifically, in the form of neural population dynamics—have much lower dimensional manifolds that capture their principal structure—i.e., "neural modes"—that generate specific behaviors (Gallego, Perich, Miller, & Solla, 2017). Motivated by increasing evidence (Abbaspourazad, Choudhury, Wong, Pesaran, & Shanechi, 2021; Balasubramaniam et al., 2021; Chaudhuri, Gerçek, Pandey, Peyrache, & Fiete, 2019; Claudi & Branco, 2022; Jazayeri & Ostojic, 2021; Humphries, 2021; Kang, Xu, & Morozov, 2021; Kato et al., 2015; Khona & Fiete, 2022; Mitchell-Heggs, Prado, Gava, Go, & Schultz, 2023; Vyas, Golub, Sussillo, & Shenoy, 2020; Wärnberg & Kumar, 2019; for additional review see Gao et al., 2017), the neural manifold hypothesis claims that the spatiotemporal scope of neural activity causally related to and/or constitutive of a range of phenomena (e.g., motor control) may seem incredibly large but is in fact confined to a much smaller scale. In other words, while the activity of large numbers of neurons during any given task may seem to indicate high degrees of freedom, the subspace of relevant activity actually spans only a few variables. Moreover, those variables are interpretable via a finite set of manifolds.

So how is the neural manifold hypothesis implemented? One way to understand the aim of the neural manifold hypothesis is as a method for identifying neural modes that cause and constitute various behavioral and cognitive activity. To that end, and in line with NExT Hypothesis 2, it is hypothesized that those neural modes are found at the mesoscale of population activity. This commitment claims that relevant neural activity is not found in individual neurons (i.e., microscale) or in brain regions (i.e., macroscale). Thus, statistical methods that average across activity of individual neurons or across larger brain regions will not suffice. As Gallego and colleagues put it:

Here we argue that the underlying network connectivity constrains these possible patterns of population activity . . . and that the possible patterns are confined to a low-dimensional manifold . . . spanned by a few independent patterns that we call "neural modes." These neural modes capture a significant fraction of population covariance. It is the activation of these neural modes, rather than the activity of single neurons, that provides the basic building blocks of neural dynamics and function. (Gallego et al., 2017, p. 978)

Identifying these modes requires extracting Z variables from a recorded set of X variables. Because Z is not directly observed or obvious, they are called latent variables. In neuroscience research, the X variables usually take the form of neuronal time series. Thus, the neural manifold hypothesis expects that because the X neurons are part of an interconnected network, only Z will be needed to account for their activity. These neural population dynamics are processed with various dimensionality reduction techniques to produce a neural manifold, which is a set of activity plotted in an n-dimensional state space (e.g., Chaudhuri et al., 2019). Next, the latent dynamics across the plotted states are weighted to see if there are patterns in the

12 As discussed in Chapter 1, guides to discovery are sources of new hypotheses that can support the development of new experiments and provide theories to constrain explanations of experimental results (cf. Chemero, 2009).


activity, or persistent homologies. If the neural manifold hypothesis is supported by the data, then this analysis—i.e., identifying latent dynamics across a state space of recorded neural activity—ought to reveal a topological manifold in the noisy cloud of data points.

There are a number of compelling examples of the successful implementation of the neural manifold hypothesis. One example is research by Gallego and colleagues (2017), who identified neural manifolds for the generation and control of movement in the monkey brain. Another is research by Chaudhuri and colleagues (2019), who identified neural manifolds for sensorimotor activity (i.e., head direction) in the mouse brain. For the sake of exposition, I present an example of the implementation of the neural manifold hypothesis that integrates material from a few different studies.13 In this demonstration, data were obtained from the mouse brain via electrodes (e.g., multielectrode array; Figure 6.2a). Specifically, recordings were obtained from the area associated with the control of head direction (e.g., thalamus; Figure 6.2b; Preston-Ferrer, Coletta, Frey, & Burgalossi, 2016). The raw data can take the form of (top; Figure 6.2d) raster plots, local field potentials, ripple power, and linear track position (Chenani et al., 2019). Next, utilizing the Isomap dimensionality reduction technique (Tenenbaum, de Silva, & Langford, 2000), neural trajectories are plotted along potential latent dimensions and heatmaps can be produced to indicate trial-by-trial variability of those dimensions (bottom, Figure 6.2d; Xia, Marks, Goard, & Wessel, 2021). The Isomap dimensions identified as latent variables are then plotted in a 3-dimensional state space (Figure 6.2e). Next, a spline is fitted along the population activity in the state space corresponding to the direction of neural population activity recorded during the task, namely, head movement (Figure 6.2f). After, analyses are conducted for the presence of persistent homologies to determine the topology of the manifold among the noisier point cloud data (Figure 6.2g; Chaudhuri et al., 2019).

This is when the methodology reveals something especially fascinating. Remember, the aim of this research is to identify the neural populations that play causal and constitutive roles in head movement. The idea is that particular neurons will be active depending on where the mouse is orientated. The neural manifold hypothesis claims that if the population dynamics embody a variable of particular dimension and topology, then those dynamics would be focused on a subspace of corresponding manifold dimension and topology. If head direction is a single variable (i.e., topologically 1-dimensional), then the expectation is that among the high dimensional data there will be a low dimensional spline that is also a single variable (i.e., topologically 1-dimensional). This is exactly what is seen in this research. The mouse's head direction (indicated by a diamond shape-tipped arrow; Figure 6.2a) occurs across a 1-dimensional topology within Euclidean space (Figure 6.2c). So too do the latent variables track along a 1-dimensional topology (Figure 6.2f), which, if straightened out, conforms to a torus (Figure 6.2g). This point cannot be overstated: The topology of the actual mouse's head direction and its causal and constitutive neural population activity are homeomorphic, not just in shape (i.e., torus) but in direction (i.e., the neural population activity along the spline tracks identically with the direction of the mouse's head).
In other words, the mouse’s head direction in real space, neural population activity in the brain, neural population activity in the state space along the spline, and the location along an abstract torus are equivalent.

13 In addition to expository considerations, the combination of multiple research projects is due to an inability to obtain copyright permissions to reproduce figures from the desired studies. C'est la vie.



Figure 6.2 Neural manifold hypothesis methodology. (a) During the task of rotating the mouse's head direction (indicated by diamond shape-tipped arrow), (b) activity from the region of interest (e.g., thalamus) is (d) recorded (e.g., raster plot), (e) dimensionality reduction techniques are applied (e.g., Isomap) to isolate the activity, and then (f) further analyzed to identify the latent variables via a generated spline. (c) The mouse's head direction in Euclidean space tracks with the neural population activity along the spline in the data points within the state space defined by the dimensions of interest.
Source: (a) Modified and reprinted with permission from Preston-Ferrer et al. (2016). CC BY 4.0; (b) Modified and reprinted courtesy of Allen Institute for Brain Science (2004, 2011); (c) Public domain; (d) Top: Modified and reprinted with permission from Chenani et al. (2019). CC BY 4.0, Bottom: Modified and reprinted with permission from Xia et al. (2021). CC BY 4.0; (e) and (f) Modified and reprinted with permission from Xia et al. (2021). CC BY 4.0; (g) Public domain.


In addition to success with head direction, other research has made progress applying versions of the neural manifold hypothesis to a range of phenomena, such as decision making (Nieh et al., 2021), insects (Assisi, Stopfer, Laurent, & Bazhenov, 2007), intrinsic neural activity (Rubin et al., 2019), motor commands (Brennan & Proekt, 2019), neurological disorders (Mitchell-Heggs et al., 2023), olfactory systems (Bathellier, Buhl, Accolla, & Carleton, 2008), reach-and-grasp movements (Abbaspourazad et al., 2021), spatial location (Kang et al., 2021; Yoon et al., 2013), and visual perception (Stringer, Michaelos, Tsyboulski, Lindo, & Pachitariu, 2021).14 These examples motivate the reasonable assumption that such successes will generalize the neural manifold hypothesis to other neural-related behaviors as well.

Hypothesis 4: Body organizes into low-dimensional synergies to generate relevant states. Hypothesis 3 argues that while neural population dynamics produce very high dimensional datasets, their principal structure is governed by much lower dimensional activity, which can be fruitfully assessed via the neural manifold hypothesis. Correspondingly, Hypothesis 4 states that the body organizes into low-dimensional synergies to generate relevant states. Hypothesis 4 has three main parts. First, it engages with the form that the data deluge challenge takes at the body's spatiotemporal scale, namely, the degrees of freedom problem. Second, it addresses this problem by appealing to the concept of synergies, as well as uncontrolled manifold methodology. Third, it provides an account of the body's causal and constitutive contributions to the organism-environment system.

To begin, what is the degrees of freedom problem? In the context of organism-environment system activity, the degrees of freedom problem—also known as Bernstein's problem (Bernstein, 1967) and referred to as the motor equivalence problem (e.g., Kelso et al., 1998)—refers to the idea that for a body (i.e., motor system) to accomplish controlled movements, there are far more degrees of freedom in that system than are needed to successfully execute the action (Latash, 2020; Riley, Richardson, Shockley, & Ramenzoni, 2011; Scholz & Schöner, 1999; Sporns & Edelman, 1993; Turvey, 1990). In other words, if natural selection is not wasteful, then an explanation needs to be provided as to why there are so many redundant degrees of freedom for any given action. There are many intertwined issues that contribute to the degrees of freedom problem, not least of which is the identification of the relevant contributing variables. For example, when explaining how a blacksmith controls their movement while swinging a hammer, do the "degrees of freedom" include the nervous system (e.g., only motor neurons), muscles, joints, movement trajectories, movement velocities, and so on (Bernstein, 1967; Jordan & Wolpert, 2000; Latash, Levin, Scholz, & Schöner, 2010; Turvey, 1990)? Additionally, there are issues such as whether motor control is properly explained as a computational and representational activity or in noncomputational and nonrepresentational terms (Favela, 2021b).

There are a number of broad classes of motor control theories, such as those that emphasize reflex, motor programming, and dynamic action (Cano-De-La-Cuerda et al., 2015). By way of some of these theories, specific solutions to the degrees of freedom problem have been proposed. One is the equilibrium-point hypothesis, in which the brain indirectly

14 Of course, this is not to mention the even wider application of topological methods in general (e.g., Curto, 2017).


controls motor actions via modifying neurophysiological states that influence but are independent from biomechanical variables (e.g., Feldman, 2019). Another is optimal control theory, where tasks specify "costs" in the form of accuracy and effort, and movements are selected by the motor system to minimize those costs via feedback (e.g., Todorov & Jordan, 2002).

Hypothesis 4 of NExT is informed by a synergies-based solution to the degrees of freedom problem. So what is a synergy? In terms of bodily control of action, synergies are functional assemblies of parts (e.g., neurons, muscles, tendons, etc.) that are temporally constrained to act as a single unit (Kelso, 2009). The body can be understood as a synergy when its various parts are organized into a functional unit, where "functional" is understood in terms of tasks conducted by the unit (Kelso, 2009; Kugler, Kelso, & Turvey, 1980). For example, when the blacksmith is wielding their hammer, the particular assemblage of their skeleton and muscles, along with the tool, is a synergy in that it is a functional unit aimed at accurately and forcefully striking a piece of hot iron.15 It is because the body is a soft-assembled system that it can reconfigure (to varying degrees) into synergies based on task requirements. Systems are "soft assembled" when their material constitution is not rigidly constrained so they can configure and reconfigure themselves into functional coordinative structures—or synergies—in a context-sensitive manner (Favela, 2019; Haken, 1996; Thelen & Smith, 2006; Turvey, 2007). This description of a synergy may lead one to mistakenly think that any isolatable system that is conducting a task is a "synergy." While such a labeling can be done loosely, it would, strictly speaking, be an incorrect application of the term. Properly understood, the concept of a synergy is intended to explain how natural, biological systems come to be successfully selected for over evolutionary timescales and during the timescales of task execution, while being comprised of seemingly incalculable numbers of parts and seemingly limitless variables to consider (cf. Kelso, 2009, 2021; Kelso & Haken, 1995). When understood with that purpose in mind, synergies provide an account of the body's role in organism-environment systems and, moreover, a solution to the degrees of freedom problem.

It was discussed earlier in this chapter that NExT is to be understood as a complexity science. Accordingly, organism-environment systems are to be understood as complex systems. As discussed in Chapter 5, the four core characteristics of complex systems are emergence, nonlinearity, self-organization, and universality. Systems theory, nonlinear dynamical systems theory (NDST), and synergetics were presented as providing much of the foundation of complexity science, particularly methods for empirically investigating complex systems.16 Synergies can be treated as being part of complexity science because they exhibit the four characteristics and are fruitfully investigated by the aforementioned disciplines' methods. Thus, if a body is a synergy, then it too exhibits those characteristics and can be studied by those methods. As the literature on synergies demonstrates, this is indeed the case.
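Stepping back to the degrees-of-freedom problem that synergies are meant to address, a toy calculation makes the redundancy vivid. The example below is mine, with arbitrary link lengths, target, and tolerance, and assumes only NumPy: a planar arm with three joints admits a whole family of joint configurations that all place its endpoint at essentially the same location.

```python
# Toy illustration of motor redundancy (Bernstein's problem): a planar arm with
# three joints (arbitrary unit link lengths) has many different joint
# configurations that place its endpoint at (approximately) the same target.
import numpy as np

rng = np.random.default_rng(0)
link_lengths = np.array([1.0, 1.0, 1.0])
target = np.array([1.5, 1.0])   # an arbitrary reachable endpoint
tolerance = 0.05                # how close counts as "the same" endpoint

# Sample many random joint configurations and compute endpoints (forward kinematics)
joint_angles = rng.uniform(-np.pi, np.pi, size=(200_000, 3))
absolute_angles = np.cumsum(joint_angles, axis=1)          # segment orientations
x = (link_lengths * np.cos(absolute_angles)).sum(axis=1)
y = (link_lengths * np.sin(absolute_angles)).sum(axis=1)
distance_to_target = np.hypot(x - target[0], y - target[1])

# Keep the configurations whose endpoint lands at the target
solutions = joint_angles[distance_to_target < tolerance]
print(f"configurations reaching the same endpoint (within {tolerance}): {len(solutions)}")
print("range of each joint angle across those solutions (radians):",
      np.round(np.ptp(solutions, axis=0), 2))
```

On the synergy view developed below, such families of equivalent solutions are not wasteful redundancy to be eliminated but the raw material that task-specific functional units exploit.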

15 The idea that a biological system (e.g., human) can incorporate a nonbiological object (e.g., hammer) into a single synergy raises a number of interesting issues related to extended cognition (e.g., Clark & Chalmers, 1998; Favela, Amon, Lobo, & Chemero, 2021) and distributed cognition (e.g., Amon & Favela, 2019; Hutchins, 1995). Such issues are beyond the scope of the current discussion.
16 Keep in mind that "synergetics" and "synergies" are not equivalent concepts in the context of this book. As discussed in Chapter 5, synergetics is an investigative framework. As discussed in the current chapter, synergies are functional assemblies of parts.


A classic example of the body as a synergy—namely, as a temporally coordinating functional unit—comes from research on speech production (Kelso, Tuller, Vatikiotis-Bateson, & Fowler, 1984). In a series of experiments by Kelso and colleagues (1984), it was demonstrated that perturbing one part of the anatomy involved in speech (i.e., forcefully displacing the subject's jaw with a prosthesis17) did not result in the inability to produce specific speech sounds (e.g., "bæb" and "bæz"). Instead, other parts of the relevant anatomy (e.g., lower lip and tongue) exhibited reciprocal compensation in order to continue to generate the desired speech sounds. Synergies have been observed experimentally in phenomena across a range of spatiotemporal scales, such as among brain regions (e.g., Kelso & Tognoli, 2007), brain-to-brain coupling (e.g., Hasson, Ghazanfar, Galantucci, Garrod, & Keysers, 2012), finger force stabilization (Grover et al., 2022), interpersonal coordination (Riley et al., 2011), and posture (Marsden, Merton, & Morton, 1983), just to name a few.

Returning to the case of speech production, this capability can be understood as a synergy that is part of a complex system in that: It is emergent (i.e., as a functional unit it is irreducible to what its parts do in isolation), nonlinear (e.g., especially regarding the importance of historical variation), and self-organized (e.g., its capabilities result from changing task constraints and not prespecified [e.g., genetic] instruction), and it instantiates universal principles (e.g., critical phase transitions among states18). Additionally, it can be fruitfully investigated via methods from systems theory, NDST, and, especially, synergetics.

It is not only the similarity in spelling: synergies are appropriately investigated with the methodological tools offered by synergetics. Remember the discussion in Chapter 5 concerning order and control parameters. The order parameter is the collective variable that describes the macroscopic phenomenon under investigation. Control parameters are those variables that guide the system's dynamics (Haken, 2016). In that way, the functional unit can be modeled and explained in terms of its being an order parameter that exhibits circular causal relationships with its control parameters. Before addressing the degrees of freedom problem by way of synergies understood in terms of order and control parameters, it is necessary that two other features of synergies be discussed: dimensional compression and reciprocal compensation.

Dimensional compression refers to the idea that the potential variables that can contribute to a particular synergy are high dimensional. But when they begin to couple into a functional unit, then they become low dimensional (cf. Kay, 1988; Riley et al., 2011). For the sake of consistency in terminology and with the aim of presenting a unified framework in the current work, dimensional compression can be understood as falling under the umbrella of dimensionality reduction, which has been discussed in this and previous chapters. Reciprocal compensation refers to the idea that in a synergy each component responds to changes in other components (Latash, 2008). This feature can be understood in relation to interaction dominance, especially the idea that perturbations to one part of the system do not remain local but reverberate throughout (see Chapter 5; see Figure 5.5). These two features are not limited to providing additional conceptual understanding of synergies.
What is more, they connect with a quantitative methodology

17 Good luck getting that through Institutional Review Boards (IRB) today! The 1980s sure were a wondrous time to experiment on human subjects.
18 As I am unaware of specific research demonstrating critical phase transitions in perturbed speech production, this point is admittedly speculative. Though it is reasonable to expect that other universal classes (e.g., catastrophe flags like hysteresis; see Chapter 5) could be exhibited in such experimental conditions as well.


Figure 6.3 Uncontrolled manifold (UCM). State space depicting the task of constant output produced by two effectors, E1 and E2. The slanted solid line is the UCM for the task, indicating compensatory activity indicative of a synergy. The dashed line perpendicular to the UCM indicates uncompensatory activity indicative of a non-synergy. Data distributions across repetitive trials fall within the circle and the ellipse. Trials that fall within the ellipse correspond to more variance parallel to the UCM (V_GOOD; i.e., "good variability") as compared to the variance perpendicular to it (V_BAD; i.e., "bad variability"), while the circle has equal amounts of variance in both directions. Data within the ellipse depict an inaccurate synergy and the circle depicts an accurate non-synergy. Source: Image inspired by Latash, Gorniak, and Zatsiorsky (2008).

for empirically investigating synergies, specifically, the uncontrolled manifold. It is via this method that synergies can address the degrees of freedom problem. The uncontrolled manifold (UCM; Scholz & Schöner, 1999) is a methodology for assessing the variability of movements with regard to tasks, or the aim of functional units.19 The UCM is an analysis that defines a particular configuration space that is populated by variables hypothesized to capture a particular movement. The activity that defines such tasks can be understood as the order parameter. UCM treats motor control as being fundamentally about the stabilization of performance variable values. These can be understood as the control parameters. Those values are quantified in terms of their being compensatory and uncompensatory with regard to the task. The methodology generates a state space where a manifold depicts the variables contributing to the task and quantifies the amount of constancy among those variables (Figure 6.3). The result is a "synergy index" (Latash et al., 2010), where variance along the manifold reflects compensatory variables when they maintain task performance (i.e., "good variability") and variance perpendicular to the manifold reflects

19 Apologies to readers for presenting two important ideas that are different but include the same word: neural manifold hypothesis and uncontrolled manifold. While similarities between the two uses of the word 'manifold' can be identified—for example, a geometric space comprised of points—they should be understood in the current context as quite different. As a reminder, the neural manifold hypothesis is the claim that very high dimensional datasets from recordings of neural population dynamics have much lower dimensional manifolds that capture their principal structure that generates specific behaviors. The uncontrolled manifold is an analysis space defined within a particular configuration and is populated by variables hypothesized to capture a particular movement (Scholz & Schöner, 1999).


uncompensatory variables when they result in loss of performance (i.e., "bad variability"; Latash, 2012; Scholz & Schöner, 1999).

A classic example of the application of UCM is in experiments involving two effectors tasked with producing particular amounts of magnitude (e.g., Latash, Scholz, Danion, & Schöner, 2001). In these experiments, the two effectors (E1 and E2) are two fingers, and the task is to produce a total stabilizing force on a narrow support surface. According to the UCM hypothesis, if the effectors are functioning as a single synergy, then an increase in applied force by one effector should result in a compensatory decrease in applied force by the other effector. This hypothesis has been supported by these experiments (e.g., Latash et al., 2001), as well as others involving neurological disorders of movement (e.g., Solnik, Furmanek, & Piscitelli, 2020), multidigit pressing, and postural sway (e.g., Furmanek et al., 2018).

UCM solves the degrees of freedom problem by reframing it: it is not an issue of explaining the apparent unnecessary redundancy of variables contributing to any action (Bernstein, 1967). Instead, through the lens of UCM, redundant degrees of freedom ought to be understood as being useful or vital for motor control. Such abundance is assessed via UCM as "good variance" when it has no effect upon task performance (Latash, 2012). In addition, such variance may underlie the ability of systems to compensate for perturbations and shift to new conditions (Latash, 2012). Thus, UCM provides a methodology for determining if a system is a synergy and, if it is, a way to quantify the strength of the synergy.

How does the concept of synergies and the UCM methodology provide an account of the body's causal and constitutive contributions to the organism-environment system? NExT claims that a body is a soft-assembled system that can self-organize into synergies. As a synergy, the body is partially caused and constituted by neural population dynamics across a flexible range of variance that is constrained only by their ability to facilitate successful task completion. This is consistent with a selectionist view of evolution (see the earlier discussion of Neural Darwinism) and the degenerate nature of neural networks. So too does the environment contribute to the body's performance in flexible ways (e.g., adapting to new forms of ecological information to guide action). UCM methodology allows investigators to empirically assess the presence of and degree to which a system is a synergy. This allows for defining synergies without adhering to a priori definitions about where the boundaries of an intelligent system are drawn. Variables that contribute to synergies can be located in the brain, arm, or environment, as long as they facilitate successful task completion. Accordingly, bodies are properly understood as being part of adaptive, self-organizing organism-environment systems.

Hypothesis 5: Mind fundamentally emerges at low-dimensional scales of organism(neural, body)-environment activity. I have frontloaded much of the background information, definitions of concepts, and descriptions of methodology needed to understand NExT in Hypotheses 1 through 4. I am now poised to see the returns on that investment in Hypotheses 5 and 6.
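Before doing so, the UCM logic for the two-effector force task just described can be written out in a few lines. The sketch below uses simulated finger forces with arbitrary noise levels; it is not a reanalysis of the cited experiments, and only NumPy is assumed. Trial-to-trial deviations are projected onto the direction that leaves total force unchanged (the UCM) and onto the orthogonal direction that changes it, and a per-dimension synergy index compares the two variances.

```python
# Minimal sketch of an uncontrolled-manifold (UCM) analysis for a two-effector
# force task (simulated data, arbitrary noise levels). Task: F1 + F2 = target.
# Deviations along (1, -1)/sqrt(2) leave total force unchanged ("good"
# variability, on the UCM); deviations along (1, 1)/sqrt(2) change it ("bad").
import numpy as np

rng = np.random.default_rng(0)
n_trials, target_force = 200, 20.0

# Simulate a synergy: effectors covary negatively so total force stays near target
shared = rng.normal(0.0, 1.5, n_trials)             # compensated fluctuation
independent = rng.normal(0.0, 0.3, (n_trials, 2))   # small uncompensated noise
f1 = target_force / 2 + shared + independent[:, 0]
f2 = target_force / 2 - shared + independent[:, 1]
forces = np.column_stack([f1, f2])

# Directions in effector space: along the UCM (total force constant) and orthogonal
ucm_direction = np.array([1.0, -1.0]) / np.sqrt(2)
orthogonal_direction = np.array([1.0, 1.0]) / np.sqrt(2)

deviations = forces - forces.mean(axis=0)
v_ucm = np.var(deviations @ ucm_direction)          # variance per dimension on the UCM
v_ort = np.var(deviations @ orthogonal_direction)   # variance per dimension off the UCM

# A common per-dimension synergy index: positive when "good" variability dominates
delta_v = (v_ucm - v_ort) / ((v_ucm + v_ort) / 2)
print(f"V_UCM = {v_ucm:.2f}, V_ORT = {v_ort:.2f}, synergy index = {delta_v:.2f}")
```

Setting the shared, compensated fluctuation to zero gives roughly equal variance in the two directions and a synergy index near zero (the non-synergy case sketched in Figure 6.3), while the simulated negative covariation between the two fingers yields an index near its maximum.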
While Hypothesis 1 claims that investigating ecological psychology and neuroscience's overlapping phenomena of interest requires privileging the spatiotemporal scale of organism-environment systems, doing so does not mean diminishing or excluding neural or body scale contributions. Hypotheses 2 and 3 claim that neural population dynamics play significant causal and constitutive roles in body activity, with the former's principal structure being low dimensional


(cf. neural manifold hypothesis). Hypothesis 4 claims that the body's contribution to organism-environment scale activity emerges via synergies, whose principal structure is also low dimensional (cf. UCM). Because organism-environment systems are complex systems, circular causation plays a major role in facilitating self-organization within neural populations, within body synergies, and between the brain and body.

Following from this line of thought, what does it mean to say that the mind fundamentally emerges at low-dimensional scales of organism(neural, body)-environment activity? NExT claims that mind is a multiscale phenomenon that emerges from systematic relationships among brain, body, and world. As previously discussed, each spatiotemporal scale is high dimensional. That is to say, data from neural population activity and body synergies are straight away high dimensional, which leads to the data deluge challenge of how to understand incredible amounts of data and numerous variables. Be that as it may, the principal structure of each is low dimensional. This is motivated by the neural manifold hypothesis regarding neural populations and the uncontrolled manifold regarding body synergies. From those methods, targets of investigative interest at each spatiotemporal scale can be modeled and depicted via phase space plots that present the state space of empirically-verified activity (e.g., Figures 6.2f and 6.3). As neural populations and body synergies are dynamic systems, these data can take the form of dynamical landscapes (cf. Libedinsky, 2023; Morita & Suemitsu, 2002; Vohryzek, Cabral, Vuust, Deco, & Kringelbach, 2022; Zhang, Sun, & Saggar, 2022). A dynamical landscape is a qualitative description of a system's states based on quantitative data from characteristics like directionality and strength (Figure 6.4). While dynamical landscapes can occupy a state space of innumerable dimensions, dimensionality reduction methods (e.g., Isomap) help bring that number down to more epistemically manageable levels (e.g., two or three dimensions). The purpose of such methods is not merely to bring the number of dimensions down but to identify the principal structure assumed to be within the clouds of noisy data (i.e., latent variables).

As discussed in the earlier section on Hypothesis 3, large data sets from neural population activity recorded during tasks like head movement have successfully revealed a principal structure across only three dimensions (Figure 6.2d, e, f). The left state space in Figure 6.5 depicts an idealized version of the activity of three dimensions in the form of a dynamical landscape. Similarly, as discussed in the previous section on Hypothesis 4, large data sets from body-scaled activity recorded during tasks, like force stabilization via effectors, have successfully revealed a principal structure across two dimensions (Figure 6.3). The center state space in Figure 6.5 depicts an idealized version of the activity of three body-scaled dimensions in the form of a dynamical landscape. The lines with arrows between the neural population landscape and the behavioral landscape depict relationships among features of low-dimensional manifolds that constrain and are constrained by each other's manifolds. In this way, the brain-body system exhibits circular causation, where changes in the dimensions of one landscape will have an effect on the other landscape's dimensions. Hypothesis 5 takes this line of reasoning a step further.
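Before taking that step, the idea of a dynamical landscape and its bifurcation diagram can be made concrete with a toy one-dimensional system. This is my illustrative model, not the one behind Figure 6.4, and only NumPy is assumed: valleys of a potential are attractors, peaks are repellers, and sweeping a control parameter creates and destroys attractors.

```python
# Toy dynamical landscape: x' = -dV/dx with V(x) = x**4/4 - x**2/2 - c*x.
# Minima of V are attractors (stable fixed points), maxima are repellers
# (unstable fixed points); sweeping the control parameter c traces a simple
# bifurcation diagram. Illustrative model only, not the one in Figure 6.4.
import numpy as np

def fixed_points(c):
    """Roots of -dV/dx = -x**3 + x + c = 0, split into stable and unstable."""
    roots = np.roots([-1.0, 0.0, 1.0, c])
    roots = roots[np.abs(roots.imag) < 1e-9].real
    stability = 1.0 - 3.0 * roots**2        # derivative of (-x**3 + x + c)
    stable = roots[stability < 0]            # attractors (valleys of V)
    unstable = roots[stability > 0]          # repellers (peaks of V)
    return stable, unstable

# Sweep the control parameter and report how the landscape's attractors change
for c in np.linspace(-0.6, 0.6, 7):
    stable, unstable = fixed_points(c)
    print(f"c = {c:+.1f}: attractors at {np.round(np.sort(stable), 2)}, "
          f"repellers at {np.round(np.sort(unstable), 2)}")
```

The saddle-node events in the sweep, where an attractor-repeller pair appears or vanishes, are exactly the kind of qualitative change that a bifurcation diagram such as Figure 6.4e records.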
As brain and body spatiotemporal scale activity has low-dimensional structure within high-dimensional data sets, so too does the environment. That is to say, the organism's environment—i.e., "the surrounding-world of the animal" (von Uexküll, 1934, p. 79) or umwelt—can be understood as high dimensional with a principal structure that is low dimensional. Here, "high dimensional" refers to the flood of potential stimulation constituting the world—e.g., light, odors,


Figure 6.4 Dynamical landscapes and bifurcation diagram of a single brain region. (a–c) Single brain region activity governed by a dynamical landscape: valleys are attractors (i.e., stable states), and peaks are repellers (i.e., unstable states). (d) The 3-dimensional topology of the region of the dynamical landscape can be altered by a control parameter (i.e., external parameter; e.g., input from other regions). Dynamical landscape variation due to the control parameter can be reduced to 2 dimensions and projected onto the bottom plane. (e) Bifurcation diagram depicts attractor and repeller activity due to continuous control parameter changes: the top and bottom lines (red) are attractors, with phase transitions occurring at critical values (±1). Source: Modified and reproduced with permission from Zhang et al. (2022). CC BY 4.0.

rocks, temperature, trees, etc.—and "low dimensional" refers to ecological information.20 Remember, as discussed in Chapter 2, ecological information are distributions of energy that surround an organism and are those patterns that uniquely specify properties of the world (Gibson, 1986/2015; Michaels & Carello, 1981). They are higher-order properties in part because organisms perceive them in their surrounding space over time. The relational nature between environmental energies and organisms that can or cannot perceive those energies is why ecological information specifies meaningful features of the world.
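A classic concrete case of such a low-dimensional informational variable, standard in the ecological literature though not the example pursued in this chapter, is tau: the ratio of an object's optical angle to its rate of expansion, which for a constant-velocity approach closely approximates (and in the small-angle limit equals) the remaining time-to-contact. The sketch below, with arbitrary sizes and speeds and assuming only NumPy, simply checks that relationship numerically.

```python
# Numerical check of a classic piece of ecological information: for an observer
# approaching a surface at constant speed, tau = (optical angle)/(its rate of
# change) closely approximates the remaining time-to-contact, without needing
# distance or speed separately. Idealized simulation; values are arbitrary.
import numpy as np

object_size = 0.8        # meters (arbitrary)
speed = 2.0              # meters per second (arbitrary, constant approach)
initial_distance = 10.0  # meters
dt = 0.001

t = np.arange(0.0, 4.0, dt)
distance = initial_distance - speed * t                   # closing distance
optical_angle = 2.0 * np.arctan(object_size / (2.0 * distance))
expansion_rate = np.gradient(optical_angle, dt)           # d(angle)/dt

tau = optical_angle / expansion_rate                      # the higher-order variable
true_time_to_contact = distance / speed

for i in (0, 2000, 3500):
    print(f"t = {t[i]:.1f}s  tau = {tau[i]:.2f}s  "
          f"true time-to-contact = {true_time_to_contact[i]:.2f}s")
```

The organism does not need to recover distance and speed separately from the flood of stimulation; the single higher-order ratio already specifies the behaviorally relevant property.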

20 It would be reasonable at first pass to compare this discussion of the environment (i.e., "flood of potential stimulation constituting the world") to Kant's ideas about human perception and the unstructured manifold of sense data (Kant, 1996). However, it would be quite a stretch to say there are any deep similarities between Gibsonian ecological information (e.g., direct perception) and Kant's unstructured manifold (i.e., indirect perception; cf. Käufer & Chemero, 2021).



Figure 6.5 Systematic relationships among brain-body-environment systems during an affordance perception-action event. Each state space depicts a 3-dimensional dynamical landscape of features that govern that spatiotemporal scale's activity after low-dimensional principal structure is identified within high-dimensional data. The left state space depicts trajectories based on three neuronal population dimensions most relevant to the target phenomenon of interest. The middle state space depicts trajectories based on three behavioral dimensions that capture the most relevant contributions to the affordance. The right state space depicts structural dimensions of ecological information that capture the most relevant contributions to the affordance. Lines with arrows depict relationships among features of state spaces that constrain and are constrained by activity of the other state spaces.
Source: Figure inspired by Dodel, Tognoli, and Kelso (2020) and Jazayeri and Afraz (2017).


Specification is a relationship between an organism and the actions it can perform in an environment. Those actions are the perceivable opportunities for behavior, or affordances. The right state space in Figure 6.5 depicts structural dimensions of ecological information that capture the most relevant features of the environment for a given task. The area labeled "affordances" between the center and right state spaces in Figure 6.5 identifies where affordances emerge in the organism-environment system. Specifically, it is by way of the systematic relationships between the body and environment that affordances exist.21 The affordance pass-through-able does not exist in the environment alone (e.g., a door), nor does it exist in the organism (e.g., shoulder width). In addition, it does not exist in neural populations. While neural populations are significant causal and constitutive aspects of the organism-environment system during, for example, perception-action events, affordances are not represented in the mind (ecological psychology primary principle 1: perception is direct), nor do they obtain their meaning from brain states (e.g., memories). Affordances are meaningful by way of the body-environment relationship.

The investigative strategy depicted in Figure 6.5 is becoming increasingly utilized. That is to say: obtaining high-dimensional data sets, utilizing dimensionality reduction to identify low-dimensional structure in the form of fewer variables, quantitatively depicting low-dimensional data via state spaces, and evaluating significant and systematic relationships among multiple state spaces. Examples include the use of perturbation tools like optogenetics to identify correlational correspondences between intrinsic neural manifolds and measured behavioral manifolds (Jazayeri & Afraz, 2017), identifying continuous flows across neural manifolds (Lou et al., 2020), and aligning neural manifolds from recordings across time, subsets of neurons, and across individuals (Dabagia, Kording, & Dyer, 2022; McCullough & Goodhill, 2021; Schneider, Lee, & Mathis, 2023).22

NDST offers additional methods to quantify systematic relationships among scales, such as complexity matching. Complexity matching can assess relationships among various distributions of data, for example, among spatial-spatial, temporal-temporal, and spatial-temporal distributions. Complexity matching is especially appropriate for organism-environment systems due to its ability to account for nonlinear phenomena (Amon, 2016; Fine, Likens, Amazeen, & Amazeen, 2015; West, Geneston, & Grigolini, 2008). Forms of scale invariance and self-similarity (e.g., fractals; see Chapter 5) are kinds of nonlinearities common to such systems and are explicitly analyzable by complexity matching. Both monofractal and multifractal analyses provide a scaling index that characterizes the overall power-law dynamics across scales (i.e., alpha, Hölder, and Hurst exponents). The correlation between spatial and temporal scaling exponents reveals the extent to which structures exhibit complexity matching, which can be understood in terms of high degrees of information exchange across spatial and temporal scales (Mafahim, Lambert, Zare, & Grigolini, 2015; Marmelat & Delignières, 2012).
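As one simplified entry point into these analyses, the sketch below estimates a scaling exponent for each of two simulated time series via detrended fluctuation analysis (DFA), one common way of obtaining the exponents mentioned above; comparing or correlating such exponents across systems or scales is then one way (though not the only way) complexity matching is operationalized. The signals and parameters are arbitrary, and only NumPy is assumed.

```python
# Simplified sketch of one ingredient of complexity matching: estimate a scaling
# exponent for each time series via detrended fluctuation analysis (DFA), then
# compare exponents. Simulated signals; parameters are arbitrary.
import numpy as np

def dfa_exponent(signal, window_sizes):
    """Return the DFA scaling exponent (slope of log F(n) versus log n)."""
    profile = np.cumsum(signal - np.mean(signal))     # integrated signal
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        rms = []
        for w in range(n_windows):
            segment = profile[w * n:(w + 1) * n]
            x = np.arange(n)
            trend = np.polyval(np.polyfit(x, segment, 1), x)   # local linear detrend
            rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(0)
n_samples = 10_000
white_noise = rng.normal(size=n_samples)              # expected exponent near 0.5
random_walk = np.cumsum(rng.normal(size=n_samples))   # expected exponent near 1.5

windows = np.unique(np.logspace(2, 3, 10).astype(int))  # window sizes 100..1000
print(f"alpha (white noise): {dfa_exponent(white_noise, windows):.2f}")
print(f"alpha (random walk): {dfa_exponent(random_walk, windows):.2f}")
```

White noise should come out near 0.5 and the integrated (random-walk) series near 1.5, the two textbook anchor points for interpreting such exponents; in a complexity-matching analysis proper, exponents estimated in this way (or their multifractal analogues) for, say, a behavioral series and an environmental series would then be compared across conditions.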

21 In this book, I do not take a strong stance or provide a defense of a particular theory of affordances. Still, I am sympathetic to those that describe them as emergent (e.g., Stoffregen, 2003) features of organism-environment interactions and not as properties but as relations between organisms and environments (e.g., Chemero, 2009, pp. 139–145).
22 It is worth noting that the general approach on offer by NExT, especially the part concerned with identifying significant relationships among brain, body, and environment, is friendly to other approaches as well. In particular, NExT is bolstered by and may provide support for work such as Spivey's continuity of mind thesis (Spivey, 2007).


In essence, to say that the mind fundamentally emerges at low-dimensional scales of organism(neural, body)-environment activity is to observe a set of metaphysical and epistemological claims. Metaphysically, it is to maintain that mind is caused and constituted by active features of an organism—namely, its body and, particularly, its nervous system—and spatiotemporal structures of the environment. Epistemologically, it is to maintain that complete explanations and understanding of mind require concepts, methods, and theories that do not privilege "lower" scales (e.g., neuronal activity) or "higher" scales (e.g., ambient arrays) but traverse and integrate them. Taken together, the metaphysical and epistemological aspects of NExT contribute to its role as an investigative framework. While various aspects of Hypotheses 1 through 5 can be appealed to as guides to discovery in their own right, those are more properly understood as auxiliary hypotheses that support the overarching guide provided by Hypothesis 6.

Hypothesis 6: The NeuroEcological Nexus Theory explains the architecture of the mind by means of a finite set of universal principles. In Chapter 5, the idea of universality was introduced as one of the key features of complex systems needed to understand mind from a complexity science investigative framework that integrates ecological psychology and neuroscience. Universality refers to the fact that there are patterns of activity and organization in nature that reoccur via diverse substrates and in various contexts. Two examples of universality classes were discussed: catastrophe flags and self-organized criticality (SOC). One way in which they are universal is in light of the fact that both are exhibited in biological and nonbiological systems. For example, catastrophe flags like hysteresis are observed in magnets and mammalian visual systems, and SOC is observed in sandpiles and neural populations.

From Darwinian natural selection to the Standard Model of particle physics, science has often been guided by the search for and application of universal forms of explanation and understanding. The same is true for ecological psychology, which can historically be understood as primarily appealing to affordances as the source of hypothesis generation and explanation. Neuroscience is no different, with explicit calls for a shared theoretical language for describing the brain across scales (e.g., He et al., 2013) and the development of generalizable and universal principles (e.g., Wang et al., 2020). Given that as a field neuroscience is far more diverse than ecological psychology, it is not surprising to see appeals to a wider range of potential universal principles, such as Bayesianism (Doya, Ishii, Pouget, & Rao, 2007), coordination dynamics (Kelso, 2021), the free-energy principle (Friston, 2010), network theory (van den Heuvel, Bullmore, & Sporns, 2016), and neural reuse (Anderson, 2014).

Does universality play any particular role in NExT beyond being an examined feature of complex systems? With the aim of unifying ecological psychology and neuroscience, Hypothesis 6 of NExT claims that the architecture of the mind can be explained by means of a finite set of universal principles. To be clear, Hypothesis 6 is not asserting that a single universal principle will account for all phenomena of investigative interest.
Instead, it claims that—as a testable hypothesis—there will be a limited number of universal principles that integrate in systematic ways to explain the architecture of organism-environment system minds. What would such integration look like? To begin, it is becoming increasingly evident that universal principles of organization are being identified at each scale of brain, body, and environment. The abundance of empirical evidence from decades of ecological psychology research on affordances in animal-environment systems demonstrates the lawfulness—or "universality"—of


Figure 6.6 Invariant manifolds of neural dynamics across subjects. (a) Manifold constructed from recordings of 107 neurons of a single roundworm. Colors are coded to locomotor behavior snapshots (see bottom of figure; for full color see Figure 6.4 in Brennan & Proekt, 2019). Arrows indicate temporal evolution. Dimensions of principal component analysis (DPCA) depict neural dynamics for different locomotor activity. (b) Manifold constructed from recordings of all roundworms. (c) Lines depict data from a single roundworm projected onto the manifold constructed from four other roundworms. The two projections (3-dimensional on left and 2-dimensional on right) emphasize data and manifold similarities. Source: Modified and reproduced with permission from Brennan and Proekt (2019). CC BY 4.0.

perception-action capabilities grounded in systematic relationships among bodily characteristics (i.e., body-scaled and action-scaled) and ecological information (for a very small sample, see discussion and references in Blau & Wagman, 2023; Chemero, 2009; Turvey, 2019; Wagman & Blau, 2020). For nearly 40 years, the Haken-Kelso-Bunz (HKB) model has served as one exemplary form of universality that explains the lawfulness of coordination dynamics among body parts, humans, and humans with nonhuman animals and with tools (for review see Kelso, 2021). In more recent years, improved technology (i.e., recording and analyses) has facilitated empirical research demonstrating universality in brain structure and function. For example, Ódor, Gastner, Kelling, and Deco (2021) have demonstrated universality in both structure (i.e., network topology) and function (i.e., critical dynamics) in large-scale human brain connectomes. Brennan and Proekt (2019) demonstrated universal dynamics for motor control in low-dimensional manifolds of whole nervous system activity both within and between subjects of roundworm C. elegans (Figure 6.6). Safaie et al. (2022) demonstrated similar conserved (i.e., universal) neuronal dynamics in low-dimensional manifolds within and between subjects of mice and within and between subjects of monkeys.
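The HKB model just mentioned is compact enough to state and simulate directly. In its standard form (without symmetry breaking or noise), the relative phase φ between two rhythmic components evolves as dφ/dt = -a sin φ - 2b sin 2φ. The sketch below, with arbitrary parameter values and assuming only NumPy, integrates that equation to show the model's signature prediction: the anti-phase pattern is stable when b/a is large and disappears when b/a falls below a critical value.

```python
# Standard HKB equation for relative phase: dphi/dt = -a*sin(phi) - 2b*sin(2*phi).
# For b/a > 0.25 both in-phase (phi = 0) and anti-phase (phi = pi) are stable;
# below that ratio the anti-phase pattern disappears. Parameter values arbitrary.
import numpy as np

def settle(phi0, a, b, dt=0.01, steps=20_000):
    """Euler-integrate the HKB equation from phi0 and return the final phase."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi))
    return np.mod(phi, 2.0 * np.pi)

a = 1.0
start = np.pi - 0.3   # start near (but not exactly at) anti-phase

for b in (0.6, 0.1):  # b/a = 0.6 (bistable) versus b/a = 0.1 (only in-phase stable)
    final = settle(start, a, b)
    print(f"b/a = {b/a:.2f}: phase started near pi, settled at {final:.2f} rad")
```

The extended HKB model cited in the text adds symmetry-breaking and stochastic terms, but the qualitative story, an attractor lost at a critical parameter value, is the same.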


The kind of integration among universal principles would, in part, resemble the framework depicted in Figure 6.5 to formalize relationships among brain-body-environment systems during an affordance perception-action event. First, it is reasonable that the dynamical landscape depicted in the state space of neural population activity (Figure 6.5, left) could be explained as a series of phase transitions guided by SOC. Accordingly, it is possible that criticality is the universal class of brain activity (e.g., Beggs, 2022). Second, it is reasonable that the dynamical landscape depicted in the state space of behavioral activity (Figure 6.5, center) could be explained via coordination dynamics. Accordingly, it is possible that coordination dynamics along the lines of the extended HKB model is the universal class of body-scale activity (e.g., Kelso, 2021). Third, it is reasonable that the dynamical landscape depicted in the state space of ecological information (Figure 6.5, right) could be explained as a global array of higher-ordered patterns of environmental energy (e.g., Stoffregen, Mantel, & Bardy, 2017). Accordingly, it is possible that laws of ecological physics/geometry account for the structure of energy arrays (cf. Blau & Wagman, 2023).

But what prevents such an approach from being a mere jumble of random ideas? Where some may see a disorganized mess in the previous paragraph,23 NExT treats the situation of explaining and understanding organism-environment systems as necessitating a pluralism of universal classes. Even if this explanatory pluralism expects a single universal class at each spatiotemporal scale, what is applicable as a universal class at one scale may not be fruitfully applied at others (cf. Favela & Chemero, 2021, 2023). With that said, NExT claims that while there may be several universal classes for different spatiotemporal scales (i.e., those operating at the scales of neuronal activity, body movement, etc.), it is reasonable to expect a finite set of classes within scales and systematic relationships between scales. For example, if criticality is the universal principle of brains and coordination dynamics that of bodies, then there will be systematic relationships among brain (criticality) and body (coordination dynamics) states (see arrows relating state spaces in Figure 6.5). Thus, while NExT is comfortable with pluralism, it does not necessarily embrace a promiscuous pluralism.24 Other than hypothesizing that there will be at least one universal class or principle that contributes to explanations of organism-environment systems, NExT does not have an in principle reason to expect any particular number of universal classes or principles. It is an empirical issue how many will be needed to properly explain and understand targets of investigative interest.
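As a final, deliberately toy-level illustration of what a universality-class claim buys, consider criticality. The branching-process sketch below is not a model of any dataset discussed here and assumes only NumPy; it shows the qualitative signature usually cited for neuronal avalanches, namely that event sizes become heavy-tailed (approximately power-law) only at the critical branching ratio.

```python
# Toy criticality demo: a branching process in which each active unit triggers
# a Poisson-distributed number of units at the next step. At the critical
# branching ratio (1.0), avalanche sizes become heavy-tailed, the signature
# usually cited for neuronal avalanches. Toy model only.
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(branching_ratio, max_size=100_000):
    """Run one avalanche and return its total number of activations (capped)."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(branching_ratio * active)
        size += active
    return size

for sigma in (0.5, 1.0):   # subcritical versus critical branching ratio
    sizes = np.array([avalanche_size(sigma) for _ in range(20_000)])
    print(f"branching ratio {sigma}: median size {np.median(sizes):.0f}, "
          f"99th percentile {np.percentile(sizes, 99):.0f}, max {sizes.max()}")
```

Whether cortical dynamics in fact sit at such a critical point is exactly the kind of empirical question that treating criticality as a candidate universal class makes precise.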

6.4 Conclusion

Earlier chapters presented a way to understand why ecological psychology and neuroscience have traditionally been at odds with each other when it comes to investigating targets of mutual investigative interest. The previous chapter argued that one way to reconcile these fields is to integrate them into complexity science. The current chapter presented the NeuroEcological Nexus Theory (NExT), an investigative framework to explain and understand mind in organism-environment systems. NExT integrates ecological psychology and

23 Perhaps those with tastes for theory reductionism (e.g., Ernest Nagel), certain forms of unity (e.g., logical positivists), or "desert landscapes" (e.g., Willard Van Orman Quine).
24 Here "promiscuous pluralism" is not to be mistaken for John Dupré's (1993) use of the phrase.


neuroscience by way of shared concepts, methods, and theories from complexity science. Six hypotheses were presented as constituting NExT. In the following chapter, NExT will be put to work by applying it to an affordance event. It will be demonstrated that NExT is an advancement on both ecological psychology and neuroscience alone by accounting for neural scale activity during an affordance event while preserving the four primary principles at the core of (Gibsonian) ecological psychology.

References

Abbaspourazad, H., Choudhury, M., Wong, Y. T., Pesaran, B., & Shanechi, M. M. (2021). Multiscale low-dimensional motor cortical state dynamics predict naturalistic reach-and-grasp behavior. Nature Communications, 12(1), 607. https://doi.org/10.1038/s41467-020-20197-x
Allen Institute for Brain Science. (2004). Allen mouse brain atlas [dataset]. Retrieved September 28, 2022 from mouse.brain-map.org
Allen Institute for Brain Science. (2011). Allen reference atlas: Mouse brain [brain atlas]. Retrieved September 28, 2022 from atlas.brain-map.org
Amon, M. J. (2016). Examining coordination and emergence during individual and distributed cognitive tasks [Doctoral dissertation, University of Cincinnati]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1468336815
Amon, M. J., & Favela, L. H. (2019). Distributed cognition criteria: Defined, operationalized, and applied to human-dog systems. Behavioural Processes, 162, 167–176. https://doi.org/10.1016/j.beproc.2019.03.001
Amon, M. J., & Holden, J. G. (2021). The mismatch of intrinsic fluctuations and the static assumptions of linear statistics. Review of Philosophy and Psychology, 12, 149–173. https://doi.org/10.1007/s13164-018-0428-x
Amunts, K., Knoll, A. C., Lippert, T., Pennartz, C. M., Ryvlin, P., Destexhe, A., . . . Bjaalie, J. G. (2019). The Human Brain Project: Synergy between neuroscience, computing, informatics, and brain-inspired technologies. PLoS Biology, 17(7), e3000344. https://doi.org/10.1371/journal.pbio.3000344
Anderson, M. L. (2014). After phrenology: Neural reuse and the interactive brain. Cambridge, MA: The MIT Press.
Araújo, D., Davids, K., & Hristovski, R. (2006). The ecological dynamics of decision making in sport. Psychology of Sport and Exercise, 7(6), 653–676. https://doi.org/10.1016/j.psychsport.2006.07.002
Assisi, C., Stopfer, M., Laurent, G., & Bazhenov, M. (2007). Adaptive regulation of sparseness by feedforward inhibition. Nature Neuroscience, 10(9), 1176–1184. https://doi.org/10.1038/nn1947
Balasubramaniam, R., Haegens, S., Jazayeri, M., Merchant, H., Sternad, D., & Song, J. H. (2021). Neural encoding and representation of time for sensorimotor control and learning. Journal of Neuroscience, 41(5), 866–872. https://doi.org/10.1523/JNEUROSCI.1652-20.2020
Barrett, L. F. (2022). Context reconsidered: Complex signal ensembles, relational meaning, and population thinking in psychological science. American Psychologist, 77(8), 894–920. https://doi.org/10.1037/amp0001054
Bathellier, B., Buhl, D. L., Accolla, R., & Carleton, A. (2008). Dynamic ensemble odor coding in the mammalian olfactory bulb: Sensory information at different timescales. Neuron, 57(4), 586–598. https://doi.org/10.1016/j.neuron.2008.02.011
Beggs, J. M. (2022). The cortex and the critical point: Understanding the power of emergence. Cambridge, MA: The MIT Press.
Bernstein, N. (1967). The co-ordination and regulation of movements. Long Island City, NY: Pergamon Press.
Blau, J. J. C., & Wagman, J. B. (2023). Introduction to ecological psychology: A lawful approach to perceiving, acting, and cognizing. New York, NY: Routledge.
Bordens, K. S., & Abbott, B. B. (2014). Research design and methods: A process approach (9th ed.). New York: McGraw-Hill Education.


Brahma, P. P., Wu, D., & She, Y. (2016). Why deep learning works: A manifold disentanglement perspective. IEEE Transactions on Neural Networks and Learning Systems, 27(10), 1997–2008. https://doi.org/10.1109/TNNLS.2015.2496947 Brennan, C., & Proekt, A. (2019). A quantitative model of conserved macroscopic dynamics predicts future motor commands. eLife, 8, e46814. https://doi.org/10.7554/eLife.46814 Buzsáki, G. (2019). The brain from inside out. New York, NY: Oxford University Press. Buzsáki, G., & Tingley, D. (2023). Cognition from the body-brain partnership: Exaptation of memory. Annual Review of Neuroscience, 46, 191–210. https://doi.org/10.1146/annurev-neuro-101222110632 Cano-De-La-Cuerda, R., Molero-Sánchez, A., Carratalá-Tejada, M., Alguacil-Diego, I. M., MolinaRueda, F., Miangolarra-Page, J. C., & Torricelli, D. (2015). Theories and control models and motor learning: Clinical applications in neurorehabilitation. Neurología (English Edition), 30(1), 32–41. https://doi.org/10.1016/j.nrleng.2011.12.012 Chaudhuri, R., Gerçek, B., Pandey, B., Peyrache, A., & Fiete, I. (2019). The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nature Neuroscience, 22(9), 1512–1520. https://doi.org/10.1038/s41593-019-0460-x Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: The MIT Press. Chenani, A., Sabariego, M., Schlesiger, M. I., Leutgeb, J. K., Leutgeb, S., & Leibold, C. (2019). Hippocampal CA1 replay becomes less prominent but more rigid without inputs from medial entorhinal cortex. Nature Communications, 10(1341), 1–13. https://doi.org/10.1038/s41467-01909280-0 Churchland, P. S. (1994). Can neurobiology teach us anything about consciousness? Proceedings and Addresses of the American Philosophical Association, 67, 23–40. https://doi.org/10.2307/3130741 Cisek, P., & Hayden, B. Y. (2022). Neuroscience needs evolution. Philosophical Transactions of the Royal Society B, 377(1844), 20200518. https://doi.org/10.1098/rstb.2020.0518 Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58, 7–19. https://doi.org/10.1093/ analys/58.1.7 Claudi, F., & Branco, T. (2022). Diferential geometry methods for constructing manifold-targeted recurrent neural networks. Neural Computation, 34(8), 1790–1811. https://doi.org/10.1162/ neco_a_01511 Cunningham, J. P., & Yu, B. M. (2014). Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11), 1500–1509. https://doi.org/10.1038/nn.3776 Curto, C. (2017). What can topology tell us about the neural code? Bulletin of the American Mathematical Society, 54(1), 63–78. http://doi.org/10.1090/bull/1554 Dabagia, M., Kording, K. P., & Dyer, E. L. (2022). Aligning latent representations of neural activity. Nature Biomedical Engineering, 1–7. https://doi.org/10.1038/s41551-022-00962-7 Dodel, S., Tognoli, E., & Kelso, J. A. S. (2020). Degeneracy and complexity in neuro-behavioral correlates of team coordination. Frontiers in Human Neuroscience, 14(328). https://doi.org/10.3389/ fnhum.2020.00328 Doya, K., Ishii, S., Pouget, A., & Rao, R. P. (Eds.). (2007). Bayesian brain: Probabilistic approaches to neural coding. Cambridge, MA: The MIT Press. Duncker, L., & Sahani, M. (2021). Dynamics on the manifold: Identifying computational dynamical activity from neural population recordings. Current Opinion in Neurobiology, 70, 163–170. https://doi.org/10.1016/j.conb.2021.10.014 Dupré, J. (1993). The disorder of things: Metaphysical foundations of the disunity of science. 
Cambridge, MA: Harvard University Press. Earl, R. (2019). Topology: A very short introduction. Oxford, UK: Oxford University Press. Ebitz, R. B., & Hayden, B. Y. (2021). The population doctrine in cognitive neuroscience. Neuron, 109(19), 3055–3068. https://doi.org/10.1016/j.neuron.2021.07.011 Edelman, G. M. (1987). Neural Darwinism: The theory of neuronal group selection. New York, NY: Basic Books. Edelman, G. M. (1988). Topobiology: An introduction to molecular embryology. New York, NY: Basic Books.


Edelman, G. M. (1989). The remembered present: A biological theory of consciousness. New York, NY: Basic Books. Edelman, G. M. (2003). Naturalizing consciousness: A theoretical framework. Proceedings of the National Academy of Sciences, 100, 5520–5524. https://doi.org/10.1073/pnas.0931349100 Edelman, G. M., & Gally, J. A. (2001). Degeneracy and complexity in biological systems. Proceedings of the National Academy of Sciences, 98, 13763–13768. https://doi.org/10.1073/pnas.231499798 Edelman, G. M., & Gally, J. A. (2013). Reentry: A key mechanism for integration of brain function. Frontiers in Integrative Neuroscience, 7(63). https://doi.org/10.3389/fnint.2013.00063 Edelman, G. M., Gally, J. A., & Baars, B. J. (2011). Biology of consciousness. Frontiers in Psychology: Consciousness Research, 2(4), 1–7. https://doi.org/10.3389/fpsyg.2011.00004 Edelman, G. M., & Tononi, G. (2000). A universe of consciousness: How matter becomes imagination. New York, NY: Basic Books. Edgar, G. (2008). Measure, topology, and fractal geometry (2nd ed.). New York, NY: Springer. Favela, L. H. (2009). Biological theories of consciousness: The search for experience [Thesis, San Diego State University]. OCLC: 427651225. Favela, L. H. (2014). Radical embodied cognitive neuroscience: Addressing “grand challenges” of the mind sciences. Frontiers in Human Neuroscience, 8(796), 1–10. https://doi.org/10.3389/ fnhum.2014.00796 Favela, L. H. (2019). Soft-assembled human-machine perceptual systems. Adaptive Behavior, 27(6), 423–437. https://doi.org/10.1177/1059712319847129 Favela, L. H. (2020). Cognitive science as complexity science. Wiley Interdisciplinary Reviews: Cognitive Science, 11(4), e1525, 1–24. https://doi.org/10.1002/wcs.1525 Favela, L. H. (2021a). Fundamental theories in neuroscience: Why neural Darwinism encompasses neural reuse. In F. Calzavarini & M. Viola (Eds.), Neural mechanisms: New challenges in the philosophy of neuroscience (pp.  143–162). Cham, Switzerland: Springer. https://doi. org/10.1007/978-3-030-54092-0_7 Favela, L. H. (2021b). The dynamical renaissance in neuroscience. Synthese, 199(1–2), 2103–2127. https://doi.org/10.1007/s11229-020-02874-y Favela, L. H. (2023). Nested dynamical modeling of neural systems: A strategy and some consequences. Journal of Multiscale Neuroscience, 2(1), 240–250. https://doi.org/10.56280/1567939485 Favela, L. H., & Amon, M. J. (2023). Reframing cognitive science as a complexity science. Cognitive Science: A Multidisciplinary Journal. https://doi.org/10.1111/cogs.13280 Favela, L. H., Amon, M. J., Lobo, L., & Chemero, A. (2021). Empirical evidence for extended cognitive systems. Cognitive Science: A Multidisciplinary Journal, 45(11), e13060, 1–27. https://doi. org/10.1111/cogs.13060 Favela, L. H., & Chemero, A. (2021). Explanatory pluralism: A case study from the life sciences. PhilSci-Archive. [Preprint]. http://philsci-archive.pitt.edu/id/eprint/19146 Favela, L. H., & Chemero, A. (2023). Plural methods for plural ontologies: A case study from the life sciences. In M.-O. Casper & G. F. Artese (Eds.), Situated cognition research: Methodological foundations. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-031-39744-8_14 Feferman, C., Mitter, S., & Narayanan, H. (2016). Testing the manifold hypothesis. Journal of the American Mathematical Society, 29(4), 983–1049. https://doi.org/10.1090/jams/852 Feldman, A. G. (2019). Indirect, referent control of motor actions underlies directional tuning of neurons. Journal of Neurophysiology, 121(3), 823–841. 
https://doi.org/10.1152/jn.00575.2018 Figdor, C. (2010). Neuroscience and the multiple realization of cognitive functions. Philosophy of Science, 77(3), 419–456. https://doi.org/10.1086/652964 Fine, J. M., Likens, A. D., Amazeen, E. L., & Amazeen, P. G. (2015). Emergent complexity matching in interpersonal coordination: Local dynamics and global variability. Journal of Experimental Psychology: Human Perception and Performance, 41(3), 723–737. https://doi.org/10.1037/xhp0000046 Floryan, D., & Graham, M. D. (2022). Data-driven discovery of intrinsic dynamics. Nature Machine Intelligence, 4, 1113–1120. https://doi.org/10.1038/s42256-022-00575-4


Frégnac, Y. (2017). Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain? Science, 358(6362), 470–477. https://doi.org/10.1126/science.aan8866 Freeman, W. J. (1975/2004). Mass action in the nervous system. New York, NY: Academic Press. Freeman, W. J. (2000). Neurodynamics: An exploration in mesoscopic brain dynamics. London: Springer-Verlag. Friston, K. (2010). The free-energy principle: A unifed brain theory? Nature Reviews Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787 Friston, K. J., & Price, C. J. (2003). Degeneracy and redundancy in cognitive anatomy. Trends in Cognitive Sciences, 7(4), 151–152. https://doi.org/10.1016/S1364-6613(03)00054-8 Furmanek, M. P., Solnik, S., Piscitelli, D., Rasouli, O., Falaki, A., & Latash, M. L. (2018). Synergies and motor equivalence in voluntary sway tasks: The efects of visual and mechanical constraints. Journal of Motor Behavior, 50(5), 492–509. https://doi.org/10.1080/00222895.2017.1367642 Gallego, J. A., Perich, M. G., Miller, L. E., & Solla, S. A. (2017). Neural manifolds for the control of movement. Neuron, 94(5), 978–984. https://doi.org/10.1016/j.neuron.2017.05.025 Gao, P., Trautmann, E., Yu, B., Santhanam, G., Ryu, S., Shenoy, K., & Ganguli, S. (2017). A theory of multineuronal dimensionality, dynamics and measurement. bioRxiv, 214262. https://doi. org/10.1101/214262 Gaser, C., & Schlaug, G. (2003). Brain structures difer between musicians and non-musicians. Journal of Neuroscience, 23(27), 9240–9245. https://doi.org/10.1523/JNEUROSCI.23-27-09240.2003 Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014). Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge, UK: Cambridge University Press. Gibson, J. J. (1986/2015). The ecological approach to visual perception (classic ed.). New York, NY: Psychology Press. Glaser, J. I., Benjamin, A. S., Chowdhury, R. H., Perich, M. G., Miller, L. E., & Kording, K. P. (2020). Machine learning for neural decoding. eNeuro, 7(4). 1–16. https://doi.org/10.1523/ENEURO.050619.2020 Glickstein, M. (2006). Golgi and Cajal: The neuron doctrine and the 100th anniversary of the 1906 Nobel Prize. Current Biology, 16(5), R147–R151. Grover, F. M., Andrade, V., Carver, N. S., Bonnette, S., Riley, M. A., & Silva, P. L. (2022). A dynamical approach to the uncontrolled manifold: Predicting performance error during steady-state isometric force production. Motor Control, 26(4), 536–557. https://doi.org/10.1123/mc.20210105 Haken, H. (1996). Principles of brain functioning: A synergetic approach to brain activity, behavior and cognition. Berlin, Germany: Springer-Verlag. Haken, H. (2016). The brain as a synergetic and physical system. In A. Pelster & G. Wunner (Eds.), Self-organization in complex systems: The past, present, and future of synergetics (pp. 147–163). Cham, Switzerland: Springer. Hasson, U., Ghazanfar, A. A., Galantucci, B., Garrod, S., & Keysers, C. (2012). Brain-to-brain coupling: A mechanism for creating and sharing a social world. Trends in Cognitive Sciences, 16(2), 114–121. https://doi.org/10.1016/j.tics.2011.12.007 He, B., Coleman, T., Genin, G. M., Glover, G., Hu, X., Johnson, N., . . . Ye, K. (2013). Grand challenges in mapping the human brain: NSF workshop report. IEEE Transactions on Biomedical Engineering, 60(11), 2983–2992. https://doi.org/10.1109/TBME.2013.2283970 Horton, J. C., & Adams, D. L. (2005). The cortical column: A structure without a function. 
Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1456), 837–862. https://doi.org/10.1098/rstb.2005.1623 Humphries, M. D. (2021). Strong and weak principles of neural dimension reduction. Neurons, Behavior, Data Analysis and Theory (NBDT), 5(2), 1–28. https://doi.org/10.51628/001c.24619 Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: The MIT Press. Jazayeri, M., & Afraz, A. (2017). Navigating the neural space in search of the neural code. Neuron, 93(5), 1003–1014. https://doi.org/10.1016/j.neuron.2017.02.019


Jazayeri, M., & Ostojic, S. (2021). Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity. Current Opinion in Neurobiology, 70, 113–120. https://doi.org/10.1016/j.conb.2021.08.002 Jordan, M. I., & Wolpert, D. M. (2000). Computational motor control. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 601–618). Cambridge: The MIT Press. Kang, L., Xu, B., & Morozov, D. (2021). Evaluating state space discovery by persistent cohomology in the spatial representation system. Frontiers in Computational Neuroscience, 15(616748). https:// doi.org/10.3389/fncom.2021.616748 Kant, I. (1781/1787/1996). Critique of pure reason (unifed ed., W. S. Pluhar, Trans.). Indianapolis, IN: Hackett Publishing, Inc. Kato, S., Kaplan, H. S., Schrödel, T., Skora, S., Lindsay, T. H., Yemini, E., . . . Zimmer, M. (2015). Global brain dynamics embed the motor command sequence of Caenorhabditis elegans. Cell, 163(3), 656–669. https://doi.org/10.1016/j.cell.2015.09.034 Käufer, S., & Chemero, A. (2021). Phenomenology: An introduction (2nd ed.). Medford, MA: Polity Press. Kay, B. A. (1988). The dimensionality of movement trajectories and the degrees of freedom problem: A tutorial. Human Movement Science, 7(2–4), 343–364. https://doi.org/10.1016/0167-9457 (88)90016-4 Kelso, J. A. S. (2009). Synergies: Atoms of brain and behavior. In D. Sternad (Ed.), Progress in motor control: A multidisciplinary perspective (pp. 83–91). New York, NY: Springer. https://doi. org/10.1007/978-0-387-77064-2_5 Kelso, J. A. S. (2021). The Haken—Kelso—Bunz (HKB) model: From matter to movement to mind. Biological Cybernetics, 115(4), 305–322. https://doi.org/10.1007/s00422-021-00890-w Kelso, J. A. S., Fuchs, A., Lancaster, R., Holroyd, T., Cheyne, D., & Weinberg, H. (1998). Dynamic cortical activity in the human brain reveals motor equivalence. Nature, 392(6678), 814–818. https://doi.org/10.1038/33922 Kelso, J. A. S., & Haken, H. (1995). New laws to be expected in the organism: Synergetics of brain and behaviour. In M. P. Murphy & L. A. J. O’Neil (Eds.), What is life? The next ffty years: Speculations on the future of biology (pp. 137–160). New York, NY: Cambridge University Press. Kelso, J. A. S., & Tognoli, E. (2007). Toward a complementary neuroscience: Metastable coordination dynamics of the brain. In L. I. Perlovsky & R. Kozma (Eds.), Neurodynamics of cognition and consciousness (pp. 39–59). Berlin: Springer-Verlag. Kelso, J. S., Tuller, B., Vatikiotis-Bateson, E., & Fowler, C. A. (1984). Functionally specifc articulatory cooperation following jaw perturbations during speech: Evidence for coordinative structures. Journal of Experimental Psychology: Human Perception and Performance, 10(6), 812–832. https:// doi.org/10.1037/0096-1523.10.6.812 Khona, M., & Fiete, I. R. (2022). Attractor and integrator networks in the brain. Nature Reviews Neuroscience, 23, 744–766. https://doi.org/10.1038/s41583-022-00642-0 Kohn, A., Coen-Cagli, R., Kanitscheider, I., & Pouget, A. (2016). Correlations and neuronal population information. Annual Review of Neuroscience, 39, 237–256. https://doi.org/10.1146/ annurev-neuro-070815-013851 Kugler, P. N., Kelso, J. A. S., & Turvey, M. T. (1980). On the concept of coordinative structures as dissipative structures. I: Theoretical lines of convergence. In G. E. Stelmach & J. Requin (Eds.), Tutorials in motor behavior. New York, NY: North: Holland Publishing Co. Latash, M. L. (2008). 
Evolution of motor control: From refexes and motor programs to the equilibrium-point hypothesis. Journal of Human Kinetics, 19(2008), 3–24. https://doi.org/10.2478/ v10078-008-0001-2 Latash, M. L. (2012). The bliss (not the problem) of motor abundance (not redundancy). Experimental Brain Research, 217, 1–5. https://doi.org/10.1007/s00221-012-3000-4 Latash, M. L. (2020). On primitives in motor control. Motor Control, 24(2), 318–346. https://doi. org/10.1123/mc.2019-0099 Latash, M. L., Gorniak, S., & Zatsiorsky, V. M. (2008). Hierarchies of synergies in human movements. Kinesiology (Zagreb, Croatia), 40(1), 29–38.


Latash, M. L., Levin, M. F., Scholz, J. P., & Schöner, G. (2010). Motor control theories and their applications. Medicina, 46(6), 382–392. https://doi.org/10.3390/medicina46060054 Latash, M. L., Scholz, J. F., Danion, F., & Schöner, G. (2001). Structure of motor variability in marginally redundant multifnger force production tasks. Experimental Brain Research, 141(2), 153–165. https://doi.org/10.1007/s002210100861 Leibniz, G. W. (1685/1898). The monadology and other philosophical writings (R. Latta, Trans.). London, UK: Oxford University Press. Le Van Quyen, M. (2011). The brainweb of cross-scale interactions. New Ideas in Psychology, 29(2), 57–63. https://doi.org/10.1016/j.newideapsych.2010.11.001 Libedinsky, C. (2023). Comparing representations and computations in single neurons versus neural networks. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2023.03.002 Linot, A. J., & Graham, M. D. (2022). Data-driven reduced-order modeling of spatiotemporal chaos with neural ordinary diferential equations. Chaos: An Interdisciplinary Journal of Nonlinear Science, 32(7), 073110. https://doi.org/10.1063/5.0069536 Lou, A., Lim, D., Katsman, I., Huang, L., Jiang, Q., Lim, S. N., & De Sa, C. (2020). Neural manifold ordinary diferential equations. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), NIPS’20: Proceedings of the 34th international conference on neural information processing systems (pp. 17548–17558). Red Hook, NY: Curran Associates Inc. https://doi.org/10.5555/ 3495724.3497196 Ma, Y., & Fu, Y. (Eds.). (2012). Manifold learning theory and applications. Boca Raton, FL: CRC Press. Mafahim, J. U., Lambert, D., Zare, M., & Grigolini, P. (2015). Complexity matching in neural networks. New Journal of Physics, 17(1), 015003-1–015003-17. https://doi.org/10.1088/13672630/17/1/015003 Marmelat, V., & Delignières, D. (2012). Strong anticipation: Complexity matching in interpersonal coordination. Experimental Brain Research, 222, 137–148. https://doi.org/10.1007/s00221-012-3202-9 Mars, R. B., & Bryant, K. L. (2022). Neuroecology: The brain in its world. In S. Della Sala (Ed.), Encyclopedia of behavioral neuroscience (Vol. 3, 2nd ed., pp.  757–765). Amsterdam: Elsevier Science. https://doi.org/10.1016/B978-0-12-819641-0.00054-2 Marsden, C. D., Merton, P. A., & Morton, H. B. (1983). Rapid postural reactions to mechanical displacement of the hand in man. Advances in Neurology, 39, 645–659. Mayr, O. (1970). The origins of feedback control. Cambridge, MA: The MIT Press. McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115–133. https://doi.org/10.1007/BF02478259 McCullough, M. H., & Goodhill, G. J. (2021). Unsupervised quantifcation of naturalistic animal behaviors for gaining insight into the brain. Current Opinion in Neurobiology, 70, 89–100. https:// doi.org/10.1016/j.conb.2021.07.014 Michaels, C. F., & Carello, C. (1981). Direct perception. Englewood Clifs, NJ: Prentice-Hall. Mitchell-Heggs, R., Prado, S., Gava, G. P., Go, M. A., & Schultz, S. R. (2023). Neural manifold analysis of brain circuit dynamics in health and disease. Journal of Computational Neuroscience, 51(1), 1–21. https://doi.org/10.1007/s10827-022-00839-3 Mitra, P. P. (2014). The circuit architecture of whole brains at the mesoscopic scale. Neuron, 83(6), 1273–1283. https://doi.org/10.1016/j.neuron.2014.08.055 Morita, M., & Suemitsu, A. (2002). 
Computational modeling of pair-association memory in inferior temporal cortex. Cognitive Brain Research, 13(2), 169–178. https://doi.org/10.1016/S0926-6410 (01)00109-4 Newell, A. (1990). Unifed theories of cognition. Cambridge, MA: Harvard University Press. Nieh, E. H., Schottdorf, M., Freeman, N. W., Low, R. J., Lewallen, S., Koay, S. A., . . . Tank, D. W. (2021). Geometry of abstract learned knowledge in the hippocampus. Nature, 595(7865), 80–84. https://doi.org/10.1038/s41586-021-03652-7 Ódor, G., Gastner, M. T., Kelling, J., & Deco, G. (2021). Modelling on the very large-scale connectome. Journal of Physics: Complexity, 2(4), 045002. https://doi.org/10.1088/2632-072X/ac266c OED Online. (2022, September). Nexus, n: OED online. Oxford University Press. Retrieved October 5, 2022 from www.oed.com/view/Entry/126677


Pessoa, L. (2014). Understanding brain networks and brain organization. Physics of Life Reviews, 11(3), 400–435. https://doi.org/10.1016/j.plrev.2014.03.005 Pessoa, L., Medina, L., & Desflis, E. (2022). Refocusing neuroscience: Moving away from mental categories and towards complex behaviours. Philosophical Transactions of the Royal Society B, 377(1844), 20200534. https://doi.org/10.1098/rstb.2020.0534 Pouget, A., & Snyder, L. H. (2000). Computational approaches to sensorimotor transformations. Nature Neuroscience, 3(11), 1192–1198. https://doi.org/10.1038/81469 Preston-Ferrer, P., Coletta, S., Frey, M., & Burgalossi, A. (2016). Anatomical organization of presubicular head-direction circuits. eLife, 5, e14592. https://doi.org/10.7554%2FeLife.14592 Reed, E. S. (1996). Encountering the world: Toward an ecological psychology. New York, NY: Oxford University Press. Riley, M. A., Richardson, M. J., Shockley, K., & Ramenzoni, V. C. (2011). Interpersonal synergies. Frontiers in Psychology: Movement Science and Sport Psychology, 2(38), 1–7. https://doi. org/10.3389/fpsyg.2011.00038 Robbins, P., & Aydede, M. (Eds.). (2009). The Cambridge handbook of situated cognition. New York, NY: Cambridge University Press. Roth, W. M., & Jornet, A. (2013). Situated cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 4(5), 463–478. https://doi.org/10.1002/wcs.1242 Rowland, T. (2022). Manifold. MathWorld: A Wolfram Web Resource. Retrieved July 1, 2022 from https://mathworld.wolfram.com/Manifold.html Rubin, A., Sheintuch, L., Brande-Eilat, N., Pinchasof, O., Rechavi, Y., Geva, N., & Ziv, Y. (2019). Revealing neural correlates of behavior without behavioral measurements. Nature Communications, 10(4745). https://doi.org/10.1038/s41467-019-12724-2 Safaie, M., Chang, J. C., Park, J., Miller, L. E., Dudman, J. T., Perich, M. G., & Gallego, J. A. (2022). Preserved neural population dynamics across animals performing similar behaviour. bioRxiv. https://doi.org/10.1101/2022.09.26.509498 Saxena, S., & Cunningham, J. P. (2019). Towards the neural population doctrine. Current Opinion in Neurobiology, 55, 103–111. https://doi.org/10.1016/j.conb.2019.02.002 Schneider, S., Lee, J. H., & Mathis, M. W. (2023). Learnable latent embeddings for joint behavioural and neural analysis. Nature, 617, 360–368. https://doi.org/10.1038/s41586-023-06031-6 Scholz, J. P., & Schöner, G. (1999). The uncontrolled manifold concept: Identifying control variables for a functional task. Experimental Brain Research, 126, 289–306. https://doi.org/10.1007/ s002210050738 Schöner, G. (2020). The dynamics of neural populations capture the laws of the mind. Topics in Cognitive Science, 12(4), 1257–1271. https://doi.org/10.1111/tops.12453 Sherry, D. F. (2006). Neuroecology. Annual Review of Psychology, 57, 167–197. https://doi. org/10.1146/annurev.psych.56.091103.070324 Solnik, S., Furmanek, M. P., & Piscitelli, D. (2020). Movement quality: A novel biomarker based on principles of neuroscience. Neurorehabilitation and Neural Repair, 34(12), 1067–1077. https://doi. org/10.1177/1545968320969936 Spivey, M. (2007). The continuity of mind. New York, NY: Oxford University Press. Spivey, M. J. (2020). Who you are: The science of connectedness. Cambridge, MA: The MIT Press. Sporns, O. (2003). Embodied cognition. In M. A. Arbib (Ed.), The handbook of brain theory and neural networks (2nd ed., pp. 395–398). Cambridge, MA: The MIT Press. Sporns, O. (2011). Networks of the brain. Cambridge, MA: The MIT Press. Sporns, O. (2016). 
Connectome networks: From cells to systems. In H. Kennedy, D. C. Van Essen, & Y. Christen (Eds.), Micro-, meso- and macro-connectomics of the brain (pp. 107–127). Cham: Springer. https://doi.org/10.1007/978-3-319-27777-6_8 Sporns, O., & Edelman, G. M. (1993). Solving Bernstein’s problem: A proposal for the development of coordinated movement by selection. Child Development, 64(4), 960–981. https://doi. org/10.2307/1131321


Sporns, O., Tononi, G., & Edelman, G. M. (2000). Connectivity and complexity: The relationship between neuroanatomy and brain dynamics. Neural Networks, 13(8–9), 909–922. https://doi. org/10.1016/S0893-6080(00)00053-8 Stofregen, T. A. (2003). Afordances as properties of the animal-environment system. Ecological Psychology, 15(2), 115–134. https://doi.org/10.1207/S15326969ECO1502_2 Stofregen, T. A., Mantel, B., & Bardy, B. G. (2017). The senses considered as one perceptual system. Ecological Psychology, 29(3), 165–197. https://doi.org/10.1080/10407413.2017.1331116 Stringer, C., Michaelos, M., Tsyboulski, D., Lindo, S. E., & Pachitariu, M. (2021). High-precision coding in visual cortex. Cell, 184(10), 2767–2778. https://doi.org/10.1016/j.cell.2021.03.042 Tenenbaum, J. B., de Silva, V., & Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500), 2319–2323. https://doi.org/10.1126/science.290.5500.2319 Thelen, E., & Smith, L. B. (2006). Dynamic systems theories. In R. M. Lerner (Ed.), Handbook of child psychology: Volume 1: Theoretical models of human development (pp. 258–312). Hoboken, NJ: John Wiley. Todorov, E., & Jordan, M. I. (2002). Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5(11), 1226–1235. https://doi.org/10.1038/nn963 Tu, L. W. (2011). An introduction to manifolds (2nd ed.). New York, NY: Springer. https://doi. org/10.1007/978-1-4419-7400-6 Turvey, M. T. (1990). Coordination. American Psychologist, 45(8), 938–953. https://doi.org/10.1037/ 0003-066X.45.8.938 Turvey, M. T. (2007). Action and perception at the level of synergies. Human Movement Science, 26, 657–697. https://doi.org/10.1016/j.humov.2007.04.002 Turvey, M. T. (2019). Lectures on perception: An ecological perspective. New York, NY: Routledge. U.S. National Academy of Sciences. (2018). Defnitions of evolutionary terms. The National Academies of Sciences, Engineering, Medicine: Evolution Resources. Washington, DC. Retrieved October 5, 2022 from http://nationalacademies.org/evolution/Defnitions.html Uttal, W. R. (2005). Neural theories of mind: Why the mind-brain problem may never be solved. Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Uttal, W. R. (2017). The neuron and the mind: Microneuronal theory and practice in cognitive neuroscience. New York, NY: Routledge. van den Heuvel, M. P., Bullmore, E. T., & Sporns, O. (2016). Comparative connectomics. Trends in Cognitive Sciences, 20(5), 345–361. https://doi.org/10.1016/j.tics.2016.03.001 Vohryzek, J., Cabral, J., Vuust, P., Deco, G., & Kringelbach, M. L. (2022). Understanding brain states across spacetime informed by whole-brain modelling. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 380(2227), 20210247, 1–16. https:// doi.org/10.1098/rsta.2021.0247 von Uexküll, J. (1934). A stroll through the worlds of animals and men: A picture book of invisible worlds. In C. H. Schiller (Ed.), Instinctive behavior: The development of a modern concept (pp. 5–80). New York, NY: International Universities Press, Inc. von Ziegler, L., Sturman, O., & Bohacek, J. (2021). Big behavior: Challenges and opportunities in a new era of deep behavior profling. Neuropsychopharmacology, 46(1), 33–44. https://doi. org/10.1038/s41386-020-0751-7 Vyas, S., Golub, M. D., Sussillo, D., & Shenoy, K. V. (2020). Computation through neural population dynamics. Annual Review of Neuroscience, 43, 249–275. https://doi.org/10.1146/annurev-neuro092619-094115 Wagman, J. 
B., & Blau, J. J. C. (Eds.). (2020). Perception as information detection: Refections on Gibson’s ecological approach to perception. New York, NY: Routledge. Walter, S. (2014). Situated cognition: A feld guide to some open conceptual and ontological issues. Review of Philosophy and Psychology, 5(2), 241–263. https://doi.org/10.1007/s13164-013-0167-y Wang, X. J., Hu, H., Huang, C., Kennedy, H., Li, C. T., Logothetis, N., .  .  . Zhou, D. (2020). Computational neuroscience: A frontier of the 21st century. National Science Review, 7(9), 1418– 1422. https://doi.org/10.1093/nsr/nwaa129


Wärnberg, E., & Kumar, A. (2019). Perturbing low dimensional activity manifolds in spiking neuronal networks. PLoS Computational Biology, 15(5), e1007074. https://doi.org/10.1371/journal. pcbi.1007074 Warren, Jr., W. H., & Whang, S. (1987). Visual guidance of walking through apertures: Bodyscaled information for afordances. Journal of Experimental Psychology: Human Perception and Performance, 13, 371–383. https://doi.org/10.1037/0096-1523.13.3.371 Weisstein, E. W. (2022). Topology. MathWorld: A Wolfram Web Resource. Retrieved July 1, 2022 from https://mathworld.wolfram.com/Topology.html West, B. J., Geneston, E. L., & Grigolini, P. (2008). Maximizing information exchange between complex networks. Physics Reports, 468(1–3), 1–99. https://doi.org/10.1016/j.physrep.2008.06.003 Westlin, C., Theriault, J. E., Katsumi, Y., Nieto-Castanon, A., Kucyi, A., Ruf, S. F., . . . Barrett, L. F. (2023). Improving the study of brain-behavior relationships by revisiting basic assumptions. Trends in Cognitive Sciences, 27(3), 246–257. https://doi.org/10.1016/j.tics.2022.12.015 Xia, J., Marks, T. D., Goard, M. J., & Wessel, R. (2021). Stable representation of a naturalistic movie emerges from episodic activity with gain variability. Nature Communications, 12(5170), 1–15. https://doi.org/10.1038/s41467-021-25437-2 Yin, H. (2020). The crisis in neuroscience. In W. Mansell (Ed.), The interdisciplinary handbook of perceptual control theory: Living control systems IV (pp. 23–48). Elsevier. https://doi.org/10.1016/ B978-0-12-818948-1.00003-4 Yoon, K., Buice, M. A., Barry, C., Hayman, R., Burgess, N., & Fiete, I. R. (2013). Specifc evidence of low-dimensional continuous attractor dynamics in grid cells. Nature Neuroscience, 16(8), 1077– 1084. https://doi.org/10.1038/nn.3450 Zhang, M., Sun, Y., & Saggar, M. (2022). Cross-attractor repertoire provides new perspective on structure-function relationship in the brain. NeuroImage, 259(119401), 1–15. https://doi.org/10.1016/j. neuroimage.2022.119401 Zimmer, R. K., & Derby, C. D. (2011). Neuroecology and the need for broader synthesis. Integrative and Comparative Biology, 51(5), 751–755. https://doi.org/10.1093/icb/icr070

7 Putting the NeuroEcological Nexus Theory to work

These terms and concepts are subject to revision as the ecological approach to perception becomes clear. May they never shackle thought as the old terms and concepts have! (Gibson, 1986/2015, p. 298)

7.1 Investigating an affordance via NExT

The previous chapter presented the NeuroEcological Nexus Theory (NExT). NExT is a framework for the investigation of mind in organism-environment systems. It does so by means of six hypotheses and the integration of ecological psychology and neuroscience under a unified approach grounded in concepts, methods, and theories from complexity science. In that way, it aims to both reconcile and advance ecological psychology and neuroscience with regard to their pursuits of overlapping phenomena of investigative interest. The purpose of this chapter is to describe how an affordance can be explained via the NExT framework. It will be shown that NExT both maintains the core principles of ecological psychology and advances it by offering an account of neural contributions to phenomena of interest, such as perception-action events at the spatial and temporal scales of organism-environment systems. It will also be shown that NExT advances neuroscience by offering an account of the situated nature of organism-environment systems that includes neural processes but does not privilege such contributions in explanations of phenomena of interest, such as perception-action events. Afterward, I outline potential future developments of the approach.

7.2 The affordance of pass-through-able

The affordance of pass-through-able was first introduced in Chapter 1.1 Pass-through-able refers to the ability of an organism to move through an aperture, where an "aperture" can mean a doorway for a human, a gap in a fence for a dog, a crack in a wall for a cockroach, and so on. To review, a typical ecological psychology explanation of this affordance centers on the aperture-to-shoulder-width ratio (A/S, where A is the width of the aperture and S is shoulder width at its broadest point) that marks the transition from passing through the aperture during normal locomotion to needing to turn the body (e.g., hips or shoulders) to fit through, where the latter is not considered pass-through-able in a strict sense.

1 It is an affordance that is near and dear to this author's heart (Favela, Amon, Lobo, & Chemero, 2021; Favela, Riley, Shockley, & Chemero, 2014, 2018).
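As a rough numerical rendering of this body-scaled ratio, the Python sketch below classifies hypothetical organism-aperture pairs using an assumed critical A/S value of roughly 1.3, in the neighborhood of what Warren and Whang (1987) reported for human walkers; the function, the threshold placement, and the widths are illustrative assumptions rather than measurements.

```python
# Toy classification of apertures using the aperture-to-shoulder-width (A/S) ratio.
# The critical value of ~1.3 is an assumption loosely based on values reported for
# human walkers (Warren & Whang, 1987); all widths below are made up for illustration.

CRITICAL_RATIO = 1.3  # assumed transition between walking straight through and rotating

def passability(aperture_width: float, shoulder_width: float) -> str:
    """Classify an aperture relative to an organism's shoulder width."""
    ratio = aperture_width / shoulder_width
    if ratio >= CRITICAL_RATIO:
        return f"A/S = {ratio:.2f}: pass-through-able during normal locomotion"
    if ratio >= 1.0:
        return f"A/S = {ratio:.2f}: passable only by turning the body"
    return f"A/S = {ratio:.2f}: not pass-through-able"

# Hypothetical organism-aperture pairs (widths in meters)
for label, aperture, shoulders in [("human at a doorway", 0.80, 0.45),
                                   ("human at a narrow gap", 0.50, 0.45),
                                   ("mouse at a hole in the wall", 0.035, 0.025)]:
    print(f"{label}: {passability(aperture, shoulders)}")
```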

One way such events are explained in neuroscience is to identify how the environment is represented and computed in the visual system, followed by an account of how that information is used to send motor commands to the body to guide movement through the aperture. As discussed in the previous chapter, NExT is charged with accomplishing at least two goals. First, it must maintain the fundamental features of ecological psychology (namely, its four core principles) and of neuroscience (namely, activity at neural spatiotemporal scales). Second, for it to be an advance on both ecological psychology's and neuroscience's approaches to explaining an affordance like pass-through-able, it must integrate the explanatory features that each approach lacks on its own: relational features of the body and environment (e.g., the A/S ratio) for neuroscience, and causal and constitutive contributions of neural activity for ecological psychology. In essence, NExT explains the affordance of pass-through-able as follows: the affordance pass-through-able is an organism-environment system event that is caused and constituted by reciprocally interacting spatiotemporal activity at the neural scale (e.g., the neural population manifold for head movement), the body scale (i.e., a synergy by way of functionally defined anatomical organization and movement), and the environmental scale (i.e., organism- and environment-defined ecological information). Applying NExT to a specific example will help illustrate these features. Take the case of a mouse that is inside the walls of a house. While foraging, the mouse sees a ray of light. As it moves toward the light, it sees that the light is shining through a hole in the wall. The mouse approaches the hole because it may lead to new foraging opportunities. As the mouse comes close to the hole, it slows down and moves its head side to side in order to perceive whether the hole is wide enough for it to pass through. Since the hole is wide enough, it affords passing through for the mouse (Figure 7.1a). What is happening during this affordance event at the various scales of investigative interest? First, to explain this event, NExT hypothesizes that the organism-environment system (i.e., the "mouse-hole-in-wall system") is the privileged spatiotemporal scale of description to understand the perceived and acted-upon situation (Hypothesis 1; ecological psychology primary principle 4). Consequently, offering an explanation of pass-through-able for the mouse will be incomplete without accounting for causal and constitutive features of both mouse and environment. Second, neural spatiotemporal elements must be part of the explanation in order to include in NExT what is already in neuroscience-only approaches. As argued in Chapter 6, neural population dynamics are hypothesized as generating the relevant states for affordance events (Hypothesis 2). Third, the neural population dynamics that contribute to the causes and constitution of the affordance of pass-through-able are principally low-dimensional (Hypothesis 3). Specifically, the principal structure of these dynamics is governed by a torus manifold. While the mouse is deciding whether the hole affords pass-through-ability, its body is performing a reciprocal perception-action activity.
Specifically, the mouse is detecting ecological information about the hole (e.g., ambient light reflecting from the wall), which informs its actions (i.e., head movements), which in turn alter the structure of the ecological information it is detecting, and so on; that is, the perception-action loop (ecological psychology primary principle 2). What is happening at neural spatiotemporal scales is richly accounted for by the neural manifold hypothesis. Remember, the neural manifold hypothesis claims that very high-dimensional datasets obtained from neural population dynamics (Figure 7.1b, top row) have much lower-dimensional manifolds that capture their principal structure (Figure 7.1c, top). This principal structure is the neural mode that generates particular behaviors (cf. Gallego, Perich, Miller, & Solla, 2017).
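To make the neural manifold hypothesis concrete, here is a minimal Python sketch (assuming NumPy and scikit-learn are available) in which an invented population of cosine-tuned, head-direction-like cells is simulated and then reduced with principal component analysis; the 100-dimensional activity collapses onto a two-dimensional ring, a toy stand-in for the kind of principal structure the hypothesis describes. The tuning model and all parameters are illustrative assumptions, not values from the studies cited.

```python
# Minimal sketch of the neural manifold idea: high-dimensional population activity
# that is well captured by a low-dimensional (ring-shaped) principal structure.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_cells, n_samples = 100, 2000

preferred = rng.uniform(0, 2 * np.pi, n_cells)          # each cell's preferred head direction
head_direction = rng.uniform(0, 2 * np.pi, n_samples)   # the body-scale variable
rates = np.cos(head_direction[:, None] - preferred[None, :])   # idealized cosine tuning
rates += 0.2 * rng.standard_normal(rates.shape)                # measurement noise

pca = PCA(n_components=10).fit(rates)
print("variance explained by first 2 of 100 dimensions:",
      round(pca.explained_variance_ratio_[:2].sum(), 3))

# The low-dimensional scores trace out a ring: roughly constant radius,
# with angular position tracking head direction (up to a fixed rotation).
scores = pca.transform(rates)[:, :2]
radius = np.linalg.norm(scores, axis=1)
print(f"ring-like structure: radius mean {radius.mean():.2f}, std {radius.std():.2f}")
```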

Figure 7.1 The NeuroEcological Nexus Theory (NExT) applied to the affordance of pass-through-able. (a) The affordance pass-through-able is exhibited when a mouse walks through a hole, and involves features of the environment (e.g., hole in wall), organism (e.g., shoulder width), and brain (e.g., sensorimotor neural populations). (b) Each row depicts a data source that contributes to a NExT-based explanation. Top: Time series from multielectrode recording of relevant neural populations during the affordance event. Middle: Example of motion-tracking marker locations on the mouse body and trace data. Bottom: Sources of relevant ecological information; here, optic flow and physical and visual light fields. (c) Each row depicts a phase space plot with a state space generated from the low-dimensional principal structure identified from high-dimensional data recorded from sources such as those depicted in column b.

Source: (a) The author generated this image in part with DALL-E, a multimodal implementation of GPT-3, which is OpenAI's large-scale language-generation model. Upon generating the image, the author reviewed, edited, and revised the image to their own liking and takes ultimate responsibility for the content of this publication [https://openai.com/product/dall-e]; (b) Left: Modified and reprinted with permission from Gwilz, 2013. CC BY-SA 4.0. Right: Modified and reprinted with permission from Niedringhaus, Chen, and Dzakpasu (2015). CC BY 4.0; Huang et al. (2021). CC BY 4.0; Matthis, Muller, Bonnen, and Hayhoe (2022). CC BY 4.0; Kartashova, Sekulovski, de Ridder, te Pas, and Pont (2016). CC BY 4.0.


If the research aim is to identify the neural populations most relevant to the affordance pass-through-able, then the neural population that causes and constitutes head movement will be one focus because it plays a significant role in the mouse's engagement with the ecological information that guides the most applicable action. The activity of these particular neurons will be dependent on the mouse's orientation. The neural manifold hypothesis claims that the principal structure of these population dynamics will be expressed by particular topological dimensions, such that head direction is one variable (i.e., a 1-dimensional topology) that fits on a low-dimensional spline within the high-dimensional cloud of data (Figure 6.2f). Prior research has demonstrated the equivalence, in Euclidean space, of the location of activity along the low-dimensional spline, the abstracted torus (Figure 6.2g), and the mouse's real body (e.g., Chaudhuri, Gerçek, Pandey, Peyrache, & Fiete, 2019; Gallego et al., 2017). This is where NExT gets especially compelling. While Hypothesis 3 emphasizes neural population activity, the privileging of the organism-environment system (ecological psychology primary principle 4) and the mutuality of perception-action (ecological psychology primary principle 2) are maintained. A crucial reason for this is that the activity of neural population dynamics does not hold a unidirectional causal relationship with the body or environment. The state of the body (i.e., head location) and the environment (i.e., aperture edge) inform and constrain the state of the neural activity. If the head cannot move any further to the right because it hits up against the hole's edge, then the neural population will also not continue activity in that direction (cf. Figure 6.2a, c, g), as exhibited by its location on the low-dimensional manifold (Figure 6.2f, g). So too does the state of the neural activity inform and constrain the state of the body and environment, such that the direction in which the head points will provide the perspective from which the body (e.g., eyes) will detect ecological information (e.g., light reflecting from the surface edges of the hole). In that way, direct perception (ecological psychology primary principle 1) is also maintained, due to the fact that neural activity is informed and constrained by direct engagement with ecological information. Fourth, the mouse's body organizes into low-dimensional synergies to generate relevant states (Hypothesis 4). For reasons just discussed, while neural population dynamics play significant roles in the affordance pass-through-able, they do not do so to the exclusion of the states of the body or environment. Still, as an investigative spatiotemporal scale of interest, the body can be explained in its own way. In particular, the body is fruitfully understood as a synergy that contributes to organism-environment systems. Remember, synergies are functional assemblies of parts (e.g., muscles, tendons, etc.) that are temporally constrained to act as a single unit for specific tasks (Kelso, 2009; Kugler, Kelso, & Turvey, 1980). The body's ability to flexibly adapt to tasks is due in large part to its being a soft-assembled system, such that its material constitution is not rigidly constrained, so it can configure and reconfigure itself into functional coordinative structures, or synergies, in a context-sensitive manner (Favela, 2019).
In this way, adaptation and flexibility are forms of "good variability" when they facilitate organizations that contribute to task completion. Various data analysis methods (e.g., uncontrolled manifold; Scholz & Schöner, 1999) and motion tracking technologies are available to record, quantify, and assess the degree to which a body is functioning as a synergy (Figure 7.1b, middle row); a simple numerical sketch of this kind of variance decomposition appears below. In the current example, the mouse's body is a synergy when it is organized to enable the function of a successful encounter with a hole in the wall. Here, a "successful" synergy is not equivalent to the affordance pass-through-able. For the hole to afford pass-through-ability, the mouse would need to be able to move through it. In that way, the synergy is successful in that the functional coordinative structure that is the body contributed to passing through.


However, a synergy can be a "successful" functional coordinative structure when features of the environment do not facilitate an affordance. The mouse's body can be a successful synergy that detects a hole that does not afford pass-through-ability. That is to say, it takes a properly functioning unit that can engage with the environment to successfully detect opportunities for both action and non-action.2 It takes a synergy to contribute to engagement with both affordances and non-affordances. In other words, all affordances require synergies (ecological psychology primary principle 3).3 Viewing the relationship of affordances and synergies in this way is consistent with the primary principles of ecological psychology. For example, the body qua synergy requires that perception and action are continuous, as the body will adapt and reorganize depending on the structure of environmental information, which is itself partly caused and constituted by the organization of the body (ecological psychology primary principle 2). Furthermore, it is consistent to claim that what is perceived is accessed in a direct way (ecological psychology primary principle 1). Thus, while NExT applies the idea of synergies to understanding the body alone, viewing the spatiotemporal scale of the body's contributions to affordances as a synergy is consistent with core ideas that account for the other scales that constitute organism-environment systems (i.e., neural population activity and ecological information). Fifth, the affordance of pass-through-able for the mouse fundamentally emerges at low-dimensional scales of organism (neural, body)-environment activity (Hypothesis 5). Affordances do not exist alone in brains, bodies, or environments. Affordances are events that spread across organism-environment systems (Hypothesis 1; Figure 7.1a). For the hole in the wall to afford pass-through-ability, the integration and coordination of spatiotemporal scales must occur. The principal structure of low-dimensional manifolds in neural populations (Hypotheses 2 and 3; Figure 7.1b and c, top rows) exists in a reciprocal relationship that both causes and constitutes the body's structure and function, while also being caused and constituted by the body's activities. The principal structure of low-dimensional synergies of the body (Hypothesis 4; Figure 7.1b and c, middle rows) exists in a reciprocal relationship that both causes and constitutes its relationship with the environment, while also being caused and constituted by the environment's features. The ecological information of the environment (e.g., optical flow and the ambient light array; Figure 7.1b, bottom) exists in a reciprocal relationship that both causes and constitutes its relationship with the organism, while also being caused and constituted by the organism's features. It must be emphasized that the causal and constitutive contributions of each of these spatiotemporal scales to organism-environment systems during affordance events are low dimensional. They are "low dimensional" in a metaphysical and epistemic sense.

2 To understand this point, it can be helpful to think of the old teacher trick of offering students two options when taking a test: Option one is to take it as normal; that is, the grade is a product of the number of correct and incorrect answers. Option two is to take the test but the student must get every answer wrong; that is, the student will earn an A on the test if they get every answer wrong. The trick is that recognizing the "bad" requires recognizing the "good."
3 It is fair to reply that this wording is too strong, and that not all affordances require synergies. For example, the environment could provide features for affordances for a body that is not a functional coordinative structure, like a human with a broken ankle tripping over their own legs and fitting (falling?) through a doorway without hitting the sides. In that sense, a door affords passing through for any object that fits (e.g., clumsy donkey, rolling stone, etc.). In a very loose sense, that is true. However, such a broad sense of affordances misses the crucial roles that the idea plays within ecological psychology, such as providing a sophisticated account for what is meaningful to an organism or not (see Chapter 2).
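Returning to the uncontrolled manifold idea mentioned above: assuming a deliberately simple task in which two effectors must jointly produce a target total force, the Python sketch below splits trial-to-trial variance into a component that leaves the task outcome unchanged (along the uncontrolled manifold) and a component that perturbs it; a ratio well above one is commonly read as evidence that the effectors are functioning as a synergy. The task, the noise levels, and the variable names are hypothetical.

```python
# Toy uncontrolled-manifold (UCM) style variance decomposition for a two-effector
# force task: F1 + F2 should equal a target, so variability along (1, -1) is
# "good variability" (it leaves the total unchanged), while variability along
# (1, 1) perturbs the task outcome.
import numpy as np

rng = np.random.default_rng(1)
n_trials, target = 200, 10.0

# Simulate a synergy: large compensated variability, small uncompensated variability.
good = rng.normal(0.0, 1.0, n_trials)    # along (1, -1)/sqrt(2), task-irrelevant
bad = rng.normal(0.0, 0.2, n_trials)     # along (1, 1)/sqrt(2), task-relevant
f1 = target / 2 + (good + bad) / np.sqrt(2)
f2 = target / 2 + (-good + bad) / np.sqrt(2)

forces = np.column_stack([f1, f2])
forces = forces - forces.mean(axis=0)                 # center for the decomposition
ucm_direction = np.array([1.0, -1.0]) / np.sqrt(2)    # null space of the task Jacobian [1, 1]
ort_direction = np.array([1.0, 1.0]) / np.sqrt(2)

v_ucm = np.var(forces @ ucm_direction)   # variance that does not affect F1 + F2
v_ort = np.var(forces @ ort_direction)   # variance that does affect F1 + F2
print(f"V_UCM = {v_ucm:.2f}, V_ORT = {v_ort:.2f}, ratio = {v_ucm / v_ort:.1f}")
```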


Metaphysically speaking, the brain, body, and environment have a principal structure that causes and constitutes the phenomena of interest. During the affordance event that is a mouse passing through a hole, its relevant neural activity displays a neural mode for head movement (which coordinates with other neural modes, e.g., locomotion), its body organizes into a synergy, and the environment is constituted by lawful ecological information (e.g., reflective angles of ambient light; Figure 7.1b, bottom row). Each of these (i.e., neural mode, synergy, and ecological information) is low dimensional. They are metaphysically low dimensional in that there is a principal structure that integrates and coordinates with the other scales in systematic ways to facilitate task-defined success, such as the mouse successfully moving through a hole in the wall. This brings me to the epistemic sense in which these contributions are low dimensional. Researchers currently have at their disposal the ability to record at higher spatial and temporal resolution than ever before. This situation raises some challenges (see the data deluge challenge in Chapter 6). However, given the recent developments and increasingly sophisticated data analysis methods and technology, researchers have never been better poised to understand enormous amounts of multiscale and multimodal data like those observed in organism-environment systems. Understanding such data and identifying systematic relationships among brain, body, and environment is a fundamental aim of NExT. Hypothesizing that these data will have a principal structure that is low dimensional puts researchers on the path toward obtaining an epistemic grip on an investigative situation that can readily become overwhelming. Sixth, NExT claims that the architecture of affordance events will be explained by appeal to universal principles (Hypothesis 6). Even if it turns out that the principal structures of brains, bodies, and environments during affordance events are low dimensional, such discoveries will not provide full explanations or understanding of the phenomenon. Identifying, for example, that a mouse's neural mode for head movement during the affordance of pass-through-able is a low-dimensional manifold on a torus topology would certainly be informative in a descriptive sense. Still, it would not provide a full explanation. For that, NExT holds that such supported hypotheses would need to be situated within an explanatory framework that grounds those findings in universal principles.4 As discussed in Chapter 5, universality refers to the fact that there are patterns of activity and organization in nature that recur in diverse substrates and various contexts. In Chapter 6 it was argued that it is unlikely that one universal principle will account for the entire spatiotemporal range of phenomena across organism-environment systems. Instead, NExT presupposes the more likely situation to be one of a plurality of universal classes, certainly across scales but also potentially within scales as well (cf. Favela & Chemero, 2021, 2023). For example, criticality may explain the high speed of phase transitions among mouse neural populations, coordination dynamics may explain the mouse's body movements, and lawful energy structures may explain the ecological information. In this fashion, NExT aims to ground investigative findings (Hypotheses 1 through 5) in universal principles that provide the "why" part of the explanation. Taken together, the findings and the principles can facilitate a more complete understanding of the phenomenon of investigative interest.

4 This wording can readily be understood as implicitly embracing a covering law model (or deductive-nomological model) of explanation. Covering law models contend that a phenomenon is explained if and only if a statement of that fact is deduced from a set of statements where at least one of those statements includes a general scientific law (Salmon, 1989). The covering law model has been appealed to by various authors as providing the general explanatory style for research friendly to the one currently on offer, such as dynamical cognitive science (e.g., Chemero, 2000; Dale & Bhat, 2018; Moralez & Favela, 2016; Stepp, Chemero, & Turvey, 2011; Walmsley, 2008). As my aims are not to defend any particular explanatory theory or to apply one to NExT, I remain agnostic for the moment about this issue.
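One concrete way to pursue the epistemic grip described above is to estimate how many dimensions are needed to capture most of a recording's variance, for instance with the participation ratio of the covariance eigenvalue spectrum. The Python sketch below applies that estimate to synthetic data with an embedded three-dimensional latent process; the channel count, noise level, and generative model are invented for illustration and are not part of NExT itself.

```python
# Estimating effective dimensionality with the participation ratio
# PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues) of the data covariance.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_channels, latent_dim = 1000, 60, 3

# High-dimensional recording generated by a low-dimensional latent process plus noise.
latents = rng.standard_normal((n_samples, latent_dim))
mixing = rng.standard_normal((latent_dim, n_channels))
data = latents @ mixing + 0.1 * rng.standard_normal((n_samples, n_channels))

eigvals = np.linalg.eigvalsh(np.cov(data, rowvar=False))
participation_ratio = eigvals.sum() ** 2 / (eigvals ** 2).sum()
print(f"channels recorded: {n_channels}, "
      f"estimated effective dimensionality: {participation_ratio:.1f}")
```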


7.3 Making everybody happy? NExT = complexity science, ecological psychology, and neuroscience

This section summarizes how the previous NExT account of an affordance exhibits the features of complexity science, maintains the primary principles of ecological psychology, and offers a theoretically plausible and empirically assessable hypothesis about the contributions from the neural spatiotemporal scale.

7.3.1 NExT and complexity science

The NExT account of the affordance pass-through-able exhibits all features of complexity science discussed in Chapter 5. First, the affordance event is treated as a complex systems phenomenon. Accordingly, the overall organism-environment system and its contributing subscales (that is, neural populations, body, and animal-scale detection of ecological information) are properly understood as emergent scales. For example, each scale exhibits interaction-dominant dynamics and relationality. The dynamics within and between scales are nonlinear, undergoing qualitative shifts over the course of the affordance event. Related to emergence and nonlinearity, each scale is self-organized. For example, the principal structure of neural population and synergy dynamics emerges via phase transitions that are produced by processes such as feedback rather than by external control. Finally, these dynamics are hypothesized as falling within a finite set of universal classes, such as those exhibited during coordination dynamics and critical states. In order to investigate the aforementioned features, NExT draws on methodologies from complexity science. This is especially true of nonlinear dynamical systems theory (NDST) and synergetics. Many of the analytic methods in manifold theory come from NDST, including both quantitative tools (e.g., differential equations) and qualitative tools (e.g., state spaces). From synergetics, NExT appeals to the order parameter and control parameter approach. Here, the activity at each relevant spatiotemporal scale that primarily contributes to the system-level phenomenon can be treated as an order parameter. For example, the neural population dynamics on a manifold are the order parameter. The control parameters, or contributing variables, that guide the order parameter's "macroscopic" activity during the affordance pass-through-able include considerations such as inputs into the head direction cell population from other brain areas and perturbations to the body synergy. Most significantly, NExT utilizes dimensionality reduction in various forms at each scale in order to both manage the large amount of data and to identify principal structures (e.g., neural modes). As a result of integrating concepts, methods, and theories from complexity science, NExT can be practiced via the following steps (Section 5.5); a toy numerical illustration of steps 2 through 5 follows the list:

1. Identify the phenomenon of interest: This is the overarching phenomenon (i.e., the affordance pass-through-able) and the "sub-phenomena" that make significant contributions to it (i.e., neural population activity, body synergy, and structure of ecological information).
2. Define the order parameter: As with step 1, because there is an overarching phenomenon that has causally and constitutively relevant contributions from other scales, each of those scales will be defined by its own order parameter, and these order parameters can be nested within each other. As a result, pass-through-able is an order parameter, but so too are head movements for the body synergy and for the neural population.
3. Define mathematical model to capture the physical model: This step requires appealing to a cluster of tools from NDST and synergetics because each scale may be best modeled in a variety of ways. When applied to pass-through-able, NExT defines models for the neural population via manifold methodologies and for the body via uncontrolled manifold models. The aim is to find systematic relationships among the models at the various scales.
4. Solve mathematical model to identify system states: Once the proper models are identified in the previous step, they can be plotted via phase spaces to reveal various qualitative features, such as attractor states and transition points. So too, the aim is to find systematic relationships among the various state spaces (Figure 6.5).
5. Identify and define control parameters: The quantitative work of modeling the order parameter and the qualitative work of plotting those systems in a state space facilitate identifying more precise features of the phenomenon, namely, the control parameters that provide the principal guiding structure.
6. Measure-the-simulation: As is clear by now, NExT relies on data analysis tools to model empirical data and produce simulations to conduct further analyses. Because much of the data undergoes significant transformations and processing (e.g., from raw multielectrode array recordings of neural populations to dimensionally reduced manifolds), it is important to examine the models and simulations themselves for ways they may distort the original data. This is especially important when attempting to identify systematic relationships among scales, where artifacts and confounds can introduce mischaracterized data features that percolate throughout the overarching system.
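To illustrate steps 2 through 5 in a self-contained way, the Python sketch below treats relative phase as an order parameter using the Haken-Kelso-Bunz (HKB) equation, sweeps the ratio b/a as a control parameter (in the original model it falls as movement frequency rises), and reports which relative phases the dynamics settle into. This is a deliberately simple, single-scale toy standing in for the richer multiscale models NExT calls for, and the specific parameter values are illustrative.

```python
# Order parameter / control parameter toy using the Haken-Kelso-Bunz (HKB) equation
# for relative phase phi:  dphi/dt = -a*sin(phi) - 2*b*sin(2*phi).
# Treat b/a as the control parameter: anti-phase coordination (phi = pi) loses
# stability as b/a drops below roughly 0.25.
import numpy as np

def hkb(phi, a=1.0, b=1.0):
    """Order-parameter dynamics of relative phase."""
    return -a * np.sin(phi) - 2 * b * np.sin(2 * phi)

def settle(phi0, b_over_a, dt=0.01, steps=5000):
    """Integrate with Euler steps and report the attractor, folded to [0, pi]."""
    phi = phi0
    for _ in range(steps):
        phi += dt * hkb(phi, a=1.0, b=b_over_a)
    phi = np.mod(phi, 2 * np.pi)
    return round(float(min(phi, 2 * np.pi - phi)), 2)

for b_over_a in (1.0, 0.1):  # high vs. low value of the control parameter
    initial_phases = np.linspace(0.1, 2 * np.pi - 0.1, 12)
    attractors = sorted({settle(phi0, b_over_a) for phi0 in initial_phases})
    print(f"b/a = {b_over_a}: stable relative phases ~ {attractors}")
```

Running the sketch shows both in-phase (0) and anti-phase (pi) attractors at the high value of the control parameter, but only the in-phase attractor at the low value, which is the qualitative shift (phase transition) the text describes.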

7.3.2 NExT and ecological psychology

The NExT account of the affordance pass-through-able maintains all four primary principles at the core of ecological psychology discussed in Chapter 2.

1. Perception is direct: The NExT account of pass-through-able maintains this in two primary ways. First, at the organism-environment scale, the organism's perceptual capacities detect ecological information in an unmediated way. Second, that ecological information directly influences relevant neural population activity. The reciprocal relationship of the neural manifold that causes and constitutes head movement is guided by the state of the body and the state of ecological information. For example, environmental structures (e.g., wall reflectance) can directly constrain the body's action, which guides the neural population for head movement.
2. Perception and action are continuous: The perception-action loop is maintained due to the reciprocal relationship of the principal structures at each scale identified by NExT as causing and constituting the affordance event of pass-through-able. As discussed earlier, the mouse is performing a reciprocal perception-action activity that consists of its neural populations guiding head movement, which alters visual perception of the hole, which alters its body movements, which changes its perception of the hole, which contributes to neural population activity, and so on.
3. Affordances: As a consequence of affordances being the meaningful opportunities for action, both neural populations and body synergies that facilitate successful engagement with the environment will be selected for. This evolutionary success is what guides the selection of neural populations (e.g., Neural Darwinism) and body characteristics (i.e., functional units).
4. Organism-environment system: NExT treats the organism-environment system (e.g., the "mouse-hole-in-wall system") as the privileged spatiotemporal scale of description for understanding the perception-action event that is the affordance pass-through-able. This follows primarily from two things: first, the previous three principles, and second, the necessarily reciprocal nature of that system and its causally and constitutively relevant subscales, namely, neural populations, body synergies, and ecological information.

7.3.3 NExT and neuroscience

The NExT account of the affordance pass-through-able would be inadequate, both as a complete explanation of the aforesaid event and as an advance on ecological psychology, if it did not offer a theoretically plausible and empirically assessable hypothesis about contributions from the neural spatiotemporal scale. Targeting neural population activity and appealing to methods like the neural manifold hypothesis provides that part of the account. The previous chapter presented the neural manifold hypothesis as offering a rich methodology for addressing a number of challenges facing research that attempts to make connections among brain, body, and environment activities. As previously discussed, clear connections can be made between states of neural population activity (i.e., manifold activity that governs head movement), the body (i.e., head movement), and the environment (i.e., ecological information, such as the substance hardness of an aperture's boundaries). These connections are based on empirically supported research. The speculative aspect of these claims concerns the extent to which this approach can be applied, namely, to other affordances and forms of mind (e.g., decision making, goal-directed behavior, prospective control, etc.).
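One way to put empirical teeth on the claimed brain-body linkage is to ask how strongly the low-dimensional neural manifold trajectory covaries with head-movement kinematics. The sketch below uses canonical correlation analysis on synthetic placeholder time series (hypothetical "neural mode" and "head kinematics" variables) to estimate shared dimensions between the two scales; it is a generic illustration of the kind of cross-scale analysis NExT calls for, not a reanalysis of the head-direction datasets cited in this chapter.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)

# Placeholder time series: 3 neural modes and 4 head-kinematic variables
# (e.g., yaw, pitch, and their velocities), sharing two latent drivers.
n_samples = 1500
shared = rng.normal(size=(n_samples, 2))
neural_modes = shared @ rng.normal(size=(2, 3)) + 0.3 * rng.normal(size=(n_samples, 3))
head_kinematics = shared @ rng.normal(size=(2, 4)) + 0.3 * rng.normal(size=(n_samples, 4))

# Canonical correlation analysis finds paired low-dimensional projections of
# the two scales that are maximally correlated -- a simple, quantitative
# handle on "systematic relationships among scales."
cca = CCA(n_components=2)
neural_proj, head_proj = cca.fit_transform(neural_modes, head_kinematics)
canonical_corrs = [np.corrcoef(neural_proj[:, k], head_proj[:, k])[0, 1] for k in range(2)]
print("Canonical correlations (neural modes vs. head kinematics):",
      np.round(canonical_corrs, 2))
```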

7.4 What comes NExT?

Section 7.2 provided an example of the application of NExT to the investigation of a specific phenomenon, namely, the affordance of pass-through-able. Section 7.3 provided an overview of the ways that account is a complexity science and integrates ecological psychology and neuroscience. One feature of NExT that was not explicated in as much detail is the part that involves identifying systematic relationships among spatiotemporal scales (Figure 7.1c; see arrows relating state spaces in Figure 6.5). In light of the earlier discussion of the anticipated plurality of universal classes between and within spatiotemporal scales (also see Chapter 6), it would be reasonable to assume that the overall investigative aim of NExT is a harmonious, though piecemeal, set of disunified explanations. The story told about NExT thus far supports that conclusion. Moreover, viewing the investigation of organism-environment systems in that way would be congruous with views of neuroscience as a "mosaic unity in which distinct fields contribute piecemeal to the construction of a complex and evidentially robust mechanistic explanation" (Craver, 2007, p. 19). Still, what could come next for NExT with regard to the dis/unification of explanations?

As discussed in Chapter 6, an important feature of NExT is its aim to elucidate systematic relationships among the multiple spatiotemporal scales that cause and constitute phenomena of investigative interest, such as affordance events. A potential next step for NExT would be to substantially integrate those yet-to-be-identified systematic relationships into a single metric similar to taxonomic ranks in biology. Such a metric could define affordances in ways that include the contributions of the brain, body, and environment, for example, defining the affordance of step-on-able as: Step-on-ableαβγ, where α = brain, β = body, and γ = environment. A metric like this could enable the development of a taxonomy of organism-environment activities, which could have benefits like communication across disciplines and large-scale analyses within and between organisms and tasks. Leveraging recently developed tools from the data sciences can provide sketches of what such a metric could look like in practice.

The first example comes from research on manifold flows. Reparameterization is a technique for maintaining essentially the same model of a phenomenon while transforming it in a manner that makes it easier to interpret (Hox, 2010, pp. 68–69). Normalizing flows is a method for conducting the reparameterization of variables from highly complicated datasets. This process is relatively easy with manifolds in Euclidean spaces. Recently, work in machine learning has developed methods to produce manifolds with normalizing flows in more intricate, non-Euclidean spaces. Lou and colleagues (2020) developed the Neural Manifold Ordinary Differential Equations (Neural ODE) method to construct manifold continuous normalizing flows. The Neural ODE method allows for the connecting of different manifolds and the prediction of future manifolds that could follow from the prior integrated flows. The application of this approach is depicted in their "dynamic chart method" (Figure 7.2a). When applied to NExT, consider the three distinct manifold flows at the bottom of Figure 7.2a as depicting the three state spaces generated from low-dimensional principal structures across brain, body, and environment data, as seen in Figure 7.1c. One way to integrate these scales is to apply the Neural ODE method to construct a single manifold of continuous normalizing flows. If successful, such a method could provide scientific virtues like manipulation (e.g., quantifying how changes in variables alter the organism-environment system-level flow) and prediction (e.g., estimating the organism-environment system state at future times). As described, the dynamic chart method by way of the Neural ODE approach provides NExT with a process for producing rich qualitative and quantitative integration of highly complicated data.
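As a rough indication of what this could look like in code, the sketch below fits a small neural ODE to a synthetic, placeholder low-dimensional trajectory and then integrates the learned vector field beyond the observed window to "predict" future states. It assumes the torchdiffeq package is available and works in a flat (Euclidean) latent space; it is only a bare-bones stand-in for the manifold continuous normalizing flows and dynamic chart method of Lou et al. (2020), which are designed for non-Euclidean manifolds.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumes torchdiffeq is installed

class VectorField(nn.Module):
    """Small MLP standing in for a learned vector field on a flat latent space."""
    def __init__(self, dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, y):
        return self.net(y)

# Placeholder "observed" low-dimensional trajectory pooled across brain, body,
# and environment state spaces (random walk, not real recordings).
t = torch.linspace(0.0, 1.0, 50)
observed = torch.cumsum(0.05 * torch.randn(50, 3), dim=0)

field = VectorField()
optimizer = torch.optim.Adam(field.parameters(), lr=1e-2)
for _ in range(100):                           # short, illustrative training loop
    predicted = odeint(field, observed[0], t)  # integrate the learned flow
    loss = ((predicted - observed) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Once fitted, the same vector field can be integrated past the observed window
# to forecast future organism-environment states along the combined flow.
t_future = torch.linspace(0.0, 1.5, 75)
forecast = odeint(field, observed[0], t_future)
print("forecast shape:", forecast.shape)       # (75, 3)
```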
Another potential method that could, in principle, provide integration for NExT via rich qualitative and quantitative analyses is found in research by Mitra and colleagues (2020). In this work, Mitra et al. (2020) propose the "Multi-view Neighbourhood Embedding (MvNE)" methodology, which aims to capture as much variability within multi-view data sets as possible within a unified distribution. It does this by way of embedding, which can be understood as a kind of dimensionality reduction (see Chapter 5): a relatively low-dimensional space into which high-dimensional vectors can be translated (Google Developers, 2023). Embeddings find points in a low-dimensional space and place similar inputs close together in that space. Multi-view data sets are intended to provide "consistent and complimentary information" (Mitra et al., 2020, p. 1) from multiple data sources, such as a clinical record comprised of patient data and gene expressions.

When applied to NExT, the multiple data sources that are combined (Figure 7.2b, left) could be from multielectrode arrays, motion tracking, and optic flow recording (Figure 7.1b). Whereas NExT currently produces phase space plots depicting the relationship of each dataset in separate state spaces (Figure 7.1c), conflation techniques like those described by Mitra et al. (2020) could be applied to combine the state spaces in a meaningful way (Figure 7.2b, middle). Here, "meaningful" refers to identifying relationships among data that are not merely statistically significant, but those that reveal causal and constitutive characteristics. From there, a set of values could be obtained with MvNE to produce a unified low-dimensional embedding (Figure 7.2b, right).
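A simple way to convey the spirit of such a conflation step, without reproducing the actual MvNE algorithm, is to standardize each "view" (neural, body, environment), concatenate them, and compute a single low-dimensional embedding of the joint data. The sketch below does exactly that, with PCA standing in for MvNE's probabilistic embedding; all variable names and dimensions are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_samples = 1000

# Three "views" of the same affordance event (placeholder data): neural modes,
# body-synergy coordinates, and ecological-information variables.
views = {
    "neural": rng.normal(size=(n_samples, 8)),
    "body": rng.normal(size=(n_samples, 5)),
    "environment": rng.normal(size=(n_samples, 4)),
}

# Standardize each view so no single scale dominates, then concatenate.
standardized = [StandardScaler().fit_transform(X) for X in views.values()]
joint = np.hstack(standardized)

# A single low-dimensional embedding of the joint organism-environment data;
# its coordinates could serve as a first-pass unified state space.
embedding = PCA(n_components=3).fit_transform(joint)
print("Unified embedding shape:", embedding.shape)  # (1000, 3)
```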


Figure 7.2 Potential techniques for NExT to develop single metrics. (a) Utilizing the Neural Manifold Ordinary Differential Equations (Neural ODE) method to solve a single manifold ODE via the dynamic chart method (Lou et al., 2020). (b) Multiple low-dimensional values analyzed from multiple views of a phenomenon and their state spaces (left). The conflation method integrates those multiple data sources into a single state space comprised of probabilistic low-dimensional values (center). Low-dimensional variables and subsequent state space generated from the combined previous values (right; Mitra, Saha, & Hasanuzzaman, 2020). Source: (a) Modified and reprinted with permission from Lou et al. (2020). Courtesy of authors; (b) Modified and reprinted with permission from Mitra et al. (2020). CC BY 4.0.


This low-dimensional embedding could serve as a unified metric for organism-environment system activities like affordance events. The ability of techniques like Neural ODE (Lou et al., 2020) and MvNE (Mitra et al., 2020) to provide the kind of unification envisioned here is yet to be determined. Nevertheless, it is based on concrete research that has been successful in other domains. As such, there is no in-principle reason to dismiss the possibility of successful application to NExT.

7.5 Conclusion

In this chapter, I attempted to provide a clear demonstration of the applicability of NExT to a real-world target of investigation: the affordance of pass-through-able for a mouse. By incorporating prior research on mouse brains (e.g., multielectrode recordings during head movement), mouse body movement (e.g., motion tracking), and real environmental features that a mouse would encounter in such a scenario (e.g., optic flow and ambient light array), a proof of concept was demonstrated for the successful integration of ecological psychology and neuroscience. All four core principles of ecological psychology were honored: Perception was treated as direct, perception and action as continuous, affordances as the central phenomenon, and the organism-environment system as the privileged scale of investigation. Additionally, neuroscience was honored by way of incorporating a rich account of causal and constitutive contributions of neural spatiotemporal scale activity to the affordance event. After, I provided a more speculative outline of potential future developments for NExT in the form of two approaches that could enable deeper unification across spatiotemporal scales. I conclude the book in the next chapter with a bit of review, acknowledge some potential consequences of applying the NExT framework, and give some reasons to be optimistic about the prospects of reconciling ecological psychology and neuroscience.

References

Chaudhuri, R., Gerçek, B., Pandey, B., Peyrache, A., & Fiete, I. (2019). The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nature Neuroscience, 22(9), 1512–1520. https://doi.org/10.1038/s41593-019-0460-x
Chemero, A. (2000). Anti-representationalism and the dynamical stance. Philosophy of Science, 67(4), 625–647. https://doi.org/10.1086/392858
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. New York, NY: Oxford University Press.
Dale, R., & Bhat, H. S. (2018). Equations of mind: Data science for inferring nonlinear dynamics of socio-cognitive systems. Cognitive Systems Research, 52, 275–290. https://doi.org/10.1016/j.cogsys.2018.06.020
Favela, L. H. (2019). Soft-assembled human-machine perceptual systems. Adaptive Behavior, 27(6), 423–437. https://doi.org/10.1177/1059712319847129
Favela, L. H., Amon, M. J., Lobo, L., & Chemero, A. (2021). Empirical evidence for extended cognitive systems. Cognitive Science: A Multidisciplinary Journal, 45(11), e13060, 1–27. https://doi.org/10.1111/cogs.13060
Favela, L. H., & Chemero, A. (2021). Explanatory pluralism: A case study from the life sciences. PhilSci-Archive. [Preprint]. http://philsci-archive.pitt.edu/id/eprint/19146
Favela, L. H., & Chemero, A. (2023). Plural methods for plural ontologies: A case study from the life sciences. In M.-O. Casper & G. F. Artese (Eds.), Situated cognition research: Methodological foundations. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-031-39744-8_14
Favela, L. H., Riley, M. A., Shockley, K., & Chemero, A. (2014).
Augmenting the sensory judgment abilities of the visually impaired. Paper presented at the 122nd Annual Convention of the American Psychological Association, Washington, DC. https://doi.org/10.1037/e553822014-001


Favela, L. H., Riley, M. A., Shockley, K., & Chemero, A. (2018). Perceptually equivalent judgments made visually and via haptic sensory-substitution devices. Ecological Psychology, 30(4), 326–345. https://doi.org/10.1080/10407413.2018.1473712 Gallego, J. A., Perich, M. G., Miller, L. E., & Solla, S. A. (2017). Neural manifolds for the control of movement. Neuron, 94(5), 978–984. https://doi.org/10.1016/j.neuron.2017.05.025 Gibson, J. J. (1986/2015). The ecological approach to visual perception (classic ed.). New York, NY: Psychology Press. Google Developers. (2023). Embeddings. Foundational Courses: Machine Learning Crash Course. Retrieved April 26, 2023 from https://developers.google.com/machine-learning/crash-course/ embeddings/video-lecture Gwilz. (2013). Vector diagram of laboratory mouse (black and white). Wikipedia. Retrieved October 29, 2022 from https://commons.wikimedia.org/wiki/File:Vector_diagram_of_laboratory_mouse_ (black_and_white).svg Hox, J. J. (2010). Multilevel analysis: Techniques and applications (2nd ed.). New York, NY: Routledge. Huang, K., Han, Y., Chen, K., Pan, H., Zhao, G., Yi, W., . . . Wang, L. (2021). A hierarchical 3D-motion learning framework for animal spontaneous behavior mapping. Nature Communications, 12(2784). https://doi.org/10.1038/s41467-021-22970-y Kartashova, T., Sekulovski, D., de Ridder, H., te Pas, S. F., & Pont, S. C. (2016). The global structure of the visual light feld and its relation to the physical light feld. Journal of Vision, 16(10), 1–16. https://doi.org/10.1167/16.10.9 Kelso, J. A. S. (2009). Synergies: Atoms of brain and behavior. In D. Sternad (Ed.), Progress in motor control: A multidisciplinary perspective (pp. 83–91). New York, NY: Springer. https://doi. org/10.1007/978-0-387-77064-2_5 Kugler, P. N., Kelso, J. A. S., & Turvey, M. T. (1980). On the concept of coordinative structures as dissipative structures. I: Theoretical lines of convergence. In G. E. Stelmach & J. Requin (Eds.), Tutorials in motor behavior. New York, NY: North Holland Publishing Co. Lou, A., Lim, D., Katsman, I., Huang, L., Jiang, Q., Lim, S. N., & De Sa, C. (2020). Neural manifold ordinary diferential equations. arXiv preprint, arXiv:2006.10254. https://doi.org/10.48550/ arxiv.2006.10254 Matthis, J. S., Muller, K. S., Bonnen, K. L., & Hayhoe, M. M. (2022). Retinal optic fow during natural locomotion. PLoS Computational Biology, 18(2), e1009575. https://doi.org/10.1371/journal. pcbi.1009575 Mitra, S., Saha, S., & Hasanuzzaman, M. (2020). Multi-view clustering for multi-omics data using unifed embedding. Scientifc Reports, 10(13654). https://doi.org/10.1038/s41598-020-70229-1 Moralez, L. A., & Favela, L. H. (2016). Thermodynamics and cognition: Towards a lawful explanation of the mind. In A. Papafragou, D. Grodner, D. Mirman, & J. C. Trueswell (Eds.), Proceedings of the 38th annual conference of the Cognitive Science Society (pp. 948–953). Austin, TX: Cognitive Science Society. https://cogsci.mindmodeling.org/2016/papers/0173/ Niedringhaus, M., Chen, X., & Dzakpasu, R. (2015). Long-term dynamical constraints on pharmacologically evoked potentiation imply activity conservation within in vitro hippocampal networks. PLoS One, 10(6), e0129324. https://doi.org/10.1371/journal.pone.0129324 Salmon, W. C. (1989). Four decades of scientifc explanation. In P. Kitcher & W. C. Salmon (Eds.), Scientifc explanation (Minnesota Studies in the Philosophy of Science, Vol. 13, pp.  3–219). Minneapolis, MN: University of Minnesota Press. Scholz, J. P., & Schöner, G. (1999). 
The uncontrolled manifold concept: Identifying control variables for a functional task. Experimental Brain Research, 126, 289–306. https://doi.org/10.1007/ s002210050738 Stepp, N., Chemero, A., & Turvey, M. T. (2011). Philosophy for the rest of cognitive science. Topics in Cognitive Science, 3(2), 425–437. https://doi.org/10.1111/j.1756-8765.2011.01143.x Walmsley, J. (2008). Explanation in dynamical cognitive science. Minds and Machines, 18, 331–348. https://doi.org/10.1007/s11023-008-9103-9

8 Conclusion

The worst readers are those who act like plundering soldiers. They take out some things that they might use, cover the rest with filth and confusion, and blaspheme about the whole. (Nietzsche, 1879/1913, p. 69)

The preceding chapters have presented reasons why ecological psychology and neuroscience have been viewed as incommensurable and irreconcilable approaches. A path was offered toward reconciliation via a complexity science-based framework: the NeuroEcological Nexus Theory (NExT). This conclusion summarizes the main points presented in the book. After, some challenges to the NExT framework will be addressed. Then, reasons why the account currently on offer might make folks upset are explored. Finally, reasons to be optimistic about the prospects of reconciling ecological psychology and neuroscience will be offered.

8.1 The ecological brain

On the surface, the challenge motivating this book is the idea that ecological psychology and neuroscience are commonly understood as being incommensurable and irreconcilable ways of investigating and explaining mind. The deeper challenge stems from the more general tension between approaches to mind that, on the one hand, focus on the brain as the most causally and constitutively significant factor and, on the other hand, those that focus on the body and environment. Thus, while ecological psychology and neuroscience have been the focus, the hope has been for the discussion to generalize when appropriate to other relevant fields and approaches, such as computer science (especially artificial intelligence), philosophy of mind (e.g., computational theory of mind), and situatedness (e.g., distributed cognition). Accordingly, a number of discussions throughout the book have intersected and overlapped with fields outside of ecological psychology and neuroscience, such as the cognitive sciences and the history and philosophy of science. This has been unavoidable given the interdisciplinary and multidisciplinary nature of the mind sciences.

Chapter 1 aimed at setting the scope of the discussion. This included not offering a precise definition of "mind." While not defining mind may be frustrating to some, pointing to paradigmatic cases was enough to contribute to the aims of the book. To that end, visually-guided action was an example that highlighted key differences between ecological psychology and neuroscience as investigative frameworks. Setting the scope also involved limiting what was meant by "neuroscience." While ecological psychology is a more unified approach, there is no single set of methods or target phenomena across all neuroscience subdisciplines. Consequently, when ecological psychology is contrasted with neuroscience, it has been predominantly limited to behavioral, cognitive, computational, and sensory
neurosciences; that is to say, those subdisciplines that tend to investigate the same or similar phenomena as ecological psychology.

Chapter 2 offered a historically and conceptually-focused story about how ecological psychology and neuroscience came to be viewed as incommensurable and irreconcilable. Historically speaking, it was helpful to view both ecological psychology and neuroscience as stemming from early twentieth century reactions against behaviorism and World War II era developments in psychology. Ecological psychology grew directly from limitations of standard theories of visual perception in perceptual psychology. Neuroscience would come to be infused with the cognitivism that grew out of the cognitive revolution of the mid-1900s. In particular, the information processing understanding of mind was spurred by World War II era developments in information theory, artificial intelligence, and linguistics. Conceptually speaking, ecological psychology and cognitivism could not be more different. Cognitivism adhered to an understanding of mind that stressed its information processing nature, particularly in terms of the roles of computations and representations. Ecological psychology stressed the situated nature of mind, particularly the roles of perception-action, embodiment, and features of the environment. This chapter concluded by claiming that ecological psychology overcomes key limitations of cognitivism stemming from disembodiment and its inability to provide compelling accounts of the sources of innateness, meaning, and intelligence.

Chapter 3 continued the historical and conceptual story. Here it was argued that a major reason why ecological psychology and neuroscience are viewed as incommensurable and irreconcilable is due in large part to the latter's embracing the cognitivism that the former rejected. The discussion began by stressing the significance of appreciating investigators' perspectives and motivations in guiding the development of scientific frameworks. Hippocrates' and Aristotle's views about the primary organ of mind were illustrative in this regard. Hippocrates was focused on medical care and stressed the fact that damage to the head altered states of mind; thus, the brain was taken as primary. Aristotle was focused on understanding the telos of humans as centering on perception and action; thus, the heart, which pumps the blood that allows for perceiving and acting, was taken as primary. In a similar fashion, the neuroscience of the mid to late twentieth century could be understood as being guided by two perspectives: the Hodgkin-Huxley tradition, which emphasized biological features of neurons, such as electrophysiological properties, and the McCulloch-Pitts tradition, which laid emphasis on abstract features of neurons, such as their logical properties. The remainder of the chapter motivated the claim that neuroscience (i.e., behavioral, cognitive, computational, and sensory) embraced the McCulloch-Pitts tradition and became infused with cognitivist commitments to the information-processing features of neural systems. As a result, ecological psychology views neuroscience qua the McCulloch-Pitts tradition as having the same limitations it criticized cognitive psychology and cognitive science as having.

Chapter 4 concluded the part of the book that can be viewed as setting the scope of the issue and providing the historical and conceptual background.
Here, overviews and critical assessments of prior attempts to reconcile ecological psychology and neuroscience were presented. That is to say, eforts to formulate an ecological neuroscience. Three examples of ecological neuroscience were discussed, which were labeled broadly as Edward Reed’s approach, neural reuse, and Bayesianism. Chapter 5 began the part of the book that ofered a solution to the challenge of reconciling ecological psychology and neuroscience. It was claimed here that complexity science is an interdisciplinary framework appropriate for the investigation of phenomena studied via ecological psychology and neuroscience. Within this framework, mind is understood as being the kinds of complex systems that exhibit four key features: emergence, nonlinearity, self-organization, and universality. In order to fruitfully conduct research on systems

with those properties, complexity science employs a range of concepts, methods, and theories that are integrated from disciplines such as systems theory, nonlinear dynamical systems theory (NDST), and synergetics.

Chapter 6 is the heart of the book, as it presented a way to integrate ecological psychology and neuroscience under a unified complexity science-based approach: the NeuroEcological Nexus Theory (NExT). NExT is an investigative framework and theory about the nature of minded organism-environment systems. Its phenomena of interest are organisms with nervous systems (Neuro-) that are situated in bodies and environments (-Ecological). Mind is caused and constituted by particular kinds of connections among the landscape of parts (Nexus). It was argued that NExT is a compelling solution because it both maintains the best of ecological psychology and neuroscience and enhances them. NExT respects ecological psychology's four primary principles and maintains neuroscience's purview by offering a theoretically compelling and empirically supported story about neural contributions to perception-action events at the spatial and temporal scales of organism-environment systems. NExT is comprised of six hypotheses that leverage concepts, methods, and theories from complexity science to synthesize neural population dynamics, synergies, and ecological psychology's four principles into a unified framework.

Chapter 7 demonstrated the viability of NExT via application to the affordance of pass-through-able for a mouse. This affordance event offered a clear demonstration of the ability of NExT to account for the three spatiotemporal scales most relevant to explaining the organism-environment system activity: head movement neural population manifold, body synergy, and ecological information. After, an overview was presented of the ways this account demonstrates that NExT is a complexity science, maintains ecological psychology's primary principles, and explicates causal and constitutive contributions from neuroscience's spatiotemporal purview. The chapter concluded with discussion of potential future directions for NExT.

I am now positioned to answer one of the biggest questions that has been on readers' minds: "What is the ecological brain?" At its most straightforward, the phrase "ecological brain" means that brains must be understood as always being part of ecologies. Brains exist in the environments of skulls, of bodies, and of worlds, which are themselves comprised of climates (e.g., temperature), fundamental interactions (e.g., gravity), and social institutions and relationships, respectively. In view of that, two facts must be considered at all times when attempting to investigate, explain, and understand mind qua ecological psychology and neuroscience's target of investigative interest. First, even at the spatiotemporal scales of organism-environment system activities, the nervous system plays essential roles in causing and constituting those phenomena. Second, and perhaps paradoxically when considering the previous point, the nervous system is not special (i.e., exceptional or privileged) when it comes to investigating organism-environment system activities. The body and environment are just as essential as the nervous system when it comes to providing a full and fruitful explanation and understanding of mind.1

1 For those readers who think that the idea of the "ecological brain" presented here has affinities with developmental systems theory, you are correct.
Developmental systems theory is a general framework for understanding evolution, heredity, and development (Oyama, 2000; Oyama, Gray, & Griffiths, 2001). Contrary to gene-centered and ontologically-reductive approaches to those topics, developmental systems theory stresses context sensitivity, multiple causation, and evolution as construction (i.e., evolution is about organism-environment systems changing over time), among others. In recent years, developmental systems theory has found allies in methods from complex systems and nonlinear dynamical systems theory (e.g., Molenaar, Lerner, & Newell, 2014). While the account I am offering is not centered on issues of evolution or development, they certainly play crucial roles in various aspects, such as the Neural Darwinism at the core of population thinking discussed in Chapter 6.


8.2 Challenges

The earlier chapters of the book focused on providing a path to understanding the sources and ways ecological psychology and neuroscience are incommensurable and irreconcilable. The later chapters focused on explicating and applying an approach that can integrate ecological psychology and neuroscience. This section draws attention to four sets of challenges facing such attempts at integration and the approach on offer (i.e., NExT).

Challenge 1: Why not reduce ecological psychology to neuroscience, or vice versa?

The straightforward response to this challenge is that reducing one to the other ignores the goal of integrating or reconciling the two areas, for reducing one to the other would seem to require that the other makes too many compromises. But to respond to that question more directly, it is necessary to ask another question first: What kind of "reduction": epistemic, methodological, or ontological (Brigandt & Love, 2023)? Epistemic reductionism holds that the knowledge of higher-level sciences can be reduced to more fundamental-level sciences, for example, reducing chemistry to physics. In those terms, ecological psychology cannot be reduced to neuroscience, as affordances can only be properly understood by including environmental considerations and cannot be explained by neural activity alone. Likewise, neural activity cannot be "reduced" to ecological-scale activity, as the former occupies spatiotemporal scales that the latter does not. Methodological reductionism holds that the best way to do science is to aim at investigating the lowest possible levels, for example, investigating the biochemical parts and processes of cells to best understand a plant. Since affordances exist at the organism-environment system scale, while methodologically it can be interesting to identify the smaller scale constituents (e.g., neural activity contributing to head movement), privileging that scale would have the consequence of no longer investigating "affordances" as the ecological psychologist views them. A similar point holds for neural activity, which would not be properly understood at ecological spatiotemporal scales. Finally, ontological reductionism holds that a system is comprised only of its smallest constituent parts and their interactions, for example, a biological organism just is its molecules. Once again, while the neural scale plays significant causal and constitutive roles in affordance events, affordances are not—ontologically speaking—just neural activity. In the same way, it is improper to ignore the spatiotemporal scales most relevant to neural activity, even when attempting to understand their contributions to "higher" scale phenomena like overt behavior.

Challenge 2: Why not reconcile and integrate ecological psychology and neuroscience under enactivism?

Enactivism is understood by many as having first been introduced in the early 1990s by Francisco Varela, Evan Thompson, and Eleanor Rosch. There, enactivism was defined by two principles: one is that perception is defined as perceptually guided action, and the other is that cognition emerges from sensorimotor activity (Varela, Thompson, & Rosch, 1991, p. 173). Thompson would later revise and expand enactivism to five defining features: living beings are autonomous agents, the nervous system is an autonomous dynamic system, cognition is the exercise of skillful know-how in situated and embodied action, a cognitive being's world is an enacted relational domain, and consciousness must be investigated as well (2007, pp. 13–15).
Since the early 1990s, others have developed enactivism in various other ways (for review see Ward, Silverman, & Villalobos, 2017). Consequently, my first response to this question is: "What kind of enactivism?" In light of the fact that "enactivism" can be understood quite broadly with varying emphases—e.g., from a sensorimotor theory (Pessoa, Thompson, & Noe, 1998) to a philosophy of nature (Gallagher, 2017)—it is possible that complexity science, ecological psychology, neuroscience, and NExT could be subsumed under some sort of "Enactivism" umbrella. Be that as it may, I do not think the current account is properly understood as a form of enactivism. In particular, while not necessarily contrary to some forms of enactivism in this regard, NExT does not explicitly defend theories about life (e.g., autopoiesis; Thompson, 2007). Additionally, while NExT aims to directly test hypotheses and provide empirical support even for its theoretical posits, it is not clear that enactivism by and large abides by that standard.2 My second response is as follows: As long as NExT maintains the four principles at the core of Gibsonian ecological psychology, it is not clear that it will be congruent with enactivism. While some ecological psychologists have left the door open for integrating ecological psychology and enactivism (e.g., Chemero, 2009),3 it is uncertain how far that integration can go without requiring one and/or the other to make too many compromises to their core principles (for discussion of many of these issues see Di Paolo, Heras-Escribano, Chemero, & McGann, 2021).

Challenge 3: But what's the mechanism? While we're at it, where are the computations and representations?

As you may have noticed, other than in historical contexts, there is little to no mention of mechanisms, computations, or representations in this book. This is especially true in the parts discussing the foundations of complexity science (Chapter 5), presenting the NeuroEcological Nexus Theory (NExT; Chapter 6), and applying NExT to the affordance of pass-through-able (Chapter 7). This is not an accident. An underlying goal of this book is to demonstrate that mind phenomena in organism-environment systems can be investigated, explained, and understood in a theoretically compelling and empirically assessable way without necessitating those commonly-employed concepts.

First, regarding mechanisms: Appealing to "mechanisms" as central to explanations has a long and rich history in philosophy and science (Glennan & Illari, 2018). Since the mid-1990s, a number of philosophers have defended the claim that the search for mechanisms is the most fundamental kind of explanatory strategy, particularly in the life sciences. This claim is said to be both historically and normatively true (e.g., Bechtel & Abrahamsen, 2005; Craver & Darden, 2013; Craver & Tabery, 2019). There should be no doubt that mechanisms play a central role in both neuroscience's theorizing (e.g., Piccinini, 2020) and investigative work.4

2 Historically, there has clearly been empirical research by "enactivists" (e.g., Varela, Lachaux, Rodriguez, & Martinerie, 2001). Still, it is arguable that most adherents view it more as a "philosophy" than a scientific theory to be tested. A recent work by Shaun Gallagher is illustrative of this point when he states that, "even if enactivism were to be considered a philosophy of nature, it wouldn't be right to conclude that it cannot offer concrete hypotheses or raise novel scientific questions" (2017, p. 24). This statement can be read as saying that, in principle, it is possible to mine enactivism for hypotheses, but such experimental features do not necessarily follow—which is not a very strong scientific claim.
3 As Chemero states, "Obviously, much more work is required to genuinely integrate ecological and enactive cognitive science under the banner of radical embodied cognitive science" (2009, p. 154).


Still, it is far from clear what a "mechanism" is or what a "mechanistic explanation" amounts to. The fact is that what defines a "mechanism" has varied considerably over the past 30 years since the new mechanism movement of the early 2000s (e.g., Bechtel & Richardson, 1993/2010; Glennan, 2017; Machamer, Darden, & Craver, 2000). If providing a "mechanistic explanation" amounts to describing a phenomenon as consisting "of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon" (Glennan, Illari, & Weber, 2022, p. 145), then NExT provides mechanistic explanations (e.g., the neural modes of neural populations support regular behaviors).5 In that way, it is unclear what import those ideas have, as they seem to apply to just about everything in nature and every way of doing science. On the other hand, if there are more stringent criteria—for example, mechanisms must be explained by decomposing them into parts with localizable functions—then NExT is not properly viewed as providing mechanistic explanations—for example, due to contrary commitments such as the tight causal and constitutive relationship among body and world. With regard to actual scientific practice, it is not a far stretch to view scientists—especially neuroscientists—as predominantly utilizing "mechanism" in the looser sense. If that is all it takes to offer a mechanistic explanation, then count NExT in.

Second, regarding computations and representations: Chapter 2 presented reasons why ecological psychology does not suffer from a number of critical shortcomings of cognitivism. For example, the direct perception feature of the theory of affordances offers reasons why representations are both unnecessary and flawed ways to explain visual perception. Given that NExT maintains the primary principles of ecological psychology, it is able to bypass limitations inherent in the cognitivist approach as well. Consequently, there is no need to appeal to computations or representations when investigating and explaining phenomena like affordance events. Moreover, and along the lines of responses to the challenges from mechanism and reductionism: What kinds of "computation" and "representation" should NExT incorporate? As argued in Chapter 3, recent work I conducted with Edouard Machery provides empirical evidence to support the claim that the concept of "representation" is far from understood or defined in a generally accepted way (Favela & Machery, 2023). Results from our experiments with an international group of researchers that included neuroscientists suggest that they exhibit uncertainty about what sorts of brain activity involve representations or not, and they prefer to characterize brain activity in causal, nonrepresentational terms. Given that the vast majority of these participants believe that cognition involves representations, it is quite concerning that they are unsure how to apply this core concept and that they exhibit preferences for descriptions of brain activity that are nonrepresentational. If I were a gambling person, I would bet that tantamount results would be found in experiments on the concept of "computation." As such, it does not seem that anything is lost from NExT by not including concepts that are so imprecise and potentially unnecessary to begin with.

Challenge 4: Perception-action is one thing, but what about real cognition?
This is a challenge that is faced by nearly every investigative framework that claims to be an alternative to cognitivism and other mainstream approaches in the cognitive, neural, and psychological sciences.

4 For an unsystematic but quick indicator to motivate this point, a Google Scholar search for "neuroscience" + "mechanism" resulted in 1,960,000 results (retrieved May 19, 2023 from https://scholar.google.com/scholar?hl=en&as_sdt=0%2C10&q=%22neuroscience%22+%2B+%22mechanism%22&btnG=).
5 I suspect most neuroscientists understand mechanisms by way of this "Minimal Mechanism Thesis" (Glennan et al., 2022, p. 145).


The challenge typically goes something like this: "Of course approaches like ecological psychology, embodied cognition, and the like do respectable scientific work and tell us a lot about visual perception and motor control; but they do not tell us much at all about real cognition." This challenge is rooted in a line of thought in the history of "Western" philosophy and science that treats mind as radically distinct from bodies and action (Ohlsson, 2007). While not necessarily a form of substance dualism, approaches that treat mind (i.e., cognition, mental phenomena, etc.) as the "software" of the brain maintain the division between mind and body (e.g., Adams & Aizawa, 2008; Chomsky, 2009; Fodor, 2009; Pylyshyn, 1984; Thagard, 2005; Von Eckardt, 1995). Such is cognitivism, which treats mind as essentially information processing of some sort that is centralized in brains. Those processes are purported to underlie "real" cognitive faculties like decision-making, language, reasoning, recollecting, and mental imagery, among others. Since at least Descartes' work in the 1600s, the "Western" tradition has treated those faculties as just so happening to be embodied in biological systems like us, but not necessarily so. For Descartes, cognitive faculties like language could not be the product of mere meat (e.g., brains; cf. Bisson, 1991), as the former are purported to have properties that physical things do not (e.g., not spatially extended; Descartes, 1641/2006). For cognitivists, cognitive faculties like language just so happen to be the product of meat, but their true nature need not be explained as such (e.g., their logical properties are what matters; cf. discussion of the McCulloch-Pitts tradition in Chapter 3).6

Though my current aims have centered on perception-action phenomena like affordances, I am sympathetic to reasons that motivate "real cognition" challenges facing not just my approach but non-cognitivist approaches in general (e.g., ecological psychology, embodied cognition, etc.). As stated earlier, I have not offered a precise definition of "mind" but have opted to point to paradigmatic cases as needed to contribute to the main aims of the book. There is a reason for this agnosticism about definitions of mind and "real" cognition: Truth be told, I am not sure what mind is, nor could I begin to offer a set of necessary and sufficient conditions that capture the full range of phenomena that fall under the "mind" umbrella. This confession does not simultaneously concede defeat because I am confident that cognitivists and the like do not really know either (cf. Favela & Martin, 2017). But that does not matter for my response to the challenge. There is compelling research applying concepts, methods, and theories from complexity science that has fruitfully investigated and explained a wide range of phenomena uncontroversially treated as examples of real cognition. The following is a small list of such cases:

• Cognitive tasks involving emotion, gambling, language, mathematics, N-back, relational, and social (Shine et al., 2019)
• Decision-making (van Rooij, Favela, Malone, & Richardson, 2013)
• Intelligence (Mustafa et al., 2012)
• Learning (Sandu et al., 2014)
• Linguistics (Hawkins, 2004)
• Memory retrieval (Maylor, Chater, & Brown, 2001)
• Mental representations involving speeded judgment, accuracy of discrimination, and production (Gilden, 2001)
• Music perception (Pease, Mahmoodi, & West, 2018)
• Pedagogy (Mason, 2008)
• Problem solving (Amon, Vrzakova, & D'Mello, 2019)
• Speech (Ramirez-Aristizabal, Médé, & Kello, 2018)
• Psychopathologies (Brookes et al., 2015).

6 Also compare with functionalism, a theory about the nature of mental states, which are identified by what they do rather than by what they are made of (Putnam, 1975).

These 12 kinds of examples include decision-making, learning, memory, problem solving, and strategizing, among others. This small sample is at a minimum enough to provide proof of concept that complexity science offers concepts, methods, and theories to investigate, explain, and understand "real" cognition. Thus, insofar as NExT is a complexity science, it too can be fruitfully applied to such phenomena.7 In addition to the examples from complexity science just provided, other features of NExT are increasingly being applied to "real" cognitive phenomena as well, especially the emphasis on neural population dynamics and approaches like the neural manifold hypothesis. Chapter 6 presented numerous examples of the fruitful application of the neural manifold hypothesis, chiefly in research centering on sensorimotor capacities, such as control and reach-and-grasp behavior (e.g., Abbaspourazad, Choudhury, Wong, Pesaran, & Shanechi, 2021; Balasubramaniam et al., 2021; Gallego, Perich, Miller, & Solla, 2017). While sensorimotor activity remains the primary area of application, the neural manifold hypothesis is being increasingly applied directly to "real" cognitive capacities like attention, decision-making, executive function, learning, and working memory (for reviews see Ebitz & Hayden, 2021; Khona & Fiete, 2022). Taken together with the examples from complexity science, it should be clear that there is no in-principle reason or insurmountable experimental obstacle to applying NExT to the full range of mind phenomena exhibited by organism-environment systems.

8.3 So is everybody upset?

This book began with the claim that the big reward of reconciling and integrating ecological psychology and neuroscience could come with a big cost. This is because the path to reconciliation and integration would require both ecological psychology and neuroscience to loosen their grip on some of their core conceptual, methodological, and theoretical commitments. For that reason, I predicted that my current proposal would make everybody upset. Has that prophecy come to pass? In concluding this book, I draw attention to the core features that the current account on offer has motivated ecological psychology and neuroscience to loosen their commitments to. After, I end on a positive note by suggesting that the mind sciences are already showing signs of going in the direction of complexity science and NExT-like frameworks.

To begin, neuroscience will have to loosen its grip on, and potentially abandon, three commitments. The first one is the centrality of an information processing conception of mind and nervous systems. Fundamental to the information processing view are notions of computation and representation.

7 Additional replies to the “real cognition” challenge from the perspectives of ecological psychology, embodied cognition, and the like include topics such as counter-factual thinking (Sanches de Oliveira, Raja, & Chemero, 2021), creativity (Baber, 2021), imagination (van Dijk & Rietveld, 2020), mathematical learning (Nemirovsky & Ferrara, 2009), and prospective control (van Rooij, Bongers, & Haselager, 2002).


As argued in previous chapters, both of these concepts have serious limitations, such as the inability to account for meaning and the widespread imprecision in their use by proponents. Instead, it has been argued that NDST offers a rich set of concepts, methods, and theories to investigate and explain mind and nervous systems. For example, instead of computing algorithms and manipulating representations, nervous systems realize mind in the form of neuronal trajectories that traverse a landscape of attractor states. The second one is linear causality. The algorithmic assumptions of even parallel-processing computational views of minded systems are inappropriate for real biological systems. Instead, organism-environment systems are properly understood via circular and mutual causality. For example, self-organization as described by synergetics is a fruitful way to understand the kinds of causation at the heart of biological systems. The third one is the "outside-in" understanding of the relationship of organisms and environments. Mentioned in Chapters 1 and 2, György Buzsáki describes the "outside-in" understanding as "the dominant framework of mainstream neuroscience, which suggests that the brain's task is to perceive and represent the world, process information, and decide how to respond" (2019, p. xiii). The outside-in approach incorporates the problems with information processing and linear causality. It adheres to a view of minded organisms as alien entities detached from environments that receive stimulation → convert it into representations → compute over representations → send commands to act (e.g., problem solving, move body, etc.). Instead, the organism is one scale of a multiscale organism-environment system, where self-organizing (i.e., circular and mutual causation) dynamics govern neural, body, and environment activities. In this way, a radical shift in understanding the brain must occur. The brain is not primarily an input-receiving and information storage and manipulation device. The brain is a self-organized dynamic system of spatiotemporal patterns that achieves balance between long-term (e.g., genetics; cf. primary repertoire in Neural Darwinism, Chapter 6; Edelman, 1987, 1989) and short-term (e.g., ecological information, such as the ambient light array) considerations (cf. Brembs, 2021; Buzsáki & Vöröslakos, 2023; Pezzulo & Cisek, 2016).

Ecological psychology does not get a free pass. It will have to loosen its grip and shift focus in three fundamental ways. The first overarching point is that ecological psychologists must be willing to revise even their most cherished definitions of concepts and methods. It is fair to say that some schools of ecological psychology have taken more militant approaches to their field than others. In many ways, this is understandable given the uphill battle many have fought in the face of criticisms from, and the overwhelming dominance of, cognitivism in the cognitive and psychological sciences throughout the twentieth century. In spite of such sociohistorical considerations, such uncompromising attachment to a fundamentalist view of ecological psychology's commitments is actually contradictory. Such fundamentalism contrasts with the founder of ecological psychology himself, James J. Gibson, who made it clear that ecological psychology ought to be open to revising even its most core commitments. This was true when he publicly presented the approach and in his final book, for example:

Now, someday there will have to be corollaries added to this theory.
(Gibson, 1974, 22:40)

These terms and concepts are subject to revision as the ecological approach to perception becomes clear. May they never shackle thought as the old terms and concepts have!
(Gibson, 1986/2015, p. 298)

Second, ecological psychologists on the whole must be open to the minimal fact that brains and nervous systems are important parts of explanations of perception-action in biological systems.


Additionally, they must be open to the possibility that brains and nervous systems could be just as important as features of bodies and environments when explaining perception-action events in biological systems. It is true that Gibsonian ecological psychology was intended to be a theory of perception full stop, that is, applicable not just to humans or even mammals but to birds and insects as well.8 In that way, ecological psychology qua theory of perception in toto can reasonably remain confined to its historical form. However, ecological psychology qua theory of perception for organism-environment systems with nervous systems cannot. As demonstrated by the example of NExT applied to the affordance pass-through-able and the mouse, features of the body (e.g., synergy) and environment (e.g., ecological information) can account for a large amount of the phenomenon. But it leaves out significant contributions from the nervous system, such as those that are causally and constitutively involved in the detection of ecological information and the related control of head movement.

Third, and invoking the spirit of the first point, ecological psychologists interested in providing an account of the brain and nervous system's contributions to perception-action ought to eliminate the concept of "resonance." As discussed in Chapter 4, the notion of resonance was the closest Gibson came to addressing the role of the brain and nervous system in perceptual systems (e.g., Gibson, 1966, pp. 260, 268). While it started out as a concept to be taken as referring to real features of nervous systems (e.g., Gibson, 1966, pp. 267, 271), Gibson would later relegate it to the status of mere metaphor (e.g., "the metaphors used can be terms such as resonating," Gibson, 1986/2015, p. 235). Still, as some ecological psychologists have recently claimed, "[t]he fact that Gibson's metaphors are decidedly physical in orientation signifies the direction of his research" (Fultot, Adrian Frazier, Turvey, & Carello, 2019, p. 220). I agree with the spirit of this assessment, namely, that Gibson was using terms as placeholders for future developments. Where I disagree is with those who appear to be using the term in the literal sense (e.g., Bruineberg & Rietveld, 2019; Raja & Anderson, 2019)—that is, as in the early Gibson (1966)—and those who attempt to reify a concept that may be more apt for elimination (e.g., van Dijk & Myin, 2019). As presented in the previous chapters, complexity science and NExT offer more fruitful concepts and theories with proven successes and less baggage (e.g., population dynamics, manifold theory, and synergies).

At this point, it is reasonable to suppose that at least some ecological psychologists and neuroscientists will be upset by these claims. It is also reasonable to suppose that those who are upset are on the winning side. This is in light of the fact that there is a much longer history of successes in ecological psychology and neuroscience than in complexity science qua mind science, and certainly more than in NExT. Be that as it may, there are reasons for an optimistic view of the prospects for application and success of the present account.

Reason for optimism 1: Complexity science is increasingly utilized in ecological psychology and neuroscience.

8 On the point that Gibsonian ecological psychology is a theory of perception full stop: "[This conception of] perceptual systems is intended to apply to man and all mammals, and most of it holds for all vertebrates. It would have to be somewhat modified for invertebrates and insects, for their systems, although analogous, are different, not only in anatomy but also in the range of information detected." (Gibson, 1966, p. 49) "This description is very general; it holds true for insects, birds, mammals, and men." (Gibson, 1986/2015, p. 31).


While some may say ecological psychology is behind the curve in terms of not giving an account of brain activity in perception-action, it has certainly been ahead of the curve in comparison to neuroscience concerning its employing concepts, methods, and theories from complexity science. For over 40 years, ecological psychologists have utilized complexity science methods from NDST to explain everything from control and coordination in terms of dissipative structures (Kugler, Kelso, & Turvey, 1980) to afordance perception via sensory-substitution devices (Favela, Amon, Lobo, & Chemero, 2021). In fact, various features of complexity science are applied more and more in the broader cognitive sciences and psychology literatures as well (for overviews see Favela, 2020; Guastello, Koopmans, & Pincus, 2011; Riley & Van Orden, 2005).9 The same is not true for the neurosciences. Though there have been some neuroscientists appealing to complexity science throughout the decades (e.g., Edelman & Gally, 2001; Skarda & Freeman, 1987; Sporns, Tononi, & Edelman, 2000; Tsuda, 2001), such instances have been in the minority. However, the past decade or so has seen an increase, especially applications of coordination dynamics and synergetics (e.g., Kelso, 2012; Kelso, Dumas, & Tognoli, 2013), network theory (Bullmore & Sporns, 2009), and physics (e.g., Beggs, 2022; Tomen, Herrmann, & Ernst, 2019). The greatest increase is undoubtedly the presence of dynamical systems theory in neuroscience. As I argued elsewhere (Favela, 2021), dynamical systems theory is mostly absent from contemporary neuroscience curriculums—a point made by prominent neuroscientists as well (Izhikevich, 2007, p. xvi). With that said, the increased application of dynamical systems theory in neuroscience should be viewed more as a renaissance than the introduction of something totally new. It is a renaissance because it is a return—at least in the behavioral, cognitive, computational, and sensory neurosciences—to an approach more prominent in the early- to mid-twentieth century. Specifcally, dynamical systems theory was characteristic of the Hodgkin-Huxley tradition discussed in Chapter 3. In many ways, the increased appeals to complexity science in neuroscience are actually returns to approaches that were more common and central in its past, as with dynamical systems theory, but also systems theory (especially control theory, properly understood; Yin, 2020; also see Ju & Bassett, 2020; Pezzulo & Levin, 2016). Reason for optimism 2: Neurosciences (and cognitive and psychological sciences) are acknowledging the signifcance of brain-body and brain-body-environment systems. I am confdent that the vast majority of neuroscientists acknowledge that brains and nervous systems are parts of bodies and worlds. However, I am skeptical that most think the body and world are all that signifcant with regard to causing or constituting mind. With that said, there are indications that such attitudes are changing. Consider the fact that top mainstream neuroscience venues are publishing serious research on the various ways the body and environment play causal and constitutive roles in brain and nervous system activity. One particularly fascinating line of research in this regard concerns work suggesting that phenomena commonly accepted as being processed in the brain (e.g., touch) are actually processed in other parts of the body (e.g., spinal cord; e.g., Chirila et al., 2022; Turecek,

9 In prior work, I have attempted to demonstrate the broad applicability of complexity science by utilizing it to bolster various additional areas of research, such as Bayesianism in the cognitive and neural sciences (Favela & Amon, 2023), the integrated information theory of consciousness (Favela, 2019), and theories of reasoning (Favela & van Rooij, 2019).


Other recent examples of neuroscience research that is shifting focus to the causal and constitutive roles of the body include work developing methods for simultaneous recording to identify relationships between brain and body activity (von Ziegler, Sturman, & Bohacek, 2021), the temporal coupling of brain and body rhythms (Criscuolo, Schwartze, & Kotz, 2022; Park, Mori, Okuyama, & Asada, 2017), reinterpretations of neural imaging data with an emphasis on brain-body relationships (Westlin et al., 2023), the connection between body activity and memory (Buzsáki & Tingley, 2023), understanding brain-body systems via network theory (Sporns, 2011), the connection between heartbeat rhythms and time perception (Arslanova, Kotsaris, & Tsakiris, 2023), and the ways visceral signals from the gastrointestinal (GI) tract contribute to and shape brain dynamics and behavioral and neural responses to stimuli (Azzalini, Rebollo, & Tallon-Baudry, 2019), just to name a few. In addition, there is an expanding neuroscience literature that stresses the crucial need to consider context when investigating brain and nervous system activity, including experimental designs that take ecological features (e.g., optic flow) seriously during neural activity recording (Koay, Charles, Thiberge, Brody, & Tank, 2022; Parker, Abe, Leonard, Martins, & Niell, 2022). In much of this work, "context" can be cashed out in terms of situatedness, specifically, that brains are in bodies and bodies are in worlds (e.g., Barrett, 2022; Pessoa, 2022).

8.3 So is everybody upset?

Probably not everybody, but a lot of people. But "If you're making everybody happy, then you're doing something wrong" (Anonymous, n.d.).

References

Abbaspourazad, H., Choudhury, M., Wong, Y. T., Pesaran, B., & Shanechi, M. M. (2021). Multiscale low-dimensional motor cortical state dynamics predict naturalistic reach-and-grasp behavior. Nature Communications, 12(1), 607. https://doi.org/10.1038/s41467-020-20197-x
Adams, F., & Aizawa, K. (2008). The bounds of cognition. Malden, MA: Blackwell Publishing.
Amon, M. J., Vrzakova, H., & D'Mello, S. K. (2019). Beyond dyadic synchrony: Multimodal behavioral irregularity predicts quality of triadic problem solving. Cognitive Science, 43(10), e12787, 1–22. https://doi.org/10.1111/cogs.12787
Arslanova, I., Kotsaris, V., & Tsakiris, M. (2023). Perceived time expands and contracts within each heartbeat. Current Biology, 33(7), 1389–1395. https://doi.org/10.1016/j.cub.2023.02.034
Azzalini, D., Rebollo, I., & Tallon-Baudry, C. (2019). Visceral signals shape brain dynamics and cognition. Trends in Cognitive Sciences, 23(6), 488–509. https://doi.org/10.1016/j.tics.2019.03.007
Baber, C. (2021). Embodying design: An applied science of radical embodied cognition. Cambridge, MA: The MIT Press.
Balasubramaniam, R., Haegens, S., Jazayeri, M., Merchant, H., Sternad, D., & Song, J. H. (2021). Neural encoding and representation of time for sensorimotor control and learning. Journal of Neuroscience, 41(5), 866–872. https://doi.org/10.1523/JNEUROSCI.1652-20.2020
Barrett, L. F. (2022). Context reconsidered: Complex signal ensembles, relational meaning, and population thinking in psychological science. American Psychologist, 77(8), 894–920. https://doi.org/10.1037/amp0001054
Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanist alternative. Studies in History and Philosophy of Biological and Biomedical Sciences, 36, 421–441. https://doi.org/10.1016/j.shpsc.2005.03.010
Bechtel, W., & Richardson, R. C. (1993/2010). Discovering complexity: Decomposition and localization as strategies in scientific research (2nd ed.). Cambridge, MA: The MIT Press.
Beggs, J. M. (2022). The cortex and the critical point: Understanding the power of emergence. Cambridge, MA: The MIT Press.
Bisson, T. (1991). They're made out of meat. OMNI, 13(7), 54. Retrieved January 16, 2015 from www.terrybisson.com/page6/page6.html


Brembs, B. (2021). The brain as a dynamically active organ. Biochemical and Biophysical Research Communications, 564, 55–69. https://doi.org/10.1016/j.bbrc.2020.12.011
Brigandt, I., & Love, A. (2023). Reductionism in biology. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (summer 2023 ed.). Stanford, CA: Stanford University. Retrieved May 17, 2023 from https://plato.stanford.edu/archives/sum2023/entries/reduction-biology/
Brookes, M. J., Hall, E. L., Robson, S. E., Price, D., Palaniyappan, L., Liddle, E. B., . . . Morris, P. G. (2015). Complexity measures in magnetoencephalography: Measuring "disorder" in schizophrenia. PLoS One, 10(4), e0120991. https://doi.org/10.1371/journal.pone.0120991
Bruineberg, J., & Rietveld, E. (2019). What's inside your head once you've figured out what your head's inside of. Ecological Psychology, 31(3), 198–217. https://doi.org/10.1080/10407413.2019.1615204
Bullmore, E., & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186–198. https://doi.org/10.1038/nrn2575
Buzsáki, G. (2019). The brain from inside out. New York, NY: Oxford University Press.
Buzsáki, G., & Tingley, D. (2023). Cognition from the body-brain partnership: Exaptation of memory. Annual Review of Neuroscience, 46, 191–210. https://doi.org/10.1146/annurev-neuro-101222-110632
Buzsáki, G., & Vöröslakos, M. (2023). Brain rhythms have come of age. Neuron, 111(7), 922–926. https://doi.org/10.1016/j.neuron.2023.03.018
Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: The MIT Press.
Chirila, A. M., Rankin, G., Tseng, S. Y., Emanuel, A. J., Chavez-Martinez, C. L., Zhang, D., . . . Ginty, D. D. (2022). Mechanoreceptor signal convergence and transformation in the dorsal horn flexibly shape a diversity of outputs to the brain. Cell, 185(24), 4541–4559. https://doi.org/10.1016/j.cell.2022.10.012
Chomsky, N. (2009). Cartesian linguistics. New York, NY: Cambridge University Press.
Craver, C. F., & Darden, L. (2013). In search of mechanisms: Discoveries across the life sciences. Chicago, IL: University of Chicago Press.
Craver, C., & Tabery, J. (2019). Mechanisms in science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (summer 2019 ed.). Stanford, CA: Stanford University. Retrieved from https://plato.stanford.edu/archives/sum2019/entries/science-mechanisms/
Criscuolo, A., Schwartze, M., & Kotz, S. A. (2022). Cognition through the lens of a body-brain dynamic system. Trends in Neurosciences, 45(9), 667–677. https://doi.org/10.1016/j.tins.2022.06.004
Descartes, R. (1641/2006). Meditations, objections, and replies (R. Ariew & D. Cress, Eds. and Trans.). Indianapolis, IN: Hackett Publishing Company, Inc.
Di Paolo, E. A., Heras-Escribano, M., Chemero, A., & McGann, M. (Eds.). (2021). Enaction and ecological psychology: Convergences and complementarities. Lausanne: Frontiers Media SA. https://doi.org/10.3389/978-2-88966-431-3
Ebitz, R. B., & Hayden, B. Y. (2021). The population doctrine in cognitive neuroscience. Neuron, 109(19), 3055–3068. https://doi.org/10.1016/j.neuron.2021.07.011
Edelman, G. M. (1987). Neural Darwinism: The theory of neuronal group selection. New York, NY: Basic Books.
Edelman, G. M. (1989). The remembered present: A biological theory of consciousness. New York, NY: Basic Books.
Edelman, G. M., & Gally, J. A. (2001). Degeneracy and complexity in biological systems. Proceedings of the National Academy of Sciences, 98, 13763–13768. https://doi.org/10.1073/pnas.231499798
Favela, L. H. (2019). Integrated information theory as a complexity science approach to consciousness. Journal of Consciousness Studies, 26(1–2), 21–47.
Favela, L. H. (2020). Cognitive science as complexity science. Wiley Interdisciplinary Reviews: Cognitive Science, 11(4), e1525, 1–24. https://doi.org/10.1002/wcs.1525
Favela, L. H. (2021). The dynamical renaissance in neuroscience. Synthese, 199(1–2), 2103–2127. https://doi.org/10.1007/s11229-020-02874-y


Favela, L. H., & Amon, M. J. (2023). Enhancing Bayesian approaches in the cognitive and neural sciences via complex dynamical systems theory. Dynamics, 3, 115–136. https://doi.org/10.3390/dynamics3010008
Favela, L. H., Amon, M. J., Lobo, L., & Chemero, A. (2021). Empirical evidence for extended cognitive systems. Cognitive Science: A Multidisciplinary Journal, 45(11), e13060, 1–27. https://doi.org/10.1111/cogs.13060
Favela, L. H., & Machery, E. (2023). Investigating the concept of representation in the neural and psychological sciences. Frontiers in Psychology: Cognition. https://doi.org/10.3389/fpsyg.2023.1165622
Favela, L. H., & Martin, J. (2017). "Cognition" and dynamical cognitive science. Minds and Machines, 27, 331–355. https://doi.org/10.1007/s11023-016-9411-4
Favela, L. H., & van Rooij, M. M. J. W. (2019). Reasoning across continuous landscapes: A nonlinear dynamical systems theory approach to reasoning. Cognitive Systems Research, 54, 189–198. https://doi.org/10.1016/j.cogsys.2018.12.013
Fodor, J. (2009). Where is my mind? London Review of Books, 31(3), 13–15. www.lrb.co.uk/the-paper/v31/n03/jerry-fodor/where-is-my-mind
Fultot, M., Adrian Frazier, P., Turvey, M. T., & Carello, C. (2019). What are nervous systems for? Ecological Psychology, 31(3), 218–234. https://doi.org/10.1080/10407413.2019.1615205
Gallagher, S. (2017). Enactivist interventions: Rethinking the mind. Oxford, UK: Oxford University Press.
Gallego, J. A., Perich, M. G., Miller, L. E., & Solla, S. A. (2017). Neural manifolds for the control of movement. Neuron, 94(5), 978–984. https://doi.org/10.1016/j.neuron.2017.05.025
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston, MA: Houghton Mifflin.
Gibson, J. J. (1974). Q and A: Lecture at Ohio State University [Audio]. International Society for Ecological Psychology. Retrieved September 26, 2022 from http://commons.trincoll.edu/isep/gibson-lecture/
Gibson, J. J. (1986/2015). The ecological approach to visual perception (classic ed.). New York, NY: Psychology Press.
Gilden, D. L. (2001). Cognitive emissions of 1/f noise. Psychological Review, 108(1), 33–56. https://doi.org/10.1037/0033-295X.108.1.33
Glennan, S. (2017). The new mechanical philosophy. Oxford, UK: Oxford University Press.
Glennan, S., & Illari, P. (2018). Introduction: Mechanisms and mechanical philosophy. In S. Glennan & P. Illari (Eds.), The Routledge handbook of mechanisms and mechanical philosophy (pp. 1–9). New York, NY: Routledge.
Glennan, S., Illari, P., & Weber, E. (2022). Six theses on mechanisms and mechanistic science. Journal for General Philosophy of Science, 53, 143–161. https://doi.org/10.1007/s10838-021-09587-x
Guastello, S. J., Koopmans, M., & Pincus, D. (Eds.). (2011). Chaos and complexity in psychology: The theory of nonlinear dynamical systems. Cambridge, MA: Cambridge University Press.
Hawkins, J. A. (2004). Efficiency and complexity in grammars. New York, NY: Oxford University Press.
Izhikevich, E. M. (2007). Dynamical systems in neuroscience: The geometry of excitability and bursting. Cambridge, MA: The MIT Press.
Ju, H., & Bassett, D. S. (2020). Dynamic representations in networked neural systems. Nature Neuroscience, 23(8), 908–917. https://doi.org/10.1038/s41593-020-0653-3
Kelso, J. A. S. (2012). Multistability and metastability: Understanding dynamic coordination in the brain. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1591), 906–918. https://doi.org/10.1098/rstb.2011.0351
Kelso, J. S., Dumas, G., & Tognoli, E. (2013). Outline of a general theory of behavior and brain coordination. Neural Networks, 37, 120–131. https://doi.org/10.1016/j.neunet.2012.09.003
Khona, M., & Fiete, I. R. (2022). Attractor and integrator networks in the brain. Nature Reviews Neuroscience, 23, 744–766. https://doi.org/10.1038/s41583-022-00642-0
Koay, S. A., Charles, A. S., Thiberge, S. Y., Brody, C. D., & Tank, D. W. (2022). Sequential and efficient neural-population coding of complex task information. Neuron, 110(2), 328–349. https://doi.org/10.1016/j.neuron.2021.10.020


Kugler, P. N., Kelso, J. A. S., & Turvey, M. T. (1980). On the concept of coordinative structures as dissipative structures: I. Theoretical lines of convergence. In G. E. Stelmach & J. Requin (Eds.), Tutorials in motor behavior (pp. 3–47). New York, NY: North-Holland Publishing Company.
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25. https://doi.org/10.1086/392759
Mason, M. (Ed.). (2008). Complexity theory and the philosophy of education. Malden, MA: Wiley-Blackwell.
Maylor, E. A., Chater, N., & Brown, G. D. (2001). Scale invariance in the retrieval of retrospective and prospective memories. Psychonomic Bulletin & Review, 8(1), 162–167. https://doi.org/10.3758/BF03196153
Molenaar, P. C. M., Lerner, R. M., & Newell, K. M. (Eds.). (2014). Handbook of developmental systems theory and methodology. New York, NY: The Guilford Press.
Mustafa, N., Ahearn, T. S., Waiter, G. D., Murray, A. D., Whalley, L. J., & Staff, R. T. (2012). Brain structural complexity and life course cognitive change. NeuroImage, 61(3), 694–701. https://doi.org/10.1016/j.neuroimage.2012.03.088
Nemirovsky, R., & Ferrara, F. (2009). Mathematical imagination and embodied cognition. Educational Studies in Mathematics, 70, 159–174. https://doi.org/10.1007/s10649-008-9150-4
Nietzsche, F. (1879/1913). Human all-too-human: A book for free spirits, part II (P. V. Cohn, Trans.). New York, NY: The MacMillan Company.
Ohlsson, S. (2007). The separation of thought and action in Western tradition. In A. Brook (Ed.), The prehistory of cognitive science (pp. 17–37). New York, NY: Palgrave Macmillan.
Oyama, S. (2000). The ontogeny of information: Developmental systems and evolution (2nd ed.). Durham, NC: Duke University Press.
Oyama, S., Gray, R. D., & Griffiths, P. E. (Eds.). (2001). Cycles of contingency: Developmental systems and evolution. Cambridge, MA: The MIT Press.
Park, J., Mori, H., Okuyama, Y., & Asada, M. (2017). Chaotic itinerancy within the coupled dynamics between a physical body and neural oscillator networks. PLoS One, 12(8), e0182518. https://doi.org/10.1371/journal.pone.0182518
Parker, P. R., Abe, E. T., Leonard, E. S., Martins, D. M., & Niell, C. M. (2022). Joint coding of visual input and eye/head position in V1 of freely moving mice. Neuron, 110(23), 3897–3906. https://doi.org/10.1016/j.neuron.2022.08.029
Pease, A., Mahmoodi, K., & West, B. J. (2018). Complexity measures of music. Chaos, Solitons & Fractals, 108, 82–86. https://doi.org/10.1016/j.chaos.2018.01.021
Pessoa, L. (2022). The entangled brain: How perception, cognition, and emotion are woven together. Cambridge, MA: The MIT Press.
Pessoa, L., Thompson, E., & Noë, A. (1998). Finding out about filling-in: A guide to perceptual completion for visual science and the philosophy of perception. Behavioral and Brain Sciences, 21(6), 723–748. https://doi.org/10.1017/S0140525X98001757
Pezzulo, G., & Cisek, P. (2016). Navigating the affordance landscape: Feedback control as a process model of behavior and cognition. Trends in Cognitive Sciences, 20(6), 414–424. https://doi.org/10.1016/j.tics.2016.03.013
Pezzulo, G., & Levin, M. (2016). Top-down models in biology: Explanation and control of complex living systems above the molecular level. Journal of the Royal Society Interface, 13(124), 20160555. https://doi.org/10.1098/rsif.2016.0555
Piccinini, G. (2020). Neurocognitive mechanisms: Explaining biological cognition. Oxford, UK: Oxford University Press.
Putnam, H. (1975). The nature of mental states. In Mind, language, and reality: Philosophical papers (Vol. 2, pp. 429–440). Cambridge, UK: Cambridge University Press.
Pylyshyn, Z. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: The MIT Press.


Raja, V., & Anderson, M. L. (2019). Radical embodied cognitive neuroscience. Ecological Psychology, 31(3), 166–181. https://doi.org/10.1080/10407413.2019.1615213
Ramirez-Aristizabal, A. G., Médé, B., & Kello, C. T. (2018). Complexity matching in speech: Effects of speaking rate and naturalness. Chaos, Solitons & Fractals, 111, 175–179. https://doi.org/10.1016/j.chaos.2018.04.021
Riley, M. A., & Van Orden, G. C. (Eds.). (2005). Tutorials in contemporary nonlinear methods for behavioral sciences. United States: National Science Foundation. Retrieved May 19, 2023 from www.nsf.gov/pubs/2005/nsf05057/nmbs/nmbs.pdf
Sanches de Oliveira, G., Raja, V., & Chemero, A. (2021). Radical embodied cognitive science and "real cognition". Synthese, 198, 115–136. https://doi.org/10.1007/s11229-019-02475-4
Sandu, A. L., Staff, R. T., McNeil, C. J., Mustafa, N., Ahearn, T., Whalley, L. J., & Murray, A. D. (2014). Structural brain complexity and cognitive decline in late life: A longitudinal study in the Aberdeen 1936 Birth Cohort. NeuroImage, 100, 558–563. https://doi.org/10.1016/j.neuroimage.2014.06.054
Shine, J. M., Breakspear, M., Bell, P. T., Martens, K. A. E., Shine, R., Koyejo, O., . . . Poldrack, R. A. (2019). Human cognition involves the dynamic integration of neural activity and neuromodulatory systems. Nature Neuroscience, 22(2), 289–296. https://doi.org/10.1038/s41593-018-0312-0
Skarda, C. A., & Freeman, W. J. (1987). How brains make chaos in order to make sense of the world. Behavioral and Brain Sciences, 10(2), 161–173. https://doi.org/10.1017/S0140525X00047336
Sporns, O. (2011). Networks of the brain. Cambridge, MA: The MIT Press.
Sporns, O., Tononi, G., & Edelman, G. M. (2000). Connectivity and complexity: The relationship between neuroanatomy and brain dynamics. Neural Networks, 13(8–9), 909–922.
Thagard, P. (2005). Mind: Introduction to cognitive science (2nd ed.). Cambridge, MA: The MIT Press.
Thompson, E. (2007). Mind in life: Biology, phenomenology, and the sciences of the mind. Cambridge, MA: Belknap Press.
Tomen, N., Herrmann, J. M., & Ernst, U. (Eds.). (2019). The functional role of critical dynamics in neural systems. Cham, Switzerland: Springer.
Tsuda, I. (2001). Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behavioral and Brain Sciences, 24(5), 793–810. https://doi.org/10.1017/S0140525X01000097
Turecek, J., Lehnert, B. P., & Ginty, D. D. (2022). The encoding of touch by somatotopically aligned dorsal column subdivisions. Nature, 612, 310–315. https://doi.org/10.1038/s41586-022-05470-x
van Dijk, L., & Myin, E. (2019). Ecological neuroscience: From reduction to proliferation of our resources. Ecological Psychology, 31(3), 254–268. https://doi.org/10.1080/10407413.2019.1615221
van Dijk, L., & Rietveld, E. (2020). Situated imagination. Phenomenology and the Cognitive Sciences, 1–23. https://doi.org/10.1007/s11097-020-09701-2
van Rooij, I., Bongers, R. M., & Haselager, F. G. (2002). A non-representational approach to imagined action. Cognitive Science, 26(3), 345–375. https://doi.org/10.1207/s15516709cog2603_7
van Rooij, M. M. J. W., Favela, L. H., Malone, M. L., & Richardson, M. J. (2013). Modeling the dynamics of risky choice. Ecological Psychology, 25, 293–303. https://doi.org/10.1080/10407413.2013.810502
von Ziegler, L., Sturman, O., & Bohacek, J. (2021). Big behavior: Challenges and opportunities in a new era of deep behavior profiling. Neuropsychopharmacology, 46(1), 33–44. https://doi.org/10.1038/s41386-020-0751-7
Varela, F., Lachaux, J. P., Rodriguez, E., & Martinerie, J. (2001). Phase synchronization and large-scale integration. Nature Reviews Neuroscience, 2, 229–239. https://doi.org/10.1038/35067550
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. Cambridge, MA: The MIT Press.


Von Eckardt, B. (1995). What is cognitive science? Cambridge, MA: The MIT Press.
Ward, D., Silverman, D., & Villalobos, M. (2017). Introduction: The varieties of enactivism. Topoi, 36, 365–375. https://doi.org/10.1007/s11245-017-9484-6
Westlin, C., Theriault, J. E., Katsumi, Y., Nieto-Castanon, A., Kucyi, A., Ruf, S. F., . . . Barrett, L. F. (2023). Improving the study of brain-behavior relationships by revisiting basic assumptions. Trends in Cognitive Sciences, 27(3), 246–257. https://doi.org/10.1016/j.tics.2022.12.015
Yin, H. (2020). The crisis in neuroscience. In W. Mansell (Ed.), The interdisciplinary handbook of perceptual control theory: Living control systems IV (pp. 23–48). Elsevier. https://doi.org/10.1016/B978-0-12-818948-1.00003-4

Index

Note: numbers in bold indicate a table. Numbers in italics indicate a fgure on the corresponding page Abbott, B. 123n3 action-scaled metrics and data 11, 31–32, 146 afordance events, architecture of 161 afordance of pass-through-able 5, 30–31, 123, 132, 143, 157–163, 164, 165, 168; defning 157; NExT and complexity science and 163–164; NExT and ecological psychology and 164–165; NExT and neuroscience and 165; NExT applied to 174; NExT applied to a mouse 159, 168, 179 afordances xii, 3; concept of 31; ecological psychology and 11, 144, 145, 164–165; Edelman’s approach to 77n3; as form of value 82; Gibsonian ecological psychiatry, as one of four main principles at core of 121; Gibsonian neuroscience and selective sensitivity to 82–83; as guides to discovery 6, 106; as higher-order physical properties 31; innateness, meaning, and intelligence in relationship to 35; meaning derived from 33; as meaningful facets of perception 3, 9; means of assessment 32; as mental representations 71; model of visually-guided locomotion in terms of 10; neuroscience and 70–72; NExT and 164, 165; neuroscience and 165; non-afordances and 161; pass-throughable 30, 144, 161, 164, 165, 168; as perceivable opportunities for behavior 28–33, 144; perception-action events involving 128; as potential actions 71; representations and 12; as source of meaning for organisms 33; synergies and 161; theory of 42, 72, 75, 82, 143n21, 175; wonder tissue and 69 afordances of the environment 31 agent/organism-environment systems 75 ambient arrays 123, 145, 161, 168, 178

ambient light 28 Anderson, Michael 78 Aristotle 43–45, 61, 99, 171 artifcial computers 55 artifcial intelligence 21, 50–56, 170–171; biological brains and 55; cognitivism and 56; development of 54; foundations of project of 51; neural network approaches to 52; origin of name 21; single neuron as primary unit of 125n5; symbolic AI; theories and models of early research in 43; Turing’s infuence on 50–51 artifcial neural networks 55 artifcial systems: cybernetic 91; human-made 35 attractor 92, 142 attractor states 73, 107, 110, 164 Bayesian brain 6, 8 Bayesian interference 4 Bayesianism 70, 80–82, 83, 145, 180n9; defnition of 80; free-energy principle and 8 Bayesian modeling 57 Bayes’ theorem 81 Beggs, John 6, 13, 106 behaviorism 20, 69–70, 171; cognitivism as alternative to 23–24; ecological psychology’s rejection of 75; psychological 20; shortcomings of 34 behaviorist approach 20–21 behaviorists 30 behaviorist theories of language 34 Belousov-Zhabotinsky chemical reactions 103 Bernstein’s problem see degrees of freedom problem Bhat, H. 107, 110 big behavior 124 big data 88: data deluge challenge and 107, 124 big data enterprises, neurosciences as 107, 124 big rewards, big cost 1


bistable attractor model 110 bistability 92 body-scaled metrics and data 5, 9, 11, 31–32, 141, 146 brain 3–5, 12–13; afordances and 161; Aristotle’s view of 45; Bayesian 6, 81; CNS and 11; behavior and 124; Brooks’ view of 56; cognitivism and 57; computing by 59–60; computational thinking by 50, 60; ecological xi, 172; as engine of reasons 43; FEP and 81; Hippocrates’ views of 44–45; importance of 76; as information-processing system 8; logic of 51; McCulloch and Pitts’ view of 53, 54–55; monkey 134; neural reuse and 78; science of 4; as seat of the soul 43; orangutan 8; overview of concepts of 70; theories of cognition and 81; understanding on own terms of 128; von Neumann’s view of 51–52; as wet computer 55; see also loan of intelligence; Mike the Headless Chicken; mouse brain; orangutan brain activity, neurodynamic model of 129 brain-body 44, 141, 180 brain-body systems 141, 181 brain-body-environment systems 128, 143, 147, 180 brain-centric understandings of mind ix, 13, 33, 71, 75 brain/cognitive/neural system 61 brain-focused conception of mind 5, 13 brain injury 58 brain/mind, black box of 23–24 brain states 19, 20, 34 brain structure and function 71; Neural Darwinism as theory to explain 125, 127; theory of 79 brain-to-brain coupling 138 Brooks, Rodney 56 Buzsáki, György 8, 28n8, 178 Caramazza, Alfonso 57–58 cart before the horse 80, 82 Cartesian coordinate system 9 Cartesian dualism 34, 43, 60 Cartesian materialism 60–61 Cartesian space 25 central processing units (CPU) 35 chemistry 88, 173 Chemero, Anthony 23n5, 75–76, 83, 106, 174n3 chicken see Mike the Headless Chicken Chomsky, Noam 61; Universal Grammar (UG) of 23, 34 Church, Alonzo 21, 50, 51 Churchland, Patricia 73 circular causality 89, 98, 100, 110, 129, 138, 178

circular causation 141, 178 Cisek, Paul 71 cognition 4–5, 43; in animals 56; anticomputational and antirepresentational theory of 75; cognitivism and 56; cognitivist understanding of 36; computational theory of 57; computations and representations and xi; conceptualizing 33; degeneracy and 127; distributed 137n15, 170; embodied approach to 60; extended 44n2, 137n15; FEP and 81; Fodorian approaches to 57; informationprocessing explanations/understandings of 35, 42; mind and 87, 88, 122; “mind” and 2, 121, 123; multiple realizations of 34; neuroscientifc explanations of 58; real xii, 175–177; RECS 75; representation and 28, 60, 175; research on 55; sensorimotor activity and 173; theories of brain and 81; twodimensional axis for understanding 23; as type of intelligence 54; understood as computation acting on representation 34; see also embodied cognition cognitive architecture 21, 22, 23; ecological 79 cognitive capacities 78, 177 cognitive concepts 58 cognitive information processing 55 cognitive neuropsychology 57, 58; see also neuropsychology cognitive neuroscience 4–5, 58, 83, 125; see also neuroscience cognitive psychologists 60 cognitive psychology 19–24, 36, 42, 50, 55, 56, 61 cognitive representation 74 cognitive revolution of 1950s and 1960s 5, 13, 19 cognitive science 19, 24, 36, 42, 50, 55, 56, 61, 69, 75, 76, 83, 100, 106 cognitive scientists 4, 21, 60, 73 cognitivism xiii; alternatives to 175; appeal of 23–24; behaviorism and 24; centrality of 58; cognition and 42, 56–61; cognitive revolution of 1950s and 1960s and 5, 19, 23–24, 42; cognitivist 19; computations and representations and 13, 24, 58, 171; core elements of 20; critical shortcomings of 175; critiques of 33–35; dominance of 19, 178; ecological psychology and 13, 24–35, 75; faws of 24; Gibsonian objections to 19; information processing approach of 76, 176; McCulloch-Pitts model and 50, 54– 55; mind as defned by 3; neuroscience and 36, 42–43, 58; new home for 56–61;

Index sins of, visited upon neuroscience 42–61; symbol processing; Computational Theory of cognition cognitive psychology 19–24 cognitivist understanding of cognition 36, 61 Coltheart, Max 57 complexity 90, 102; emergence and 99 complexity matching 144 complexity science xiii, 1, 12, 176; appeals to 180; defning 88–90; foundations of, for the mind sciences 13, 83, 87–111; key concepts of 99–106; NExT and xii, 120–122, 124, 129, 137, 147, 161–164, 165, 170–174, 177, 179; NDST and 144; putting complexity science to work 106–110; role of universality in 106, 145; roots of 90–99; unifed framework of 111 complex systems, as concept 90 computability 54; foundations of 54; universal 50–52 computation 20, 28, 59–60; foundations of 50; as fundamental concept explaining brain function 58; human-level 34; neural 33; neurocomputation 71 computational and representational activity 136; anticomputational and antirepresentational understanding of cognition 75; computational procedures and meanings of representations 35; see also computationalism and representationalism; computations and representations computationalism 56–57; defenders of 57 computationalism and representationalism 8 computational modeling and simulation 19, 100 computational neuroscience 5, 43, 45–46, 56, 125, 180 computational neuroscientists 4 computational power, increased 130 computational resources 108 computational rules and operations 61 computational systems 123 computational theory 7, 8 computational theory of cognition 57 computational theory of mind 24, 50, 52, 56, 170 computational view of the world 178 computations acting on representations 34, 55 computations and representations xi; afordance events and 175; challenge 3 (three) 174; cognitivism and 13, 24, 58, 171; concept of 59–60; cognition and 42; in neuroscience research 59; “no computation without representation” 20 computers 35; logic in 51; perception distinct from 3; theoretical foundations of 21, 25 computer science 2, 56, 89, 109, 170


computer simulation 93 computers to environments 19–36 computing machines, brains as 55 concepts 7, 11; afordances as 71; AI and linguistic research as providing 23; Bayesian 80; behavioral 20; “coding” as 59; cognitive 58; cognitivism’s utilization of 24, 58; common 99; complexity science and 1; “computation” and “representations” and 8; core 99; discipline-specifc 5–6; key concepts 99–106; manifold and manifold theory 131–132; mathematical 90, 95; NExT 121, 124, 147 phase transitions as 88, 89; psychological 77; redefning 2; separate 6; shared 5; synergetics and synergy as distinct 137n16 concepts and theories: cognitivism’s’ 58; crosspollination of 56–57 concepts, methods, guides, and theories 2–3, 8–9, 11–12, 19, 42, 89; co-existence of, in disciplines 44; of complexity science 87, 147; integration of, by ecological psychology 70; RECS and 75 concepts of entities and processes 73 construction: evolution as 172n1 constructive processing 20, 26 control parameters 164; defnition of 138; identifying and defning 164; order and control parameter approach 98, 104, 106, 163; NExT and 163–164; synergetics and 92, 96, 98, 104, 138, 163–164 control theory 127; optimal 137 criticality 4, 6, 106–107; self-organized (SOC) 104, 145; as universal class of brain activity 147 critical phenomena 104 Cromwell, Howard 57 curse of dimensionality 109 cybernetics 88, 90, 103; feedback and 100; system-level activity emphasized by 91; systems theory and 90–91 Dale, Rick 107, 110 Darwinism 79; see also Neural Darwinism data and information 12n21; action-scaled 11, 31–32, 146; body-scaled 5, 9, 11, 31–32, 141, 146; perspectivism and 44; scaled 5; sensory 57; see also big data data collection and processing 106 data deluge challenge 107, 124, 136 data points 100, 103 data science 107 data sets 96, 100–102 degeneracy 127 degrees of freedom problem 136–138, 140


Index

Dennett, Daniel 60; on wonder tissue 69, 70 Descartes, René 176 dimensionality 107; multineuronal 151 dimensionality reduction 107–110, 124, 132, 138, 144; as data processing strategy 132; embedding as form of 166; neural manifold hypothesis compared to 133, 135; NExT and 124, 163 dimensionality reduction methods: Isomap 134, 135, 141 dimensionality reduction methods and techniques 132–133, 135, 141 dimensions of principle component analysis (DPCA) 146; see also principle component analysis (PCA) 108, 109 directionless 80 direct perception (also “perception is direct”) 3, 9, 27, 33, 42, 74–75, 121; as Gibsonian ecological information 142 directly perceivable, afordances as being 32 disembodiment 33–35, 60–61, 171 dynamical cognitive science 162n4 dynamical explanation, nonreductive 5 dynamical landscape 141, 142, 143, 147 dynamical renaissance 55 dynamical similitude 106 dynamical systems 74, 79; limit cycle as 93; universality and 104 dynamical systems modeling 75 dynamical systems theory (DST) xi, 9, 73, 80, 83; Hodgkin-Huxley system and 55, 109, 180; manifold theory and 132; in neuroscience 180; nonlinear (NDST) 90, 91–96, 137, 163, 172; synergetics concept defnition and 92; see also nonlinear dynamical systems theory (NDST) dynamics: agent-environment 75; bodily 3; component-dominant 101; coordination 6, 103, 106, 145, 147, 162, 163; critical 146; Hodgkin-Huxley tradition’s focus on biological realism and 55, 61; interaction-dominant 80, 99–100, 163; latent 134; mesoscale 128; mesoscopic 130; neural 130, 146; neural population xii, 122, 125, 130–131, 133, 140, 158, 160, 163; nonlinear 107; pendulum 93, 94, 108; population 72, 122, 125, 134, 160; power-law 144; single-neuron 55; social 110; synergy 163; system 92–93, 98, 138; system-level 91, 99–100, 102; temporal 105; universal 104 ecological brain xi, 172 ecological information 29, 31–33, 71, 161, 178; ambient light as 28, 30, 178; invariant 11; substance as 28; surface as 28

ecological laws 31 ecological neuroscience 72; preliminaries to 70–75; varieties of 69–83; see also Bayesianism; Gibsonian neuroscience; neural reuse; Reed ecological psychology 19–36; four primary principles of 164–165; neuroscience and 5–13; resonance and 72– 75; “why” of 36 ecology of the body 121 Edelman, Gerald 76–78, 125–127, 130 embodied action 173 embodied approach to cognition 60–61 embodied cognition 35, 44n2, 57, 60, 75, 176, 177n7; see also RECS embodied mind 35; disembodied mind 3 embodiment xi, 11, 71, 78, 121; disembodiment 33, 60–61 emergence, nonlinearity, self- organization, and universality 88, 90, 99–106, 110, 124, 137, 171 enactivism 76, 173–174 equations 75, 106, 108; analytic solving of 110; diferential 48, 55, 92, 93, 96–98, 107, 163; diference 92; examples of 81, 91; governing 93; manifolds and 132; pendulum diferential 94; see also Neural ODE explanation 1n1, 6, 9; incommensurable 19; incomplete 120; nonreductive dynamical 5; NExT as source of 122, 147, 159; neuroscience 5; perspectivism and 44 explanations of acts of perception or perceptionaction 70, 80 explanations of afordance pass-through-able 123, 157–158 explanations of human performance 21 explanations of mind 88, 145 explanations of psychological phenomena 20 explanations of visual perception 28 extended cognition 44n2, 137n15 extended cognitive systems 103 extended mind 44 extraterrestrials, minds of 3, 34 Feyerabend, Paul 44, 87 frst shape score 109 fsh 45 fsh bait balls 103 fsh body measurements 108, 109 Fodor, Jerry 56, 60 Fodorian approaches 57 fractals 101, 103–106; 1/f noise 80, 97; 1/f scaling 80, 97, 101; DST 79; geometry 93; pink noise 80, 96, 97, 101; selfsimilarity of 95, 96; scale-invariance; monofractal analysis 144; multifractal

Index analysis 96, 144; spatial 94; structure 80; temporal 94 fractal analysis 80, 96 fractal branching 105 free-energy principle (FEP) 6, 8, 81–82 Freeman, Walter 128–130 Freud, Sigmund 48 Gage, Phineas 24, 44, 70 Gardner, Howard 58 Gazzaniga, Michael, 58 Gestalt psychology 24, 88, 90 Gibson, Eleanor J. 3 Gibsonian ecological information 142n20 Gibsonian ecological psychology 75, 83, 120–121, 122n4, 148, 174, 178–179 Gibsonian neuroscience 12, 14, 69, 81–83 Gibsonians 19; Neo-Gibsonians 9, 34, 69 Gibsonian resonance 74–75 Gibsonian terms 71–73 Gibson, James J. xi, 3, 8n6, 25–35, 157; antirepresentationalism of 75; on brains 70, 179; on Reed’s rejection of instructionism 76 Golgi, Camillo: neuron doctrine rejected by 48; staining technique of 57 guide to discovery 6, 12, 106, 133 Haken-Kelso-Bunz model (HKB model) 98, 103, 106, 146 headless chicken see Mike the Headless Chicken heart: Aristotle’s understanding of action and role of 43–45, 171 heartbeats 94, 97 hemispherectomy 46 high-dimensional data 107–109, 112, 132, 141, 144 Hippocrates 43–45, 61, 70, 171 Hodgkin, Alan 48–50 Hodgkin-Huxley model 49–50, 61, 109 Hodgkin-Huxley tradition xii, 54–55, 61, 69, 171, 180 Huxley, Andrew 48–50 inference 12, 25, 110; active 81–82; Bayesian 4, 81; theories of active inference 82 information 28, 90; stimulus 28; see also ecological information information processing 19, 20, 32; braincentered 33; brain-centric 75; brain states understood as form of 55; cognition and 34; cognitive 55; cognitivism and 56, 60, 76, 176; computation as explanation for 59; “compute” as synonym for 59; intelligence and 35; perception and motor control and 57 information-processing approach 11


information-processing system 8 information processing understanding of mind and cognition xii, 13, 42, 171 information theory 21, 24; see also Shannon; Weaver innate capacity for language 23 innateness 90, 171; as “uncashed check” 34, 61 instructionism or instructionist approaches 76–77 intelligence 61, 171; as “uncashed check” 35, 61; see also loan of intelligence investigating, explaining, and understanding 1n1, 12, 24, 120, 147; universal forms of 145 Isomap 134, 135, 141 Jamesian functionalism 24 Kelso, J. A. Scott 6, 50n8, 98–99, 103, 137–138 Koch, Christof 49, 57–59, 94 Koch triangle 94 Kording, Konrad 45 Kuhn, Thomas 4, 88; on scientifc paradigms 45n5 Language of Thought (LOT) 57 large language models (LLM) 125n5 latent variables 133, 134, 135, 141 life sciences 82 linear 82, 92; see also nonlinear 92 linear causal chain 98 linear causation/causality 129, 178 linear methods 108 linear phenomenon 91 linear processes 102 linearity see nonlinear dynamical systems theory; nonlinearity linguistic computations 34 linguistics 34, 171, 176 loan of intelligence 12, 35, 61 LOT see Language of Thought low-dimensional manifold 141, 146, 160–161 manifold, manifolds 131–136; behavioral 144; intrinsic neural 144; invariant 146; low-dimensional141, 146, 160–161; neural 114, 133, 134–135, 164–165; torus 158; uncontrolled (UCM) 138–141, 160; unstructured 142n20 manifold theory 72, 132, 163 manifold hypothesis see neural manifold hypothesis Marr, David: computational theory of 7, 8; hardware implementation of 7, 8; three levels of analysis of 7, 8 representation and algorithm of 7, 8; McCulloch, Warren S. 52, 58; infuence of 125


Index

McCulloch-Pitts model 50, 54–55 McCulloch-Pitts neuron 52, 59–60, 69 McCulloch-Pitts tradition xii, 55–56, 61, 69 meaning 171, 178; as “uncashed check” 61 meaningful 166; afordances as 35; determining what is meaningful to an organism 32–35, 74, 161; representations as 34 meaningful interactions between organism and environment 28, 74 meaningful facets of perception 3, 9 meaningful features of the world 142 meaningful opportunities for action 164 meaningless 95–96 measure-the-simulation 110 mechanisms 1n1 174–175; appealing to 174; biological 54; biophysical 57; defning 175; ionic 50 mechanistic explanation 11, 165, 175 “mentalese” 57 Mike the Headless Chicken 43–45, 46, 61 Miller, George 58 mind: AI and 21; behavior and 20; brain-centric understandings of ix, 13, 33, 71, 75; brain as central organ of 70; brainfocused conception of 5, 13; brain/ mind, black box of 23–24; Cartesian dualism of body and 60; cognition/mind movement of 1990s 35; computational theory of 24, 50; contemporary sciences of 45; continuity of mind thesis 144n22; disembodied body from 33; mainstream approaches to 75; RECS and 76; science of 20; see also Mike the Headless Chicken; understanding mind mind functioning, brain and 61 mind sciences 1–4; Bayesian-based work in 80; foundations of complexity sciences for 88–111 mind scientists 78 mind stuf 34 monkey brain 134 monkeys 146 mouse, mice xiii, 71, 146; case study of mouse inside a house (i.e., pass-through-ability) 158–162, 164, 179 mouse brain 4, 134, 168 mouse-hole-in-wall system 158, 164 mutual causality 178 NDST see nonlinear dynamical systems theory network neuroscientists 56 networks: artifcial neural 55; connectionism and 55; neural 21, 23, 52, 109, 125–127, 140; neuronal 104, 105 network theory 4, 145, 180–181 Neumann see von Neumann, Johann neural computation 57

Neural Darwinism 6, 8, 11, 76–79, 83, 125–130, 140, 165, 172; theory of neuronal group selection and 125; selectionist versus instructionist 76–77, 79, 126–127, 140 neural dynamics 130, 146 neural manifold 114, 133, 134–135, 164 neural manifold hypothesis 130, 133, 135, 139n19, 141, 160, 165 Neural Manifold Ordinary Diferential Equations (Neural ODE) 166, 167 neural networks 21, 23, 52, 109, 125–127, 140 neural network properties 130 neural network module 56 neural population 128, 145, 159 neural population activity 161, 164, 165 neural population dynamics xii, 122, 125, 130– 131, 133, 140, 158, 160, 164; mesoscale 128; mesoscopic 130 neural population manifold 158 neural population 128 neural reuse 6, 11, 70, 77n4, 78–80, 83, 145 NeuroEcological Nexus Theory (NExT): organism-environment system and 121–122, 157–158; overview of 120–127; pass-through-able and 157–165, 168, 174; putting NExT to work 157–168; six hypotheses 122–127 neuroecology 121n2 neurology 45 neuronal activity 12, 53, 102, 123, 145, 147 neuronal coordination 126 neuronal coupling 106 neuronal dynamics 146 neuronal ensembles 99 neuronal fring patterns 12 neuronal groups 126–127; as maps 127; neural population as 128 neuronal group selection, theory of 125 neuronal networks 52, 105; sandpiles and 104 neuronal population dimensions 143 neuronal timeseries 133 neuron doctrine 4, 45, 47–48, 53, 55, 61; neural population doctrine and 125 neurophilosophy 45 neural population doctrine 125 neuropsychology: cognitive 57, 58 neuroscience: cognitive 4–5, 58, 83, 125; mainstream 178 Newell, Allen 21, 122 Nietzsche, Friedrich 44n4, 170 noise: 1/f 80, 97; brown 96, 97; deterministic and structured 83; meaningless 108; pink 80, 96, 97, 101; random and unstructured 82; white 96, 97, 101 nonlinear 92 nonlinear dynamical systems theory (NDST) 80, 99–100, 104, 107, 110; complexity

Index science and 96; DST and 91; key features of 93; nonlinearity as central to 100; scale-invariant structures and 93, 104, 144 nonlinearity 80, 83, 88, 100; assessing 102 nonlinearity, emergence, self-organization, and universality 88, 90, 99–106, 110, 124, 137, 171 normalizing fows 166 ontological reductionism 173 ontologies 11 optic array 10, 11, 29, 30–31, 123 optic fow 28, 29, 159, 161, 166, 168, 181 optic fow recording 166 orangutan 6, 8, 35; brain 8 order and control parameter approach 98, 104, 106; NExT and 163 order parameters 98, 104; defning 106, 163–164 organisms: afordances and 28, 31–33, 35, 71; borderline cases of minded organisms 44n2; Darwinism and 79; FEP and 82; Gibson and Neo-Gibsonian understanding of 69, 121; heart’s role in 44; innateness and 35; life sciences and 82; meaningful and meaning, to an organism 32–33; mental gymnastics of 32; Neural Darwinism and 76; pass-through-able of 157–158, 159; perception and 33; puzzle of mind when brain is removed from 45; specifcation and 28, 69; structural correspondence and structural representations per 74; tendency to resist disorder 81 organism-environment interaction 6, 9; see also “outside-in” approach organism-environment system 11, 32–33, 80; adaptive self-organizing 140; agent/ organism-environment systems 75; architecture of 145; body as synergy in 160; circular and mutual causality in 178; complexity matching in 144; as complex systems 137; NExT and 121–122, 157–158; NExT pass-throughable in 157–165, 166, 168, 174; NExT’s six hypotheses for 122–127; perception-action in 31; pluralism of universal classes and 147; as relevant spatiotemporal scale of investigation 42; situated nature of 120; six hypotheses of 122–147; theory of perception for 179; see also mouse brain organism-environment system-level understanding of mind 5 “outside-in” approach of neuroscience 8, 28n8, 178


Panksepp, Jaak 57–58 parallax 71; motion 27; variance 139 Parker, Philip 71 pass-through-able: afordance of 5, 30–31, 123, 132, 144, 157–163, 164, 165, 168; explanations of afordance of 123, 157–158; NExT and 157–165, 166, 168, 174; of organisms 57–158, 159; see also mouse, mice pattern recognition 52 PCA see principle component analysis pendulum dynamics see dynamics perception: air theory of 26, 27; ground theory of 26, 27, 27 perception-action xii; afordances as 176; Bayesianism and 80; brains and brain activity in 70, 178–180; CNS and 12; ecological psychology and 171; Friston’s framework for 82; identifying efective ecological action to guide 130; lawful regularities in organism-environment systems and 31–32; mammalian 11; neural contributions to 72; purpose of psychology and 33; real cognition and (Challenge 4) 175; RECS and 75; resonance and 72, 179 perception-action activities 78; afordance events as 77; reciprocal 158 perception-action events 120, 144; afordance 143, 145–146, 176; neural contributions to 172; NExT and 157 perception-action capabilities: ecological niche of organisms and 121n2; of orangutans 35; universality of 145–146 perception-action loop 76, 164–165 perception-action properties, lawful 32 perception-action task 107 perceptron 52, 55 perceptron neuron model 53 perspectivism 44, 54 phase space plots 91; computer simulations and 93; state space and 92 phase transitions 88, 89, 92, 93, 98, 103–104; among mouse neural populations 162; attractors with 142; critical 138n18; guided by SOC 145; NExT and complexity science and 163; universal principles and 138 phenomenon of interest, identifying 106, 163 Piantadosi, Steven 57 Piccinini, Gualtiero 57, 59 Pitts, Walter 45–48, 52, 58; infuence of125; neural network strategy of 21; McCulloch-Pitts model 50, 54–55; McCulloch-Pitts neuron 52, 59–60, 69; McCulloch-Pitts tradition xii, 55–56, 61, 69


Index

pluripotentiality 128n6 POS see poverty of stimulus poverty of stimulus (PoS) 23, 32, 34 Pribram, Karl 69, 70 principle component analysis (PCA) 108, 109 Quilty-Dunn, Jake 57 Quine, Willard Van Orman 147n23 radical embodied cognitive science (RECS) 75–76, 83 Raja, Vicente 79–80 Ramón y Cajal, Santiago 45–48, 125 Rayleigh-Bénard convection 103, 110 RECS see radical embodied cognitive science recurrence quantifcation analysis (RQA) 102, 102 recurrent patterns of behavior 102 reduction: dimensionality 107–110, 141 reduction to constituent parts 90 reductionism xi; epistemic 173; methodological 123; ontological 173 redundancy 108, 127n6, 152 redundancy of variables 140 Reed, Edward 70, 73, 76–77, 78, 80, 83, 171 representation xi, xii; antirepresentational 3, 27, 75; brain-centric focus on computations and 13; cognition and 28, 60, 175; cognitivism/cognition as computation and 20, 24, 34, 42, 55, 60; concepts of 58; computational-representational mental language 57; computations acting on 55–56; construct 32; constructing mental representations 33; ecological psychology and 28; failure to explain what representations are 12, 61; indirect mental 9; manner and meaning of 35; as meaningful 34; mental 26, 27; mental gymnastics and 32; neuroscience research articles with the term “representation” 59; as posited by neuroscience 11; statespace 59, 63; structural correspondence and structural representations 74 representation and algorithm 7, 8 representational, mind as 3, 5 resonance xii; concept of 70, 83; criticism of 79; ecological psychology and 72–75; Gibson’s early use of 70, 73, 75; perception-action and 72, 179; Raja’s explanation of 79; selective 81 Rosch, Eleanor 57, 173 RQA see recurrence quantifcation analysis Safaie, Mostafa 146 scale-invariant structures 93, 96, 104, 106 Schneider, Susan 57

selectionism versus instructionism 76–77, 79, 126–127, 140 self-organization, emergence, nonlinearity, and universality 88, 90, 99–106, 110, 124, 137, 171 Shannon, Claude 21 Sierpinski triangle 94, 95 situatedness xi, 11, 71, 120, 170, 181 situated action: cognition and 173 situated cognition 122; see also cognition Skinner, B. F. 20, 23 slaving principle 98, 100, 110 soft-assembled system 137, 160; body as 137, 140 “software” of the brain 176 specifcation 28, 71, 144 specifcation of afordances 31 specifcation of organism 69 speed-accuracy tradeof 21 Spivey, Michael 110, 144n22 Sporns, Olaf 89, 127–128, 130 starling murmuration 103 state maintenance 91 state space 92; 3-dimensional 134; criticality and 147; dynamical landscapes and 141; low-dimensional variables and 166, 167; NExT and 165–166; phase space plot with 159; specifcation and, 143, 144; as qualitative tool 163–164; UCM and 138, 139 state space of behavioral activity 147 state space of ecological information 147 state space of neural population activity 147 state-space representation 59, 63 synergetics 96–99, 103, 111; applications of 180; circular causality in 98; complexity science and 99, 137, 172; control parameters in 92, 96, 98, 104, 139, 163; ecological psychology and 75, 90; NDST and 163, 172; NExT and 163; order parameters in 92, 98, 139, 163; selforganization and 103, 178; synergies and 137–138; universality in physics and 104 synergetics concept defnitions 92 synergy, synergies 137–140; afordances as 161; body 163, 172; body as 137, 160–161, 179; defnition of 136; description of 137; inaccurate and accurate depiction of 137, 139; NExT and 158; reciprocal compensation 138; “successful” 160; UCM and 140 synergy dynamics 163 systems theory 90–91, 103, 111; see also dynamical systems theory (DST); nonlinear dynamical systems theory (NDST)

Index teleology 44 telos 44, 171 theoretical neuroscientists 4 Thompson, Evan 57, 173 topology 130–132; 1-dimensional 134, 160; 3-dimensional 142; algebraic 132; defnition of 131; geometric 132; network 146; torus 162; see also manifolds; manifold theory topological dimensions 134, 160 topological manifolds 131 topological properties 130, 132 topological spaces 131; homeomorphic 131, 134; n-dimensionality in 131 torus manifold 158 trajectory 21, 92; closed 93; dynamic 109 Tranquillo, Joe 89 triviality 74, 79 Turing, Alan 21, 25, 50–52, 54 Turing machine 51, 52, 60 Turing test 50 Twain, Mark 77n4 UCM see uncontrolled manifold UG see Universal Grammar uncashed checks, critique of 33–35, 61 uncontrolled manifold (UCM) 138–141 understanding: conceptual understanding of synergies 138; in reference to scientifc framework 1n1; of what is meaningful to an organism 32–33; outside-in, in neuroscience 178 understanding brain-body systems via network theory 181 understanding brains 128 understanding complex systems 130 understanding human telos 44 understanding mind, understanding of mind 12, 73, 99, 179 understanding natural phenomenon 122 understanding neural systems 125


understanding neuroscience 61 understanding of cognition, understanding cognition 34, 36, 56; antirepresentational 75; information processing 42; scientifc 87 understanding of intelligence 55 understanding of logic 51 understanding of mind 13; informational processing xii; system-level 5 understanding perception 30 understanding phase transitions of systems 93 understanding the body 161 Universal Grammar (UG) 23, 34 universality: defnition of 104, 145, 162; emergence, nonlinearity, selforganization, and 88, 90, 99–106, 110, 124, 137, 171; NExT and 124, 145, 162 universality classes 89, 104, 107, 145 value, values 52–53; afordances as 82; living organisms’ pursuit of that which has 82; low-dimensional 167; numerical 97; performance variable 139; resting 49; temporal 97; weighted 53 values and meanings of things 32 Varela, Francisco 57, 173 vision 32, 122 “Vision” (Marr) 8n6 visually-guided action 6, 19, 25, 31, 70, 120; air theory of perception and 26, 27; ecological psychology approach to 9–10, 11, 12, 170; ground theory of perception and 26, 27, 27; neuroscience approach to 6–9, 11, 12, 170 von Neumann, John 20n1, 50–52, 58 Warren, William H. 6, 25, 27, 30, 31–32, 123 Weaver, Warren 21 Whang, S. 6, 30, 31–32, 123 wonder tissue 69, 70 World War II xii, 21, 25, 171