THE ROUTLEDGE HANDBOOK OF PHILOSOPHY OF THE SOCIAL MIND
The idea that humans are by nature social and political animals can be traced back to Aristotle. More recently, it has also generated great interest and controversy in related disciplines such as anthropology, biology, psychology, neuroscience, and even economics. What is it about humans that enabled them to construct a social reality of unrivalled complexity? Is there something distinctive about the human mind that explains how social lives are organised around conventions, norms, and institutions? The Routledge Handbook of Philosophy of the Social Mind is an outstanding reference source to the key topics and debates in this exciting subject and is the first collection of its kind. An international team of contributors present perspectives from diverse areas of research in philosophy, drawing on comparative and developmental psychology, evolutionary anthropology, cognitive neuroscience, and behavioural economics. The thirty-two original chapters are divided into five parts:
• The evolution of the social mind: including the social intelligence hypothesis, co-evolution of culture and cognition, ethnic cognition, cooperation;
• Developmental and comparative perspectives: including primate and infant understanding of mind, shared intentionality, and moral cognition;
• Mechanisms of the moral mind: including norm compliance, social emotion, and implicit attitudes;
• Naturalistic approaches to shared and collective intentionality: including joint action, team reasoning and group thinking, and social kinds;
• Social forms of selfhood and mindedness: including moral identity, empathy and shared emotion, normativity, and intentionality.
Essential reading for students and researchers in philosophy of mind and psychology, The Routledge Handbook of Philosophy of the Social Mind is also suitable for those in related disciplines such as social psychology, cognitive neuroscience, economics, and sociology.
Julian Kiverstein is Assistant Professor of Neurophilosophy at the University of Amsterdam, and Research Fellow at the Academic Medical Centre, Amsterdam, The Netherlands. He works in philosophy of cognitive science and neuroscience, and is currently completing a book on embodied and enactive cognition.
ROUTLEDGE HANDBOOKS IN PHILOSOPHY
Routledge Handbooks in Philosophy are state-of-the-art surveys of emerging, newly refreshed, and important fields in philosophy, providing accessible yet thorough assessments of key problems, themes, thinkers, and recent developments in research. All chapters for each volume are specially commissioned, and written by leading scholars in the field. Carefully edited and organized, Routledge Handbooks in Philosophy provide indispensable reference tools for students and researchers seeking a comprehensive overview of new and exciting topics in philosophy. They are also valuable teaching resources as accompaniments to textbooks, anthologies, and research-orientated publications.
Also available:
THE ROUTLEDGE HANDBOOK OF EMBODIED COGNITION Edited by Lawrence Shapiro
THE ROUTLEDGE HANDBOOK OF NEOPLATONISM Edited by Pauliina Remes and Svetla Slaveva-Griffin
THE ROUTLEDGE HANDBOOK OF CONTEMPORARY PHILOSOPHY OF RELIGION Edited by Graham Oppy
THE ROUTLEDGE HANDBOOK OF PHILOSOPHY OF WELL-BEING Edited by Guy Fletcher
THE ROUTLEDGE HANDBOOK OF PHILOSOPHY OF IMAGINATION Edited by Amy Kind
THE ROUTLEDGE HANDBOOK OF THE STOIC TRADITION Edited by John Sellars
THE ROUTLEDGE HANDBOOK OF PHILOSOPHY OF INFORMATION Edited by Luciano Floridi
THE ROUTLEDGE HANDBOOK OF PHILOSOPHY OF BIODIVERSITY Edited by Justin Garson, Anya Plutynski, and Sahotra Sarkar
THE ROUTLEDGE HANDBOOK OF PHILOSOPHY OF THE SOCIAL MIND
Edited by Julian Kiverstein
First published 2017 by Routledge 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN and by Routledge 711 Third Avenue, New York, NY 10017 Routledge is an imprint of the Taylor & Francis Group, an informa business © 2017 selection and editorial matter, Julian Kiverstein; individual chapters, the contributors The right of Julian Kiverstein to be identified as the author of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers. Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library Library of Congress Cataloging-in-Publication Data Names: Kiverstein, Julian, editor. Title: The Routledge handbook of philosophy of the social mind / edited by Julian Kiverstein. Description: 1 [edition]. | New York : Routledge, 2016. | Series: Routledge handbooks in philosophy | Includes bibliographical references and index. Identifiers: LCCN 2016014443 | ISBN 9781138827691 (hardback : alk. paper) | ISBN 9781315530178 (e-book) Subjects: LCSH: Philosophy of mind. | Evolutionary psychology—Philosophy. | Sociobiology—Philosophy. | Social psychology—Philosophy. Classification: LCC BD418.3 .R774 2016 | DDC 128/.2—dc23 LC record available at https://lccn.loc.gov/2016014443 ISBN: 978-1-138-82769-1 (hbk) ISBN: 978-1-315-53017-8 (ebk) Typeset in Bembo by Apex CoVantage, LLC
CONTENTS
Notes on contributors ix
Acknowledgements xv
Introduction: sociality and the human mind Julian Kiverstein
1
PART I
The evolution of the social mind
17
1 The (r)evolution of primate cognition: does the social intelligence hypothesis lead us around in anthropocentric circles? Louise Barrett
19
2 Pedagogy and social learning in human development Richard Moore
35
3 Cultural evolution and the mind Adrian Boutel and Tim Lewens
53
4 Embodying culture: integrated cognitive systems and cultural evolution Richard Menary and Alexander James Gillett
72
5 The evolution of tribalism Edouard Machery
88
6 Personhood and humanhood: an evolutionary scenario John Barresi
102
PART II
Developmental and comparative perspectives
115
7 Pluralistic folk psychology in humans and other apes Kristin Andrews
117
8 The development of individual and shared intentionality Hannes Rakoczy
139
9 False-belief understanding in the first years of life Rose M. Scott, Erin Roby, and Megan A. Smith
152
10 Cross-cultural considerations in social cognition Jane Suilin Lavelle
172
11 The social formation of human minds Jeremy I. M. Carpendale, Michael Frayn, and Philip Kucharczyk
189
12 Pluralism, interaction, and the ontogeny of social cognition Anika Fiebich, Shaun Gallagher, and Daniel D. Hutto
208
13 Sharing and fairness in development Philippe Rochat and Erin Robbins
222
PART III
Mechanisms of the moral mind
245
14 Doing the right thing for the wrong reason: reputation and moral behavior Jan M. Engelmann and Christian Zeller
247
15 Is non-consequentialism a feature or a bug? Fiery Cushman
262
16 Emotional processing in individual and social recalibration Bryce Huebner and Trip Glazer
280
17 Implicit attitudes, social learning, and moral credibility Michael Brownstein
298
18 Social motivation in computational neuroscience: (or, if brains are prediction machines, then the Humean theory of motivation is false) Matteo Colombo
320
PART IV
Naturalistic approaches to shared and collective intentionality
341
19 Joint distal intentions: who shares what? Angelica Kaufmann
343
20 Joint action: a minimalist approach Stephen Butterfill
357
21 Commitment in joint action John Michael
370
22 The first-person plural perspective Mattia Gallotti
387
23 Team reasoning: theory and evidence Jurgis Karpus and Natalie Gold
400
24 Virtual bargaining: building the foundations for a theory of social interaction Nick Chater, Jennifer B. Misyak, Tigran Melkonyan, and Hossam Zeitoun
418
25 Social roles and reification Ron Mallon
431
PART V
Social forms of selfhood and mindedness
447
26 Diachronic identity and the moral self Jesse J. Prinz and Shaun Nichols
449
27 The embedded and extended character hypotheses Mark Alfano and Joshua August Skorburg
465
28 Mindshaping and self-interpretation Tadeusz W. Zawidzki
479
29 Vicarious experiences: perception, mirroring or imagination? Pierre Jacob and Frédérique de Vignemont
498
30 Phenomenology of the we: Stein, Walther, Gurwitsch Dan Zahavi and Alessandro Salice
515
31 Social approaches to intentionality Glenda Satne
528
32 Normativity Joseph Rouse
545
Index 563
NOTES ON CONTRIBUTORS
Mark Alfano is Associate Professor of Philosophy at Delft University of Technology. He works on moral psychology, broadly construed to include ethics, epistemology, philosophy of mind, and philosophy of psychology. He also maintains an interest in Nietzsche, focusing on Nietzsche’s psychological views.
Kristin Andrews is Associate Professor in the Department of Philosophy and Cognitive Science Program at York University, in Toronto, Canada, and a Member of the Royal Society of Canada’s College of New Scholars. Her main research interests are in philosophy of animal minds, folk psychology, and the evolution of normativity.
John Barresi is a retired professor and currently Adjunct Professor in Psychology & Neuroscience and Philosophy at Dalhousie University, Canada. He has co-authored, with Raymond Martin, Naturalization of the Soul (2000) and The Rise and Fall of Soul and Self (2006).
Louise Barrett is Professor of Psychology and Canada Research Chair in Cognition, Evolution and Behaviour at the University of Lethbridge. Her research interests consider the way in which sociality has selected for particular kinds of psychological traits in humans and nonhuman primates.
Adrian Boutel is Research Associate in the Department of History and Philosophy of Science at the University of Cambridge, UK. Following a PhD in the philosophy of mind, his postdoctoral work focuses on the metaphysics of social science.
Michael Brownstein is Assistant Professor of Philosophy at John Jay College/CUNY, USA. His research focuses on philosophy of psychology and cognitive science, with an emphasis on the nature of the implicit mind.
Stephen Butterfill is Associate Professor in Philosophy at the University of Warwick, UK. He has investigated philosophical issues in research on joint action, mindreading, motor representation, categorical perception, and infants’ knowledge of objects.
Jeremy I. M. Carpendale is Professor of Developmental Psychology in the Psychology Department at Simon Fraser University, Canada. His areas of research include prelinguistic communication, social understanding, and moral development.
Nick Chater is Professor of Behavioural Science at Warwick Business School. He is interested in the cognitive and social foundations of rationality, as well as applications of cognitive and behavioural science to public policy.
Matteo Colombo is Assistant Professor at the Tilburg Center for Logic, Ethics and Philosophy of Science (TiLPS), and in the Department of Philosophy at Tilburg University (The Netherlands). Matteo is currently thinking about social norms, explanation, and computational neuroscience.
Fiery Cushman is Assistant Professor of Psychology at Harvard University in the United States, where he directs the Moral Psychology Research Laboratory. His research investigates the cognitive mechanisms responsible for human moral judgment, along with their development, evolutionary history, and neural basis.
Jan M. Engelmann is a postdoctoral Research Fellow in the Department of Developmental and Comparative Psychology at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.
Anika Fiebich’s research foci are social cognition, collective intentionality, and social ontology. Her work is inspired by empirical evidence from developmental psychology, cultural sciences, and neurosciences. Currently, she is a postdoctoral Humboldt Fellow at the University of Duisburg-Essen in Germany.
Michael Frayn is a graduate student in the Psychology Department at Simon Fraser University, Canada. He is particularly interested in exploring theories concerning the integration of the mind and brain sciences.
Shaun Gallagher is the Lillian and Morrie Moss Professor of Excellence in Philosophy at the University of Memphis, and Professorial Fellow at the Faculty of Law, Humanities and the Arts, University of Wollongong (AU).
He is currently a Humboldt Foundation Anneliese Maier Research Fellow (2012–17).
Mattia Gallotti is Research Fellow in Philosophy and the Project Manager of The Human Mind Project in the School of Advanced Study of the University of London. He works on issues about the relationship between mind and society, collaborating with psychologists and neuroscientists.
Alexander James Gillett is a PhD candidate at Macquarie University working on an analysis of how the idea of distributed cognition has been used in philosophy of science.
Trip Glazer is Adjunct Instructor of Philosophy at Georgetown University. His publications include Can Emotions Communicate? (2014).
Natalie Gold is Senior Research Fellow at King’s College London, where she leads the European Research Council–funded project Self-Control and the Person: An Inter-Disciplinary Account.
She has published on topics including framing, moral judgements and decisions, cooperation and coordination, and self-control.
Bryce Huebner is Associate Professor in the Philosophy Department at Georgetown University (USA). He is the author of Macrocognition: A Theory of Distributed Minds and Collective Intentionality (2013). His research targets issues in moral psychology as well as the philosophy of the social and cognitive sciences.
Daniel D. Hutto is Professor of Philosophical Psychology at the University of Wollongong and member of the Australian Research Council College of Experts. His most recent books include: Wittgenstein and the End of Philosophy (2006), and Folk Psychological Narratives (2008). He is co-author of the award-winning Radicalizing Enactivism (2013) and editor of Narrative and Understanding Persons (2007) and Narrative and Folk Psychology (2009). A special yearbook, Radical Enactivism, focusing on his philosophy of intentionality, phenomenology, and narrative, was published in 2006. He regularly speaks at conferences and expert meetings for anthropologists, clinical psychiatrists, educationalists, narratologists, neuroscientists, and psychologists.
Pierre Jacob is a French philosopher of mind and the cognitive sciences. He is currently Emeritus Director of Research at CNRS, and a member of the Institut Jean Nicod. His current work focuses on what is special to human social cognition.
Jurgis Karpus is a PhD student at King’s College London with research interests in the philosophy of economics, rational choice, decision theory, and game theory. His thesis addresses intertemporal choice and self-control.
Angelica Kaufmann is Associate Research Scholar at the Italian Academy for Advanced Studies at Columbia University in NYC, and Early Career Fellow at the Lichtenberg-Kolleg at Göttingen University. Her work focuses on various issues in the philosophy of mind, including intention, mental content, action planning, shared agency, and memory.
Julian Kiverstein is Assistant Professor of Neurophilosophy at the University of Amsterdam, and Research Fellow at the Academic Medical Centre, Amsterdam, The Netherlands. He works in philosophy of cognitive science and neuroscience, and is currently completing a book on embodied and enactive cognition.
Philip Kucharczyk is Research Assistant in the Theory/History Lab at Simon Fraser University. He currently holds Bachelor’s degrees in both philosophy and psychology. His study interest is focused upon the extent to which early triadic interactions determine how objects are perceived and manipulated at later stages.
Jane Suilin Lavelle is Lecturer in Philosophy of Mind and Cognitive Science in the School of Philosophy, Psychology and Language Sciences, at the University of Edinburgh. She is working on The Social Mind, a textbook to be published with Routledge.
Tim Lewens is Professor of Philosophy of Science at the University of Cambridge, UK. His recent books, all published in 2015, include Cultural Evolution: Conceptual Challenges, The Biological Foundations of Bioethics, and The Meaning of Science.
Edouard Machery is Distinguished Professor in the Department of History and Philosophy of Science at the University of Pittsburgh, Director of the Center for Philosophy of Science at the University of Pittsburgh, a member of the Center for the Neural Basis of Cognition (University of Pittsburgh-Carnegie Mellon University), and an Adjunct Research Professor in the Institute for Social Research at the University of Michigan.
Ron Mallon is Associate Professor of Philosophy and Director of the Philosophy-Neuroscience-Psychology Program at Washington University in St. Louis. His research explores moral psychology, social construction, and the philosophy of race. He is the author of The Construction of Human Kinds (2016).
Tigran Melkonyan is Associate Professor of Behavioural Science at Warwick Business School. Tigran’s research interests cover a number of areas including decision-making under risk and uncertainty, applied game theory, and resource economics.
Richard Menary is Associate Professor of Philosophy at Macquarie University. He works in the philosophy of mind and cognitive science and has published on the evolution of cognition, mathematical cognition, embodied cognition, and cognitive integration.
John Michael completed his PhD in Philosophy at the University of Vienna in 2010. He then held postdoctoral positions in Aarhus, Copenhagen, and Budapest, where he was a Marie Curie research fellow at the Department of Cognitive Science of the Central European University (CEU) in Budapest. He is currently Assistant Professor in the Philosophy Department of University of Warwick (UK). He works conceptually and experimentally on commitment and trust, joint action, mindreading, and other issues in social cognition research.
Jennifer B. Misyak is a postdoctoral researcher at Warwick Business School, with a background in cognitive science and psychology (PhD, Cornell University).
Her research covers topics including: foundations for communication and conventions; mechanisms for language and statistical learning; and coordinated behaviour and social interaction.
Richard Moore is a philosopher and experimental psychologist who holds a postdoctoral position at the Berlin School of Mind and Brain, Humboldt-Universität zu Berlin. His research addresses the cognitive and motivational pre-requisites of intentional communication and social learning, and the contribution of these to cognitive development in ontogeny and phylogeny.
Shaun Nichols is Professor of Philosophy at the University of Arizona. He works at the intersection of philosophy and psychology. He is the author of Sentimental Rules (2007) and Bound: Essays on Free Will and Moral Responsibility (2015).
Jesse J. Prinz is Distinguished Professor of Philosophy and Director of the Committee for Interdisciplinary Science Studies at the City University of New York, Graduate Center. Prinz has research interests in cognitive science, philosophy of psychology, philosophy of language, moral psychology, and aesthetics.
Hannes Rakoczy is Professor of Developmental Psychology at the University of Göttingen. His main research interests lie in early cognitive development and in comparative cognitive science.
Erin Robbins is Lecturer in the School of Psychology and Neuroscience at the University of St Andrews (Scotland). Her research focuses on the development of social cognition in infants and children from highly contrasting cultural contexts.
Erin Roby is a graduate student in Developmental Psychology at the University of California Merced. Her research investigates factors that contribute to false-belief understanding and other social cognitive abilities across the lifespan.
Philippe Rochat is Professor of Psychology at Emory University. The main focus of his research is early sense of self, the development of social cognition and relatedness, and the emergence of moral sense during the preschool years.
Joseph Rouse is Hedding Professor of Moral Science at Wesleyan University, Middletown, Connecticut, USA. He is the author of Articulating the World (2015), How Scientific Practices Matter (2002), Engaging Science (1996), and Knowledge and Power (1987).
Alessandro Salice is Lecturer at University College Cork, Ireland. Previously, he held postdoctoral positions in Copenhagen, Vienna, Basel, and Graz. His main areas of research are phenomenology, philosophy of mind, social ontology, and the theory of collective intentionality.
Glenda Satne is Assistant Professor in the Department of Philosophy at the Universidad Alberto Hurtado, Chile, and Vice Chancellor Postdoctoral Fellow at the University of Wollongong, Australia.
Rose M. Scott is Assistant Professor of Psychological Sciences at the University of California Merced. Her research investigates language acquisition and social cognition, with specific emphases on the development of false-belief understanding in the first four years of life.
Joshua August Skorburg is a graduate student in the Philosophy Department and the Institute for Cognitive and Decision Sciences at the University of Oregon, USA.
His research is in moral psychology and he has published articles about virtue theory, philosophy of mind, pragmatism, and epistemology.
Megan A. Smith is a graduate student in Developmental Psychology at the University of California Merced. Her research examines the origins of stereotyped beliefs in infancy, the nature of this early reasoning, and possible environmental influences on stereotype formation.
Frédérique de Vignemont is CNRS Research Director at Institut Jean Nicod, Paris, France. She works in philosophy of cognitive sciences, where her interests include self-consciousness, bodily awareness, embodiment, pain, empathy, and social cognition. She is currently writing a book on the body and the self.
Dan Zahavi is Professor of Philosophy at the University of Copenhagen, Denmark, Director of the Center for Subjectivity Research, and Co-Editor-in-Chief of the journal Phenomenology and the Cognitive Sciences. His publications include Self-Awareness and Alterity (1999), Husserl’s Phenomenology (2003), Subjectivity and Selfhood (2005), Self and Other (2014), and (together with Shaun Gallagher) The Phenomenological Mind (2008).
Tadeusz W. Zawidzki is Associate Professor and Chair of Philosophy at George Washington University, in Washington, DC, USA. He received his PhD from the Philosophy-Neuroscience-Psychology program at Washington University in St. Louis. He is author of Dennett (2007), and Mindshaping (2013).
Hossam Zeitoun is Assistant Professor at Warwick Business School at the University of Warwick, UK. He received his PhD from the Department of Business Administration at the University of Zurich. His current research interests include behavioural science and configurational theories of organization.
Christian Zeller is a postdoctoral Research Fellow at the Institute of Social Research in Frankfurt am Main, Germany.
ACKNOWLEDGEMENTS
I would like to thank Tony Bruce for first approaching me with the innovative and excellent idea for a volume exploring what philosophers have to say about the social life of the mind, and the two anonymous reviewers of my proposal, who had excellent ideas for how to improve on my initial vision for the handbook. Adam Johnson and Emma Craig at Routledge have provided invaluable support in keeping the project on track and bringing it to timely completion. I would also like to extend warm thanks to Tina Cottone and her team at Apex CoVantage for their careful copy-editing work and their extremely efficient handling of the whole production process. They have been a pleasure to work with. I received invaluable feedback on a number of the chapters from reviewers. I would like to extend my appreciation to Kristin Andrews, Richard Moore, Tad Zawidzki, Rebecca Kukla, Victor Kumar, Joel Smith, Nicolas Baumard, Philip Kitcher, Bryce Huebner, Orestis Palermos, Nick Bardsley, Richard Holton, Deborah Tollefsen, Christian Skirke, and Frank Hindriks. Finally, I would like to thank Andy Clark, Andreas Roepstorff, and Erik Rietveld for many discussions about the social roots of the mind.
INTRODUCTION
Sociality and the human mind
Julian Kiverstein
Introduction
The idea of humans as by nature social and political animals can be traced back to Aristotle and was given a modern inflection by Hegel and Marx. These philosophers took sociality to be built into our very being. It is what defines us as human, distinguishing us from other animals. Yet how did we come to be this way? Chimpanzees live in groups, form alliances, and use simple tools such as sticks to fish for termites and stones to break open nuts. They have the rudiments of a social and political life. However, a gulf seems to separate their social life from ours. Humans created political and economic institutions. We developed elaborate systems of knowledge of our own history, and of the natural and physical world. We enrich our lives with artistic objects, each the product of a long prior history of making which these objects reflect back at us. We identify ourselves with groups and engage in the complex rituals and practices of those groups. We feel guilt, shame and pride when a member of our own group does something noteworthy. By contrast, we instinctively fear and often despise members of out-groups. Is there something in human psychology that might explain how people joined forces to become not just you and me but we? This handbook comprises 32 original chapters and brings together perspectives from diverse areas of research in philosophy, comparative and developmental psychology, evolutionary anthropology, cognitive neuroscience and behavioural economics. Why should a handbook in philosophy draw upon the resources of these diverse disciplines? The social life of humans seems on the face of it to make humans special. It seems to be among the sources of human uniqueness, but are humans really so special? Philosophers are not well-placed to answer this question on their own.
It is a question that calls for careful comparison of the cognitive profile of humans with that of other species, in particular our closest evolutionary relatives. While their social lives are in important respects different from our own, they are also in many ways very similar. These similarities might lead us to question the very idea of human uniqueness. Could the idea that we humans are somehow special be a product of anthropocentric philosophical preconceptions (see Barrett, ch.1)? At the very least, the uniqueness of humans isn’t something that should be taken for granted. It is a claim we ought to first carefully and critically scrutinise, bringing to bear all of the available resources from empirical and philosophical research.
The unrivalled complexity of human social life is a datum that stands in need of explanation. People invented science, art, religion and politics. No other animal has the sophisticated systems of symbolic communication and reasoning that humans have created. This presents us with something of a puzzle, explored by many of the authors in this volume in different ways. What is it about humans that enabled us to construct a social reality of unrivalled complexity? Is there something about the human mind that explains how we came to have social lives organised around social conventions, norms and institutions? This volume is organised around the assumption that a broadly naturalistic answer to this question can be given. Humans are cultural beings and our lives undoubtedly have taken on very different forms within different cultural and historical contexts (Geertz 1977). Naturalists argue, however, that an explanation can be given of how history, culture and the social can also be conceived of as material processes. In my view there need be no competition between this type of naturalistic enquiry and theorising in the social sciences and the humanities. Any natural science of human behaviour must study human action within the contexts in which it occurs, taking into account the subjective perspective and points of view from which human agents act. Social scientists provide descriptions and interpretations of what it is like to be a social actor belonging to a particular society and culture. The explanations of the natural sciences fail to deliver the “thick” understanding of the particularities of human action we get from the internal standpoint of the social sciences (see Boutel & Lewens, ch.3). Naturalists argue, however, that cultural and social life stand in need of explanation if we are to avoid a split between bodies as products of biological evolution, and the human mind as a product of culture. 
It is such a naturalistic project that the contributors to this volume are engaged in. Why speak of a social mind? I found three broad answers to this question in reading the chapters in this volume, and no doubt there are many more. The first takes its lead from evolutionary psychology.1 The human mind is a social mind because it is made up of specialised mechanisms for dealing with the problems of living in large social groups such as cheater detection, mate choice, disease avoidance, coalition formation and so on. Perhaps the most influential articulation of this theory can be found in Humphrey (1976). Humphrey sketched an evolutionary scenario in which the driving force behind the evolution of human intelligence was the elaborate social interactions among primates. Primates live in large groups consisting of many individuals who recognise each other, and are able to keep track of affiliations. They use this information to compete with each other for available resources, often entering into impressive acts of tactical deception (Byrne & Whiten 1989). Humphrey argued that in such an environment it will have been important to keep track of the mental states of one’s cohorts. The social environment of primates may thus have created selection pressures for capacities for meta-representation or thinking about thinking. The capacity for tracking the mental states of others so as to predict and manipulate their behaviour was an adaptation to a social environment containing agents that always have a stake in behaving deceptively.2 Sperber argues that this capacity for meta-representation makes human cognition especially powerful, enabling everything from cooperative communication to teaching and dialogical argument (Sperber 1996, 2000).
A second reason for thinking of the human mind as essentially social comes from the joint and collective forms of intentionality that distinguish the human mind from that of other animals.3 In recent years, a number of evolutionary hypotheses have been developed arguing that people are born default cooperators (Tomasello 2009, 2014; Sterelny 2003, 2012). Humans are distinguished from other social animals by the extent to which we enter into collaborative and cooperative social interactions.4 When chimpanzees work together in groups they tend to do so in what Raimo Tuomela (2007) has called the “I-mode” in pursuit of their own individual
goals. Our human foraging ancestors (hominins from around 400,000–200,000 years ago), by contrast, worked together collaboratively, and shared the fruits of their efforts with each other based on norms of fairness. This difference in social lifestyle may have created selection pressures for capacities for group-mindedness that didn't arise for other primates. Young infants enter the social world with capacities for shared intentionality and joint attentional engagement that are not present to the same degree in great apes. These capacities form the basis for people to coordinate with each other in ways that set the stage for agreeing upon conventions and norms.5

The third perspective takes the human mind to be the result of a co-evolutionary process in which biological evolution joins forces with cultural learning.6 Cultural learning allows a group to preserve technological innovations and skills whilst also building upon and extending those skills. People are able to transmit, preserve and elaborate on skills and knowledge across generations (Tomasello 1999). Children come to resemble their parents not only because of the genes they have inherited, but also because of the informational resources they have inherited from their parents (Sterelny 2003, 2012; Menary 2007). The minds of children grow in a learning environment structured by the skill base acquired from previous generations. This skill base includes facility with increasingly elaborate technical systems and public systems of representation. These tools and techniques have transformed the cognitive profile of humans, allowing us to develop and acquire types of thinking and problem solving that would be impossible had these systems not been invented.7

The three naturalistic approaches to understanding the sociality of the human mind are explored and developed in detail across the five parts of this handbook.
Part 1 asks what might have happened over the course of human history that led to humans acquiring the abilities for the construction of a social world. Each of the three conceptions of the social mind I have just outlined makes its first appearance in this section of the handbook. Part 2 zooms in on questions concerning the development of social cognition, without which there would arguably be no social learning and no cooperative communication. The essays in part 3 delve into the nature of social and moral norms. The essays in this section draw their naturalistic inspiration from findings in cognitive neuroscience concerning social learning, and show how these findings can help us to understand human moral behaviour. Part 4 investigates questions raised by human capacities for joint and collective forms of thought and action, and the differences in thinking among a plurality of agents that share a perspective on the world. Part 5 brings the handbook to a close, and begins by looking at reasons for thinking that certain forms of selfhood and mindedness are constitutively social.

The remainder of my introduction is organised around a brief synopsis of each chapter, identifying and mapping some of the themes and topics that are central in the philosophy of the social mind. It should also help the reader to gain an overall view of the landscape covered in this volume.
1. The evolution of the social mind

The chapters in this first section explore the hypothesis that the human mind is social because human psychology has evolved to include cognitive and emotional capacities dedicated to social life. One hypothesis common to these chapters is that human cognition is distinguished from that of other primates by capacities for dealing with the social world. Some have argued that these took the form of sophisticated capacities for reasoning about the mental states of others, their beliefs, preferences and intentions. Others stress the importance of capacities for social learning, which will have allowed individuals to learn all kinds of new things from others in their culture, including the use of artefacts and symbolic forms of communication. Evolutionary history may also have left its mark on human cognition in a number of social
domains, from the way people think about social groups to the capacity for agent-neutral thinking that allowed for the emergence of conventions, norms and institutions.

In the opening chapter of this section, comparative psychologist Louise Barrett reviews the evidence for what has come to be known as the "social intelligence" or "social brain" hypothesis and finds the evidence wanting. She argues that the inference from brain size to cognitive capacities for dealing with the demands of social life rests on anthropocentric assumptions. The social intelligence hypothesis assumes that the social life of non-human primates (and many other species) calls for the same types of folk psychological abilities as are found in humans. This is anthropocentric insofar as it leads to the behavioural data being interpreted based on a projection of cognitive capacities found in humans onto other species. The social intelligence hypothesis supports a theory of the social mind as a set of adaptations to social life characterised by competition for access to resources and deception.

An alternative strand of thinking within evolutionary psychology views the social mind of humans as the result of a co-evolutionary process in which biological evolution joins forces with cultural learning. There are at least two forms that this co-evolution of the human mind and the social environment has taken. The first is discussed in chapter 2 by Richard Moore and concerns the co-evolution of social and technical skills. The second is discussed in chapter 3 by Adrian Boutel and Tim Lewens and concerns cultural evolution as a medium for the inheritance of survival-enhancing behavioural traits. Moore begins chapter 2 by explaining why the emergence of cumulative culture couldn't be explained by individual learning alone: cumulative culture wouldn't exist were it not for cultural forms of learning.
He discusses two varieties of cultural learning: imitation and pedagogy. Both are high-fidelity modes of information transmission. In imitation an agent learns how to reproduce not just the outcome of an observed action but also the precise technique for bringing about this outcome. This type of learning ensures that the precise details of a craft are preserved across generations. Moore defines "pedagogy" in terms of a teacher providing verbal instruction with the intention of helping a student to acquire knowledge or skills that are in some way important to a community. The student is thereby initiated into the community and becomes responsible for maintaining and expanding the knowledge of that community.

In chapter 3 Adrian Boutel and Tim Lewens review theories of cultural evolution, assessing to what extent these theories support a theory of the human mind as a social phenomenon. Models of cultural evolution seek to explain cultural and social change in Darwinian terms. Large-scale processes of cultural change are explained by aggregates of many smaller-scale events taking place in the lives of individuals. Cultural evolution is a theory of the social mind because it takes biases and dispositions that determine from whom we learn, and how we learn, to be built into the very fabric of the human mind. Capacities for social learning are genetically inherited adaptations that conferred reproductive advantages on individuals. These capacities tended to get us information that was selectively advantageous, and this in turn helped to spread the genes responsible for the capacities for social learning. At the same time, Boutel and Lewens are keen to stress that cultural evolution offers a methodologically individualist account of the social. The persistence and spread of cultural variants through populations is explained by the social acquisition of information by individuals.
In chapter 4 Richard Menary and Alexander James Gillett offer an alternative perspective on how cumulative culture may have contributed to the evolution of the human mind. Early humans developed skills that enabled them to adapt to a wide variety of different environments, thereby creating a selection pressure for the extraordinary learning-dependent plasticity found in the human brain. Furthermore, humans go through an extraordinarily long period
of development compared with other species. Throughout this period, up to and including our teenage years, the human brain is being shaped and sculpted by experience in the cultural world. Menary and Gillett show how brain plasticity makes it possible for the cultural niche of humans to transform and enhance human cognitive capacities. They develop this argument through a case study of mathematical cognition, discussing in particular findings that the human brain contains a system that responds to approximate quantities (the so-called "ancient number system") and a system that allows for reasoning about precise quantities (the "discrete number system"). This is an example of how culturally developed capacities repurpose phylogenetically older regions of the brain involved in processing number for newer cultural purposes.

Collaborative and cooperative social interactions are ubiquitous in human social life, but these interactions tend to be selective.8 People exhibit a marked preference for cooperating with those who belong to their own social group and actively work against outsiders. In chapter 5 Edouard Machery discusses the evolution of what he terms "tribe psychology". While tribes have largely disappeared from the modern world, many social groups are organised in ways that closely resemble tribes. Tribal thinking may also have left its mark on human social psychology in the form of nationalism, ethnocentrism and racism, as Machery explores in his chapter. While it is controversial to what extent tribal psychology leads us to think of differences between groups as essential and immutable,9 Machery suggests this aspect of human psychology may be behind deep-rooted distinctions between "us" and "them" which seem built into human moral psychology. Machery explains how the distinction between us and them may have originated in the tribal psychology that emerged with the cooperative and collaborative lifestyle of early humans.
In the closing chapter of this section, psychologist John Barresi draws upon the same period in human history to offer a genealogy of egalitarian or agent-neutral reasons and values. He shows how the cooperative and collaborative lifestyle of early humans may have led to the development of the concept of personhood. Barresi shows how "agent-neutral" reasons and motivations are grounded in a conceptual ability people have to think of themselves and others as persons. He argues that it was the integration of first-person and third-person perspectives that occurs in joint action that may have set the stage for early humans to develop an agent-neutral perspective.
2. Developmental and comparative perspectives

The chapters in part 2 explore cognitive development in three domains essential to human social life: (1) understanding other minds; (2) morality; and (3) shared and collective intentionality. A central theme in this section is that humans make sense of the actions of themselves and others in a variety of ways, only some of which require reasoning about mental states, a capacity sometimes referred to as "mindreading".10 A second theme concerns the effects of cultural experience on cognitive development.11 These chapters assess to what extent people growing up in different cultures differ in their cognitive capacities, and explore the implications of cultural variation where it is found. A third theme relates to the role of social interaction in human cognitive development.12 Social interaction in humans is characterised by intersubjective forms of shared experience that become increasingly elaborate over the course of development. These intersubjective forms of experience result in infants developing minds that are social at their very core because they are relationally constituted, a theme that a number of chapters return to in part 5.13

In the opening chapter of part 2, philosopher and comparative psychologist Kristin Andrews assesses the evidence that it is mindreading that makes human cognition unique, setting humans
apart from other primates. She suggests that human folk psychology essentially concerns thinking about how people should act rather than about how they do or will act. Andrews shows how there is substantial evidence for thinking that primates likewise operate with normative expectations about each other. She assembles a rich body of evidence suggesting that chimpanzees, for example, construct models of how individuals belonging to their social groups ought to behave. Once we think of folk psychology as providing us with models of how people should act, she argues, there is every reason to think that primates likewise operate with a folk psychology that is no less rich than that found in humans.

Chapter 8, by developmental psychologist Hannes Rakoczy, discusses how different forms of intentionality develop in children and primates. He starts by providing a taxonomy of types of intentionality organised around distinctions between first- and second-order intentionality and between individual, shared and collective intentional states. Both infants and other animals develop first-order intentional states, but he shows how the evidence for second-order intentional states or meta-representational capacities in non-human animals is at best patchy. Shared intentional states are found whenever two or more individuals form a joint "we" attitude.14 Rakoczy reviews compelling evidence that both children and primates can share perceptual states in jointly attending to an object. However, whether primates can engage in cooperative action for joint or shared goals is more controversial.15 Rakoczy ends his chapter by discussing the development of collective intentionality, which John Searle (2010) has argued to be necessary for the construction of the entities that make up the social world, such as norms, conventions and institutions.
Even if people do make use of a plurality of strategies and heuristics to make sense of each other, they undoubtedly also have recourse to mindreading. The next two chapters are concerned with mindreading proper, and in particular with belief understanding. In chapter 9, developmental psychologists Rose M. Scott, Erin Roby and Megan A. Smith review over a decade's worth of research showing that infants from around 15 months of age, and from a variety of cultures, are able to track false beliefs. In a violation-of-expectation paradigm, for instance, infants have been shown to look much longer when an agent acts in ways that conflict with what the agent ought to do given certain beliefs. It seems that infants expect an agent to act in a particular way because of what the agent ought to believe. When the agent acts differently, this surprises the infant, and this is taken to be evidence that the infant represents the agent as having certain beliefs that conflict with the agent's behaviour.

In chapter 10, Jane Suilin Lavelle considers to what extent people differ across cultures in their recourse to mindreading. Does folk psychology only capture our peculiarly Western understanding of people's behaviour? Well-known findings from cultural psychology show that people from European and American cultures typically exhibit individualist patterns of thinking and reasoning, while people from East-Asian cultures tend to think and reason more holistically (Nisbett et al. 2001). Might these differences in patterns of thinking translate into differences in how people from these different cultural backgrounds go about explaining behaviour?

Developmental psychologists Jeremy I. M. Carpendale, Michael Frayn and Philip Kucharczyk outline two views of human development in chapter 11. What they call "individualist" theories claim that mental states are private to the minds of individuals in the sense of being known directly only by the persons to whom they belong.
Individualist theories claim that to know the minds of others, a person must engage in either theory-based inference or simulation routines, or some combination of the two. Carpendale and colleagues defend what they describe as an "interactionist theory" of development, according to which the mind of the infant takes form in development through social interaction.

Anika Fiebich, Shaun Gallagher and Daniel D. Hutto also stress the interactive character of social cognition in chapter 12. They
argue against the received view that mindreading is the default method that people employ for understanding each other, and argue instead for a pluralistic theory of social cognition in which second-person narratives, character traits, and expectations based on norms, habits and conventions all do a large part of the work of delivering mutual understanding in our day-to-day interactions.16

Chapter 13, by developmental psychologists Philippe Rochat and Erin Robbins, discusses the development of moral agency, with a particular focus on the understanding of ownership and fairness in children. How do children come to develop an ethical stance from which they not only evaluate the fairness or otherwise of some distribution of goods but also think about how goods ought to be shared? Rochat and Robbins describe development in terms of a progressive elaboration of shared intersubjective experiences. Around 21 months, children start to express self-conscious emotions like embarrassment, shame, envy and pride that concern the children's evaluation of themselves as members of a group. At the same time, children exhibit sensitivity to group norms and begin to develop what Rochat and Robbins label an "ethical stance", acting according to ethical principles of fairness. Rochat and Robbins focus in particular on inequity aversion: children across widely different cultures display a clear preference for proportional equity, preferring to distribute resources to each individual according to their need.
3. Mechanisms of the moral mind

Humans are distinguished from other animals by how much of our everyday behaviour is governed by normative rules and principles that tell people what is expected of them by others belonging to their groups and communities. These rules often operate without being written into law and without enforcement by social institutions. People from across cultures are universally motivated to act according to norms, seemingly treating them as ultimate ends (Sripada & Stich 2007). Violations of norms bring with them reactive attitudes such as anger, condemnation and blame, and serve as the basis for punishment. Sanctions for norm violation are found across cultures (Sober & Wilson 1998).17 The chapters in this section are concerned with the psychological and neural mechanisms that might explain why social norms are found across all human groups, governing a vast array of human activities. Why are people motivated to conform to norms? Does the motivation stem from a concern with reputation, which signals to others whether or not an agent is to be trusted? Could the motivation derive from a person's more fundamental values and ideals?18 A second theme concerns the role of evaluative learning in moral cognition. Affect is the brain's way of keeping track of the expected value of actions based on the history of past reward, and may also underlie moral judgement and behaviour.19

In chapter 14 the evolutionary anthropologists Jan M. Engelmann and Christian Zeller explore what motivates people to act in compliance with moral norms. Are people motivated by a self-interested desire to protect their reputation? A person's reputation is of the utmost importance in societies that depend on cooperation. Reputation tells us which people are trustworthy and which are not.
One way to settle this question experimentally would be to observe whether people behave more morally when their behaviour is publicly observed, or when the chances are high that others might get to hear about their behaviour through gossip. Engelmann and Zeller review studies showing that people are more motivated to comply with norms when their doing so can be observed by others.20 However, they argue that these findings don't necessarily establish that people's motivations are always strategic. The presence of other people results in us automatically and spontaneously adopting their perspective. We immediately think about how other people will evaluate and judge us. This can be an
important source of information that allows us to assess to what extent our actions and decisions match up with our own ideals and values.

Chapter 15, by cognitive neuroscientist Fiery Cushman, explores the basis for non-consequentialist moral judgement. People don't always make moral decisions with the goal of maximising utility, such as the welfare of others. Cushman asks whether failure to do so is a bug in systems that would otherwise function along consequentialist lines. To answer this question, Cushman appeals to neurocomputational models of decision-making and evaluative learning that also make an appearance in a number of other chapters in this part of the handbook. He suggests that non-consequentialist moral thinking might be explained by model-free systems involved in habit learning that assign value to actions based on a history of reward. Consequentialist moral thinking, on the other hand, may be accounted for by systems involved in the deliberate planning of actions. These systems search through decision trees set up on the basis of statistical associations between actions and their outcomes. Both systems have a role to play in the explanation of moral cognition.

Philosophers Bryce Huebner and Trip Glazer explore the social function of emotions in chapter 16. They focus on emotions that were selected for the role they play in helping animals to navigate the challenges of social life. For example, emotions such as guilt or shame can motivate us to comply with social norms when we might otherwise be disposed not to (Greene 2013; Frank 1988). Huebner and Glazer show how emotions can also motivate individuals not to conform to unfair or unjust social arrangements, and to imagine alternative social arrangements. At first glance this last function of motivating social resistance would seem to be hard to explain in terms of the biological functions of emotions.
They address this problem by providing a detailed account of the role of affective capacities in attuning us to the social world. Affective states are never unambiguous in their meaning: fear, for instance, can look and feel very different depending on the situation to which it is a response. Before the agent can settle on a course of action, their affective systems have to arrive at an adequate conceptualisation or construal of the situation. How exactly a person responds to a particular situation will thus depend on the conceptualisations they rely upon. Different conceptualisations of affect may open up new, hitherto unforeseen possibilities for social engagement.

Continuing with the theme of the role of emotions in moral behaviour, chapter 17, by philosopher Michael Brownstein, asks under what conditions intuition and emotional reactions can give an agent credible reasons for action, guiding the agent's actions in ways that fit with their rational judgement. There is substantial evidence that people navigate the social world based on implicit attitudes, many of which are learned through processes of model-free learning. By attuning us to the social world, implicit attitudes possess a defeasible moral credibility. What should we say, however, when these learning systems are tuned up by an environment suffused with prejudice? How can a person recognise when their morally credible spontaneous inclinations have become corrupted by living in an unjust social world? Brownstein recommends that we think of the cultivation of moral implicit attitudes as a type of skill learning. Still, he notes, this leaves as an urgent and open question what social skills are required for steering a moral path through an environment dominated by prejudice and injustice.

Chapter 18 by Matteo Colombo brings this section to a close by returning to the topic of social motivation.
His chapter explores to what extent the empirical literature in computational neuroscience confirms Humean theories that claim that beliefs have no motivational power taken in isolation from desires. Colombo begins by reviewing computational models of learning that support Humeanism by distinguishing mechanisms that compute value or utility from mechanisms that compute probabilities. Predictive processing theories of the brain, however, reject such a distinction, modelling all neural processing as working in the service of
the accurate and precise prediction of current sensory input.21 Colombo finishes his chapter by asking what the latter view might imply for theories of social motivation. One consequence is that neuroscientists would no longer need to invoke the concept of utility maximisation in explanations of decision-making. They could instead frame their explanations in terms of prior beliefs about the expected sensory consequences of actions.
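The contrast between model-free, habit-based valuation and model-based evaluation that recurs in this section can be made concrete with a short sketch. Everything below is illustrative only: the learning rate, the outcome model and the utility numbers are invented for the example and are not drawn from any of the chapters.

```python
# Model-free system: caches an action value and nudges it towards each
# observed reward, with no representation of action-outcome structure.
def model_free_update(q_value, reward, learning_rate=0.1):
    """Temporal-difference-style update of a cached action value."""
    return q_value + learning_rate * (reward - q_value)

# Model-based system: evaluates an action by consulting a learned model
# of which outcomes it produces, weighting utilities by probability.
def model_based_value(action, model, utilities):
    """Expected utility of an action under an outcome model."""
    return sum(prob * utilities[outcome]
               for outcome, prob in model[action].items())

# The habit value only drifts slowly towards the reward history...
q = 0.0
for r in [1.0, 1.0, 0.0, 1.0]:
    q = model_free_update(q, r)

# ...whereas the model-based value changes at once if the model or the
# utilities change (hypothetical "push" action with two outcomes).
model = {"push": {"harm": 0.9, "no_harm": 0.1}}
utilities = {"harm": -10.0, "no_harm": 5.0}
v = model_based_value("push", model, utilities)
print(round(q, 3), round(v, 2))  # prints: 0.254 -8.5
```

On this toy picture, the cached value and the model-based value can disagree, which is the shape of Cushman's suggestion that non-consequentialist judgement reflects the habit system while consequentialist judgement reflects deliberate planning.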
4. Naturalistic approaches to shared and collective intentionality

Although joint action is seen in many species of animal (e.g. bees swarming, chimpanzees hunting together), humans seem to be motivated to coordinate their actions for joint goals far more flexibly and in a wider range of contexts than is seen in other animals. Capacities for joint action and joint attentional engagement allow people to coordinate with each other in ways that set the stage for agreeing upon conventions and norms. What is it for two or more people to have intentional states (e.g. plans) that are jointly directed at the world? Does it require people to have special capacities for forming group mental states, such as joint commitments (Gilbert 2013) or we-intentions (Searle 1995), that are not explained by the psychological capacities of individuals? The chapters in this part of the handbook all share the aim of offering naturalistic or psychologically informed accounts of shared and collective intentionality.

The section begins with discussions of shared intentionality in the context of joint action. These chapters develop a minimalist theory of joint action that offers an account of joint action without invoking special collective forms of intentionality.22 An urgent question for such a minimalist approach is how to account for the difference between thinking in the first-person singular (or I-mode) and thinking and reasoning in the first-person plural (or we-mode).23 A further set of issues tackled in this section concerns how people reason in the we-mode in solving coordination problems (Karpus & Gold, ch.23; Chater et al., ch.24). Coordination problems arise in the context of cooperative actions in which knowing what the other person will do is crucial to the success of the project.24 My decision in this kind of strategic interaction is conditional on knowing what you will do, and your decision is likewise dependent on knowing what I will do.
Solving this problem is particularly crucial since norms, conventions and institutions are arguably solutions to coordination problems. If we are to understand how people came to organise society around conventions and institutions, we need to understand how they solve coordination problems.25

Chapter 19 by Angelica Kaufmann opens this section with a critical evaluation of the thesis, defended by Tomasello and colleagues, that shared intentionality is unique to humans (Tomasello 2014). Her chapter is focused on evaluating this thesis for the capacity to form and coordinate joint plans of action. Kaufmann focuses on the example of group hunting among Taï chimpanzees, as observed by cognitive ethologist Christophe Boesch. She argues that group hunting is a planned joint activity, a claim she defends by relying on Bratman's recent account of shared intention (Bratman 2014).

Chapter 20 by Stephen Butterfill continues where Kaufmann leaves off. Butterfill defends a "minimalist" account of joint action that avoids introducing special types of plural subjects and does not attribute a special class of attitudes to agents, such as we-intentions or joint commitments. Butterfill addresses a central question that has occupied philosophers' attention in the literature on shared intentionality: what distinguishes actions that are performed by agents in parallel from genuinely joint actions? A minimalist answer to this question looks for the simplest possible cases of joint action and only adds extra ingredients as and when they are strictly necessary. Take as an example of a minimal case two people using a system of a pulley
and rope to move a heavy block. Butterfill describes lifting the block as the "collective goal" of their actions. It is a collective goal in the sense that it is a goal that can only be brought about through the coordinated actions of the two agents. Butterfill shows how the concept of collective goals on its own yields an account of joint action that is too broad. He therefore proposes to supplement it with the requirement that each of the agents taking part in a joint action acts on an intention fulfilled by actions that have collective goals. Butterfill labels this the "flat intention" view of joint action because it avoids appeal to higher-order intentions (i.e. intentions whose content is that the agents intend to act together).

Chapter 21 by philosopher John Michael constructs a minimalist account of commitment in the context of joint action. On standard views of commitment, agents must form conditional obligations that are common knowledge between agents. Yet children and non-humans show an understanding of commitment without understanding much if anything about obligations that arise from promises (Gräfenhain et al. 2009). Michael develops a minimalist theory of commitment in order to address this and other problems. He argues that agents have default expectations that others will help them to bring about goals that they cannot bring about on their own. We see this in infants, for instance, who expect others to help them attain desires they are not able to fulfil by themselves. Such an expectation automatically triggers a sense of commitment in others to help the child bring about the goal she cannot achieve by herself. A minimal theory of commitment can therefore make sense of how animals and infants can form commitments without understanding anything of acts of promising and the obligations such acts imply.

In chapter 22 philosopher Mattia Gallotti turns to more general questions concerning the nature of collective intentionality.
Gallotti asks what it is about the attitudes of a plurality of individuals that makes it the case that they are shared, or jointly directed at the world. In common with previous chapters, Gallotti resists explaining this fact in terms of metaphysical entities such as plural subjects or group minds. The psychological states needed for understanding collective intentionality belong to individuals but occur only in the context of group interactions. It is for this reason that Gallotti describes collective intentionality in terms of the "first-person plural" or the we-mode. He draws on research from the cognitive sciences to argue that the we-mode can be explained at the sub-personal level in terms of processes of mutual or reciprocal alignment (Tollefsen & Dale 2012).

The next two chapters take up the question of how people succeed in solving coordination problems. Chapter 23, by philosophers Jurgis Karpus and Natalie Gold, discusses strategic decision-making in the Hi-Lo and prisoner's dilemma games. In these games each player's decision about what to do is conditional on beliefs about what other players in the game will do. Each player seeks the optimal strategy that will deliver the highest pay-off, but in order to select the optimal strategy players have to know what the other player will decide. Karpus and Gold show how team reasoning can help us to solve this problem. We think about what outcome would serve the interests of both of us best, and then select the strategy that allows each of us to play our part in bringing about this outcome.

Chapter 24, by behavioural scientists Nick Chater, Jennifer Misyak, Tigran Melkonyan and Hossam Zeitoun, also discusses how people use group or we-thinking to solve coordination problems. Chater and colleagues propose an account of group-thinking in terms of what they call virtual bargaining. We think to ourselves: what would we both agree upon if we were to discuss the problem at hand?
The difference between virtual bargaining and team reasoning is that virtual bargaining doesn’t apply only to cooperative games in which there is some common goal two or more agents aim to bring about. Chater and colleagues claim virtual bargaining can also be used to solve decision problems in which two or more individuals have competing interests, and each has an interest in double-crossing the other.26 Chater and colleagues go on to show
how virtual bargaining can also explain how coordination problems are solved in communication and in joint action. In each of these domains, answering the question of what I should do is conditional on my knowing what the people I am interacting with are likely to do.

Part 4 closes with chapter 25 by Ron Mallon, which investigates the mechanisms that underlie socially constructed categories such as sexuality, gender and race. Mallon begins by asking whether these categories can be thought of as social roles that people play. Social roles confer on groups of people certain rights, duties and expectations that are common knowledge among people who understand these social categories. However, race, gender and sexuality are unlike other social role categories in an important respect: people treat these categories and the differences they mark as if they were natural kinds that identified essential differences between people. Mallon calls them “covert constructions” because, unlike other socially constructed categories, people mistake race, gender and sexuality for natural kinds. Covert constructions are sustained and continue to exert a causal influence on people’s behaviour because of people’s mistaken belief in their naturalness.
5. Social forms of selfhood and mindedness

The chapters in this final part of the handbook explore arguments that there are aspects of the mind that the individual person has only by virtue of being a member of a social group. The chapters by Jesse J. Prinz and Shaun Nichols (ch.26) and by Mark Alfano and Joshua August Skorburg (ch.27) both take up the relationship between a person’s moral identity and the social environment. Prinz and Nichols discuss the concept of personal identity and argue that a person’s identity depends upon the links a person forges with others as part of a moral community. Alfano and Skorburg continue this line of thought through a discussion of epistemic and moral virtue and vice. They show how virtue and vice can depend, with different degrees of strength, on the social environment.

Self-knowledge has often been associated with a peculiar epistemic authority or privilege that doesn’t hold in the case of knowledge of the mental states of others. People know their own minds immediately and directly, and the minds of others only on the basis of observations of their behaviour. The next three chapters complicate this received view of the relation between self and other. Tadeusz W. Zawidzki (ch.28) focuses on self-interpretation and the role it plays in licensing expectations from other people. He makes an argument for understanding self-knowledge in a social context. The chapters by Pierre Jacob and Frédérique de Vignemont (ch.29) and Dan Zahavi and Alessandro Salice (ch.30) both discuss empathy and the access it gives us to another person’s emotional life. Jacob and de Vignemont focus on the sense in which empathy allows us to live through another person’s emotional experience as if it were our own. Zahavi and Salice, by contrast, stress that empathy delivers direct perceptual awareness of an experience that is not our own.
The final two chapters, by Glenda Satne (ch.31) and Joseph Rouse (ch.32), take up arguments that the human mind is constitutively social. Satne focuses on arguments for this claim based on the social nature of the intentional relation, while Rouse focuses his attention on arguments for the normativity of meaning.

Chapter 26, by Jesse J. Prinz and Shaun Nichols, opens the section. Prinz and Nichols present previously unpublished experimental findings about the kinds of continuity people take to be important for the identity of a person over time. They describe a series of experiments they have carried out in which they ask participants to consider whether a person was still the same person after undergoing a moral change due to a brain injury. They compared people’s responses with cases in which a person experiences loss of memory, agency and narrative. They found that people tended to rate continuity in moral values as more important than continuity
of these other cognitive capacities. Thus retaining the same moral values seems to be a central component in what people ordinarily believe makes someone the same person as they undergo changes in their lives. A person’s moral conduct in relation to others is rated as an important determinant of a person’s identity through time.

In Chapter 27 Mark Alfano and Joshua Skorburg argue for the dependence of moral character on the social environment. The chapter takes as its starting point much-discussed findings from social psychology showing that people’s moral behaviour can be influenced by all manner of environmental factors that ought really to have no influence on virtuous moral agents. Social expectations seem to be a particularly powerful predictor of moral behaviour. Alfano and Skorburg argue for an interpretation of these situational influences as pointing to the dependence of character on the social environment: we act as the people around us expect us to. Character is not a monadic property of individual agents, but strongly depends on social relations.

The next three chapters take up questions concerning self-knowledge and the nature of knowledge of other minds. In Chapter 28 Tadeusz W. Zawidzki argues that this epistemic privilege derives from the ability of humans to shape each other’s thoughts and actions in ways that make us more predictable to each other. Zawidzki proposes that self-attributions of propositional attitudes can be thought of as commitment devices that provide an agent with an incentive to act in ways that conform to what he has said about himself.27 The role of the self-attribution is thus to make the person committed to being the kind of person who conforms to what others expect from them given this self-attribution. Zawidzki argues that this is a role self-attributions can play only for the self, since we can speak for ourselves with an authority with which we cannot speak for others.
Chapter 29, by Pierre Jacob and Frédérique de Vignemont, is concerned with knowledge of other minds, in particular the knowledge of emotions and sensations that we get through empathy. Jacob and de Vignemont take empathy to be a vicarious experience in which a person undergoes an affective experience similar to an affective experience belonging to the target of their empathic episode. For example, if x is a child scared of bullying at school and y empathises with x, then it is necessary that y also be vicariously afraid of bullies. Jacob and de Vignemont take empathy to depend upon mental simulation, which they analyse as a non-propositional form of imagination. When I empathise with a child’s fear, I activate my own fear system in offline simulation mode. Jacob and de Vignemont argue that the vicarious experience I come to enjoy is the result of my sharing with the child an evaluative representation with the same content as the child’s (i.e. the danger of some sort of threatening stimulus).

Chapter 30, by Dan Zahavi and Alessandro Salice, shows how the phenomenological tradition in philosophy has rich insights to offer concerning the foundations of sociality. Zahavi and Salice deny that empathy requires shared experience.28 They argue for the reverse claim: any shared experience requires empathy. If you and I share an experience of joy, for instance, we undergo an experience in which the joy is experienced as ours, not only as yours and mine. We could not experience the joy as our joy were it not the case that each of us was already aware of the other’s joy. At the same time, empathy requires self-other differentiation, and this raises a puzzle that Zahavi and Salice go on to trace through a number of thinkers: how, given such differentiation between subjects, are shared experiences nevertheless possible? Shared experience requires an interlocking or interdependence of the experiences of different subjects.
Zahavi and Salice discuss a number of proposals for how to characterise this interdependence.

Chapter 31, by Glenda Satne, critically evaluates two theories that take intentionality to, in some way, depend on social practices.29 Communitarian accounts of intentionality analyse meaning by reference to consensus among members of a community about how a term or
concept ought to be used. Individuals are trained through mechanisms of social conformism to use a concept or term as others do. Interpretivist theories of meaning take intentionality to originate in a conceptual framework people employ to make sense of each other as rational agents. Satne shows how both these approaches face a number of difficulties, not least in accounting for the development of intentional capacities: in order for these accounts of meaning to get off the ground, it seems they have to assume that agents are already in possession of intentional capacities. She ends her chapter by arguing that this debt can be repaid by enactivist theories that argue for a theory of basic cognition as content-free (Hutto & Satne 2015).

In the final chapter (ch.32), Joseph Rouse takes up the question of whether mind and meaning are normative phenomena. Rouse defines normative phenomena as those to which responses can be understood as correct or incorrect, appropriate or inappropriate, right or wrong, just or unjust, and so on. He then goes on to explore arguments that discursive or conceptual capacities might be thought of as normative in this sense. A normative theory of these capacities claims that to think, speak or act is to be answerable to what one ought to think, say, or do. Normative theories face a problem, however, when it comes to situating normative phenomena in nature as understood scientifically. The response to this problem developed by Rouse is to ground normativity in human social life. In agreement with Satne, Rouse argues that both communitarian and interpretivist versions of this strategy face insuperable difficulties. Rouse’s own preferred view locates the normativity of social practices in the temporality of human agency, whereby members of a practice are oriented to a common future on the basis of their present and past performances.
Conclusion

The essays in this handbook are unified by their ambition to provide a naturalistic account of the sociality of the human mind. These naturalistic projects take very different forms, a flavour of which I have attempted to impart in providing an overview of the volume as a whole. Taking seriously the sociality of the human mind has two important consequences. First, it directs our attention to phenomena that normally fall within the remit of the social sciences and might not otherwise receive the attention of philosophers and cognitive scientists concerned with the nature of the human mind. Second, it corrects for an individualist orientation both in philosophy and in the cognitive sciences. Most philosophers and cognitive scientists tend to think of the individual person as the locus of mental states. The essays assembled in this volume aim to show that membership of social groups, and the coordination that takes place in social interactions, are just as important for understanding the minds of humans as the processes taking place inside individuals’ heads. Indeed, understanding the fundamental sociality of mind may even help people to imagine new social arrangements that correct for the injustices and imbalances of power inherent in the societies we live in today.30
Notes

1 Machery (ch.5) applies this approach to ethnic cognition, and the chapters by Scott et al. (ch.9) and Lavelle (ch.10) to theory of mind. See Barrett (ch.1) and Carpendale et al. (ch.11) for critical discussions of evolutionary psychology.
2 Byrne and Whiten (1989) further developed this hypothesis, showing how the demands of primate social life may have led to the development of complex capacities for reasoning about the psychological states of conspecifics. Dunbar (1995) argues that selection pressures arising from group living led to increases in brain size. In particular, it was the challenge of keeping track of and manipulating information about large numbers of pairs of adult mates that proved cognitively demanding.
3 Intentionality is used here to refer to the feature of mental states in virtue of which they are directed towards objects and states of affairs in the world (see Rakoczy (ch.8) and Satne (ch.31)). Collective and shared intentionality occurs when two or more individuals have mental states that are jointly directed at objects or states of affairs. Collective and shared forms of intentionality are a central topic in this volume and are discussed throughout part 4.
4 This claim that humans are unique in their capacity for shared intentionality is discussed by Barresi (ch.6) and Rakoczy (ch.8). See the chapters by Andrews (ch.7) and Kaufmann (ch.19) for critique.
5 Zawidzki (ch.28) argues that cooperative communication may well have also originated through selection pressures for better and more complex forms of signalling of cooperative potential.
6 See the chapters by Moore (ch.3), Boutel and Lewens (ch.2), Menary and Gillett (ch.4), Barresi (ch.6), and Zawidzki (ch.28).
7 See Menary and Gillett (ch.4) for further discussion.
8 This claim, that humans are unique in their capacity for shared intentionality, is controversial. See Andrews (ch.7) and Kaufmann (ch.19) for evidence against this claim.
9 See Mallon (ch.25) for further discussion of essentialist thinking and its role in thinking about gender, sexuality and race.
10 See the chapters by Andrews (ch.7), Carpendale et al. (ch.11) and Fiebich et al. (ch.12) for further discussion.
11 See the chapters by Scott et al. (ch.9), Lavelle (ch.10), Fiebich et al. (ch.12) and Rochat and Robbins (ch.13).
12 See the chapters by Carpendale et al. (ch.11), Fiebich et al. (ch.12) and Rochat and Robbins (ch.13).
13 See in particular the chapter by Zahavi and Salice (ch.30).
14 Shared intentionality is among the topics explored in part 4 of this handbook. See in particular the essays by Kaufmann (ch.19), Butterfill (ch.20) and Gallotti (ch.22).
15 This is a question that is taken up in detail by philosopher Angelica Kaufmann in Chapter 19.
16 There is an interesting agreement with the arguments of Andrews (ch.7) here.
17 The chapter by Rochat and Robbins (ch.13), for instance, shows how children across cultures are motivated by fairness norms.
18 See the chapters by Englemann and Zeller (ch.14), Huebner and Glazer (ch.16) and Colombo (ch.18) for discussions of norm compliance.
19 The chapters by Cushman (ch.15), Huebner and Glazer (ch.16) and Brownstein (ch.17) show how processes of non-social reward learning play a central role in moral cognition. Colombo (ch.18) also discusses these models, contrasting them with the account of motivation that is supported by predictive processing models of decision-making.
20 They focus in particular on a study in which people have to decide whether to pay into an honesty box for drinks they are consuming in an office canteen (Bateson et al. 2006). When a picture of eyes was placed next to the instructions for the honesty box, Bateson and her colleagues found that nearly three times more people paid than when a picture of flowers was placed by the instructions. The eyes seem to function as a cue that reminds the agent of the reputational costs of freeriding.
21 For an excellent overview of the predictive processing theoretical framework in computational neuroscience, see Hohwy (2013).
22 See the chapters by Kaufmann (ch.19), Butterfill (ch.20) and Michael (ch.21).
23 See the chapters by Gallotti (ch.22), Karpus and Gold (ch.23) and Chater et al. (ch.24).
24 For instance, in the Hi-Lo game players must coordinate on the same response in order to obtain pay-offs. If they pick options that do not match, neither of them receives anything. In order to receive the highest pay-off, they must both pick Hi, but how can I know for sure that if I pick “Hi” you will do the same? (Bacharach 2006)
25 The chapters by Karpus and Gold (ch.23), Chater et al. (ch.24) and Mallon (ch.25) explore to what extent theories of reasoning in the we-mode and collective intentionality can help us to answer this question and these more general questions about the construction of social reality.
26 Karpus and Gold state in their chapter that double-crossing decreases the likelihood that people will engage in team reasoning. Furthermore, Chater and colleagues show that if people with competing interests were to team-reason they would agree on a sub-optimal strategy, whereas if they were to engage in virtual bargaining this would result in them selecting a strategy with the best pay-off for both of them.
27 Zawidzki is following Brandom 1994. For critical discussion of Brandom, see Satne (ch.31) and Rouse (ch.32).
28 They therefore disagree strongly with the account of empathy as vicarious experience defended by Jacob and de Vignemont in the previous chapter. Jacob and de Vignemont, for their part, offer a number of arguments against the phenomenological theories in the opening section of their chapter (ch.29).
29 I use “intentionality” as a technical term to refer to the meaningful directedness of the mind at the world.
30 See the chapters by Huebner and Glazer (ch.16), Brownstein (ch.17) and Mallon (ch.25) for more discussion of the political implications of thinking through the sociality of the human mind.
References

Bacharach, M. (2006). Beyond Individual Choice: Teams and Frames in Game Theory. (N. Gold and R. Sugden, Eds.) Princeton, NJ: Princeton University Press.
Bateson, M., Nettle, D. & Roberts, G. (2006). Cues of being watched enhance cooperation in a real-world setting. Biology Letters, 2(3), 412–414. doi:10.1098/rsbl.2006.0509
Brandom, R. (1994). Making It Explicit: Reasoning, Representing and Discursive Commitment. Cambridge, MA: Harvard University Press.
Bratman, M. E. (2014). Shared Agency: A Planning Theory of Acting Together. Oxford: Oxford University Press.
Byrne, R. & Whiten, A. (1989). Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford: Blackwell.
Dunbar, R.I.M. (1995). Neocortex size and group size in primates: A test of the hypothesis. Journal of Human Evolution, 28(3), 287–296.
Frank, R. (1988). Passion Within Reason: The Strategic Role of the Emotions. New York: Norton.
Geertz, C. (1977). Thick description: Toward an interpretive theory of culture. In The Interpretation of Cultures (pp. 3–30). New York: Basic Books.
Gilbert, M. P. (2013). Joint Commitment: How We Make the Social World. Oxford: Oxford University Press.
Gräfenhain, M., Behne, T., Carpenter, M. & Tomasello, M. (2009). Young children’s understanding of joint commitments. Developmental Psychology, 45(5), 1430–1443.
Greene, J. (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: Penguin Press.
Hohwy, J. (2013). The Predictive Mind. Oxford: Oxford University Press.
Humphrey, N. K. (1976). The social function of intellect. In P.P.G. Bateson and R. A. Hinde (Eds.), Growing Points in Ethology (pp. 303–317). Cambridge: Cambridge University Press.
Hutto, D. & Satne, G. (2015). The natural origins of content. Philosophia: Philosophical Quarterly of Israel, 43(3).
Menary, R. (2007). Cognitive Integration: Mind and Cognition Unbounded. Basingstoke: Palgrave Macmillan.
Nisbett, R., Peng, K., Choi, I. & Norenzayan, A. (2001). Culture and systems of thought: Holistic versus analytic cognition. Psychological Review, 108, 291–310.
Searle, J. R. (1995). The Construction of Social Reality. New York: The Free Press.
———. (2010). Making the Social World: The Structure of Human Civilization. Oxford: Oxford University Press.
Sebanz, N., Bekkering, H. & Knoblich, G. (2006). Joint action: Bodies and minds moving together. Trends in Cognitive Sciences, 10(2), 70–76.
Sober, E. & Wilson, D. S. (1998). Unto Others. Cambridge, MA: Harvard University Press.
Sperber, D. (1996). Explaining Culture: A Naturalistic Approach. Cambridge: Cambridge University Press.
———. (2000). Meta-representations in an evolutionary perspective. In D. Sperber (Ed.), Meta-Representations: A Multidisciplinary Perspective (pp. 117–146). Oxford: Oxford University Press.
Sripada, C. & Stich, S. (2007). A framework for the psychology of norms. In P. Carruthers, S. Laurence and S. Stich (Eds.), The Innate Mind: Culture and Cognition (pp. 280–301). Oxford: Oxford University Press.
Sterelny, K. (2003). Thought in a Hostile World: The Evolution of Human Cognition. Oxford: Blackwell Publishing.
———. (2012). The Evolved Apprentice: How Evolution Made Humans Unique. Cambridge, MA: MIT Press.
Tollefsen, D. & Dale, R. (2012). Naturalising joint action: A process-based view. Philosophical Psychology, 25(3), 385–407.
Tomasello, M. (1999). The Cultural Origins of Human Cognition. London: Harvard University Press.
———. (2009). Why We Cooperate. London: MIT Press.
———. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press.
Tuomela, R. (2007). The Philosophy of Sociality: The Shared Point of View. Oxford, UK: Oxford University Press.
PART I
The evolution of the social mind
1
THE (R)EVOLUTION OF PRIMATE COGNITION
Does the social intelligence hypothesis lead us around in anthropocentric circles?

Louise Barrett
Introduction: anthropomorphic reflexivity

The British comedian Eddie Izzard has a joke about squirrels: isn’t it weird, he says, the way they pause suddenly while eating, like they’ve just remembered something terrible? Izzard mimics a squirrel eating a nut before pausing dramatically and asking himself: “Did I leave the gas on?” There is a pause for laughter, before Izzard resumes his imaginary nut-eating, saying dismissively: “Nah . . . of course I didn’t!! I’m a f**king Squirrel!”

As well as getting an even bigger laugh, the notion of a squirrel anthropomorphically commenting on the folly of attributing an anthropomorphic thought to a squirrel captures the reflexivity of human thoughts and action, adding a further layer to the joke. For anyone interested in recent developments in comparative cognition, yet another layer is added by the recognition that this same reflexive quality structures much of the debate over the use of anthropomorphism as a scientific strategy, with worries raised over whether ascribing particular traits to other species, or refusing to do so, is a reflexive response to the way in which we wish to see ourselves.

What makes this all even more mind-bending is that human reflexivity is itself argued to have arisen evolutionarily (and ontogenetically) as a result of the intense sociality that characterizes our species: Dewey, Mead and Vygotsky, for example, all suggested that human selves, and our ability to see our selves as selves, arose from the internalization of those around us to produce a “generalized” or “social other” (Dewey 1958; Mead 1934; Vygotsky 1997). In this view, sociality is fundamental to our individuality, representing both its source and cause. Similar ideas about the evolutionary pressures generated by social life have also been put forward to explain the evolution of cognitive abilities across the primate order as a whole (Humphrey 1976; Dunbar 1998).
In particular, the social intelligence hypothesis or, as it is now more commonly known, the social brain hypothesis has enjoyed great success as an explanation for why the anthropoid monkeys and apes, including humans, should possess the largest brains for their body size across the animal kingdom. In what follows, I briefly trace the history of the social intelligence hypothesis, highlighting its strengths and weaknesses as a general theory of cognitive evolution. I then go on to consider
how the social intelligence hypothesis links to current debates on anthropomorphism in ways that preserve the essential anthropocentrism that lies at the heart of both. As a parting shot, I suggest that this outcome is inevitable if we accept the social intelligence hypothesis as our best explanation for primate cognitive evolution, which perhaps explains why we continue to find ourselves in the same mind-bending zone as Izzard’s forgetful squirrel.
A brief history of the social intelligence hypothesis

Following Alison Jolly’s early suggestion that primate social life “preceded the growth of primate intelligence, made it possible, and determined its nature” (Jolly 1966, p. 506), Humphrey (1976) subsequently (and independently) made a similar claim based on his own observations of captive monkeys and wild apes. As he pointed out, the mountain gorilla, which lives, effectively, in a giant bowl of salad, faces a set of ecological problems that are simple and finite, and no more demanding of intelligence than those that apparently face other, less brainy, creatures. Add in the demands of social life, however, and you have a game-changer: obligate sociality places a premium on the ability to predict how other animals are likely to act, a capacity that makes it possible to anticipate, ameliorate or avoid altogether the competition that arises when animals are forced to live together and share resources. In other words, animate beings, with agendas and goals of their own, generated the selection pressure to acquire a specific kind of psychological insight into others, and a more “creative” form of intelligence; one that is fundamentally different from the practical knowledge needed to forage effectively and efficiently.

More specifically, Humphrey (1976) made the case for primates as “social gamesmen” (p. 309), rather like human chess-players: they needed to preserve the overall structure of the group while “exploiting and out-manoeuvring others” (p. 309). The idea of primates as “calculating beings” capable of a “special form of forward planning” (p. 309) gained even greater prominence with the publication of Machiavellian Intelligence (Byrne & Whiten 1989).
As the editors were at pains to note, the word “Machiavellian” referred only to the need for a specifically psychological understanding of others, rather than to manipulativeness or deception, although these clearly formed part of the argument, and were especially prominent in Humphrey’s work.

Empirical tests of the theory then followed, presenting evidence to suggest that social factors had indeed been more important than ecological factors in promoting brain size evolution: Dunbar (1992, 1995) found a positive correlation between neocortex ratio and group size across the anthropoid primates, while simultaneously demonstrating that ecological factors did not show any significant relationship. This was interpreted as showing that primates used specifically social strategies to help cope with the ecological problems they faced (e.g., competition for food resources); problems that, in turn, had been generated by selection pressures favouring group living (most specifically, avoiding predation: van Schaik 1983).1

Further studies reinforced the association between social life and brain size, by correlating neocortex measures with complex social behaviours like promiscuous mating (Pawlowski et al. 1998), grooming clique size (Kudo & Dunbar 2001) and rates of deception (Byrne & Corp 2004). This led to the further claim that, much as Humphrey (1976) had argued previously, it was the ability to represent and manipulate specific kinds of information about social relationships (as opposed to, say, improved memory capacities) that both selected for increased brain size, and then placed limits on the size of group that a given species was able to sustain. More specifically, the form of social intelligence indexed by enlarged brain size “principally focuses on the ability to use knowledge about other individuals’ behavior – and perhaps mind-states – to predict and manipulate those individuals’ behavior” (Dunbar 2003, p. 167).
What was noticeable in this shift to empirical
investigation, then, were the assumptions that large neocortex size supported a highly specific kind of cognitive ability, and that group size could be used as an appropriate indicator of social complexity, without all that much in the way of independent empirical support for either of these positions. In recent years, this has led some researchers to question whether evolutionary size increases in particular parts of the brain can, in fact, be traced causally to sociality in the manner implied (Healy & Rowe 2007; Healy & Rowe 2013; Rowe & Healy 2014).

The rather nebulous concept of social complexity used in these analyses may also explain why, when empirical work was extended to other species,2 the nature of the relationship proved to be distinctly different from that found in the primates, with pair-bonding (i.e., the formation of an enduring relationship between adult mates) emerging as the key correlate of enlarged brain size (Dunbar & Shultz 2007; Shultz & Dunbar 2007). This naturally required some reconfiguring of the original hypotheses, with a more precise delineation of the quantitative versus qualitative demands of sociality. Among the primates, the use of both a quantitative measure of group size and relative neocortex size rested on the assumption that tracking a large number of dyadic relationships generated a significant cognitive burden. As described above, this quantitative demand was accompanied by the (implicit) assumption that animals also possessed a conceptual understanding and recognition of relationship quality, and attempted to manipulate others using this information accordingly (Dunbar 1998).
The discovery that pair-bonded species have the largest relative brain sizes among ungulates and birds has therefore been taken to indicate that it is primarily the quality of relationships, and not their quantity, that is evolutionarily important, thus reversing the original interpretation (Dunbar & Shultz 2007; Shultz & Dunbar 2007; see also Dunbar & Shultz 2010). The aspect of social life now considered to have selected for the enhanced brain size of the anthropoid primates compared to other species is the manner in which the pair bond has been generalized to include all group members. Dunbar (2009, p. 564) characterizes this as “an important phase transition in the form of the social brain effect” that took place at an early stage of primate evolution.

The question that arises from all this is: why are pair-bonded relationships so cognitively demanding? Dunbar (2009) suggests this question has no ready answer because we also lack a ready answer to the prior question of how animals go about the day-to-day business of maintaining a pair bond; so far, we have only considered their value in an evolutionary-functional sense and not from a proximate-psychological/physiological perspective (i.e., we consider only the way in which pair bonds contribute to fitness, and not how they are maintained within the context of an individual lifetime). This may well be true, but it is equally true that this applies to the original social brain hypothesis. The cognitive demands of social life were based on a series of assumptions, put forward by Humphrey and elaborated by Dunbar and colleagues, that social life was complex because it required the manipulation of social information about various third-party relationships, and a form of prospective knowledge about one’s own relationships and those of others.
This, however, stemmed from the existing knowledge that the anthropoid primates had larger brain sizes than expected for their body sizes, and that they were all intensely social. The reasoning therefore ran from the observation of large brains to the kinds of complex processing large brains would afford in a social context (which, one could argue, were based largely on a folk psychological projection of our own abilities in this domain). No convincing evidence was offered to suggest that the social life of non-human primates actively required the use of flexible, highly cognitive, prospective strategies in real-time. Evidence in support of the evolutionary argument, such as the correlation between group size and brain size across the primate order (Dunbar 1995), was then taken as implicit support for the postulated proximate
Louise Barrett
behavioural and cognitive mechanisms by which individual animals increased their survival and reproductive success. Part of the reason for this seems to lie in the anthropocentric focus of the original theory, where explaining the evolutionary origin of our own extraordinarily large brains was integral to the whole project. Questions about other primates’ social cognition were posed in ways that privileged the evolutionary origins of cognitive abilities like language, analogical reasoning and theory of mind skills, because these were seen as essential and fundamental to understanding how people are able to predict what other people will do, despite evidence that suggests this can be achieved without the need to model the mental states of others (e.g., Hutto 2004; Gallagher 2001; Andrews 2007a). Research efforts were (and still are) geared explicitly to detecting these abilities or, more commonly, their precursors, in monkeys and apes, either to reinforce our own uniqueness or to identify how our own skills in these domains have been derived from evolutionarily simpler mechanisms (e.g. Dunbar 2003; Bergman et al. 2003; Call & Tomasello 2008; Zuberbühler 2000; Cheney & Seyfarth 2005; Seyfarth & Cheney 2013). This criticism doesn’t apply to all theories of human cognitive evolution, where the focus is more squarely placed on identifying the unique path taken following the evolution of our own species, rather than tracing our capacities through our common ancestors as such. Sterelny (2003, 2007, 2012), for example, argues that we are “creatures of feedback”, where our uniqueness lies in the nature of the feedback mechanisms that connect, among other things, the cultural environments we construct and inhabit, human social learning, individual expertise and human life history processes. As he notes: “[a]s hominids made their own worlds, they indirectly made themselves” (Sterelny 2003, p. 17).
Heyes (2012) makes a somewhat similar argument when she suggests that both the “grist and mills” of our lives – the knowledge and know-how needed to deal with the world, and the social learning processes that allow us to acquire this know-how, respectively – are culturally inherited, in contrast to views which suggest these social learning “mills” are biologically inherited: according to Heyes (2012), our social learning mechanisms are cultural adaptations, not evolved functional specializations. Equally, Tomasello’s “Vygotskian intelligence hypothesis” (Moll & Tomasello 2007; Tomasello & Moll 2010) posits unique motivations and cognitive skills that allow humans to understand others as cooperative agents with whom one can share collaborative actions (“shared intentionality”), an ability apparently not found in other apes. In his work on ape intelligence, however, Tomasello (Tomasello & Call 2006; Call & Tomasello 2008; Schmelz et al. 2011) does seem to offer a largely anthropocentric “top down” view, where the capacities of apes are compared and contrasted with those of humans, with respect to abilities like the attribution of certain kinds of mental states. This same “top down” view, where our own capacities colour theories of what we should look for in other species, can be seen in the hypotheses put forward to explain why pair-bonding should generate high cognitive demand. As Dunbar (2009) sees it, there are two options: (i) intense pressure for finely-tuned mate choice competences and (ii) the need to coordinate and synchronize behaviour. These are not mutually exclusive, but a “critical tests” analysis pitting the predictions of the hypotheses against one another is argued to come down in favour of behavioural coordination. One reason for this, Dunbar (2009) suggests, is the need for individuals to anticipate their mate’s needs so as to ensure that both members of the pair are able to meet their nutritional and other requirements.
He then goes on to say:

Being attentive to the mate’s needs so as to ensure that he/she can achieve his/her daily nutrient intake has many of the hallmarks that would be recognized as theory of mind in humans. In effect, pairbonded species have to be able to engage in perspective-taking, a phenomenon that is widely accepted as being a prerequisite
The (r)evolution of primate cognition
for mentalizing (or theory of mind: Hare et al. 2001, 2006). Hence, pair bonded monogamy can perhaps be seen as laying the foundations for the kinds of advanced social cognition found (albeit in limited form) in primates and (perhaps uniquely in full-blown form) in humans. (Dunbar 2009, p. 568)

This has, in turn, led to the search for such perspective-taking skills on the part of pair-bonded species, with at least one study on Eurasian jays reporting positive evidence (Ostojić et al. 2013) (although this conclusion is contingent on the precise way in which the data were analyzed3). There is also evidence to suggest that behavioural coordination is important to breeding success: zebra finches (which are not, it should be noted, particularly large brained) that were allowed to choose their own mates displayed higher fitness and better behavioural coordination than “forced” pairs placed together by the experimenters (Ihle et al. 2015). In contrast, a study on New Caledonian crows demonstrated that tasks involving behavioural coordination can be learned readily and rapidly without any form of cooperative cognition (i.e., understanding the importance of timing and the role of the partner) (Jelbert et al. 2015). There is also an experimental study of both wild and captive vervet monkeys that revealed they can learn to coordinate their behaviour in complex ways on the basis of individual reinforcement learning alone, with no evidence for either perspective-taking or social learning (Fruteau et al. 2013).4 This equivocal pattern of results is not too surprising given that the reasoning presented for perspective-taking as a necessary condition for behavioural coordination follows the same logic as the original social intelligence hypothesis: assume the end point of the process to be human social intelligence and then project this backward onto other species to arrive at the kinds of traits that can make sense of the behavioural data.
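The point that coordination can emerge from individual reinforcement learning alone can be illustrated with a toy simulation (a sketch only, not Fruteau et al.’s actual model; the game, parameter values and function name are invented for illustration). Two agents each update their own action values from their own payoffs, with no representation of the partner at all, yet reliably come to coordinate:

```python
import random

def run_coordination(trials=1000, epsilon=0.1, alpha=0.2, seed=0):
    """Two independent learners play a repeated coordination game.

    Each agent tracks only its own action values and updates them from
    its own reward; neither models the other's behaviour or mental
    states. Returns the proportion of coordinated choices over the
    final 200 trials.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # one action-value table per agent
    matches = []
    for _ in range(trials):
        actions = []
        for agent in (0, 1):
            if rng.random() < epsilon or q[agent][0] == q[agent][1]:
                actions.append(rng.randrange(2))  # explore / break ties
            else:
                actions.append(0 if q[agent][0] > q[agent][1] else 1)
        reward = 1.0 if actions[0] == actions[1] else 0.0
        for agent in (0, 1):
            a = actions[agent]
            q[agent][a] += alpha * (reward - q[agent][a])  # own reward only
        matches.append(reward)
    return sum(matches[-200:]) / 200

# Coordination emerges with no perspective-taking in the mechanism.
rate = run_coordination()
```

After an initial random phase, the first accidental match is reinforced in both agents’ value tables and coordination becomes self-sustaining; nothing in the mechanism requires understanding the role of the partner.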
Indeed, there seems to be an even greater explanatory need for other species to possess such cognitive traits (or their precursors) so that the social intelligence hypothesis continues to cohere as a general theory of brain size evolution. The anthropocentric origins of the social intelligence/social brain hypothesis thus continue to push the study of social cognition along a trajectory that ultimately may hamper rather than enhance our understanding of other species (see also Barrett & Würsig 2014).
Not everything is about you . . .

Looking explicitly for precursors of human socio-cognitive attributes, for example, judges other species’ capacities against a human standard, and by necessity implies they will only meet some fraction of this standard, when it would be more productive to consider these as adaptations in their own right (Tyler 2003; Barrett 2011; Barrett 2015a,b). More insidiously, the success of the social brain hypothesis has led other social species to be judged against a specifically primate standard – the crow family has been characterized as “feathered apes”, while studies of the cetaceans emphasize their convergence with the cognitive abilities of chimpanzees. As the apes themselves are tethered to a human standard, this means that, despite an overt evolutionary emphasis, humans continue to lie at the centre of the comparative cognition project, even if they no longer occupy the top spot on the “great chain of being”. This subtler form of anthropocentrism continues to pervade both theoretical and empirical treatments of the social intelligence hypothesis because this is not a formal theory, but one that makes extensive use of a folk psychological intentional stance to construct and test its hypotheses. Penn (2011) describes the methodological principles of this “comparative folk psychology” as the process by which a behaviour that appears clever to human eyes is systematically investigated, such that instinct, stimulus-bound associative learning or simply random guessing
are all ruled out as possible explanations of a behaviour. Alternative explanations based on what humans would be thinking if they were behaving like the subjects in that same context are then offered and the claim is made that it is appropriate (and more parsimonious) to conclude that the non-human subjects are thinking what human subjects would be thinking or, at least, thinking functionally equivalent thoughts or their “precursors”. This notion of functional equivalence is even more problematic than Penn (2011) suggests because it has licensed a form of “as if” evolutionary reasoning that actively eliminates any need for the mechanism to be specified in detail (Barrett et al. 2007). Animals are argued to act “as if” they possess particular psychological mechanisms because their behaviour produces the appropriate functional consequences. This, in turn, may stem from the fact that Dunbar’s original analyses were situated purely at the functional level, designed to identify the factors that would potentially enhance the fitness of their possessors, and did not explicitly set out to link particular proximate mechanisms to their ultimate function. Thus, all that is required to test hypotheses relating to social intelligence is that animals act in ways consistent with the proposed evolutionary function of the behaviour, while the nature of the underlying mechanism remains utterly opaque. The use of folk psychological “as if” reasoning thus obscures the fact that studies using this tactic generally provide no solid evidence of the ultimate fitness benefits of a behaviour, nor do they demonstrate the operation of a specific proximate mechanism (Barrett et al. 2007; Barrett 2011). Instead, they sit uncomfortably in between, offering a “proximatey-ultimish” level of explanation.
There is also the related problem that, in most studies following such reasoning, there is the implicit equation of apparent behavioural complexity with cognitive complexity, when there is, in fact, no necessary connection between the two. For example, one interpretation of the food-cache protection strategies of scrub jays is that they are able to attribute mental states, take the perspective of another bird, and so understand when their caches are at risk of pilfering by others. This behavioural outcome has also been modelled, however, by assuming that re-caching is motivated simply by a general desire to cache more under conditions of stress, which itself is determined by the presence of onlookers, and by a bird’s own unsuccessful recovery attempts (van der Vaart et al. 2011, 2012; for further examples see Barrett 2011). The overt patterning of behaviour in both cases is the same, and the outcomes are equivalent. This suggests that explanations based on the notion of mental state attribution are favoured because they are “simpler for us” to understand intuitively, not because we have detected something in the behaviour that necessitates a more complex cognitive mechanism as its cause. Having said this, a recent test of van der Vaart et al.’s (2012) model on real scrub jays suggested that their stress hypothesis could not adequately account for patterns of caching (Thom & Clayton 2013), although the degree to which it tested the stress hypothesis is open to question: the study recorded only the number of items cached, whereas van der Vaart et al.’s (2012) hypothesis predicts an increased frequency of caching and re-caching (i.e., instances where the same item could be cached several times over).
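The kind of mechanism at issue can be caricatured in a few lines (an illustrative sketch only, not van der Vaart et al.’s actual simulation; the function names and weights below are invented). Caching motivation is driven entirely by a stress variable, and nothing in the mechanism attributes mental states:

```python
def stress_level(onlooker_present, failed_recoveries,
                 w_onlooker=1.0, w_failure=0.5):
    """Stress rises with the presence of an observer and with the
    bird's own unsuccessful recovery attempts (illustrative weights)."""
    return w_onlooker * int(onlooker_present) + w_failure * failed_recoveries

def caching_drive(onlooker_present, failed_recoveries,
                  base_rate=2.0, gain=3.0):
    """A general motivation to cache (and re-cache) more under stress;
    no mental-state attribution appears anywhere in the mechanism."""
    return base_rate + gain * stress_level(onlooker_present, failed_recoveries)

# A watched bird with a history of failed recoveries caches more than
# an unwatched one, reproducing the behavioural pattern usually glossed
# as "understanding when caches are at risk of pilfering".
watched = caching_drive(True, failed_recoveries=2)
unwatched = caching_drive(False, failed_recoveries=2)
```

The point of the sketch is that the behavioural signature (more re-caching when observed or after failed recoveries) falls out of a fully specified, non-mentalistic mechanism.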
What remains true, however, in both the original work and in this more recent test, is that the more complex cognitive mechanism expressed in folk psychological terms provides no mechanistic explanation at all. Van der Vaart et al.’s (2012) simulated jays, by contrast, posit clearly specified mechanisms that provide a promising explanation of caching behaviour. Thus, for all we know, “mental state attribution” might simply boil down to contextually-influenced tendencies to behave in particular ways. Indeed, this is the burden of the critique presented by Penn (2011). He argues that folk psychological explanations prosper because comparative psychologists have, to all intents and purposes, abandoned the fundamental tenets of the cognitive revolution. Instead of embracing the idea that cognitive processes are computational, rule-governed, algorithmic processes, Penn
(2011) argues that comparative psychologists aim to dismiss this possibility in favour of demonstrating that animals have an “understanding of” or “insight into” a particular folk psychological concept. The word “cognitive”, he argues, has thus become synonymous with “mentalistic”, “conscious” or “insightful”, or, as Tomasello and Call (2006, p. 371) put it, with the notion that “apes really do know what others do and do not see” (emphasis added). This, Penn (2011) argues, is actually a strongly anti-cognitivist position: the mentalistic mechanisms are presented as alternatives to computational or algorithmic processes, when they are really just short-hand folk psychological descriptions of such processes. In many ways, then, Penn (2011) echoes the arguments of the radical behaviourists, when he suggests that contemporary comparative psychologists seem content to trade in folk psychological idiom (what Skinner would call “mental fictions”) as explanations of behaviour (Skinner 1977). For Penn (2011), the solution to this problem is obvious: open up the “black box”, cash out folk psychological descriptions in terms of computational, algorithmic or neural levels of explanation, and fulfil the promise of the cognitive revolution. Abandoning folk psychological explanations would, he suggests, allow us “to imagine a rosier future”, one where “we study non-human social cognition from a cognitive perspective” (Penn 2011, p. 262, emphasis in the original). Such a move would allow us to “build a scrub jay that thinks like a scrub jay” (p. 262). But would it? One reason why this might not work as well as intended is because the computational approach that Penn advocates is not as free of human intentionality as he supposes. Although we treat computational cognitivist theories as essentially species-neutral, this is not the case (e.g., Brooks 1991; Barrett 2015a).
The problem domains (e.g., symbolic algebra, geometrical problems, natural language understanding and vision) of the original artificial intelligence project were all “bench-marked against the sorts of tasks humans do within those areas” (Brooks 1991, p. 140). These problem domains were also defined and refined by researchers in ways that simplified the task at hand, abstracting away most of the details. Thus, as Brooks (1991) points out, humans had employed most of the intelligence needed to solve a particular task (i.e., the part of the process involving abstraction) well before the computational models or artificial systems were let loose on it. In other words, given that the relevant details of such models were decided by humans in advance, the notion of intelligence captured by models of this nature was fundamentally tied to human capacities. The possibility therefore exists that there is a deep anthropocentrism built right into the core of computational theories of mind (see Barrett 2015a for a more detailed discussion of this). Hence, any theory articulated in computational terms, particularly those that make use of a representational framework, may well end up building a scrub jay that continues to think rather like a human. Indeed we can already see this in existing work: the cognitive architecture ACT-R, used by van der Vaart et al. (2012), is explicitly designed to mimic the human brain.5
Moving beyond the brain

This argument links to a broader criticism of computational theories in general, and the social intelligence/social brain hypothesis in particular, as both neglect the perception-action mechanisms by which animals engage with the world. Indeed, by defining cognition as a brain-bound process of computation, such theories actively encourage such neglect, treating sensory and motor abilities as “peripherals” that convey input to and output from the brain, and are not relevant to understanding cognitive processes as such (an aspect that reinforces the idea that computational theories are species-neutral). This neglect of bodily engagement with the environment is problematic because selection has acted for much longer on these perceptual and
motor systems than on the kinds of “high-level” processes identified as cognitive, and it seems likely that perceptual and motor capacities influence the form that any such cognitive processes take, as well as raising the possibility that, in some instances, intervening cognitive processes may not be needed at all (Brooks 1999; see also Barrett 2011 and 2015a for more detailed reviews of this alternative stance). With respect to the social intelligence/social brain hypothesis specifically, this failure to consider perceptual and motor abilities means that differences between primates and other species inevitably are downplayed, even though these may make a very real difference to the kinds of socio-cognitive strategies open to a species. This seems to make it even less likely that we will build a “scrub jay that thinks like a scrub jay”. What is also lost in a solely brain-based view of social intelligence, then, is any notion that the selection pressures acting on non-primate species may compete with those social pressures argued to increase brain size. Holekamp, for example, has argued in her work on spotted hyenas (Holekamp et al. 2013) that selection on the ability of the jaws to resist high stresses and exert strong bite forces may have limited the extent of brain expansion in the genus (and carnivores more generally). Brain size expansion reduces the area available for muscles within the zygomatic arches, and smaller muscles would mean weaker bite forces. In the hyena case, the feeding apparatus seems to have been under stronger selection than the brain, and Holekamp suggests these morphological constraints explain why hyenas show less behavioural flexibility than primates, despite striking similarities in social organization (see also Barrett & Würsig 2014 for a similar argument with respect to cetacean cognition). This brings us back to the issue of anthropomorphism.
Recently, there have been a number of positive arguments in favour of anthropomorphism as a valid scientific strategy that are couched in slightly different terms to more standard arguments. What I want to argue here is that these similarly contain a deep anthropocentrism, despite ostensibly arguing to the contrary.
Anthropomorphism as “ethnocentrism”

Debates about anthropomorphism and its appropriateness have a long history, coming and going in cycles, reflecting the dominant schools of thought at a given place and time. This story is well rehearsed, from Descartes, to Darwin and then onto behaviourism, the cognitive revolution and the rise of cognitive ethology. As in all areas of life, then, there are trends in science, and currently we are in a phase where similarities between species (or rather similarities between humans and other species) are emphasized and differences are downplayed. In recent years, there has been a steady rise in the number of empirical claims for human-like cognitive traits in other species, bolstered by the application of an evolutionary framework. In line with this, there have been a number of cogent arguments in favour of an anthropomorphic strategy, or at least one that does not display a bias against attributing “high-level” or “human-like” abilities to other species. Andrews (2007a,b, 2011) for example, suggests that the issue at stake is not whether humans use folk psychology to understand other animals, but whether animals use their own form of folk psychology to understand each other. This in turn raises questions over what folk psychology actually is. Andrews notes that the traditional definition is some form of “mind-reading”, i.e., the attribution of mental states – beliefs and desires – to other people in order to predict their behaviour. The problem with this definition is that, as she demonstrates, we do not always attribute beliefs and desires to predict what our fellow humans will do: I can predict that my office-mate will head straight for the kettle when she arrives at work in the morning because
this is what she always does, not because I believe she desires a cup of tea, and believes that the kettle will help her achieve this by boiling the water for her. It is equally clear that human children engage in folk psychological behaviours before they gain any understanding of belief. This means that the traditional question to ask in a “critter psychology” – do animals attribute beliefs and desires to others in order to predict behaviour? – simply won’t do, because it asks more of animals than it does of humans (Andrews 2007a, 2011). Buckner (2013) makes a very similar argument, in which he coins the phrase “anthropofabulation” to describe our tendency to inflate human psychological capacities, and then insist that only those non-human animals that can perform at this (over-inflated) level can be said to possess the psychological trait or mechanism in question. Andrews (2007a) suggests the way out of this quandary is to change the premise that folk psychology requires the attribution of mental states to others, to the notion that folk psychology is the understanding that others are minded, intentional creatures. She argues that this involves two elements: practices of social interaction and the possession of abstract mentalistic concepts (i.e., animals possess beliefs of some kind, and these are instrumental in enabling effective behaviour, even if they cannot attribute such beliefs to each other). On this basis, Andrews considers there to be sufficient evidence to conclude that chimpanzees possess some kind of folk psychology by which they understand conspecifics (e.g., Hare et al.’s (2000, 2001) experiments, which are argued to show that chimpanzees “understand what others see”). Chimpanzee folk psychology “need not be as robust or as complex as our own, but the use of mentalistic concepts to engage in social interaction counts” (Andrews 2007a, p. 204). 
From a positive perspective, this argument leads to the conclusion that folk psychology is not a monolith: there may be many ways for other social creatures to predict behaviour and engage socially with each other. On the negative side, there may still be a problem with Andrews’ assessment here, one that arises empirically, rather than logically. The data she uses to support her position involved chimpanzees being tested within an experimental paradigm that adopts a Western, scientific conception of human folk psychology. Within this paradigm, the only mentalistic concepts that chimpanzees can display are those of the experimenters, whose European-American folk psychology informed the design. Despite Andrews’ arguments, and despite the greater ecological validity that is said to characterize Hare’s form of testing (based on competition rather than cooperation), the conceptual framework of the studies she uses to support her argument remains strongly anthropocentric and, arguably, is “ethnocentric” to boot. On the one hand, perhaps this doesn’t matter if all we wish to argue for is functional continuity across species, and simply make the case that other species possess some kind of folk psychology of their own. On the other hand, broadening the concept of folk psychology and making it more “pluralistic” does not eradicate this deeper problem of anthropocentrism because, empirically speaking, human folk psychology remains at its centre. This problem is thrown into sharper relief when we consider the animals’ failure to pass such tests: is this because they truly do not possess the capacities being tested for? Or do they fail because the test is not “culturally appropriate” for the species in question, reflecting only one particular conception of what understanding others entails? How would this influence our view of their folk psychology?
It seems that a concern about the “colonization” of the animal mind by a particular Western scientific view of folk psychology remains, even if we decide that criticism of anthropomorphism per se is baseless. The same perhaps is true of Andrews’ (2007b, 2011) arguments in favour of using “folk expert opinion”, and the functionality of attributions to assess which traits can be attributed to other animals besides ourselves. Here, her argument is that we can elicit folk psychological assessments
of other species from people who spend time caring for other species, such as zoo-keepers, or those who spend long periods of time observing other species, like fieldworkers. The veracity of such assessments can then be adjudicated by how useful they are in predicting and controlling behaviour, which is something that can be put to rigorous scientific test. Given that we use this approach to attribute psychological properties to other humans who are unable to do so for themselves (such as infants and very young children, or people who are profoundly disabled in some way), Andrews argues that we should not be shy of using a similar approach with other species. From a pragmatic point of view, this approach has much to recommend it, especially in a caregiving context where attributing mental states may very well enhance overall welfare and wellbeing of other species. However, this strategy will work regardless of whether animals actually do possess psychological states, or whether we merely project them. Scientifically, this approach may also work for the reasons Andrews gives, namely, that folk expertise is not the end point, but the starting point for further scientific investigation (although this would seem to fall prey to Penn’s criticism that folk psychological explanation is not a good basis for scientific investigation). Moreover, if we are happy with folk expertise as applied to human infants, then we should similarly be happy with its application to other species, especially the primates, where its application is licensed by the functional continuity between human minds and those of other species. 
There is, however, a further scientific worry here, which Andrews doesn’t fully discuss, which is that the personality inventories and other scales used to elicit this folk expertise have been created in the context of a particular (Western, individualistic) scientific way of thinking about human psychology, which may not even apply to all humans (Greenfield 1997; Greenfield et al. 2003). This being so, it becomes much less clear whether these will permit access to a non-human folk psychology that captures how animals might think about each other. What is interesting here is that Andrews and Huss (2014) make this same argument, but from the opposite perspective. They point out that

[t]he problematic properties are those that require a degree of interpretation to identify, those that are still more opaque than transparent. Those human properties that currently defy a robust scientific account are also those that are most often cited as problematically anthropomorphic. (p. 716)

But this is precisely the problem as Penn (2011) sees it: a folk psychological account is more opaque and requires more interpretation in virtue of the fact that it offers no real hypotheses nor explanation of the specific cognitive mechanisms involved. In their account, however, Andrews and Huss (2014) suggest that “selective skeptics”, like Penn (2011), regard folk psychological explanations as problematic simply because they offer potentially false accounts of human behaviour, and thus we compound this error when we apply such explanations to other species. Andrews and Huss (2014) therefore suggest that this anti-anthropomorphic stance ultimately cannot work in the way the skeptics would like because “the distinction between folk psychological concepts and scientific psychological concepts will not map onto the distinction between anthropomorphic human properties and shared properties” (p. 716), but this is not the argument being made.
The skeptics are not arguing that folk psychological attributions are anthropomorphic and therefore impermissible. Rather, they are arguing that folk psychological attributions are impermissible because they fail to identify any kind of cognitive mechanism that could be put to the test, whether these are found in human or non-human animals. The “mental representations” that researchers like Povinelli and Vonk speak of are not the folk psychological beliefs and other mental states studied by the likes of Hare, Call and Tomasello (Povinelli & Vonk 2004).
If social intelligence is the answer, are we stuck with the same old questions?

This brings me to my final point, which concerns the perceived bias against attributing high-level abilities to non-humans. As Sober (2005) and Andrews and Huss (2014) note, there is no (commonly used) word that denotes the mistaken failure to attribute human-like characteristics to other animals. Although de Waal (1997), for example, has suggested the word “anthropodenial” to cover such cases, it has failed to gain much traction. Sober (2005) suggests that the reason for this bias is that Type II errors, i.e., cases of mistaken anthropodenial, are considered to be less serious than the Type I error of anthropomorphism. This, Sober (2005) argues, seems to stem from the idea that only a sentimental attitude would lead one to assume one’s pet has mental states and it is, therefore, a sign of strength to resist this assumption. Hence, the preferred null hypothesis is one that suggests the animals under study will not possess human psychological traits. Sober (2005) argues that such reasoning in favour of a skeptical null hypothesis is flawed because the two errors are, in fact, equivalent, with each as undesirable as the other. We should, therefore, adopt neither anthropomorphism nor anthropodenial as a default position, but gather the evidence needed to discriminate between them appropriately (see also Barrett 2011). This, in turn, means, for Sober (2005) at least, that studies of animal cognition should reject Neyman and Pearson’s methodology for setting up a null hypothesis and its alternative. Andrews and Huss (2014) recently took on Sober’s (2005) argument, demonstrating that there is no problem with null hypothesis testing as such, for it does not lead inexorably to a preference for a skeptical null hypothesis over an optimistic one.
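Sober’s point about weighing the two errors can be made concrete with a worked example (the numbers are hypothetical, chosen only for illustration). Suppose an animal receives 20 test trials, the skeptical null is chance performance (p = 0.5), and true competence, if present, would yield p = 0.7. An exact binomial calculation shows what the conventional 5% bound on the Type I error implies for the Type II risk of anthropodenial:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 20          # trials (hypothetical)
p_null = 0.5    # skeptical null: chance performance
p_alt = 0.7     # competence if the capacity is present (hypothetical)

# Smallest success count whose tail probability under the null is <= 5%.
k_crit = next(k for k in range(n + 1) if binom_tail(n, k, p_null) <= 0.05)

alpha = binom_tail(n, k_crit, p_null)    # Type I: false anthropomorphism
beta = 1 - binom_tail(n, k_crit, p_alt)  # Type II: false anthropodenial
```

With these illustrative numbers the rejection threshold is 15/20 correct, giving a Type I risk of roughly 0.02 but a Type II risk of roughly 0.58: under the skeptical null, mistakenly denying the capacity is far more probable than mistakenly attributing it, which is precisely the asymmetry Sober objects to treating as harmless.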
The exact details of their argument need not detain us here, as the only point I wish to make is that Andrews and Huss (2014) concede that a skeptical null may sometimes be warranted for prima facie, pre-empirical reasons. These include statistically-based reasons for why other closely related species lack the property in question, anecdotal evidence to suggest the ability may be lacking, and independent theoretical reasons to suggest that this is so. Given this argument, the final question I want to raise is this: if, for the sake of argument, we accept that the social intelligence hypothesis is the best explanation for why human and non-human cognitive capacities take the form that they do, does this provide sufficient reason for adopting a skeptical null as the default when investigating the psychology of other species? As we have seen, the arguments for the social intelligence hypothesis suppose that certain kinds of animals, namely those that live in particular kinds of temporally stable, structured groups, have been selected for the ability to manipulate and use information relating to the psychological properties of other individuals. Humphrey (1976, p. 313) took this to its logical conclusion by suggesting that

our man setting out to apply his intelligence to solve a social problem may expect to be involved in a fluid, transactional exchange with a sympathetic human partner. To the extent that the thinking appropriate to such a situation represents the customary mode of human thought, men may be expected to behave inappropriately in contexts where a transaction cannot in principle take place: if they treat inanimate entities as “people” they are sure to make mistakes.

The misapplication of our social intelligence, Humphrey suggests, is what leads us to bargain with an indifferent nature, through ritual, sacrifice and prayer, or assume the roulette wheel will eventually “respond to our persistent overtures” (p. 313) and come up red.
More positively, Humphrey argues that this misapplication may also have been the source of many of our most
Louise Barrett
impressive cultural and technological developments, such as agriculture (as we “transact” with nature in the planting and tending of crops), an idea expanded on by Mithen (2007). If we accept this to be the case, then the attribution of psychological states could conceivably be the misapplication of our specifically human (content-filled, language-based) social intelligence to animate creatures besides ourselves. This is, of course, the very definition of anthropomorphism, but the notion is complicated by our awareness that it is the social intelligence hypothesis itself that leads to this conclusion, even though it is the very same hypothesis that suggests we should, in fact, expect to see these kinds of psychological states in other social creatures. Thus, it becomes pertinent to ask whether the social intelligence hypothesis provides the kind of pre-empirical reason needed to favour the adoption of a skeptical null. Andrews and Huss (2014) might perhaps argue that this theoretical stance and the empirical tests are not independent enough to justify a skeptical approach.

My point in raising this question is not so much to decide whether the skeptical stance is justified as to illustrate that, if we accept the social intelligence hypothesis as the best explanation for the particular character of human cognition, then it seems likely that we will forever remain trapped in a reflexive debate over whether and why we should expect to see human folk psychological traits in other species. The social intelligence hypothesis itself pushes us toward a skeptical stance on the psychological states of other species, even as it provides us with the justification for adopting a more optimistic one, thus leading us round in circles. I suspect this is so because of the impossibility of escaping the deep anthropocentrism that lies at the heart of the social intelligence hypothesis.
This is something that, contrary to Penn’s (2011) advice, we cannot solve by adopting a more stringently cognitivist approach, because this also retains a deeply anthropocentric signature. What we need is a fresh approach: one that places greater emphasis on evolution as a diversity-generating process, and considers situated, embodied action in the world as constitutive of cognitive processes (Chemero 2009; Barrett 2011; Hutto & Myin 2013). We do not have to accept that the social cognition of our closest relatives must be conceived of in mentalistic terms on the grounds of cognitive and evolutionary parsimony. Instead, we can ask how other animals might acquire the know-how to deal with their social lives, without assuming that “basic minds” must be content-bearing and representational simply because we know our own minds to possess such characteristics, and because an evolutionary stance seems to require some form of psychological continuity through time (Barrett 2015b). In other words, it seems worthwhile to entertain the view of cognition offered by a radical embodied and enactivist stance, as this focuses attention on embodied creatures embedded in their environments, rather than on algorithmic, detached, isolated minds. In this way, perhaps, we can at least resist the pull of our own anthropocentrism, even if we can never eradicate it completely. Our view of other animals may always be coloured by our position as “outsiders”, but perhaps we can at least avoid the colonizing tendencies of current comparative theories of social intelligence.
Notes

1 A parallel series of studies by Barton (1998, 2004; Barton et al. 1995) refined these analyses by identifying the specific parts of the brain that showed the greatest expansion, demonstrating, among other relationships, that the parvocellular layer of the lateral geniculate nucleus correlated with group size, suggesting a role for visual signalling in driving brain size evolution. More recently, Barton and Venditti (2014) have shown that, among the apes, it is the cerebellum rather than the neocortex that has undergone an explosive increase in size. This has been argued to support the idea that it was “technical intelligence” (i.e., the ability to manipulate and plan sequences of actions) as opposed to social intelligence that accounts for ape cognitive abilities, relative to those of the monkeys.
The (r)evolution of primate cognition

2 Although other taxa do not show the extremely large relative brain sizes of the primates, the same logic applies: social species should possess larger brains than non-social species, even if absolute brain size is small relative to the primates (see Dunbar and Shultz 2007 and Shultz and Dunbar 2007 for reviews; Dunbar and Shultz 2010).

3 In one of these studies, males watched a female being pre-fed until she was satiated with either mealworms (M) or wax-moths (W), after which they themselves could offer females one of these same items. The design thus exploits the phenomenon of specific satiety: feeding on one food reduces the desire for that food, without having an impact on the motivation to feed on other types of food. The authors’ reasoning was that, if the male birds understood the females’ internal desire-state, then they should recognize that a female fed on one food (either M or W) would subsequently desire the alternative food, and the male should therefore offer the female relatively more items of the alternative food. The authors tested their hypothesis by quantifying the extent to which males offered females items of W relative to baseline levels of feeding on that food (in order to control for individual differences in food intake). In one condition, females were pre-fed to satiety with W, with the prediction that males who recognized that W was now devalued should offer fewer items of W to the females. In a second condition, the females were pre-fed with M, with the prediction that males should offer the female more items of W (in both cases, “more” or “fewer” were relative to the pre-established baseline feeding on these items).
Although the authors report a positive result, with males offering food that matched the females’ current desire-state, the nature of the analysis strategy (i.e., how much W was fed relative to baseline) meant that, in the condition where females were pre-fed W and so should favour M, the authors tested only whether the males offered fewer items of W to the female, but did not test whether the male offered the female more items of M relative to the baseline; arguably, this could be viewed as a more critical and stringent test. It is possible to get some sense of whether males offer females more items of M in this condition from data presented in the accompanying tables. This shows that two males offered the females more items of M relative to baseline, while all the others actually offered fewer items (3 males) or an equal number of items of M (1 male). Thus, the hypothesis is supported in one condition, but not the other, suggesting the results are more equivocal than they seem at first. The authors apparently anticipate this potential objection by stating in their introduction that “if you eat a sandwich before lunch, this doesn’t mean you will buy two cakes at lunchtime, merely that you won’t buy a sandwich.” This is true but, in the context of their study, it meant that they predicted a different response by the male in each of the two conditions, i.e., it was argued that a female that was pre-fed M should subsequently be fed more items of W (i.e., not simply show a reduced inclination to feed M), whereas a female that was pre-fed W should simply be offered fewer items of W, rather than being offered more of the alternative food, M. 
In a second study, this time designed to test whether males would offer food according to the females’ specific satiety when their own specific satiety conflicted (i.e., when males and females should prefer to feed on different foods), the results were even more equivocal, with males continuing to show a bias toward their own needs, rather than those of the female.

4 In this study, a low-ranking female vervet monkey (the “expert”) was shaped to open a box containing food. The most dominant animal would, however, monopolize the food when the expert opened the box. When this happened, the expert refrained from opening the box when the dominant was in proximity. The dominant then learned to keep its distance, so that the expert would open the box, grab some food and then leave the box, which the dominant could then monopolize (or compete for with other dominant animals). Once the most dominant animal had learned to keep its distance, however, the second-ranking animal then attempted to monopolize the box on opening, and it too had to learn to keep its distance. Over time, each animal more dominant to the expert learned to keep its distance from the box until after it was opened, sitting outside an invisible “forbidden circle”. The pattern of learning showed no evidence of any social or observational learning taking place, and instead conformed to each animal individually learning about how its own behaviour influenced the presence of food rewards, on an intermittent reinforcement schedule (Fruteau et al. 2013).

5 Thanks to Gert Stulp for pointing this out to me.
References

Andrews, K. (2007a). Critter psychology: On the possibility of nonhuman animal folk psychology. In D. D. Hutto and M. Ratcliffe (Eds.), Folk Psychology Re-Assessed (pp. 191–209). Dordrecht: Springer Netherlands.
———. (2007b). Politics or metaphysics? On attributing psychological properties to animals. Biology & Philosophy, 24(1), 51–63.
———. (2011). Beyond anthropomorphism: Attributing psychological properties to animals. In T. L. Beauchamp and R. G. Frey (Eds.), Oxford Handbook of Animal Ethics (pp. 469–494). Oxford: Oxford University Press.
Andrews, K. & Huss, B. (2014). Anthropomorphism, anthropectomy, and the null hypothesis. Biology & Philosophy, 29(5), 711–729.
Barrett, L. (2011). Beyond the Brain: How Body and Environment Shape Animal and Human Minds. Princeton, NJ: Princeton University Press.
———. (2015a). A better kind of continuity. The Southern Journal of Philosophy, 53, 28–49.
———. (2015b). Back to the rough ground and into the hurly-burly. In D. Moyal-Sharrock, V. Muz and A. Coliva (Eds.), Mind, Language and Action: Proceedings of the 36th International Wittgenstein Symposium (p. 299). Berlin: Walter de Gruyter.
Barrett, L., Henzi, S. P. & Rendall, D. (2007). Social brains, simple minds: Does social complexity really require cognitive complexity? Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 561–575.
Barrett, L. & Würsig, B. (2014). Why dolphins are not aquatic apes. Animal Behavior and Cognition, 1(1), 1–18.
Barton, R. A. (1998). Visual specialization and brain evolution in primates. Proceedings of the Royal Society B: Biological Sciences, 265(1409), 1933–1937.
———. (2004). Binocularity and brain evolution in primates. Proceedings of the National Academy of Sciences of the United States of America, 101(27), 10113–10115.
Barton, R. A., Purvis, A. & Harvey, P. H. (1995). Evolutionary radiation of visual and olfactory brain systems in primates, bats and insectivores. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 348(1326), 381–392.
Barton, R. A. & Venditti, C. (2014). Rapid evolution of the cerebellum in humans and other great apes. Current Biology, 24(20), 2440–2444.
Bergman, T. J., Beehner, J. B., Cheney, D. L. & Seyfarth, R. M. (2003). Hierarchical classification by rank and kinship in baboons. Science, 302(5648), 1234–1236.
Brooks, R. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.
———. (1999). Cambrian Intelligence: The Early History of the New AI. Cambridge, MA: MIT Press.
Buckner, C. (2013). Morgan’s Canon, meet Hume’s Dictum: Avoiding anthropofabulation in cross-species comparisons. Biology & Philosophy, 28(5), 853–871.
Byrne, R. & Whiten, A. (1989). Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford: Blackwell.
Byrne, R. W. & Corp, N. (2004). Neocortex size predicts deception rate in primates. Proceedings of the Royal Society B: Biological Sciences, 271(1549), 1693–1699.
Call, J. & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192.
Chemero, A. (2009). Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.
Cheney, D. L. & Seyfarth, R. M. (2005). Constraints and preadaptations in the earliest stages of language evolution. The Linguistic Review, 22(2–4), 135–159.
De Waal, F. (1997). Are we in anthropodenial? Scientists frown on thinking that animals have intentions and emotions. Yet how else can we really understand them? Discover, 18, 50–53.
Dewey, J. (1958). Experience and Nature (1925). New York: Dover.
Dunbar, R. I. (1992). Neocortex size as a constraint on group size in primates. Journal of Human Evolution, 22(6), 469–493.
———. (1995). Neocortex size and group size in primates: A test of the hypothesis. Journal of Human Evolution, 28(3), 287–296.
———. (1998). The social brain hypothesis. Evolutionary Anthropology: Issues, News, and Reviews, 6(5), 178–190.
———. (2003). The social brain: Mind, language, and society in evolutionary perspective. Annual Review of Anthropology, 32, 163–181.
———. (2009). The social brain hypothesis and its implications for social evolution. Annals of Human Biology, 36(5), 562–572.
Dunbar, R. I. & Shultz, S. (2007). Evolution in the social brain. Science, 317(5843), 1344–1347.
———. (2010). Bondedness and sociality. Behaviour, 147(7), 775–803.
Fruteau, C., Van Damme, E. & Noë, R. (2013). Vervet monkeys solve a multiplayer “Forbidden Circle Game” by queuing to learn restraint. Current Biology, 23(8), 665–670.
Gallagher, S. (2001). The practice of mind: Theory, simulation or primary interaction? Journal of Consciousness Studies, 8(5–6), 83–108.
Greenfield, P. M. (1997). You can’t take it with you: Why ability assessments don’t cross cultures. American Psychologist, 52(10), 1115.
Greenfield, P. M., Keller, H., Fuligni, A. & Maynard, A. (2003). Cultural pathways through universal development. Annual Review of Psychology, 54(1), 461–490.
Hare, B., Call, J., Agnetta, B. & Tomasello, M. (2000). Chimpanzees know what conspecifics do and do not see. Animal Behaviour, 59(4), 771–785.
Hare, B., Call, J. & Tomasello, M. (2001). Do chimpanzees know what conspecifics know? Animal Behaviour, 61(1), 139–151.
———. (2006). Chimpanzees deceive a human competitor by hiding. Cognition, 101, 495–514.
Healy, S. D. & Rowe, C. (2007). A critique of comparative studies of brain size. Proceedings of the Royal Society B: Biological Sciences, 274(1609), 453–464.
———. (2013). Costs and benefits of evolving a larger brain: Doubts over the evidence that large brains lead to better cognition. Animal Behaviour, 86(4), e1–e3.
Heyes, C. (2012). Grist and mills: On the cultural origins of cultural learning. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1599), 2181–2191.
Holekamp, K. E., Swanson, E. M. & Van Meter, P. E. (2013). Developmental constraints on behavioural flexibility. Philosophical Transactions of the Royal Society B: Biological Sciences, 368(1618), 20120350.
Humphrey, N. K. (1976). The social function of intellect. In P.P.G. Bateson and R. A. Hinde (Eds.), Growing Points in Ethology (pp. 303–317). Cambridge: Cambridge University Press.
Hutto, D. D. (2004). The limits of spectatorial folk psychology. Mind & Language, 19(5), 548–573.
Hutto, D. D. & Myin, E. (2013). Radicalizing Enactivism. Cambridge, MA: MIT Press.
Ihle, M., Kempenaers, B. & Forstmeier, W. (2015). Fitness benefits of mate choice for compatibility in a socially monogamous species. PLoS Biology, 13(9), e1002248.
Jelbert, S. A., Singh, P. J., Gray, R. D. & Taylor, A. H. (2015). New Caledonian crows rapidly solve a collaborative problem without cooperative cognition. PLoS ONE, 10(8), e0133253.
Jolly, A. (1966). Lemur social behavior and primate intelligence. Science, 153(3735), 501–506.
Kudo, H. & Dunbar, R. (2001). Neocortex size and social network size in primates. Animal Behaviour, 62(4), 711–722.
Mead, G. H. (1934). Mind, Self and Society. Chicago: University of Chicago Press.
Mithen, S. (2007). Did farming arise from a misapplication of social intelligence? Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 705–718.
Moll, H. & Tomasello, M. (2007). Cooperation and human cognition: The Vygotskian intelligence hypothesis. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 639–648.
Ostojić, L., Shaw, R. C., Cheke, L. G. & Clayton, N. S. (2013). Evidence suggesting that desire-state attribution may govern food sharing in Eurasian jays. Proceedings of the National Academy of Sciences of the United States of America, 110(10), 4123–4128.
Pawlowski, B., Lowen, C. B. & Dunbar, R. I. M. (1998). Neocortex size, social skills and mating success in primates. Behaviour, 135, 357–368.
Penn, D. (2011). How folk psychology ruined comparative psychology: And how scrub jays can save it. In R. Menzel and J. Fischer (Eds.), Animal Thinking: Contemporary Issues in Comparative Cognition (pp. 253–266). Cambridge, MA: MIT Press.
Povinelli, D. & Vonk, J. (2004). We don’t need a microscope to explore the chimpanzee’s mind. Mind and Language, 19(1), 1–28.
Rowe, C. & Healy, S. D. (2014). Measuring variation in cognition. Behavioral Ecology, 25(6), 1287–1292.
Schmelz, M., Call, J. & Tomasello, M. (2011). Chimpanzees know that others make inferences. Proceedings of the National Academy of Sciences of the United States of America, 108(7), 3077–3079.
Seyfarth, R. M. & Cheney, D. L. (2013). Affiliation, empathy, and the origins of theory of mind. Proceedings of the National Academy of Sciences, 110(Supplement 2), 10349–10356.
Shultz, S. & Dunbar, R. I. (2007). The evolution of the social brain: Anthropoid primates contrast with other vertebrates. Proceedings of the Royal Society B: Biological Sciences, 274(1624), 2429–2436.
Skinner, B. F. (1977). Why I am not a cognitive psychologist. Behaviorism, 5(2), 1–10.
Sober, E. (2005). Comparative psychology meets evolutionary biology: Morgan’s Canon and cladistic parsimony. In G. Mittman and L. Daston (Eds.), Thinking with Animals: New Perspectives on Anthropomorphism (pp. 85–99). New York: Columbia University Press.
Sterelny, K. (2003). Thought in a Hostile World: The Evolution of Human Cognition. Oxford: Wiley-Blackwell.
———. (2007). Social intelligence, human intelligence and niche construction. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 719–730.
———. (2012). The Evolved Apprentice. Cambridge, MA: MIT Press.
Thom, J. M. & Clayton, N. S. (2013). Re-caching by Western scrub-jays (Aphelocoma californica) cannot be attributed to stress. PLoS ONE, 8(1), e52936.
Tomasello, M. & Call, J. (2006). Do chimpanzees know what others see – or only what they are looking at? In S. Hurley and M. Nudds (Eds.), Rational Animals? (pp. 371–384). Oxford: Oxford University Press.
Tomasello, M. & Moll, H. (2010). The gap is social: Human shared intentionality and culture. In P. M. Kappeler and J. Silk (Eds.), Mind the Gap (pp. 331–349). New York: Springer.
Tyler, T. (2003). If horses had hands . . . Society & Animals, 11(3), 267–281.
van der Vaart, E., Verbrugge, R. & Hemelrijk, C. K. (2011). Corvid caching: Insights from a cognitive model. Journal of Experimental Psychology: Animal Behavior Processes, 37(3), 330.
———. (2012). Corvid re-caching without “theory of mind”: A model. PLoS ONE, 7(3), e32904.
van Schaik, C. P. (1983). Why are diurnal primates living in groups? Behaviour, 87(1), 120–144.
Vygotsky, L. S. (1997). The Collected Works of L. S. Vygotsky: Problems of the Theory and History of Psychology (Ed. R. W. Rieber and J. Wollock). New York: Springer Science & Business Media.
Zuberbühler, K. (2000). Referential labelling in Diana monkeys. Animal Behaviour, 59(5), 917–927.
2
PEDAGOGY AND SOCIAL LEARNING IN HUMAN DEVELOPMENT

Richard Moore
The greater part of our knowledge of reality rests upon belief that we repose in things we have been taught or told. (Anscombe, 1979, p. 143)
Introduction

In both ontogeny and phylogeny the act of teaching plays a foundational role in human cognitive development. Not only does it shape our worldview; it likely enabled the survival of the human lineage in evolutionary history. With respect to human phylogeny, though, a puzzle arises about when in our history teaching might have emerged. If we think of teaching in the manner in which it exists in contemporary Western classrooms, then given the sorts of cognitive and social systems that this practice requires, it could only have emerged late in phylogeny. The later teaching emerged, though, the less it could have contributed to the survival of our early ancestors.

In this chapter, I try to defuse this concern by showing that teaching can come in sociocognitively undemanding forms. Even early in evolutionary history, these forms would have been well placed to contribute to the survival and expansion of the human race, by facilitating the emergence of cumulative culture. This is a sophisticated and perhaps uniquely human form of social learning, in which cultural technologies are not simply learned from others. Rather, high fidelity social learning strategies and technological innovation interact (Legare & Nielsen, 2015) – leading to the development of increasingly complex technology over generations.

I start by illustrating why teaching (and other forms of social learning) played an essential role in the emergence of cumulative culture, by contrasting the strengths and weaknesses of three different models of learning. I then sketch an account of the nature of teaching, and I defend an account of its cognitive prerequisites that does not presuppose sophisticated individual abilities or social mechanisms – and so is consistent with the possibility that teaching emerged early in hominin phylogeny.
Why does social learning matter?

The significance of social learning to human development can be illustrated by considering three different models of the sources of an agent’s abilities to survive in and cope with its
environment. The first model appeals only to inherited cognitive abilities and makes little use of the notion of learning. The second supplements the set of inherited abilities with further individual learning; and a third model explains survival via a combination of inherited abilities, and individual and social learning. While the first and second models might suffice to explain the survival of some organisms, no one would claim that they are adequate for the characterisation of humans.1 However, considering the differences between the models will highlight the importance of social learning to human forms of life.

(i) Inheritance alone

On the first model, the cognitive abilities that allow a creature to survive in its environment are unlearned. Such abilities might consist of a set of hardwired, specialised cognitive modules or heuristics supporting essential survival skills – including (but not exhausted by) knowledge of the food types and hunting skills that would enable a creature to survive in its environment, and of the fight-or-flight and mating instincts that would enable it to avoid or overwhelm predators while reproducing often enough to ensure the survival of its lineage.

Some abilities that particularly lend themselves to survival seem to be present in this way in our nearest cousins, the non-human great apes. For example, mountain gorillas (Gorilla gorilla beringei) have been shown to feed spontaneously on a type of stinging nettle, Laportea alatipes, that is thought to have medicinal properties; and when they do so, they employ a skillful stripping technique to protect them from its sting (Byrne & Byrne, 1993). When Tennie et al.
(2008) presented similar nettles (Urtica dioica) to groups of captive Western lowland gorillas (Gorilla gorilla gorilla), who seem not to eat nettles in the wild and who were known not to have encountered them in captivity, gorillas in each group spontaneously ate the leaves, and did so using the same elaborate processing technique as their wild cousins. By contrast, they did not use this technique for eating similar-looking but harmless willow leaves; and similar techniques were not observed when the nettles were fed to other species of non-human great ape. Given the spontaneous appearance of this distinctive technique in groups of naïve individuals, it is plausibly the result of an adaptation for nettle eating in the gorilla genus.

Such adaptations likely play an important role in the survival of all species. However, without supplementation, species equipped with only innate abilities would prove particularly vulnerable to certain kinds of threat. Hardwired abilities are typically adapted to certain kinds of ecological niche, and have been honed to survive in this niche as a result of natural selection for beneficial mutations over a period of evolutionary time. Steven Pinker eloquently describes such change as “arising by random mutation and being tuned over generations by the slow feedback of differential survival and reproduction” (Pinker, 2010, p. 8994). Where environmental changes occur in real time, though – over the course of individuals’ lifetimes and not generations – species that are too highly adapted to one environment may find themselves ill-equipped to cope with environmental change.

(ii) Inherited abilities supplemented by individual learning

Creatures could counter this threat if they were able to supplement innate abilities with a capacity for individual learning. This would allow them to acquire skills more suitable for a changing environment.
For example, species equipped with the ability to learn might discover new causal contingencies in their environment – like the fact that certain nuts could be cracked open with the use of stone hammers, and then eaten. Alternatively, pushed into a very new environment, they might discover that fish could be caught and eaten; and that if these fish were cooked, they were easier to eat and tasted better.
So long as such technologies were fairly simple, inquisitive and moderately intelligent individuals could learn them during the course of their lifetimes, and so augment their ability to survive. However, there are inevitable restrictions on what any mortal individual can learn. The sophistication of the tools that they could create would be limited, leaving individuals potentially ill-equipped to cope with more difficult environmental challenges. Furthermore, where agents learn not from each other but only individually, valuable technological innovations will always be lost with the death of their inventor. Thus tools previously discovered by others would have to be reinvented by each new generation; and new learners would spend their time reinventing technologies already discovered and lost many times over, rather than refining the technologies of their forebears.

We know that as our early human ancestors moved out of Africa and came to populate what is now Europe, they overcame much greater obstacles than could be survived by individuals working alone (Wade, 2006). They were able to do this because they did not just learn individually. They also learned from each other. As Dennett has noted, “cultural evolution operates many orders of magnitude faster than genetic evolution, and this is part of its role in making our species special” (Dennett, 1996, p. 339).

(iii) Inherited abilities supplemented by individual and social learning

In arguing for the claim that “culture is essential for human adaptation”, Boyd, Richerson, and Henrich (2011) show that even among cultures that seem technologically unsophisticated, skills that are essential for survival turn out to exploit remarkable degrees of expertise. As such, the skills required to survive could not be learned by isolated individuals; their existence within a community can be explained only by the existence of social learning.
They give the example of the Central Inuit of Northern Canada, who live in temperatures as low as ‑35°C in the winter months. To survive in these temperatures, the Central Inuit rely on clothes made from carefully constructed and elaborately treated caribou skins that are lined with wolverine fur, and stitched together with fine bone needles using sinew taken from around the caribou vertebrae. The Central Inuit also live in snow igloos built on sea ice that are carefully constructed to maximise heat retention, and which are lit with lamps carved from stone and fuelled by seal fat. In the winter they survive on meat harvested from seals hunted at breathing holes using harpoons carved from antlers and tipped with sharpened polar bear bones. However, in the summer, their primary sustenance comes from caribou that are hunted using long bows expertly crafted using a combination of driftwood, horn, antler, and sinew. Therefore, to survive, the Central Inuit must become expert tailors, hunters, weapons-makers, and architects – not to mention their need to raise and manage dogs, and engineer sledges, kayaks, and tools.

On top of all of this, to be able to engage in more than rudimentary communication, and thereby coordinate their activities with others, all humans must acquire language. This requires particularly high levels of social learning, because word forms must be copied precisely from others in order to be useful in communication (Tomasello, 2008; Moore, 2013a; Fridland & Moore, 2014).

If the Central Inuit individuals had to learn each of the above survival skills for themselves and from scratch, making the same mistakes as their forefathers had done, they would die long before they could master the very substantial expertise that survival in such climates requires. They therefore survive only because each new generation learns from their older peers, and learns only the latest technological innovations.
Expertise accumulates over generations, as specialists in each new generation refine the skills learned by their forebears. As these skills develop, survival in hostile climes becomes more assured.
The form of culture in which communities build upon and extend the knowledge of their forebears is known as ‘cumulative culture’ (Galef, 1992; Tomasello, 1999; Richerson & Boyd, 2005). While our nearest cousins, the non-human great apes, are capable of some forms of cultural learning, no other great ape species comes close to the fidelity, range, or complexity of social learning of which humans are capable, none innovate so well, and none possess cumulative culture (Tennie, Call, & Tomasello, 2009; Laland & Galef, 2009; Moore, 2013b). An early estimate comparing six chimpanzee communities identified 39 culturally variant behaviours between chimpanzee groups (Whiten et al., 1999). While subsequent research has increased this number (e.g., Luncz, Mundry, & Boesch, 2012; van Leeuwen et al., 2012), the total number of culturally variant behaviours attributable to apes will inevitably pale in comparison to the countless such differences between groups of humans.

In addition to there being fewer culturally variant behaviours in chimpanzees than in humans, the skills that chimpanzees learn from their peers are much simpler. The most complex tool sets seen in groups of wild chimpanzees are sets of several sticks of different sizes and lengths used to dig up and break open underground honey nests (Sanz & Morgan 2007, 2009; Boesch, Head, & Robbins, 2009). By human standards, though, such tool sets are technologically unimpressive. A recent pastiche on The Onion website, entitled “Chimpanzees: Dumber Than All Humans”, put this point as follows:

	It is true that chimpanzees have been observed using tools, but their tools are little more than sticks.
[A] hammer is an infinitely better tool than a stick, and it is not even that good relative to other human tools.2 Given the greater demands of human social learning, in comparison to our nearest relatives humans have had to develop far superior skills for social learning – including the use of transmission mechanisms that are not shared by the other great ape species.
High fidelity learning mechanisms

Two social learning mechanisms have played a particularly important role in the development of cumulative culture in humans: imitation and pedagogy (Richerson & Boyd, 2005). These are thought to be significant because they are high fidelity modes of social information transmission. Fidelity is important because, when it comes to cumulative culture, if complex tool sets can be copied precisely from others, then less time will need to be spent reinventing techniques already mastered by others, and more time can be devoted to improving these techniques. Even in humans, skillsets learned from others are rarely reproduced entirely faithfully (Sperber, 1996). However, where skills are copied inaccurately within communities, the chances of sophisticated innovations being lost through imperfect reproduction are high (Sterelny, 2012). Imitation occurs when one agent observes another's behaviour, recognises the goal with which it was performed, and then reproduces the same action in pursuit of the same goal. Specifically, in reproducing the observed action, the agent should be concerned to replicate the technique performed by the original author as precisely as possible (Tomasello, 1999; Tennie, Call, & Tomasello, 2009; Fridland & Moore, 2014). Imitation differs from 'lower fidelity' social learning mechanisms like emulation – in which an observer recognises a behaviour as goal directed, and tries to realise the goal of that behaviour herself, but without having a particular concern to replicate the techniques that she observes. This difference can be crucial in the copying of complex behaviours. For example, if I see you trying to make a long bow by using a combination of driftwood, horn, antler, and sinew, but do so without paying particular attention to the ways in which you combine those objects, then even if my finished product
looks superficially similar to yours, important details of your craft may not have been copied adequately. With respect to language, failure to attend to and replicate the means may be even more catastrophic. Any member of a community who cannot reproduce the same arbitrary patterns of sounds as her peers may fail to acquire a functional vocabulary with which to communicate (Moore, 2013a; Fridland & Moore, 2014). For these reasons, imitation is likely to have played a very important role in human cognitive development in both ontogeny and phylogeny – and perhaps particularly in the development of language (Donald, 1991; Tomasello, 2008; Arbib, 2012). Nonetheless, when supplemented or even replaced by pedagogy, it becomes more powerful still. It was recently found that novices trying to recreate Oldowan stone tools produced significantly more viable tools when given verbal and non-verbal teaching instructions than when given the opportunity to learn only imitatively (Morgan et al., 2015).3 Using computational models of cultural evolution, others have argued that cumulative culture likely emerged only after imitative forms of learning were accompanied by the emergence of expert 'assessors' who were able to provide communicative feedback on the learned behaviours of their peers (Castro & Toro, 2004). Teaching therefore likely played a foundational role in the development of uniquely human forms of culture (Richerson & Boyd, 2005; Csibra & Gergely, 2009; Sterelny, 2012). For its emergence to be possible, though, a certain set of cognitive and social conditions would need to be in place. Characterising what these conditions and abilities are will help us to better understand where in human pre-history teaching likely emerged. Before this can be attempted, though, some preliminary characterisation of the nature of teaching is needed.
What is pedagogy?

For the purpose of answering the question of whether there is teaching in non-human animals, and thereby shedding light on the possible evolutionary roots of pedagogy, Caro and Hauser (1992) defined teaching as follows:

An individual actor A can be said to teach if it modifies its behavior only in the presence of a naïve observer, B, at some cost or at least without obtaining an immediate benefit for itself. A's behavior thereby encourages or punishes B's behavior, or provides B with experience, or sets an example for B. As a result B acquires knowledge or learns a skill earlier in life or more rapidly or efficiently than it might otherwise do, or that it would not learn at all.
(Caro & Hauser, 1992, p. 153)

In this way, they identify four conditions as necessary and co-sufficient for teaching: (1) the 'teacher' modifies its behaviour, (2) the modification occurs only in the presence of naïve (or appropriately inexpert) observers, (3) the modification does not benefit the teacher, and (4) it facilitates learning in the observer. While these criteria have been very influential, and have been adopted by many animal cognition researchers (e.g., Hoppitt et al., 2008; Thornton & McAuliffe, 2012), they are imperfect for the characterisation of teaching in humans (Csibra, 2007; Byrne & Rapaport, 2011; Moore, 2013b).4 A feature of human pedagogy that is not well captured on this definition is that teaching in humans characteristically takes the form of an intentional communicative act (Csibra, 2007; Moore, 2013b). In particular, it is a communicative act in which a teacher provides information that she intends to benefit the learner. While teaching need not always benefit the teacher, it may sometimes do so. This would be the case if by teaching you how to perform a task that
I would otherwise have to do myself, I can reduce my own workload. In that case, Caro and Hauser's (3) should be dropped and their (1) revised:

Teaching is (1) an act of intentional communication in which (2) a knowledgeable individual (the 'teacher') volunteers information for the benefit of one or more naïve individuals (the 'learners'), (3) with the intention of facilitating learning (e.g. the development of knowledge or skills) in the naïve individuals.

Here, of course, teachers and learners need be knowledgeable and ignorant only in respects that are relevant to the content of the teacher's message. A teacher in one context may be a learner in another. Additionally, the content of 'intending to facilitate learning' can be specified in conceptually undemanding ways. For example, it may be that the teacher has no articulate or general concepts of skill or learning, and that she intends only that her interlocutor come to grasp some fact, or perform some relevant task better. A further feature of teaching motivates another revision. Typically we think of pedagogical acts as connected to the acquisition of lasting skillsets and general truths about the world, and not just of occasion-specific items of knowledge (Csibra, 2007). For example, while the assertion that "Christopher is a cat, and therefore a carnivore" is something that could be taught, the utterance that "Christopher is in the garden" intuitively could not – despite its sometimes being consistent with (3) above. In order to distinguish between cases in which an agent teaches and cases in which an agent merely tells her interlocutor something, Csibra (2007) argues for a further modification:

(4) The information provided by the knowledgeable individual is generalisable.

While the communication of generalisable information is no doubt an important part of teaching, the above formulation is unsatisfactory.
Generic semantic knowledge ("Cats are carnivores") and general technical instructions ("To achieve E in situation S, do ϕ") are not the only forms of knowledge that can be taught. The details of historical events can also be taught, as can truths like "Caligula was a tyrant". Intuitively, episodic facts like these fail condition (4). The characterisation of (4) should therefore be revised in such a way that it licenses a distinction between teaching and telling.5 In a recent paper, Small (2015) suggests the following:

Successful teaching [in contrast to telling] results in the learner's initiation into a science, art, craft, or other kind of practice, the members of which are such as to become independent active principles in its maintenance and development, be it theoretical physics, the violin, cricket, or pottery.
(Small, 2015, p. 381)

That is, in teaching but not telling, a teacher initiates her students into a community of intergenerational knowledge that each of them becomes responsible for maintaining and expanding. This characterisation is appealing for the purposes in hand, because it suggests a conceptual link between teaching and cumulative culture. Like Csibra, Small thinks of the contents of teaching as general, in a way that the contents of telling are not. Thus, on his account, history is not just the practice of learning historical facts: "it is part of teaching history to teach students how to deal with historical evidence in various ways" (ibid., p. 382). While true, this point does not answer the question of why the details of
some but not all episodes can be taught. A possible answer is suggested by another feature of Small's view, though. If teaching is characterised as the initiation of others into a community of knowledge, then it may be that the identity of that community turns on its identification with some narratives and historical episodes but not others. Thus teaching might extend to incorporate not only generic statements and skills that form a platform for future innovations, but historical narratives relevant to the identity and history of the knowledge community.6 Since the boundaries of all communities are fuzzy, one could accept this point and recognise that the teaching-telling boundary is itself sometimes fuzzy. Small's view (developing ideas sketched by Rödl (2014)) also has implications for how we think of the status of teachers, relative to their peers:

The teacher does not speak as an individual subject, but . . . in a sense goes proxy for the whole science, art, or practice[.] . . . Though the actuality of the epistemic community for which the teacher speaks resides in the individuals and concrete interpersonal relations it comprises, it is not merely on behalf of those individuals that the teacher speaks, but on behalf of the science, art, or craft of which they are, as it were, the present custodians.
(Small, 2015, p. 384)

It is often true that teachers see themselves as guardians of traditions – be they evolving scientific practices, or historical, cultural narratives. However, at least for the purposes of an account of the consequences of teaching for human survival, the requirement that teachers think of themselves as speaking on behalf of such traditions seems arbitrary, and so ought not to be a pre-requisite for teaching.
Thus, one might think that content can be taught just when it contributes to the transmission of general skills, or communicates information that is important to the identity of a community, even if teachers do not think of themselves as guardians of communal traditions, or of a communal knowledge base. As a result, the fourth clause below does not fall under the scope of any thought process that the teacher must entertain. It simply serves as an external constraint on the sorts of communicative acts that we would class as teaching.

(4) The information provided by the knowledgeable individual is generalisable or relevant to the identity of the group of which teacher and students are members, and could serve as a platform for future insight or innovation by others.

Together, these conditions give the following preliminary analysis:

Teaching is (1) an act of intentional communication in which (2) a knowledgeable individual (the 'teacher') volunteers information for the benefit of one or more naïve individuals (the 'learners'), (3) with the intention of facilitating learning (e.g. the development of knowledge or skills) in the naïve individuals. (4) The information provided by the knowledgeable individual is generalisable, or relevant to the identity of the group to which teacher and students belong, and could serve as a platform for future insight or innovation by others.

This account is attractive because it incorporates both the cases of verbal instruction described by Morgan et al. (2015) and the cases of feedback modelled by Castro and Toro (2004). However, it excludes as pedagogical those cases in which a learner learns from an expert by observation but where there is no intention to provide valuable information, and also cases in which the
information that the teacher volunteers is too ephemeral. It also characterises a set of behaviours that is pervasive in humans, but seemingly absent from the behavioural repertoire of our nearest relatives, the non-human great apes (Lonsdorf, 2006; Moore, 2013b; Moore & Tennie, 2015). The emergence of this behaviour is likely to have been closely connected with the phylogenetic origins of cumulative culture, and the transition of ancestral hominin groups from possessors of a limited tool set to possessors of a far more impressive set of technologies. It is also likely to have played a foundational role in the dispersal of our earliest human ancestors out of Africa and into new territories for which our genetic inheritance had not prepared us to survive.
Simple and complex forms of teaching

A virtue of the above characterisation is that it is loose enough to incorporate a variety of functionally similar but importantly different behaviours – from simple demonstrations of how to use a tool, to cases closer to the teaching methods used in contemporary university lecture theatres. This is important, because if we think of pedagogy in the mode in which it currently features in Western classrooms, a further set of evolutionary questions arises. On this paradigm, a knowledgeable teacher stands at the front of a classroom and repeats for her students a series of related statements – historical narratives, scientific theories, mathematical proofs – that her own teachers previously taught to her. Here pedagogy takes the form of a set of propositions asserted as true by a pedagogue and passed down from one generation to the next. Teachers who conduct their own research may supplement inherited propositions with insights of their own. However, the teacher may herself have independently evaluated the truth of only a small subset of the propositions that she asserts, and will likely take for granted the truth of much of what she was herself taught. In turn, many of the propositions that she asserts will be taken as true by students simply because they have been asserted – either by the teacher, or in a set of texts with which the teacher supplements her teaching. Such teaching methods have no doubt contributed a great deal to human learning. However, from an evolutionary perspective, they are both cognitively demanding and potentially unstable.

Cognitive challenges to teaching

A first worry is that this form of teaching could not exist in the absence of language.
This would place constraints on the models of cultural evolution that could be prima facie credible, because it suggests that complex tool sets like those described by Boyd, Richerson, and Henrich (2011) could not have emerged until after language was in place. This worry dissipates if it is acknowledged that there could be forms of non-verbal or 'proto-linguistic' communication that emerged earlier than language. However, even here difficulties arise. On standard accounts of communication, even non-verbal communication requires acting with and understanding communicative intentions with a 'Gricean' intentional structure (Grice, 1957). This is problematic because on traditional readings of Grice, such communication requires very sophisticated socio-cognitive abilities – including possession of a concept of belief, the ability to make complex inferences about others' communicative goals, and the ability to entertain third and fourth orders of meta-representation (Sperber, 2000; see Moore, 2014, 2015, under resubmission for discussion). Since there is empirical evidence that even ten-year-old school children struggle to entertain fourth-order meta-representations (Liddle & Nettle, 2006), the ability of ten-year-olds to learn from teaching would seem to be inconsistent with traditional Gricean
Pedagogy and social learning
accounts of communication. Unless we think our early ancestors cleverer than educated, Western ten-year-old children, this worry would also generalise to make problems for an account of the emergence of teaching in phylogeny. This empirical evidence is consistent with an anxiety that Grice himself raised, which was that the cognitive abilities required by the account of communication he specified were "too sophisticated a state to be found in a language-destitute creature" (Grice, 1986, p. 85). In other words, while Grice's account of acting with and understanding communicative intent might serve as a foundation for the non-verbal behaviour of linguistic creatures, at least on standard accounts it looks like a poor candidate for explaining the communicative actions of creatures who have yet to evolve language. A number of responses have been proposed to the challenges presented by accounts of Gricean communication for cognitive development. One set of responses is to argue that potentially difficult communicative interactions have been made more tractable by the existence of adaptations for teaching and learning from teaching. Adaptations provide a convenient solution to cognitive challenges because they take over and automate difficult behaviours and processes, and thereby reduce the demands that they place on the resources of cognitively limited individuals. Sperber and Wilson (1995, 2002) accept the standard story about the meta-representational demands of Gricean communication. However, they argue that humans have inherited modular abilities for meta-representation that make the production and comprehension of communicative intentions easy. Moreover, they argue, we also have an adaptation for processing the content of speakers' communicative intentions, based on the relevance of what they say.
The Relevance detection module that they propose assigns content to speakers' utterances on the basis of what would be the most relevant interpretation, where this is calculated by the formula Relevance = Cognitive Gain/Processing Cost. More recently, Gergely and Csibra have proposed a second adaptation – which they call 'Natural Pedagogy' – to explain children's facility for learning from teaching (Csibra & Gergely, 2006, 2009; Csibra, 2010; Gergely & Csibra, 2006, 2013; Gergely, Egyed, & Kiraly, 2007). While the details of their proposal have changed as the claim has been developed, three claims are central:

(1) Communicative Intention: Human infants are hardwired to recognise certain 'ostensive' signals (including eye contact and infant-directed speech) as indicating that a speaker is acting with communicative intent (Csibra, 2010).

(2) Content Filling: On the basis of (1), when infants are addressed with ostensive cues, they seek to recover the content of the speaker's message. In particular they do this by setting out to infer the object to which the speaker intended to refer (Senju & Csibra, 2008), and by considering the ways in which that object might be relevant to their on-going interaction with the speaker.

(3) Generalisation of the Content: Additionally, because of the presence of ostensive cues, the addressee takes the speaker to be making a general claim about the object kind to which she is referring (Csibra & Gergely, 2009).

In principle, the Relevance and Natural Pedagogy proposals could work in unison. Thus, students might recognise that their teacher was acting with communicative intent on the basis of her addressing them with ostensive cues. The same ostensive cues would additionally trigger the operation of the audience's Relevance modules, and subsequently fill out the content of the teacher's utterance based on an interpretation of what she said. The children would then
interpret this content to be a general claim about the world, and generalise it. Presented with an ostensive demonstration of how to use a magnetic tool (a 'blicket') to sweep iron filings from the surface of a desk, children would recognise this, and take the teacher to be communicating to them an enduring, general claim about blickets – for example, their property of being magnetic – and not a claim about any particular blicket – like its location.7 There is some evidence that older children do indeed generalise taught information in the way that Gergely and Csibra predict. For example, Butler and Markman (2012) presented children with a scene in which an experimenter used a tool like the blicket described above, in a way that revealed its magnetic properties. The experimenter demonstrated the tool either ostensively, intentionally but non-ostensively, or accidentally (by dropping the tool onto magnetic objects). They found that four- (but not three-) year-old children spent longer investigating similar looking objects that were not magnetic when they had seen the object used in the demonstration condition than in either the intentional or accidental conditions. From this, Butler and Markman inferred that the older children in that condition had acquired expectations that the object should have essential functional properties (namely magnetism) that children in other conditions did not acquire. Unlike the older subjects, however, three-year-olds did not distinguish between pedagogical and intentional conditions. In a follow-up study (Butler & Markman, 2014), the same authors found that children of four and five years sorted similar looking objects into different groups depending on their functional properties (whether or not they were magnetic) when they observed those objects demonstrated ostensively.
By contrast, when they observed the same object properties through watching intentional but non-demonstrative and accidental uses of the object, they sorted on the basis of appearance, and not function. These studies suggest that pedagogical demonstrations (i.e. those accompanied by a demonstrator's use of ostensive cues) lead at least older children to assume that they are being taught about functional properties of the class of objects to which the demonstrated object belongs. However, if Natural Pedagogy is an adaptation, it is puzzling that the younger children in the first Butler and Markman study did not distinguish between intentional and pedagogical conditions. Heyes (2016) argues that if Natural Pedagogy is an adaptation, its effects on learning would be expected to be present from birth; and so behavioural evidence should be present in children at the youngest testable ages – around four months old. In this direction, Yoon et al. (2008) found that children of nine months who observed an experimenter ostensively pointing to an object were more likely to retain information about its identity; while those who saw an experimenter reach non-ostensively for the same object were more likely to retain information about its location. However, these subjects are still much older than the four-month-olds whose performance Heyes thinks would be needed as evidence for the existence of an adaptation. It may be that early in life, children simply learn a heuristic that when adults are communicating with them, they often communicate information about object kinds. Evidence that Natural Pedagogy is an adaptation is therefore currently lacking. While human children are undeniably very good at inferring communicative intentions, evidence for the existence of a Relevance detection module is also far from conclusive; and proponents of Relevance Theory make some claims that are simply inconsistent with existing empirical data.
For example, Sperber and Wilson's claim that Gricean communication requires fourth-order meta-representations (e.g. Sperber, 2000) is difficult to reconcile with evidence that ten-year-old children find communication easy but fourth-order meta-representation difficult. For these reasons, some authors (myself included) have sought to challenge the orthodox view that Gricean communication requires high orders of meta-representation (Gómez, 1994; Moore, 2014, 2015, in press-b; Sterelny, 2017). If we are right, and Gricean
communication requires less complex meta-cognition than Sperber and Wilson suppose, then this worry also dissipates, and Gricean analyses of communication may be appropriate for evolutionary accounts of communication after all. In addition to the possibility that children possess adaptations for learning from teaching, it may also be that parents possess adaptations for teaching. With respect to Natural Pedagogy, one recent study of parental teaching behaviour suggests that parents do change their behaviour when demonstrating different object properties to children. However, this data is not obviously supportive of the Natural Pedagogy hypothesis. In observations of mothers interacting with their children, Hoicka (2015) found that parents produce more ostensive cues when they are joking or pretending with their children than when they are teaching them. That is, they behave more ostensively precisely when they intend to communicate non-generalisable information. If ostensive cues are part of an adaptation for the learning of generalisable information, such behaviour seems puzzling. In another study, Brand and colleagues (2002) found that mothers of young infants (6–8 months, and 11–13 months) spontaneously adopted 'motionese' when demonstrating object functions to their children – exaggerating, simplifying, and repeating object movements in ways that would help children learn. They were less likely to use motionese when demonstrating the same tools to adults. However, the authors of this study do not claim that motionese constitutes an adaptation, since it may also be a culturally learned practice. Given these considerations, evidence for the existence of adaptations for teaching and learning from teaching is currently suggestive but inconclusive. However, even if modern humans possess such adaptations, their postulation would not resolve all issues connected to the emergence of teaching in phylogeny.
It is part of the nature of adaptations that selection pressure for them can emerge only when the abilities that they exploit are already in use. Thus, adaptations for learning from teaching could have emerged only after the establishment of teaching in communities of our ancestors. Adaptationist accounts are therefore by their nature ill-equipped to explain the teaching and learning interactions of our early ancestors. While their pedagogical interactions might have undergone processes of natural selection that would result in cognitive adaptations that facilitated teaching and learning from teaching in subsequent generations, teaching in earlier generations still might have proved to be cognitively challenging. This is a reason to develop deflationary models of the sorts of cognition that simple forms of teaching require.

Social challenges to teaching

Independently of worries about the cognitive pre-requisites of teaching, there are also grounds for thinking that at least some forms of teaching presuppose robust – and perhaps evolutionarily recent – forms of social structure. For if we think of teaching as involving linguistic communication, then a possibility emerges that deceptive individuals might use teaching not to help others, but to cheat them. For example, if we are members of competing groups, I might give you false information about the whereabouts of valuable resources, or faulty instructions about how to make the tools that would ensure your survival. If, by doing this, I can bring about your demise while safeguarding my survival, my deception could be adaptively advantageous. Where communities are small, and communicators are dependent on one another to survive, there may be little incentive for communicators to deceive one another. Furthermore, even where communicators are not co-dependent, deception may be an aversively risky strategy – if, for example, the prospect of re-encountering those one has wronged is high (Axelrod, 1984;
Sterelny, 2012). However, as communities grow, and others' fates are less inextricably linked, the motivation to deceive may increase. This is particularly so when social networks become large enough for interactions between strangers to be common, since this means that potential deceivers can predict that they will not encounter their victims again. In such communities, if communication is to be a stable mechanism for the transmission of valuable information, there will need to be some way of limiting the extent to which it can be used to deceive. One way in which individuals can deal with dishonest others is to avoid them, or ignore what they say. However, if the punishment for deception is not a group-wide enterprise, then peers unaware of who is dishonest may remain vulnerable to exploitation. Where the fates of group members are bound together, a group-led response will therefore be needed. One powerful remedy against dishonesty is the existence of social norms to regulate uses of communication. For example, it is widely agreed that some sort of knowledge norm governs the use of assertion (Brandom, 1983; Goldberg, 2011). Even if a norm exists that prohibits assertion of known falsehoods, though, this norm could be flouted by unscrupulous individuals looking to benefit themselves. For norms to be effective, they must be backed up by systems that enforce adherence to them. The practice of assertion consequently depends on a set of social practices the existence of which discourages others from violating the norm of assertion, and punishes those who do – for example, by ostracising, censuring, or discrediting liars. In the absence of such practices, those who were dishonest about the quality of the goods that they sold, or who used their own influence to undermine their competitors and gain advantages for themselves, would be allowed to thrive.
In a system in which dishonest practices were not curtailed, at least in large groups (where relative anonymity made productive lying possible) the practice of assertion – and perhaps, by extension, teaching – would break down. There are a number of ways in which the norm of assertion might be upheld. One possibility is via a system of reputation management (Enquist & Leimar, 1993; see also Dunbar, 1996, 2004; Engelmann & Zeller, this volume). If groups of individuals systematically warned others off interacting with liars, then it would pay to be honest. However, while there are ways of safeguarding the honesty of assertions within a community, policing is imperfect and expensive to maintain in terms of both effort and cognitive development. Only those who are sensitive to the possibility of being cheated or exploited could take the steps needed to protect themselves, and doing so may be hard work.8 As a result, the possibility of deception poses further constraints on the emergence of teaching in phylogeny: if the practice of assertion requires the existence of social networks that enforce honesty on the members of a community, then at least in large communities assertion-led teaching may have been unstable and unreliable prior to the development of appropriately complex forms of norm enforcement. A partial remedy to the possibility of deception would be the existence of individual mechanisms for guarding against epistemic exploitation. Recently Sperber and colleagues have argued that humans do not simply accept what they are told unquestioningly, but possess "a suite of cognitive mechanisms for epistemic vigilance" (Sperber et al., 2010, p. 359) that they use to evaluate the information that others communicate to them. Humans typically track both the honesty and reliability of informants, and evaluate the content of what they are told for consistency with their other beliefs.
Abilities like these are found not only in adults, but also in young children (see Harris & Corriveau, 2011). Sperber and colleagues argue that while some of these mechanisms are likely cultural, others are biological adaptations for survival in a communication-dependent environment (Sperber et al., 2010). However, as in the case of cultural safeguards against deception, any dependence upon the existence of individual adaptations
Pedagogy and social learning
for epistemic vigilance would place further constraints upon the stage of human history at which teaching could emerge.
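The reputation-based mechanism discussed above – that if groups systematically warn others off interacting with liars, it pays to be honest – can be made vivid with a toy payoff model. This sketch is purely illustrative (our construction, not drawn from Enquist and Leimar's analysis), and all of its parameter values are arbitrary assumptions: each lie pays more than each honest act, but a detected liar is shunned for the rest of her interactions.

```python
import random

def lifetime_payoff(honest, rounds=50, lie_gain=3, honest_gain=1, detect_p=0.5):
    """Payoff for one individual in a community that refuses to interact
    with known liars. Honest signallers earn a small benefit per round;
    liars earn more per round but risk detection, after which they are
    shunned and earn nothing for the remaining rounds."""
    total = 0
    for _ in range(rounds):
        if honest:
            total += honest_gain
        else:
            total += lie_gain
            if random.random() < detect_p:
                break  # caught: no one interacts with this liar again
    return total

random.seed(0)
trials = 10_000
honest_avg = sum(lifetime_payoff(True) for _ in range(trials)) / trials
liar_avg = sum(lifetime_payoff(False) for _ in range(trials)) / trials
# Even though lying pays three times as much per interaction, repeated
# interaction plus imperfect detection makes honesty pay more over a lifetime.
print(honest_avg, liar_avg)
```

The point of the sketch is only the structural one made in the text: reputation converts a one-shot temptation into a repeated game, so that the expected cost of lost future interactions can outweigh the per-interaction gains of deception, without anyone needing to punish liars directly.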
A solution: demonstrating the use of tools

If pedagogy presupposes the existence of both cognitively sophisticated individuals and socially developed communities in which norms of assertion are enforced, then it may be that teaching could emerge only relatively late in hominin phylogeny. In fact, though, there are forms of teaching that do not require this. As a result, we can posit the emergence of teaching in phylogeny earlier than might otherwise have been supposed. A cognitively simple form of teaching that is also relatively robust against the threat of deception is the case of demonstration (Sterelny, 2012; Moore, 2013b; Buckwalter & Turri, 2014). Suppose that I want to teach you how to make warm clothes by lining caribou skins with wolverine fur, and stitching the materials together using bone needles and sinew. I can do this just by engaging in the activity myself while soliciting your attention, in order to encourage you to attend to what I am doing (Moore, 2013b). To make particular aspects of the performance salient – for example, the alignment of certain materials in the preparation of animal skins – I could additionally exaggerate them, by performing them more slowly or deliberately (consistent with what Brand and colleagues (2002) called motionese). In such cases, the content of my communicative act may be nothing more than “Look!”, or “Do this!”, or “Do this like this!” – where the demonstratives would pick out either action sequences, or behavioural means that could be used to achieve the goals in hand. If a knowledgeable tool-user deliberately called the attention of a naïve observer in this way, and then demonstrated the use of her tool in order to help her interlocutor learn, she would satisfy clauses (1)–(3) of the characterisation of teaching given previously.
Additionally, if her student acquired generalisable information from this demonstration – like improved knowledge of how to make warm clothes – then the fourth criterion would also be met. The requirement here, then, is just that the teacher be able to perform a potentially simple communicative act with some sensitivity to the knowledge (or ignorance) state of her interlocutor, and with the intention to help rectify that ignorance. While sometimes ascertaining others’ knowledge states can be difficult, in other cases it will be less so. For example, when one is in the process of performing some manual task inexpertly, the failure to use the required tools properly will often be visually salient. Sterelny has argued that even simple forms of pedagogy may be deceptively complex:

   A demonstration for teaching purposes is rarely identical to a utilitarian performance. Demonstrations are slowed down and exaggerated; sometimes crucial elements are repeated. One point of demonstration is to make the constituent structure of a complex procedure obvious, for often that structure is not obvious in practiced, fluid performance.
   (Sterelny, 2012, p. 135)

Here one might worry that even simple forms of pedagogy therefore place strong demands on teachers’ ability to break down and represent the separate stages of the actions that they teach (Sutton, 2013). However, in the most basic cases of pedagogy the teacher need not break down and represent separately the parts of her activity (Moore, 2013b). Even by soliciting attention to non-stylised acts produced without any accompanying verbal or gestural commentary, teachers would facilitate knowledge transfer to their pupils, by encouraging pupils to attend to and reflect
Richard Moore
upon what they were doing. In such cases, teaching need not even require that teachers go to great lengths to facilitate their pupils’ learning. Thus, while Caro and Hauser (1992) originally suggested that teachers must pay a cost to teach, in the most basic forms of teaching, this cost need not be high. Demands on a teacher’s altruistic tendencies would also then be minimal.9 In addition to making only simple demands on a teacher’s prosocial tendencies and representational abilities, action demonstrations are honest in ways that verbal utterances need not be. While it is easy to use words to deceive, in the case of demonstration success can usually be evaluated readily (Sterelny, 2012). For example, a demonstration of how to use a hammer to crack a nut can be known to be reliable if, following the demonstration, the nut has been cracked. While deception might sometimes occur, if early forms of teaching took the form of demonstrations, teaching could have emerged even prior to the emergence of both epistemic vigilance and societal mechanisms for norm enforcement. In such communities, technology, language, and teaching might have developed together. First, by attending to and coming to appreciate the difficulties of their students in reproducing particular aspects of a performance, teachers might have learned how to break down the processes of their activity more carefully than before, in order to better demonstrate the required skills to their students. In doing so, they might thereby have gained new insights into their behaviour that paved the way for technical refinements and further technological developments. If teachers also innovated new utterance types to better discriminate between similar processes, the emergence of such forms of language might additionally help them to better understand the details of the practices in which they were engaged. 
The teaching interaction would therefore constitute a learning opportunity for teachers and pupils alike, thereby enabling further technological developments.
Conclusions

Teaching has played a fundamental role in the survival and expansion of the human species. Furthermore, since its most basic forms can be both cognitively and socially undemanding, its earliest forms may have arisen early in hominin phylogeny. In time, selective pressures may have given rise to adaptations that improved our ancestors’ abilities to teach and learn from teaching, and new socio-cultural practices for disseminating and assessing information would additionally have arisen. It is likely that such practices further advanced the role of teaching in human communities, making possible ever more sophisticated forms of cultural technology. However, we need not assume that these advanced teaching techniques were present in early human communities in order to explain how teaching contributed to the survival of the earliest human communities.
Acknowledgements

For helpful comments on drafts of this material, the author would like to thank Anika Fiebich, Karline Janmaat, Julian Kiverstein, John Michael, Henrike Moll, Michael Pauen, Anna Strasser, Natalie Uomini, and members of the Philosophy of Mind Colloquium at the Berlin School of Mind and Brain.
Notes

1 While authors differ in the relative importance that they attribute to adaptations, and individual and social learning strategies – contrast, for example, Pinker (2010) and Boyd, Richerson and Henrich (2011) – these are largely differences of emphasis.
2 The Onion, Horrifying Planet, episode 2, July 25th, 2012. https://www.youtube.com/watch?v=HfC9uNyhWo
3 In this study, imitation and emulation learners also accrued no clear advantage over those who used reverse engineering strategies. While it is often assumed that imitation evolved for the social learning of complex tools (Gergely & Csibra, 2006; Arbib, 2012), and was subsequently appropriated for use in communication, this finding suggests an alternative hypothesis. For even quite sophisticated forms of tool technology, reverse engineering may suffice to ensure faithful reproduction. The same is not true for arbitrary forms of communication, imperfect copies of which may fail to be usable. Evolutionary pressure for imitation may therefore have originated not for tool mastery, but for the acquisition of languages making use of arbitrary signs.
4 Another recent attempt to characterise the evolutionary origins of teaching (Kline, 2015) is less compelling, because it lumps together behaviours supported by very different underlying abilities and which appear in only distantly related clades. See Moore and Tennie (2015) for discussion.
5 Alternatively, one could opt for a more restricted characterisation of teaching that leaves behind the common usage of the word in favour of characterising a more homogenous class of acts. I see no reason to prefer this approach.
6 For a valuable discussion of the relationship between social learning and group identity, see Haun and Over (2013).
7 While I have argued that some forms of episodic information can be taught, the Natural Pedagogy adaptation is hypothesised to explain children’s learning of generalisable knowledge only.
8 Dunbar (1996, 2004) also argues that high orders of mental state attribution are necessary for gossip, since such abilities are a pre-requisite of Gricean communication. I have argued at length against this view (Moore 2014, 2015, in press-b).
However, it may be that high order mental state attributions are important for tracking deceptive intentions – and so play an important role in cheater detection practices – even if they are not necessary for intentional communication in general.
9 For discussion of whether and to what extent communication requires prosocial tendencies, see Tomasello (2008) and Moore (in press-a). While Tomasello argues that human communication is fundamentally cooperative, I argue that this need not be the case.
References

Anscombe, G.E.M. (1979). What is it to believe someone? In C. F. Delaney (Ed.), Rationality and Religious Belief (pp. 141–151). Notre Dame: University of Notre Dame Press.
Arbib, M. (2012). How the Brain Got Language: The Mirror System Hypothesis. Oxford: Oxford University Press.
Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.
Boesch, C., Head, J. & Robbins, M. (2009). Complex tool sets for honey extraction among chimpanzees in Loango National Park, Gabon. Journal of Human Evolution, 56(6), 560–569.
Boyd, R., Richerson, P. J. & Henrich, J. (2011). The cultural niche: Why social learning is essential for human adaptation. Proceedings of the National Academy of Sciences of the United States, 108, 10918–10925.
Brand, R. J., Baldwin, D. A. & Ashburn, L. (2002). Evidence for ‘motionese’: Mothers modify their infant-directed actions. Developmental Science, 5, 72–83.
Brandom, R. (1983). Asserting. Noûs, 17(4), 637–650.
Buckwalter, W. & Turri, J. (2014). Telling, showing and knowing: A unified theory of pedagogical norms. Analysis, 74(1), 16–20.
Butler, L. & Markman, E. (2012). Preschoolers use intentional and pedagogical cues to guide inductive inferences and exploration. Child Development, 83, 1416–1428.
———. (2014). Preschoolers use pedagogical cues to guide radical reorganization of category knowledge. Cognition, 130(1), 116–127.
Byrne, R. W. & Byrne, J.M.E. (1993). Complex leaf-gathering skills of mountain gorillas (Gorilla gorilla beringei): Variability and standardization. American Journal of Primatology, 31, 241–261.
Byrne, R. W. & Rapaport, L. G. (2011). What are we learning from teaching? Animal Behaviour, 82, 1207–1211.
Caro, T. M. & Hauser, M. D. (1992). Is there teaching in nonhuman animals? Quarterly Review of Biology, 67, 151–174.
Castro, L. & Toro, M. (2004). The evolution of culture: From primate social learning to human culture. Proceedings of the National Academy of Sciences of the United States, 101, 10235–10240.
Csibra, G. (2007). Teachers in the wild. Trends in Cognitive Science, 11(3), 95–96.
———. (2010). Recognizing communicative intentions in infancy. Mind & Language, 25(2), 141–168.
Csibra, G. & Gergely, G. (2006). Social learning and social cognition: The case for pedagogy. In Munakata and Johnson (Eds.), Processes of Change in Brain and Cognitive Development. Attention and Performance, XXI (pp. 249–274). Oxford: Oxford University Press.
———. (2009). Natural pedagogy. Trends in Cognitive Sciences, 13(4), 148–153.
Dennett, D. (1996). Darwin’s Dangerous Idea: Evolution and the Meanings of Life. London: Penguin.
Donald, M. (1991). Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition. Cambridge, MA: Harvard University Press.
Dunbar, R.I.M. (1996). Grooming, Gossip, and the Evolution of Language. Cambridge, MA: Harvard University Press.
———. (2004). Gossip in evolutionary perspective. Review of General Psychology, 8(2), 100–110.
Engelmann, J. M. & Zeller, C. (2017). Doing the right thing for the wrong reason: Reputation and moral behavior. In Kiverstein (Ed.), The Routledge Handbook of the Social Mind.
Enquist, M. & Leimar, O. (1993). The evolution of cooperation in mobile organisms. Animal Behaviour, 45(4), 747–757.
Fridland, E. & Moore, R. (2014). Imitation reconsidered. Philosophical Psychology, 28(6), 856–880.
Galef, B. G. (1992). The question of animal culture. Human Nature, 3, 157–178.
Gergely, G. & Csibra, G. (2006). Sylvia’s recipe: The role of imitation and pedagogy in the transmission of cultural knowledge. In Enfield and Levinson (Eds.), Roots of Human Sociality: Culture, Cognition, and Human Interaction (pp. 229–255). Oxford: Berg.
———. (2013). Natural pedagogy. In M. R. Banaji and S. A. Gelman (Eds.), Navigating the Social World: What Infants, Children, and Other Species Can Teach Us (pp. 127–131). Oxford: Oxford University Press.
Gergely, G., Egyed, K. & Király, I. (2007). On pedagogy. Developmental Science, 10(1), 139–146.
Goldberg, S. (2011). Putting the norm of assertion to work: The case of testimony. In J. Brown and H. Cappelen (Eds.), Assertion (pp. 175–196). Oxford: Oxford University Press.
Gómez, J. C. (1994). Mutual awareness in primate communication: A Gricean approach. In S. T. Parker, R. W. Mitchell and M. L. Boccia (Eds.), Self-Awareness in Animals and Humans (pp. 75–85). Cambridge: Cambridge University Press.
Grice, P. (1957). Meaning. In P. Grice, Studies in the Way of Words (pp. 213–223). London: Harvard University Press, 1989.
———. (1986). Reply to Richards. In Grandy and Warner (Eds.), Philosophical Grounds of Rationality: Intentions, Categories, Ends (pp. 45–106). Oxford: Clarendon Press.
Harris, P. & Corriveau, K. (2011). Young children’s selective trust in informants. Philosophical Transactions of the Royal Society B, 366, 1179–1187.
Haun, D. & Over, H. (2013). Like me: A homophily-based account of human culture. In P. J. Richerson and M. H. Christiansen (Eds.), Cultural Evolution: Society, Technology, Language, and Religion (pp. 61–80). Cambridge, MA: MIT Press.
Heyes, C. (2016). Born pupils? Natural pedagogy and cultural pedagogy. Perspectives on Psychological Science, 11(2), 280–295.
Hoicka, E. (2015). Parents’ communicative and referential cues distinguish generalizable and non-generalizable information. Journal of Pragmatics, 95(1), 137–155.
Hoppitt, W. J., Brown, G. R., Kendal, R., Rendell, L., Thornton, A., Webster, M. M. & Laland, K. N. (2008). Lessons from animal teaching. Trends in Ecology and Evolution, 23, 486–493.
Kline, M. A. (2015). How to learn about teaching, an evolutionary framework for the study of teaching behavior in humans and other animals. Behavioural and Brain Sciences, 38(e50), 1–71.
Laland, K. N. & Galef, B. G. (Eds.) (2009). The Question of Animal Culture. Cambridge, MA: Harvard University Press.
Legare, C. H. & Nielsen, M. (2015). Imitation and innovation: The dual engines of cultural learning. Trends in Cognitive Sciences, 19(11), 688–699.
Liddle, B. & Nettle, D. (2006). Higher-order theory of mind and social competence in school-age children. Journal of Cultural and Evolutionary Psychology, 4(3/4), 231–244.
Lonsdorf, E. V. (2006). What is the role of the mother in the acquisition of tool-use skills in wild chimpanzees? Animal Cognition, 9, 36–46.
Luncz, L. V., Mundry, R. & Boesch, C. (2012). Evidence for cultural differences between neighboring chimpanzee communities. Current Biology, 22(10), 922–926.
Moore, R. (2013a). Imitation and conventional communication. Biology and Philosophy, 28(3), 481–500.
———. (2013b). Social learning and teaching in chimpanzees. Biology and Philosophy, 28, 879–901.
———. (2014). Ontogenetic constraints on Paul Grice’s theory of communication. In Danielle Matthews (Ed.), Pragmatic Development in First Language Acquisition (pp. 87–104). Amsterdam: John Benjamins.
———. (2015). Meaning and ostension in great ape gestural communication. Animal Cognition, 19(1), 223–231.
———. (In press-a). Gricean communication, joint action, and the evolution of cooperation. Topoi: An International Review of Philosophy. doi: 10.1007/s11245-016-9372-5
———. (In press-b). Gricean communication and cognitive development. Philosophical Quarterly. doi: 10.1093/pq/pqw049
Moore, R. & Tennie, C. (2015). Cognitive mechanisms matter – But they do not explain the absence of teaching in chimpanzees. Behavioural and Brain Sciences, 38(e50), 32–33.
Morgan, T. J. H., Uomini, N., Rendell, L. E., Chouinard-Thuly, L., Street, S. E., Lewis, H. M., . . . Laland, K. N. (2015). Experimental evidence for the co-evolution of hominin tool-making, teaching and language. Nature Communications, 6, 6029, 1–8.
Pinker, S. (2010). The cognitive niche: Coevolution of intelligence, sociality, and language. Proceedings of the National Academy of Sciences, 107, 8993–8999.
Richerson, P. J. & Boyd, R. (2005). Not by Genes Alone: How Culture Transformed Human Evolution. Chicago: Chicago University Press.
Rödl, S. (2014). Testimony and generality. Philosophical Topics, 42(1), 291–302.
Sanz, C. & Morgan, D. (2007). Chimpanzee tool technology in the Goualougo Triangle, Republic of Congo. Journal of Human Evolution, 52(4), 420–433.
———. (2009). Flexible and persistent tool-using strategies in honey-gathering by wild chimpanzees. International Journal of Primatology, 30, 411–427.
Senju, A. & Csibra, G. (2008). Gaze following in human infants depends on communicative signals. Current Biology, 18, 668–671.
Small, W. (2015). Teaching and telling. Philosophical Explorations: An International Journal for the Philosophy of Mind and Action, 17(3), 372–387.
Sperber, D. (1996). Explaining Culture: A Naturalistic Approach. Oxford: Blackwell.
Sperber, D. (2000). Meta-representations in an evolutionary perspective. In D. Sperber (Ed.), Meta-Representations: A Multidisciplinary Perspective (pp. 117–146). Oxford: Oxford University Press.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G. & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359–393.
Sperber, D. & Wilson, D. (1995). Relevance: Communication and Cognition, 2nd edn. Oxford: Blackwell.
———. (2002). Pragmatics, modularity and mind-reading. Mind & Language, 17(1/2), 3–23.
Sterelny, K. (2012). The Evolved Apprentice. Cambridge, MA: MIT Press.
———. (2017). Language: From how-possibly to how-probably? In R. Joyce (Ed.), Routledge Handbook of Evolution and Philosophy. London: Routledge.
Sutton, J. (2013). Collaboration and skill in the evolution of human cognition. Biological Theory, 8(1), 28–36.
Tennie, C., Hedwig, D., Call, J. & Tomasello, M. (2008). An experimental study of nettle feeding in captive gorillas. American Journal of Primatology, 70, 584–593.
Tennie, C., Call, J. & Tomasello, M. (2009). Ratcheting up the ratchet: On the evolution of cumulative culture. Philosophical Transactions of the Royal Society B, 364(1528), 2405–2415.
Thornton, A. & McAuliffe, K. (2012). Teaching can teach us a lot. Animal Behaviour, 83, e6–e9.
Tomasello, M. (1999). The Cultural Origins of Human Cognition. Cambridge, MA: Harvard University Press.
———. (2008). Origins of Human Communication. Cambridge, MA: MIT Press.
Van Leeuwen, E.J.C., Cronin, K. A., Haun, D.B.M., Mundry, R. & Bodamer, M. D. (2012). Neighbouring chimpanzee communities show different preferences in social grooming behaviour. Philosophical Transactions of the Royal Society B, 279, 4362–4367.
Wade, N. (2006). Before the Dawn: Recovering the Lost History of Our Ancestors. New York, NY: Penguin.
Whiten, A., Goodall, J., McGrew, W. C., Nishida, T., Reynolds, V., Sugiyama, Y., . . . Boesch, C. (1999). Cultures in chimpanzees. Nature, 399(6737), 682–685.
Yoon, J.M.D., Johnson, M. H. & Csibra, G. (2008). Communication-induced memory biases in preverbal infants. Proceedings of the National Academy of Sciences, 105, 13690–13695.
3
CULTURAL EVOLUTION AND THE MIND
Adrian Boutel and Tim Lewens
Introduction

Much of what goes on in our minds is socially acquired: we learn it from other people. And those people need not be our parents. We can learn from other adults, our peers and even our own offspring. This complicates, to put it mildly, any attempt to do standard evolutionary ethology on humans. The normal sort of evolutionary model, which presupposes that traits, including behavioural ones, are represented in subsequent generations according to their bearers’ relative successes and failures in passing on genes, does not capture the full range of influences that determine the makeup of human populations.

Theories of cultural evolution (CE) propose that culture can nonetheless be usefully treated in evolutionary terms. A given generation does not make up its culture ex nihilo. Instead, the culture of one generation is informed by the culture of previous generations. But while many cultural items are retained by subsequent generations, new ones arise (some wholly novel, some refinements of old ones); some existing items become more widespread; and some fade away, rejected or forgotten. In short, culture has a crucial evolutionary feature: descent with modifications. CE claims that these processes of cultural change can be modelled and explained in ways that are, very broadly speaking, Darwinian, even if they do not resemble standard models of biological evolution – even if, in fact, they give relatively little role to cultural analogues of natural selection.

In this chapter we will sketch the outlines of what we take to be mainstream CE, and look at the extent to which it treats human minds as social phenomena. In one sense, of course, CE is clearly a social theory of the mind: its raison d’être is to deal with the ubiquity of human social learning. But in several other senses it is individualist.
Mainstream CE often treats the capacity for social learning, along with certain characteristic “biases” in how we learn from others, as genetically inherited adaptations, conferred on us by old-fashioned natural selection for traits that confer reproductive advantage on individuals. It distinguishes social learning from individual learning in a way that may understate how much of our learning owes itself to forms of sociality. And, not least, its models and explanations deal with social phenomena in an individualist fashion, by aggregating interactions between individuals and the cultural information in their heads. Over the course of this chapter we show how CE might accord greater influence to the social, without giving up its characteristic individualist methods.
It is not our goal to defend the cultural evolutionary project. (Nor, conversely, are we looking to defend, against CE, an anti-individualist or holist approach that denies that social phenomena can be understood in terms of features of individuals.) It is fair to say that CE has met considerable resistance, particularly from scholars in the humanities (for example, Ingold, 2007; Bloch, 2012).1 Nonetheless, the considerations discussed in this chapter offer some kind of defence against criticisms that CE is excessively individualist or reductionist.
What is cultural evolution?

Cultural evolutionary theory asserts that culture evolves. What does that mean? First, let us say what we mean by “culture”. We will take the explanatory target of CE to be “cultural information”, defined by Alex Mesoudi thus:

   A broad term [referring] to what social scientists and lay people might call knowledge, beliefs, attitudes, norms, preferences, and skills, all of which may be acquired from other individuals via social transmission and consequently shared across social groups.
   (2011, p. 3)

Note that cultural information in this sense is not limited to “information” in the sense of knowledge about the world; it includes false beliefs, non-belief states such as preferences and norms, and non-propositional items like skills.

Is this a reasonable notion of culture? Culture, of course, comprises more than information in people’s heads, however broadly construed. It includes performances, and other cultural behaviour; and it includes material cultural products or artefacts. Some cultural evolutionists do indeed deal with artefacts: Sterelny (2006) treats artefacts as memes; Richerson and Boyd (2006, p. 61) write of cultural information being stored in pots, since they can serve as models for production of new pots. But because cultural evolutionists generally deal with cultural information stored in heads – and because this book is aimed at the social mind – we will, too. A focus on (mental) cultural information by no means prevents a theory dealing with external cultural items, since they can be regarded as products of cultural information or as vessels for its transmission.

We will also talk of “cultural variants”. A cultural variant is really just an item of cultural information. The term “variant” is used because, for cultural evolution to work, cultural items have to be exposed to potential replacement.
(Some dominant variants, of course, may have no extant substitutes; at the other extreme, truly novel ideas may substitute for an absence of cultural information.) Cultural variants include anything from varying beliefs about matters of fact or religion, to techniques for preparing food, to attitudes towards sexuality. In general, CE treats the evolution of cultural information as a matter of the appearance, persistence, spread and disappearance of cultural variants. Cultural variants can be idiosyncratic – indeed, when they first appear they are bound to be. This means cultural information need not be “cultural” in the sense that it characterises the typical practices or beliefs of some group. Being present in one misfit’s head is enough. But, as we will see, CE can also offer explanations of “cultural” phenomena in this group-level sense, by explaining how cultural variants come to be held throughout a social group.

What is it, then, to say that cultural information “evolves”? In one sense, it is simply to say that it changes over time. But almost everyone thinks that. We would not want to count every historical or genealogical student of culture as a cultural evolutionist.
We might suggest, instead, that it means cultural change results from selection of one cultural variant over another. Natural selection is, of course, the iconic explanatory resource of biological evolutionary theory. Many cultural evolutionists do indeed give selection an important role in cultural evolution (e.g. Mesoudi et al., 2004; Richerson & Boyd, 2006; Blackmore, 2000). But it would be too restrictive to characterise CE as the theory of cultural selection. As we will see shortly, many cultural evolutionary explanations do not appeal to natural selection of cultural variants, but to biases in the transmission of cultural information. (Of course, any mechanism which leads to some cultural variants persisting and spreading, while others do not, can be seen as “selective” in an extended sense. But just as taking “evolution” to mean change over time makes everyone who believes in cultural change an evolutionist, that broad definition of selection would make anyone who offers explanations of cultural change a selectionist.) A more general way of describing CE, endorsed by some of its most prominent proponents, places stress on the role of “population thinking” in the evolutionary approach to culture. Exactly what “population thinking” amounts to is not always completely clear, but at least it involves the idea that large-scale events that take place over long stretches of time – in mainstream biology these might be processes of speciation or adaptation – are the aggregated results of many small-scale events that occur in the lives of individual organisms. Hence, modern evolutionary biology explains change over time using models that refer to individual traits and variations, and their respective effects, rather than to collective properties of species. 
Similarly, say cultural evolutionists, processes of cultural innovation, homogenisation, differentiation and so forth should also be understood by using models that aggregate events that occur in the lives of individual humans. Richerson and Boyd, for example, in their influential overview of the cultural evolutionary project, say that “Modern biology is fundamentally Darwinian, because its explanations of evolution are rooted in population thinking. . . . The heart of this book is an account of how the population-level consequences of imitation and teaching work” (2006, p. 6). This type of thinking is not unique to evolution – the general technique of explaining macro-level events by reference to the aggregation of micro-level events is shared with the kinetic theory of gases. (In this chapter we will often refer to CE models as “kinetic” because of this analogy, in preference to the less wieldy “population-thinking-based”). But population thinking is both characteristic of biological evolutionary theory, and the secret of much of its success. Population thinking was originally contrasted with Platonic approaches that treat species as types, with species members sharing essential properties or natures (Mayr, 1982). The advantages of population thinking over essentialism for a theory of change are fairly obvious. But population thinking also marks the right contrasts between CE and other extant approaches to the study of culture. The alternatives to population thinking there are either holism (more on which later), or thick descriptions of individuals. CE models describe individuals thinly (the term is from Ryle, 1971). Models may recognise differences between individuals – perhaps in the extent to which individuals divide their efforts between learning from their environments versus learning from each other, or perhaps with regard to whether the cultural variants they hold are likely to promote or detract from reproductive fitness. 
That is rather more individuality than the kinetic theory ascribes to gas molecules, of course, but there is no effort to give “thick” characterisations of the idiosyncratic practices or beliefs of individuals. People may be described in very thin terms as “being influenced by successful members of their groups”, when thicker characterisations might variously describe those same individuals as “demonstrating allegiance to a ruler”, “showing considered respect for the powerful”, “emulating a highly regarded expert”, “honouring an elder
statesman” and so forth. Such thick descriptions still describe individuals – thick description does not entail holism – but they do so in more detail than CE’s kinetic models, which are looking for properties that can be aggregated across multiple contexts. Psychology, the social sciences and the humanities are all generally concerned with people described relatively thickly (Geertz, 1977). After all, what we have breezily been calling “change in cultural information” involves almost every facet of human psychological and social life. A complete understanding of the transmission of cultural information involves accounting for all of: formal and informal individual learning, their capacities and imperfections; choice of role models; parental influence and adolescent rebellion; and fame in the cultural industries. Forces acting to change cultural items include: technological development, moral progress, the gifts of genius, inter-cultural contact, environmental change and the violent suppression of dissent. Any number of those factors and forces can interact in complex ways in any particular token event of cultural transmission or change. Obviously no complete theory or history of cultural change, in these senses, is going to come out of CE models. But biological evolutionary theory faces a similar problem: it must abstract from the unlimited detail of organismic physiology and ecology, to look for patterns that can be analysed and modelled in general terms. In the same way, cultural evolution aims to find general patterns in cultural change, patterns that can be analysed and explained without presupposing complete psychological or sociological accounts of human life. And, just as biological evolutionary theory does not purport to displace physiology or ecology, CE should not presume to displace thick psychological or sociological accounts of cultural change. 
For every cultural change that can be analysed in cultural evolutionary terms, one can also look for narratives that detail what actually happened, and explain it in its particular psychological and sociological context. CE does not seek to displace psychological and sociological explanations in favour of a new “evolutionary” force driving cultural change. Rather, it should be seen as offering high-level, relatively abstract, generalisations about such processes of change.
What cultural evolution isn’t – evolutionary psychology and memetics

CE should be distinguished from two other recent approaches that apply broadly Darwinian, kinetic ideas to the mind and its contents: evolutionary psychology, often known as the “Santa Barbara school” after the work of Leda Cosmides and John Tooby (notably, Barkow et al., 1992), and memetics. Evolutionary psychology (EP) asserts that many features of human psychology evolved as cognitive solutions to problems faced by our ancestors in the recent evolutionary past, in the “environment of evolutionary adaptedness” (EEA). According to EP, our species responded to those challenges in the traditional way of biological evolution, by the natural selection of genetically based adaptive traits. In our case, those traits were cognitive.2 The result is that humans are equipped with a “toolkit” of cognitive capacities and dispositions. The tools in that kit are domain-specific rather than general-purpose, and the toolkit itself is, for practical purposes, fixed by our genes. The effects of those tools may vary, for better or worse, as we encounter novel circumstances – but if it’s for worse, we are stuck with them until natural selection has time to catch up with the environmental change, or at least until the developmental pathways by which these once-adaptive genes express themselves are somehow disrupted or perturbed. This allows EP to offer explanations of modern human behaviour based on its putative advantages in the EEA. For example, Daly and Wilson (1988) argue that the disproportionate maltreatment of step-children can be explained by an adaptive disposition not to invest in genetically unrelated offspring. This approach can also explain behaviour that is not adaptive
Cultural evolution and the mind
in modern environments, such as the sweet tooth (Cosmides & Tooby, 1997). A taste for sugar-rich foods was adaptive when calories were hard to come by. Although residents of many countries face more risk from obesity than from starvation, this genetically based disposition has not had time to respond to the changed environment. This points to the most significant difference between EP and CE. CE’s appeal to non-genetic transmission of cultural information via social learning makes room for the comparatively rapid development of adaptive traits in response to changed environmental demands (Richerson & Boyd, 2006, p. 147). Even if some cultural variant evolved in response to cultural selection, we are not stuck with it if it becomes counterproductive or morally undesirable; an improved variant can spread as fast as – indeed can be constituted by – the realisation that the old one is counterproductive or undesirable. On the other hand, mainstream cultural evolutionists like Richerson and Boyd (2006, p. 230) take the capacity for social learning itself to be a genetic adaptation, like the tools in EP’s toolkit. The same is true of the characteristic “biases” of social learning: the dispositions governing from whom we learn. We have the learning capacities and dispositions that we do because they are the ones that tended to get us advantageous cultural information. By virtue of possessing those learning capacities, we can acquire dispositions and behaviours in a manner that could not be achieved by biological evolution. But while these cognitive adaptations are more general and flexible than those of EP, both find ultimate explanations in old-fashioned biological evolution. CE also should not be identified with memetics (Dennett, 1996; Blackmore, 2000; Dawkins, 1989). Memetics fits within our definition of cultural evolution; unlike EP, it can be seen as a version of CE rather than a competitor.
But memetics has a special commitment to treating cultural variants as replicators: roughly, mental analogues of genes. (Whence the term “meme”.) Replicators do two things: as the name suggests, they copy themselves faithfully; and, by doing so, they ensure their bearers in later generations resemble those in earlier ones. Since evolution by natural selection requires resemblance across generations, some evolutionists have argued that it also requires replicators (Hull, 1988). If that’s right, then cultural selectionism, at least, is committed to a cultural analogue of genetic replicators – that is, to memes. And what we should expect, then, from cultural selectionism is the spread of memes that produce behaviour in their hosts which leads to those memes’ reproductive success – as measured by the further replication of the meme. Such talk of meme-based selection can easily suggest a “selfish meme” picture, of mental parasites using human minds as hosts and vectors. That impression is occasionally encouraged by the memeticists’ own language, but it is not a necessary consequence. Memes do not jump from head to head like fleas or rhinoviruses. They are adopted by learners, who have (at least in principle) the ability to evaluate and reject them. Moreover, just like genes, cultural variants often (but certainly not always) do well by furthering the interests of their hosts. A more serious problem for memetics is that cultural variants do not, in fact, behave very much like genes. They do not always replicate themselves accurately: compared to genetic replication, cultural transmission is enormously unfaithful. This can be due to error, to deliberate revision by the recipient and to the fact that cultural transmission can be many → one. (Our views on cultural evolution have many influencers, but no one parent whose ideas we have mimicked.)
When accurate transmission does occur, it need not be due to individuals copying their role models; rather, as Dan Sperber (1996) has argued, it may be because quite general input from role models combines with shared background factors to, in effect, generate the original idea anew. An absence of cultural replicators would be fatal to memetics, but it need not be fatal to cultural selection. We will shortly look at other mechanisms for correcting errors in cultural
transmission. And it certainly would not be fatal to CE, which is committed neither to replicators nor, necessarily, to cultural selection.
Examples of cultural evolution

Now for some examples. In this section we briefly describe four, rather different, cultural evolutionary theories.

Gene/culture co-evolution

Adult mammals do not drink milk. Until a few thousand years ago, humans, like other mammals, lost the ability to digest milk after weaning. This made perfect sense when milk was only drunk in infancy; producing the necessary enzyme, lactase, in adulthood would have been wasted effort. Most adult humans remain lactose-intolerant today. But almost half of us, including most Westerners, have genetic mutations which allow continued production of lactase into adulthood. These mutations are very recent, in evolutionary terms: they seem to have coincided with the domestication of cattle. Their spread also matches the spread of dairy farming. Of course, we did not have these mutations so that we could drink the milk provided by our new cows. Mutations are not that smart. But the advantage offered by these mutations was significant: access to an additional – and renewable – source of food. This appears to have helped them spread through the cattle-farming population by natural selection (Holden & Mace, 1997). (Conversely, one might speculate, the ability to digest milk increased the advantage offered by cattle-farming, encouraging its spread in return. These need not be competing hypotheses about the “direction” of gene/culture influence; the two effects could have been mutually reinforcing. But there is good evidence that the adoption of dairy-farming preceded the spread of lactose-tolerance (Mace, 2009)). Even within the cattle-farming population, lactose-tolerance varies with the consumption of lactose: members of Mediterranean cultures, which eat cheese rather than drink milk, are less likely to be lactose-tolerant. Here cultural change, spread rapidly by social transmission, has influenced genetic evolution, which responded to a new, culturally constituted, selective pressure.
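The logic of that gene–culture feedback can be sketched in a few lines of code. The model below is a textbook haploid selection recursion, not anything specific to the lactase literature; the starting frequency and selection coefficient are illustrative assumptions, not empirical estimates.

```python
# Minimal haploid selection model for the spread of an advantageous allele
# once a cultural practice (here, dairying) creates a new selective pressure.
# Parameter values are illustrative placeholders, not empirical estimates.

def allele_trajectory(p0, s, generations):
    """Frequency of the favoured allele under simple haploid selection:
    p' = p(1 + s) / (p(1 + s) + (1 - p))."""
    freqs = [p0]
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        freqs.append(p)
    return freqs

# A new mutation at low frequency, with a substantial advantage once
# cattle-farming provides a renewable food source (s = 0.05 is a placeholder).
traj = allele_trajectory(p0=0.01, s=0.05, generations=300)
print(f"start: {traj[0]:.3f}, after 300 generations: {traj[-1]:.3f}")
```

Because the favoured allele's odds grow by a constant factor (1 + s) each generation, even a modest culturally created advantage carries a rare mutation to high frequency within a few hundred generations – rapid by genetic standards, though still far slower than cultural transmission itself.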
Cultural evolutionists have suggested that a similar co-evolutionary process has helped produce many of our genetically based psychological adaptations. The advantages offered by social learning – the rapid spread of good ideas; the possibility of complex, cumulative, intelligent responses to problems – helped to spread the genes responsible for our capacity to learn socially, as well as the biases affecting social learning. Cultural evolutionists thereby stress how cultural change can constrain or corral genetic change, just as much as genetically based psychologies influence the course of cultural change (Richerson & Boyd, 2006, p. 194).

The demographic transition

Our second example involves cultural change occurring despite, rather than in tandem with, natural selection. During the 19th century, birth rates among Italian women declined from around five children per woman to two. This was a cultural change; there is no evidence of a decline in physiological fertility, or a change in the physical environment. A genetic change reducing offspring numbers would have suffered a heavy selective disadvantage, since it would be passed on to fewer members of the next generation than more fecund alternatives.3 Cavalli-Sforza and Feldman (1981) argued that, for the disposition to have fewer children to persist and spread in subsequent generations, it must have been transmitted obliquely – that is,
Cultural evolution and the mind
from people other than the women’s own parents. In our terms, it must have been transmitted by social learning. This by itself is perhaps not a very radical conclusion. But Cavalli-Sforza and Feldman used quantitative models to draw more specific implications about how the oblique transmission would have had to operate. In particular, they showed that if women simply adopted the cultural variant (either large or small family size) that was prevalent locally, the small-family variant would still have been outcompeted. They would need to be disposed to adopt the small-family preference even if only a small number of their neighbours shared it. This is a classic CE account, in that it explains the adoption of a new preference in terms of a model of differential transmission of a cultural variant, thinly described. It abstracts away from such matters as the particular attraction of smaller family sizes for individual women, the sociological and psychological factors that explain why particular women took on the small-family variant while others were more conservative, who first noticed the advantages of restricting family size and why, and so on. It shows that adoption of such a fecundity-reducing preference will occur if and only if its attractiveness to peers is sufficiently strong – specified more quantitatively in the model than we have here – to overcome natural-selective disadvantage.

Conformist bias and the S-shaped adoption curve

When an innovation arises, even if it is obviously beneficial, its adoption tends to follow an S-shaped curve. Adoption starts slowly, but becomes rapid, and remains rapid until virtually everyone has adopted it; after which there is a slow conversion of the remaining holdouts.
This contrasts with the r-shaped curve one might have expected: given that the innovation is clearly beneficial, it should be adopted rapidly from the start – limited only by the speed of communication – with, again, a slow conversion of holdouts at the end (Henrich, 2001). Henrich argues that the shape of such S-curves – and in particular the “long tails” often observed at their bottom left, where initial adoption is slow until a “take-off” point is reached – is explained by conformist bias. Where alternative cultural variants exist, individuals face a choice as to which they should adopt. It would hardly be practicable to fully test the merits of each alternative; that would eliminate much of the benefit of social learning. But at the same time, one wants to adopt the best of the options. Conformist bias offers a heuristic, or proxy, for quality: “adopt, with disproportionately high probability, the variant held by the majority of people around you.” If people generally have a conformist bias, then adoption of a new innovation will be slow at first. Only those who receive strong signals of the innovation’s merits (perhaps via individual learning, or persuasion by other early adopters) will take it on. From there on, success breeds success; the more people adopt the innovation, the more other people will find it to be the most common in their vicinity, and so adopt it. The S-curve then remains steep until, at the top right, it encounters conservative holdouts – the converse of the early adopters – who have some reason to resist it despite its majority status (perhaps they encountered an atypical negative result, or perhaps they are just constitutionally conservative, so that the process can only advance funeral by funeral).
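Henrich's argument can be illustrated with a simple deterministic recursion of the kind standard in the CE modelling literature: a content bias of strength b favouring the innovation, plus a conformist bias of strength D that pulls towards the current majority. The functional form follows the familiar Boyd–Richerson-style model; the particular values of b and D are illustrative assumptions.

```python
def step(p, b=0.05, D=0.04):
    """One generation of cultural transmission: p is the innovation's
    frequency; b is a content bias favouring it; D is a conformist bias
    pulling towards whichever variant is currently in the majority."""
    return p + p * (1 - p) * (b + D * (2 * p - 1))

def generations_to_reach(target, p=0.01, **kwargs):
    """Generations needed for the innovation to reach a target frequency."""
    g = 0
    while p < target:
        p = step(p, **kwargs)
        g += 1
    return g

# The "long tail": climbing from 1% to 10% takes far longer than climbing
# from 10% to 50%, because conformity works against a rare variant.
t10 = generations_to_reach(0.10)
t50 = generations_to_reach(0.50)
t90 = generations_to_reach(0.90)
print(f"to 10%: {t10} generations; 10%→50%: {t50 - t10}; 50%→90%: {t90 - t50}")
```

With b only slightly greater than D, conformity almost cancels the innovation's intrinsic appeal while it is rare, producing the long lower tail; once the innovation becomes common enough that conformity switches from hindrance to help, adoption accelerates sharply.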
This sort of modelling abstracts away from the details that would be relevant to a complete history of the adoption of some innovation – what the innovation is, its genesis, the reasons it is useful, the social and psychological features that distinguish adopters from holdouts, and so on. Nonetheless, the innovation-agnostic conformist-bias model explains an equally general aspect of adoption: its shape over time. Conformist bias also offers an answer to the question raised above in relation to memetics: how, given the error-proneness of cultural transmission, do cultural items manage to persist in
more or less the same form across generations? In exclusively parent–offspring transmission such as genetic inheritance, high error rates would pose a severe problem. But if members of the population adopt the variant that is most common among their neighbours, then idiosyncratic variations resulting from error will simply disappear after one generation – unless, of course, some clear advantage leads to their being picked up more widely. Transmission need only be faithful enough to preserve the majority status of the most common variant (Henrich & Boyd, 2002). This, in turn, offers an explanation of conformist bias itself. Given error-prone copying, cultural information can persist only if there is an error-correction mechanism such as conformist bias. The conformist bias is part of what makes possible the advantages of cultural information. A disposition to learn from the majority, then, will be favoured by natural selection for the same reason as, and alongside, the capacity to learn from consocials itself. Conformist bias is not the only mechanism that could play that stabilising role. Sperber (1996) puts forward an alternative involving what he calls “attractors”. The notion of an “attractor” is an abstraction which gestures towards the reasonable stability, for some reason or another, of some type of cultural variant. A particular type of variant might be stable because it is the more-or-less inevitable result of constrained learning processes, or because it is the more-or-less inevitable result of manipulating materials that are subject to physical constraints, or simply because it is an objectively good solution to a common problem. Most transmission errors would produce deviations from the local attractor; but the relevant constraints retune the transmittee’s information back to the attractor. Conformist bias and attractors are compatible explanations; both could be operating in a population simultaneously. 
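A minimal sketch can make the error-correcting point concrete. Here conformist transmission is idealised as “sample three models and adopt the majority variant”, and copying errors flip the adopted variant with a fixed probability; both simplifications, and the particular error rate, are assumptions for illustration.

```python
def unbiased(p, e):
    """Copy one randomly chosen model, with copying-error rate e."""
    return p * (1 - e) + (1 - p) * e

def conformist(p, e):
    """Sample three models and adopt the majority variant, then apply
    the same copying-error rate e."""
    p_maj = p**3 + 3 * p**2 * (1 - p)   # probability a 3-model majority holds A
    return p_maj * (1 - e) + (1 - p_maj) * e

p_u = p_c = 0.9          # variant A starts as the clear majority
for _ in range(200):
    p_u = unbiased(p_u, e=0.05)
    p_c = conformist(p_c, e=0.05)
print(f"unbiased copying: {p_u:.3f}   conformist copying: {p_c:.3f}")
```

Under unbiased copying, errors accumulate until the two variants are equally common; majority copying pulls each generation back towards the common variant, so even a 5% error rate barely dents its frequency.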
Importantly, neither is regarded by its proponents as a selective force. An individual of great cultural prominence might, because of that, be the target for emulation by a vast number of other individuals. But suppose our prominent individual has beliefs and practices that are exceptionally difficult for others to acquire, because they are so hard to learn. Here the action of a set of attractors (a group of easy-to-acquire variants) may outweigh the action of selection (which, on one understanding, reflects the propensity of an individual to serve as a model for emulation). Transmission biases, such as conformist bias, are non-selective in a more nuanced sense. The distinction between selection and bias here is adapted from genetic evolution. A gene does well by natural selection if it produces phenotypic properties which encourage the survival and reproduction of its organism. It does well by biased transmission if it worms its way into more than its fair share of the organism’s offspring – by, for example, meiotic drive. Analogously, a cultural variant does well by selection if having it leads to its bearer being in a position to influence more people; but it does well by transmission bias if people preferentially pick it from among the role models available to them. For example, a preference for a new tool over an older one does well by cultural selection if users of the new tool become more visible to others for some reason. (That reason might be natural selection, if the new tool confers an advantage in survival and reproduction.) The preference does well by transmission bias, by contrast, if potential transmittees choose to adopt the new tool because its use is easier to learn – or, indeed, because they decide it is a better tool on the merits (“content bias”).

Cultural group selection

Richerson, Boyd and many of their collaborators have argued that a process they call “cultural group selection” has been a significant factor in human evolutionary history.
Roughly speaking, this is a process in which one type of cultural variant succeeds with respect to another,
because of the benefits the variant in question confers in processes of inter-group competition. Darwin himself endorsed one version of the cultural group selection hypothesis when he argued that the emergence of sympathetic feelings for other members of our social groups could be explained by the advantage caused by those feelings in “tribal” warfare. Richerson and Boyd (2006, inter alia) – as well as Sober and Wilson (1998) – have suggested that the 19th-century expansion of the Nuer at the expense of the Dinka constitutes a clear example (a “smoking gun”, in Sober and Wilson’s words) of cultural group selection in action. Their main source for their interpretation of this episode is the anthropologist Raymond Kelly, who relies, in turn, on classic ethnographic work by Evans-Pritchard. As Kelly (1985) describes it, “the Nuer increased their territorial domain four-fold . . . during the period from about 1818 to 1890”: this expansion was achieved by raiding Dinka lands, and then occupying those lands once the Dinka, wishing to avoid further attack, fled elsewhere. Richerson and Boyd follow Kelly in ascribing the Nuer’s relative success to their social organisation. The Nuer grouped themselves into “tribes” based on kinship, which produced larger units than the Dinka’s village-based groupings. Those kinship- and village-based units were also the units of military organisation, so, although the two groups were peers in military technology and used similar tactics, the Nuer found themselves with the advantage of numbers. The result was that the Nuer, and with them Nuer culture, gradually displaced the Dinka culture in areas formerly occupied by Dinka. Boyd and Richerson stress that, in their view, cultural group selection need not involve the killing of members of one group by members of another. They follow Kelly in arguing that many Dinka were, in fact, assimilated into expanding Nuer communities. Kelly (1985, p.
65) stresses that “the key features of Nuer social and economic organisation that were instrumental to territorial expansion remained unaltered by the massive assimilation of Dinka and Anuak tribesmen.” By contrast, Boyd and Richerson take gene-based group selection to require differences in individual survival rates, such as would be provided by killing members of losing groups. Even though most Dinka individuals survived their battles with the Nuer, the processes by which these individuals adopted Nuer ways meant that groups with Nuer culture came to predominate over groups with Dinka culture (Bell et al., 2009). Boyd and Richerson do not focus on the “Nuer expansion” because of its intrinsic interest. Their hope is that if they can establish the reality of cultural group selection in this comparatively recent and well documented case, they boost the credibility of their much more ambitious claim that, in early human evolutionary history, group selection driven by competition between “ethno-linguistic tribes” played an important role in explaining human pro-sociality and altruism. If they are right about this, then some very basic features of human social life derive from cultural evolutionary interactions between social groups. How widespread this sort of straight-up cultural competition between groups might be is disputable. The significance of the Nuer/Dinka example is not clear-cut, in part because of worries about whether the supposed “assimilation” of displaced Dinka into Nuer communities truly involved the preservation of Nuer cultural traits, as opposed to a form of negotiated cultural blending of Nuer and Dinka ways. The more general notion that early hominin evolution was characterised by competitive interactions between “ethno-linguistic tribes” is also contentious. 
For example, Richerson and Boyd cite a study by Birdsell from 1968 in support of their idea that populations in the Pleistocene were divided into “ethno-linguistic” units with between 500 and 1,500 people (Birdsell 1968). Birdsell, in turn, used Norman Tindale’s earlier work on aboriginal people in Australia as the basis for his inference about likely Pleistocene social organisation (Tindale 1940). But Tindale’s interpretation of his data is open to challenge. Berndt (1959), for example, argued that what were primarily names for different aboriginal dialects were taken by Tindale to name “tribes”, in the sense of important units of social
organisation. And Berndt claimed that various important aboriginal social groupings tended to contain shifting collections of different dialect speakers. In effect, Berndt denied that Australian aboriginals were organised into “ethno-linguistic tribes” at all.4 So the cultural group selection hypothesis is contentious. Nonetheless, it offers an intriguing evolutionary explanation for humans’ conspicuous tendencies to cooperate with people who are not members of their families, and whom they have little chance of encountering on a regular basis.
Cultural evolution and the social mind

To what extent does CE offer a theory of the social mind? Motivated as it is by the existence of social learning, it could hardly fail to be social to some degree. But how social is it? What implications does it have for our understanding of mind?

Social explanations of mental contents

CE is, first and foremost, a theory about how some of the information (in the broad sense) contained in human minds got there, and about why humans became the sorts of creatures able to acquire information in these ways in the first place. It claims that such information has (a) social sources – we acquire information from other people – and (b) social causal explanations – what we end up learning is explained by facts about the social distribution of cultural variants, not just by our own individual investigations. The implications of this should not be overdrawn. As we have said, CE nonetheless treats cultural information as located in the heads of individuals. To borrow a term from Robert A. Wilson (2005), CE treats cultural information as “socially manifested” – that is, as a feature of individuals that is causally dependent on social input, rather than as a collective property of a social group. One thing CE is not, for example, is a social theory of representational content. It is neutral on how the content of a given informational state is fixed. In particular, it does not imply a social-externalist view of mental content (Burge 1979; 1986). On such views, the meaning of certain concepts is, in effect, delegated to other members of society. In Burge’s prototypical example, a patient complaining to their doctor of “arthritis in my thigh” succeeds in referring to arthritis, even though their complaint demonstrates they do not know what arthritis is (i.e., a disease of the joints).
That the patient refers to arthritis anyway – and so makes an incorrect claim about having arthritis, rather than a true one about whatever their actual problem is – is because they defer to relevant experts for the term’s meaning. CE is friendly to such externalist views, of course, in that the social acquisition of information provides ample opportunities for conceptual deference. But it does not entail that content is determined this way. A socially acquired concept might nonetheless have descriptive content, via a definition learned along with the concept. Or it could refer causally: causal theories of reference allow the relevant causal chains to run through other people (Kripke 1980, 91). (Indeed, if cultural information can be constituted by deferentially possessed concepts, that complicates things slightly for CE. CE models tend to assume that cultural information, if it has been passed on faithfully, is equivalent in its effects for transmitter and receiver (modulo any other differences between them). But if you possess the concept yew tree only deferentially, then learning from a skilled bowyer that yew wood makes the best bows won’t help you make better bows. Cf. Putnam, 1975.)5 CE also does not imply a social theory of individual decision: it does not favour social structure over individual agency as the determinant of people’s actions. By offering social
explanations for individuals’ mental states, and thereby of their behaviour, CE certainly provides material that a structuralist in social science might exploit. CE is clearly compatible with sociological and anthropological notions such as habitus (Bourdieu, 1977) and Vygotskyan scaffolding (Vygotsky, 1962; compare Sterelny’s use of “scaffolding” in his 2003). But CE does not thereby displace individual decision-making, any more than any causal story about the mind does. Even CE theories that appeal to conformist bias – which involves individuals deferring to the group consensus – allow that individuals can and do choose to buck the consensus. (Otherwise the technology adoption curve would not be S-shaped, but flat along the x-axis.) A fortiori for “content bias”, which is just a formal way of saying that some cultural variants are adopted because the adopter perceives them as attractive (Richerson & Boyd, 2006, p. 156). More generally, CE offers abstract, thinly described models of how and when cultural variants are transmitted. It is entirely compatible with individual choice serving as the specific mechanism for, or proximate cause of, such transmission. All that said, mainstream CE’s account of social learning does have implications for the way individuals come to adopt cultural variants. It is not just that people learn from other members of their social groups. There are ways of doing that that are hardly social at all. One can treat other people as data, observation of which yields potentially useful information about techniques and strategies for achieving the sorts of goals people have. Or one can learn from them via testimony, including teaching: one can listen to their expressions of cultural information, and adopt their ideas or techniques, or not, according to their testimonial value and persuasive power.6 “Social learning” of those varieties is consistent with a thoroughly individualistic, rationalist view of human psychology.
But social learning for the cultural evolutionist is more social than that. According to theorists like Boyd, Richerson and Henrich, not only our capacity for social learning, but also the dispositions and biases that govern from whom we learn, are genetically inherited products of natural selection. These capacities and dispositions are not learned, or adopted based on a rational assessment of their heuristic benefits. Rather, like the EP toolkit, they are part of our genetically inherited psychological makeup. These versions of CE offer a view of human psychology as dependent on others’ minds in a way that is independent of, and parallel to, individual learning (Henrich, 2001). Let us stress again that this does not remove individual choice or agency from the picture. The S-shaped curve still rises. People can and sometimes do choose to reject the dominant cultural variant – just as, even if Daly and Wilson are correct, people can choose to treat their step-children kindly.

Methodological individualism

Although CE traffics in social learning, social facts and social explanations, its kinetic approach is methodologically individualist, in the same way game theory or microeconomics is individualist. It offers explanations of social-level phenomena, but it does so by aggregating individuals, their properties and the interactions between them. Among the consequences of these individual interactions, there can be found social-level phenomena, like the S-shaped adoption curve. Where those social-level phenomena appear in real life, CE takes its models to have provided an explanation of them. But it is a reductive explanation, in the sense that it appeals only to individual-level explanans; no exogenous social-level phenomena are involved. Even CE treatments of group selection look at the aggregated effects of individual interactions in groups with different compositions.7
So while CE is inevitably a social theory of the mind, it is not a thoroughgoing one. In the following sections we will discuss some pressures that may push CE further in the social direction.
Cultural evolution and the even more social mind

Mainstream CE presupposes a distinction between social and individual learning. In this section, we will consider some reasons to question this distinction – each of which results, in different ways, in more learning counting as social.

Learning to learn socially

Many cultural evolutionists take the evolution of social learning itself to be a case of gene/culture co-evolution, like lactose-tolerance. A capacity for social learning would have conferred an obvious advantage on our ancestors, in that one could take advantage of one’s neighbours’ clever ideas without having to discover them for oneself (Richerson & Boyd, 2006, p. 144). On this view, many of our adaptive behaviours are produced by a cumulative process of individual discovery coupled to social learning. But the mechanisms that enable this adaptive learning are still understood as the results of natural selection acting on genetic variation. Recently, Cecilia Heyes and collaborators have argued that the capacity for social learning does not involve a separate cognitive mechanism. They note that the capacities for individual learning and social learning covary across species, which suggests they have a common underpinning (Heyes, 2012a, p. 194). What is special about humans, they argue, is not a new learning mechanism, but a tuning of the inputs to the general mechanism, resulting in forms of selective attention to the activities of other agents (Heyes, 2012b; Heyes, 2012a). This by itself does not greatly threaten the mainstream CE picture. The retuning of inputs – the shaping of perceptual, attentional and motivational systems that direct us to the activities of other people – might still be due to genetic change. But Heyes suggests that such tuning, too, may be learned (Heyes, 2012a, p. 199); for example, she notes that parents often reward imitation by their infants (Ray & Heyes, 2011, p. 100).
One obstacle to this picture is presented by the capacity for imitation itself. Imitation is a form of social learning that is often taken to pose particular difficulties for a general learning mechanism. Imitating others is a harder task than it may sound. The imitator needs to match observed actions to intentional muscle movements, but it is not obvious how to do that: the sight of an action performed by another does not resemble the way it feels to produce it. A general learning mechanism needs to overcome this “correspondence problem” in order to get imitation started, but it is not obvious how it could – particularly for bodily movements that are hard to observe as one makes them, such as smiling or the articulation of phonemes. The result is, in effect, a poverty-of-the-stimulus argument that imitation must be an unlearned social learning mechanism. Heyes has an answer to this problem, too. She argues that the link between internal and external aspects of movements can be, and is, learned. The learner can infer correspondence between sensory and motor representations for simple units of movements through a combination of observing their own visible actions, using artificial supports such as mirrors and observing others’ movements when engaged in a common task, where observer and observee are doing the same thing. Once correspondences are established for simple or basic actions, more complex movements can be imitated by piecing together their component movements.
Cultural evolution and the mind
Heyes’s view is not yet orthodoxy, but it does have meaningful empirical backing. It helps to explain why chimpanzees can be, but also need to be, trained to imitate by humans; why newborn human infants imitate very little beyond perhaps tongue protrusion; and why birds seem able to imitate behaviours that their flocks engage in collectively (Heyes, 2001). The upshot for our purposes is that social learning may itself be, at least in part, a social product. We learn how to imitate by watching our consocials; and so, at least in that respect, we learn from our consocials how to learn from our consocials. If Heyes is right, then social learning, and so CE, is even more social than one might have thought. What does this mean? On the one hand, more of our learning capacity turns out to be social in origin: the faculty of social learning need not be a genetic innovation, but may have been acquired by learning from our fellow humans. On the other hand, to the extent that we acquire our social learning capacities via individual learning, CE is more compatible with an individualistic psychology of learning. Social learning is, at root, just individual learning from other people. And yet that is not to say that we do not have evolved capacities and dispositions specifically for social learning – just that their evolution will have been cultural. At any rate, Heyes’s work does not require radical modifications to the CE framework already discussed. Even if social and individual learning invoke the same underlying learning mechanisms, one can still distinguish between uses of those mechanisms to learn from others and from the environment. And their social uses can potentially be given the same “ultimate” evolutionary explanation whether they are genetically or socially acquired – even if the proximate, mechanistic and developmental explanations of their presence will be different. 
Social transmission via individual learning

The distinction between social and individual learning has also been brought into question by developments in evolutionary theory more widely. CE differs from strictly genetic accounts of evolution by positing interaction between different modes of inheritance – genetic and cultural transmission (as in gene/culture co-evolution), or individual and social learning. Now, theories like Developmental Systems Theory (Griffiths & Gray, 1994; Oyama et al., 2003) move beyond interactionism by questioning whether it makes sense to speak of distinct “channels” of inheritance at all. As an example of a threat to the distinction between individual and social channels, we will consider the notion of niche-construction (Odling-Smee et al., 2003). Niche-constructionists stress the ways in which organisms alter their own environments – building dams, aerating soil – and so affect their own evolutionary circumstances. In the context of learning, the moral is that humans live in environments that have been affected in many ways by other humans’ activities. The effects of that social context can influence even the most isolated individual investigation. For example, individual learning may be aided by what psychologists call “stimulus enhancement”, where someone else’s action makes whatever it is they’re interacting with more interesting, encouraging the learner to investigate and so discover for themselves what the other person knows. Similarly, in “local enhancement” the learner is simply attracted to the place where other people are or have been, after which they discover whatever is to be discovered there. A dog might investigate a doggy smell lingering nearby, and thereby discover that a certain plant is poisonous. (See, generally, Heyes, 1994; Heyes et al., 2000.) Even more ambiguous cases are possible in humans.
Consider a scenario of deliberate, but indirect, teaching: a child is given a toy, manipulation of which is designed to teach the child some principle of mechanics. If the child duly learns the principle, is that social learning? On
Adrian Boutel and Tim Lewens
the one hand, the cognitive mechanisms deployed by the child are those of individual discovery; the cultural information gained was not expressed by the teacher, nor did the child learn it from observing the teacher’s behaviour. On the other hand, the situation was set up as a way of inducing the child to learn what the teacher knew. But for the teacher’s knowing the principle, the child would not have learned it. Intentional teaching need not be involved. People set up environments to live in that are, generally speaking, people-friendly. We do not, for example, tend to cultivate highly poisonous plants near our houses. If we allow our children to roam free in our non-poisonous gardens, they may form (by strictly individual cognitive processes) preferences for the taste and appearance of the foods with which they become familiar. This may lead them to plant non-poisonous plants, just as their elders did. Again, their preference has an obvious social cause in their elders’ preference; but it is produced by individualist mechanisms. Kim Sterelny (2012) has suggested that even our leavings may aid individual learning. An apprentice exposed to the detritus – tools, raw materials, off-cuts – of their employer’s workshop may gain valuable information by experimenting with them, in ways that would be much harder to duplicate in an unstructured environment. Sterelny suggests that the opportunities for learning afforded by such environments may be how social learning began, since one can be attracted to such environments by a simple preference for being around one’s elders. Such informational inheritance can then be facilitated by other behavioural changes, such as not driving the kids away from one’s stuff. This sort of consideration suggests, again, that more learning is social than one might expect.
In this case, learning that is, cognitively speaking, individual – in that it does not involve observing others’ behaviours – is nonetheless a means to social transmission of cultural information between individuals. Again, however, this does not threaten the overall CE project.8 If anything, it expands its scope, by relaxing an inessential commitment to a particular proximate cause, or mechanism, of social transmission. Because, as we have noted, CE operates at a level of abstraction removed from the mechanics of transmission, it does not require a sharp distinction between social and individual “channels” of information. It is enough that information flows from existing holders to people who aren’t their offspring, whatever the channel.9

Power and the kinetic

This section has so far focused on CE’s distinction between individual and social learning. We now turn to its methodological individualism, and, more specifically, its explanation of social phenomena using “kinetic” models which aggregate properties of, and interactions between, individuals. There are, plainly, things in society besides individuals and their properties; there are institutions, organisations and power relations. Unless those things can be captured in aggregable descriptions of individuals and their psychological properties, CE’s kinetic models will not be able to handle them. But it is not at all obvious that they can be so reduced (Lewontin, 2005; Lukes, 2005). If CE can handle such social-level phenomena, perhaps after some friendly modification, we take it that it should. The alternative is to restrict the domain of CE explanations to contexts in which institutions and power relations can be set aside. This could be done by restricting CE to social groups in which there are no interesting power relations – perhaps it applies only to early hominin hunter-gatherer bands, which appear to have been fairly egalitarian (Knauft et al., 1991; Boehm, 1999).
That’s not nothing: it is likely to cover some of the contexts in which social learning originally evolved. But it is well short of what CE proponents might have hoped
for. A second alternative is to treat such social phenomena as part of the fixed background against which CE models evaluate the success of cultural variants. This would at least mean that CE had something to say about cultural change in modern societies, but it would mean those social phenomena themselves, and so their impacts on cultural change, could not be given CE explanations. They would be in the same position vis-à-vis CE models as climate. Whether kinetic models can handle such social phenomena depends in part on what sort of thing social phenomena are. At the extreme, if social phenomena are genuinely emergent features of society in C. D. Broad’s (1929) sense, then they cannot be handled by any individualist models, never mind kinetic ones. Metaphysics aside, such a non-individualist or “holist” approach might be the best option methodologically. Trying to understand social phenomena using individual-level models might be like trying to describe animals’ behaviour by chemically assaying them rather than by watching them behave. This might be true even if it is conceded that social phenomena supervene on, or are made up of, nothing more than individuals and their interactions, just as organisms are composed of interacting chemical substances. On the other hand, CE offers evidence that at least some social phenomena can be explained in kinetic, and a fortiori individualist, terms. CE models can handle certain sorts of institutions (such as property or marriage) by treating them in Lewisian fashion, as equilibria resulting from patterns of interactions between individuals (Lewis, 1969; Richerson & Henrich, 2012). Some power or power-like relations can also be reflected in kinetic models, by treating influence as a determinable property of individuals, such that having a high value biases transmission of cultural information in favour of your ideas. One of the transmission biases suggested by cultural evolutionists, prestige bias, works in exactly that way.
As with conformist bias, discussed above, Henrich and Gil-White (2001) suggest that a preference for cultural information held by individuals with high prestige explains actual patterns of adoption of cultural variants. Moreover, they say, the existence of prestige bias can be explained by selection, as an adaptive heuristic for determining which cultural variants to adopt. As Richerson and Boyd put it, determining who is successful is a lot easier than determining what will be successful (2006, p. 124). The prestigious are certainly not always right, but a preference for variants held by people who have made it to prestigious positions at least makes one less likely to adopt an idea that seriously damages its holders’ chances of success.

But power is not (just) prestige. Kinetic models have a much harder time incorporating external social relations, those which cannot be captured by ascriptions of properties to individuals. For example, there are specific power relations between individuals that cannot be represented as differential levels of generalised prestige: consider the employee/employer relation, or bureaucrat/citizen, or music executive/consumer. Music executives may have a disproportionate influence over our music purchases (Salganik et al., 2006), but they do not have influence over beliefs about climate change. Since such power relations are relations between individuals, they can be incorporated into individual-level models, but a model that incorporates external power relations no longer generates its results by aggregating individual interactions; instead it directly encodes information about the macro-level structure of society. In that sense, it is no longer a simple “kinetic” model. Such models can still be methodologically individualist; they need not fall into full-blown holism or emergentism, which denies that social-level phenomena can be understood at the individual level at all.
For example, functional decompositions (Levins, 1970) are analyses of high-level phenomena in terms of the combined causal interactions of their parts. Because each of the parts is given a distinct role, these analyses are not kinetic; but they are nonetheless individualist, in that they offer individual-level microfoundations for social phenomena. This is the
same type of reduction as is offered by mechanistic explanations in biology (Craver & Darden, 2013). Whether such non-kinetic individualist models are themselves able to explain power and other social phenomena is a very general question in the metaphysics of social science that we will not touch on here. Enough to say that the models of CE may need to attend more closely to the idiosyncrasies of social relations between individuals if they are to capture important phenomena of power.
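To make the aggregative, “kinetic” style of model concrete, here is a minimal toy sketch of prestige-biased transmission in the spirit of Henrich and Gil-White (2001). It is not any particular published model: all function names, parameter values, and the exponential weighting rule are illustrative assumptions. Each agent copies a cultural variant from a model chosen with probability weighted by the model’s prestige, so the population-level outcome is generated purely by aggregating individual-level copying events.

```python
import math
import random

def prestige_biased_generation(variants, prestige, beta=2.0):
    """One generation of prestige-biased copying (illustrative sketch).

    variants: the cultural variant currently held by each agent
    prestige: a fixed prestige score for each agent (same length)
    beta:     bias strength; beta = 0 gives unbiased copying

    Each agent adopts the variant of a model drawn with probability
    proportional to exp(beta * prestige) -- a purely individual-level
    rule; the population-level pattern is just the aggregate of these
    copying events.
    """
    weights = [math.exp(beta * p) for p in prestige]
    return [random.choices(variants, weights=weights)[0]
            for _ in variants]

random.seed(42)
# Two variants, equally common at the start; variant "A" happens to be
# held by the high-prestige half of the population.
variants = ["A"] * 50 + ["B"] * 50
prestige = [2.0] * 50 + [0.0] * 50
for _ in range(25):
    variants = prestige_biased_generation(variants, prestige)
# Variant "A" spreads because its holders are preferentially copied.
```

The design choice illustrates the point made in the text: influence enters only as a determinable property of individuals (a prestige score), so the model remains kinetic. An external relation such as employer/employee could not be encoded this way without writing macro-level social structure directly into the model.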
Conclusion

CE offers a picture of a highly social mind, one that is tuned to receive information from other humans, and which became so tuned as a result of cultural evolutionary processes. In light of recent developments, that picture looks set to become yet more social. We take it that CE’s commitment to social learning – that humans learn from other humans besides their parents – is not controversial. CE’s more contestable offerings for the understanding of mind are: (a) its gene/culture co-evolutionary explanations of important features of our psychology, such as the capacity for social learning with its characteristic biases; and (b) its kinetic theories of cultural change, based on models that thinly describe individuals and their interactions, omit intrinsically social phenomena and abstract away from thick psychological and sociological characterisations of the cultural features and transmission processes involved. While we are not endorsing any particular CE explanation, we think that neither the thinness of CE models, nor their kinetic individualism, should be a reason to reject them in principle. CE models are compatible with the thicker explanations and narratives offered by psychology, the social sciences and the humanities; and they are not threatened by a softening of the distinction between social and individual learning. So we think it rather likely that the contents of the human mind have CE-style social explanations, even if extant CE theories turn out not to be correct. The question of non-kinetic social phenomena such as power relations is more difficult; explaining such aspects of social life remains, as we see it, a crucial challenge for cultural evolutionary theory.10
Notes

1 This chapter draws heavily on Lewens (2015), which offers a detailed critical discussion of cultural evolutionary theory’s merits and weaknesses, including sympathetic responses to some objections from the humanities.
2 This description of EP, like that of memetics to follow and of CE itself above, is inevitably abbreviated almost to the point of caricature. For example, “the EEA” need not denote a single time or location (the Pleistocene, or the savannah); different cognitive adaptations may have evolved at different times in response to different circumstances. A helpful general description of EP is available in Cosmides and Tooby (1997). Richerson and Boyd (2006, p. 10) discuss the differences between their views and EP.
3 There is also no evidence that children in larger families were unhealthy, such that restricting immediate offspring could have increased long-term fitness.
4 Jonathan Birch, in collaboration with one of us (TL), has elaborated these concerns in detail in ongoing work in progress.
5 This would also mean our description of cultural information as “in the heads” (or, perhaps more precisely, minds) of individuals would have to be qualified: beliefs and other vehicles for cultural information would remain in an individual’s mind, but the content represented by those vehicles can be affected by facts elsewhere.
6 These two forms correspond to Frith and Frith’s distinction between observational learning and learning by instruction (see their 2012).
7 Although the different subgroup sizes Richerson and Boyd appeal to in the Nuer/Dinka case may be an intriguing exception.
8 Blurring the distinction between channels of inheritance may not be the only way niche-construction and Developmental Systems Theory challenge CE. Niche-constructionists have argued, for example, that CE models do not pay enough attention to the environment’s role in gene-culture co-evolution (Laland et al., 2000). More fundamentally, DST theorists like Griffiths and Gray have criticized the idea of a distinction between cultural and biological evolution, in favour of DST’s unified conception of evolving developmental processes (1994, pp. 301ff). We will not defend CE against these sorts of objections here, though we suspect that cultural processes are distinct and autonomous enough from other evolutionary factors that specifically cultural models can be useful.
9 Oliver Morin has reminded us that Dan Sperber and colleagues’ work on, for example, argumentative reasoning treats it as both individual and social: an individual capacity that can be exercised in isolation, but whose deliverances reflect its role in social persuasion rather than purely individual discovery. (See, for example, Mercier & Sperber, 2011.)
10 This work has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013)/ERC Grant agreement no. 284123.
References

Barkow, Jerome, Cosmides, Leda & Tooby, John (eds.) (1992). The Adapted Mind: Evolutionary Psychology and the Generation of Culture. Oxford: Oxford University Press.
Bell, Adrian V., Richerson, Peter J. & McElreath, Richard (2009). Culture rather than genes provides greater scope for the evolution of large-scale human prosociality. Proceedings of the National Academy of Sciences, 106(42), 17671–17674.
Berndt, R. (1959). The concept of the “Tribe” in the Western Desert of Australia. Oceania, 30, 81–107.
Birdsell, J. (1968). Some predictions for the Pleistocene based on equilibrium systems among recent hunter-gatherers. In R. B. Lee and I. DeVore (Eds.), Man the Hunter (pp. 230–241). Livingston, NJ: Aldine Transaction.
Blackmore, Susan J. (2000). The Meme Machine. Oxford: Oxford University Press.
Bloch, M. (2012). Anthropology and the Cognitive Challenge. Cambridge: Cambridge University Press.
Boehm, C. (1999). Hierarchy in the Forest: The Evolution of Egalitarian Behavior. Cambridge, MA: Harvard University Press.
Bourdieu, Pierre (1977). Outline of a Theory of Practice. Cambridge: Cambridge University Press.
Broad, C. D. (1929). The Mind and its Place in Nature. Abingdon: Routledge and Kegan Paul.
Burge, Tyler (1979). Individualism and the mental. Midwest Studies in Philosophy, 4, 73–121.
———. (1986). Individualism and psychology. The Philosophical Review, 95, 3–45.
Cavalli-Sforza, Luca & Feldman, M. (1981). Cultural Transmission and Evolution: A Quantitative Approach. Princeton, NJ: Princeton University Press.
Cosmides, Leda & Tooby, J. (1997). Evolutionary Psychology: A Primer, published online at http://www.cep.ucsb.edu/primer.html. Accessed 5 May 2015.
Craver, Carl F. & Darden, L. (2013). In Search of Mechanisms. Chicago: University of Chicago Press.
Daly, M. & Wilson, M. (1988). Homicide. New York: De Gruyter.
Dawkins, Richard (1989). The Selfish Gene, 2nd edition. Oxford: Oxford University Press.
Dennett, Daniel C. (1996).
Darwin’s Dangerous Idea: Evolution and the Meanings of Life. New York: Simon & Schuster.
Frith, Chris D. & Frith, Uta (2012). Mechanisms of social cognition. Annual Review of Psychology, 63, 287–313.
Geertz, Clifford (1977). Thick description: Toward an interpretive theory of culture. In The Interpretation of Cultures (pp. 3–30). New York: Basic Books.
Griffiths, Paul E. & Gray, Russell D. (1994). Developmental systems and evolutionary explanation. The Journal of Philosophy, 91, 277–304.
Henrich, J. (2001). Cultural transmission and the diffusion of innovations: Adoption dynamics indicate that biased cultural transmission is the predominate force in behavioral change. American Anthropologist, 103, 992–1013.
Henrich, J. & Boyd, R. (2002). On modeling cognition and culture: Why cultural evolution does not require replication of representations. Journal of Cognition and Culture, 2(2), 87–112.
Henrich, J. & Gil-White, F. J. (2001). The evolution of prestige: Freely conferred deference as a mechanism for enhancing the benefits of cultural transmission. Evolution and Human Behavior, 22, 165–196.
Heyes, Cecilia (1994). Social learning in animals: Categories and mechanisms. Biological Reviews, 69, 207–231.
———. (2001). Causes and consequences of imitation. Trends in Cognitive Sciences, 5(6), 253–261.
———. (2012a). What’s social about social learning? Journal of Comparative Psychology, 126(2), 193–202.
———. (2012b). Grist and mills: On the cultural origins of cultural learning. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1599), 2181–2191.
Heyes, Cecilia, Ray, E. D., Mitchell, C. J. & Nokes, T. (2000). Stimulus enhancement: Controls for social facilitation and local enhancement. Learning and Motivation, 31(2), 83–98.
Holden, Clare & Mace, Ruth (1997). Phylogenetic analysis of the evolution of lactose digestion in adults. Human Biology, 69(5), 605–628.
Hull, David L. (1988). Science as a Process. Chicago: University of Chicago Press.
Ingold, Tim (2007). The trouble with “evolutionary biology”. Anthropology Today, 23(2), 13–17.
Kelly, R. (1985). The Nuer Conquest: The Structure and Development of an Expansionist System. Ann Arbor: University of Michigan Press.
Knauft, B. M., Abler, T. S., Betzig, L., Boehm, C., Knox Dentan, R., Kiefer, T. M., . . . Rodseth, L. (1991). Violence and sociality in human evolution. Current Anthropology, 32(4), 391–428.
Kripke, Saul (1980). Naming and Necessity. Oxford: Blackwell.
Laland, Kevin N., Odling-Smee, John & Feldman, Marcus W. (2000). Niche construction, biological evolution, and cultural change. Behavioral and Brain Sciences, 23, 131–146.
Levins, R. (1970). Complexity. In Towards a Theoretical Biology, Volume Three (pp. 67–86). Edinburgh: University of Edinburgh Press.
Lewens, T. (2015). Cultural Evolution: Conceptual Challenges.
Oxford: Oxford University Press.
Lewis, David (1969). Convention: A Philosophical Study. Cambridge, MA: Harvard University Press.
Lewontin, R. C. (2005). The wars over evolution. New York Review of Books, October 20th.
Lukes, S. (2005). Power: A Radical View. Basingstoke: Palgrave Macmillan.
Mace, Ruth (2009). Update to Holden and Mace’s phylogenetic analysis of the evolution of lactose digestion in adults. Human Biology, 81(5/6), 621–624.
Mayr, Ernst (1982). The Growth of Biological Thought: Diversity, Evolution and Inheritance. Cambridge, MA: Harvard University Press.
Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.
Mesoudi, Alex (2011). Cultural Evolution: How Darwinian Theory Can Explain Human Culture and Synthesize the Social Sciences. Chicago: University of Chicago Press.
Mesoudi, Alex, Whiten, A. & Laland, Kevin N. (2004). Perspective: Is human cultural evolution Darwinian? Evidence reviewed from the perspective of the origin of species. Evolution, 58(1), 1–11.
Odling-Smee, J., Laland, Kevin N. & Feldman, M. (2003). Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press.
Oyama, Susan, Griffiths, Paul E. & Gray, Russell D. (2003). Cycles of Contingency: Developmental Systems and Evolution. Cambridge, MA: MIT Press.
Putnam, Hilary (1975). The meaning of “meaning”. In K. Gunderson (Ed.), Language, Mind, and Knowledge, Minnesota Studies in the Philosophy of Science (Vol. 7, pp. 131–193). Minneapolis: University of Minnesota Press.
Ray, Elizabeth & Heyes, Cecilia (2011). Imitation in infancy: The wealth of the stimulus. Developmental Science, 14(1), 92–105.
Richerson, Peter J. & Boyd, R. (2006). Not by Genes Alone: How Culture Transformed Human Evolution. Chicago: University of Chicago Press.
Richerson, Peter J. & Henrich, J. (2012). Tribal social instincts and the cultural evolution of institutions to solve collective action problems.
Cliodynamics, 3(1), 38–80.
Ryle, Gilbert (1971). The thinking of thoughts: What is Le Penseur doing? In Collected Essays 1928–68, Volume Two (pp. 494–510). London: Hutchinson.
Salganik, M. J., Dodds, P. S. & Watts, D. J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 10 February, 854–856.
Sober, E. & Wilson, D. S. (1998). Unto Others. Cambridge, MA: Harvard University Press.
Sperber, D. (1996). Explaining Culture: A Naturalistic Approach. Cambridge: Cambridge University Press.
Sterelny, Kim (2003). Thought in a Hostile World: The Evolution of Human Cognition. Malden, MA: Blackwell.
———. (2006). Memes revisited. The British Journal for the Philosophy of Science, 57, 145–165.
———. (2012). The Evolved Apprentice: How Evolution Made Humans Unique. Cambridge, MA: MIT Press.
Tindale, N. (1940). Distribution of Australian aboriginal tribes: A field survey. Transactions of the Royal Society of South Australia, 64, 140–231.
Vygotsky, Lev (1962). Thought and Language, 1st edition. Cambridge, MA: MIT Press.
Wilson, Robert A. (2005). Collective memory, group minds, and the extended mind thesis. Cognitive Processing, 6(4), 227–236.
4
EMBODYING CULTURE
Integrated cognitive systems and cultural evolution
Richard Menary and Alexander James Gillett1
Introduction

The Cognitive Integration (henceforth CI) framework posits the existence of integrated cognitive systems (henceforth ICS). In this chapter we outline the nature of ICS and their phylogenetic history. We shall argue that phylogenetically earlier forms of cognition are built upon by more recent cultural innovations. Many of the phylogenetically earlier components are forms of sensorimotor interactions with the environment (Menary 2007a, 2010a, 2016). These sensorimotor interactions are redeployed (or retrained) to service more recent cultural innovations (Dehaene & Cohen 2007). Take, for example, a rudimentary ability for tool use that is refined and then built upon by innovations over many generations. The same refined sensorimotor skills for manipulating tools can be redeployed to recent cultural innovations for writing with stylus, brush or pencil (Menary 2015). Redeployment happens after a process of learning or training and the cultural innovations are inherited and spread out across groups.2 This process depends upon both high fidelity cultural inheritance and a high degree of plasticity (Sterelny 2012), which in humans is a specialised form of learning driven plasticity (Menary 2014). Learning driven plasticity (henceforth LDP) is the capacity for functional changes that are acquired from (usually) scaffolded learning in a highly structured social niche. This results in a multi-layered system with heterogeneous components, dynamically interwoven into a complex arrangement of processes and states in an integrated cognitive system. The coordination dynamics of the system are, at least in part, understood in terms of the physical dynamics of brain–body–niche interactions in real-time. One of the key ingredients of ICS is the social/cultural practices, which we call normative patterned practices (henceforth NPP), that govern the dynamics of brain–body–niche interactions. NPPs operate at both social and individual, even sub-personal, levels.
They originate as patterns of activity spread out over a population of agents (Roepstorff et al. 2010); consequently they should be understood primarily as public systems of activity and/or representation that are susceptible to innovative alteration, expansion and even contraction over time. They are transmitted horizontally across generational groups and vertically from one generation to the next. At the individual level they are acquired most often by learning and training (hence the importance of LDP), and they manifest themselves as changes in the ways in which individuals think, but also the ways in which they act (intentionally) and the ways in
which they interact with other members of their social group(s) and the local environment. NPPs, therefore, operate at different levels (groups and individuals) and over different timescales (intergenerationally and in the here-and-now). The main aim of this chapter is to give an overview of the CI framework in terms of phylogenetically ancient embodied interactions with the environment and the more recent culturally evolved practices that redeploy our primitive capacities for sensorimotor interactions and manipulations of tools, objects and, in a very recent innovation, public systems of representation. In doing so, we provide a case for the enculturation of our bodies and brains. In the first section we outline the role of brain–body–niche interactions in ICS. In the second section we place these interactions into the context of an inherited cognitive niche. In the third section we lay out the fundamentals of the process of enculturation, and in the final section we outline the enculturation of our basic abilities for mathematical cognition as an example of the enculturation process.
ICS and embodied engagements

The CI framework explains how we learn to be active cognitive agents who think by manipulating their environments and by interacting with one another in social groups. One of the key theses of CI is that body and environment coordinate, such that the environment is a resource available to the organism for acting, thinking and communicating. In particular we look at the role of body–environment coordination in the assembly of ICS. The coordination dynamics of the system are understood in terms of the physical dynamics of brain–body–niche interactions in real-time.3 However, the interactions that matter are those that are governed by NPPs. The primary form of NPPs that we shall consider are cognitive practices (CPs) (Menary 2007a, 2010a). Cognitive practices are enacted by creating and manipulating informational structures in public space. For example, by creating shared linguistic content and developing it through dialogue, inference and narrative; or by actively creating and manipulating environmental structures, which might take the form of tools or public and shared representations (or a combination of both). How do individuals embody CPs? They do so by a process of transformation of body schemas or motor programmes (Menary 2007a, 2010b; Farne et al. 2007). Motor programmes are acquired through learning and training, but existing programmes may also be extended during training. Learning to catch, write, type, or flake a hand axe are examples of acquired motor programmes. Cognition or thought is accomplished through the coordination of body and environment and is, therefore, governed both by body schemas and by biological and cultural norms. The latter will draw on many learned skills. A clear way to understand the nature of the CPs at work is the manipulation thesis. The manipulation thesis (Rowlands 1999, 2010; Menary 2007a, 2010a) concerns our embodied engagements with the world, but it is not simply a causal relation.
Bodily manipulations are also normative – they are embodied practices developed through learning and training (in ontogeny). We outline six different classes of bodily manipulation of the environment, with the general label of Cognitive Practices.4 They are:

1. Biological Interactions
2. Corrective Practices
3. Epistemic Practices
4. Epistemic Tools and Representational Systems
   a. Epistemic Tools
   b. Representational Systems
5. Blended Practices

Richard Menary and Alexander James Gillett
1. Biological interactions are direct sensorimotor interactions with the environment. An obvious example is sensorimotor contingencies (O’Regan and Noë 2001), a direct example of low-level, embodied interactions with the environment. One might think of simple perception-action cycles, where direct perceptual input from the environment reciprocally causes action, which then directly feeds into further behaviour. For example, Ballard and colleagues’ (1995) study details how participants in a memory-taxing pattern-copying task offload these cognitive demands through exploratory saccadic eye movements. Dewey anticipated such a model in his discussion of the reflex arc (see Menary 2016).5

2. Corrective practices are a form of exploratory inference and are clearly present early in cognitive development. The main feature of this form of practice is action looping through the environment to correct future action (e.g. instructional nudges (Sutton 2007)). This might be done verbally, or it might be done by a form of epistemic updating, testing a hypothesis through action. A classic example from Vygotsky helps to illustrate:

A four-and-a-half-year-old girl was asked to get candy from a cupboard with a stool and a stick as tools. The experiment was described by Levina in the following way (her descriptions are in parentheses, the girl’s speech is in quotation marks): (Stands on a stool, quietly looking, feeling along a shelf with stick). “On the stool.” (Glances at experimenter. Puts stick in other hand) “Is that really the candy?” (Hesitates) “I can get it from that other stool, stand and get it.” (Gets second stool) “No that doesn’t get it. I could use the stick.” (Takes stick, knocks at the candy) “It will move now.” (Knocks candy) “It moved, I couldn’t get it with the stool, but the, but the stick worked.” (Vygotsky 1978, p. 25)

The child uses speech as a corrective tool: “That didn’t work, so I’ll try this.” Speech as a corrective tool is a medium through which the child can correct her activity in the process of achieving the desired result. It may be that hypothesis formulation and testing through action develops early in children. Indeed, there is good developmental evidence for exploratory behaviour in neonates (Menary 2016). However, the dialogical nature of the self-corrective practice in this example is likely to have been developed via verbal interactions with caregivers (and possibly peers).6

3. Epistemic practices: A classic example is Kirsh and Maglio’s (1994) study of epistemic action in expert Tetris players. Experts would often perform actions that did not directly result in a pragmatic goal.7 The actions were instead designed to simplify cognitive processing. Other examples include the epistemic probing of an environment and epistemic diligence – maintaining the quality of information stored in the environment (Menary 2012). Epistemic diligence can take quite sophisticated forms: a simple form would be keeping the physical environment organised in such a way that it simplifies visual search (Kirsh 1995; Heersmink 2013). However, more complicated forms of epistemic diligence include updating written information in a notebook or computer file, organising it and adding information as it becomes available.

4. Epistemic tools and representational systems
Embodying culture
4a. Epistemic tools: Many tools aid in the completion of cognitive tasks, from rulers to calculators, from pen and paper to computers. Manipulating the tools as part of our completion of cognitive tasks is something that we learn, often as part of a problem-solving task. So, more complicated forms of tool use are built upon simpler forms of sensorimotor interactions with the environment, and innovations allow for continual improvement of technique. Some tools are more obviously designed to produce physical ends; however, other tools are designed to measure, observe, record and extend our senses (Humphreys 2004). These are more obviously epistemic tools and the way that we manipulate these tools is distinct from how we deploy, for example, the hammer. Yet, the same sensorimotor programmes for physical tool use can be redeployed as the biological basis for epistemic tool use. However, without sophisticated cognitive practices and public systems of representation, epistemic tools would be as useless to us as they are to cats.

4b. Representational systems: Behaviourally modern humans display an incredible facility for innovating new forms of representational systems. They also display a general capacity for learning how to create, maintain and deploy representations. Alphabets, numerals, diagrams and many other forms of representation are often deployed as part of the processing cycle that leads directly to the completion of a cognitive task (Menary 2015). Without public systems of representation, cognitive practices of the most sophisticated kind would be impossible. Therefore, it is important to have an account of the nature of these public systems of representation.8

5. Blended practices: Complex cognitive tasks may involve combinations of practices in cycles of cognitive processing. This seems likely given the hierarchical nature of ICS, where more recent practices are built upon the more ancient.
All levels of processing can be deployed at once depending upon the nature of the task. As we shall see in the third section, mathematical cognition may call upon the manipulation of tools in conjunction with mastery of public numeral systems and algorithms for manipulating those numerals.

Learning driven plasticity and cognitive practices

The acquisition of CPs depends upon our capacity to learn, and a capacity to learn is in turn dependent upon neural plasticity (Menary 2014). We can think of neural plasticity in three broad ways: the first is structural plasticity – actual changes to the structure of the brain; the second is functional plasticity – actual changes to the function of the brain; and the third is learning driven plasticity (LDP) (Menary 2014, pp. 293–294). The important thing to note about LDP is that it is not a matter of competitive learning in a neural network with randomised initial weights. Whilst the brain may be constrained or biased towards producing certain kinds of functions in ontogeny, the learning environment of humans is highly structured and controlled and not simply the location of undifferentiated input. Even when learning is exploratory it still takes place in a highly structured and informationally rich environment. The scaffolding of culture and education makes an important contribution to the way that the brain develops in children. Learning is a situated activity immersed within a suite of patterned practices. It results in transformational effects on developmentally plastic brains, in the sense that our brains get sculpted by the patterns of practices in our niche. The niche in question is the cultural niche and it contains practices, representations, tools, artefacts, experts, teaching methods and so on.
As we shall see in the third section, neural circuitry can be redeployed via LDP, such that phylogenetically older circuitry takes on new cultural functions (such as learning to read or to recognise Arabic numerals; see Dehaene & Cohen 2007). We turn next to the evolution of plasticity and the cultural inheritance of structured developmental niches.
ICS and niche construction

We are all familiar with the idea of natural selection derived from the modern synthesis: environmental selection pressures influence populations of phenotypes, and genetic material is inherited from the previous generation. The relationship between environment and organism is asymmetric in the modern synthesis. An extension of the modern synthesis (it should be noted that this is not a replacement) involves not seeing evolution as an asymmetric relationship of selective pressures from environments to organisms, but as a symmetrical relationship (Godfrey-Smith 1996) where organisms (and phenotypic traits) and environments co-evolve. The traditional model of evolution only recognises one line of inheritance of traits from genes. More recently, biologists interested in niche construction (Odling-Smee, Laland & Feldman 2003) have proposed that there is another line of inheritance: ecological inheritance. Niche construction involves modifications to the ancestral environment that are bequeathed to the next generation. This encompasses physical alterations, such as constructing mounds or hives, as well as cultural artefacts, practices and institutions. Niche construction is a process by which organisms modify the selective environment such that there are new selection pressures acting on generations over long periods of time. The modifications change selective pressures, which in turn modify traits. This occurs over long periods of evolutionary time (potentially millions of years).9 Humans are cultural “niche constructors par excellence”; however, they don’t just physically alter the environment, they also epistemically or cognitively engineer the environment (Sterelny 2003, 2012).
Humans are born into a highly structured cognitive niche that contains not only physical artefacts, but also representational systems that embody knowledge (writing systems, number systems, etc.), and skills and methods for training and teaching new skills (Menary and Kirchhoff 2014). Following Sterelny (2012) we term this “cognitive capital”. These highly structured socio-cultural niches have had profound evolutionary consequences in the hominin lineage. The primary consequence is phenotypic and developmental plasticity. We have evolved to be a very behaviourally plastic species (Sterelny 2012). Rather than thinking of humans as adapted for Pleistocene hunting and gathering environments, we should think of human behavioural and developmental plasticity as an adaptive response to the variability and contingency of the local environment (Finlayson 2009; Potts 2012; Sterelny 2003, 2012). Modern humans are capable of developing a wide range of skills that allow them to cope with a wide variety of environments. This cognitive flexibility requires an extended period of cognitive development, much more so even than that of our nearest relatives, such as the different species of great apes. What’s the importance of the cognitive niche? The main innovation is to add an extra line of inheritance to the single genetic line, whereby an ecological niche, as well as genetic material, is inherited by the next generation (Odling-Smee, Laland & Feldman 2003). Organisms are born into niches that they inherit from the previous generation. These niches have been acted upon by previous generations, often structuring and organising them in ways that would not otherwise occur. The constructed niche places selective pressure onto phenotypes, which in turn results in further modifications of the niche, leading to a reciprocal relationship between organism and niche.
Over time the reciprocal relationship can result in evolutionary cascades, which can have profound effects on phenotypes, including morphological and behavioural changes (Sterelny 2005). “Humans are niche constructors par excellence” (Sterelny 2012, p. 145). To understand the nature of human niche construction, we must introduce a third line of inheritance: cultural
inheritance.10 Cultural inheritance includes tools, artefacts and so on, but also more intangible products of human cultures such as knowledge, narratives, skills and representational systems, systems of pedagogy and a large variety of practices. The cultural niche is a rich milieu in which human children learn and develop. The crucial change for behaviourally modern humans is the capacity for cumulative cultural inheritance, “which was ultimately to transform Homo sapiens into the richly cultural species we are today” (Whiten et al. 2011, p. 942). The standard interpretation of the archaeological record indicates that there was a revolution approximately 60,000–40,000 years ago – the Upper Palaeolithic revolution – in which there was a real explosion of novelty and the advent of behaviourally modern humans. However, there is evidence that many of these traits, including symbolic activity, could precede the Upper Palaeolithic revolution and could have appeared and vanished irregularly over the last 150,000 years or so (see Sterelny 2012 for an overview). For instance, d’Errico and colleagues (2001) propose that there is evidence of symbolic activity on bone fragments 70,000 years ago. Sterelny argues that this transient appearance of precursors of behavioural modernity implies that behavioural modernity is a cultural achievement premised on multiple factors rather than a single genetic change or cultural innovation. This suggests that establishing the stable retention of cultural innovations is difficult, but that once innovations can be transmitted in a stable manner, cultural niche construction escalates – what Tomasello (1999) calls the “cultural ratchet effect”.11 This fits nicely with the emphasis on cognitive niche construction proposed by the CI position.
The explosion of cultural and behavioural diversity that accelerates from the Upper Palaeolithic is dependent on a range of factors coming together: inherited cultural capital, phenotypic and learning driven plasticity, complex social relations and language. In this period we see genuine novelty increasing in tool production and use; art, including jewellery, paintings, sculpture and musical activity; fishing and a wider range of cooperative hunting and foraging; burial practices; cultural diversification; and the first signs of proto-numerical and writing systems as novel representational innovations, such as tally notch systems (see Conard 2006 for an overview). These could have been for keeping track of economic exchanges, lunar calendars or hunting tallies (d’Errico & Cacho 1994). The tools themselves, but also the skills necessary to make, maintain and deploy the tools, must be inherited from the previous generation. Tool creation and use requires very refined sensorimotor skills (Stout et al. 2008),12 which must be learned. Basic sensorimotor skills are retrained and extended during the acquisition process. Here is where LDP really makes a difference; without LDP the acquisition of the skills required for creating, maintaining and manipulating tools would be very difficult. Social learning in highly scaffolded niches and LDP are co-constraining. Without a sufficient degree of neural plasticity social learning is attenuated, but without structured and stable learning environments functional redeployment of neural circuitry cannot happen through learning. Niche construction accounts for the structuring of the environment and its inheritance by future generations. LDP accounts for how our brains can acquire novel culturally derived cognitive functions. Putting the two together explains how we have evolved to be the cultural creatures that we are. The next section explores the process of enculturation.
Enculturation

Tomasello (1999, 2009) has pointed out that although other animals have culture, in humans it is both quantitatively and qualitatively unique. Human culture is quantitatively unique due to the extraordinary number of techniques and tools and accompanying NPPs which novices
must necessarily learn in order to survive. But Tomasello also identifies two senses in which human culture is qualitatively unique: cultural ratcheting (cumulative downstream niche construction), and social institutions (“sets of behavioural practices governed by various kinds of mutually recognised norms and rules” (2009, p. xi)) – what we have termed NPPs. Both of these profoundly change the nature of human cognition. Learning NPPs in a developmental niche transforms a human agent’s cognitive capacities so that they can tackle cognitive tasks that were previously impossible or inconceivable. A broad range of theorists have advanced enculturated cognitive positions (see Hutchins 2011; Lende & Downey 2012; Nisbett et al. 2001; Roepstorff et al. 2010; Tomasello 1999; Vygotsky 1978). Here we develop the position advanced by Menary (2007a, b, 2010a, b, 2012, 2013, 2014, 2015), which argues that humans construct and inhabit cognitive niches in which our minds become enculturated and transformed through the learning and mastering of NPPs that govern the manipulation of environmental resources and interactions of social groups. The key factors of the enculturated cognitive position of CI can be summarised as follows: [1] NPPs governing the embodied manipulations of physical tools, which operate in [2] highly structured and cooperative shared cognitive niches, importantly including a developmental component with implicit and explicit teaching through which NPPs are acquired; and this process is in turn dependent on [3] general phenotypic plasticity – especially neural plasticity – that allows for the transformative effects of the learning and enculturating processes to take place. This transformation relies on the recycling or redeploying of older cortical structures to newer cultural acquisitions (Anderson 2010; Dehaene & Cohen 2007). As Tomasello (1999, p. 7) puts it: enculturation processes do not . . .
create new cognitive skills out of nothing, but rather they took existing individually based cognitive skills – such as those possessed by most primates for dealing with space, objects, tools, quantities, categories, social relationships, communication, and social learning – and transformed them into new, culturally based cognitive skills with a social-collective dimension. (emphasis added) Importantly, this quote highlights that enculturation is the exaptation or redeployment of pre-existing cortical structures to newer culturally generated functions. But Tomasello also points out that enculturation is both an ancient and ongoing process occurring at three distinct timescales (Tomasello 1999). Firstly, over phylogenetic timescales – the evolution of the human primate; Laland et al. (2010) have collected a wide range of evidence that cultural practices have affected the human genome. Secondly, over historical timescales – this is the accumulation of cognitive capital through the high-fidelity transmission of skilled practices and cultural knowledge, both horizontally and vertically, and downstream epistemic engineering in a specific cognitive-cultural niche (Sterelny 2003, 2012). The veridicality of communication and learning channels within the niche allows for the retention of improvements – what Tomasello (1999) calls “cultural ratcheting”. Hutchins (2001) refers to this process as the distribution of cognition across time, whereby cognitive tasks are successfully tackled intergenerationally through the collaborative and distributed effort of multiple agents building and refining shared mediums and tools to manage recurring everyday cognitive tasks. This changes the informational profile of the epistemic niche over time and alters the nature of the cognitive tasks as well. Lastly, enculturation takes place over ontogenetic timescales – this is the inculcation of specific agents in developmental niches (Stotz 2010).
Humans have an incredibly high propensity
for teaching and learning (Dean et al. 2012; Keil 2011). A key element of human learning is the functionally correct deployment of tools and the perception of task-salient affordances of the environment (see Vaesen 2012, p. 206). By learning to master NPPs that govern the cognitive resources that have been accumulated by previous generations, agents are able to engage in cognitive tasks that would otherwise be incredibly difficult, impossible or potentially inconceivable. This is the transformative aspect of enculturation (Menary 2007a).13 LDP and the high degree of plasticity make humans highly susceptible to enculturation processes and acquiring cultural practices and skills. Older cortical structures are redeployed into newer diverse cultural functions, which has transformative effects on both neuronal architecture and the physiological structure of the body. This redeployment also enhances the functional performance of cognitive tasks, enabling agents to tackle novel ones. An abundance of empirical evidence from a range of experimental paradigms supports enculturation: in cognitive domains such as attention (Ketay et al. 2009); perception and motor processes (Nisbett et al. 2001; Draganski et al. 2004); music (Gaser & Schlaug 2003); literacy and language (Castro-Caldas et al. 1999); moral reasoning, social cognition and emotions (Henrich et al. 2010); categorisation, judgment, reasoning, problem solving and decision making (Henrich et al. 2005, 2010; Nisbett et al. 2001); memory and navigation (Maguire et al. 2000); and tool use (Farne et al. 2007). Downey and Lende (2012) provide a very useful overview of this evidence (and for more critical assessments of some of this research, see Roepstorff et al. 2010; Reynolds Losin et al. 2010).
In the next section we will outline the practice of mathematics as a case of the transformative effects of enculturation, and also as partially constitutive of cognitive processes in hybrid ICS encompassing brain–body–niche interactions. Before we do so, it is important to clarify a few key aspects of the transformation thesis. Firstly, to recap: Menary (2014) argues that the convergent evidence of a late-developing cortex, an extended developmental stage in humans, continuing plasticity in adults, diverse and hostile environments in our hominin evolutionary history, and complex social situations all drives the need for LDP. In developmental niches this allows for the transformation of the agent’s functional capacities through the redeployment of neural circuits to enable the bodily manipulation of external representational vehicles and thus the acquisition of new skills (Menary 2015, p. 9). In turn, this allows the scaffolded agent to both [a] tackle cognitive tasks in new ways and [b] tackle cognitive tasks that could have been previously inconceivable (also see De Cruz & De Smedt 2013; Kirsh 2010; and Nieder & Dehaene 2009). Menary (2015) goes further in clarifying this. He postulates that external material symbols and tools provide “novel” functions (p. 10) – i.e. functional capacities that could not be achieved merely in the head – and it was these novel factors that led to their proliferation. As such, Menary argues that a wide range of human cognitive abilities are partially constituted by the learnt NPPs that agents must master in order to tackle novel cognitive problems using shared public symbols and other cognitive resources (also see Dutilh Novaes 2012, 2013). These environmental resources and the NPPs that govern their usage are part of particular cultural-cognitive niches that are definitive of human cognition as ICS.
As Nersessian puts it: culture is not something additional to human cognition, “culture is what makes human cognition what it is” (2005, pp. 31–32). It is also important to clarify that the transformative effects of deploying cognitive artefacts are often misconstrued as simply “amplifying” or “augmenting” the cognitive capacities of the agent (for example, see Bruner et al. 1966). Cole and Griffin (1980) have rightly observed that the use of epistemic tools does not straightforwardly amplify cognition in the way that a physical tool amplifies our physical prowess. For instance, a spade may improve an agent’s digging abilities and a loudhailer amplifies the volume of someone’s voice, but it is not strictly true
that the manipulation of physical public symbol systems on a page or in a calculator amplify an agent’s capacities. Instead, it is more accurate to see their manipulation as the alteration of the cognitive task or functional capacities to form a cognitive system that has different and “unique” sets of cognitive properties that are not present in the agent considered in isolation (Hutchins 2006; Norman 1991). This shows that when we consider the transformative effects of enculturation processes we must be careful to discern the level of analysis (Norman 1991). Additionally, the fact that hybrid integrated cognitive systems have properties not reducible to the individual indicates the need for a shift in the unit of analysis to necessarily incorporate the cognitive niche in order to properly understand human cognition as essentially enculturated (Hutchins 2011; Menary 2012, 2013, 2015).
Mathematical cognition as a process of enculturation

Experiments with animals (Ansari 2008), young children (Dehaene 1997), bilingual adults (Dehaene et al. 1999) and adults from cultures without discrete number words (Dehaene 2007), across a range of experimental paradigms, are highly suggestive of an “ancient number system” (ANS). This system is proposed to be amodal14 (Cantlon et al. 2009; Dehaene et al. 1998, 2004) and displays characteristics which render it approximate and fuzzy – distance effects (whereby error rates in quantity comparison tasks increase as the distance separating the two quantities decreases) and magnitude effects (error rates increase as the absolute totals of the quantities involved in the tasks increase) (see Dehaene 1997 for an overview). On the basis of a large body of evidence, the ANS is postulated to be evolutionarily ancient. The notion is that a basic capacity for discerning and discriminating quantity is evolutionarily advantageous: being able to detect larger benefits and avoid larger dangers improves an organism’s chances of survival (Ansari 2008; Dehaene 1997). In humans, numerous neuroimaging and neuropathology studies indicate that the neural basis for the ANS is in the intraparietal sulcus and surrounding regions (Dehaene 1997, 2007; Dehaene et al. 1999). But in addition to making approximate judgments about quantities, humans can also perform discrete computations with a “discrete number system” (DNS). A wide range of neuroimaging and behavioural studies indicates that the DNS and ANS share neural correlates (see Lyons et al. 2012 for an extensive list of corroborating studies). The neural basis of the mental number line and ANS involves number-detecting neurons. These neurons are postulated to fire approximately, with “fat-tailed” tuning curves: e.g. a number detector tuned to 6 will also partially fire for 5 and 7.
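This fat-tailed tuning can be put in miniature computational form. The following sketch is our own illustrative toy, not a model from the literature it summarises; it assumes Gaussian tuning curves over log quantity (one common assumption in neural network models of the ANS) and measures how much the population responses to two quantities overlap:

```python
import math

def tuning(preferred, n, width=0.25):
    """Response of a detector tuned to `preferred` when quantity n is shown.
    Gaussian tuning over log quantity yields the 'fat tails': a detector
    tuned to 6 also responds, more weakly, to 5 and 7."""
    return math.exp(-((math.log(n) - math.log(preferred)) ** 2) / (2 * width ** 2))

def overlap(a, b, detectors=range(1, 17)):
    """Jaccard-style similarity of the population responses to a and b:
    the more the two response patterns overlap, the harder the quantities
    are to tell apart."""
    mins = sum(min(tuning(d, a), tuning(d, b)) for d in detectors)
    maxs = sum(max(tuning(d, a), tuning(d, b)) for d in detectors)
    return mins / maxs

# Distance effect: nearby quantities produce far more confusable patterns.
print(overlap(5, 6) > overlap(2, 9))    # True
# Magnitude effect: at a fixed distance of 1, larger quantities overlap more.
print(overlap(10, 11) > overlap(2, 3))  # True
```

The detector range and `width` are arbitrary choices for illustration; the qualitative pattern (both effects falling out of log-scaled overlapping tuning) is what matters.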
Such overlapping responses explain the distance effect: for any value, multiple neurons fire to differing degrees, which introduces fuzziness into judgments of which quantity is larger or smaller. Neural network models of the distance and magnitude effects have been constructed (Dehaene 2007; Verguts & Fias 2004), and these are supported by evidence from single-neuron studies of rhesus monkeys (Nieder et al. 2006; see Ansari 2008 and Nieder & Dehaene 2009 for discussion). The tuning curves of these number-detecting cells overlap in a manner consistent with what one would expect from the distance effects. Importantly, rather than undermining the enculturated cognitive position as some have argued (see Zahidi & Myin forthcoming), the transition from the ANS to the DNS is perhaps one of the best examples of enculturation. The two effects and the approximate nature of the ANS combine to give the mental number line a logarithmic structure. Dehaene (2007) has argued that the acquisition of symbolic representations in development alters the structure of the mental number line to a more precise linear format. Learning how to manipulate public symbolic notation – a cultural practice – has a transformative effect both on cognitive functional performance and on neural architecture. Numerous sources of evidence lead to this view:
[1] longitudinal studies of brain activity in 8–19-year-olds show decreasing activity in the PFC during mathematical tasks, suggestive of automatisation, but also show increasing activity in the left parietal cortex (the postulated neural substrate of the mental number line); [2] young children asked to space the numbers 1 to 100 evenly on a page place “10” at the halfway point and bunch all the larger numbers up at one end; this behaviour is absent in adults but is present in some illiterate adults of traditional communities (e.g. the Munduruku of the Amazon) that do not have discrete number words; and [3] there is a mixed response to number tasks by bilinguals that is indicative of the participants switching into their native tongue to carry out the calculation of symbolic tasks (see Dehaene et al. 1999; Lyons et al. 2012; Piazza et al. 2013; Viarouge et al. 2010). This is clear evidence not just of enculturation, but also of the truly transformative effects that learning cultural practices can have on human cognition. Culturally new capacities exapt and recycle older phylogenetic functional regions for newer culturally generated purposes (Dehaene & Cohen 2007). Dehaene further argues that the older phylogenetic functional basis constrains the extent to which it can be recycled/redeployed and shifted into a new function. In this particular case the evidence suggests that an ancient primate or core neural system integrates symbolic numerical representations and that this both transforms mathematical cognitive functional capacities and alters neurological architecture. Additionally, Cantlon and colleagues (2009) present evidence that young children use the same network of brain regions to tackle both symbolic and non-symbolic notations and that this is therefore an abstract, notation-independent appreciation of number.
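The placement pattern in [2] has a simple numerical signature, which the following illustrative snippet (ours; the function names are invented for the example) makes explicit: on a logarithmically compressed line, 10 falls exactly at the midpoint of 1–100, just where young children put it, whereas on the linear line of enculturated adults it sits near the left end.

```python
import math

def log_position(n, lo=1, hi=100):
    """Normalised position of n on a logarithmically compressed number line
    (0 = left end, 1 = right end), as hypothesised for young children and
    adults without discrete number words."""
    return (math.log(n) - math.log(lo)) / (math.log(hi) - math.log(lo))

def linear_position(n, lo=1, hi=100):
    """Normalised position on the linear line acquired through enculturation."""
    return (n - lo) / (hi - lo)

# On the log line, 10 sits at the midpoint of 1-100 ...
print(round(log_position(10), 3))                          # 0.5
# ... and the larger numbers bunch up at the right-hand end:
print(round(log_position(50), 2), round(log_position(90), 2))  # 0.85 0.98
# On the linear line, 10 sits close to the left end:
print(round(linear_position(10), 3))                       # 0.091
```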
This large body of evidence lends credence to the notion that the evolutionarily new use and manipulation of symbolic mediums recycles an evolutionarily older mechanism. And the experiments by Lyons and colleagues (2012) also lend support to the claim that the number line is altered by enculturation. These experiments reveal a disjunction between symbolic-to-symbolic processing and symbolic-to-non-symbolic processing – this matches the “rupture” noted by Radford (2003) in the development of mathematical abilities from pre-symbolic to symbolic manipulations (also see Deacon 1997; Nieder 2009). And this also fits with the wider body of evidence showing that the increased PFC activity seen in novices diminishes as they become expert in modern mathematical cognitive practices. Finally, these learning driven neuroplastic changes reach their peak in expert mathematicians, who have macroscopically altered regions involved in both arithmetic and the visuospatial imagery necessary for the manipulation of complex objects required for advanced mathematics (Aydin et al. 2007).15 If modern mathematical abilities involve the redeployment of older cortical structures to newer functions, we would expect to find both diversity and constraints in how humans from different cognitive-cultural backgrounds perform in mathematical cognitive tasks. And indeed this is what has been found. An experiment by Tang and colleagues (2006) demonstrates that the differing NPPs of different cultural-cognitive niches can have effects on both neuronal architecture and function, and behavioural performance. Tang and colleagues compared two groups of students – English speakers and Chinese speakers – and found that the former had neural correlates in the perisylvian language regions whereas the latter had correlates in the premotor cortex. Additionally, although of comparable intelligence, the Chinese students outperformed their English counterparts.
In a review, Cantlon and Brannon (2006) observed that there were many factors from the cognitive-cultural niche that could account for such differences: abacus use; differences in writing styles; differing styles of number words (Chinese number words are much less demanding on working memory); preferred cognitive strategies; and overall education systems (also see Butterworth 1999; cf. Reynolds Losin et al. 2010).
Richard Menary and Alexander James Gillett
This example shows the importance of the cognitive niche for how agents approach cognitive tasks. Differing sets of tools, techniques and NPPs alter how cognitive tasks are performed. The importance of brain–body–niche interactions in mathematical cognition is further demonstrated by a range of behavioural studies. A series of experiments by Zhang (1997; Zhang & Norman 1994) has shown that if the structure of the external representations used for a cognitive task instantiates salient features of the abstract task properties, this facilitates cognitive offloading, reduces the information load on working memory, and improves overall task performance. Zhang and Norman (1995) supported these findings with an analytic comparison of various notational systems, showing that the prevalence and general cultural pervasiveness of Hindu-Arabic numerals is due to the formal structure of the material symbols, which makes them far superior to Roman numerals for calculations. The structure of the external representation separates out the base and power dimensions in a perceptually convenient manner. For example, four hundred and forty-seven in Arabic numerals is 447 = 4 × 10² + 4 × 10¹ + 7 × 10⁰: shape encodes the base digit and position encodes the power. In Roman numerals it is CDXLVII, where position does not correspond to power and shape does not correspond to base. In another set of experiments, Landy and Goldstone (2007) subtly modified seemingly non-task-specific perceptual groupings around algebraic equations. This included increasing or decreasing the size of gaps between terms in the equations; adding shaded areas in the backgrounds of the equations that created perceptual groups; and reordering terms to be either congruent with or contradictory to the FOIL order of operations (also see Dutilh Novaes 2012 for discussion).
As in Zhang and colleagues’ work, these modifications of the structure of the external representations either aided or hindered task performance, depending on whether or not they were congruent with the order of operations in the equations. Crucially, these modifications had an effect even when participants knew they were being influenced, indicating that “perceptual groupings” play a larger role in abstract mathematical thinking than is normally acknowledged. We can interpret the work of a wide range of theorists from different fields (Alibali & DiRusso 1999; Landy et al. 2014; Nemirovsky et al. 2013; Radford 2009; Sato et al. 2007) as all broadly arguing that embodied manipulations of cognitive tools – looping brain–body–niche interactions – are incredibly important in mathematical cognition; not just for pedagogy and learning, but also for high-level expert problem solving (Marghetis & Nunez 2013). Building on this, we can argue that accumulative downstream cognitive niches constrain and enable how mathematical cognitive tasks are tackled. Along similar lines, De Cruz and De Smedt (2013) have argued that symbols (and other external representational vehicles such as body parts, gestures, number words and tally systems – see De Cruz 2008) act as “material anchors” and support “epistemic actions” – whereby physical manipulations of the environment are not just physical movements but are themselves also movements in an abstract problem space towards the solution of a cognitive task (Kirsh & Maglio 1994; also see Hutchins 2005). De Cruz and De Smedt demonstrate their position through a number of historical case studies: zero (0); imaginary (i) and subsequent complex numbers (a+bi); negative numbers (−n); and algebra (x, y, z, etc.). In each case, they show that the material sign played a role in discovery by stabilising vague ideas, which aids the creative effort.
For example, in the case of negative numbers, the minus sign was already in use as an operator before the drive for closure enabled the invention of numbers “below” or “beyond” zero. This opened up the possibility of conceiving of a task that was previously inconceivable. De Cruz and De Smedt nicely demonstrate this by juxtaposing the seemingly mundane nature of the task in the modern era with a quote from the prominent 18th-century mathematician Maseres: “ ‘3 − 8 is an impossibility; it requires you take from 3 more than there is in 3, which is absurd’ ” (2013, p. 13). As Menary (2010b,
Embodying culture
2015) has argued, it is the learning and deployment of NPPs that governs the embodied manipulation of these cognitive artefacts and transforms the cognitive abilities of the wider integrated cognitive system. As such, an agent trained in the manipulation of mathematical notation is able to tackle cognitive tasks more effectively, and to comprehend tasks that would otherwise be impossible.
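Zhang and Norman's formal point about notation can be made concrete in a short sketch (our illustration, assuming only the analysis summarised above): in Hindu-Arabic positional notation, shape encodes the base digit and position encodes the power, whereas in Roman notation a symbol's contribution must be computed by comparing it with its neighbours.

```python
# Illustrative sketch (ours, not from the chapter) of the formal difference
# between Hindu-Arabic positional notation and Roman numerals that Zhang
# and Norman (1995) identify.

def positional_decomposition(n):
    """(digit, power) pairs for n in base 10: shape = digit, position = power."""
    digits = str(n)
    return [(int(d), len(digits) - 1 - i) for i, d in enumerate(digits)]

def positional_value(pairs, base=10):
    """Recover the number from its (digit, power) pairs, e.g. 4*10**2 + 4*10**1 + 7*10**0."""
    return sum(d * base ** p for d, p in pairs)

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_value(s):
    """Evaluate a Roman numeral: position carries no power information, so each
    symbol's contribution depends on comparison with the following symbol."""
    total = 0
    for i, ch in enumerate(s):
        v = ROMAN[ch]
        # subtractive rule: a smaller symbol before a larger one is negated
        if i + 1 < len(s) and v < ROMAN[s[i + 1]]:
            total -= v
        else:
            total += v
    return total

pairs = positional_decomposition(447)
print(pairs)                    # [(4, 2), (4, 1), (7, 0)]
print(positional_value(pairs))  # 447
print(roman_value("CDXLVII"))   # 447
```

Because the positional decomposition is read directly off the symbol string, column-by-column procedures (addition with carries, and so on) are available; no comparably simple procedure exists for Roman numerals, which is the computational sense in which the former is "far superior" for calculation.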
Conclusion

Cognitive integration is a framework that allows us to explain how cognition is enculturated. It does so by providing a dual account of the cultural evolution of cognition and of learning. It uniquely provides an account of how we embody culture and of how culture provides new cognitive functions that redeploy our basic sensorimotor interactions with the environment. These looping interactions and cognitive practices are a form of cognitive niche construction whereby brain–body–niche interactions alter not only the physical environment but also the inherited informational profile in which future generations are enculturated, transforming the nature of the cognitive tasks that they face. The process of enculturation hinges on the plasticity of our brains and our capacity for flexibly redeploying existing cognitive capacities to innovative cultural functions. It also requires openness to learning in highly scaffolded social learning environments. The importance of enculturation lies in the acquisition of new capacities, allowing us to perform tasks that we would otherwise be unable to perform. Culture permeates our physical and mental lives, but it does so through our inherited cognitive capital and the plasticity of our existing cognitive circuitry.
Notes

1 The chapter is jointly authored. Both authors are based at the Department of Philosophy, Macquarie University, Sydney. Research for this article was supported by the Australian Research Council, Future Fellowship FT 130100960.
2 This is what Menary (2012, 2015) calls a process of enculturation.
3 Coordination dynamics are the interactions between the components of the system – both processes and structures (see Menary 2013 for more details).
4 This refines Menary’s earlier analysis of cognitive practices in terms of biological coupling, epistemic actions, self-correcting actions and cognitive practices (Menary 2007a, 2010a, 2010b). The term cognitive practices is now used to encompass all these other kinds of cognitive manipulation.
5 Although interactions of this kind aren’t obviously practice-like, they are often influenced by cultural practice. The sensorimotor capacities that underlie various skills, such as driving and writing, are good examples of how we embody cultural practices.
6 The exposition here aims for brevity. Menary (2007a, 2016) provides a detailed account of the developmental aspects of corrective practices.
7 These actions are direct manipulations of the task structure in the environment rather than of internal representations. And although experts perform more physical acts, their performance is faster and more accurate than that of novices, who rely more heavily on internal resources.
8 For such an account see Menary (2007a, Chapters 4–6).
9 See Turner (2000) for plentiful examples.
10 Or we might blend the ecological and cultural into a single line of inheritance. Odling-Smee (2007) has expressed skepticism about the need for a third line of inheritance. He argues that separating the ecological and the cultural is ad hoc, and that the complication outweighs the benefits of treating them separately.
Regardless, cultural inheritance matters for understanding human niche construction, and there does seem to be a prima facie qualitative difference between cultural inheritance and physical engineering.
11 See Section 3 for more discussion of Tomasello.
12 This is evident even in Homo habilis and the Erectines and is another example of a biological interaction.
13 This has been discussed in a number of places: Menary (2007b, 2010a, 2010b, 2012, 2013, 2014, 2015).
14 Or perhaps multi-modal, since the ANS appears to be sensitive to multiple sensory modalities.
15 Potentially, the enculturated cognitive approach offers an interesting perspective on a perennial topic in the psychology of mathematics: the prevalence of the folk-metaphysical belief in Platonism amongst practising mathematicians. A precursor to this was formulated by the mathematician Keith Devlin (2008). We can rephrase his claim in the following manner: a possible explanation for why Platonism is the default folk-belief system of mathematical practitioners is that the redeployed cortical circuits, whose original function was spatial navigation and pattern detection, bring “baggage” with them – namely, a directedness at real entities out there in the world. The prevalence of spatial language in discussions of mathematical entities may be indicative of this.
References

Alibali, M. W. & DiRusso, A. A. (1999). The functions of gesture in learning to count more than keeping track. Cognitive Development, 14, 37–56. Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33, 245–264. Ansari, D. (2008). Effects of development and enculturation on number representation in the brain. Nature Reviews Neuroscience, 9, 278–291. Aydin, K., Ucar, A., Oguz, K. K., Okur, O. O., Agayev, A., Unal, Z., . . . Ozturk, C. (2007). Increased gray matter density in the parietal cortex of mathematicians: A voxel-based morphometry study. American Journal of Neuroradiology, 28, 1859–1864. Ballard, D., Hayhoe, M. & Pelz, J. (1995). Memory representations in natural tasks. Journal of Cognitive Neuroscience, 7(1), 66–80. Butterworth, B. (1999). The Mathematical Brain. London: Macmillan. Bruner, J. R., Olver, R. R. & Greenfield, P. (1966). Studies in Cognitive Growth: A Collaboration at the Center for Cognitive Studies. New York: John Wiley and Sons. Cantlon, J. F. & Brannon, E. M. (2006). Adding up the effects of cultural experience on the brain. Trends in Cognitive Sciences, 11(1), 1–4. Cantlon, J. F., Libertus, M. E., Pinel, P., Dehaene, S., Brannon, E. M. & Pelphrey, K. A. (2009). The neural development of an abstract concept of number. Journal of Cognitive Neuroscience, 21(11), 2217–2229. Castro-Caldas, A., Cavaleiro Miranda, P., Carmo, I., Reis, A., Leote, F., Ribeiro, C. & Ducla-Soares, E. (1999). Influence of learning to read and write on the morphology of the corpus callosum. European Journal of Neurology, 6, 23–28. Cole, M. & Griffin, M. (1980). Cultural amplifiers reconsidered. In D. Olson (Ed.), The Social Foundations of Language and Thought: Essays in Honor of Jerome Bruner (pp. 343–364). New York: Norton. Conard, N. (2006). An overview of the patterns of behavioural change in Africa and Eurasia during the Middle and Late Pleistocene. In F. d’Errico & L. 
Blackwell (Eds.), From Tools to Symbols from Early Hominids to Humans (pp. 294–332). Johannesburg: Wits University Press. De Cruz, H. (2008). An extended mind perspective on natural number representation. Philosophical Psychology, 21(4), 475–490. De Cruz, H. & De Smedt, J. (2013). Mathematical symbols as epistemic actions. Synthese, 190, 3–19. Deacon, T. (1997). The Symbolic Species: The Co-Evolution of Language and the Human Brain. London: Norton. Dean, L. G., Kendal, R. L., Schapiro, S. J., Thierry, B. & Laland, K. N. (2012). Identification of the social and cognitive processes underlying human cumulative culture. Science, 335, 1114–1118. Dehaene, S. (1997). The Number Sense – How the Mind Creates Mathematics. London: Penguin. ———. (2007). Symbols and quantities in parietal cortex: Elements of a mathematical theory of number representation and manipulation. In P. Haggard, Y. Rossetti and M. Kawato (Eds.), Attention & Performance XXII. Sensori-Motor Foundations of Higher Cognition (pp. 527–574). Cambridge, MA: Harvard University Press. Dehaene, S. & Cohen, L. (2007). Cultural recycling of cortical maps. Neuron, 56, 384–398. Dehaene, S., Dehaene-Lambertz, G. & Cohen, L. (1998). Abstract representations of numbers in the animal and human brain. TINS, 21(8), 355–361.
Dehaene, S., Molko, N., Cohen, L. & Wilson, A. J. (2004). Arithmetic and the brain. Current Opinion in Neurobiology, 14, 218–224. Dehaene, S., Spelke, E., Pinel, P., Stanescu, R. & Tsivkin, S. (1999). Sources of mathematical thinking: Behavioral and brain-imaging evidence. Science, 284, 970–974. d’Errico, F. & Cacho, C. (1994). Notation versus decoration in the upper paleolithic: A case-study from Tossal de la Roca, Alicante, Spain. Journal of Archaeological Science, 21, 185–200. d’Errico, F., Henshilwood, C. & Nilssen, P. (2001). An engraved bone fragment from c. 70,000-year-old Middle Stone Age levels at Blombos Cave, South Africa: Implications for the origin of symbolism and language. Antiquity, 75, 309–318. Devlin, K. (2008). A mathematician reflects on the useful and reliable illusion of reality in mathematics. Erkenntnis, 63, 359–379. Downey, G. & Lende, D. (2012). Neuroanthropology and the enculturated brain. In D. Lende and G. Downey (Eds.), The Enculturated Brain: An Introduction to Neuroanthropology (pp. 23–65). Cambridge, MA: MIT Press. Draganski, B., Gaser, C., Busch, V., Schuierer, G., Bogdahn, U. & May, A. (2004). Neuroplasticity: Changes in grey matter induced by training. Nature, 427, 311–312. Dutilh Novaes, C. (2012). Formal Languages in Logic – A Philosophical and Cognitive Analysis. Cambridge: Cambridge University Press. ———. (2013). Mathematical reasoning and external symbolic systems. Logique & Analyse, 221, 45–65. Farnè, A., Serino, A. & Làdavas, E. (2007). Dynamic size-change of peri-hand space following tool-use: Determinants and spatial characteristics revealed through cross-modal extinction. Cortex, 43, 436–443. Finlayson, C. (2009). The Humans Who Went Extinct – Why Neanderthals Died Out and We Survived. New York: Oxford University Press. Gaser, C. & Schlaug, G. (2003). Brain structures differ between musicians and non-musicians. The Journal of Neuroscience, 23(27), 9240–9245. Godfrey-Smith, P. (1996). 
Complexity and the Function of Mind in Nature. Melbourne: Cambridge University Press. Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., . . . Tracer, D. (2005). “Economic man” in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behavioral and Brain Sciences, 28, 795–814. Henrich, J., Heine, S. J. & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–83. Heersmink, R. (2013). A taxonomy of cognitive artifacts: Function, information, and categories. Review of Philosophy and Psychology, 4(3), 465–48. Humphreys, P. (2004). Extending Ourselves: Computational Science, Empiricism, and the Scientific Method. Oxford: Oxford University Press. Hutchins, E. (2001). Distributed cognition. In R. A. Wilson and F. C. Keil (Eds.), The MIT Encyclopedia of the Cognitive Sciences (pp. 2068–2072). Cambridge, MA: MIT Press. ———. (2005). Material anchors for conceptual blends. Journal of Pragmatics, 37, 1555–1577. ———. (2006). The distributed cognition perspective on human interaction. In N. J. Enfield and S. C. Levinson (Eds.), Roots of Human Sociality: Culture, Cognition and Interaction (pp. 375–398). Oxford: Berg. ———. (2011). Enculturating the supersized mind. Philosophical Studies, 152, 437–446. Keil, F. C. (2011). Science starts early. Science, 331, 1022–1023. Ketay, S., Aron, A. & Hedden, T. (2009). Culture and attention: Evidence from brain and behaviour. Progress in Brain Research, 178, 79–92. Kirsh, D. (1995). The intelligent use of space. Artificial Intelligence, 73, 31–68. ———. (2010). Thinking with external representations. AI & Society, 25, 441–454. Kirsh, D. & Maglio, P. (1994). On distinguishing epistemic from pragmatic actions. Cognitive Science, 18, 513–549. Laland, K. N., Odling-Smee, J. & Myles, S. (2010). How culture shaped the human genome: Bringing genetics and the human sciences together. Nature Reviews Genetics, 11, 137–148. Landy, D., Allen, C. & Zednik, C. (2014). 
A perceptual account of symbolic reasoning. Frontiers in Psychology, 5, 1–10.
Landy, D. & Goldstone, R. L. (2007). How abstract is symbolic thought? Journal of Experimental Psychology, 33(4), 720–733. Lende, D. & Downey, G. (Eds.) (2012). The Enculturated Brain: An Introduction to Neuroanthropology. Cambridge, MA: MIT Press. Lyons, I. M., Ansari, D. & Beilock, S. L. (2012). Symbolic estrangement: Evidence against a strong association between numerical symbols and the quantities they represent. Journal of Experimental Psychology: General, 141(4), 635–641. Maguire, E. A., Gadian, D. G., Johnsrude, I. S., Good, C. D., Ashburner, J., Frackowiak, S. J. & Frith, C. D. (2000). Navigation-related structural change in the hippocampi of taxi drivers. PNAS, 97(8), 4398–4403. Marghetis, T. & Nunez, R. (2013). The motion behind the symbols: A vital role for dynamism in the conceptualization of limits and continuity in expert mathematics. Topics in Cognitive Science, 5, 299–316. Menary, R. (2007a). Cognitive Integration – Mind and Cognition Unbounded. Hampshire: Palgrave Macmillan. ———. (2007b). Writing as thinking. Language Sciences, 29, 621–632. ———. (2010a). Cognitive integration and the extended mind. In R. Menary (Ed.), The Extended Mind (pp. 227–244). Cambridge, MA: MIT Press. ———. (2010b). Dimensions of mind. Phenomenology and the Cognitive Sciences, 9, 561–578. ———. (2012). Cognitive practices and cognitive character. Philosophical Explorations, 15(2), 147–164. ———. (2013). Cognitive integration, enculturated cognition and the socially extended mind. Cognitive Systems Research, 25–26, 26–34. ———. (2014). Neural plasticity, neuronal recycling and niche construction. Mind and Language, 29(3), 286–303. ———. (2015). Mathematical cognition – A case of enculturation. In T. Metzinger and J. M. Windt (Eds.), Open MIND, 25, 1–20. Frankfurt am Main: MIND Group. ———. (2016). Pragmatism and the pragmatic turn in cognitive science. In A. K. Engel, K. J. Friston and D. 
Kragic (Eds.), The Pragmatic Turn: Toward Action-Oriented Views in Cognitive Science. Strüngmann Forum Reports (Vol. 18; pp. 219–237). Cambridge, MA: MIT Press. Menary, R. & Kirchhoff, M. (2014). Cognitive transformations and extended expertise. Educational Philosophy and Theory, 46(6), 610–623. Nersessian, N. J. (2005). Interpreting scientific and engineering practices: Integrating the cognitive, social, and cultural dimensions. In M. E. Gorman, R. D. Tweney, D. C. Gooding and A. P. Kincannon (Eds.), Scientific and Technological Thinking (pp. 17–56). London: Lawrence Erlbaum Associates. Nemirovsky, R., Kelton, M. L. & Rhodehamel, B. (2013). Playing mathematical instruments: Emerging perceptuomotor integration with an interactive mathematics exhibit. Journal for Research in Mathematics Education, 44(2), 372–415. Nieder, A. (2009). Prefrontal cortex and the evolution of symbolic reference. Current Opinion in Neurobiology, 19, 99–108. Nieder, A. & Dehaene, S. (2009). Representation of number in the brain. Annual Review of Neuroscience, 32, 185–208. Nieder, A., Diester, I. & Tudusciuc, O. (2006). Temporal and spatial enumeration processes in the primate parietal cortex. Science, 313(5792), 1432–1435. Nisbett, R. E., Choi, I., Peng, K. & Norenzayan, A. (2001). Culture and systems of thought: Holistic versus analytic cognition. Psychological Review, 108(2), 291–310. Norman, D. (1991). Cognitive artifacts. In J. M. Carroll (Ed.), Designing Interaction (pp. 17–38). Cambridge: Cambridge University Press. Odling-Smee, F. J. (2007). Niche inheritance: A possible basis for classifying multiple inheritance systems in evolution. Biological Theory, 2, 276–289. Odling-Smee, F. J., Laland, K. N. & Feldman, M. F. (2003). Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press. O’Regan, J. K. & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939–1011. 
Piazza, M., Pica, P., Izard, V., Spelke, E. & Dehaene, S. (2013). Education enhances the acuity of the nonverbal approximate number system. Psychological Science, 24(6), 1037–1043.
Potts, R. (2012). Evolution and environmental change in early human prehistory. Annual Review of Anthropology, 41, 151–167. Radford, L. (2003). Gestures, speech, and the sprouting of signs: A semiotic-cultural approach to students’ types of generalization. Mathematical Thinking and Learning, 5(1), 37–70. ———. (2009). Why do gestures matter? Sensuous cognition and the palpability of mathematical meanings. Educational Studies in Mathematics, 70(2), 111–126. Reynolds Losin, E. A., Dapretto, M. & Iacoboni, M. (2010). Culture and neuroscience: Additive or synergistic? SCAN, 5, 148–158. Roepstorff, A., Niewöhner, J. & Beck, S. (2010). Enculturing brains through patterned practices. Neural Networks, 23, 1051–1059. Rowlands, M. (1999). The Body in Mind: Understanding Cognitive Processes. Cambridge: Cambridge University Press. ———. (2010). The New Science of the Mind. Cambridge, MA: MIT Press. Sato, M., Cattaneo, L., Rizzolatti, G. & Gallese, V. (2007). Numbers within our hands: Modulation of corticospinal excitability of hand muscles during numerical judgment. Journal of Cognitive Neuroscience, 19(4), 684–693. Sterelny, K. (2003). Thought in a Hostile World – The Evolution of Human Cognition. Oxford: Blackwell Publishing. ———. (2005). Made by each other: Organisms and their environment. Biology and Philosophy, 20, 21–36. ———. (2012). The Evolved Apprentice: How Evolution Made Humans Unique. Cambridge, MA: MIT Press. Stotz, K. (2010). Human nature and cognitive-developmental niche construction. Phenomenology and the Cognitive Sciences, 9, 483–501. Stout, D., Toth, N., Schick, K. & Chaminade, T. (2008). Neural correlates of Early Stone Age toolmaking: Technology, language and cognition in human evolution. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1499), 1939–1949. Sutton, J. (2007). Batting, habit, and memory: The embodied mind and the nature of skill. Sport in Society, 10(5), 763–786. 
Tang, Y., Zhang, W., Chen, K., Feng, S., Ji, Y., Shen, J., Reiman, E. M., . . . Liu, Y. (2006). Arithmetic processing in the brain shaped by cultures. PNAS, 103(28), 10775–10780. Tomasello, M. (1999). The Cultural Origins of Human Cognition. London: Harvard University Press. ———. (2009). Why We Cooperate. London: MIT Press. Turner, J. S. (2000). The Extended Organism: The Physiology of Animal-Built Structures. Cambridge, MA: Harvard University Press. Vaesen, K. (2012). The cognitive bases of human tool use. Behavioral and Brain Sciences, 35, 203–262. Verguts, T. & Fias, W. (2004). Representation of number in animals and humans: A neural model. Journal of Cognitive Neuroscience, 16, 1493–1504. Viarouge, A., Hubbard, E. M., Dehaene, S. & Sackur, J. (2010). Number line compression and the illusory perception of random numbers. Experimental Psychology, 57(6), 446–454. Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press. Whiten, A., Hinde, R., Laland, K. N. & Stringer, C. B. (2011). Culture evolves. Philosophical Transactions of the Royal Society, Biological Sciences, 366, 938–948. Zahidi, K. & Myin, E. (forthcoming). Radically enactive numerical cognition. In G. Etzelmüller and C. Tewes (Eds.), Embodiment in Evolution and Culture. Tübingen: Mohr Siebeck. Zhang, J. (1997). The nature of external representations. Cognitive Science, 21(2), 179–217. Zhang, J. & Norman, D. A. (1994). Representations in distributed cognitive tasks. Cognitive Science, 18, 87–122. ———. (1995). A representational analysis of numeration systems. Cognition, 57, 271–295.
5
THE EVOLUTION OF TRIBALISM
Edouard Machery
Tribalism is a complex psychological phenomenon. It involves emotions (such as disgust and sometimes hatred toward the members of outgroups, or outrage when their behavior violates ingroup norms); preferences (typically, but not always, a preference for interacting with the members of one’s own groups); stereotypes and prejudices (which underlie expectations about ingroup members’ interactions with outgroup members); and normative cognition (people often have different norms governing interactions with ingroup and outgroup members). It is also a socially important phenomenon, which fuels between-group conflicts in the contemporary world – from genocides such as the genocide in Rwanda to unrelenting conflicts such as the Israeli-Palestinian conflict – and possibly within-society cultural phenomena such as racism. Improving our understanding of tribalism may give us more tools for dealing with between-group conflicts and within-society cultural phenomena, for example by allowing better training of the mediators involved in between-group conflicts. The goal of this chapter is to review the hypotheses about the evolution of tribalism. In Section 1, I highlight the importance of a distinctively human form of social organization – tribes or ethnies – during human evolution, and I describe the selective pressures this form of social organization may have given rise to (Gil-White, 1999, 2001; Richerson and Boyd, 1998, 1999, 2005; Moya and Boyd, 2015). In Section 2, I discuss the psychology of tribalism in greater detail. Finally, in Section 3, I discuss the evolution of tribalism, comparing two types of hypotheses (van den Berghe, 1981; Hirschfeld, 1996; Gil-White, 1999, 2001; Kurzban, Tooby, and Cosmides, 2001; Machery and Faucher, 2005a, b; Richerson and Boyd, 2005; Moya, 2013; Moya and Boyd, 2015).
1. The selective pressures created by tribal organization

1.1. The long history of tribes

Culture – the transmission of information that results from observing others or from communicating, including linguistically – is not distinctive of the human species (Whiten et al., 1999; Laland and Janik, 2006), but culture plays an unusually important role in our species: Much of the information that underlies human behavior is culturally transmitted, and the ecological success of the human species is largely due to its capacity for culture (Richerson and Boyd,
2005). Human beings also divide into culturally distinct groups: Members of a given cultural group tend to share many culturally transmitted pieces of information (or “cultural variants”) – ranging from norms, to values, to beliefs, to skills, etc. – and members of distinct cultural groups tend to have access to different culturally transmitted pieces of information. Cultural groups are found at different levels of social organization: In the modern world, nations are typically cultural groups, but so are political parties, racial groups, regional groups, sport fan clubs, and so on. Cultural groups can overlap, and people belong to several distinct such groups. Cultural groups often have a normative unity: Members of a given group are expected to abide by a set of often distinctive, culturally transmitted norms. And cultural groups are often associated with cultural markers: Members of a given cultural group often advertise their affiliation by means of various markers, ranging from accents and dialects, to clothes (e.g., football fans’ scarves), to food, to behaviors (e.g., greeting behavior). Tribes are a distinct type of cultural group (LeVine and Campbell, 1972;Van den Berghe, 1981; Richerson and Boyd, 1998, 1999, 2005; Moya and Boyd, 2015). Tribes are large cultural groups, which encompass thousands of individuals or more. While tribes are naturally very diverse, depending on ecological and cultural contingencies (Kelly, 1995; Richerson and Boyd, 1999), they share some distinctive features. They are characterized by sets of norms that are both extensive and distinctive. These norms include specific cooperation norms, which determine what is owed and when to other tribe members. These cooperation norms often include norms related to conflicts against outgroups, prescribing what tribe members owe the tribe in case of conflict. Tribe members also advertise their affiliation by means of ethnic markers, such as scarification, clothes, dialects, and accents. 
Tribes are often an important focus for self-identification, and group loyalty is a central component of tribal social organization. People exchange ideas and goods (i.e., trade) across tribal boundaries, but conflicts also often emerge at these boundaries (LeVine and Campbell, 1972). The tribal form of social organization differs from other kinds of social organization: In particular, tribes differ from kin-based social groups (extended families) and small, face-to-face coalitional groups. Members of tribes are not necessarily related and they may not know each other. Among primates, the tribal form of social organization is distinctive of human beings; groups in other primates are much smaller, and they are based either on kin relationships or on coalitional affiliation. Tribal organization is thus a novel trait that emerged after the human lineage split from the chimpanzee lineage. The Nuer of South Sudan (Kelly, 1985), the Yanomamö of the Amazon rainforest in Venezuela and Brazil (Chagnon, 2012), and the many indigenous groups of Papua New Guinea (e.g., the Dani tribe) illustrate this form of social organization. Tribal organization is rare in many parts of the contemporary world, and nation-states and other forms of modern political organization have often replaced it. On the other hand, tribal organization has been an important form of organization in human history, and many cultural groups – including nation-states – harbor the features that characterize tribes, such as a strong normative identity and cultural markers (from national languages to national accents to national dress to national dishes, etc.). (I discuss the relation between tribes and other social groups at greater length in Section 1.3 and the relation between tribalism, nationalism, ethnocentrism, racism, and other social-psychological phenomena in Section 2.6.) 
It is not entirely clear how long tribal organization has existed, but Richerson and Boyd (1998, 1999, 2005) have reviewed evidence that tribal organization has been around for at least 50,000 years, and may have existed for substantially longer. In particular, human beings were already using symbolic markers such as beads and ochre marks 100,000 years ago (Klein, 1999;
Henshilwood et al., 2011; Moya, 2013), and these markers, although difficult to interpret with certainty, are similar to the markers that convey tribal affiliation in contemporary and historical tribes as well as in tribe-like groups in the archaeological record.

1.2. Selective pressures

The existence of large cultural groups characterized by distinctive sets of norms, including cooperation norms, is likely to have created selective pressures resulting in the evolution of a distinct psychology, which I will call “tribalism” (Richerson and Boyd, 1998, 1999, 2005; Gil-White, 2001; Machery and Faucher, 2005a; Kelly, 2011; Moya and Boyd, 2015). Tribalism includes the formation of concepts of the relevant tribes in one’s social environment, the identification of people’s tribal membership (classification or categorization) by means of relevant cues (perhaps with an evolved disposition to pay attention to some types of cues), the capacity to reason about tribes in a distinctive manner (generalizing about ingroup and outgroup members, applying one’s knowledge to ingroups or outgroups, noting exceptions, etc.), and the tendency to behave in particular ways (preferences, values, emotions, etc.). Being able to identify ingroup tribal members allows learners to single out useful sources of information. Ingroup tribal members are more likely than outgroup members to possess locally useful ecological and social knowledge, and children and adults would benefit from learning from the former and avoiding learning from the latter. If sensitivity to tribal membership was selected for its contribution to cultural learning, we would expect it to emerge early in children’s lives. As expected, Buttelmann, Zmyj, Daum, and Carpenter (2013) show that 14-month-old children already prefer to imitate ingroup members over outgroup members. 
Identifying tribal membership is also useful for determining the norms of coordination that should govern one’s behavior (Gil-White, 2001; McElreath, Boyd, and Richerson, 2003; Moya and Boyd, 2015), and generalizing about those norms on the basis of observations or interactions is an important skill. Failing to abide by coordination norms (either because one applies them poorly or because one fails to learn them) may be costly, as one may then be excluded from beneficial opportunities. Identifying tribal membership also allows tribe members to identify potential cooperative partners. If people who violate cooperation norms are punished, engaging in cooperative ventures is beneficial, and forsaking them would be costly (Moya and Boyd, 2015). Similar considerations about the costs of failed coordination and the benefits of enforced cooperation suggest that people may also have evolved preferences for interacting with ingroup tribal members. In addition, interactions with outgroup members may trigger a different psychology, involving greater suspicion, caution, etc.

1.3. Ethnicized groups

As noted earlier, tribes have disappeared from many parts of the world, and have been replaced with other forms of social organization. However, many modern social groups have tribal characteristics – they are associated with markers, they prime self-identification, they command loyalty, etc. (van den Berghe, 1981). A possible explanation of this phenomenon is that these modern social groups culturally evolved to take advantage of our evolved tribal psychology (van den Berghe, 1981; Richerson and Boyd, 1998, 1999). Groups that manage to elicit loyalty, to prime identification, etc., are more successful than groups that don’t; as a result, existing social groups have tribal features. I will call these groups “ethnicized groups”.
The evolution of tribalism
2. Tribalism

2.1. Acquiring ethnic concepts on the basis of ethnic markers

One of the main tasks of a tribal psychology is to form concepts of the tribes in one’s social environments, concepts that store information about the important characteristics of ingroup and outgroup tribe members. Machery and Faucher (2005a) hypothesize that an evolved cognitive system – an ethnic concepts acquisition device (ECAD) modeled on Chomsky’s Language Acquisition Device – is dedicated to this task, and Moya has more recently followed suit (Moya, 2013; Moya and Boyd, 2015). The task of this learning mechanism is complicated. Any social environment is made up of numerous social groups, some based on kin relations, some based on cooperative interests, some based on religion, etc. The hypothesized ECAD must identify which of those social groups are tribes. Mistakes are costly. Failing to identify ethnic groups would prevent the learner from acquiring useful information from reliable sources as well as limit the number of beneficial interactions she could engage in. This would also lead her to treat ingroup tribal members as outgroup members, something that may result in punishment. One of the main tasks of the hypothesized ECAD is to identify the cues that indicate tribal affiliation, that is, the locally relevant ethnic markers. These cues are diverse, ranging from accents, to dialects (syntax and vocabulary), to various aspects of behavior (greetings, distance from one another in discussion, gestures during conversation, etc.), to clothes and marks on the body, to symbols such as flags, to rituals and customs (food and other cultural traditions). The hypothesized ECAD must be flexible so as to identify the locally relevant cues, since ethnic markers vary from cultural context to cultural context. On the other hand, some kinds of ethnic markers seem to recur.
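One way to picture this combination of flexibility with recurring default expectations is as Bayesian revision of an evolved prior. The following sketch is purely illustrative: the Beta-Bernoulli model, the prior strength, and the observation counts are my assumptions for exposition, not parameters drawn from the literature. It shows how a strong initial expectation that a cue such as accent marks tribal boundaries can be revised downward when the cue repeatedly fails to predict membership locally:

```python
def update_beta(alpha: float, beta: float, observations: list) -> tuple:
    """Conjugate Beta-Bernoulli update: each observation records whether a
    candidate ethnic marker (e.g., accent) correctly predicted tribal
    membership in the local environment."""
    for predicted in observations:
        if predicted:
            alpha += 1  # the cue was diagnostic this time
        else:
            beta += 1   # the cue failed to predict membership
    return alpha, beta

# A strong "evolved prior" that language marks tribal boundaries:
# prior mean alpha / (alpha + beta) = 0.8 before any local evidence.
alpha, beta = 8.0, 2.0

# In an environment where accent rarely tracks tribal membership,
# repeated predictive failures pull the expectation down.
alpha, beta = update_beta(alpha, beta, [False] * 20 + [True] * 2)
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 0.3125, well below the 0.8 prior
```

The design choice mirrors the verbal proposal: the prior biases early learning toward linguistic cues, while the update rule lets sufficiently consistent counter-evidence override the default.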
Moya (2013) proposes that the ECAD comes equipped with expectations (or evolved priors) that are revised, in a Bayesian way, when they fail to be predictive of tribal membership. In particular, she suggests, and provides some evidence, that learners have an evolved expectation that language marks tribal boundaries, which they can revise in light of relevant evidence. This hypothesis is consistent with the fact that young children treat language and accent as important social cues to determine whom to trust, whom to be friends with, and whom to get information from (e.g., Kinzler, Shutts, DeJesus, and Spelke, 2009; Kinzler, Corriveau, and Harris, 2011). Another task of the ECAD is to form generalizations about ingroup and outgroup tribes. Machery and Faucher (2005a, b) have proposed that people expect tribes and ethnicized groups to have a rich inductive potential (see also Gil-White, 2001; Machery and Faucher, forthcoming). Consistent with this proposal, Birnbaum and colleagues (2010) have shown that when categories are visually and verbally marked, children consistently prefer to use ethnic identity rather than gender, religiosity, social status, and personality to generalize a property from one individual to another (see also Diesendruck and HaLevi, 2006).

2.2. Essentialism, biologization, immutability, and inheritance

Tribalism does not boil down to the formation of ethnic concepts; typically, people also reason about tribe members in a peculiar way. A large literature claims that social groups, not just tribes and ethnicized groups, are “essentialized” (e.g., Gelman, 2003; Birnbaum et al., 2010; Rhodes, 2012; Rhodes, Leslie, and Tworek, 2012; Segall, Birnbaum, Deeb, and Diesendruck, 2015). This theory, called “psychological essentialism”, holds that people think that category
members share a hidden, unknown essence that is manifested in typical and diagnostic properties. Animals, plants, and social groups as diverse as races, religions, and genders are meant to be essentialized in this sense. The literature on psychological essentialism is vast, but it suffers from several problems. Most importantly, the experimental paradigms used to provide evidence for psychological essentialism have little connection with the claim that people think that category members have a hidden, unknown essence (Strevens, 2000; see also Birnbaum et al., 2010; Machery et al., 2013). The switch-at-birth paradigm is commonly used in the psychological essentialism literature. After having been told about a baby raised by an adoptive family, participants are asked whether the growing child will possess the characteristics – e.g., the body type, the beliefs, the race, the religion, the language, etc. – of the birth or of the adoptive family. Another paradigm examines which of a child’s characteristics – e.g., her race, ethnic affiliation, etc. – will remain constant over life. These paradigms, and similar ones, provide useful information about which properties people think are inherited (instead of socially transmitted) and which properties people think are immutable, but the connection with essentialism, strictly understood as a belief in essences, is not clear. Furthermore, there is some evidence that essentialism, so understood, is not universal (Olivola and Machery, 2014; Machery et al., ms), and that whether or not it is prevalent in a given culture is explained in part by social and ideological factors (Mahalingam, 2007; Olivola and Machery, 2014).
So instead of proposing that tribalism involves the essentialization of tribes and ethnicized groups, one may propose that tribalism involves the belief that tribal identity, as well as the features that are characteristic of tribes or ethnicized groups, is inherited at birth from one’s birth parents and stable across life. Gil-White (1999, 2001) has argued that tribes are so conceived because the ethnic cognitive system is based on folk biology: We view ethnic identity as inherited and stable because we biologize tribes and ethnicized groups and because we view species identity and typical characteristics as inherited and stable. Machery and Faucher (2005a) concur. They add that while folk biology is in part universal (Atran, 1990; Berlin, 1992; Medin and Atran, 1999), much of its content (the particular ways species are thought about) is culturally transmitted (Medin and Atran, 2004). As a result, tribes or ethnicized groups should be thought of differently across cultures and times, depending on the culture-specific content of folk biology. For instance, since in ancient Greece species were thought to change as a function of climate and geography rather than being immutable, tribes and ethnicized groups must have been thought of as being influenced by climate and geography and as having the potential to change. Scholarship about racial classification in ancient Greece and Rome confirms this prediction (Isaac, 2004). Recent work has challenged the claim that a belief in inherited and immutable ethnic identity and ethnic characteristics is a typical component of ethnic psychology (Kanovsky, 2007; Moya, Boyd, and Henrich, 2015; Moya and Scelza, 2015). Moya and colleagues argue that, in light of the frequency of migrations in the ethnographic record, we should not expect tribalism to include a belief in inherited and immutable tribal identity and characteristics.
In addition, the belief in inherited and immutable tribal identity and characteristics seems to vary across cultures. Moya and Scelza (2015) report that in switch-at-birth experiments, the rate of answers consistent with the view that ethnic membership is thought to be inherited varies from 8% to 87%, a very large spread. Furthermore, they present data from switch-at-birth studies with the Himba, a population in northern Namibia: Himba adults do not treat tribal membership or characteristics as being inherited. Similarly, vignettes describing the migration of a teenager from one tribal group to another suggest that Himba adults do not treat tribal membership or characteristics as being immutable either. On the other hand, developmental findings by Astuti and colleagues (2004) and Moya and colleagues (2015) suggest that children may be disposed
to think of tribal identity and characteristics as inherited – this may be an evolved prior – and that this expectation is revised when migration and assimilation allow outgroup tribal members or their children to become genuine ingroup tribal members. Inheritance, immutability, and possibly biologization may be typical characteristics of the way we think – or perhaps are disposed to think – about tribes and ethnicized groups. In addition, many beliefs about such groups are acquired by social transmission. Such beliefs vary across cultures. People don’t think about tribes the same way among the Yanomamö and among the !Kung in the Kalahari Desert. The cultural variation of folk biology is one source of this variation in tribalism.

2.3. Cooperative tendencies vs. distrust and hostility

Tribalism is not limited to cold cognition; it also encompasses emotions and behavioral preferences. Of particular importance is a preference for interacting with ingroup tribal members over members of outgroup tribes. That is, tribes are a locus of coordination and cooperation. Much historical and archaeological evidence suggests that tribes have been involved in large-scale warfare, which combines extensive within-group coordination and between-group hostility (Richerson and Boyd, 1998, 1999). In addition, outgroup tribal relations may elicit negative emotions such as distrust. For instance, Diesendruck and Menahem (2015) report that Jewish children expressed more prejudice and hostility against Arabs after having been primed to think of Jews and Arabs as having unique properties and of ethnic membership as being inherited. It has also been hypothesized that disgust has been co-opted by our tribal psychology (Faulkner, Schaller, Park, and Duncan, 2004; Navarrete and Fessler, 2006; Kelly, 2011). This proposal has been developed in various ways.
Navarrete and Fessler propose that ethnic markers carry information about contamination risk because locally adapted immune systems may not be able to deal with pathogens carried by people living elsewhere (for critical discussion, see Moya and Boyd, 2015). As a result, ethnic markers belong to “the proper domain” of disgust (the proper domain being “all the information that it is the module’s biological function to process”, according to Sperber [1994, 52]). By contrast, Kelly proposes that (among other functions) disgust has been exapted and recruited by our tribal psychology so as to promote within-group interactions to the detriment of between-group interactions. While outgroup interactions may elicit negative emotions, ranging from distrust to disgust, it should not be assumed that our tribal psychology includes a preference for avoiding interactions across tribal boundaries. People have long exchanged goods and ideas across such boundaries, and migrations across groups are an important feature of human history, as is shown by the low level of between-group genetic variation. Rather, people may bring a distinct psychology to bear on outgroup tribal interactions. One may hypothesize that distrust and caution are characteristic of such interactions. Reciprocity is likely to be an important characteristic of successful cooperative interactions in this context, and participants involved in such interactions are likely to be particularly sensitive to whether their partners are actually fulfilling their side of the reciprocal interactions. That is, the psychology of reciprocal altruism may manifest itself particularly clearly in the context of outgroup tribal interactions.

2.4. Tribalism and moral psychology

Tribalism interacts with moral psychology in various ways (Prinz, 2006; Greene, 2013). The distinction between us and them, where “us” corresponds to members of the same tribe or
ethnicized group, and “them” corresponds to members of other tribes or ethnicized groups, seems to be built into moral psychology (Bowles and Gintis, 2004). Different norms govern our interactions with us and them: What we intuitively owe people tracks this distinction. Bernhard, Fischbacher, and Fehr (2006) examine how members of various indigenous groups in Papua New Guinea decide to punish, at their own cost, unfair allocations of windfall gains in a dictator game (third-party punishment game). They report that participants punish violations of norms against ingroup members much more than violations against outgroup members. Habyarimana, Humphreys, Posner, and Weinstein (2007) show that the provision of public goods decreases as a function of ethnic heterogeneity. Their behavioral-economics results suggest that the high provision of public goods in conditions of ethnic homogeneity is due to an increased threat of sanction, which results in higher cooperation, rather than to greater altruism toward ingroup tribal members. Moralization can also interact with one’s attitudes toward other tribes or ethnicized groups. Reifen Tagar, Morgan, Halperin, and Skitka (2014) examine how Israelis conceive of the Israeli-Palestinian conflict. The relationship between Israelis’ political views and their hostility toward Palestinians (acceptance of collateral damage and support for retribution) was moderated by their moralization of the Israeli-Palestinian conflict. Left-wing and right-wing Israelis who did not moralize their views and feelings about this conflict did not differ in terms of intergroup hostility and support for compromise; right-wing Israelis who moralized their views and feelings about the Israeli-Palestinian conflict expressed more hostility and less support for compromise than left-wing Israelis who moralized their views and feelings.

2.5. The unity of tribal psychology?

It is not entirely clear how unified tribalism is.
On the one hand, tribalism could be constituted by various components – the detection of ethnic membership, the expectation of rich inductive potential, the essentialist or biological reasoning about ethnic membership and characteristics, a preference for interacting with ingroup tribal members, distinct emotions during interactions with outgroup members, etc. – that are largely independent from one another and that manifest themselves in different ecological and social conditions. On the other hand, these components could be unified and could manifest themselves in nearly all conditions: Whenever one identifies the ethnic membership of another person, particular reasoning schemas would be triggered, particular preferences would come online, etc. In line with the latter hypothesis, Halperin and colleagues (2011) report that a belief in the immutability of group characteristics in the context of the Israeli-Palestinian conflict (as measured by agreement with statements such as “Groups can’t change their basic characteristics”) not only predicts but also causally influences intergroup hostility and willingness to compromise (see also Diesendruck and Menahem, 2015, mentioned above). These findings suggest that between-group emotional reactions and cognitive schemas are not entirely disunified. On the other hand, Moya and colleagues have recently argued that the components of tribalism are not tightly integrated and dissociate in particular social conditions. They argue that hostility toward outgroup tribal members and the belief in the immutability of tribal membership are likely to be independent from one another. According to Moya and Scelza (2015), the Himba have some “animosity” (although no “open hostility”) toward a wealthier ethnic group, the Damara, but view ethnic membership as relatively fluid: Migrants between the two tribes can acquire the identity of the new tribe.
Similarly, their results suggest that tribal identification and the acquisition of tribal stereotypes can be independent of biologization.
2.6. Tribalism and other social-psychological phenomena

As noted earlier, various groups in modern societies are ethnicized: They are associated with markers, coordination and cooperation take place within their boundaries, and ingroup members’ identity centers on membership in them, despite the fact that, in part because of their sheer size, they are not tribes. These groups seem to trigger psychological processes (from reasoning to preferences to emotional reactions) that are similar to tribalism. This observation suggests that some social-psychological phenomena – perhaps racism, nationalism, and xenophobia – may well be the expression of tribalism in a modern context. That is, social groups such as races and nations may have superficial features that are sufficiently similar to the cues eliciting our ethnic psychology. In this spirit, Gil-White (2001) and Machery and Faucher (2005a, b) have hypothesized that racism is the expression of tribalism in a modern context (see also Kelly, Machery, and Mallon, 2010). Racial groups happen to possess various features that trigger the ethnic concepts acquisition device. The physical properties that are central to race membership – from skin color in some historical contexts to body type, hair type, and eye color in other historical contexts – are similar to some ethnic markers (Gil-White, 2001, 533–534). Moreover, they tend to be shared by parents and children exactly as ethnic markers tend to be (migrations notwithstanding). These similarities may lead people to form racial concepts on the model of ethnic concepts (e.g., BLACK and WHITE, the concepts of Nordic, Alpine, or Mediterranean races, the concept of an Aryan race, etc.), to reason about races the way they are disposed to reason about tribes (perhaps to biologize them), and to have various behavioral dispositions toward members of one’s own and other races, exactly as they would with respect to members of one’s own and other tribal groups.
People are more likely to form such racial concepts if social and historical circumstances have led people who, perhaps by historical accident, happen to share superficial physical features to share some distinctive cultural norms too. By contrast, if the superficial racial features do not map at all onto distinctive cultural practices, norms, and culturally transmitted behaviors, the relevant concepts may be abandoned. The formation of racial concepts may then have a looping effect. People may use the superficial phenotypic features associated with racial membership to determine with whom to coordinate and cooperate, whom to favor and whom to discriminate against, reinforcing the distinctiveness of the racial groups. It is worth emphasizing that Gil-White is not claiming that racial cognition is an adaptation. Rather, our hypothesized tribal psychology is assumed to misfire when it is elicited by the superficial phenotypic features associated with racial membership. On this view, racist social-psychological phenomena are by-products of a psychology evolved for other functions. Nor does the hypothesis under consideration imply that racism is hard to change. Moya and colleagues’ research suggests that the formation of ethnic concepts is relatively flexible. In particular, Moya (2013) provides evidence that people can stop treating linguistic markers (syntax, accent, etc.) as socially meaningful cues. When such cues – including perhaps the phenotypic markers that by hypothesis elicit our tribal psychology – do not map onto meaningful social distinctions (e.g., in societies where skin color is socially meaningless), people stop treating the relevant cues as identifying tribal boundaries. The evidence for the hypothesis that racism and other social-psychological phenomena are by-products of an evolved tribalism is indirect.
Machery and Faucher (2005a) have argued that satisfying theories of racial phenomena must be able to explain their recurrence across cultures and times, and have criticized purely historical or social-constructionist accounts of racism on this ground (see also van den Berghe, 1981). By contrast, the hypothesis under consideration makes sense of the widespread historical and cultural distribution of race-related phenomena.
Furthermore, people seem to think about races and tribes in similar ways (Gil-White, 2001; Machery and Faucher, forthcoming).
3. The evolution of tribalism

In this section, I discuss two families of theories about the evolution of ethnic cognition: by-product theories and adaptationist theories (for additional discussion, see Machery and Faucher, 2005b).

3.1. By-product theories

By-product theories start with the idea that human evolved psychology includes various cognitive and emotional processes dedicated to social life, and propose that ethnic cognition is merely an instance of such processes. Theories within this family concur that social cognition should be divided into different systems: For instance, on such views, people do not think about gender or age groups the way they think about kin groups, coalitions, tribes, or ethnicized groups. Social-psychological perspectives that ignore these differences are at best incomplete, if not outright misleading. On the other hand, by-product theories disagree about the nature of the relevant cognitive and emotional processes: Some highlight processes dedicated to thinking about kin-based groups, others processes dedicated to thinking about coalitions, while yet others appeal to processes for thinking about social groups in general. Furthermore, these theories may disagree about whether the relevant cognitive and emotional processes elicited by tribes have been exapted for this purpose (the way feathers were exapted for flight after having evolved for insulation) or whether tribes just happen to trigger these processes, which would then not have been modified by natural selection to underlie identification, reasoning, and preferences related to tribes. Following Hamilton’s (1964) groundbreaking work on kin selection, van den Berghe (1978, 1981) has highlighted the importance of kin groups in primate social life, and he has hypothesized that kin selection resulted in a psychology dedicated to identifying kin members, to motivating support for kin group members, and to discriminating against non-kin.
He then proposed that ethnic phenomena are a by-product of this kin psychology. By contrast, Tooby and Cosmides have long emphasized the importance of reciprocal exchanges in primate social life, and they have proposed that reciprocal altruism and indirect reciprocity (Trivers, 1971; Axelrod and Hamilton, 1981; Alexander, 1987) selected for a psychology dedicated to identifying trustworthy cooperation partners. Cheater detection and memory for cheaters are meant to be part of this evolved coalitional psychology (Cosmides, 1989; Cosmides and Tooby, 1992). Together with Kurzban, they have also proposed that racism is a by-product of this evolved coalitional psychology (Kurzban, Tooby, and Cosmides, 2001; Cosmides, Tooby, and Kurzban, 2003; see also Pietraszewski, Cosmides, and Tooby, 2014; Pietraszewski et al., 2015), and their hypothesis plausibly generalizes to social cognition with respect to tribal groups. They present evidence that the encoding of racial membership (but not of gender or accent [see Pietraszewski and Schwartz, 2014]) decreases, indeed disappears in some experiments, when racial membership and coalitional cues are orthogonal to one another (Kurzban et al., 2001; Pietraszewski et al., 2014). For instance, in Kurzban and colleagues’ (2001) study 1, participants are presented with eight sentences paired with pictures of eight speakers. The pictures represent young men dressed identically. From the sentences, one can infer that the speakers divide into two groups that have been involved in some kind of fight. Their races (White and African-American) can be seen in the pictures. Participants are given
a surprise recall task with the pictures still in front of them. Study 2 is identical to study 1, except for the fact that an arbitrary and non-permanent cue (the color of a T-shirt) indicates affiliation. If races are treated as mere coalitions, providing relevant coalitional information should decrease the reliance on race as a proxy for coalitional affiliation. In a series of experiments, Kurzban and colleagues report findings consistent with this prediction. These experimental results are in line with the expectation that a coalitional system should be extremely flexible, since coalitional affiliation can change very quickly in a primate environment and can thus be correlated with different cues. Finally, Hirschfeld (1996) has proposed that human evolved psychology includes a “human kind module”, that is, a cognitive system dedicated to reasoning about the salient social groups in people’s social environments. This human kind module is flexible in order to take into account the variety of social groups in people’s environments. In modern environments, it is hypothesized to be triggered by castes in India, races in the USA, or recent immigrant groups in most countries (Chinese in South Asia, North Africans or East Europeans in Western Europe, etc.), although it did not evolve to deal with such groups. Hirschfeld’s evidence is developmental. Using switch-at-birth experiments and other experimental tasks, he presents evidence that young children in France and in the USA assume race membership to be inherited and immutable. Furthermore, the development of racial concepts does not seem to be a mere by-product of visual categorization. All these theories are challenged by the differences between kin psychology, coalitional psychology, and tribalism.
Despite the frequent use of kin metaphors in an ethnic context (myths often describe tribal members as having a shared ancestry), a point discussed at length by van den Berghe (1981), the cues that underlie ethnic identification and kin identification are very different, and many aspects of kin psychology do not seem to be exported to ethnic membership. Furthermore, coalitional affiliation is much less stable than ethnic affiliation: People can switch from one coalition to another (including contemporary hunter-gatherers and our ancestors during human evolution; see Lewis et al., 2014). While the biologization or essentialization of ethnic membership may not be a universal phenomenon (as discussed above), it is too common for ethnic cognition to be merely an instance of coalitional psychology.

3.2. Distinctively ethnic adaptations

Instead of proposing that ethnic cognition is an instance of a form of social cognition evolved for thinking about other types of social groups, a second family of evolutionary theories proposes that it is a distinct psychological adaptation, dedicated to thinking about tribes. These theories disagree about the evolutionary trajectory leading to this distinct psychological adaptation. Gil-White (2001) has proposed that ethnic cognition is an exaptation of human evolved folk biology. That is, humans have evolved to think of tribes as if they were biological species (for critical discussion, see Machery and Faucher, 2005a, b). Our ancestors whose folk biology was easily triggered by ethnic markers were evolutionarily more successful than those whose folk biology was not. Thinking about tribes as if they were species may prime people to generalize inductively, exactly as they do about animals. Since ingroup tribal members behave similarly and since there are many behaviors that are distinctive of other tribes (due in part to the fact that tribes have distinctive norms), generalizations may tend to be true (Gil-White, 2001, 530–532).
Moreover, a biological view of the tribal world may reduce the frequency of those interactions across ethnic boundaries whose success requires shared norms – particularly mating (Gil-White, 2001, 532). Evidence that ethnic membership as well as the features that are characteristic of particular tribes are thought to be inherited and immutable supports Gil-White’s exaptationist
scenario. Using switch-at-birth experiments, Gil-White reports evidence that Torguuds and Kazakhs, semi-nomadic pastoralists who live in the same environment but are territorially segregated, biologize ethnic membership. Most of his participants expected tribal membership to be impervious to the rearing environment and to be instead inherited. Research by Diesendruck and colleagues provides further evidence for Gil-White’s proposal (see also Mahalingam’s [2003] consistent findings with Indian Brahmin adults). Diesendruck, Birnbaum, Deeb, and Segall (2013) show that second-graders and older children think that ethnic identity (Jewish or Arab) is more likely to be inherited than profession, religiosity, socio-economic status, and body build. Kindergartners tend to think that ethnic identity is transmitted biologically rather than socially, although older children leave more room for educational circumstances in the determination of ethnic identity. Evidence that tribal membership is judged to be socially determined and changeable undermines Gil-White’s proposal. In addition, his exaptationist scenario assumes that migration across tribal boundaries was not extremely common during human evolution, a controversial point (Moya, 2013; Moya and Boyd, 2015). Moya and colleagues have recently developed a different adaptationist scenario. They reject Gil-White’s hypothesis that ethnic cognition is built upon an exapted folk biology; rather, on their view, ethnic cognition is its own adaptation. It is dedicated to the identification of ethnic groups in the learner’s environment and to the formation of generalizations about the behavioral norms prevalent in different tribes. This system may have revisable expectations about the cues that indicate membership (e.g., linguistic cues) and about the cross-generational transmission and lifelong stability of tribal identity.
Whether or not people end up thinking that tribal identity is inherited and immutable may be determined by the frequency of migration in their social environment. Moya and colleagues also insist that different social situations will bring various, largely independent psychological systems, including folk biology, to bear on reasoning about tribes. This view predicts cross-cultural variation in the patterns of reasoning about tribes, with perhaps some cross-cultural tendencies that reflect the evolved predispositions of the system.
Conclusion

It is plausible that tribes – a distinctively human form of social organization – have created selective pressures that have influenced human social cognition. Many social groups in contemporary environments – ethnicized groups – ape tribes. A set of cognitive and emotional processes as well as group-related preferences – tribalism – seems to be involved when we are thinking or acting in the context of tribes or ethnicized groups, although the unity of these systems is controversial. The evolutionary trajectory leading to tribalism is debated, and this chapter has reviewed five different evolutionary scenarios.
References

Alexander, R. D. (1987). The Biology of Moral Systems. New Brunswick, NJ: Transaction Publishers.
Astuti, R., Carey, S. & Solomon, G. (2004). Constraints on conceptual development: A case study of the acquisition of folkbiological and folksociological knowledge in Madagascar. Monographs of the Society for Research in Child Development, 69(3), 1–135.
Atran, S. (1990). Cognitive Foundations of Natural History. Cambridge: Cambridge University Press.
Axelrod, R. & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211, 1390–1396.
Berlin, B. (1992). Principles of Ethnobiological Classification. Princeton, NJ: Princeton University Press.
The evolution of tribalism

Bernhard, H., Fischbacher, U. & Fehr, E. (2006). Parochial altruism in humans. Nature, 442, 912–915.
Birnbaum, D., Deeb, I., Segall, G., Ben-Eliyahu, A. & Diesendruck, G. (2010). The development of social essentialism: The case of Israeli children’s inferences about Jews and Arabs. Child Development, 81, 757–777.
Bowles, S. & Gintis, H. (2004). Persistent parochialism: Trust and exclusion in ethnic networks. Journal of Economic Behavior & Organization, 55, 1–23.
Buttelmann, D., Zmyj, N., Daum, M. & Carpenter, M. (2013). Selective imitation of in-group over out-group members in 14-month-old infants. Child Development, 84, 422–428.
Chagnon, N. (2012). The Yanomamö. Belmont, CA: Nelson Education.
Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31, 187–276.
Cosmides, L. & Tooby, J. (1992). Cognitive adaptations for social exchange. In J. H. Barkow, L. Cosmides and J. Tooby (Eds.), The Adapted Mind (pp. 163–228). New York: Oxford University Press.
Cosmides, L., Tooby, J. & Kurzban, R. (2003). Perceptions of race. Trends in Cognitive Sciences, 7, 173–179.
Diesendruck, G., Birnbaum, D., Deeb, I. & Segall, G. (2013). Learning what is essential: Relative and absolute changes in children’s beliefs about the heritability of ethnicity. Journal of Cognition and Development, 14(4), 546–560.
Diesendruck, G. & HaLevi, H. (2006). The role of language, appearance, and culture in children’s social category-based induction. Child Development, 77, 539–553.
Diesendruck, G. & Menahem, R. (2015). Essentialism promotes children’s inter-ethnic bias. Frontiers in Psychology, 6, 1180. doi:10.3389/fpsyg.2015.01180
Faulkner, J., Schaller, M., Park, J. H. & Duncan, L. A. (2004). Evolved disease-avoidance mechanisms and contemporary xenophobic attitudes. Group Processes & Intergroup Relations, 7, 333–353.
Gelman, S. A. (2003). The Essential Child: Origins of Essentialism in Everyday Thought. New York: Oxford University Press.
Gil-White, F. (1999). How thick is blood? The plot thickens . . . : If ethnic actors are primordialists, what remains of the circumstantialists/primordialists controversy? Ethnic and Racial Studies, 22, 789–820.
———. (2001). Are ethnic groups biological ‘species’ to the human brain? Current Anthropology, 42, 515–554.
Greene, J. (2013). Moral Tribes: Emotion, Reason and the Gap Between Us and Them. London: Atlantic Books Ltd.
Habyarimana, J., Humphreys, M., Posner, D. N. & Weinstein, J. M. (2007). Why does ethnic diversity undermine public goods provision? American Political Science Review, 101, 709–725.
Halperin, E., Russell, A. G., Trzesniewski, K. H., Gross, J. J. & Dweck, C. S. (2011). Promoting the Middle East peace process by changing beliefs about group malleability. Science, 333, 1767–1769.
Hamilton, W. D. (1964). The genetical evolution of social behaviour. II. Journal of Theoretical Biology, 7, 17–52.
Henshilwood, C. S., d’Errico, F., Van Niekerk, K. L., Coquinot, Y., Jacobs, Z., Lauritzen, S.-E., . . . García-Moreno, R. (2011). A 100,000-year-old ochre-processing workshop at Blombos Cave, South Africa. Science, 334, 219–222.
Hirschfeld, L. A. (1996). Race in the Making: Cognition, Culture, and the Child’s Construction of Human Kinds. Cambridge, MA: MIT Press.
Isaac, B. H. (2004). The Invention of Racism in Classical Antiquity. Princeton, NJ: Princeton University Press.
Kanovsky, M. (2007). Essentialism and folksociology: Ethnicity again. Journal of Cognition and Culture, 7, 241–281.
Kelly, D. (2011). Yuck! The Nature and Moral Significance of Disgust. Cambridge, MA: MIT Press.
Kelly, D., Machery, E. & Mallon, R. (2010). Race and racial cognition. In J. Doris and the Moral Psychology Research Group (Eds.), The Oxford Handbook of Moral Psychology (pp. 433–472). Oxford: Oxford University Press.
Kelly, R. C. (1985). The Nuer Conquest: The Structure and Development of an Expansionist System. Ann Arbor: University of Michigan Press.
Kelly, R. L. (1995). The Foraging Spectrum. Washington, D.C.: Smithsonian Institution Press.
Kinzler, K. D., Corriveau, K. H. & Harris, P. L. (2011). Children’s selective trust in native-accented speakers. Developmental Science, 14, 106–111.
Kinzler, K. D., Shutts, K., DeJesus, J. & Spelke, E. S. (2009). Accent trumps race in guiding children’s social preferences. Social Cognition, 27, 623–634.
Edouard Machery

Klein, R. G. (1999). The Human Career: Human Biological and Cultural Origins. Chicago: University of Chicago Press.
Kurzban, R., Tooby, J. & Cosmides, L. (2001). Can race be erased? Coalitional computation and social categorization. Proceedings of the National Academy of Sciences, 98, 15387–15392.
Laland, K. N. & Janik, V. M. (2006). The animal cultures debate. Trends in Ecology & Evolution, 21, 542–547.
LeVine, R. A. & Campbell, D. T. (1972). Ethnocentrism: Theories of Conflict, Ethnic Attitudes, and Group Behavior. New York: John Wiley and Sons.
Lewis, H. M., Vinicius, L., Strods, J., Mace, R. & Migliano, A. B. (2014). High mobility explains demand sharing and enforced cooperation in egalitarian hunter-gatherers. Nature Communications, 5, 5789.
Machery, E. & Faucher, L. (2005a). Social construction and the concept of race. Philosophy of Science, 72, 1208–1219.
———. (2005b). Why do we think racially? Culture, evolution and cognition. In H. Cohen and C. Lefebvre (Eds.), Categorization in Cognitive Science (pp. 1009–1033). Amsterdam: Elsevier.
———. (Forthcoming). The folk concept of race. In A. Wikforss and T. Marques (Eds.), Shifting Concepts: The Philosophy and Psychology of Conceptual Variability. Oxford: Oxford University Press.
Machery, E., Olivola, C. Y., Cheon, H., Kurniawan, I. T., Mauro, C., Struchiner, N. & Susianto, H. (2013). Is folk essentialism a fundamental feature of human cognition? Manuscript submitted for publication.
Mahalingam, R. (2003). Essentialism, culture, and power: Representations of social class. Journal of Social Issues, 59, 733–749.
———. (2007). Essentialism, power, and the representation of social categories: A folk sociology perspective. Human Development, 50, 300–319.
McElreath, R., Boyd, R. & Richerson, P. J. (2003). Shared norms can lead to the evolution of ethnic markers. Current Anthropology, 44, 122–130.
Medin, D. L. & Atran, S. (Eds.). (1999). Folkbiology. Cambridge, MA: MIT Press.
———. (2004). The native mind: Biological categorization and reasoning in development and across cultures. Psychological Review, 111, 960–983.
Moya, C. (2013). Evolved priors for ethnolinguistic categorization: A case study from the Quechua–Aymara boundary in the Peruvian Altiplano. Evolution and Human Behavior, 34, 265–272.
Moya, C. & Boyd, R. (2015). Different ethnic phenomena can correspond to distinct boundaries. Human Nature, 26, 1–27.
Moya, C., Boyd, R. & Henrich, J. (2015). Reasoning about cultural and genetic transmission: Developmental and cross-cultural evidence from Peru, Fiji, and the United States on how people make inferences about trait transmission. Topics in Cognitive Science, 7, 595–610.
Moya, C. & Scelza, B. (2015). The effect of recent ethnogenesis and migration histories on perceptions of ethnic group stability. Journal of Cognition and Culture, 15, 135–177.
Navarrete, C. D. & Fessler, D. M. (2006). Disease avoidance and ethnocentrism: The effects of disease vulnerability and disgust sensitivity on intergroup attitudes. Evolution and Human Behavior, 27, 270–282.
Olivola, C. & Machery, E. (2014). Is psychological essentialism an inherent feature of human cognition? Behavioral and Brain Sciences, 37, 499.
Pietraszewski, D., Cosmides, L. & Tooby, J. (2014). The content of our cooperation, not the color of our skin: An alliance detection system regulates categorization by coalition and race, but not sex. PLoS ONE, 9, e88534.
Pietraszewski, D., Curry, O. S., Petersen, M. B., Cosmides, L. & Tooby, J. (2015). Constituents of political cognition: Race, party politics, and the alliance detection system. Cognition, 140, 24–39.
Pietraszewski, D. & Schwartz, A. (2014). Evidence that accent is a dedicated dimension of social categorization, not a byproduct of coalitional categorization. Evolution and Human Behavior, 35, 51–57.
Prinz, J. J. (2006). Gut Reactions: A Perceptual Theory of Emotion. Oxford: Oxford University Press.
Reifen Tagar, M., Morgan, G. S., Halperin, E. & Skitka, L. J. (2014). When ideology matters: Moral conviction and the association between ideology and policy preferences in the Israeli–Palestinian conflict. European Journal of Social Psychology, 44, 117–125.
Rhodes, M. (2012). Naïve theories of social groups. Child Development, 83, 1900–1916.
Rhodes, M., Leslie, S. J. & Tworek, C. M. (2012). Cultural transmission of social essentialism. Proceedings of the National Academy of Sciences, 109, 13526–13531.
Richerson, P. J. & Boyd, R. (1998). The evolution of human ultra-sociality. In I. Eibl-Eibesfeldt and F. K. Salter (Eds.), Indoctrinability, Ideology and Warfare (pp. 71–96). New York: Berghahn Books.
———. (1999). Complex societies: The evolution of a crude superorganism. Human Nature, 10, 253–289.
———. (2005). Not by Genes Alone: How Culture Transformed Human Evolution. Chicago: University of Chicago Press.
Segall, G., Birnbaum, D., Deeb, I. & Diesendruck, G. (2015). The intergenerational transmission of ethnic essentialism: How parents talk counts the most. Developmental Science, 18(4), 543–555.
Sperber, D. (1994). The modularity of thought and the epidemiology of representations. In L. Hirschfeld and S. Gelman (Eds.), Mapping the Mind (pp. 39–67). Cambridge: Cambridge University Press.
Strevens, M. (2000). The essentialist aspect of naive theories. Cognition, 74, 149–175.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35–57.
Van den Berghe, P. L. (1978). Race and ethnicity: A sociobiological perspective. Ethnic and Racial Studies, 1, 401–411.
———. (1981). The Ethnic Phenomenon. New York: Elsevier.
Whiten, A., Goodall, J., McGrew, W. C., Nishida, T., Reynolds, V., Sugiyama, Y., . . . Boesch, C. (1999). Cultures in chimpanzees. Nature, 399, 682–685.
6

PERSONHOOD AND HUMANHOOD

An evolutionary scenario

John Barresi
Introduction

In recent years there has been enormous interest in the possibility of establishing a naturalistic foundation for human morality that is based, at least in part, on an account of hominin evolution (Binmore, 2005; Boehm, 2012; de Waal, 2006; Greene, 2013; Haidt, 2012; Joyce, 2006; Kitcher, 2011; Krebs, 2011; Nichols, 2004). Each of these accounts assumes that some sort of transformation occurred from the kind of emotion-based pro-social motives that sometimes determine chimpanzee behavior in their social relations with each other to explicitly moral motivations in humans that are often guided by ethical rules and moral norms constituted by the group. For instance, Philip Kitcher contrasts psychological altruism based on sympathetic responses found in chimpanzees with the kind of normative ethical rules and principles that guide human behavior. He sees the origin of what he calls “the ethical project” in the formation of collective normative rules by early modern humans of 50,000 years ago and claims that for these small bands of hunter-gatherers:

Equality, even a commitment to egalitarianism, was important. . . . In formulating the code, the voices of all adult members of the band needed to be heard: they participated on equal terms. Moreover, no proposal for regulating conduct could be accepted unless all those in the group were satisfied with it.
(Kitcher, 2011, 96)

Whether or not these early modern humans were as reflective and egalitarian in formulating norms as this account suggests, there is plenty of evidence from ethnographic reports of more recent hunter-gatherers that they are quite egalitarian, and talk a lot about, as well as jointly regulate, each other’s ethical behavior through shared norms. What is certain is that nothing like this regulation of moral behavior by group-determined ethical norms occurs in our nearest great ape relative, the chimpanzee, or in any other animal.
So the puzzle of hominin evolution of morality is very much tied to the evolution of normative practice, which involves group- or culturally-based rules, whether explicitly delineated or not, that determine how one should or must behave with respect to others, and not merely describe how one does behave.
My goal in this chapter is to argue that one crucial difference between chimpanzees and humans is that humans conceive of themselves and others as persons and selves, and that without these concepts the normative basis of human moral life would not be possible. An essential requirement for conceiving of a moral norm as applying equally to different individuals in comparable situations is not only to recognize the normative demands of the situation, but also to conceive both self and others equally as members of a class of agents whose duty it is to accede to those demands. To do this requires a concept of agent that bridges the gap between self and other. That concept for human beings is the concept of person. It is the bridge concept that makes possible normative guidance that applies to individuals as a function of the roles and situations in which they find themselves, not as a function of their personal emotional preferences.

For instance, for the universal moral norm that one should not cause pain to another human to be experienced as a duty independent of our emotional attitudes toward particular others, we must recognize self and all other humans equally as agents who could cause pain and recipients who could receive pain, and recognize that pain should be avoided regardless of whose pain it is. Nagel (1970) nicely captures this notion of person when he describes the situation of one individual (imagine yourself) standing on the foot of another, whether this other is a friend or complete stranger. “Recognition of the other person’s reality, and the possibility of putting yourself in his place, is essential. You see the present situation as a specimen of a more general scheme, in which the characters can be exchanged” (82). In imagining the exchange, you would expect the other person to release his or her foot, not because it would reduce ‘your’ pain, but because it would reduce ‘someone’s’ pain, some ‘person’s’ pain.
This is a unique aspect of the human moral order not found in other animals. Humans can conceive of themselves as just another person and recognize that all persons should be treated equally with respect to moral norms. This uniquely moral response is not to be confused with the kind of sympathy that is found in other animals, which is an emotional response to the expressions of others that typically applies only to kin and close associates. In such cases there is no need to imagine reversed roles. One only needs to conceive of the other’s expressed pain as an extension of one’s own pain or as an object of personal concern. Instead, this response in humans is based on a conception of self and other as persons, impersonal objects of moral concern.

While aversion to the perceived pain of the other individual may play a motivational role here, it cannot be a form of aversion that is restricted to individuals with whom one has a natural sympathy, but one that applies uniformly to all individuals that one can conceive of as persons. The other must be experienced as another self, and their pain must be understood as comparable to the pain one would experience in their position. Without a rich capacity for perspective taking that makes possible full imagination of the reversal of positions, mere aversion to pain in another would vary with one’s personal relationship to the other, and this would make it a self-interested motive, not a motive based on a conception of self and other equally as persons whose pain ought to be avoided. This “agent-neutral” (Parfit, 1984; Nagel, 1986) and impersonal way of thinking about and experiencing moral motivations requires a concept of person that contrasts with an “agent-relative” and personal way of experiencing motivations typical in other animals.
In his attempt to justify the “possibility of altruism”, Nagel (1970) made a formal distinction between motives or reasons that are personal and apply to a particular person and his or her relationships with others, and impersonal reasons that apply to persons in general vis-à-vis other persons. Based on Nagel’s distinction, Parfit (1984) identified ethical theories as agent-relative (e.g. egoism) and agent-neutral (e.g. utilitarianism). Nagel (1986) later adopted Parfit’s terms and wrote:

If a reason can be given a general form which does not include an essential reference to the person who has it, it is an agent-neutral reason. . . . If on the other hand the
general form of a reason does include an essential reference to the person who has it, then it is an agent-relative reason.
(152–153)

While Nagel (1970, 1986) hoped to distinguish personal subjective motives from objective reasons with this distinction, this is not my motivation for adopting this terminology in the present chapter. My interest is in using the terminology to mark a naturalistic division between the kind of agent-relative motives that apply to most animals, including chimpanzees, and the kind of agent-neutral motives that apply to humans. While in many circumstances human motives do not differ in kind from motives that we find in chimpanzees and other animals, at least in normative circumstances human motives are governed by our conception of self and others as persons. And my main concern here is the evolution of this way of conceiving self and other and its role in human social life.

In what follows I will argue first that it was the adaptive need for a high level of cooperation that caused early hominins to acquire the concepts of person and self in thinking about cooperative activity. Acquiring and understanding the relationship between these two concepts depended on an ability to conceive of the points of view of others in the same representational form as one conceived of one’s own point of view. The outcome was that they were able to think about the intentional activities and interests of self and other in a single common format that applied uniformly to self and others, and could engage in normative guidance based on agent-neutral situational rules generated within one’s group, not just on personal relations. I will then provide a ‘how possibly’ story of this evolution with a particular focus on the role that reciprocal altruism played and how our sense of justice emerged in this process.
Second, I will argue that chimpanzees and probably other organisms do not conceive of themselves and others equivalently as persons and selves and that this results in a distinction between us and other organisms in our capacity for agent-neutral thinking. Third, I will show how humans are distinguished from other animals by their early developmental conception of self and others as persons and selves. These concepts become available in the second year of life and go through several important stages in development that are crucial to our way of life as cooperative organisms.
The evolution of cooperation in hominins and the adaptive function of our concept of person

I propose that it was the adaptive need for increasing levels of cooperation at a group level that caused early hominins to acquire and develop the concepts of person and self in thinking about cooperative activity. While extending altruism through the capacity to follow orders, as suggested in Kitcher’s (2011) account of normative guidance, is part of the story, it leaves out both motivational and conceptual resources necessary to create and follow the cooperative norms upon which those orders are based. I believe that the critical move in hominin evolution was engaging in a form of cooperation that required shared intentions that governed social behavior in a more general manner, one where each individual in a group had to take into account the view of the group as a whole when evaluating their own and other individuals’ actions. The outcome was the formation of a concept of person to apply to self and others within the social group, and a notion of what persons in various roles were expected to do. Rather than merely extending an agent-relative perspective on altruism based on natural sympathy with a capacity to follow commands, normative guidance involved shifting to an agent-neutral way of thinking about the intentional perspectives of self and other and a greater focus on the role that group-based cooperative goals played both in generating and following rules.
What drove this adaptive need for a rich form of cooperation in early hominins, and how was it solved? As to the source, there appear to be two main answers in the literature: (1) the need to engage in cooperative foraging, in particular group hunting and food sharing of large animal kills; and (2) the need to engage in cooperative breeding, which included longer periods of dependent infancy and childhood to develop skills needed for survival in varying physical and increasingly complex cultural environments (Chapais, 2008; Hrdy, 2009; Sterelny, 2012). With respect to how these needs were satisfied, two evolutionary mechanisms were particularly important in making this intense form of cooperation possible: kin selection and reciprocal altruism (e.g., Binmore, 2005; Sterelny, 2012).

How it all started is uncertain, but when hominins entered the savannas during the late Miocene there was pressure to live in cooperative groups for the purposes of joint defense against predators and joint foraging of dispersed sources of food. Bipedalism, which developed early, made distance traveling possible for the evolving hominins and freed the hands for a variety of other purposes, including the creation and modification of tools used for defense and foraging, and the communication of cooperative intentions. The evolutionary mechanisms of kin selection and reciprocal altruism were important for ramping up high levels of cooperation to survive and evolve in these novel circumstances.
Exactly what happened and when it happened is unclear but, with respect to kin selection, there is reason to believe that hominin cooperation among kin shifted from maternal-only relations, as currently found in our closest ape relatives, to maternal and paternal relations, through bonding between particular males and females that made possible recognition by males of their offspring, thus warranting greater investment in their care. This new arrangement increased the role of bilateral kin relations, both close and distant, which opened the door to longer periods of child development and to the exchange of marriage partners between kin groups (Chapais, 2008).

With respect to reciprocal altruism, cooperative foraging required tools, coordinated activity, advanced planning, reliable partners, and group-level food sharing of large animals. Group-level cooperative hunting involved not only kin but also non-kin, so free riding by individuals who sought rewards without paying the costs of cooperation became a serious problem requiring normative control. In order to control free riding, social contract dynamics eventually led to group structures with a flat dominance hierarchy (egalitarianism), normative rules for sharing, and effective forms of group-based punishment (Binmore, 2005; Boehm, 2012; Boyd & Richerson, 1992; Skyrms, 2004; Trivers, 1971). Coordinated hunting that required evaluation of potential partners (both kin and non-kin), shared intentions, and multiple roles, as well as future planning for and commitment to shared distribution of uncertain but high-density food sources, required a theory of mind and a temporally extended sense of self and other. Communication and sharing of intentions and knowledge (the basis of language) with non-kin as well as kin required generalized forms of reciprocation. Exchange of marriage partners, within and between bands involving distant kin and non-kin, eventually created long-term reciprocal bonds between non-kin groups.
Thus small bands eventually became integrated into tribes that would compete with each other for resources, stimulating an intergroup dynamic for both group selection of cooperative genes and cultural selection of variations of group and individual behavior. The capacity for agent-neutral ways of thinking about the intentional activities of self and other became particularly important in activities involving reciprocation, where balancing of costs and benefits for each of the individuals involved required tracking inequities that might affect fitness, and a metric for uniform calculation of costs and benefits to different individuals. While kin selection and inclusive fitness can function well with agent-relative motivations to
act altruistically toward relatives – though with variation due to degree of relationship – altruism to non-relatives requires close attention to costs and benefits to different individuals that can be calculated in an agent-neutral way. So the development of increasingly abstract agent-neutral concepts representing the intentional activities of individuals within the group must have been especially driven by cooperative activities involving non-relatives, though even with relatives, attention to relative costs and benefits in an agent-neutral manner was required when costly forms of altruism were involved.

To better appreciate the need for abstract forms of agent-neutral thinking, consider a likely scenario for the advanced hominins (perhaps as early as Homo ergaster or Homo erectus) who engaged in hunting for large animals. The group of hunters had to decide when and what to hunt, which collectively created and owned weapons to bring with them, and how to divide their group into smaller divisions in order to cover the area where they might find the best target animals. This advanced planning, which occurred at a collective level, required temporally extended notions of self and other and the use of future-oriented imagination based in part on past memories of successful hunts, an understanding of the skills of each of the participants, and some assurance or trust that each of the participants would put in roughly equal effort in hunting. Moreover, when some animal was killed, the participants had to trust that all members of the hunting party would get equal access to the best parts of the animal on site to eat without significant dispute, and then work together to bring the animal back to the home base to share, not only with relatives of the hunters, but with the whole group in a sufficiently egalitarian way that the fitness of all individuals in the group would be enhanced to facilitate group survival.
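The bookkeeping demands of this kind of equity can be made concrete with a toy calculation. The sketch below is my own illustration, not anything from the chapter or the ethnographic record: the hunters, costs, and benefits are invented, and the single shared metric stands in for the agent-neutral format described above.

```python
# Toy illustration of agent-neutral vs. agent-relative accounting for the
# hunt-sharing scenario. All names and numbers are invented for the example.
from itertools import combinations

hunters = ["A", "B", "C", "D"]

# One hunt, scored on a single agent-neutral metric: effort paid and meat
# received are comparable across persons because the unit is shared.
cost = {"A": 3, "B": 2, "C": 4, "D": 3}           # effort expended
benefit = {h: 5 for h in hunters}                 # egalitarian meat shares

# Agent-neutral bookkeeping: ONE ledger of net payoffs, directly
# comparable, so inequities can be tracked at the group level.
net = {h: benefit[h] - cost[h] for h in hunters}
spread = max(net.values()) - min(net.values())    # simple equity check

# Agent-relative bookkeeping: each pair would need its own dyadic ledger,
# kept in each tracker's idiosyncratic terms -- n*(n-1)/2 ledgers in all.
dyadic_ledgers = list(combinations(hunters, 2))

print(net)                  # {'A': 2, 'B': 3, 'C': 1, 'D': 2}
print(spread)               # 2: C bore the largest net cost this hunt
print(len(dyadic_ledgers))  # 6 ledgers for only four hunters
```

Even in this four-person toy the dyadic ledgers outnumber the hunters; with larger bands the quadratic growth quickly makes pairwise, agent-relative tracking unmanageable, which is the point the scenario above presses.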
The calculation of costs and benefits in this one scenario (and there are many others that could be described) in a way that would maintain any degree of equity would probably be impossible if done in an agent-relative way, requiring each individual to make calculations vis-à-vis every other individual in the group with respect to the series of events in this scenario and to negotiate an agreement to follow a common plan. Agent-relative thinking and motivation would undoubtedly lead to a breakdown in cooperation at multiple points in the scenario. It is unlikely that the cooperative habits and collective thinking required to engage in this scenario could ever evolve with agent-relative thinking, as the limitations of chimpanzee group hunting illustrate (Tomasello & Vaish, 2013; but see Kaufmann, this volume, for an opposing view of chimpanzee cooperation in hunting). Even with agent-neutral thinking, calculating costs and benefits to individuals would be difficult if each hunt was treated as a particular event. But agent-neutral thinking based on roles, temporally extended understanding of self and other including personality traits, and life-course social identities that can be abstractly conceived, along with norms that govern behavior as a function of these categories, affords the possibility of gradual acquisition of norms for scenarios of this sort as integrated within an ever-evolving group cultural context.

Recently, Ken Binmore (2005) has proposed a social contract version of game theory that captures some of what would be required here. Although his illustrations deal only with indefinitely repeated 2-person games of various sorts, the basic idea helps to understand what agent-neutral thinking can do to facilitate the gradual development of equilibria with increased collective payoffs.
In his model of social justice, he assumes that each individual not only has a personal utility function, but also creates an accurate utility function for their partner for interpersonal comparisons in various games with multiple equilibria. Some of these equilibria are more optimal than others for them collectively. He proposes that the capacity to create interpersonal utility functions is based on empathy that derives originally from kin selection but is then generalized to non-kin. However, this is an unlikely source, as kin selection only requires
agent-relative thinking that will not produce uniform representations of costs and benefits for self and other. Instead, an agent-neutral form of thinking is required, one that would have had its basis in reciprocal altruism. The important suggestion that he makes is that the capacity to search for and adopt better equilibria when bargaining can be represented metaphorically using Rawls’ (1971) notion of the original position, where participants bargain as if they did not know which person they would turn out to be with respect to the game at hand and use the interpersonal utility functions to make optimal decisions. Binmore proposes that natural justice can evolve within this framework from one equilibrium to another through time, where each equilibrium sets the norms for the participants at one time, which remain stable until some destabilizing change occurs.

Without getting into more details of this theory, what appears clear is that it invokes a form of agent-neutral thinking and an understanding of self and other equally as persons in various roles in a stable social network with varying power relations. Equity and equality don’t just happen here, but because of our capacity for thinking of ourselves and others equally as persons in agent-neutral ways, progress is possible toward better cooperative equilibria approaching egalitarian ideals.

Use of the original position in Binmore’s model is similar in some respects to Nagel’s notion of person as a basis for rational reflection about moral judgments. In both models there is an ideal component that purports to be the basis of rational decision-making. Binmore’s, however, is more naturalistic, as he claims that rational self-interested actors would progress to better equilibria under his assumptions. Hidden here, however, is the assumption that rational agents will act in an agent-neutral manner, not in an agent-relative way.
Without the capacity to think in an agent-neutral manner about costs and benefits for self and other, the idealization of the original position would not work. Instead, agent-relative motivations would prevent agents from successfully finding cooperative equilibria and would result in the more competitive, dominance-based outcomes of the sort found in chimpanzees; high levels of cooperation would not occur. Our own high levels of cooperation rely on agent-neutral thinking and on the norms generated over time to legitimize and coordinate our cooperative behavior. While we do not always behave according to culturally constituted agent-neutral norms, such norms remain the main means by which our society is organized and holds us together in complex collective agreements. And if agent-neutral ways of thinking are required for cooperation of this sort, then, I argue, it was the evolution of our concept of self and other as persons that was at the root of the social contracts in which we currently engage, as well as of the ideals of social justice that we hope to achieve in the future.
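The contrast between agent-relative and agent-neutral equilibrium selection can be illustrated with a minimal sketch. This is my own illustration, not Binmore’s model: the three equilibria, their payoff numbers, and the function names are hypothetical, and agent-neutral choice is rendered here as Rawlsian maximin (ranking equilibria by the worst-off role’s payoff), which is only one way of operationalizing choice from behind the veil of ignorance.

```python
# Hypothetical equilibria of a two-player game, as payoff pairs
# (player 1's payoff, player 2's payoff).
equilibria = {
    "E1": (5, 1),  # equilibrium favouring player 1
    "E2": (1, 5),  # equilibrium favouring player 2
    "E3": (3, 3),  # egalitarian equilibrium
}

def agent_relative_choice(role):
    """Rank equilibria only by one's own payoff (role 0 or 1)."""
    return max(equilibria, key=lambda e: equilibria[e][role])

def agent_neutral_choice():
    """Rank equilibria as if one did not know which role one will
    occupy: maximize the worst payoff either role receives."""
    return max(equilibria, key=lambda e: min(equilibria[e]))

print(agent_relative_choice(0))  # player 1 alone picks E1
print(agent_relative_choice(1))  # player 2 alone picks E2
print(agent_neutral_choice())    # behind the veil, both pick E3
```

The point of the sketch is that agent-relative choosers disagree about which equilibrium to adopt, and so cannot coordinate on it, whereas agent-neutral choosers converge on the same, more egalitarian equilibrium.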
The phylogenesis of persons and selves

In common language and experience we use the English terms ‘self’ and ‘other’ to describe the same kind of thing, an individual human being or person. Like ‘I’ and ‘you’, they are deictic terms that shift with the user. In order to use these terms, we must be able to recognize that you and I are both persons – that we are equivalent in this way. But we experience the personhood of self and other in different ways. Our experience of our own personhood is a first-person experience of our self, while our experience of each other’s personhood is from a third-person perspective. These two forms of experience are intrinsically different but can be conceptually connected. Our first-person experience of our own psychophysical activity, what Barresi and Moore (1996) call intentional relations, is focused outward on the object of our activity, whether it be a goal-directed action, an emotional relation to another individual or object, or an epistemic relation to an object or situation. Our experience of ‘self’ is only in our embodied relations to these activities and objects; it is not typically an object of our attention,
but implicit in our relational experience of other objects (see Musholt, 2015). By contrast, our third-person experience of another individual’s intentional relations focuses on the individual as an animate agent, and less so on the object of these relations. We see the other individual’s physical movements, emotional expressions, and direction of gaze; and often we must infer the objects of these activities by attending to the locations toward which these actions are directed. Yet, despite these different perspectives on, and the different information available about, the activities of self and other, as adult humans we have no difficulty recognizing both self and other as agents of the same kind – as selves and persons – and can ascribe to our activities the same intentional concepts. In the next section we will examine the human development of these concepts and how they relate to our notions of self and persons. Here, however, we focus on other animals, and, in particular, those in the hominid line.

In considering personhood from a phylogenetic perspective, the key issues are: When does a common conception of self and other – as individual embodied agents engaged in intentional relations that are understood in the same way – first appear? Why does it first appear at that time? And is this common conception as elaborate as our own human concepts of persons and selves? Barresi and Moore (1996) have argued that most animals represent their own intentional relations in a first-person format and the intentional relations of others in a third-person format, and as a result self and other are represented quite differently. However, certain highly social animals may be capable of representing some activities of self and other in a common format. One form of evidence indicating this possibility occurs when an animal is able to recognize itself in a mirror.
While this test does not guarantee an equivalent concept of itself and others, it does indicate that the animal can treat the reflections in a mirror of itself and of another equally, as indicating the current physical appearance and activities of a particular individual. Such animals appear to adopt a third-person perspective on self through the use of the mirror. Animals that pass stringent forms of the mirror test include great apes (gorillas, orangutans, chimpanzees, and bonobos), cetaceans (e.g., dolphins and killer whales), elephants, and at least one species of bird, the magpie (Gallup, Anderson, & Platek, 2010). All of these species have relatively large brain/body ratios. Moreover, there is evidence that most of them have the converse capacity to imagine the first-person perspective of others, something that again is unusual among non-human animals. For instance, great apes, cetaceans, and elephants show fairly strong evidence of empathy, not only responding to the expressed distress of kin, but also to unexpressed situational needs, even of non-kin (de Waal, 2008). There is also evidence to suggest that great apes and dolphins can imagine the visual viewpoint and knowledge of others. Taken together, these findings suggest that such animals can, in some presumably advantageous social circumstances, conceive of both self and other from a first- and a third-person point of view. These capacities can be useful, for instance, in acquiring a novel skill through imitation, or in taking advantage of the false knowledge of the other in competition for a hidden resource. An important question that remains is whether these two views of self and other are integrated in these organisms. If they are not integrated, then there may not be a single form of representation, but two separate, more limited forms that can be applied to both self and other.
Particularly relevant for evaluating whether the chimpanzee’s concepts of self and other are similar to our notion of person are the circumstances in which chimpanzees engage in joint activities. Although chimpanzees live in social groups and sometimes work cooperatively toward a common goal (e.g., territorial defense, hunting for monkeys), for the most part they pursue goals independently of each other, and often compete with each other for desirable objects. Recent research has tried to investigate chimpanzee cooperation and to compare it to that found in young children (Fletcher, Warneken, & Tomasello, 2012; Rakoczy et al., 2014; Tomasello & Vaish, 2013). At least in experiments scaffolded by humans, chimpanzees can learn to
cooperate with a partner in achieving joint rewards. But in doing these tasks they seem to learn only about their own role, and do not acquire any useful knowledge of the role of their partner. Thus, when the chimpanzees switch roles with their partner, this does not increase their speed in learning the task. By contrast, young children do increase speed upon switching roles in these experiments, which suggests that they represent not only their own task, but also that of their partner, in a common framework. These results suggest that the chimpanzee represents the joint activity only from a first-person point of view of its own task and a third-person point of view of the task of the other individual. This indicates that the chimpanzee has only an agent-relative perspective on the tasks of self and other, while the child represents the tasks of both agents from an agent-neutral intentional perspective. Other experiments involving cooperation and altruism show that chimpanzees in cooperative tasks pursue only outcomes that are in their own interest and show no interest whatever in whether their partner will be rewarded or not. When they have the option of choosing one response that will reward both, or another that will reward only themselves, they are equally likely to choose either response (Jensen et al., 2006), whereas preschool children prefer to reward both self and other. Indeed, children are willing to forgo a current reward for self in order that self and other both gain rewards in the future (Thompson, Barresi, & Moore, 1997). Moreover, in tasks that require cooperation, chimpanzees never attempt to communicate with each other to encourage their partner to perform the necessary complementary actions. It is as if they do not represent the task as a joint activity at all, but see it only from their own relative point of view, seeing what they have to do, and observing the actions of others as if the other is pursuing its own ends.
Since the idea of a common goal governing their joint activity is apparently absent from their thinking about the task (Tomasello, 2014), there is reason to think that whatever concepts chimpanzees use to compare self and other are not of the same kind as our human concept of person. The lack of interpersonal relations of a cooperative nature in joint actions may limit the chimpanzee’s ability to represent intentional relations of self and other using a single uniform concept that integrates first- and third-person perspectives. If shared intentional relations ground human understanding of ourselves and others as persons, then the limited bridge between self and other that we see in chimpanzee behavior may not be enough. Without the integration of perspectives based on joint activity that begins in human infancy, chimpanzees cannot acquire an agent-neutral concept of intentional agent that they can apply equally to self and other in cooperative activities, thus limiting them to an agent-relative view of those activities. Other species, in particular some cetaceans, are more cooperative than chimpanzees and on several measures seem to show a more advanced understanding of self and other as equivalent intentional agents. But comparing humans to cetaceans would do little to answer the question of the origin of the concept of person in the hominid line.
Personhood, self-reflection, and the development of agent-neutral perspectives

Given the differences in experience of intentional relations from first- and third-person points of view, how is it that human beings come to understand both self and others as agents of the same kind, and are able to ascribe at least some psychological attributes equally to self and others? The two major theories of how we understand mental phenomena in ourselves and others – simulation theory (ST) and the theory theory of mind (TT) – have difficulty explaining the ease with which children acquire this understanding. ST gives precedence to first-person information and representations of mental states and generalizes from self to others.
By contrast, TT gives precedence to third-person behavioral information and then generalizes theoretical concepts of mind based on this information to self. Both of these approaches suppose a dualist conception of the relationship between mental phenomena and their behavioral expression, making it difficult to bridge the gap between them, given the radically different sources of information about their relationship available from self and other. An alternative approach is to focus on the person rather than on mental states (Barresi, Moore, & Martin, 2013; Dow, 2012; Newen & Schlicht, 2009). The inspiration for these theories is Peter Strawson’s (1959) non-dualist account of persons. Strawson insists that we cannot do without two ways of viewing ourselves and others, a first- and a third-person perspective, and that whatever psychological concepts we use to describe self must also be equally and unambiguously useful in describing others, though based on quite different criteria. He views the concept of person as a primitive and essential one that necessarily precedes any notion of a conscious or mental self. He points to a number of contradictions that arise when we attempt to view ourselves as conscious selves, or minds, on criteria that are independent of our bodies, while also trying to attribute analogous conscious selves to others based on their behavior. These contradictions lead to various forms of dualism, such as those that appear in TT and ST. Strawson made several suggestions about how we might acquire psychological concepts, despite the different perspectives that we have on our own and another person’s activities. In particular, he noted that some activities, like walking, involve behavioral and mental aspects so intermingled that these activities can be readily bridged in understanding from first- and third-person perspectives.
He also noted that some activities, such as group sports, are joint activities with common goals, and he suggested that a notion somewhat akin to that of a ‘group mind’ could play a role in understanding these activities, where ‘we’ rather than ‘you’ or ‘I’ have a single goal to be achieved. In this case, there is no issue about whether different concepts are being applied to self and other based on different criteria. It is a single concept of what we are doing that is applied, understood in a first-person manner for self and in a third-person manner for others. Thus, there is a perfect match between the intention as experienced in the first person and as experienced at the very same time in the third person, and hence no gap in the content of the intentional state attributed to self and other on different criteria.

Barresi et al. (2013) have shown how these ideas of Strawson’s are congruent with events that occur in early child development. Of particular importance for the present chapter, which focuses on the evolution of cooperation, is the second suggestion. One of the remarkable features of early human development compared to that of other animals is that human infants engage with adults, and later on with other children, in joint intentional activities directed at common goals, which often involve mutual imitation of actions, joint attention, and bi-directional communication about objects and goals. As pointed out in the previous section, there is little evidence of cooperative activity of this sort in other existing primates (Tomasello, 2014; Tomasello & Vaish, 2013). Because this activity involves shared intentional relations between the infant and the adult, the infant can experience the common mental and physical aspects of the activity from both a first- and a third-person perspective and can link them as involving the same activity.
Thus the infant can represent the activity of self and other in a common format, one that applies to their joint activity, despite the different types of information presented for self and other. This format combines the first-person perspective of self and the third-person perspective of the other into a concept of their joint activity in which both perspectives are included. Moreover, this cooperative activity often involves complementary roles that are reversed over time, which further facilitates relating the two sources of information about the same activity and bringing them into a common format that can eventually be applied to each of the individuals separately, and not just in joint activity.
As a result of this early joint intentional activity, the child comes to understand a variety of intentional relations in a single format that can be applied to self and other, not only when they are engaged in a common activity, but also when they are engaged in different activities. By the end of the second year, toddlers have come to recognize themselves and others as distinct agents who can pursue different activities. Yet the toddler can conceive of these different activities of both self and other as activities of an embodied agent, a person and self. When the toddler observes the actions of another individual, she can imagine the first-person perspective of those actions, as if she shared in this activity with the other individual. Thus, prior experience of joint intentional activities provides a basis for recognizing the meaning of these actions from the first-person perspective of the other individual. Conversely, when the child engages in her own actions, she can imagine the third-person perspective of another individual observing her, and joining with her in the actions, and can understand, in a reflective manner, her own activity as that of an embodied agent or person no different from any other person. Thus, the first- and third-person aspects of intentional activity can now be unified in interpreting actions of individuals, as well as in conditions of joint action. Both self and other are now perceived from a meta-perspective that represents them both as embodied agents that are persons and selves, each of which can be viewed from an ‘objective’ or ‘allocentric’ third-person perspective and each attributed a ‘subjective’ or ‘egocentric’ first-person point of view. In accordance with Strawsonian requirements, there is a perfect symmetry in the representation of intentional actions ascribed to both self and other. The child now understands the other as another self, and the self as another other.
As a consequence, the child is now able to use deictic terms like ‘I’ and ‘you’ in an appropriate manner, and begins to experience forms of self-consciousness, like embarrassment, that she could not previously experience, because they require a level of representation of the self as a whole agent who is the possible object of another’s attitudes. When engaged at this age in joint activity, the toddler can now readily shift roles, because she can imagine the first-person perspective of the other individual in complementary positions in any activity. Instead of experiencing her own role only in a first-person format, and the role of the other only in a third-person format, the child now represents both roles in an integrated format with both first- and third-person aspects. Thus, the child sees joint activity as that of two agents engaged in shared intentional activities with common goals, where both agents and their independent roles are understood so as to allow the child to take on either role if that were required (Rakoczy et al., 2014).

From the ages of 2 to 4, children acquire increasing skill in thinking of themselves and others as embodied, intentional agents engaged in activities that involve complementary roles, often governed by various conventional and moral rules or norms. They become skilled in thinking of these intentional activities in agent-neutral terms that apply norms or rules as a function of the roles involved in these activities, whether culturally constituted and acquired from adults or created in collaborative play with other children. They also acquire the necessary executive skills to regulate their actions with respect to these norms. Thus, these children acquire the capacity for normative guidance recognized in Kitcher’s (2011) theory. Indeed, children of this age are sticklers about playing by the rules and insist that others as well as themselves play by them.
They are also able to distinguish between conventional and moral norms, the latter being those that they believe hold universally, while the former are arbitrary and restricted to smaller groups (Nichols, 2004; Rakoczy & Schmidt, 2013; Tomasello & Vaish, 2013). Although the 2- and 3-year-old has acquired a concept of person and self and is capable of thinking of self and other in agent-neutral ways, especially when engaged in norm-governed activities, this conception of person and self is limited to the here and now, or extended only
with respect to well-known routines. Two more major advances are needed for children and adolescents to achieve an adult concept of person (Barresi, 1999): the concept of a temporally extended person and self, and a life-course narrative identity. Both of these concepts are necessary in order to recognize and perform adult activities in an agent-neutral manner.

A major change occurs during the fourth and fifth years of development. It is at this time that self-reflection enters more fully into the temporal domain and the child becomes capable of moving imaginatively not only across space from person to person in present time, but also across time to past and future person positions (Barresi, 2001; Moore & Lemmon, 2001). The child is able to conceive of its own past and future representations of reality as distinct from its present representations, and begins to appreciate itself, as well as others, as selves extended in time. Before this time, experiences unfold but are not connected together into an autobiographical stream; now, retrospective memory and anticipation of the future have this structure. Correlated with these skills are the executive capacity to act for future rather than for present goals and the abilities to understand false belief and what is called level-two perspective taking, which fall collectively under the concept of a representational theory of mind (Moore, Barresi, & Thompson, 1998). The acquisition of the concept of an extended self makes possible cooperative and moral activity that extends over time. Promises can be made, remembered, and kept, at least over short periods of time. However, over longer periods and across different situational contexts a stable moral outlook may be lacking. Not until adolescence is there an attempt to maintain a stable moral stance across time and through a variety of situations. Consider, for instance, some developmental research on the keeping of promises to friends.
In a cross-cultural study, Keller (2004) showed a developmental trend in whether children and adolescents expect a same-age actor to keep a promise to spend time with a friend when faced with a dilemma: keeping the promise versus accepting an invitation to go to a movie with another child who is new in class. Most Icelandic and Chinese 7- and 9-year-olds thought that the actor would choose to go to the movie, while most 15- and 19-year-olds thought that the actor would keep the promise to be with the friend; 12-year-olds were transitional. In justifying their decisions, young Icelandic children thought that going to the movie would be more fun, though they also thought that the right thing to do would be to keep the promise to be with the friend. The Chinese children justified the same action by saying that it is right to be nice to the stranger child. However, adolescent participants in both cultures agreed that it was important to keep the promise to a friend, and thought that this was the right thing to do because otherwise the actor would not be perceived as a reliable and trustworthy person, and would not really be a friend. This acquired concern for a moral identity that persists through time and varying situations is one outcome of a process of identity formation that typically begins in adolescence (McAdams, 1990). Young adults can now conceive of themselves and others as persons with consistent individual personalities as well as life-course identities constructed out of those available within their culture, and thus conceivable in agent-neutral ways. Being known as a trustworthy person becomes an essential personality attribute for long-term cooperative relationships with others, such as friendships, marriage, and careers.
Importantly, young adults can now formulate and regulate their behavior by abstract moral principles and norms that apply across most situations. They can also join with others in formulating moral norms, as well as other agent-neutral conventions of the society in which they live, and can play their part in the general governance of one another’s behavior in accordance with those norms. To the extent that such norms are based on principles of equality and equity, they contribute to an egalitarian society. However, variation
in social identities brings with it asymmetries in power and influence, and non-egalitarian norms can persist because individuals in power take advantage of their social identities to maintain those asymmetries. While agent-neutral thinking of all members of the society as persons promotes egalitarian ideals, specific identities and power relationships undermine this kind of thinking with more agent-relative motivations.

Overall, my proposal is that as our concepts of person and self develop, we acquire a capacity for wider agent-neutral forms of representation. Compared to the more advanced stages, earlier concepts of person and self are more limited and agent-relative. Agent-neutral cooperative and moral activity at each stage can only go so far without the wider, more inclusive perspective. The narrowest stage does not involve a concept of person and is purely agent-relative. This is the stage that most animals are at, even chimpanzees. If it were not for the adaptive need for more intense forms of cooperation than those required for chimpanzee life, our ancestors might never have made the leap to the kind of agent-neutral thinking about self and other equally as persons that provides the conceptual capacity needed to ground human moral life.
References

Barresi, J. (1999). On becoming a person. Philosophical Psychology, 12, 79–98.
Barresi, J. (2001). Extending self-consciousness into the future. In C. Moore and K. Lemmon (Eds.), The Self in Time: Developmental Perspectives (pp. 141–161). Hillsdale, NJ: Erlbaum.
Barresi, J. & Moore, C. (1996). Intentional relations and social understanding. Behavioral and Brain Sciences, 19, 107–122.
Barresi, J., Moore, C. & Martin, R. (2013). Conceiving of self and others as persons: Evolution and development. In J. Martin and M. Bickhard (Eds.), The Psychology of Personhood: Philosophical, Historical, Social-Developmental, and Narrative Perspectives (pp. 127–146). Cambridge, UK: Cambridge University Press.
Binmore, K. (2005). Natural Justice. Oxford: Oxford University Press.
Boehm, C. (2012). Moral Origins: The Evolution of Virtue, Altruism, and Shame. New York: Basic Books.
Boyd, R. & Richerson, P. J. (1992). Punishment allows the evolution of cooperation (or anything else) in sizable groups. Ethology and Sociobiology, 13, 171–195.
Chapais, B. (2008). Primeval Kinship: How Pair-Bonding Gave Birth to Human Society. Cambridge, MA: Harvard University Press.
de Waal, F. B. M. (2006). Primates and Philosophers: How Morality Evolved. Princeton, NJ: Princeton University Press.
de Waal, F. B. M. (2008). Putting the altruism back into altruism: The evolution of empathy. Annual Review of Psychology, 59, 279–300.
Dow, J. (2012). On the joint engagement of persons: Self-consciousness, the symmetry thesis and person perception. Philosophical Psychology, 25, 49–75.
Fletcher, G., Warneken, F. & Tomasello, M. (2012). Differences in cognitive processes underlying the collaborative activities of children and chimpanzees. Cognitive Development, 27, 136–153.
Gallup, G. G. Jr., Anderson, J. R. & Platek, S. M. (2010). Self-recognition. In S. Gallagher (Ed.), The Oxford Handbook of the Self (pp. 80–110). Oxford: Oxford University Press.
Greene, J. (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: Penguin Press.
Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon.
Hrdy, S. (2009). Mothers and Others. Cambridge, MA: Harvard University Press.
Jensen, K., Hare, B., Call, J. & Tomasello, M. (2006). What’s in it for me? Self-regard precludes altruism and spite in chimpanzees. Proceedings of the Royal Society B, 273, 1013–1021.
Joyce, R. (2006). The Evolution of Morality. Cambridge, MA: MIT Press.
Keller, M. (2004). Self in relationship. In D. K. Lapsley and D. Narvaez (Eds.), Moral Development, Self, and Identity (pp. 267–298). Mahwah, NJ: Lawrence Erlbaum Associates.
Kitcher, P. (2011). The Ethical Project. Cambridge, MA: Harvard University Press.
Krebs, D. L. (2011). The Origins of Morality: An Evolutionary Account. Oxford: Oxford University Press.
McAdams, D. P. (1990). Unity and purpose in human lives: The emergence of identity as a life story. In A. I. Rabin, R. A. Zucker, R. A. Emmons and S. Frank (Eds.), Studying Persons and Lives (pp. 148–200). New York: Springer.
Moore, C., Barresi, J. & Thompson, C. (1998). The cognitive basis of future-oriented prosocial behavior. Social Development, 7, 198–218.
Moore, C. & Lemmon, K. (Eds.) (2001). The Self in Time: Developmental Perspectives. Hillsdale, NJ: Erlbaum.
Musholt, K. (2015). Thinking about Oneself: From Nonconceptual Content to the Concept of a Self. Cambridge, MA: MIT Press.
Nagel, T. (1970). The Possibility of Altruism. Oxford: Oxford University Press.
Nagel, T. (1986). The View from Nowhere. Oxford: Oxford University Press.
Newen, A. & Schlicht, T. (2009). Understanding other minds: A criticism of Goldman’s simulation theory and an outline of the person model theory. Grazer Philosophische Studien, 79, 209–242.
Nichols, S. (2004). Sentimental Rules. Oxford: Oxford University Press.
Parfit, D. (1984). Reasons and Persons. Oxford: Oxford University Press.
Rakoczy, H., Gräfenhain, M., Clüver, A., Schulze Dalhoff, A. C. & Sternkopf, A. (2014). Young children’s agent-neutral representations of action roles. Journal of Experimental Child Psychology, 128, 201–209.
Rakoczy, H. & Schmidt, M. (2013). The early ontogeny of social norms. Child Development Perspectives, 7, 17–21.
Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.
Skyrms, B. (2004). The Stag Hunt and the Evolution of Social Structure. New York: Cambridge University Press.
Sterelny, K. (2012). The Evolved Apprentice: How Evolution Made Humans Unique. Cambridge, MA: MIT Press.
Strawson, P. F. (1959). Individuals. New York: Taylor and Francis.
Thompson, C., Barresi, J. & Moore, C. (1997). The development of future-oriented prudence and altruism in preschool children. Cognitive Development, 12, 199–212.
Tomasello, M. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press.
Tomasello, M. & Vaish, A. (2013). Origins of human cooperation and morality. Annual Review of Psychology, 64, 231–255.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35–57.
PART II
Developmental and comparative perspectives
7
PLURALISTIC FOLK PSYCHOLOGY IN HUMANS AND OTHER APES

Kristin Andrews
How much continuity is there between the social cognition of humans and other animals? To answer this question, we first need accurate descriptions of the kinds of social cognition that exist in humans, and the kinds of social cognition that exist in other animals. Offering such descriptions, it turns out, is surprisingly difficult. Nonetheless, claims of discontinuities abound. Michael Tomasello’s research on the abilities of children and nonhuman great apes leads him to conclude that only humans are true cooperators, who share a joint goal and work together to achieve it (Tomasello 2014). Kim Sterelny’s apprenticeship hypothesis shares such a commitment to human uniqueness in cooperation and mindreading, for these skills are what facilitate the uniquely human practice of active teaching (Sterelny 2012). Tad Zawidzki argues that the uniquely human sociocognitive syndrome, which consists of language, cooperation, imitation, and mindreading, developed due to our intrinsic motivation to shape others and be shaped by others in a way that demonstrates norm following (Zawidzki 2013). And according to Gergely Csibra and György Gergely’s (2011) Natural Pedagogy Hypothesis, humans alone engage in active teaching, because humans alone have an innate mechanism that produces and responds to signals indicating that a learning opportunity is at hand.

I aim to challenge the view that there are stark discontinuities between the social cognition of humans and other animals – in particular between humans and the other great apes – by downgrading the mechanisms for human social cognition. Humans often rely on a relatively simple set of mechanisms that, together with the ability to identify intentional action, permit many of our sophisticated-looking social cognitive practices. Our social cognition involves a process of model building and of forming expectations about how intentional agents should live up to these models.
The models include normative elements – aspirational stereotypes of how people and groups should act – rather than mere descriptions of how people do in fact act. At least some other animals also have elements of pluralistic folk psychology – something that becomes apparent when we look for the right sorts of similarities and differences. I will start this chapter by arguing that mindreading beliefs is not the place to look for continuity between human and nonhuman social cognition, because mindreading beliefs, desires, and other propositional attitudes is a small and late-developing piece of our social cognitive skill set.1 Next, I will argue that a better account of human social cognition is pluralistic. There are three elements to the account of Pluralistic Folk Psychology that I defend: we understand
other people in a variety of ways, we build models of individual people and groups, and the models are largely prescriptive rather than descriptive. After sketching the position of pluralistic folk psychology, I will present evidence that human children and other great apes share the ability to identify intentional action and see social behavior through a normative lens. Because perceiving intentional action normatively is key to folk psychology, finding it in other species serves as evidence of continuity between humans and other animals in the domain of social cognition.
Mindreading and the received view

A familiar view is that to understand another person, we mindread propositional attitudes; that is, we see that someone has a belief – a propositional attitude that has the aim of representing reality – and a desire – a propositional attitude that has the aim of making the world fit one's mind. Since, together, this coupling of belief and desire can cause their bearer to act, mindreading allows us to predict people's future behavior and to offer causal explanations of what they have already done. If social cognition has the function of predicting and explaining behavior, the mindreading account appears to offer a description of our key social capacity. This familiar view has been in the background of almost 30 years of empirical research on the ontogeny and phylogeny of social cognition qua belief and desire reasoning. It is widely accepted that human children are able to mindread once they reason about false belief, around 4 years of age, and that other animals fail to demonstrate evidence of belief reasoning – though they may be able to take into account others' desires or perspectives. Since successful mindreading only permits prediction and explanation if one knows what behaviors are associated with the various beliefs and desires, a mindreader also has some knowledge about the causal consequences of a set of beliefs and desires. The received view has adult social cognition consisting of belief and desire concepts, as well as knowledge of the particular causal relations that obtain between particular beliefs, desires, and behaviors. Hence the received view conceives of social cognition as largely individualistic and internalistic – focused on hidden beliefs and desires of the person of interest. And it presents social cognition as a form of causal reasoning, like our reasoning about the operations of the physical world.
The central difference between causal reasoning about people and things is that people's causes are often hidden – people are self-propelled – while the movements of non-agents are usually caused by visible external events. The research in the development of social cognition examines the question of when these abilities arise in humans, and the research in the evolution of social cognition examines whether these abilities exist in other animals. But there are reasons to reject this project as based on a mischaracterization of human social cognition. There are at least four reasons for rejecting the idea that adult humans typically ascribe hidden causes to others when they are predicting behavior. First, since we are not aware of thinking about others' beliefs in our daily interactions with others, this processing would be occurring automatically and without conscious attention. But the claim that belief reasoning is automatically implicated in our predictive tasks has not been supported by evidence. In a direct investigation of the question, Ian Apperly and colleagues found that, in both false belief and true belief contexts, it takes people longer to answer probe questions about belief than to answer probe questions about the situation (Apperly et al. 2006; Back and Apperly 2010). Apperly suggests that a certain motivation is required for adults to mindread beliefs on-line. Mindreading beliefs is unlike mindreading perspectives, since adults track what others see quickly and efficiently, even when we don't need to and even when it interferes with our own goals (Samson et al. 2010; see the discussion in Apperly 2011). Apperly and Butterfill's (2009) development of
a two-systems model for belief reasoning reflects the view that reasoning about propositional attitudes, like calculating long division problems, is cognitively demanding (but with practice and development of expertise, we can efficiently calculate some division problems, and we can efficiently ascribe beliefs and desires in familiar situations). Even psychologists who had previously argued that belief attribution is automatic (Cohen and German 2009) now think that the automatic/controlled distinction in theory of mind (ToM) processing is not useful (German and Cohen 2012). While we may automatically take in the details of the situation, those details don't always lead us to think about belief. The idea that it is belief that we are tracking – theoretical entities with causal powers that have the logical property of opacity and a mind-to-world direction of fit – rather than, say, patterns of behavior and emotions in situations, isn't supported by either the behavioral or neuroscience studies on mindreading. A second reason to reject the idea that we typically attribute beliefs to others in order to predict their behavior is that attributing beliefs isn't a very accurate way of predicting behavior – and we are pretty good at predicting quotidian human behavior. There are a number of reasons to question the accuracy of belief attribution for prediction. For one, thinking about someone's reasons for action – their beliefs – triggers cognitive biases that lead us to accept the first possible set of reasons for action (by considering someone's reasons for action, we come to see the action as more likely) (Wilson and LaFleur 1995; see Andrews 2012 for a discussion). And since behavior underdetermines sets of reasons, it is usually not the case that the first reason is the correct one (Andrews 2012).
This worry has been articulated by Tad Zawidzki, who argues that the holism of the propositional attitudes causes an intractability problem (Zawidzki 2013). The relationship between observable behavior and the propositional attitudes that presumably cause behavior would be too complex to allow for timely, much less accurate, predictions of behavior. The unmitigated search space would be too great. Apperly (2010) thinks we can limit the search space by appealing to scripts of typical behavior (a type of normative reasoning), and Zawidzki thinks that our ancestors' practices of mindshaping, which led to cohesion in their community as well as differences between different communities, limit the search space for each community. These moves make mindreading easier, but only once the mindreader has had the right sort of enculturation. They each make mindreading dependent on normative reasoning – thinking about how others should act rather than how they do or will act. I will argue that this normative reasoning is very powerful and can do much of the social cognitive work that is often ascribed to an ability to think about others' propositional attitudes. A third reason to think that we do not usually ascribe beliefs in order to predict behavior comes from the developmental literature. Infants demonstrate sensitivity to others' false beliefs via violation-of-expectation looking-time studies at 15–18 months (Onishi and Baillargeon 2005 at 15 months; Yott and Poulin-Dubois 2012 at 18 months). They will help someone with a false belief open a locked box (Buttelmann et al. 2009), but until 4 years old children won't make a correct prediction in a false belief task (Wellman et al. 2001). And they do not consistently explain behavior in terms of belief until after age 6 (Priewasser 2009, as cited in Perner in press). Indeed, when 4-year-olds pass the false belief task and are asked to explain their answers, they rarely refer to the character's false belief.
Rather, the children tend to offer explanations like, "He looked for the chocolate there because that's where he left it" (Wimmer and Mayringer 1998; Andrews and Verbeek unpublished manuscript; Perner et al. 2002). At 6 years children still do not give explanations in terms of beliefs, but instead talk about the character as "not seeing" or "not knowing" (see Perner in press for a discussion). Hannes Rakoczy (2012) has argued that the debate between romantics who think babies mindread, and the killjoys who think they don't, is based on a confused use of the terms at issue rather than a deep disagreement (see also Schaafsma et al. 2015). The gist of the data, as
Rakoczy sees it, is that infants have a subdoxastic state that allows them to predict behavior, whereas older children have propositional attitudes about others' propositional attitudes. It is a conceptual issue as to whether we want to consider each of those capacities as a capacity to attribute belief – and an answer to that question rests on your account of belief; this is true for Apperly and Butterfill's two-systems view as well. A representationalist about belief, who takes belief to be an attitude toward a proposition, would have to accept that having a belief about a belief requires having the concept of belief with all its logical properties (e.g., Fodor). A dispositionalist about belief, on the other hand, can have lower standards for the conceptual and logical abilities needed for having a belief about a belief, since such a belief just is a disposition to respond to the disposition seen in another (e.g., Marcus 1990; Schwitzgebel 2002). Butterfill and Apperly explain the infant data via their two-systems model; infants solve the tasks using an early-developing implicit approximate system, and older children who have developed an explicit system that permits forming beliefs about beliefs are able to pass the verbal, elicited-response false belief tasks (Apperly and Butterfill 2009; Butterfill and Apperly 2013). Another explanation comes from Cecilia Heyes and Chris Frith, who have argued that infants have a domain-general capacity that allows them to track others' beliefs, whereas success in verbal false belief tasks requires culturally inherited conceptual knowledge about the nature of beliefs (Heyes and Frith 2014). If infants are able to track false belief, and likely do so without having the rich concept of belief, then, however they do it, they use mechanisms that are simpler than the mechanism described by the full-blown theory of mind account.
These simpler mechanisms may also be at play when young children pass the false belief task, as will be discussed in the next section. Rather than offering evidence in favor of belief reasoning in infants, then, the infant findings about false belief tracking undermine the assumption that older children, and even adults, use belief reasoning when they track false belief. If human cognitive systems prefer fast and frugal heuristics over slower deliberative reasoning whenever possible, then the infant data supports the claim that human adults do not need to reason about belief in their quotidian anticipation of social behavior (Fiebich 2013). A final reason to reject the idea that we are constantly attributing beliefs to others in order to predict behavior is that we have other methods for predicting behavior that do not suffer from the limitations of belief attribution. A positive argument for the received view is a form of exclusion argument, leaving mindreading as the only means for predicting behavior caused by the invisible theoretical entities of belief. But exclusion arguments are only as strong as the alternatives are comprehensive, and the alternatives on the table have been limited to the proposal that there is a single mechanism or at most two mechanisms involved in our tracking behavior, and that these involve a form of belief reasoning. I propose that humans use a number of mechanisms when predicting behavior, and that belief reasoning comes into play when explaining behavior more so than it does in predicting behavior. According to Pluralistic Folk Psychology, in addition to doing a little mindreading from time to time, humans use a host of different methods to predict and explain behavior that don’t involve considering the beliefs of actors (Andrews 2012). 
We can see what people are going to do next because we understand people not as bags of skin filled with reasons for actions, but as people – richly developed like characters in a good novel, with past histories, relationships, character traits, and habits, who are embedded in a community with social norms and particular roles to play. On this view, regardless of the method of prediction, the folk psychologist sees the actor as an intentional agent. Seeing others as people is key.
Pluralistic folk psychology

There are three main aspects to Pluralistic Folk Psychology: (1) Pluralism – we understand, predict, and explain other people using a variety of mechanisms and heuristics; (2) Modeling – we build models of individuals and groups; and (3) Normativity – the models are largely prescriptive rather than descriptive. Let us look at each in turn.

Pluralism

The pluralistic commitment challenges the idea that we primarily understand others in terms of their propositional attitudes. Despite the importance that is often placed on our ability to think about others' thoughts, when we predict behavior we rarely need to think about beliefs, largely because people behave consistently with their past behavior, their social status, and the norms of their society. Though sometimes it can be useful to offer reasons for action in terms of propositional attitudes when explaining, justifying, or criticizing behavior, in predicting behavior it is relatively rare that we need to think about others' beliefs and desires. If we don't regularly use the attribution of propositional attitudes when predicting behavior, how is it that we are so good at coordinating with others in our quotidian social activities? I have identified a number of different methods folk psychologists use to understand others, and there are certainly others (see Andrews 2012 for a full discussion of these). What unites these methods is that they are all used in predicting the behavior of intentional agents. Central to folk psychology, then, is this ability to discriminate intentional actors from the non-agential world. But we can see others as agents without filling their minds with propositional attitudes. Even in those contexts where we suppose mindreading is essential, we may be making predictions without needing to think about others' mental states. Consider successful performance on the false belief task.
To predict that someone will look for an object where they left it, even after the object has been moved, we don't need to think about the belief of the target, but instead we may use general knowledge about perception and action. If the target didn't have a direct line of sight to the object when it was moved, she will continue to seek it where she left it, using a "people seek objects where they left them" heuristic. Children can come to learn this generalization from their object play and observations of caregivers. Children whose lives are full of important objects quickly come to learn that these objects sometimes get misplaced, causing consternation in adults. Parents may muse out loud, "Now, where did I put my keys?" Mischievous children may find it fun to hide their parents' important objects, and then watch the parent looking for the keys on the rack and worrying about where they left them. Experience with the world permits children to recognize patterns of behavior, forming expectations about how others will, and should, act in certain situations. They can do this before they are able to explain why people do as they do in terms of people's beliefs. Indeed, as we saw, when children pass the false belief task at ages 4 to 6, they do not give propositional attitude attributions to explain their reasoning, even though we might expect such attributions to be primed if they were just used to make a prediction. Instead, children explain by talking about the target's past action, or the past location of the object.
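The situational heuristic can be made concrete with a toy sketch (my illustration, not the chapter's; the function name and scenario labels are invented): a predictor that maps an observable feature of the situation straight to a prediction, with no belief variable represented anywhere.

```python
# Toy illustration of the "people seek objects where they left them"
# heuristic. This is a sketch of the idea, not a formalism from the
# chapter; names are illustrative.

def predict_search_location(last_seen_by_agent, actual_location):
    """Predict where an agent will look for an object.

    last_seen_by_agent: where the object was the last time the agent
        had a direct line of sight to it.
    actual_location: where the object really is now.

    The heuristic maps an observable situation straight to a
    prediction; no belief state is represented anywhere.
    """
    return last_seen_by_agent  # the actual location plays no role


# False-belief scenario: the object is moved from the basket to the
# box while the agent is out of the room.
assert predict_search_location("basket", "box") == "basket"

# True-belief scenario: the agent watched the move, so the last-seen
# location coincides with the actual one.
assert predict_search_location("box", "box") == "box"
```

The point of the sketch is that the same rule yields correct predictions in both false belief and true belief cases without any representation of the target's mental states, which is all the passage above requires.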
We can predict that the mother will smile when she looks at her smiling infant, we can predict that our friends will like a sushi dinner (who wouldn't like sushi?), we can predict that the waiter will bring the food we ordered, we can predict that the audience will clap after the performance, that Dexter will bring the doughnuts, that standers will step to the right on the escalator, that getting a raise will make her happy, that the cat will pounce on the toy, and that
the nervous student won't ask a question after the talk. And sometimes these predictions are wrong, leading us to seek an explanation. These methods are not foolproof. Dexter might be busy. Our friends might be vegans. The escalator riders might be visiting from the countryside. Sometimes, too, we appeal to propositional attitudes to predict behavior. But those cases are relatively rare, and they are often cases about propositional attitudes in the first place. Suppose you tell me what you desire or believe, and I want to change your belief or desire. I then do need to think about these mental states – to mindread. For example, you tell me that you believe that candidate A is great, but I want you to vote for candidate B. To change your mind I might think about your beliefs about the issues that the candidates support. We understand people when their behaviors match our expectations. When behavior makes sense, we don't need to explain it. But when we fail to make sense of someone's behavior, we might want to explain it. Here too we have pluralism. We can explain behavior in different ways: he put his feet on the table because he is a slob; because he wasn't raised right; because he's the dominant; because he wants us to leave, etc. Different kinds of explanations have different social consequences; we can dehumanize by offering certain kinds of explanations ("He has 15 children because he can't control himself – he's an animal"), or we can repair damaged relationships ("She spoke harshly because she wants to help you improve your confidence").

Modeling

Once we have pluralism, we have to have some way of integrating the different kinds of information we have about others. Following an early suggestion of Ron Giere, I propose that we integrate by constructing and manipulating models (Andrews 2012).
Giere suggested that folk psychology might involve modeling social information rather than representing it in propositional form, which he took to dissolve the debate between simulation theory and theory theory (Giere 1996). Giere defends this view based on his acceptance of the semantic view of theory, according to which scientific theories are not sets of propositions that scientists share, but rather methods for modeling aspects of the world (Suppes 1960; Suppe 1972). According to semantic accounts of theory, theories are not composed of laws because there isn’t evidence to support individual laws outside of a context. Only within systems can we extract generalizations the model can make true (Giere 1988). A model is a fictionalized and simplified representation of some subset of the world that we create to make the world easier to understand. Models always abstract away from the complexity of the actual world; but they are used to understand patterns that we can extract from the complexity of the actual world, and they can be used to test these patterns. The model approach to folk psychology sidesteps the original debate between simulation theory and theory theory by suggesting that folk psychological capacities can be understood as the ability to create and manipulate a model, or models. In endorsing something along the lines of Giere’s suggestion, Peter Godfrey-Smith notes that modeling is a strategy used in many domains of science, including robotics, artificial intelligence, physics, and evolutionary biology, but that it is also part of how we understand other people (Godfrey-Smith 2005, 2006). He proposes that our social cognitive capacities comprise a facility with manipulating models. In the practice of folk psychology we create multiple models – models of the individual target, but also general models of human behavior that reflect the typical folk psychological information of which particular sets of beliefs and desires cause behavior. 
These models of individuals, however, do not consist of just propositional attitudes, but also include the person's emotional state, moods, and sensations (Godfrey-Smith 2005). When we construe the model as a realistic causal model, we can use it to predict and explain others' behaviors.
Heidi Maibom (2003, 2007, 2009) expands Giere's approach beyond simply modeling psychological properties of people. She argues that in addition to general models of behavior we also have social models. Social models represent information about social structures, institutions, and relations in a culture, and allow individuals to engage with people in that culture smoothly. Around the same time, a number of philosophers came to the realization that a significant amount of social interaction goes so smoothly because we follow the norms or scripts of our culture; we don't need to think about the mind of the waiter when we order our iced latté, at least if we are doing it in our local coffee shop (Andrews 2003; Bermúdez 2003; Gauker 2003; Morton 2003). We use theoretical models of social information to engage in this interaction. Maibom makes the point that these models are normative, not just descriptive, because they describe how people in certain roles ought to act. Maibom identifies three kinds of folk psychological models: (1) models of how an individual's mental states cause individual actions, (2) models of the relationships of mental states to one another, and (3) models of how the world causes an individual's mental states. Taken together, Maibom thinks that the use of these models can account for the richness and complexity of human social cognition. But the models are not always used together:

Since each focuses on a different aspect of subjects and what they do – how an organism relates to its environment, what internal events cause an action, or the role that a subject plays in a social structure – they each provide a different understanding of the situation.
(Maibom 2007, 572)

Folk psychological models fulfill much the same role as theories do in traditional belief-desire psychology, such that mental states are seen as causing actions, and these actions can be predicted and explained in terms of these causally efficacious mental states.
Added to this picture, however, is the ability to make discriminations about types of people based on their social constraints. More recently Albert Newen has advocated a person model account of folk psychology, according to which our folk psychological practices are facilitated by person models of ourselves, other individuals, and of groups (Newen 2015). By "person model", Newen means "a unity of properties or features that we represent in memory as belonging to one person or a group . . . of persons" (Newen 2015, 12). There are explicit and implicit aspects to person models, such that we do not have introspective access to all aspects of our person models. These person models are supplemented with general folk psychological knowledge, as well as models of situations and cultures. Thus, like Maibom's account, on Newen's view we can go some way to understanding individual differences among the people in our social domain. But unlike Maibom, Newen doesn't stress the normative nature of the models. On my view, humans build two general types of models, models of individuals and models of groups, and neither of these reflects the kind of information typically appealed to in accounts of folk psychological theory. For individuals, our models consist of the information types that come from the pluralistic mechanism described above. The models do not take others to be bags of skin filled with propositional attitudes, but richly drawn characters with past histories, relationships, tics, ways of moving, preferences, and personalities. How much would you miss were you to model your wife, your kid, or your mother, using only propositional attitudes? Compare that model with the model that only takes away propositional attitudes and leaves their personality, past history, tendencies, quirks, moods, emotions, relationships, social role, and so forth. How little would be missing from the latter model?
Building these models isn't a lonely task, carried out in an isolated space without input from the target or others in the social context. Rather, these models are built through interaction with their targets. When interacting with another person, your model of her will be affected by her model of you (Andrews 2015a, 2015b). A child whose teacher thinks poorly of her will tend not to do well in class, and not thriving in the teacher's classroom will cause the child to think poorly of the teacher in turn. Likewise, a teenager who finds out that an attractive acquaintance has a crush on her will tend to think more highly of the acquaintance for having such good taste. Our interactions with others play an important role in the creation of our other models. We also create models of groups of people. These groups may be small, like a family, or large, like the species of humans. In between we model types of people – women, Canadians, philosophers, racists, teenagers, the generous, and so forth. We use stereotypes – generalizations about groups that state the properties and traits that group members supposedly have. These properties can include behaviors, beliefs, attributes of physical appearance, and goals; all these elements are structured such that they relate to one another. Stereotypes permit fast thinking and are also richer than trait attribution (Andersen and Klatzky 1987). Our biases about groups, while sometimes deserving of their bad reputation, are also much more likely to be accurate than is commonly accepted (Jussim 2012, 2015; see Andrews forthcoming for a discussion). Because a single person is going to fit a number of different stereotypes (the white mother police officer), we also activate the appropriate stereotype pragmatically, according to what is most useful (Stangor et al. 1992), or how accountable we will be for getting it right or wrong (Pendry and Macrae 1996).
There are familiar stereotypes for race, gender, and nationalities, but we also construct stereotypes for groups that are salient for us (philosophers and runners), for families (the Munsters and the Clintons), for types of individuals that crosscut other categories (the dominants and subordinates), and for species (humans and dogs). In our human models we get many elements associated with traditional belief-desire psychology, but we also get many of these other elements in our models of other species (as when we think that animals can seek out what they perceive, or that mammal females care for their young). Our easy attribution of mental states to other animals challenges the position that our folk psychology is limited to understanding other humans, or humans of a particular cultural variety. Our models of individuals and groups allow us to make quick and fairly accurate predictions of others, but we can also use them to understand others. For example, we can explain someone’s behavior in terms of a stereotype about her – that she’s a philosopher – in order to demonstrate that the individual is living up to some kind of group norms, rather than demonstrating an individual personality trait. These models are normative, not merely descriptive. I agree with Maibom that we build normative models of social systems (or as I put it, of social groups). Stereotypes are prescriptive – little girls might be criticized for getting too dirty or climbing trees, while little boys are praised for the same action. President Obama has been criticized for not being “black enough”. People think that females should be more nurturing than males and that blacks should be more athletic than whites (Burgess and Borgida 1999; Eagly 1987; Fiske and Stevens 1993). Stereotypes are not just descriptions, but also reflect norms about what someone ought to do. This is true of individual models as well as group models. 
When a person acts contrary to her usual habits, friends and family worry that something is wrong. For example, when your punctual friend doesn't show up to dinner at the agreed-upon time, you may wonder what happened to her, but if another person were just as late, you might not be bothered at all. Our individual models are not just descriptive, but they serve as a kind of
agreement with the people in our lives; we all agree that the punctual friend is, and should be, punctual, and a violation of that norm is going to let us down. This is why we tend to be harder on a good person who violates a moral norm than on a bad person who violates the same norm, because we expect more of the good person. Furthermore, since these models are created by interactions with the person in question, acting against one's model is seen as a violation of the agreement that was created by past interactions. Like Maibom I think that we build models of social systems and types of people, and that these models are normative. What I add is that the models we build of individuals are also normative. In addition, these models are not individually created, but are co-created through interaction between the modeler and other individuals.

Normativity

When thinking about models, causal models may be the first example that comes to mind. Causal models are input-output systems that can be used to predict what will happen by appeal to the causal structure from the input until the point of interest is reached. However, causal models are only one kind. Godfrey-Smith describes models this way:

[A model is] usually a class of hypothetical systems, similar to each other in general pattern, and constructed from a common repertoire of elements. When a scientist has facility with a model, the scientist has an understanding of a whole category of hypothetical systems. . . . Two scientists can use the same model to help with the same target system while having quite different views of how the model might be representing the target system. . . . For example, one scientist might regard some model simply as an input-output device, as a predictive tool. Another might regard the same model as a faithful map of the inner workings of the target system.
So both scientists, in a sense, are hoping for a resemblance between model and target, but they are looking for very different kinds of resemblance.
(Godfrey-Smith 2005, 4)

Godfrey-Smith stresses that models alone don't give us science; models need to be construed before we use them to make claims about the nature of the world. In folk psychology, the relationships between mental states and behavior are usually described as causal relations. In a model view of folk psychology, one could construe the model as offering a causal story of the actual workings of the target. However, construing the model in causal terms doesn't reflect the folk commitment to free will. Human cognitive flexibility is seen in terms of making a choice, and people can be held responsible for the choices they make. Our models of individuals, as well as our models of groups, are largely construed normatively rather than causally. The person who isn't acting like herself is viewed with suspicion ("What is she up to?"), or the person who is acting outside her group norm may be ostracized from the community ("We don't want a cakesniffer around here!"). In understanding other people, we think about what they should do given who they are. This permits prediction and explanation, but attributions of propositional attitudes to others also have the force of a commitment. If he really believes that Santa exists, and if he really wants a present, he should be nice rather than naughty. The kind of normativity I see in folk psychology isn't of a moral or a rational sort, but it is the foundation for developing these more sophisticated normative sensibilities. Infants' early mental states reflect how the world is, and how the world should be; we have evidence of the latter given their protests when things don't go right. Later, infants begin thinking in other
modes, about how things will be, how things were, and even later about how things could have been. Normative thinking is a distinct mode of thought, on a par with thinking about past and future, counterfactuals and possibility, but it is a mode of thought that is central to typical social cognitive engagement. Inspired by Hannah Ginsborg’s (2011) notion of primitive normativity, I have developed a corresponding account of the cognitive capacity required for such basic normative practice, which I refer to as naïve normativity. The normative lens does not require having any normative concepts or rules. Rather, the normative lens is more accurately characterized by thinking in terms of “ought” rather than “is”, and that allows group members to see how we do things around here (Andrews 2015a, b, in preparation). The two central elements of naïve normativity are the we and the way. Having naïve normativity involves having a feeling of belonging, which later leads to in-group/out-group discrimination, and a motivation to do things the way in-group members do them. Thus, engaging in naïvely normative reasoning requires a feeling of belonging which leads to in-group identification as well as identification of the proper behaviors of the in-group. Others also stress the normative nature of folk psychology. Victoria McGeer argues that the central function of folk psychology, and in particular belief reasoning, is to help us regulate behavior, not to help us in causal thinking for predicting or offering causal explanations (McGeer 2007, 2015). Regulative practices shape how people act and think, and this makes it easier for people to coordinate with one another.
When we use folk psychology to make sense of someone’s behavior, we are noting how well the person is living up to the norms, and hence, how intelligible their behavior is. When the behavior fits the model, the pattern that we expect, then we have made sense of that behavior; but when it doesn’t, the person becomes unintelligible. For example, when two people are engaged in a joint action, they should continue to work on the project until it is finished, or until they agree to stop. If one individual walks away mid-project, the partner may inquire, “Why aren’t you staying to finish the project?” Suppose the individual answers, “Because we don’t have the supplies.” This answer sets up another expectation: that if they really do have the supplies, or if the supplies are acquired, the individual will return to the project. Our explanations of our own behavior are mini-contracts with the people to whom we offer them. If we fail to live up to those contracts, then we lose status and trust. The normativity apparent in our individual and group models reflects the tendency the folk have to construe models as descriptions of how people should act, rather than just as causal stories about how they will act; and it forms the foundation for our ability to coordinate behavior – both at an intimate level between dyads and at a global societal level. Of course, this is not to say that people do not also offer what appear to be causal explanations in terms of mental states (“She thought that going to the opera would make her more cultured”) and emotions (“She screamed because she was so happy”). The point is that at their core folk psychological models are interpreted normatively, describing how individuals should act. This difference is illustrated by looking at cultural differences. In Japan, one should walk on the left side of the stairs; in Canada, one should walk on the right. Violations damage coordination and result in protest of one sort or another.
There is no reason for one norm over the other, no causal account that is part of the commonsense understanding of how we should do things around here. Norms work so well that they are largely invisible – until visiting a culture that doesn’t share them, or teaching an immigrant our ways. This normative lens through which we see intentional action is at the center of folk psychology and is something we see early on in children, and, as we will see in the next section, may also be present in other species. My working hypothesis is that naïve normativity is an
evolutionarily old lens through which we, and some other animals, can’t help but see the social world.
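The two-part structure of naïve normativity described above – the we and the way – can be caricatured computationally. The following Python sketch is purely illustrative (all names and data are invented, and no claim is made about actual cognitive mechanisms): an agent that merely identifies an in-group and is motivated to copy its most common behavior will reproduce “how we do things around here” without representing any explicit norm or rule.

```python
# Toy sketch of naive normativity: no explicit rules, only in-group
# identification plus a drive to do things the way in-group members do.
from collections import Counter

def ingroup_way(agent, observations):
    """Return the most common behavior among the agent's in-group."""
    ingroup_acts = [act for (actor_group, act) in observations
                    if actor_group == agent["group"]]
    if not ingroup_acts:
        return None
    return Counter(ingroup_acts).most_common(1)[0][0]

def act(agent, observations):
    """Do things the way in-group members do; ignore the out-group."""
    way = ingroup_way(agent, observations)
    return way if way is not None else agent["default"]

agent = {"group": "A", "default": "improvise"}
seen = [("A", "walk-left"), ("A", "walk-left"), ("B", "walk-right")]
print(act(agent, seen))  # the agent adopts the in-group way: walk-left
```

Nothing in the sketch mentions “ought” – yet the agent's behavior conforms to the in-group pattern, which is the point of calling the capacity naïve.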
Pluralistic folk psychology in other apes

Once we have identified the various aspects of folk psychology that are part of human social cognition, we can begin to investigate those ways in which other species are similar to, and different from, humans. While a full investigation into these abilities in other animals is an ongoing research project, I can here sketch the kind of evidence we have, and evidence we might look for, in order to do the comparative work. The focus will be on great apes, and in particular chimpanzees, since the majority of the research in social cognition has been done on that species. Since recognizing intentional agency is key to being a folk psychologist, the first place to examine continuity is in the ability to discriminate intentional action from nonintentional movement. Biological movement is one place to draw comparisons, and here there is data that other species have this capacity. Given the importance of recognizing biological motion for detecting predators as well as potential social and sexual partners, it wouldn’t be surprising if many species have the capacity to distinguish agential biological motion from nonbiological motion. Infants of 6 months respond to point-light walkers – moving dots extracted from video of walking humans – as though they are intentional agents, suggesting that infants are sensitive to mere biological movement by this age (Kuhlmeier et al. 2010). Macaques perform like humans when presented with a point-light walker display. Like the infants, monkeys show the ability to distinguish the direction of motion, and they display the same pattern of degradation when the stimulus is modified (Churan and Ilg 2001).
The ability to recognize intentional agency has received some attention among animal cognition researchers as well, particularly in research with chimpanzees; and a number of studies suggest that chimpanzees ascribe goals to (what appears to humans as) purposeful behavior (Uller 2004; Warneken and Tomasello 2006). In addition to being able to distinguish between agents and non-agents, chimpanzees also appear to be sensitive to the distinction between the intentional and nonintentional behavior of agents. For example, chimpanzees are more impatient with humans who are unwilling to give them food than with those who are unable to give them food (Call et al. 2004). Chimpanzees can also test to see whether a behavior is intentional or not. A chimpanzee named Cassie noticed when his caregiver started mirroring his movements. Like humans, Cassie would systematically vary his movements while closely watching his caregiver (Nielsen et al. 2005), as though trying to determine whether he was being mirrored. We have good evidence that chimpanzees and humans share the capacity to recognize intentional agency. The behavioral evidence suggests similarities between humans’ and other primates’ perception of biological movement and intentional agency. But these findings shouldn’t be interpreted as evidence that nonhuman primates have a concept of desire, intention, or goal, nor from these data alone can we make inferences about the mechanisms underlying these abilities.

Pluralism

What we can do is continue the comparative task, given the evidence that some nonhuman primates can distinguish intentional agents and intentional action from aspects of the nonintentional world. The pluralistic folk psychology approach suggests that the next step is to work
Table 7.1 Non-propositional methods of predicting intentional action

Primary intersubjectivity: regulating interactions using ostensive signals such as eye contact
Self-reference: expecting others to behave as oneself would
Stereotypes or social roles: generalizing about types of people
Situation: generalizing about what people should do in a typical situation
Inductive generalizations over past behavior: expecting an individual to act consistently
Norms: expecting people to follow social norms
Emotions and sensations: forming generalizations about behavior as caused by non-propositional mental states
Teleology or goal: forming expectations based on the attribution of a goal understood in terms of achieving some state of affairs
Trait attribution: relying on trait attributions of a person in order to predict future behavior
Perceptual state: expecting others to act on objects they can see
Causal history: expecting others to act based on their past experiences
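Read as a specification, the menu in Table 7.1 describes a set of fallback prediction strategies rather than a single belief-desire inference engine. A minimal Python sketch of that pluralistic structure (hypothetical and illustrative only; the function names and example data are invented, not part of the account): a predictor tries whichever shallow, non-propositional methods apply and only gives up when none does.

```python
# Illustrative sketch of a pluralistic predictor: several shallow,
# non-propositional methods are tried in turn; none of them involves
# attributing beliefs or desires.

def predict_by_norm(situation, norms):
    # Norms: expect people to follow social norms.
    return norms.get(situation)

def predict_by_past_behavior(person, history):
    # Inductive generalization: expect an individual to act consistently.
    past = history.get(person)
    return past[-1] if past else None

def predict(person, situation, norms, history):
    # Try each method in turn; fall back when one has nothing to say.
    for method in (lambda: predict_by_norm(situation, norms),
                   lambda: predict_by_past_behavior(person, history)):
        prediction = method()
        if prediction is not None:
            return prediction
    return "unknown"

norms = {"stairs-in-japan": "walk-left"}
history = {"Taro": ["bows"]}
print(predict("Taro", "stairs-in-japan", norms, history))  # walk-left
print(predict("Taro", "greeting", norms, history))         # bows
```

Only two of the eleven methods are sketched here; the structural point is just that prediction can succeed through many independent routes, with no single route privileged.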
through the menu of predictive methods described in Table 7.1 in order to determine to what extent other apes use and do not use these methods. This is not the venue to rehearse the current state of research on great ape cognitive capacities, though see Table 7.2 for a summary. For a more in-depth discussion of the capacities, see Chapters 5 and 6 of The Animal Mind (Andrews 2015c).

Modeling

While we have evidence that chimpanzees use some of the mechanisms of Pluralistic Folk Psychology, the question remains whether they use these mechanisms to put together richly drawn models of other individuals and groups. More research will be needed before we can determine whether apes sort individuals into groups, and whether they have higher-order group categories that include various other types. For example, in humans our stereotype of women includes information about personality traits, situational behaviors, norms, and emotions, such that our women stereotype is a higher-order category. However, we do have evidence of in-group and out-group thinking in apes, which should be further investigated to see in what ways other apes think about groups. As in humans, in-group and out-group thinking is the product of a developmental process that begins early on. Chimpanzee and human infants spend much of their first few years very close to an adult caregiver – usually the mother. Infant chimpanzees cling to their mothers for the first several years of life, nursing, observing, and feeling her movements. Kim Bard reports that chimpanzee mothers gaze into the face of their infants and spend a lot of time engaged in tactile games. Captive chimpanzee infants reared by humans in rich social settings also display interest in the familiar social games we play with babies.
For example, chimpanzee infants have been known to initiate games of peek-a-boo (Bard 2005) and engage in neo-natal imitation, mimicking an open mouth or tongue protrusion demonstrated by a human caregiver when only 7 days old (Bard 2007; Myowa-Yamakoshi et al. 2004).
Table 7.2 Evidence that apes use methods of prediction identified in human social cognition

Primary intersubjectivity; coordination with caregiver (following Trevarthen 1979)
References: Gómez 2010; Myowa-Yamakoshi et al. 2004
Evidence: Gorilla infant engages in joint cooperative behavior; chimpanzee neo-natal imitation.

Self-reference
References: No known data
Evidence: Unknown whether apes think other apes are like them; future research may investigate using a preference task based on Repacholi and Gopnik 1997.

Stereotypes or social roles
References: de Waal 1982, 2009
Evidence: Chimpanzee dominance; special treatment of disabled.

Situation
References: No known data
Evidence: Chimpanzees change behavior in different situations (e.g., boundary patrols), but it’s not clear that chimps are predicting others’ behavior; future research can look for violations and responses to violations of situational norms.

Inductive generalizations over past behavior
References: Subiaul et al. 2008; Herrmann et al. 2013
Evidence: Chimpanzees learn traits of unfamiliar humans by watching them interacting with another chimpanzee; orangutans can formulate reputation judgments by observing a human interacting nicely or meanly with another orangutan.

Norms
References: Rudolf von Rohr et al. 2011, 2015; Hockings et al. 2006; Clay et al. 2016
Evidence: Chimpanzees protest infanticide; chimpanzees aid in road crossing; bonobos protest unexpected social violations.

Emotions and sensations
References: Parr et al. 2007; de Waal 2009
Evidence: Chimpanzees recognize basic emotions on faces; chimpanzee consoles friend after loss.

Teleology or goal
References: Uller 2004
Evidence: Chimpanzee infants pass Gergely et al.’s 1995 teleology task.

Trait attribution
References: Subiaul et al. 2008; Melis et al. 2006
Evidence: Chimpanzees prefer to beg from a generous human donor over a selfish one; chimpanzees prefer to select more skillful collaborators.

Perceptual state
References: Okamoto et al. 2002; Hare et al. 2000; Karg et al. 2015
Evidence: Chimpanzee infants track eye direction; chimpanzees seek food that a dominant chimpanzee cannot see; chimpanzees project their experience with opaque and transparent goggles onto another.

Causal history
References: Wittig et al. 2014
Evidence: Chimpanzees understand the relationships between a past opponent and his social partner.

Mindreading belief
References: Call and Tomasello 1999
Evidence: Chimpanzees fail nonverbal FB task; there is at this point no published evidence that apes mindread belief.
As infants move away from their mothers, they interact more with members of their social group – other infants, older siblings, adult females, and adult males. At first, infants recognize the quality of their relationship with a caregiver, monitoring their mothers to learn how to respond to a new situation. Captive chimpanzees raised in a situation in which they have developed
attachment bonds to a human caregiver will, at 14 months, like human infants, alternate gaze between the caregiver and a novel object in order to gain information about the object. Chimpanzees, like human children, also monitor the emotional valence of the caregiver’s facial expression and will withdraw from objects that the caregiver looks on in fear (Russell et al. 1997). In addition, 1-year-old chimpanzees look much like 1-year-old human children when tested in the Strange Situation Procedure. Securely attached human and chimpanzee infants will play with toys when the caregiver is present in the strange situation and will seek security at similar rates when the caregiver is out of the room (Van Ijzendoorn et al. 2009). As adults, chimpanzees draw clear distinctions between in-group and out-group members. They protect their social group through territory patrols that have the function of drawing clear boundaries between different chimpanzee communities. During patrols chimpanzees move to the edge of their territory, searching for signs of incursion. Occasionally they will cross the boundary into rival territory, searching for, attacking, and even killing and mutilating male and infant chimpanzees (Boesch and Boesch-Achermann 2000; Watts and Mitani 2001). Like humans, chimpanzees need to distinguish their in-group members from their out-group members. Chimpanzee females migrate from their natal community into a new community as adolescents, and it takes time for them to transition from out-group members to in-group members; when first joining a new community, immigrants have low rank and are the subject of much aggression from resident females (Nishida 1989; Kahlenberg et al. 2008). Male chimpanzees often intervene in these encounters, almost always supporting the immigrant female.
It has been hypothesized that immigrant females form strategic alliances with male chimpanzees, leading to a change in the immigrant’s acceptance into the community as indicated by dominance status (Kahlenberg et al. 2008).

Normativity

There is some evidence that other apes also see the social world through a normative lens. Recall that naïve normativity is the ability to identify with an in-group and be motivated to do things the way in-group members do. For a long time we have known that human infants use social referencing when deciding how to interact with new people and situations (for a review see Walden 1991; Klinnert 1984) – choosing to play with a stranger when the mother treats the stranger as a friend and choosing not to play with a stranger when the mother does not first interact with the stranger in a friendly way (Feiring et al. 1984). We also know that 14-month-old children imitate in-group members more than out-group members (Buttelmann et al. 2013), and that 3- and 4-year-old children prefer to learn from high-status in-group members (Chudek et al. 2012). Early on, children discriminate between those who are one of us and those who are one of them. We might expect to see in-group and out-group thinking in chimpanzees given that there are cultural differences between chimpanzee communities (Whiten et al. 1999). Chimpanzee groups use tools differently, they eat different foods, they engage in different courtship behaviors (e.g., leaf clipping), and they groom one another differently (van Leeuwen et al. 2012). It may be the case that chimpanzees self-medicate differently as well (Huffman 2001). Chimpanzee cultural differences cannot be attributed to ecological differences, since they are also seen in communities that are in close proximity. There is evidence that it takes new immigrants some time to learn the new way of doing things around here, such as a different way of cracking nuts (Luncz et al. 2012; Luncz and Boesch 2014).
These findings about wild chimpanzee behavior led researchers to test how different groups of captive chimpanzees deal with the introduction of a new behavior into their community.
A dominant chimpanzee was shown one way to open a puzzle box that has two different solutions. Once the dominant chimpanzee mastered that method, the next-ranking chimpanzee was allowed into the chamber to watch the dominant manipulate the box. Just by watching, the observer learned to open the box in the same way the dominant did. This process continued through the community of chimpanzees, and researchers observed a daisy-chain effect in which the original method of demonstration spread through the group. In another group of chimpanzees, the dominant female was shown the other way of opening the box, and that community came to use the alternate method. In both groups, the undemonstrated method of opening the box was discovered, but was rarely adopted and did not spread among the group members. This study suggests that chimpanzees tend to model high-ranking individuals and prefer to manipulate an apparatus in the way the dominant does; these drives permit high-fidelity transfer through a group (Horner et al. 2006). In another study of social learning in captive chimpanzees, infant chimpanzees mimicked the behavior of a mother model while learning a new skill, even when the tools necessary for achieving the goal were not available (Fuhrmann et al. 2014). Infant chimpanzees do not try to gain nuts when they move their hands in synchrony with their mothers’; they are motivated to imitate their mothers’ movements, and this motivation provides the kind of training that later allows infant chimpanzees to better crack nuts when needed. Among free-ranging chimpanzees, there are reports that have been interpreted in terms of imitating influential individuals. In one sanctuary community, chimpanzees began to wear grass in their ears after a dominant female adopted the behavior (van Leeuwen et al. 2014).
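The transmission dynamics of the puzzle-box study can be caricatured in a short simulation. This is an illustrative toy only (the names and the copying probability are invented, not fit to the Horner et al. data): a method seeded in the dominant individual passes down the hierarchy as each chimpanzee watches and usually copies the next-higher-ranking model.

```python
import random

def transmit(ranked_group, seeded_method, copy_prob=0.95):
    """Pass a box-opening method down a dominance hierarchy: each
    individual observes the next-higher-ranking model and usually
    copies its method (the 'daisy-chain' effect)."""
    methods = {ranked_group[0]: seeded_method}  # dominant is trained first
    for observer, model in zip(ranked_group[1:], ranked_group):
        if random.random() < copy_prob:
            methods[observer] = methods[model]  # copy the observed model
        else:
            methods[observer] = "alternate"     # rare individual discovery
    return methods

random.seed(0)
group = ["dominant", "beta", "gamma", "delta"]
print(transmit(group, "lift-door"))
```

With a high copy probability the seeded method almost always spreads intact through the whole group, while the undemonstrated "alternate" method appears only through occasional individual discovery – mirroring the pattern reported in the study.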
And, among wild chimpanzees at Bossou, the transmission of the complex hammer-and-anvil nut-cracking style has been described by primatologist Tetsuro Matsuzawa as a case of education by master apprenticeship (Matsuzawa et al. 2001). A field experiment with that population found that when a new nut type was introduced into the community, individuals would only observe chimpanzees older than themselves; they didn’t seek to learn the behavior from younger individuals, even when the younger individuals were proficient in the behavior (Biro et al. 2003). While chimpanzees mimic behavior, they won’t mimic just anyone. A reliable model is needed, and that reliable model must be of the right type. In a particularly interesting experiment, chimpanzees were shown how to operate an apparatus by a “ghost” – transparent fishing wire. Without a real model demonstrating the behavior, chimpanzees usually failed to learn the new behavior (Hopper et al. 2007), and even when they did learn the action, they demonstrated lower degrees of fidelity (Hopper et al. 2008). This interpretation of the findings is not universally shared. Richard Moore (2013) argues that salience cuing rather than relationship type or prestige preference can account for many of the above findings. Michael Tomasello (2010) argues that the shared intentionality we see so early in human children is absent in chimpanzees, so while they can coordinate behavior they cannot act together with a shared goal – they do not cooperate. While it appears that chimpanzees prefer to model high-ranking prestigious individuals over low-ranking ones, perhaps chimpanzees do not learn by imitating, but instead learn about affordances in the world and act rationally to achieve the end goal. High-ranking individuals may be those who are better at achieving the goal, and so the affordances are more apparent.
One reason given for thinking that chimpanzees do not imitate the shape of a behavior, but instead act rationally to achieve a goal, comes from studies of overimitation in chimpanzees. Human infants, famously, will overimitate human models by doing silly and unnecessary actions when a human demonstrator models those actions (Meltzoff 1988; Gergely et al. 2002; but see Paulus et al. 2011). However, in a study in which wild-born sanctuary chimpanzees were presented with an opaque puzzle box with food inside, a human modeled how to open the box, and the chimpanzees faithfully copied all the human model’s actions in order to gain the food. But
when the chimpanzees were presented with a transparent box, with all the causal structure of the box visible, chimpanzees failed to overimitate, skipping over the unnecessary actions (such as tapping the top of the box with a stick) (Horner and Whiten 2005). This finding appears to be in stark contrast to the finding that children tend to overimitate actions they know are causally irrelevant, even when warned not to imitate the “silly” actions (Lyons et al. 2007). Just as human children tend to overimitate demonstrators who speak their own language but fail to overimitate demonstrators who speak a different language (Buttelmann et al. 2013), I propose that wild-born chimpanzees do not overimitate humans (an obvious out-group), but that chimpanzees who have close relationships with humans will spontaneously overimitate them (enculturated apes often appear to identify themselves as human). This proposal is consistent with the finding that chimpanzees prefer to imitate high-ranking individuals, and that chimpanzees, like humans, won’t imitate just anyone. Perhaps they will imitate in-group members, and high-ranking in-group members, just as we see with human children. To examine this issue, apes with close, high-quality human relationships can be examined, such as Tetsuro Matsuzawa and his “research partner” chimpanzee Ai, a cross-species dyad that has been working together for almost 40 years. If Ai prefers to do things the way Matsuzawa does, it would reflect a drive to do things the way her human in-group members do and would support the existence of a normative lens through which the social world is viewed.
The worry that chimpanzees lack shared intentionality, and hence cannot be normative because they lack the we-sense needed for naïve normativity, is undermined by studies of early mother–infant interaction in chimpanzees (e.g., Bard and Leavens 2014) and by the long-term learning of coordinated goals among adult wild chimpanzees, such as hunting monkeys (Boesch 1994), boundary patrols and intergroup aggressive encounters (Mitani and Watts 2001), and road crossing (Hockings et al. 2006). Experiments with captive chimpanzees find evidence of cooperation in order to access out-of-reach food (Hirata 2003; Hirata and Fuwa 2007; Melis et al. 2006), sharing high-quality food (Byrnit 2015), and spontaneously engaging in joint action when given a choice of whom to interact with (Suchak et al. 2014). Tomasello’s response to the growing evidence of chimpanzee cooperation (that “in almost all cases the key to understanding their cooperation is this same overarching matrix of social competition” [Tomasello 2016, 23]) is unconvincing, given the range of situations in which chimpanzees cooperate and given how little we know about wild chimpanzees. Studies of cooperation in captive chimpanzees are usually structured around food tasks, and we know from wild chimpanzees that food sharing is not a typical chimpanzee behavior. (Looking for evidence of chimpanzee cooperation in food-sharing tasks is like looking for evidence of human cooperation in toothbrush-sharing tasks.) Furthermore, captive chimpanzees are often actively discouraged from cooperating, for when they do it can cause mayhem for their keepers. Randy Wisthoff, the director of the Kansas City Zoo, reports that a group of chimpanzees who escaped their enclosure in 2014 were led by an individual who, after setting up a log to be used as a ladder, “beckoned to another six chimps to join him” (Millward 2014).
If we fail to see cooperation in captive chimpanzees, it may be because we are unwilling, or unable, to offer them the rich environments that lead to cooperation in the wild.
Conclusion

In order to judge how humans are like and unlike other apes, we need to start with a solid foundation of understanding the abilities of humans and the abilities of other apes. We also need to have good evidence about the mechanisms that underlie these abilities. This makes it
exceedingly difficult to justify general claims of continuity or discontinuity between humans and other animals. If Pluralistic Folk Psychology is an accurate description of human social cognition, then it serves as a guide for where to look for continuities in social cognition across species. Given what we know so far about chimpanzees, orangutans, gorillas, and bonobos, there appear to be some places of continuity between humans and the other ape species in their social cognitive capacities. Naïve normativity – that lens through which we see others as in-group members we’re motivated to model, or as out-group members we want to distance ourselves from – is something we have evidence for in chimpanzees. Chimpanzees also appear to make generalizations about other individuals, expecting that another will act in the future as he acted in the past. They also expect individuals to act differently in different situations, and to act according to their dominance role. Furthermore, there is some evidence that chimpanzees expect others to obey certain social norms, be it a method of grooming or tender treatment of infants. Whether chimpanzees form models of other chimpanzees is an open question, but one that may be empirically tractable. Our knowledge of chimpanzee social cognition is in its infancy, but we should expect that their social capacities, like our social capacities, would not be primarily based in metacognitive thinking about the hidden propositional attitudes driving behavior. Just as humans see other humans as persons rather than as biological machines operating mechanistically on beliefs and desires, chimpanzees likely see others as individuals with relationships, social status, past histories, capabilities, and skills.
Acknowledgements

Thanks to Richard Moore for some very helpful comments on this paper.
Note

1 In this paper I am going to use the term “mindreading” in this narrow sense of ascribing propositional attitudes such as belief and desire to others. There is no handy term for this subset of mindreading capacity, and “mindreading propositional attitudes” is unwieldy to repeat. “Mindreading” could be used in a wider sense, too, and include ascribing mental content such as perceptions, emotions, and sensations.
References

Andersen, S. M. & Klatzky, R. L. (1987). Traits and social stereotypes: Levels of categorization in person perception. Journal of Personality and Social Psychology, 53, 235–246.
Andrews, K. (2003). Knowing mental states: The asymmetry of psychological prediction and explanation. In Q. Smith and A. Jokic (Eds.), Consciousness: New Philosophical Perspectives (pp. 201–219). Oxford: Oxford University Press.
———. (2012). Do Apes Read Minds? Toward a New Folk Psychology. Cambridge, MA: MIT Press.
———. (2015a). The folk psychology spiral. Southern Journal of Philosophy, 53, 50–67.
———. (2015b). Pluralistic folk psychology and varieties of self knowledge. Philosophical Explorations, 18(2), 282–296.
———. (2015c). The Animal Mind: An Introduction to the Philosophy of Animal Cognition. London: Routledge.
———. (2016). More stereotypes, please! Behavioral and Brain Sciences. Commentary on Lee Jussim.
———. (In preparation). Naïve normativity.
Andrews, K. & Verbeek, P. (unpublished manuscript). Does explanation precede prediction in false belief understanding?
Apperly, I. (2010). Mindreaders: The Cognitive Basis of “Theory of Mind.” New York: Psychology Press.
Apperly, I. A. & Butterfill, S. A. (2009). Do humans have two systems to track beliefs and belief-like states? Psychological Review, 116(4), 953–970.
Apperly, I. A., Riggs, K. J., Simpson, A., Chiavarino, C. & Samson, D. (2006). Is belief reasoning automatic? Psychological Science, 17(10), 841–844.
Back, E. & Apperly, I. A. (2010). Two sources of evidence on the non-automaticity of true and false belief ascription. Cognition, 115(1), 54–70.
Bard, K. A. (2005). Emotions in chimpanzee infants: The value of a comparative developmental approach to understand the evolutionary bases of emotion. In J. Nadel and D. Muir (Eds.), Emotional Development: Recent Research Advances (pp. 31–60). New York: Oxford University Press.
Bard, K. A. (2007). Neonatal imitation in chimpanzees (Pan troglodytes) tested with two paradigms. Animal Cognition, 10, 233–242.
Bard, K. A. & Leavens, D. A. (2014). The importance of development for comparative primatology. Annual Review of Anthropology, 43(1), 183–200.
Bermúdez, J. (2003). Thinking Without Words. Cambridge, MA: MIT Press.
Biro, D., Inoue-Nakamura, N., Tonooka, R., Yamakoshi, G., Sousa, C. & Matsuzawa, T. (2003). Cultural innovation and transmission of tool use in wild chimpanzees: Evidence from field experiments. Animal Cognition, 6(4), 213–223.
Boesch, C. (1994). Cooperative hunting in wild chimpanzees. Animal Behavior, 48, 653–667.
Boesch, C. & Boesch-Achermann, H. (2000). The Chimpanzees of the Taï Forest: Behavioural Ecology and Evolution. Oxford: Oxford University Press.
Burgess, D. & Borgida, E. (1999). Who women are, who women should be: Descriptive and prescriptive gender stereotyping in sex discrimination. Psychology, Public Policy, and Law, 5, 665–692.
Buttelmann, D., Carpenter, M., Call, J. & Tomasello, M. (2013). Chimpanzees, Pan troglodytes, recognize successful actions, but fail to imitate them. Animal Behaviour, 86(4), 755–761.
Buttelmann, D., Carpenter, M. & Tomasello, M. (2009).
Eighteen-month-old infants show false belief understanding in an active helping paradigm. Cognition, 112, 337–342.
Butterfill, S. A. & Apperly, I. A. (2013). How to construct a minimal theory of mind. Mind & Language, 28(5), 606–637.
Byrnit, J. (2015). Primates’ socio-cognitive abilities: What kind of comparisons makes sense? Integrative Psychological & Behavioral Science, 49(3), 485–511.
Call, J., Hare, B., Carpenter, M. & Tomasello, M. (2004). “Unwilling” versus “unable”: Chimpanzees’ understanding of human intentional action. Developmental Science, 7, 488–498.
Call, J. & Tomasello, M. (1999). A nonverbal false belief task: The performance of children and great apes. Child Development, 70, 381–395.
Chudek, M., Heller, S., Biro, S. & Henrich, J. (2012). Prestige-biased cultural learning: Bystander’s differential attention to potential models influences children’s learning. Evolution and Human Behavior, 33(1), 46–56.
Churan, J. & Ilg, U. J. (2001). Processing of second-order motion stimuli in primate middle temporal area and medial superior temporal area. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 18(9), 2297–2306.
Clay, Z., Ravaux, L., de Waal, F. B. & Zuberbühler, K. (2016). Bonobos (Pan paniscus) vocally protest against violations of social expectations. Journal of Comparative Psychology, 130(1), 44–54.
Cohen, A. S. & German, T. C. (2009). Encoding of others’ beliefs without overt instruction. Cognition, 111, 356.
Csibra, G. & Gergely, G. (2011). Natural pedagogy as evolutionary adaptation. Philosophical Transactions of the Royal Society B, 366, 1149–1157.
Eagly, A. H. (1987). Sex Differences in Social Behavior: A Social-Role Interpretation. Hillsdale, NJ: Lawrence Erlbaum Associates.
Feiring, C., Lewis, M. & Starr, M. D. (1984). Indirect effects and infants’ reaction to strangers. Developmental Psychology, 20(3), 485–491.
Fiebich, A. (2013). Mindreading with ease? Fluency and belief reasoning in 4- to 5-year-olds.
Synthese, 191(5), 929–944.
Pluralistic folk psychology
Fiske, S. T. & Stevens, L. E. (1993). What’s so special about sex? Gender stereotyping and discrimination. In S. Oskamp and M. Costanzo (Eds.), Gender Issues in Contemporary Society (pp. 173–196). Newbury Park, CA: Sage.
Fuhrmann, D., Ravignani, A., Marshall-Pescini, S. & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4. doi: 10.1038/srep05283
Gauker, C. (2003). Words Without Meaning. Cambridge, MA: MIT Press.
German, T. C. & Cohen, A. S. (2012). A cue-based approach to “theory of mind”: Re-examining the notion of automaticity. British Journal of Developmental Psychology, 30(1), 45–58.
Gergely, G., Bekkering, H. & Király, I. (2002). Rational imitation in preverbal infants: Babies may opt for a simpler way to turn on a light after watching an adult do it. Nature, 415, 755.
Gergely, G., Nadasdy, Z., Csibra, G. & Bíró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165–193.
Giere, R. N. (1996). The scientist as adult. Philosophy of Science, 63, 538–541.
———. (1988). Explaining Science: A Cognitive Approach. Chicago, IL: University of Chicago Press.
Ginsborg, H. (2011). Primitive normativity and skepticism about rules. Journal of Philosophy, 108, 227–254.
Godfrey-Smith, P. (2005). Folk psychology as a model. Philosophers’ Imprint, 5(6), 1–16.
———. (2006). The strategy of model-based science. Biology & Philosophy, 21(5), 725–740.
Gómez, J.-C. (2010). The ontogeny of triadic cooperative interactions with humans in an infant gorilla. Interaction Studies, 11(3), 353–379.
Hare, B., Call, J., Agnetta, B. & Tomasello, M. (2000). Chimpanzees know what conspecifics do and do not see. Animal Behaviour, 59, 771–785.
Herrmann, E., Keupp, S., Hare, B., Vaish, A. & Tomasello, M. (2013). Direct and indirect reputation formation in nonhuman great apes (Pan paniscus, Pan troglodytes, Gorilla gorilla, Pongo pygmaeus) and human children (Homo sapiens).
Journal of Comparative Psychology, 127(1), 63–75.
Heyes, C. M. & Frith, C. D. (2014). The cultural evolution of mind reading. Science, 344(6190), 1243091.
Hirata, S. (2003). Cooperation in chimpanzees. Hattatsu, 95, 103–111.
Hirata, S. & Fuwa, K. (2007). Chimpanzees (Pan troglodytes) learn to act with other individuals in a cooperative task. Primates, 48, 13–21.
Hockings, K. J., Anderson, J. R. & Matsuzawa, T. (2006). Road crossing in chimpanzees: A risky business. Current Biology, 16(17), R668–R670.
Hopper, L. M., Lambeth, S. P., Schapiro, S. J. & Whiten, A. (2008). Observational learning in chimpanzees and children studied through ‘ghost’ conditions. Proceedings of the Royal Society B, 275, 835–840.
Hopper, L. M., Spiteri, A., Lambeth, S. P., Schapiro, S. J., Horner, V. & Whiten, A. (2007). Experimental studies of traditions and underlying transmission processes in chimpanzees. Animal Behaviour, 73, 1021–1032.
Horner, V. & Whiten, A. (2005). Causal knowledge and imitation/emulation switching in chimpanzees (Pan troglodytes) and children (Homo sapiens). Animal Cognition, 8, 164–181.
Horner, V., Whiten, A., Flynn, E. & de Waal, F. (2006). Faithful copying of foraging techniques along cultural transmission chains by chimpanzees and children. Proceedings of the National Academy of Sciences, 103, 13878–13883.
Huffman, M. A. (2001). Self-medicative behavior in the African great apes: An evolutionary perspective into the origins of human traditional medicine. BioScience, 51(8), 651–661.
Jussim, L. (2012). Social Perception and Social Reality: Why Accuracy Dominates Bias and Self-Fulfilling Prophecy. New York: Oxford University Press.
———. (2015). Précis of social perception and social reality: Why accuracy dominates bias and self-fulfilling prophecy. Behavioral and Brain Sciences, First View, 1–66. Published online ahead of print.
Kahlenberg, S. M., Thompson, M. E., Muller, M. M. & Wrangham, R. W. (2008).
Immigration costs for female chimpanzees and male protection as an immigrant counterstrategy to intrasexual aggression. Animal Behaviour, 76(5), 1497–1509.
Karg, K., Schmelz, M., Call, J. & Tomasello, M. (2015). The goggles experiment: Can chimpanzees use self-experience to infer what a competitor can see? Animal Behaviour, 105, 211–221.
Klinnert, M. D. (1984). The regulation of infant behavior by maternal facial expression. Infant Behavior & Development, 7, 447–465.
Kuhlmeier, V. A., Troje, N. F. & Lee, V. (2010). Young infants detect the direction of biological motion in point-light displays. Infancy, 15(1), 83–93.
Luncz, L. V. & Boesch, C. (2014). Tradition over trend: Neighboring chimpanzee communities maintain differences in cultural behavior despite frequent immigration of adult females. American Journal of Primatology, 76(7), 649–657.
Luncz, L. V., Mundry, R. & Boesch, C. (2012). Evidence for cultural differences between neighboring chimpanzee communities. Current Biology, 22(10), 922–926.
Lyons, D. E., Young, A. G. & Keil, F. C. (2007). The hidden structure of overimitation. Proceedings of the National Academy of Sciences, 104(50), 19751–19756.
Maibom, H. L. (2003). The mindreader and the scientist. Mind & Language, 18(3), 296–315.
———. (2007). Social systems. Philosophical Psychology, 20(5), 557–578.
———. (2009). In defence of (model) theory theory. Journal of Consciousness Studies, 16(6–8), 360–378.
Marcus, R. B. (1990). Some revisionary proposals about belief and believing. Philosophy and Phenomenological Research, 50, 133–153.
Matsuzawa, T., Biro, D., Humle, T., Inoue-Nakamura, N., Tonooka, R. & Yamakoshi, G. (2001). Emergence of culture in wild chimpanzees: Education by master-apprenticeship. In T. Matsuzawa (Ed.), Primate Origins of Human Cognition and Behavior (pp. 557–574). Tokyo: Springer.
McGeer, V. (2007). The regulative dimension of folk psychology. In D. D. Hutto & M. Ratcliffe (Eds.), Folk Psychology Re-Assessed (pp. 137–156). Dordrecht, The Netherlands: Springer.
———. (2015). Mind-making practices: The social infrastructure of self-knowing agency and responsibility. Philosophical Explorations, 18(2), 259–281.
Melis, A. P., Hare, B. & Tomasello, M. (2006). Chimpanzees recruit the best collaborators. Science, 311, 1297–1300.
Meltzoff, A. N. (1988). Infant imitation after a 1-week delay: Long-term memory for novel acts and multiple stimuli. Developmental Psychology, 24, 470–476.
Millward, D. (2014, April 11).
Chimps use ingenuity to make great escape out of zoo. Retrieved from http://www.telegraph.co.uk/news/worldnews/northamerica/usa/10760267/Chimps-use-ingenuity-to-make-great-escape-out-of-zoo.html
Mitani, J. & Watts, D. (2001). Boundary patrols and intergroup encounters in wild chimpanzees. Behaviour, 138(3), 299–327.
Moore, R. (2013). Social learning and teaching in chimpanzees. Biology & Philosophy, 28(6), 879–901.
Morton, A. (2003). The Importance of Being Understood: Folk Psychology as Ethics. London, UK: Routledge.
Myowa-Yamakoshi, M., Tomonaga, M., Tanaka, M. & Matsuzawa, T. (2004). Imitation in neonatal chimpanzees (Pan troglodytes). Developmental Science, 7, 437–442.
Newen, A. (2015). Understanding others – The person model theory. In T. Metzinger and J. M. Windt (Eds.), Open MIND: 26(T). Frankfurt am Main: MIND Group. doi: 10.15502/9783958570320
Nielsen, M., Collier-Baker, E., Davis, J. M. & Suddendorf, T. (2005). Imitation recognition in a captive chimpanzee (Pan troglodytes). Animal Cognition, 8, 31–36.
Nishida, T. (1989). Social interactions between resident and immigrant female chimpanzees. In P. G. Heltne and L. A. Marquardt (Eds.), Understanding Chimpanzees (pp. 68–89). Cambridge, MA: Harvard University Press.
Okamoto, S., Tomonaga, M., Ishii, K., Kawai, N., Tanaka, M. & Matsuzawa, T. (2002). An infant chimpanzee (Pan troglodytes) follows human gaze. Animal Cognition, 5(2), 107–114.
Onishi, K. H. & Baillargeon, R. (2005). Do 15-month-old infants understand false beliefs? Science, 308, 255–258.
Parr, L. A., Waller, B. M., Vick, S. J. & Bard, K. A. (2007). Classifying chimpanzee facial expressions using muscle action. Emotion, 7(1), 172–181.
Paulus, M., Hunnius, S., Vissers, M. & Bekkering, H. (2011). Imitation in infancy: Rational or motor resonance? Child Development, 82, 1047–1057.
Pendry, L. F. & Macrae, C. N. (1996). What the disinterested perceiver overlooks: Goal-directed social categorization.
Personality and Social Psychology Bulletin, 22, 249–256.
Perner, J. (in press). Theory of mind – An unintelligent design: From behaviour to teleology and perspective. In A. M. Leslie and T. C. German (Eds.), Handbook of Theory of Mind. Mahwah, NJ: Erlbaum.
Perner, J., Lang, B. & Kloo, D. (2002). Theory of mind and self-control: More than a common problem of inhibition. Child Development, 73, 752–767.
Priewasser, B. (2009). Das Verständnis für die Subjektivität von “Glauben” und “Wollen” im kompetitiven Spiel [The understanding of the subjectivity of “belief” and “desire” in competitive play]. Unpublished diploma thesis, Department of Psychology, University of Salzburg, Salzburg.
Rakoczy, H. (2012). Do infants have a theory of mind? British Journal of Developmental Psychology, 30(1), 59–74.
Repacholi, B. M. & Gopnik, A. (1997). Early reasoning about desires: Evidence from 14- and 18-month-olds. Developmental Psychology, 33, 12–21.
Rudolf von Rohr, C., Burkart, J. M. & van Schaik, C. P. (2011). Evolutionary precursors of social norms in chimpanzees: A new approach. Biology and Philosophy, 26, 1–30.
Rudolf von Rohr, C., van Schaik, C. P., Kissling, A. & Burkart, J. M. (2015). Chimpanzees’ bystander reactions to infanticide. Human Nature, 26(2), 143–160.
Russell, C. L., Bard, K. A. & Adamson, L. B. (1997). Social referencing by young chimpanzees (Pan troglodytes). Journal of Comparative Psychology, 111, 185–191.
Samson, D., Apperly, I. A., Braithwaite, J. J., Andrews, B. J. & Bodley Scott, S. E. (2010). Seeing it their way: Evidence for rapid and involuntary computation of what other people see. Journal of Experimental Psychology: Human Perception and Performance, 36(5), 1255–1266.
Schaafsma, S. M., Pfaff, D. W., Spunt, R. P. & Adolphs, R. (2015). Deconstructing and reconstructing theory of mind. Trends in Cognitive Sciences, 19(2), 65–72.
Schwitzgebel, E. (2002). A phenomenal, dispositional account of belief. Noûs, 36, 249–275.
Stangor, C., Lynch, L., Duan, C. & Glass, B. (1992). Categorization of individuals on the basis of multiple social features. Journal of Personality and Social Psychology, 62, 207–218.
Sterelny, K. (2012). The Evolved Apprentice: How Evolution Made Humans Unique. Cambridge, MA: MIT Press.
Subiaul, F., Vonk, J., Okamoto-Barth, S. & Barth, J. (2008).
Chimpanzees learn the reputation of strangers by observation. Animal Cognition, 11, 611–623.
Suchak, M., Eppley, T. M., Campbell, M. W. & de Waal, F. B. M. (2014). Ape duos and trios: Spontaneous cooperation with free partner choice in chimpanzees. PeerJ, 2, e417.
Suppe, F. R. (1972). What’s wrong with the received view on the structure of scientific theories. Philosophy of Science, 39, 1–19.
Suppes, P. (1960). A comparison of the meaning and uses of models in mathematics and empirical sciences. Synthese, 12, 287–301.
Tomasello, M. (2010). Origins of Human Communication. Cambridge, MA: MIT Press.
———. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press.
———. (2016). A Natural History of Human Morality. Cambridge, MA: Harvard University Press.
Trevarthen, C. (1979). Communication and co-operation in early infancy: A description of primary intersubjectivity. In M. Bullowa (Ed.), Before Speech (pp. 321–347). Cambridge: Cambridge University Press.
Uller, C. (2004). Disposition to recognize goals in infant chimpanzees. Animal Cognition, 7, 154–161.
van Leeuwen, E. J. C., Cronin, K. A., Haun, D. B. M., Mundry, R. & Bodamer, M. D. (2012). Neighbouring chimpanzee communities show different preferences in social grooming behaviour. Proceedings of the Royal Society of London B: Biological Sciences, 279(1746), 4362–4367.
van Leeuwen, E. J. C., Cronin, K. A. & Haun, D. B. M. (2014). A group-specific arbitrary tradition in chimpanzees (Pan troglodytes). Animal Cognition, 17(6), 1421–1425.
Van Ijzendoorn, M. H., Bard, K. A., Bakermans-Kranenburg, M. J. & Ivan, K. (2009). Enhancement of attachment and cognitive development of young nursery-reared chimpanzees in responsive versus standard care. Developmental Psychobiology, 51(2), 173–185.
de Waal, F. (1982). Chimpanzee Politics: Power and Sex Among Apes. London: Jonathan Cape.
———. (2009). The Age of Empathy: Nature’s Lessons for a Kinder Society. Toronto: McClelland & Stewart.
Walden, T. A.
(1991). Infant social referencing. In J. Garber and K. A. Dodge (Eds.), The Development of Emotion Regulation and Dysregulation (pp. 69–88). New York: Cambridge University Press.
Warneken, F. & Tomasello, M. (2006). Altruistic helping in infants and young chimpanzees. Science, 311, 1301–1303.
Watts, D. & Mitani, J. (2001). Boundary patrols and intergroup encounters in wild chimpanzees. Behaviour, 138(3), 299–327.
Wellman, H. M., Cross, D. & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72, 655–684.
Whiten, A., Goodall, J., McGrew, W. C., Nishida, T., Reynolds, V., Sugiyama, Y., Tutin, C. E. G., . . . Boesch, C. (1999). Cultures in chimpanzees. Nature, 399, 682–685.
Wilson, T. D. & LaFleur, S. J. (1995). Knowing what you’ll do: Effects of analyzing reasons on self-prediction. Journal of Personality and Social Psychology, 68, 21–35.
Wimmer, H. J. & Mayringer, H. (1998). False belief understanding in young children: Explanations do not develop before predictions. International Journal of Behavioral Development, 22, 403–422.
Wittig, R. M., Crockford, C., Langergraber, K. E. & Zuberbühler, K. (2014). Triadic social interactions operate across time: A field experiment with wild chimpanzees. Proceedings of the Royal Society B, 281(1779), 20133155.
Yott, J. & Poulin-Dubois, D. (2012). Breaking the rules: Do infants have a true understanding of false belief? The British Journal of Developmental Psychology, 30(1), 156–171.
Zawidzki, T. W. (2013). Mindshaping: A New Framework for Understanding Human Social Cognition. Cambridge, MA: A Bradford Book.
8
THE DEVELOPMENT OF INDIVIDUAL AND SHARED INTENTIONALITY1
Hannes Rakoczy
Intentionality, according to many philosophical accounts since Brentano, is the mark of the mental. The present chapter will approach varieties of intentionality from the point of view of developmental and comparative psychology. How do different forms of intentionality develop in human ontogeny? In particular, how do most basic forms of it emerge in early childhood? How does this development compare to that of other species, notably non-human primates? How far do commonalities go, and where might uniquely human capacities begin? And which of the latter might be foundational for uniquely human forms of social and cultural life? In pursuing such questions, the chapter will focus on shared intentionality and explore the idea that shared intentionality lies at the heart of uniquely human cognitive capacities and is an essential foundation of uniquely human social and cultural life.
The developmental and comparative psychology of different forms of individual intentionality

First-order intentionality

Intentionality, in the broad philosophical sense of “aboutness”, is the capacity of agents to entertain contentful attitudes (beliefs, desires, intentions etc.) towards the world and to be guided by these in reasoning and rational action. The paradigmatic form of intentionality is individual intentionality: the capacity of an individual to believe, think, judge, hope, reason, intend etc. From the point of view of developmental and comparative psychology, while many forms of such individual intentionality may be cognitively very complex and derived, dependent upon the acquisition of complex linguistic and other skills (think, for example, of mathematical cognition), it seems clear that basic forms of individual intentionality develop early in ontogeny and are widely shared among humans and other animals. To illustrate, let us briefly look at two fundamental milestones of intentionality: the capacity for objective thought, and the capacity to think about and act towards ends. All thinking requires a minimal notion of objectivity: the objects thought about exist independently of the perceiver, enduringly out there in the world. Regarding human ontogeny, Piaget has described infants’ development from initial undifferentiated sensation without
any notion of persisting objects (“out of sight, out of mind”) to what he called “object permanence” – the appreciation that objects continue existing objectively whether perceived or not. In their actions infants begin to display object permanence from (at the latest) the end of their first year; they begin to search for occluded and hidden objects they previously perceived. Furthermore, infants from around 1 year not only track objects as chunks of matter continuously existing in space and time; they also individuate objects as objects of certain kinds, e.g., this chair, that table, that rabbit. Recent findings suggest that by 1 year of age infants begin to apply our commonsense metaphysical framework of objects as enduring substances, individuated under sortal (kind) concepts – and thus share the rudiments of our adult conceptual architecture of objective thought (Xu, 2007). Many other animals are on a par with infants; many primate species, and dogs, for example, reach the highest levels of Piagetian object permanence (indicated in active and systematic search behavior), levels typically reached by infants in the second year (Tomasello & Call, 1997). Recent research suggests that some monkeys and great apes also individuate objects qua objects of certain kinds in much the same way as human 1-year-olds do (Mendes, Rakoczy & Call, 2008, 2011; Phillips & Santos, 2007). Concerning the capacity to think about and act towards ends, clear instances of intentional instrumental action, i.e., actions done purposefully and in a planned way in order to achieve some end in mind, appear in human ontogeny at the latest towards the end of the first year: infants organize their behavior in means-ends structures and indicate an awareness of the relations between means and ends. In a classic example, infants remove barriers in order to reach a desired object, or pull a cloth on which a desired object is placed towards them in order to be able to grasp it.
And they persist until they achieve their end, varying their means if necessary (Piaget, 1952; Willats, 1985, 1999). These phenomena are also widespread in the nonhuman animal kingdom. Many species, notably primates, show instrumental problem-solving of remarkable complexity – Köhler’s apes perhaps being the most famous examples.

Second-order intentionality

Much of our intentionality is not restricted to first-order intentional attitudes vis-à-vis the world but ascends to second-order intentional attitudes about others’ and our own intentionality. We do not only perceive cats on mats, but perceive others perceiving cats on mats etc. Second- and higher-order intentionality has been hypothesized to play foundational roles for many crucial aspects of human social and cultural life such as communication (Grice, 1975), cooperation (Bratman, 1992), conventionality (Lewis, 1969) or free will (Frankfurt, 1971). As a consequence, second-order intentionality has become the focus of much empirical work in developmental and comparative cognitive science in the last four decades. Human children, much of this work has shown, develop sophisticated and explicit forms of higher-order intentionality from around age 4, when they begin to use concepts of “belief” and other propositional attitudes (Wellman et al., 2001). And more basic forms of second-order intentionality, such as understanding perception and intentional action, and more implicit forms of understanding propositional attitudes, develop very early in infancy (Baillargeon et al., 2010; Tomasello et al., 2005). From a comparative point of view, it was long thought that second-order intentionality as such was an important, perhaps even the single most important, fundamental cognitive divide between humans and other animals; and that this divide explained why only humans came to develop linguistic communication, conventional culture and sophisticated cooperation (e.g. Tomasello, 1999).
However, recent evidence has led to qualifications of this proposal. It has been found that chimpanzees and other great apes manifest some basic forms of second-order
intentionality such as those found in human 1-year-olds. First, they reveal a simple understanding of intentional action; for example, they systematically distinguish unfulfilled acts where the actor is unwilling from those where the actor is unable (Call et al., 2004). Second, they are also capable of understanding perception, and thus of perspective taking, in that they take into account what conspecifics have and have not seen (Hare et al., 2000).
The development of shared intentionality

In recent years, these new findings on continuities in individual first- and second-order intentionality shifted the focus of comparative and developmental cognitive science away from purely individual forms of intentionality. Perhaps basic cognitive differences between humans and other primates were not so much to be sought in any form of individual intentionality, of whatever order, but in shared or collective intentionality (Tomasello & Rakoczy, 2003; Tomasello et al., 2005)? Now, what is shared intentionality? In the case of individual intentionality, empirical cognitive science starts from our pre-theoretical commonsense notions of intentional attitudes, even if more technical approaches in philosophy disagree massively about the right way of analyzing intentionality. Similarly, when investigating the development of shared intentionality, what we start from are our pre-theoretical concepts of shared intentionality, even if philosophical accounts disagree massively over how best to analyze it (Bratman, 1992; Gilbert, 1990; Searle, 1990; Tuomela & Miller, 1988). Intuitively, in shared intentionality two or more agents form a joint “we” attitude in a way that is not straightforwardly reducible to a mere sum of individual intentional attitudes. When you and I meet and agree to take a walk together, to use an example from Margaret Gilbert (1990), we form and then pursue the joint we-intention “We walk together”, which is not reducible to the sum of my individual intention “I walk” plus your analogous one. When I pursue my individual intention to walk and you pursue yours, we might end up walking beside each other, even responding in coordinated ways to each other so as not to bump into each other, but not together.
Philosophical accounts differ with respect to the question of whether shared intentionality is reducible to more complex aggregates of coordinated and interlocking individual attitudes (Bratman, 1992) or involves some irreducibly collective element such as specific forms of “we”-contents (Tuomela, 1995), “we”-attitudes (Searle, 1990) or “we”-agents (Gilbert, 1989) (see Pacherie, 2007 for a very helpful overview). On the one hand, empirical cognitive science approaches to shared intentionality have drawn much inspiration from these different philosophical accounts. On the other hand, starting off from the pre-theoretical folk notions of collective intentionality, cognitive science remains theoretically neutral vis-à-vis the different philosophical accounts. But while the cognitive science of the development of collective intentionality thus needs neither to wait for an agreed-upon philosophical analysis to come forth (if this should ever happen) nor to commit to any one of the current accounts on offer, empirical results of developmental and other cognitive science research might well have implications for the philosophical debates. For instance, it might turn out that developmental or comparative research documents certain forms of shared intentionality in children or animals that some accounts are more suitable for describing than others. So, where does shared intentionality, ontogenetically, begin? According to a widespread picture of the mind (e.g. Searle, 1983), the logically and ontogenetically most basic forms of the mental are the most world-directed ones at its fringes, so to speak, namely perception (on the cognitive side) and intentional action (on the conative side). Following this picture, primary forms of shared intentionality should be found in the forms of shared perception and shared
intentional action. The early development of shared perception (joint attention) and shared action, respectively, will therefore be the focus of the next sections.

Joint attention

Children begin to operate with a basic grasp of other agents’ intentionality, often termed “perception-goal psychology” (in contrast to the later developing fully fledged belief-desire folk psychology), from around 9 months: they understand what others perceive of their surroundings and what intentions they pursue in their actions (Tomasello et al., 2005; Wellman, 2002). And it is from around this time that the earliest forms of joint attention emerge as well (Bakeman & Adamson, 1984). Intuitively, joint attention involves two (or more) subjects looking at some object or situation together. For example, at a Californian beach A and B might decide to go and look at the sunset together (“Let’s see how it sets over the sea”). What makes such an episode one of truly joint attention? It is not sufficient that each of them look at the same target, nor that, asymmetrically, one sees the other looking at a given target and follows her gaze there. Nor is it sufficient, more symmetrically, that both look at the same target, each knowing that the other looks at the target as well (otherwise my neighbor and I would be jointly watching our favorite TV show whenever we hear through the wall that the other is watching it). Rather, in some intuitive sense, notoriously difficult to spell out in more precise conceptual ways, both agents have to attend to the same target in joint and coordinated ways. When in development does joint and coordinated attention-sharing emerge? Here, as in many other areas of developmental and comparative research, we are faced with a fundamental methodological problem: when it comes to adults and older children, linguistic data (such as “Let’s see how it sets”) usually disambiguate whether a given episode reflects merely parallel or truly joint attention.
In very young, pre-verbal children and in non-verbal animals, however, we have to rely on purely pre-verbal indicators and manifestations of joint attention. Empirically, the earliest forms of social coordination of attention that have been considered to manifest joint attention emerge from around 9–12 months of age (Carpenter, Nagell & Tomasello, 1998). Children begin to passively follow the gaze of others and actively direct it to objects and situations. This is not asymmetrical following or directing of individual attention, however, since infants alternate their gaze between partner and object, check the partner’s attention and actively coordinate and align the partner’s attention and their own by communicative (gestural) means. Furthermore, some studies have directly analyzed “sharing” and “knowing” looks by the infant towards the partner that intuitively appear to be pre-verbal analogues of “Let’s look . . .” or “We’re looking . . .” (Hobson & Hobson, 2007). Additional evidence suggests that the social gaze coordination emerging at this time manifests truly joint attention rather than mere attention following or manipulation. In their protodeclarative pointing (pointing out situations or states of affairs without any further instrumental ends in mind but simply for the sake of “telling” the social partner), infants expect certain – joint attentional – responses (Liszkowski, Carpenter & Tomasello, 2007a, 2007b): when an infant points out a situation to a partner (e.g. that there is a ball over there), she will only be satisfied (and thus stop pointing) when the adult not only looks at the specific situation, but alternates gaze in coordinated ways between the situation and the infant (as if saying, “Yes, I saw it, it’s the ball we’re talking about”).
And infants keep track of what was in the focus of joint attention with a given partner (a proxy of what was mutual knowledge among them) over time: they understand one and the same ambiguous communicative act (such as “Can you give it to me?” vis-à-vis several objects) systematically differently as a function of the previous joint experience they had with the interlocutor (Moll et al., 2008): when one interlocutor and
the child previously jointly engaged with object A, the child gives this object to that interlocutor, but gives another interlocutor object B, the object to which the two of them had previously jointly attended. Later in development, children have been found to use joint attention in systematic and sophisticated ways for coordinated and rational action planning. In a recent study, the child and a partner (in a Stag Hunt coordination game) each faced the choice of pressing button A to get a moderate reward, or pressing button B to get a higher reward, where the higher reward could only be achieved if both pressed B. In this situation, 4-year-old children actively alternated gaze with the partner and chose B only when the partner emitted alternating, coordinated and “knowing” looks between the child and the apparatus (Wyman, Rakoczy & Tomasello, 2012). To summarize, children from around age 1 begin to engage in sharing attention with others in ways that plausibly reflect true joint attention, given the systematic interpersonal coordination at a time and over time – and thus a primordial form of perceptual we-intentionality. From the perspective of comparative psychology, children’s relations to others’ perceptual intentionality reveal very interesting commonalities and differences with the cognitive capacities of non-human primates. Commonalities are found in second-order individual intentionality: great apes and some monkeys reliably engage in gaze-following and manipulate others’ gazes for instrumental purposes in proto-imperative pointing. And they take into account what others see or have seen for strategic individual action planning (e.g. foraging for food that competitors cannot see; Hare et al., 2000). There are crucial differences, however, in that non-human primates seem not to enter into any form of truly joint attention, given the absence of systematic gaze alternation and coordination, “knowing” looks, proto-declarative pointing and the like (Carpenter & Call, 2013; Tomasello et al., 2005).
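The incentive structure of the Stag Hunt button task can be made explicit in a minimal sketch; the payoff numbers below are illustrative assumptions chosen only to reproduce the game's logic, not the rewards used in the Wyman, Rakoczy & Tomasello (2012) study.

```python
# Illustrative sketch of the Stag Hunt payoff structure: button A pays a safe
# moderate reward regardless of the partner; button B pays a high reward only
# if the partner also presses B. The specific numbers (1 and 3) are assumptions.

def payoff(my_choice, partner_choice, safe=1, high=3):
    """Return my reward given my button choice and my partner's."""
    if my_choice == "A":          # safe option pays no matter what
        return safe
    # risky option pays only on successful coordination
    return high if partner_choice == "B" else 0

# The coordination dilemma: B is better only if the partner also chooses B,
# which is why children wait for coordinated "knowing" looks before choosing it.
assert payoff("B", "B") > payoff("A", "B")   # joint B beats playing safe
assert payoff("A", "A") > payoff("B", "A")   # but B fails if the partner plays A
```

Nothing depends on the particular numbers: any payoffs with high > safe > 0 yield two stable outcomes (both choose A, both choose B), and joint attention serves precisely to let the pair settle on the riskier, higher-paying one.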
Joint action

The paradigmatic case of collective intentionality is acting together. Cooperative activities are what most philosophical accounts of collective intentionality focus on. And it is cooperative activities that present the clearest case for the development of collective intentionality. Children begin to reliably engage in intentional cooperative activities with others in the course of the second year, both in joint instrumental action aimed at some further end, and in joint playful actions that serve as ends in themselves (Tomasello & Hamann, 2012). Initially, from around 14–18 months, children coordinate and communicate successfully with others in simple collaborative actions involving some basic division of labor (for example, retrieving a reward from an apparatus where one needs to open a door so that the other can reach and retrieve the object; Brownell & Carriger, 1990; Warneken, Chen & Tomasello, 2006). Subsequently, in the second and third year, the joint-ness of the actions becomes much clearer, and the interpretation of children’s social coordination as true shared intentionality much less ambiguous. Cooperation now manifests a suite of features all pointing towards true we-ness: children not only coordinate and communicate in acting with one another; they also reveal some grasp of the basic structure of complementary roles underlying the division of labor in their so-called “role reversal imitation”: when they learn a novel collaborative activity comprising the complementary roles A and B by performing A (while the partner performs B), they do not just acquire egocentric information about A. Rather, after having learned to perform A, they then spontaneously switch roles and perform B as well (Carpenter, Tomasello & Striano, 2005).
Concerning roles, children do not just coordinate in taking up complementary roles, but respond in sophisticated ways when a partner fails to fulfill her role: they then try to re-assign the role to her communicatively (by pointing out to her the object to be acted upon or the location where to act), help her to fulfill it, and generally try to re-engage her for the
Hannes Rakoczy
cooperation (Warneken et al., 2006). Interestingly, they do so in flexible and context-sensitive ways, specifically when the partner is still generally willing to participate in the cooperation yet unable to fulfill the role, but not when the partner is unwilling to cooperate (Warneken, Gräfenhain & Tomasello, 2012). From around age 3, children show explicit signs of feeling committed to the pursuit of a cooperative activity. A recent study involved children in a mildly interesting cooperative activity with a partner and then tempted them with the option of doing something much more exciting. Children often hesitated and then excused themselves (“Sorry, I gotta go”) before leaving the joint action (Gräfenhain et al., 2009). Children this age also reveal a sense of commitment to pursuing joint projects in other ways: Hamann and colleagues (Hamann, Warneken & Tomasello, 2012) had pairs of peers cooperate by operating an apparatus with complementary roles such that successful fulfillment of the roles resulted in rewards for each player. Crucially, however, player A’s reward was issued earlier than player B’s. From her individualistic perspective, A could simply have stopped at that point. And this is exactly what happened in a control condition in which the two players acted separately in parallel (each fulfilled a role, yet in independent and uncoordinated ways). In the cooperation condition, however, player A did not stop after receiving her reward but continued her part until player B received her reward as well. Similarly, when the apparatus issued a joint reward for the two players together, children took great pains to distribute it equally (but again did not do so in a control condition in which two agents acted individually, independently and in parallel; Hamann et al., 2011). 
Children’s grasp of cooperative activities, of their underlying division of labor and role structure, and of their normative aspects becomes more and more sophisticated in subsequent development. For instance, 4-year-olds have completely agent-neutral conceptions of complementary action such that a given role (like a variable) can be filled by any agent at any time; and they flexibly use such a conception for planning future joint activities and their own parts therein (Fletcher, Warneken & Tomasello, 2012; Rakoczy et al., 2014). In general, however, the development of shared intentional activities from children’s earliest joint games to fully fledged adult cooperation is currently not well understood and stands in need of systematic further investigation. Recently, novel approaches in the cognitive and neurosciences have begun to explore the cognitive structures involved in cooperative activities at different levels of analysis. Theoretical work has introduced distinctions between a hierarchy of representations of shared intentions at different levels, ranging from personal-level, conceptualized future-directed intentions to act together to sub-personal motor representations of coordinated social behavior, such as how to move one’s vocal muscles in relation to a duet partner’s singing (Pacherie, 2008, 2011). Experimental research with adults has shown that such sub-personal motor representations of shared activities form and operate quickly and spontaneously, mostly below the threshold of subjects’ awareness (Sebanz, Bekkering & Knoblich, 2006). From an ontogenetic perspective, little is currently known about the development of the cognitive underpinnings of shared action. But one recent pioneering study has suggested that similar kinds of fine-grained sub-personal motor representations of shared actions might be in operation even in preschool-aged children (Milward, Kita & Apperly, 2014). 
From a comparative perspective, much recent research suggests that great apes (and perhaps other non-human primates) share basic forms of individual second-order intentionality with us: they have some basic understanding of others’ individual intentionality and systematically use this understanding of what others perceive and intend for strategic purposes in competitive interactions (Call & Tomasello, 2008). Yet whether they go beyond such individual intentionality of the second order and engage in truly shared intentionality in the form of joint action is highly controversial. Various experimental findings suggest that apes are quite skillful in
Individual and shared intentionality
social coordination with others, perhaps even involving something like division of labor (Melis, Hare & Tomasello, 2006). But whether such coordination amounts to true cooperation remains questionable, in light of the fact that apes do not show the characteristic signatures of acting together present in children, such as re-engagement of partners, re-assignment of roles, sharing of rewards, helping others to fulfill their role, or excusing oneself (Tomasello & Hamann, 2012). Future research will need to shed more light on the question of whether, and to what degree, fundamental forms of joint action are a distinctively human or a more widespread capacity.
The development of institutional reality and normativity

There is a particular and peculiar sub-form of collective intentionality that, according to many conceptual analyses, underlies our institutional and societal life. Unlike basic forms of cooperative action such as, say, walking together, this form of collective intentionality is inherently conventional, rule-governed and fact-creating. According to one influential analysis, the logical structure of this form of collective intentionality is to be captured by the complementary notions of status function assignment and institutional fact (Searle, 1995). Status functions pertain to objects or actions exclusively in virtue of the fact that we collectively treat them as having these functions: nothing in paper money is inherently valuable, nothing in a given person inherently makes her a teacher. Things are money or teachers in virtue of our collective practices. The corresponding institutional facts (that a given object is money or that a given person is a teacher), of the form “This X counts as Y in a given context C”, are socially constructed facts – facts that only hold relative to our social creation, much in contrast to so-called brute facts that hold independently of any collective practice or perspective. Status functions are essentially normative: the status collectively assigned to an object licenses certain forms of acting on the object while rendering other actions inappropriate. That something is a knight in chess, say, entitles one to use it in certain ways but not in others. Being a teacher or a president entitles both holders of the role and interactors to specific forms of action but not to others. When we turn to human ontogeny, when and where do we find the primordial forms of such collective intentionality with status assignment? 
For most of the standard examples of institutional facts, such as those related to political power, linguistic meaning or economic matters, it seems evident – given their complexity and holistic embedding in larger institutional networks – that they are far beyond the cognitive grasp of young children. In a rather different domain, though, children from very early on do engage in activities that seem to share the basic logical structure of status assignment and institutional reality, namely different types of games. From their second year on, children begin to engage in pretend play and in simple non-pretense rule games. In pretend play – say, in pretending that a wooden block is an apple – objects are assigned fictional status (“The block counts as ‘apple’ in the context of our pretense”) in much the same way objects are assigned serious status (this X counts as Y in context C) in institutional practices generally. And children from around ages 2 to 3 grasp the basic logical structure of fictional status assignment in joint pretense and its inferential and normative consequences. They not only engage in solitary and isolated acts of pretending, but also track, understand and respect the stipulations of a joint pretense scenario set up by a play partner (such as “This wooden block is our ‘apple’, and this pen is our ‘knife’ ”) and guide their own actions in the course of the pretense accordingly. In particular, they produce acts that are normatively appropriate and inferentially licensed by the fictional status assignment. For example, they pretend to cut the wooden block with the pen, handle the pen “carefully” because it is “sharp”, etc. (Harris & Kavanaugh, 1993; Rakoczy, Tomasello & Striano, 2004). Crucially, they not only follow the pretense stipulations in their own inferentially appropriate actions, but also
indicate an awareness of the normative structure of such practices more directly and actively by third-party norm-enforcement: when a third person joins the game but makes a “mistake” by not respecting the pretense status of an object (confusing the fictional identities of several objects, for example), they protest and criticize her (Rakoczy, 2008; Wyman, Rakoczy & Tomasello, 2009a). And young children’s awareness and enforcement of the normative structure and implications of fictional status assignment is already sensitive to the context-relativity typical of status assignment. One form of context-relativity pertains to multiple statuses: that an X counts as a Y1 in a given context C1 leaves open the possibility that the very same object can have some other status (Y2) in some other context (C2). A given card may be a trump in one kind of card game but a lousy card in another. Similarly, one kind of object may at the same time have one kind of fictional status in one pretense game and a different one in another game. Children at age 3 do understand such multiple fictional status, flexibly switch between contexts and adapt their actions accordingly (Wyman, Rakoczy & Tomasello, 2009b). Another, related form of context-relativity is the following: given that X counts as Y in C, there are, within context C, normative implications as to how to treat X such that a given action may constitute a mistake; these implications do not hold outside of context C, so that the very same kind of act may there be perfectly fine. 
Again, recent research has found that children aged 2–3 understand this form of context-relative normativity: they protest against a given kind of act when it is performed in a context in which it constitutes a mistake in light of a given status assignment, but do not do so when the same kind of act is performed outside of this context (for example, when the agent had announced, prior to acting, that she would not take part in this specific joint fictional game; Rakoczy, 2008; Wyman et al., 2009a). By the third year of life, then, children have entered into basic forms of this remarkable practice of games of pretending: they collectively treat objects they know to be Xs as Ys, follow and respect the implications of the proto-constitutive rules of the game, and normatively criticize deviations from the rules. In embryonic and isolated form, one can thus see the basic structure of institutional reality in the games of 2-year-olds. Of course, this is a long way from money, marriage and universities, but the seeds are there, and so joint pretending can quite plausibly be considered the central cradle for, and the entry gate into, institutional life. There are good reasons, in fact, to assume that it may be no coincidence that pretense and other games constitute one, perhaps even the, cradle for growing into institutional reality more generally. A fundamental problem in coming to participate in institutional life is its holistic structure: most forms of status (e.g. political) cannot be understood without understanding many other intimately connected forms of status (e.g. economic status, power relations etc.). It is thus a major challenge for the child to break into this circle. Games may be well suited to do the trick. First of all, they are in some intuitive sense “non-serious”, and however this elusive notion is to be spelled out, one crucial aspect of it is that games are quarantined from the rest of institutional life to a considerable degree. 
Second, whereas the contexts of many forms of institutional reality are abstract and far-reaching on both spatial and temporal dimensions (think of currency areas etc.), the contexts of simple joint pretense games are very tangible, short-lived and action-based (“in this very pretense we’re engaged in right here and now . . .”). Third, setting up fictional status, even in very young children, is intimately linked with language in a way typical of institutional reality more generally. One (if not the) paradigmatic form of status assignment is the declarative speech act of the form “This (X) is now a Y”, such as “You are now husband and wife” or “From now on, you are called Peter” (Searle, 2010). In their joint pretense, children routinely set up the scenario by declaring things like “This (block) is now the apple, and this (pen) is the knife”, often with specialized, grammatically marked constructions that signal the non-literal force of the speech act (Kaper, 1980). From an ontogenetic point of view, thus,
pretense declarations such as “This is now the apple” may constitute the foundation for serious status declarations such as “You are now husband and wife”. Such a general picture of pretense as an ontogenetic foundation for institutional reality is in the spirit of a fascinating account by Kendall Walton (Walton, 1990) that ascribes a similar foundational role to pretense as a basis for all kinds of representational art. From the perspective of comparative psychology, we do not have any convincing evidence in any non-human species for any kind of social practice with the structure of status function assignment. With regard to play, rough-and-tumble and other kinds of sensorimotor play are widespread phenomena in non-human primates and other mammals. But there is no solid and convincing evidence (that would go beyond highly ambiguous anecdotes from natural observations) for pretend play proper or other types of rule-governed games (Gómez, 2008). It might be objected, though, that many animals seem to respect social status in some serious domains, for example in the form of dominance hierarchies. Is this not incompatible with the above claims, then? The problem here is that there are at least two radically different notions of dominance and social status. According to an institutional reading, dominance status – in a corporation, for example – indeed is a matter of convention and collective assignment. In contrast, there is a brute reading according to which dominance status is a purely causal notion (ultimately to be cashed out in terms of physical force and the like). Now, while there is much evidence that non-human animals are sensitive to social status in the latter sense, there is basically no evidence to suggest that they respect status in the former sense.
Conclusion

The empirical study in the cognitive sciences of collective intentionality and its development is a relatively recent phenomenon. In some respects, we have surely learned from this investigation much about the potential roots, earliest forms and developmental courses of different forms of collective intentionality; yet in many respects this inquiry has just begun to scratch the surface. Future research will face many fundamental conceptual and empirical challenges. From an empirical point of view, many fundamental questions concerning the ontogeny and phylogeny of collective intentionality remain open: What are the ontogenetic origins and roots of collective intentionality? Once basic forms of collective intentionality in the form of shared perception (joint attention) and shared intentional action are in place in early childhood, how do more complex forms such as collective beliefs develop? Concerning children’s participation in joint status assignment and institutional life: once they take part in such activities and reveal a practical grasp of the structure of status and its normative implications, how do they move on from there to develop more sophisticated and reflective notions of the logical structure of institutional, observer-dependent facts that contrast categorically and sharply with brute facts? More generally, how should development best be described: does it proceed in discrete and qualitatively distinct stages (e.g. Tomasello et al., 2012)? And how should the cognitive underpinnings be characterized: might there be qualitatively different systems and/or processes, for example for minimalist vs. full-blown collective intentionality – much in the same way as often assumed in other areas of cognitive development such as numerical or social cognition (Apperly & Butterfill, 2009; Carey, 2009)? 
From a comparative perspective, more systematic research into the commonalities and differences in the development of individual and collective intentionality of human and non-human primates is required. Is collective intentionality as we see it in human ontogeny from the second year on uniquely human? Or can basic forms of collective intentionality be found in non-human primates as well?
Beyond addressing such empirical questions, future cognitive science research may have interesting broader implications vis-à-vis the – mostly philosophical – projects of conceptual analysis. As mentioned at the outset of this chapter, the empirical cognitive science of collective intentionality and its development usually starts off simply from our pre-theoretic notions of collective intentionality. There is thus no need for the empirical approaches to take sides in the debate between different philosophical proposals for conceptual analysis of “collective intentionality” and related notions. However, the empirical results of developmental cognitive science may well have implications for the plausibility of different such accounts. For example, empirical findings of early competence in collective intentionality present prima facie trouble for Gricean reductionist accounts that analyze shared intentionality in terms of complex forms of higher-order individual intentionality (Bratman, 1992, 2014). This trouble for reductionist accounts, which can be seen in analogous forms in other areas (such as communication), can be captured with the following schematic trilemma (Breheny, 2006; Rakoczy, 2006): First, shared intentionality presupposes higher-order recursive propositional attitudes (the main conceptual premise of reductionist accounts). Second, young children do not yet have such attitudes (as suggested by empirical findings in cognitive development). Third, young children nonetheless manifest shared intentionality (as suggested, again, by empirical findings). This triad is clearly inconsistent. So, which of the three propositions should be given up or suitably modified? The most plausible solution, it seems, lies in a refinement and qualification of the first: the reductionist accounts might be right about full-blown and complex adult shared intentionality, which may in fact presuppose such complex higher-order attitudes. 
Nevertheless, this still leaves room for developmentally (and evolutionarily) primary and less complex forms of shared intentionality that can be present without the complex higher-order attitudes (Butterfill, 2012; Pacherie, 2013). Related questions pertain to the conceptual and developmental relations of second-order individual intentionality and collective intentionality more generally. Gricean approaches, Bratman’s (1992) in particular, hold that second-order individual intentionality is necessary and sufficient for collective intentionality (such that the latter is a complex and coordinated form of the former); in a sense, therefore, the development of collective intentionality, according to this account, just amounts to the development of a certain complex form of individual intentionality. Anti-reductionist accounts such as Searle’s (1995), in contrast, assume that collective intentionality is a primitive phenomenon, which seems to imply that second-order individual intentionality is not only not sufficient, but also not necessary for collective intentionality. The two kinds of intentionality, according to this reading of Searle, might thus develop without intimate relations to each other. In contrast to both of these positions, there might be an interesting third way: second-order individual intentionality and collective intentionality may be intimately related and thus develop in closely related ways. On the one hand, some form of second-order individual intentionality may be necessary for collective intentionality yet not by itself sufficient. On the other hand, joint attention and cooperation, as basic forms of collective intentionality, may present the primary contexts in which individual intentionality of the second order (ascribing perceptual perspectives, goals etc. to interaction partners) is put into practice (Moll & Meltzoff, 2011).
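The trilemma discussed above can be made explicit in a schematic formalization (the predicate names are my own shorthand, not notation from the chapter):

```latex
\begin{align*}
(1)\;& \forall a\,\bigl(\mathit{SharedInt}(a) \rightarrow \mathit{HigherOrder}(a)\bigr)
  && \text{(reductionist premise)}\\
(2)\;& \neg\,\mathit{HigherOrder}(\mathit{child})
  && \text{(empirical finding)}\\
(3)\;& \mathit{SharedInt}(\mathit{child})
  && \text{(empirical finding)}
\end{align*}
```

From (1) and (3) it follows that $\mathit{HigherOrder}(\mathit{child})$, contradicting (2). The resolution sketched in the text amounts to restricting the scope of (1) to full-blown adult shared intentionality, so that (2) and (3) can both remain true of developmentally primary, less complex forms.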
Note 1 This chapter draws on some material from previous papers, in particular from “The development of collective intentionality”, forthcoming in the Routledge Handbook on Collective Intentionality (edited by Kirk Ludwig and Marija Jankovic).
References

Apperly, I. A. & Butterfill, S. A. (2009). Do humans have two systems to track beliefs and belief-like states? Psychological Review, 116(4), 953–970. doi: 10.1037/a0016923.
Baillargeon, R., Scott, R. M. & He, Z. (2010). False-belief understanding in infants. Trends in Cognitive Sciences, 14(3), 110–118. doi: 10.1016/j.tics.2009.12.006.
Bakeman, R. & Adamson, L. B. (1984). Coordinating attention to people and objects in mother-infant and peer-infant interaction. Child Development, 55(4), 1278–1289.
Bratman, M. E. (1992). Shared cooperative activity. The Philosophical Review, 101(2), 327–341.
———. (2014). Shared Agency: A Planning Theory of Acting Together. Oxford: Oxford University Press.
Breheny, R. (2006). Communication and folk psychology. Mind and Language, 21(1), 74–107.
Brownell, C. & Carriger, M. S. (1990). Changes in cooperation and self-other differentiation during the second year. Child Development, 61, 1164–1174.
Butterfill, S. (2012). Joint action and development. The Philosophical Quarterly, 62(246), 23–47. doi: 10.1111/j.1467-9213.2011.00005.x.
Call, J., Hare, B., Carpenter, M. & Tomasello, M. (2004). ‘Unwilling’ versus ‘unable’: Chimpanzees’ understanding of human intentional action. Developmental Science, 7(4), 488–498.
Call, J. & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192.
Carey, S. (2009). The Origin of Concepts. New York: Oxford University Press.
Carpenter, M. & Call, J. (2013). How joint is the joint attention of apes and human infants? In J. Metcalfe and H. S. Terrace (Eds.), Agency and Joint Attention (pp. 49–61). New York: Oxford University Press.
Carpenter, M., Nagell, K. & Tomasello, M. (1998). Social cognition, joint attention, and communicative competence from 9 to 15 months of age. Monographs of the Society for Research in Child Development, 63(4), 176.
Carpenter, M., Tomasello, M. & Striano, T. (2005). 
Role reversal imitation and language in typically developing infants and children with autism. Infancy, 8(3), 253–278.
Fletcher, G. E., Warneken, F. & Tomasello, M. (2012). Differences in cognitive processes underlying the collaborative activities of children and chimpanzees. Cognitive Development, 27(2), 136–153. doi: 10.1016/j.cogdev.2012.02.003.
Frankfurt, H. (1971). Freedom of the will and the concept of a person. Journal of Philosophy, 68, 5–20.
Gilbert, M. (1989). On Social Facts. London: Routledge.
———. (1990). Walking together: A paradigmatic social phenomenon. Midwest Studies in Philosophy, 15, 1–14.
Gómez, J. C. (2008). The evolution of pretence: From intentional availability to intentional non-existence. Mind & Language, 23(5), 586–606. doi: 10.1111/j.1468-0017.2008.00353.x.
Gräfenhain, M., Behne, T., Carpenter, M. & Tomasello, M. (2009). Young children’s understanding of joint commitments. Developmental Psychology, 45(5), 1430–1443.
Grice, H. P. (1975). Logic and conversation. In P. Cole and J. Morgan (Eds.), Syntax and Semantics (pp. 41–58). New York: Academic Press.
Hamann, K., Warneken, F., Greenberg, J. R. & Tomasello, M. (2011). Collaboration encourages equal sharing in children but not in chimpanzees. Nature, 476(7360), 328–331.
Hamann, K., Warneken, F. & Tomasello, M. (2012). Children’s developing commitments to joint goals. Child Development, 83(1), 137–145. doi: 10.1111/j.1467-8624.2011.01695.x.
Hare, B., Call, J., Agnetta, B. & Tomasello, M. (2000). Chimpanzees know what conspecifics do and do not see. Animal Behaviour, 59(4), 771–785.
Harris, P. L. & Kavanaugh, R. D. (1993). Young children’s understanding of pretense. Monographs of the Society for Research in Child Development, 58(1)[231], v–92.
Hobson, J. & Hobson, P. (2007). Identification: The missing link between joint attention and imitation? Development and Psychopathology, 19(2), 411–431. doi: 10.1017/S0954579407070204.
Kaper, W. (1980). The use of past tense in games of pretend. 
Journal of Child Language, 7, 213–215.
Lewis, D. K. (1969). Convention: A Philosophical Study. Cambridge, MA: Harvard University Press.
Liszkowski, U., Carpenter, M. & Tomasello, M. (2007a). Pointing out new news, old news, and absent referents at 12 months of age. Developmental Science, 10(2), F1–F7.
———. (2007b). Reference and attitude in infant pointing. Journal of Child Language, 34(1), 1–20.
Melis, A. P., Hare, B. & Tomasello, M. (2006). Chimpanzees recruit the best collaborators. Science, 311(5765), 1297–1300.
Mendes, N., Rakoczy, H. & Call, J. (2008). Ape metaphysics: Object individuation without language. Cognition, 106(2), 730–749.
———. (2011). Primates do not spontaneously use shape properties for object individuation: A competence or a performance problem? Animal Cognition, 14, 407–414.
Milward, S. J., Kita, S. & Apperly, I. A. (2014). The development of co-representation effects in a joint task: Do children represent a co-actor? Cognition, 132(3), 269–279. doi: 10.1016/j.cognition.2014.04.008.
Moll, H. & Meltzoff, A. N. (2011). Joint attention as the fundamental basis of understanding perspectives. In A. Seemann (Ed.), Joint Attention: New Developments in Psychology, Philosophy of Mind, and Social Neuroscience (pp. 393–413). Cambridge, MA: MIT Press.
Moll, H., Richter, N., Carpenter, M. & Tomasello, M. (2008). Fourteen-month-olds know what “we” have shared in a special way. Infancy, 13(1), 90–101.
Pacherie, E. (2007). Is collective intentionality really primitive? In M. Beaney, C. Penco and M. Vignolo (Eds.), Mental Processes: Representing and Inferring (pp. 153–175). Cambridge: Cambridge Scholars Press.
———. (2008). The phenomenology of action: A conceptual framework. Cognition, 107(1), 179–217. doi: 10.1016/j.cognition.2007.09.003.
———. (2011). Framing joint action. Review of Philosophy and Psychology, 2(2), 173–192. doi: 10.1007/s13164-011-0052-5.
———. (2013). Intentional joint agency: Shared intention lite. Synthese, 190(10), 1817–1839. doi: 10.1007/s11229-013-0263-7.
Phillips, W. & Santos, L. R. (2007). Evidence for kind representations in the absence of language: Experiments with rhesus monkeys (Macaca mulatta). Cognition, 102(3), 455–463.
Piaget, J. (1952). The Origins of Intelligence. New York: Basic Books. 
Rakoczy, H. (2006). Pretend play and the development of collective intentionality. Cognitive Systems Research, 7, 113–127.
———. (2008). Taking fiction seriously: Young children understand the normative structure of joint pretend games. Developmental Psychology, 44(4), 1195–1201.
Rakoczy, H., Gräfenhain, M., Clüver, A., Schulze Dalhoff, A. & Sternkopf, A. (2014). Young children’s agent-neutral representations of action roles. Journal of Experimental Child Psychology, 128, 201–209.
Rakoczy, H., Tomasello, M. & Striano, T. (2004). Young children know that trying is not pretending – a test of the “behaving-as-if” construal of children’s early concept of “pretense”. Developmental Psychology, 40(3), 388–399.
Searle, J. R. (1983). Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge University Press.
———. (1990). Collective intentions and actions. In P. Cohen, J. Morgan and M. Pollack (Eds.), Intentions in Communication (pp. 401–415). Cambridge, MA: MIT Press.
———. (1995). The Construction of Social Reality. New York: Free Press.
———. (2010). Making the Social World: The Structure of Human Civilization. Oxford: Oxford University Press.
Sebanz, N., Bekkering, H. & Knoblich, G. (2006). Joint action: Bodies and minds moving together. Trends in Cognitive Sciences, 10(2), 71–76.
Tomasello, M. (1999). The Cultural Origins of Human Cognition. Cambridge, MA: Harvard University Press.
Tomasello, M. & Call, J. (1997). Primate Cognition. New York: Oxford University Press.
Tomasello, M., Carpenter, M., Call, J., Behne, T. & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675–735.
Tomasello, M. & Hamann, K. (2012). The 37th Sir Frederick Bartlett lecture: Collaboration in young children. The Quarterly Journal of Experimental Psychology, 65(1), 1–12. doi: 10.1080/17470218.2011.608853.
Tomasello, M., Melis, A. P., Tennie, C., Wyman, E. & Herrmann, E. (2012). 
Two key steps in the evolution of human cooperation: The interdependence hypothesis. Current Anthropology, 53(6), 673–692. doi: 10.1086/668207
Tomasello, M. & Rakoczy, H. (2003). What makes human cognition unique? From individual to shared to collective intentionality. Mind and Language, 18(2), 121–147.
Tuomela, R. (1995). The Importance of Us: A Philosophical Study of Basic Social Notions. Stanford: Stanford University Press.
Tuomela, R. & Miller, K. (1988). We-intentions. Philosophical Studies, 53, 367–389.
Walton, K. L. (1990). Mimesis as Make-believe. Cambridge, MA: Harvard University Press.
Warneken, F., Chen, F. & Tomasello, M. (2006). Cooperative activities in young children and chimpanzees. Child Development, 77(3), 640–663. doi: 10.1111/j.1467-8624.2006.00895.x.
Warneken, F., Gräfenhain, M. & Tomasello, M. (2012). Collaborative partner or social tool? New evidence for young children’s understanding of joint intentions in collaborative activities. Developmental Science, 15(1), 54–61. doi: 10.1111/j.1467-7687.2011.01107.x.
Wellman, H. (2002). Understanding the psychological world: Developing a theory of mind. In U. Goswami (Ed.), Blackwell Handbook of Childhood Cognitive Development (pp. 167–187). Malden, MA: Blackwell Publishers.
Wellman, H., Cross, D. & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72(3), 655–684. doi: 10.1111/1467-8624.00304.
Willatts, P. (1985). Adjustment of means-ends coordination and the representation of spatial relations in the production of search errors by infants. British Journal of Developmental Psychology, 3(3), 259–272.
———. (1999). Development of means-end behavior in young infants: Pulling a support to retrieve a distant object. Developmental Psychology, 35(3), 651–667.
Wyman, E., Rakoczy, H. & Tomasello, M. (2009a). Normativity and context in young children’s pretend play. Cognitive Development, 24(2), 146–155.
———. (2009b). Young children understand multiple pretend identities in their object play. British Journal of Developmental Psychology, 27, 385–404.
———. 
(2012). Joint attention enables children’s coordination with others in a ‘stag hunt’ game. European Journal of Developmental Psychology. doi: http://dx.doi.org/10.1080/17405629.2012.726469 Xu, F. (2007). Sortal concepts, object individuation, and language. Trends in Cognitive Sciences. 11(9), 400–406.
9
FALSE-BELIEF UNDERSTANDING IN THE FIRST YEARS OF LIFE
Rose M. Scott, Erin Roby, and Megan A. Smith
As members of a social species, we spend a large part of our everyday lives predicting, interpreting, and responding to the behavior of other individuals. Adults typically do this by considering others’ underlying mental states. Thus, we readily understand that Dorothy wants to return home from Oz, does not know that the Wizard is actually just a man pretending to be a wizard, and falsely believes that he can send her home. Developmental psychologists have long been interested in how this psychological-reasoning ability develops. In particular, considerable research has focused on when children understand that others can be mistaken, or hold false beliefs, about the world. False-belief understanding provides evidence of the ability to distinguish between the mind and reality – to recognize that mental states are internal representations rather than direct reflections of the world. This sophisticated, and perhaps uniquely human (e.g., Kaminski, Call, & Tomasello, 2008; Marticorena, Ruiz, Mukerji, Goddu, & Santos, 2011), ability has been argued to play a vital role in cooperation, communication, and learning (e.g., Baillargeon et al., 2013; Herrmann, Call, Hernández-Lloreda, Hare, & Tomasello, 2007; Sperber & Wilson, 1995).

When and how does false-belief understanding develop? For several decades, this question was investigated using elicited-response false-belief tasks, in which children were asked direct questions about an individual who held a false belief (for a review, see Wellman, Cross, & Watson, 2001). These tasks are described as elicited-response tasks because the experimenter explicitly asks the child to predict or explain the behavior of the mistaken individual. In one such task (Baron-Cohen, Leslie, & Frith, 1985), children hear a story enacted with props: Sally puts her marble in a box and then leaves the room. In her absence, her friend Anne moves the marble to a nearby basket.
Children are then asked where Sally will look for her marble when she returns. Beginning at around age four, children typically answer that Sally will look in the box, where she falsely believes the marble to be. In contrast, younger children indicate that Sally will look in the basket, the marble’s current location, failing to demonstrate an understanding of Sally’s false belief. This shift from below- to above-chance performance has been widely replicated using several different elicited-response false-belief tasks (e.g., Gopnik & Astington, 1988; Perner, Leekam, & Wimmer, 1987; Wellman et al., 2001) with children around the world, although above-chance performance is not attained until age seven in some cultures (e.g., Lecce & Hughes, 2010; Liu, Wellman, Tardif, & Sabbagh, 2008; Mayer & Träuble, 2013; Naito & Koyama, 2006; Vinden, 2002). These findings led many to conclude that false-belief
understanding constituted a major milestone in children’s psychological reasoning that was not attained until at least four years of age (Carlson & Moses, 2001; de Villiers & de Villiers, 2003; Gopnik & Wellman, 1994; Perner, 1995). This conclusion was challenged, however, by Onishi and Baillargeon’s (2005) groundbreaking discovery that young infants demonstrated false-belief understanding when tested via other means. This study launched a new wave of research that has yielded substantial evidence of false-belief understanding in the first three years of life, shed light on the nature and extent of this early understanding, and identified factors that affect young children’s performance in false-belief tasks. In this chapter, we review this past decade of research and present theoretical accounts that have been offered to explain these findings. Finally, we discuss the broader implications of these findings for children’s social and cognitive development.
False-belief understanding in infants and toddlers

The first evidence that infants could attribute false beliefs to agents came from the seminal work of Onishi and Baillargeon (2005), who tested 15-month-old infants in a violation-of-expectation version of the Sally/Anne task. Infants first saw a familiarization trial in which a toy watermelon sat on the floor of a puppet stage between two boxes, one yellow and one green. A female agent entered, played with the watermelon, and then placed it inside the green box. The agent paused with her hand inside the green box until infants looked away and the trial ended. In the second and third familiarization trials, the agent entered and reached into the green box, as if grasping the watermelon, and then paused. Next, infants viewed one of several belief-induction trials. For instance, in the FB-green condition, the agent was absent and the watermelon moved from the green box to the yellow box; in the TB-yellow condition, the agent watched as the watermelon moved to the yellow box. Following the belief-induction trial, infants viewed a single test trial in which the agent reached into either the yellow (yellow-box event) or the green box (green-box event) and paused. Infants in the FB-green condition looked reliably longer if they received the yellow-box event than if they received the green-box event, suggesting that they attributed to the agent a false belief that the watermelon was in the green box, expected her to reach there in order to obtain the watermelon, and looked longer when she instead reached into the yellow box.
Infants in the TB-yellow condition exhibited the opposite pattern: they expected the agent to reach into the yellow box, where she knew the watermelon was located, and looked longer if she reached into the green box instead. Together with the results of several additional conditions, these findings suggested that by 15 months of age, infants expect agents to act in accord with their beliefs, regardless of whether those beliefs are true or false.

Subsequent investigations have replicated and extended Onishi and Baillargeon’s (2005) findings in a number of ways. Several studies have confirmed that infants’ performance in false-belief tasks reflects an understanding of false belief rather than a simpler capacity to reason about ignorance (e.g., Southgate, Senju, & Csibra, 2007; Wellman, 2010) by demonstrating that infants respond differently to ignorant and mistaken agents (e.g., He, Bolz, & Baillargeon, 2011; Knudsen & Liszkowski, 2012a; Scott & Baillargeon, 2009; Scott, Baillargeon, Song, & Leslie, 2010; Scott, Richman, & Baillargeon, 2015). If an agent is merely ignorant about an object’s location – she does not know whether it is in location-A or location-B – then infants hold no specific expectation about where she should search and look equally regardless of whether she reaches to location-A or location-B (e.g., Scott & Baillargeon, 2009; Scott et al., 2010). If, however, the agent holds a false belief that the object is in location-A, infants expect her to search in location-A and look longer if she searches in location-B instead (Onishi & Baillargeon, 2005). Thus, infants reason about ignorance and false belief as distinct mental states.
Infants can attribute a variety of false beliefs to agents, including false beliefs about the presence (e.g., Kampis, Parise, Csibra, & Kovács, 2015; Kovács, Téglás, & Endress, 2010; Southgate & Vernetti, 2014), location (e.g., Buttelmann, Carpenter, & Tomasello, 2009; Song, Onishi, Baillargeon, & Fisher, 2008; Southgate et al., 2007; Surian, Caldi, & Sperber, 2007; Träuble, Marinović, & Pauen, 2010), identity (Buttelmann, Suhrke, & Buttelmann, 2015; Scott & Baillargeon, 2009; Scott et al., 2015; Song & Baillargeon, 2008), contents (Buttelmann, Over, Carpenter, & Tomasello, 2014), and non-obvious properties (Scott et al., 2010) of objects. For instance, Scott and Baillargeon (2009) examined 18-month-olds’ ability to reason about a false belief about an object’s identity. The experiment involved two toy penguins that were identical except that one could come apart (2-piece penguin) and one could not (1-piece penguin). In each familiarization trial, a female agent watched as an experimenter placed the 1-piece penguin and the disassembled 2-piece penguin on platforms or in shallow containers. The agent then put a key in the bottom piece of the 2-piece penguin and stacked the two pieces; the two penguins were then visually indistinguishable. In the test trials, the agent was initially absent. The experimenter assembled the 2-piece penguin, covered it with a transparent cover, and then covered the 1-piece penguin with the opaque cover. The agent then returned with her key and reached for one of the covers. Infants looked reliably longer when the agent reached for the transparent cover, suggesting they expected her to have a false belief that the penguin under the transparent cover was the 1-piece penguin, and hence to falsely believe that the 2-piece penguin was under the opaque cover. This pattern reversed if the agent was present throughout the test trials.
In addition to understanding a range of false beliefs, infants can predict and interpret a variety of belief-based responses produced by a mistaken agent. This includes physical action responses, such as where the agent will reach (e.g., Luo, 2011; Song & Baillargeon, 2008; Senju, Southgate, Snape, Leonard, & Csibra, 2011; Träuble et al., 2010); verbal responses, such as the intended referent of a mistaken agent’s utterance (e.g., “There’s a sefo in this box”; Southgate, Chevallier, & Csibra, 2010); and the emotional responses that a mistaken agent should produce when she discovers her false belief (e.g., Moll, Kane, & McGowan, 2016; Scott, 2015). In this last study (Scott, 2015), 20-month-old infants were tested in a violation-of-expectation task involving two familiarization trials and a single test trial. In the first familiarization trial, a female agent (A1) created two rattles by placing marbles inside two cups and closing them with lids. She then shook each cup in turn, demonstrating that both produced a rattling sound. In the next familiarization trial, A1 was absent; in her absence, a male agent (A2) removed the lid from one of the cups, took the marbles, and replaced the lid. A2 then repeatedly shook each of the cups in turn, demonstrating that the emptied cup no longer rattled but the other cup still did so. Next, infants received either a consistent or an inconsistent test trial. At the start of the consistent test trial, A1 was again present; A2 and the stolen marbles were absent. A1 reached for the noisy cup, shook it (it rattled), and produced a satisfied expression. She then shook the silent cup and expressed surprise. A1 continued to shake the cups in turn, producing the corresponding facial expressions, until infants looked away. In the inconsistent trial, the pairings of cups and facial expressions were reversed: A1 was surprised by the noisy cup and satisfied with the silent cup. 
Infants looked significantly longer if they received the inconsistent rather than the consistent trial, suggesting they attributed to A1 the false belief that both cups rattled, expected her to be surprised when she shook the silent cup and discovered she was mistaken, and therefore looked longer if she was surprised by the noisy cup instead (this effect was eliminated if A1 knew that one of the cups had been manipulated). Infants can demonstrate their false-belief understanding in a variety of paradigms that assess a range of infant responses. As illustrated above, infants as young as 11 months of age succeed
in violation-of-expectation tasks, looking longer when agents act in ways that are inconsistent (as opposed to consistent) with their false beliefs (e.g., Luo, 2011; Onishi & Baillargeon, 2005; Song & Baillargeon, 2008; Surian et al., 2007; Träuble et al., 2010). By 18 months of age, infants also visually anticipate where a mistaken agent will search for an object (e.g., anticipatory-looking tasks; Clements & Perner, 1994; Garnham & Ruffman, 2001; Senju et al., 2011; Southgate et al., 2007; Surian & Geraci, 2012), spontaneously point to inform a mistaken agent that an object has been moved in her absence (e.g., anticipatory-pointing tasks; Knudsen & Liszkowski, 2012a, 2012b), and use an agent’s false belief to guide prompted responses such as retrieving an object for the agent (e.g., elicited-intervention tasks; Buttelmann et al., 2009; Buttelmann et al., 2014; Buttelmann et al., 2015; Rhodes & Brandone, 2014; Southgate et al., 2010). For instance, in the elicited-intervention task devised by Buttelmann et al. (2009), an experimenter showed 18-month-old infants how to lock and unlock two boxes, leaving them unlocked. A male agent then entered and showed the infants a toy. He placed the toy into one of the boxes and left the room. While he was gone, the experimenter moved the toy to the other box and locked both boxes. The agent then returned and attempted to open the (now empty) box where he had previously hidden the toy. When he could not open it, he sat down between the two boxes looking disappointed. If infants did not spontaneously intervene, the experimenter encouraged the infant by saying, “Go on, help him.” Most infants approached the box that contained the agent’s toy, suggesting that they interpreted the agent’s actions on the empty box as an attempt to retrieve his toy, which he falsely believed was in that location, and therefore they opened the box that held the toy in an attempt to help the agent achieve his goal.
This pattern reversed if the agent saw the experimenter move the toy to the other location. In this case, infants approached and opened the empty box, suggesting that they assumed the agent’s goal was to open the box he acted on rather than to retrieve the toy. These results have now been extended to helping scenarios in which the agent holds a false belief about the contents (e.g., Buttelmann et al., 2014) or identity (e.g., Buttelmann et al., 2015) of an object.

Recent studies using neurological measures have provided converging evidence that young infants represent agents’ beliefs (e.g., Kampis et al., 2015; Southgate & Vernetti, 2014). Infants and adults activate regions of the motor cortex when generating predictions about an agent’s actions (Southgate & Begus, 2013; Stadler et al., 2012). Southgate and Vernetti (2014) thus used changes in motor cortex activation, as measured by EEG, to assess 6-month-olds’ ability to predict the actions of a mistaken agent. Infants viewed two types of test trials in which an agent sat behind a closed box. In false-belief-present test trials, the lid of the box opened, then a ball rolled on screen and jumped into the box. The lid closed and a curtain came down in front of the agent. The ball then jumped out of the box and rolled off screen. The curtain then went up to reveal the agent looking down at the closed box. The agent remained stationary for 1500ms before reaching for the lid of the box. In false-belief-absent trials, the ball initially jumped out of the box and rolled off screen. The curtain lowered in front of the agent and the ball returned and jumped into the box. The curtain then went up and the agent remained stationary. Analysis of motor cortex activation during the 1500ms period after the curtain was raised revealed a significant increase in motor activation during the false-belief-present trials but not during the false-belief-absent trials.
This selective increase in motor activation indicates that infants expected the agent to reach for the box when she falsely believed it contained a ball but not when she falsely believed the box was empty.1 Finally, positive evidence of false-belief understanding has been obtained with infants and toddlers in numerous Western countries (e.g., Buttelmann et al., 2009; Kovács et al., 2010; Meristo et al., 2012; Southgate et al., 2007; Surian et al., 2007). Recently, Barrett and colleagues extended these results to include a Salar community in western China, a Shuar/Colono
community in Ecuador, and a Yasawan community in Fiji (Barrett et al., 2013). These three communities differ from one another substantially in location, language, and cultural practices (see Barrett et al., 2013, supplementary online material), while sharing a number of features that distinguish them from the countries in which most early false-belief research has been conducted: they are small, rural, non-industrialized, less wealthy communities with low levels of formal education (Henrich, Heine, & Norenzayan, 2010). Barrett et al. (2013) administered three false-belief tasks that had been previously used with children in the United States (e.g., He, Bolz, & Baillargeon, 2012; Scott et al., 2010; Scott, He, Baillargeon, & Cummins, 2012) and obtained positive results in all tasks at all three sites. No differences were found across sites and in each case children performed quite similarly to children tested in the United States. This evidence suggests that the capacity to attribute false beliefs to others emerges universally early in development.
Theoretical accounts of false-belief understanding

The studies reviewed above suggest that infants can reason about a rich set of belief-based behaviors across a range of belief-inducing situations, that infants can demonstrate their understanding in a variety of ways, and that this understanding emerges in infancy across cultures. Yet the results of elicited-response false-belief tasks suggest that the capacity to represent beliefs does not emerge until at least age four and that the acquisition of this ability exhibits substantial cross-cultural variability. How can we reconcile these two conflicting sets of findings? Broadly speaking, two types of accounts have been proposed to explain this discrepancy.
Conceptual-shift accounts

Many researchers argue that elicited-response tasks measure a different understanding than do the tasks used with infants (Apperly & Butterfill, 2009; De Bruin & Newen, 2012; de Villiers & de Villiers, 2012; Devine & Hughes, 2014; Gopnik & Wellman, 2012; Heyes, 2014; Low, 2010; Perner, 2010; Perner & Roessler, 2012; Rakoczy, Bergfeld, Schwarz, & Fizke, 2015; Ruffman, 2014; San Juan & Astington, 2012). We refer to these accounts as conceptual-shift accounts because they share the common assumption that false-belief understanding emerges after four years of age as the result of a qualitative change in children’s understanding of the mind. These accounts maintain that young children’s failure on elicited-response tasks results from an inability to understand beliefs and, correspondingly, that passing elicited-response tasks demonstrates that they have attained this understanding. Although conceptual-shift accounts disagree on the precise nature of this transition, they generally agree that false-belief understanding is a learned, culturally-constructed skill (e.g., Heyes & Frith, 2014; Ruffman, 2014; Wellman, 2014). Proponents of the conceptual-shift view argue that the tasks used with infants do not measure an understanding of belief and instead attribute infants’ successful performance to a variety of alternative factors. Some assume that these tasks measure entirely non-mentalistic reasoning (e.g., Heyes, 2014; Perner, 2010; Ruffman, 2014), while others assume that infants are capable of mentalistic reasoning of a more minimal, rudimentary sort (e.g., Apperly & Butterfill, 2009; Low, 2010).

Non-mentalistic accounts

Two non-mentalistic accounts have been offered for infants’ performance in false-belief tasks. According to the low-level process account (Heyes, 2014), the tasks used with infants do
not measure a capacity to represent mental states but rather reflect the operation of domain-general processes such as perception, attention, and memory. This account assumes that infants’ responses in false-belief tasks are driven primarily by low-level perceptual novelty: infants look longer at events that, relative to other recent events, have novel spatiotemporal relations amongst actions and objects. Heyes (2014) has argued that responses to perceptual novelty and responses based on mental states have been consistently confounded in the tasks administered to infants, giving rise to apparent false-belief understanding (for critical discussion of this claim, see Scott & Baillargeon, 2014). For instance, the infants in the FB-green condition of Onishi and Baillargeon (2005) might have looked longer when the agent reached to the yellow box because this was perceptually novel (she had not reached there before) rather than because this was inconsistent with her false belief.

The behavioral-rule account argues that the tasks used with infants assess expectations about behavior rather than an understanding of mental states (Perner, 2010; Ruffman, 2014). In everyday life children gather information, in the form of statistical regularities or behavioral rules, about how agents typically behave in particular situations. When children observe an agent in one of these situations in a laboratory task, they retrieve the appropriate behavioral rule and use it to interpret or predict the agent’s actions. For instance, it has been argued that infants expect agents to look for objects where they last saw them (Perner & Ruffman, 2005) and this expectation gives rise to a variety of responses such as anticipatory looks towards the location where an agent last saw an object (e.g., Southgate et al., 2007), and looking longer when an agent searches for an object in a location other than where she last saw it (e.g., Onishi & Baillargeon, 2005).
Detailed critiques of these accounts have been offered elsewhere, and a full consideration of those arguments is beyond the scope of this chapter (e.g., Apperly & Butterfill, 2009; Carruthers, 2013; Christensen & Michael, 2016; Jacob, forthcoming; Scott, 2014; Scott & Baillargeon, 2014). Instead, we merely point out that although these accounts might be able to explain the results of isolated experimental conditions, neither can account for the wealth of evidence for false-belief understanding (and psychological reasoning more generally) in infancy. For instance, the low-level novelty account cannot explain the results of paradigms that assess responses other than looking time, such as the helping task described earlier (e.g., Buttelmann et al., 2009, 2014, 2015; see also Knudsen & Liszkowski, 2012a; Moll et al., 2016; Southgate et al., 2010), and both accounts have difficulty explaining cases where infants display differential responses to identical test events (e.g., Scott et al., 2015; Senju et al., 2011; for discussion see Scott, 2014; Scott & Baillargeon, 2014). Such findings cast doubt on both of these non-mentalistic accounts in their present forms.

Minimalist accounts

Perhaps the most influential minimalist account is the two-systems view advocated by Apperly and colleagues (e.g., Apperly & Butterfill, 2009; Butterfill & Apperly, 2013; Low, Drummond, Walmsley, & Wang, 2014; Low & Watts, 2013). According to this account, humans possess two distinct systems for psychological reasoning: a late-developing system that emerges around age four, and an early-developing system that is present in infancy. The two systems exist in parallel in adulthood, each guiding different sets of responses. The late-developing system is capable of representing beliefs as such and is required for explicit judgments about others’ behavior. Its emergence thus enables successful responding on elicited-response false-belief tasks.
This system is highly flexible, allowing children and adults to represent any belief that they themselves can entertain. But this flexibility comes at the cost of efficiency: the late-developing system is slow, effortful, and dependent on language and executive function resources.
The early-developing system is incapable of representing beliefs as such. Instead, it tracks simpler belief-like “registrations”: an agent who encounters an object registers its location and properties. Registrations can be used to interpret and predict an agent’s actions, and this allows infants to succeed in many false-belief tasks. For example, if an agent encounters an object in location-A and then it is moved to location-B in her absence, the early-developing system can predict that the agent will search for the object in location-A because this is where she last registered it (e.g., Onishi & Baillargeon, 2005; Southgate et al., 2007). Because this system represents simpler states, its operation is fast, automatic, and independent of language and executive function. This system enables psychological reasoning in infants, who have limited language and executive function resources, and guides rapid, automatic responses (e.g., anticipatory looking) in older children and adults.

This account predicts that the performance of the early-developing system should exhibit a number of “signature limits”. First, registrations can only capture physical relations between an agent and an object. As a result, the early-developing system should fail in situations that require reasoning about how an agent construes an object, such as those involving false beliefs about identity (e.g., Low & Watts, 2013). Second, because the early-developing system is encapsulated from other cognitive processes, it should be incapable of handling situations that place considerable demands on executive function resources, such as those involving complex, interlocking sets of mental states that interact causally (e.g., Butterfill & Apperly, 2013; Low et al., 2014).
Contrary to these predictions, a growing body of evidence suggests that infants can reason about situations involving false beliefs about identity2 (e.g., Buttelmann et al., 2015; Scott & Baillargeon, 2009; Scott et al., 2015; Song & Baillargeon, 2008) and complex causal interactions amongst mental states (e.g., Choi & Luo, 2015; Moll et al., 2016; Scott, 2015; Scott & Baillargeon, 2009; Scott et al., 2015). For instance, Scott et al. (2015) examined whether 17-month-olds could reason about the actions of a deceptive agent who sought to lure another agent into holding a false belief about the identity of an object. In several experiments, the thief attempted to steal a desirable rattling toy during its owner’s absence by substituting a less desirable silent toy. Infants realized the thief could only succeed if the silent toy was visually identical to the rattling toy and the owner did not routinely shake her toy when she returned. When these conditions were met, infants expected the owner to hold a false belief that the silent toy was the rattling toy. These findings suggest that by 17 months, infants’ psychological reasoning does not exhibit either of the signature limits thought to characterize the early-developing system. Although these findings cannot rule out the possibility that humans possess two systems for psychological reasoning, they cast strong doubt on the two-systems account in its current form (for additional arguments against this account, see Carruthers, in press, 2016; Christensen & Michael, 2016; Helming, Strickland, & Jacob, in press; Michael & Christensen, 2016; Thompson, 2014).
Mentalistic accounts

In contrast to conceptual-shift accounts, mentalistic accounts assume that infants’ successful performance in false-belief tasks does demonstrate an understanding of belief (e.g., Carruthers, 2013; Gergely, 2010; Helming, Strickland, & Jacob, 2014; Leslie, 2005; Luo & Baillargeon, 2010; Mitchell, Currie, & Ziegler, 2009; Southgate et al., 2007). Although several mentalistic accounts have been proposed (e.g., Baron-Cohen, 1995; Carruthers, 2013; Gergely & Csibra, 2003; Johnson, 2005; Leslie, 1994; Premack & Premack, 1995; Spelke & Kinzler, 2007), we focus here on the account offered by Baillargeon, Scott, and colleagues (e.g., Baillargeon, Scott, & He, 2010; Baillargeon et al., 2015; Scott & Baillargeon, 2009), which builds on a large body of
evidence that infants successfully reason about a variety of mental states in the first years of life (for a review, see Baillargeon et al., 2015).

According to this mentalistic account, infants possess an evolved, domain-specific psychological-reasoning system that contains a skeletal causal framework for interpreting the behavior of agents. This system, which operates largely outside of conscious awareness, is triggered whenever infants attempt to interpret the behavior of an entity that they construe as an agent (for discussion of how infants categorize entities as agents, see Baillargeon, Scott, & Bian, 2016). The system then enables infants to infer some of the likely mental states that underlie the agent’s actions. Specifically, the psychological-reasoning system allows infants to attribute at least three different types of mental states to agents: motivational states, which capture the agent’s motivation in the scene (e.g., goals, preferences); epistemic states, which reflect the knowledge that an agent possesses or lacks about a scene; and counterfactual states, which include any false or pretend beliefs that the agent holds about the scene. In addition to these mental states, the psychological-reasoning system is thought to include constraints and principles that allow infants to predict how any agent ought to behave, given his or her mental states. One such principle is the principle of rationality, which states that all other things being equal, agents should expend as little effort as possible in order to achieve their goals (e.g., Gergely & Csibra, 2003; Gergely, Nádasdy, Csibra, & Bíró, 1995; Scott & Baillargeon, 2013; Southgate, Johnson, & Csibra, 2008). The psychological-reasoning system is presumably quite skeletal at first, much like other domain-specific systems that have been proposed for physical and biological reasoning (e.g., Baillargeon et al., 2012; Gelman, 1990).
As a result, infants’ early expectations about the behavior of agents are likely highly abstract and lacking in mechanistic detail (e.g., Keil, 1995, 2006). Infants may also face situations where they lack sufficient knowledge to infer an agent’s mental states. For instance, an infant who observes a parent typing on a laptop might very well view this action as deliberate and intentional while being unable to discern what the goal of this action might be. Consistent with this possibility, infants’ ability to infer the goal of various object-directed actions improves considerably over the first year of life. Six-month-olds who observe an agent repeatedly reach for object-A instead of object-B readily infer that the agent has the goal of obtaining object-A and expect her to reach for it in the future (e.g., Luo & Baillargeon, 2005; Spaepen & Spelke, 2007; Woodward, 1998). However, they have difficulty inferring the goal of other actions, such as pointing or looking at object-A (Kim & Song, 2008; Woodward, 2003; Woodward & Guajardo, 2002). Younger, 3-month-old infants have difficulty inferring the goal of a repeated grasping action and require additional support, such as prior experience grasping objects themselves, in order to do so (e.g., Sommerville, Woodward, & Needham, 2005). The causal framework of the psychological-reasoning system thus provides infants with a starting point for learning about the behaviors of agents. As infants observe and interact with agents, this enriches their psychological-reasoning system, enabling them to reason about agents more effectively in the future (see also Carruthers, 2013; Christensen & Michael, 2016). Much as everyday environments provide sufficient exposure to patterned light to support normal development of the visual system, it seems likely that infants routinely obtain sufficient exposure to agents to support development of the psychological-reasoning system.
Recent evidence that infants in a wide range of cultures successfully reason about agents’ motivational, epistemic, and counterfactual states supports this prediction (e.g., Barrett et al., 2013; Callaghan et al., 2011).

False-belief failures: competence vs. performance

If infants can represent false beliefs, as argued by mentalistic accounts, then why do children fail elicited-response false-belief tasks until at least age four? We argue that this pattern results from
Rose M. Scott, Erin Roby, and Megan A. Smith
a distinction between competence and performance: as many researchers have pointed out, the capacity to represent false beliefs does not guarantee successful performance in false-belief tasks (e.g., Baillargeon et al., 2010; Bloom & German, 2000; Carruthers, 2013, 2016; Chandler, Fritz, & Hala, 1989; Helming et al., 2014, in press; Leslie, 1994; Lewis & Osborne, 1990; Mitchell & Lacohée, 1991; Roth & Leslie, 1998; Scholl & Leslie, 2001; Siegal & Beattie, 1991; Yazdi, German, Defeyter, & Siegal, 2006). Indeed, there are likely many factors that mediate between the capacity to represent false beliefs and successful performance in any given situation. Here we discuss a few such factors suggested by recent research.

Attention/motivation

The psychological-reasoning system is triggered only when one attends to the behavior of an agent. Given that children readily orient to social stimuli from early in infancy (e.g., Farroni, Csibra, Simion, & Johnson, 2002; Farroni et al., 2005; Johnson, Dziurawiec, Ellis, & Morton, 1991), it is often assumed that children will naturally attend to agents in experimental paradigms. However, this may not always be the case. For instance, children diagnosed with autism spectrum disorder (ASD) orient less readily to social stimuli (e.g., Dawson, Meltzoff, Osterling, Rinaldi, & Brown, 1998). When compared to typically developing children, children diagnosed with ASD spend relatively more time attending to objects as opposed to people (e.g., Klin, Jones, Schultz, Volkmar, & Cohen, 2002; Osterling, Dawson, & Munson, 2002; Swettenham et al., 1998). This atypical attention to social stimuli likely contributes to the well-established difficulties that children with ASD exhibit in tests of psychological reasoning (e.g., Klin et al., 2002), and may also compromise the development of the psychological-reasoning system.
Even amongst typically developing children, events that disrupt children’s attention to the agent interfere with their performance on false-belief tasks (e.g., Rubio-Fernández & Geurts, 2013). Moreover, recent work with adults suggests that the inclination to attend to others’ mental states may be more predictive of everyday social functioning than the ability to correctly represent and infer mental states (e.g., Brandone, Werner, & Stout, 2015). Thus, the capacity to represent mental states is insufficient for successful false-belief reasoning (or for navigating everyday social interactions): one must also be inclined to attend to and reason about other individuals.3

Processing demands

Even if children attend to an agent and infer the contents of that agent’s false belief, they might still fail a false-belief task due to difficulties expressing this understanding. The processing-load account put forth by Baillargeon, Scott, and colleagues argues that children’s performance in any given false-belief task depends both on the processing demands imposed by that task and their ability to cope with those demands (e.g., Baillargeon et al., 2010; Baillargeon et al., 2015). If the processing demands of a task exceed children’s processing abilities, then they will fail the task despite their ability to represent false beliefs. According to this account, one reason young children fail elicited-response false-belief tasks is that these tasks generally impose greater processing demands than do the tasks that are used with young infants. In particular, when children are asked the test question in standard elicited-response tasks (e.g., “Where will Sally look for her marble?”), a response-selection process is activated: children must interpret the test question, choose to answer it, and generate an appropriate response (Scott & Baillargeon, 2009; see also Mueller, Brass, Waszak, & Prinz, 2007; Saxe, Schulz, & Jiang, 2006).
In standard elicited-response tasks, executing this response-selection process triggers a prepotent tendency to answer the question based on the marble’s
False-belief understanding
actual location. At present, the source of this bias remains unclear: it could arise because (1) children’s own beliefs are naturally more salient, (2) explicitly mentioning the marble draws children’s attention to its actual location, (3) children misinterpret the test question as an indirect request for the marble’s actual location, (4) children misinterpret the test question as asking where Sally ought to look for her marble (or where she will eventually have to look in order to find the marble), or (5) children mistakenly adopt the experimenter’s perspective rather than Sally’s perspective (e.g., Baillargeon et al., 2015; Carruthers, 2013; Goldman, 2012; Hansen, 2010; Helming et al., 2014, in press; Leslie & Polizzi, 1998; Lewis, Hacquard, & Lidz, 2012; Mitchell et al., 2009; Rubio-Fernández & Geurts, 2013; Siegal & Beattie, 1991). Regardless of its source, children must inhibit this prepotent response in order to answer the test question correctly based on Sally’s false belief; this is challenging for younger children with immature inhibitory skills (e.g., Carlson & Moses, 2001). Finally, simultaneously holding in mind the agent’s false belief while planning and executing a response imposes substantial working memory demands (e.g., Freeman & Lacohée, 1995; Setoh, Scott, & Baillargeon, 2011). The processing-load account predicts that if processing demands were sufficiently reduced, younger children might succeed in an elicited-response task. Several recent findings provide support for this prediction (e.g., Rubio-Fernández & Geurts, 2013, in press; Scott & Setoh, 2012; Setoh et al., 2011). For instance, Setoh et al. (2011) devised a modified elicited-response task that was designed to reduce the inhibitory and working memory demands associated with response selection. In this task, 2.5-year-old children heard a false-belief story accompanied by a picture book.
In each of six story trials, an experimenter turned a page of the book to reveal a new picture and recited a line of the story. The story introduced Emma, who found an apple in one of two containers (e.g., a box), moved it to the other container (e.g., a bowl), and then went outside to play with her ball. In her absence, her brother Ethan found the apple and took it away. Emma then returned to look for her apple. In the test trial, children saw pictures of the two containers and were asked where Emma would look for her apple. Because the apple was removed to an unknown location, the prepotent response evoked by the test question should be weaker and easier for the children to inhibit. To further reduce the response-selection demands of the test trial, two practice trials were interspersed amongst the story trials. In one, children saw an apple and a banana and were asked “Where is Emma’s apple?”; in the other, they saw a ball and a Frisbee and were asked “Where is Emma’s ball?” In each case, children were required to point to the matching picture. These trials thus gave children practice interpreting a “where” question and producing an appropriate response by pointing to one of two pictures. Children performed reliably above chance in the test trial, pointing to the container Emma falsely believed held her apple. Additional experiments replicated these results and revealed that children failed if they received only one practice trial (Setoh et al., 2011) or if the practice questions (“Which one is Emma’s apple?”) differed in linguistic form from the test question (“Where will Emma look for her apple?”; Scott & Setoh, 2012). These results demonstrate that young children are easily overwhelmed by simultaneously representing an agent’s false belief and answering a question about this belief, and that when these demands are reduced, they can succeed in elicited-response tasks at younger ages.
In contrast to elicited-response tasks, the tasks used with infants and toddlers have been designed to impose fewer processing demands on children, allowing them to express their false-belief understanding at younger ages. In particular, tasks that assess spontaneous responses such as anticipatory-looking (e.g., Southgate et al., 2007), violation-of-expectation (e.g., Onishi & Baillargeon, 2005), or preferential-looking (e.g., Scott et al., 2012) do not involve response-selection demands: because children are not asked direct questions, no response-selection process is activated. However, this does not imply that such tasks do not involve any processing
demands, nor that children’s success on such tasks is guaranteed. According to the processing-load account, children’s performance in any false-belief task is jointly determined by the processing demands imposed by that task and children’s ability to cope with those demands. Just as decreasing processing demands facilitates young children’s performance on elicited-response tasks, increasing processing demands should impede young children’s performance on spontaneous-response tasks. Children’s performance on high-demand spontaneous-response tasks should be correlated with their processing skills, just as performance on elicited-response tasks is correlated with skills such as inhibitory control (e.g., Carlson & Moses, 2001) and linguistic ability (e.g., Milligan, Astington, & Dack, 2007). Scott and Roby (2015) recently tested these predictions using a novel high-demand preferential-looking task. Preferential-looking tasks take advantage of the well-established tendency for children and adults to look spontaneously at images that match the sentences they hear (e.g., Scott et al., 2012; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In this task, 3-year-old children heard a change-of-location false-belief story accompanied by a picture book. The story introduced Mia, who wanted to give her grandmother a cookie for her birthday. She placed the cookie in a box, then left. In her absence, her brother moved the cookie to a nearby bag. Next, children saw Mia running into the room in a coat and heard, “ ‘Hurry, hurry,’ says Mia’s mom. ‘We’re leaving for Grandma’s!’ Mia puts on her coat and quickly runs in to get Grandma’s cookie.” This story line was ambiguous and open to two interpretations: (1) Mia runs in and hastily grabs the container that she believes contains the cookie (the box; false-belief interpretation), and (2) Mia runs in, locates the cookie, and takes the container holding it (the bag; reality interpretation).
Although both interpretations were plausible, adults find the false-belief interpretation more appropriate. In the subsequent test trial, children saw two pictures of the back of an unknown individual in a hooded coat. In one image, the individual carried the box (original-container picture) and in the other, the individual carried the bag (current-container picture). While viewing these images, children heard, “There’s Mia walking to Grandma’s. She’s carrying Grandma’s present”, and their looking time to each image was measured. Because children tend to look at images that match the sentences they hear, they should look longer at the individual that they thought was Mia carrying Grandma’s present.4 However, children never saw which container Mia selected: in order to decide which individual was Mia, they had to infer which container she must have taken based on the story. Thus, children who arrived at the more appropriate false-belief interpretation should look longer at the original-container picture, whereas those who arrived at the less appropriate reality interpretation should instead look longer at the current-container picture. Results indicated that children’s performance was strongly correlated with their verbal ability: children with more advanced verbal abilities looked longer at the original-container picture, whereas those with low verbal abilities looked longer at the current-container picture. These results thus confirm that even in a spontaneous-response task, children’s performance depends on processing demands and processing skills (see also Schneider, Lam, Bayliss, & Dux, 2012; Yott & Poulin-Dubois, 2012).

Universal origins, culturally specific endpoints

We have argued that the capacity to represent beliefs develops universally in the first years of life as a part of the psychological-reasoning system, but that children’s performance in false-belief tasks is constrained by factors such as attention, motivation, and processing demands.
This raises several possible avenues by which environmental factors could lead to individual and cultural differences in the expression of a universal capacity for representing beliefs. For
instance, there is a well-established link between social input and children’s performance on false-belief tasks (e.g., Adrián, Clemente, Villanueva, & Rieffe, 2005; Ensor, Devine, Marks, & Hughes, 2014; Mayer & Träuble, 2013; Meins et al., 2003; Ruffman, Slade, & Crowe, 2002; Taumoepeau & Reese, 2013). Children whose mothers use more mental-state terms in conversation (especially cognitive terms such as think and know) perform better on elicited-response false-belief tasks (e.g., Adrián et al., 2005; Ensor & Hughes, 2008; Ruffman et al., 2002; Symons, Peterson, Slaughter, Roche, & Doyle, 2005). In cultures where the discussion of mental states is less socially appropriate, children pass elicited-response tasks at later ages (Mayer & Träuble, 2013) and they also perform more poorly on other tests of social understanding (Taumoepeau, Reese, & Gupta, 2012). Recent evidence suggests that the relationship between social input and false-belief performance also holds for spontaneous-response false-belief tasks. Roby and Scott (2015) found that 2.5-year-olds who hear more mental-state terms more readily anticipate where a mistaken agent will search for a goal object. Conversely, deaf infants of hearing parents, who are exposed to significantly less mental-state language than hearing infants of hearing parents (Morgan et al., 2014), fail anticipatory-looking false-belief tasks at 17 months of age (Meristo et al., 2012). These studies demonstrate that from early in infancy, social input is related to children’s performance in both spontaneous- and elicited-response false-belief tasks. One possibility is that children who frequently engage in conversations about mental states more readily attend to the mental states of others. Alternatively, frequent discussion of mental states could provide children with practice inferring the content of others’ mental states, which would then facilitate children’s false-belief inferences in both spontaneous- and elicited-response tasks.
While further research is needed to disentangle these possibilities, these findings demonstrate one way in which environmental/cultural differences interact with a universal capacity for belief representation to yield individual differences in false-belief performance.5
Implications for children’s social and cognitive development

As discussed at the outset of this chapter, understanding the origins of false-belief reasoning has implications beyond theoretical debates about the cognitive mechanisms that drive psychological reasoning. The capacity to represent beliefs has been argued to play a fundamental role in both the development and evolution of many human social behaviors (e.g., Baillargeon et al., 2013; Herrmann et al., 2007; Sperber & Wilson, 1995). For instance, the ability to recognize that others’ representations of the world may differ from our own may allow us to more effectively interpret the communicative behaviors of conversational partners and to tailor our own communicative behaviors to meet the needs of our addressees (e.g., Brown-Schmidt, 2009; Hanna, Tanenhaus, & Trueswell, 2003; Lockridge & Brennan, 2002; Nadig & Sedivy, 2002; Shwe & Markman, 1997; Southgate et al., 2010; Tager-Flusberg, 2000). This is essential to cultural learning, where novice learners’ assumptions and beliefs may be very different from those of their teachers. False-belief reasoning might also facilitate cooperation and living in large social groups by allowing us to distinguish between what we think and feel internally and what we convey externally to others (e.g., Baillargeon et al., 2013). A number of studies attest to the importance of false-belief understanding for other aspects of development: children’s performance on elicited-response false-belief tasks is correlated with their social competence (Lalonde & Chandler, 1995), peer acceptance (Fink, Begeer, Peterson, Slaughter, & de Rosnay, 2015; Slaughter, Dennis, & Pritchard, 2002), and cooperation in peer interactions (Dunn & Cutting, 1999). However, the causal relationship between false-belief understanding and these developmental outcomes remains unclear and depends in part on
how one interprets recent positive evidence of false-belief understanding in infancy. If successful performance on elicited-response tasks indicates the emergence of a representational understanding of the mind, then these associations suggest that developing the capacity to represent others’ minds leads to advances in a variety of social behaviors (e.g., Wellman, 2014). If, however, the capacity to represent others’ minds emerges universally in infancy, then a different explanation of these correlations is warranted. Clearly, elicited-response tasks measure an ability that is broadly related to other aspects of development. If this is not the capacity to represent others’ minds per se, then what might it be? Perhaps these correlations reflect important individual differences in attention to, or skill at inferring the contents of, others’ mental states. If so, then performance on spontaneous-response tasks might also correlate with skills such as social competence and cooperation. Alternatively, these associations might reflect the importance of individual differences in children’s ability to express their false-belief understanding while coping with other demands. Scott and Baillargeon (2009) suggested that successful performance in elicited-response false-belief tasks might reflect maturation of neural connections between brain regions devoted to representing beliefs and those devoted to response selection. The maturation of such pathways might also support the expression of false-belief understanding in social situations, giving rise to the associations described above. Clarifying these issues has important implications for supporting optimal social cognitive development in children. 
For instance, some research suggests that preschool children from lower socioeconomic backgrounds perform more poorly on elicited-response tasks than their more advantaged peers (e.g., Cutting & Dunn, 1999; Holmes, Black, & Miller, 1996; Hughes et al., 2000), and this may place them at risk for poorer outcomes, including poorer social competence. These children might benefit from interventions intended to improve their prospects. However, what skills should such an intervention target? This depends on the underlying cause of the relationship between elicited-response tasks and developmental outcomes. Thus, additional research is needed to clarify the abilities assessed by different types of false-belief tasks as well as how these abilities relate to other areas of development.
Notes

1 If the motor system were used to infer mental states, as suggested by some simulation accounts (e.g., Gallese & Goldman, 1998), then increased motor activation should have occurred in both conditions: in both cases, infants needed to infer the agent’s mental states. The selective pattern of activation observed thus suggests that motor cortex is involved in predicting what agents will do given their mental states rather than inferring those states per se (see also Jacob & Jeannerod, 2005).
2 These findings are at odds with several recent studies reporting that adults fail certain anticipatory-looking tasks involving false beliefs about identity (e.g., Low & Watts, 2013; Low et al., 2014; Wang, Hadi, & Low, 2015). However, these negative findings likely reflect task demands rather than an inability to process false beliefs about identity (see Carruthers, in press, 2016; Scott et al., 2015).
3 Of course, attending to an agent’s behavior does not guarantee that one will successfully infer that agent’s beliefs. One might have difficulty inferring beliefs due to a lack of situational knowledge (e.g., Christensen & Michael, 2016) or processing demands (see Carruthers, 2016).
4 When presented with a single scene, children tend to respond with increased attention if that scene violates their expectations (i.e., violation-of-expectation tasks). However, several decades of research have shown that when children view multiple images accompanied by a word or sentence, they tend to look at the image that matches the language they hear (e.g., Golinkoff, Ma, Song, & Hirsh-Pasek, 2013). Thus, the prediction here was that children would look at the image that was consistent with their interpretation of the story, rather than the image that was “unexpected” based on the story (see also Scott et al., 2012).
5 In addition to affecting the expression of children’s intuitive psychological reasoning, culture also certainly impacts the creation of an explicit “folk theory” of others’ minds (for discussion, see Lillard, 1998).
References

Adrián, J. E., Clemente, R. A., Villanueva, L. & Rieffe, C. (2005). Parent-child picture-book reading, mothers’ mental state language and children’s theory of mind. Journal of Child Language, 32, 673–686.
Apperly, I. A. & Butterfill, S. A. (2009). Do humans have two systems to track beliefs and belief-like states? Psychological Review, 116, 953–970.
Baillargeon, R., He, Z., Setoh, P., Scott, R. M., Sloane, S. & Yang, D. Y. J. (2013). False-belief understanding and why it matters: The social-acting hypothesis. In M. R. Banaji and S. A. Gelman (Eds.), Navigating the Social World: What Infants, Children, and Other Species Can Teach Us (pp. 88–95). New York: Oxford University Press.
Baillargeon, R., Scott, R. M. & Bian, L. (2016). Psychological reasoning in infancy. Annual Review of Psychology, 67, 159–186.
Baillargeon, R., Scott, R. M. & He, Z. (2010). False-belief understanding in infants. Trends in Cognitive Sciences, 14, 110–118.
Baillargeon, R., Scott, R. M., He, Z., Sloane, S., Setoh, P., Jin, K., Wu, D. & Bian, L. (2015). Psychological and sociomoral reasoning in infancy. In M. Mikulincer and P. R. Shaver (Eds.), E. Borgida and A. Bargh (Assoc. Eds.), APA Handbook of Personality and Social Psychology: Vol. 1. Attitudes and Social Cognition (pp. 79–150). Washington, DC: American Psychological Association.
Baillargeon, R., Stavans, M., Wu, D., Gertner, Y., Setoh, P., Kittredge, A. & Bernard, A. (2012). Object individuation and physical reasoning in infancy: An integrative account. Language Learning and Development, 8, 4–46.
Baron-Cohen, S. (1995). Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press/Bradford Books.
Baron-Cohen, S., Leslie, A. M. & Frith, U. (1985). Does the autistic child have a “theory of mind”? Cognition, 21, 37–46.
Barrett, H. C., Broesch, T., Scott, R. M., He, Z., Baillargeon, R., Wu, D., . . . Laurence, S. (2013). Early false-belief understanding in traditional non-Western societies.
Proceedings of the Royal Society of London B: Biological Sciences, 280, 20122654.
Bloom, P. & German, T. P. (2000). Two reasons to abandon the false belief task as a test of theory of mind. Cognition, 77, B25–B31.
Brandone, A. C., Werner, J. & Stout, W. (2015, March). Individual differences in theory of mind reasoning and relations to social competence in college students. Paper presented at the biennial meeting of the Society for Research in Child Development, Philadelphia, PA.
Brown-Schmidt, S. (2009). The role of executive function in perspective taking during online language comprehension. Psychonomic Bulletin & Review, 16, 893–900.
Buttelmann, D., Carpenter, M. & Tomasello, M. (2009). Eighteen-month-old infants show false belief understanding in an active helping paradigm. Cognition, 112, 337–342.
Buttelmann, D., Over, H., Carpenter, M. & Tomasello, M. (2014). Eighteen-month-olds understand false beliefs in an unexpected-contents task. Journal of Experimental Child Psychology, 119, 120–126.
Buttelmann, F., Suhrke, J. & Buttelmann, D. (2015). What you get is what you believe: Eighteen-month-olds demonstrate belief understanding in an unexpected-identity task. Journal of Experimental Child Psychology, 131, 94–103.
Butterfill, S. A. & Apperly, I. A. (2013). How to construct a minimal theory of mind. Mind & Language, 28, 606–637.
Callaghan, T., Moll, H., Rakoczy, H., Warneken, F., Liszkowski, U., Behne, T. & Tomasello, M. (2011). Early social cognition in three cultural contexts. Monographs of the Society for Research in Child Development, 76, 1–142.
Carlson, S. M. & Moses, L. J. (2001). Individual differences in inhibitory control and children’s theory of mind. Child Development, 72, 1032–1053.
Carruthers, P. (2013). Mindreading in infancy. Mind & Language, 28, 141–172.
———. (in press). Mindreading in adults: Evaluating two-systems views. Synthese.
———. (2016). Two systems for mindreading? Review of Philosophy and Psychology, 7, 141–162.
Chandler, M., Fritz, A. S. & Hala, S. (1989). Small-scale deceit: Deception as a marker of two-, three-, and four-year-olds’ early theories of mind. Child Development, 60, 1263–1277.
Choi, Y. J. & Luo, Y. (2015). 13-month-olds’ understanding of social interactions. Psychological Science, 26, 274–283.
Christensen, W. & Michael, J. (2016). From two systems to a multi-systems architecture for mindreading. New Ideas in Psychology, 40, 48–64.
Clements, W. A. & Perner, J. (1994). Implicit understanding of belief. Cognitive Development, 9, 377–395.
Cutting, A. L. & Dunn, J. (1999). Theory of mind, emotion understanding, language, and family background: Individual differences and interrelations. Child Development, 70, 853–865.
Dawson, G., Meltzoff, A. N., Osterling, J., Rinaldi, J. & Brown, E. (1998). Children with autism fail to orient to naturally occurring social stimuli. Journal of Autism and Developmental Disorders, 28, 479–485.
De Bruin, L. C. & Newen, A. (2012). An association account of false belief understanding. Cognition, 123, 240–259.
de Villiers, J. & de Villiers, P. (2003). Language for thought: Coming to understand false beliefs. In D. Gentner and S. Goldin-Meadow (Eds.), Language in Mind: Advances in the Study of Language and Thought (pp. 335–384). Cambridge, MA: MIT Press.
de Villiers, P. A. & de Villiers, J. G. (2012). Deception dissociates from false belief reasoning in deaf children: Implications for the implicit versus explicit theory of mind distinction. British Journal of Developmental Psychology, 30, 188–209.
Devine, R. T. & Hughes, C. (2014). Relations between false belief understanding and executive function in early childhood: A meta-analysis. Child Development, 85, 1777–1794.
Dunn, J. & Cutting, A. L. (1999). Understanding others, and individual differences in friendship interactions in young children. Social Development, 8, 201–219.
Ensor, R., Devine, R. T., Marks, A. & Hughes, C. (2014).
Mothers’ cognitive references to 2-year-olds predict theory of mind at ages 6 and 10. Child Development, 85, 1222–1235.
Ensor, R. & Hughes, C. (2008). Content or connectedness? Mother-child talk and early social understanding. Child Development, 79, 201–216.
Farroni, T., Csibra, G., Simion, F. & Johnson, M. H. (2002). Eye contact detection in humans from birth. Proceedings of the National Academy of Sciences, 99, 9602–9605.
Farroni, T., Johnson, M. H., Menon, E., Zulian, L., Faraguna, D. & Csibra, G. (2005). Newborns’ preference for face-relevant stimuli: Effects of contrast polarity. Proceedings of the National Academy of Sciences, 102, 17245–17250.
Fink, E., Begeer, S., Peterson, C. C., Slaughter, V. & de Rosnay, M. (2015). Friendlessness and theory of mind: A prospective longitudinal study. British Journal of Developmental Psychology, 33, 1–17.
Freeman, N. H. & Lacohée, H. (1995). Making explicit 3-year-olds’ implicit competence with their own false beliefs. Cognition, 56, 31–60.
Gallese, V. & Goldman, A. (1998). Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences, 2, 493–501.
Garnham, W. A. & Ruffman, T. (2001). Doesn’t see, doesn’t know: Is anticipatory looking really related to understanding or belief? Developmental Science, 4, 94–100.
Gelman, R. (1990). First principles organize attention to and learning about relevant data: Number and the animate-inanimate distinction as examples. Cognitive Science, 14, 79–106.
Gergely, G. (2010). Kinds of agents: The origins of understanding instrumental and communicative agency. In U. Goswami (Ed.), Blackwell Handbook of Childhood Development, 2nd edition (pp. 76–105). Oxford: Blackwell Publishers.
Gergely, G. & Csibra, G. (2003). Teleological reasoning in infancy: The naïve theory of rational action. Trends in Cognitive Sciences, 7, 287–292.
Gergely, G., Nádasdy, Z., Csibra, G. & Bíró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165–193.
Goldman, A. I. (2012).
Theory of mind. In E. Margolis, S. Laurence and S. Stich (Eds.), The Oxford Handbook of Philosophy of Cognitive Science (pp. 402–424). Oxford, UK: Oxford University Press.
Golinkoff, R. M., Ma, W., Song, L. & Hirsh-Pasek, K. (2013). Twenty-five years using the intermodal preferential looking paradigm to study language acquisition: What have we learned? Perspectives on Psychological Science, 8, 316–339.
Gopnik, A. & Astington, J. W. (1988). Children’s understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction. Child Development, 59, 26–37.
Gopnik, A. & Wellman, H. M. (1994). The theory theory. In L. A. Hirschfeld and S. A. Gelman (Eds.), Mapping the Mind: Domain Specificity in Cognition and Culture (pp. 257–293). New York, NY: Cambridge University Press.
———. (2012). Reconstructing constructivism: Causal models, Bayesian learning mechanisms, and the theory theory. Psychological Bulletin, 138, 1085–1108.
Hanna, J. E., Tanenhaus, M. K. & Trueswell, J. C. (2003). The effects of common ground and perspective on domains of referential interpretation. Journal of Memory and Language, 49, 43–61.
Hansen, M. B. (2010). If you know something, say something: Young children’s problem with false beliefs. Frontiers in Psychology, 1, 1–7.
He, Z., Bolz, M. & Baillargeon, R. (2011). False-belief understanding in 2.5-year-olds: Evidence from violation-of-expectation change-of-location and unexpected-contents tasks. Developmental Science, 14, 292–305.
———. (2012). 2.5-year-olds succeed at a verbal anticipatory-looking false-belief task. British Journal of Developmental Psychology, 30, 14–29.
Helming, K. A., Strickland, B. & Jacob, P. (2014). Making sense of early false-belief understanding. Trends in Cognitive Sciences, 18, 167–170.
Helming, K., Strickland, B. & Jacob, P. (in press). Solving the puzzle about early belief ascription. Mind & Language.
Henrich, J., Heine, S. J. & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–83.
Herrmann, E., Call, J., Hernández-Lloreda, M. V., Hare, B. & Tomasello, M. (2007). Humans have evolved specialized skills of social cognition: The cultural intelligence hypothesis. Science, 317, 1360–1366.
Heyes, C. (2014). False belief in infancy: A fresh look. Developmental Science, 17, 647–659.
Heyes, C. M.
& Frith, C. D. (2014). The cultural evolution of mind reading. Science, 344, 1243091. Holmes, H. A., Black, C. & Miller, S. A. (1996). A cross-task comparison of false belief understanding in a Head Start population. Journal of Experimental Child Psychology, 63, 263–285. Hughes, C., Adlam, A., Happé, F., Jackson, J., Taylor, A. & Caspi, A. (2000). Good test-retest reliability for standard and advanced false-belief tasks across a wide range of abilities. Journal of Child Psychology and Psychiatry, 41, 483–490. Jacob, P. (Forthcoming). A puzzle about belief-ascription. In B. Kaldis (Ed.), Mind and Society: Cognitive Science Meets the Philosophy of the Social Sciences. Berlin, Germany: Synthese Philosophy Library, Springer. Jacob, P. & Jeannerod, M. (2005).The motor theory of social cognition: A critique. Trends in Cognitive Sciences, 9, 21–25. Johnson, M. H., Dziurawiec, S., Ellis, H. & Morton, J. (1991). Newborns’ preferential tracking of face-like stimuli and its subsequent decline. Cognition, 40, 1–19. Johnson, S. C. (2005). Reasoning about intentionality in preverbal infants. In P. Carruthers, S. Laurence and S. Stich. (Eds.), The Innate Mind: Structure and Contents (Vol. 1, pp. 254–271). New York: Oxford University Press. Kaminski, J., Call, J. & Tomasello, M. (2008). Chimpanzees know what others know, but not what they believe. Cognition, 109, 224–234. Kampis, D., Parise, E., Csibra, G. & Kovács, Á. M. (2015). Neural signatures for sustaining object representations attributed to others in preverbal human infants. Proceedings of the Royal Society of London B: Biological Sciences, 282, 20151683. Keil, F. C. (1995). The growth of causal understandings of natural kinds. In D. Sperber, D. Premack and A.J Premack (Eds.), Causal Cognition: A Multidisciplinary Debate (pp. 234–267). Oxford: Oxford University Press. ———. (2006). Explanation and understanding. Annual Review of Psychology, 57, 227–254. Kim, M. & Song, H. (2008). 
Korean 9- and 7-month-old infants’ ability to encode the goals of others’ pointing actions. The Korean Journal of Developmental Psychology, 21, 41–61.
Rose M. Scott, Erin Roby, and Megan A. Smith
Klin, A., Jones, W., Schultz, R., Volkmar, F. & Cohen, D. (2002). Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Archives of General Psychiatry, 59, 809–816.
Knudsen, B. & Liszkowski, U. (2012a). 18-month-olds predict specific action mistakes through attribution of false belief, not ignorance, and intervene accordingly. Infancy, 17, 672–691.
———. (2012b). Eighteen- and 24-month-old infants correct others in anticipation of action mistakes. Developmental Science, 15, 113–122.
Kovács, Á. M., Téglás, E. & Endress, A. D. (2010). The social sense: Susceptibility to others' beliefs in human infants and adults. Science, 330, 1830–1834.
Lalonde, C. E. & Chandler, M. J. (1995). False belief understanding goes to school: On the social-emotional consequences of coming early or late to a first theory of mind. Cognition & Emotion, 9, 167–185.
Lecce, S. & Hughes, C. (2010). The Italian job?: Comparing theory of mind performance in British and Italian children. British Journal of Developmental Psychology, 28, 747–766.
Leslie, A. M. (1994). Pretending and believing: Issues in the theory of ToMM. Cognition, 50, 211–238.
———. (2005). Developmental parallels in understanding minds and bodies. Trends in Cognitive Sciences, 9, 459–462.
Leslie, A. M. & Polizzi, P. (1998). Inhibitory processing in the false belief task: Two conjectures. Developmental Science, 1, 247–253.
Lewis, C. & Osborne, A. (1990). Three-year-olds' problems with false belief: Conceptual deficit or linguistic artifact? Child Development, 61, 1514–1519.
Lewis, S., Hacquard, V. & Lidz, J. (2012). The semantics and pragmatics of belief reports in preschoolers. Proceedings of Semantics and Linguistic Theory, 22, 247–267.
Lillard, A. (1998). Ethnopsychologies: Cultural variations in theories of mind. Psychological Bulletin, 123, 3–32.
Liu, D., Wellman, H. M., Tardif, T. & Sabbagh, M. A. (2008). Theory of mind development in Chinese children: A meta-analysis of false-belief understanding across cultures and languages. Developmental Psychology, 44, 523–531.
Lockridge, C. B. & Brennan, S. E. (2002). Addressees' needs influence speakers' early syntactic choices. Psychonomic Bulletin & Review, 9, 550–557.
Low, J. (2010). Preschoolers' implicit and explicit false-belief understanding: Relations with complex syntactical mastery. Child Development, 81, 597–615.
Low, J., Drummond, W., Walmsley, A. & Wang, B. (2014). Representing how rabbits quack and competitors act: Limits on preschoolers' efficient ability to track perspective. Child Development, 85, 1519–1534.
Low, J. & Watts, J. (2013). Attributing false beliefs about object identity reveals a signature blind spot in humans' efficient mind-reading system. Psychological Science, 24, 305–311.
Luo, Y. (2011). Do 10-month-old infants understand others' false beliefs? Cognition, 121, 289–298.
Luo, Y. & Baillargeon, R. (2005). Can a self-propelled box have a goal? Psychological reasoning in 5-month-old infants. Psychological Science, 16, 601–608.
———. (2010). Toward a mentalistic account of early psychological reasoning. Current Directions in Psychological Science, 19, 301–307.
Marticorena, D. C. W., Ruiz, A. M., Mukerji, C., Goddu, A. & Santos, L. R. (2011). Monkeys represent others' knowledge but not their beliefs. Developmental Science, 14, 1406–1416.
Mayer, A. & Träuble, B. E. (2013). Synchrony in the onset of mental state understanding across cultures? A study among children in Samoa. International Journal of Behavioral Development, 37, 21–28.
Meins, E., Fernyhough, C., Wainwright, R., Clark-Carter, D., Das Gupta, M., Fradley, E. & Tuckey, M. (2003). Pathways to understanding mind: Construct validity and predictive validity of maternal mind-mindedness. Child Development, 74, 1194–1211.
Meristo, M., Morgan, G., Geraci, A., Iozzi, L., Hjelmquist, E., Surian, L. & Siegal, M. (2012). Belief attribution in deaf and hearing infants. Developmental Science, 15, 633–640.
Michael, J. & Christensen, W. (2016). Flexible goal attribution in early mindreading. Psychological Review, 123, 219–227.
Milligan, K., Astington, J. W. & Dack, L. A. (2007). Language and theory of mind: Meta-analysis of the relation between language ability and false-belief understanding. Child Development, 78, 622–646.
Mitchell, P., Currie, G. & Ziegler, F. (2009). Two routes to perspective: Simulation and rule-use as approaches to mentalizing. British Journal of Developmental Psychology, 27, 513–543.
Mitchell, P. & Lacohée, H. (1991). Children's early understanding of false belief. Cognition, 39, 107–127.
Moll, H., Kane, S. & McGowan, L. (2016). Three-year-olds express suspense when an agent approaches a scene with a false belief. Developmental Science, 19, 208–220.
Morgan, G., Meristo, M., Mann, W., Hjelmquist, E., Surian, L. & Siegal, M. (2014). Mental state language and quality of conversational experience in deaf and hearing children. Cognitive Development, 29, 41–49.
Mueller, V. A., Brass, M., Waszak, F. & Prinz, W. (2007). The role of the preSMA and the rostral cingulate zone in internally selected actions. Neuroimage, 37, 1354–1361.
Nadig, A. S. & Sedivy, J. C. (2002). Evidence for perspective-taking constraints in children's on-line reference resolution. Psychological Science, 13, 329–336.
Naito, M. & Koyama, K. (2006). The development of false-belief understanding in Japanese children: Delay and difference? International Journal of Behavioral Development, 30, 290–304.
Onishi, K. H. & Baillargeon, R. (2005). Do 15-month-old infants understand false beliefs? Science, 308, 255–258.
Osterling, J. A., Dawson, G. & Munson, J. A. (2002). Early recognition of 1-year-old infants with autism spectrum disorder versus mental retardation. Development and Psychopathology, 14, 239–251.
Perner, J. (1995). The many faces of belief: Reflections on Fodor's and the child's theory of mind. Cognition, 57, 241–269.
———. (2010). Who took the cog out of cognitive science? Mentalism in an era of anti-cognitivism. In P. A. Frensch and R. Schwarzer (Eds.), Cognition and Neuropsychology: International Perspectives on Psychological Science (Vol. 1, pp. 241–261). Hove, UK: Psychology Press.
Perner, J., Leekam, S. R. & Wimmer, H. (1987). Three-year-olds' difficulty with false belief: The case for a conceptual deficit. British Journal of Developmental Psychology, 5, 125–137.
Perner, J. & Roessler, J. (2012). From infants' to children's appreciation of belief. Trends in Cognitive Sciences, 16, 519–525.
Perner, J. & Ruffman, T. (2005). Infants' insight into the mind: How deep? Science, 308, 214–216.
Premack, D. & Premack, A. J. (1995). Origins of human social competence. In M. S. Gazzaniga (Ed.), The Cognitive Neurosciences (pp. 205–218). Cambridge, MA: MIT Press.
Rakoczy, H., Bergfeld, D., Schwarz, I. & Fizke, E. (2015). Explicit theory of mind is even more unified than previously assumed: Belief ascription and understanding aspectuality emerge together in development. Child Development, 86, 486–502.
Rhodes, M. & Brandone, A. C. (2014). Three-year-olds' theories of mind in actions and words. Frontiers in Psychology, 5, 263.
Roby, E. M. & Scott, R. M. (2015, March). How does social input influence false-belief reasoning? Parent mental-state language and toddlers' false-belief understanding. Paper presented at the biennial meeting of the Society for Research in Child Development, Philadelphia, PA.
Roth, D. & Leslie, A. M. (1998). Solving belief problems: Toward a task analysis. Cognition, 66, 1–31.
Rubio-Fernández, P. & Geurts, B. (2013). How to pass the false-belief task before your fourth birthday. Psychological Science, 24, 27–33.
———. (in press). Don't mention the marble! The role of attentional processes in false-belief tasks. Review of Philosophy and Psychology.
Ruffman, T. (2014). To belief or not belief: Children's theory of mind. Developmental Review, 34, 265–293.
Ruffman, T., Slade, L. & Crowe, E. (2002). The relation between children's and mothers' mental state language and theory-of-mind understanding. Child Development, 73, 734–751.
San Juan, V. & Astington, J. W. (2012). Bridging the gap between implicit and explicit understanding: How language development promotes the processing and representation of false belief. British Journal of Developmental Psychology, 30, 105–122.
Saxe, R., Schulz, L. E. & Jiang, Y. V. (2006). Reading minds versus following rules: Dissociating theory of mind and executive control in the brain. Social Neuroscience, 1, 284–298.
Schneider, D., Lam, R., Bayliss, A. P. & Dux, P. E. (2012). Cognitive load disrupts implicit theory-of-mind processing. Psychological Science, 23, 842–847.
Scholl, B. J. & Leslie, A. M. (2001). Minds, modules, and meta-analysis. Child Development, 72, 696–701.
Scott, R. M. (2014). Post hoc versus predictive accounts of children's theory of mind: A reply to Ruffman. Developmental Review, 34, 300–304.
———. (2015, March). Beyond a simple reach: 20-month-olds understand the emotional impact of false beliefs. Paper presented at the biennial meeting of the Society for Research in Child Development, Philadelphia, PA.
Scott, R. M. & Baillargeon, R. (2009). Which penguin is this? Attributing false beliefs about object identity at 18 months. Child Development, 80, 1172–1196.
———. (2013). Do infants really expect agents to act efficiently? A critical test of the rationality principle. Psychological Science, 24, 466–474.
———. (2014). How fresh a look? A reply to Heyes. Developmental Science, 17, 660–664.
Scott, R. M., Baillargeon, R., Song, H. J. & Leslie, A. M. (2010). Attributing false beliefs about non-obvious properties at 18 months. Cognitive Psychology, 61, 366–395.
Scott, R. M., He, Z., Baillargeon, R. & Cummins, D. (2012). False-belief understanding in 2.5-year-olds: Evidence from two novel verbal spontaneous-response tasks. Developmental Science, 15, 181–193.
Scott, R. M., Richman, J. C. & Baillargeon, R. (2015). Infants understand deceptive intentions to implant false beliefs about identity: New evidence for early mentalistic reasoning. Cognitive Psychology, 82, 32–56.
Scott, R. M. & Roby, E. (2015). Processing demands impact 3-year-olds' performance in a spontaneous-response task: New evidence for the processing-load account of early false-belief understanding. PLoS ONE, 10, e0142405.
Scott, R. M. & Setoh, P. (2012, June). Nature of response practice affects 2.5-year-olds' performance in elicited-response false-belief tasks. Poster presented at the 18th International Conference on Infant Studies, Minneapolis, MN.
Senju, A., Southgate, V., Snape, C., Leonard, M. & Csibra, G. (2011). Do 18-month-olds really attribute mental states to others? A critical test. Psychological Science, 22, 878–880.
Setoh, P., Scott, R. M. & Baillargeon, R. (2011, March). False-belief reasoning in 2.5-year-olds: Evidence from an elicited-response low-inhibition task. Paper presented at the biennial meeting of the Society for Research in Child Development, Montreal, Canada.
Shwe, H. I. & Markman, E. M. (1997). Young children's appreciation of the mental impact of their communicative signals. Developmental Psychology, 33, 630–636.
Siegal, M. & Beattie, K. (1991). Where to look first for children's knowledge of false beliefs. Cognition, 38, 1–12.
Slaughter, V., Dennis, M. J. & Pritchard, M. (2002). Theory of mind and peer acceptance in preschool children. British Journal of Developmental Psychology, 20, 545–564.
Sommerville, J. A., Woodward, A. L. & Needham, A. (2005). Action experience alters 3-month-old infants' perception of others' actions. Cognition, 96, B1–B11.
Song, H. J. & Baillargeon, R. (2008). Infants' reasoning about others' false perceptions. Developmental Psychology, 44, 1789–1795.
Song, H. J., Onishi, K. H., Baillargeon, R. & Fisher, C. (2008). Can an agent's false belief be corrected by an appropriate communication? Psychological reasoning in 18-month-old infants. Cognition, 109, 295–315.
Southgate, V. & Begus, K. (2013). Motor activation during the prediction of nonexecutable actions in infants. Psychological Science, 24, 828–835.
Southgate, V., Chevallier, C. & Csibra, G. (2010). Seventeen-month-olds appeal to false beliefs to interpret others' referential communication. Developmental Science, 13, 907–912.
Southgate, V., Johnson, M. H. & Csibra, G. (2008). Infants attribute goals even to biomechanically impossible actions. Cognition, 107, 1059–1069.
Southgate, V., Senju, A. & Csibra, G. (2007). Action anticipation through attribution of false belief by 2-year-olds. Psychological Science, 18, 587–592.
Southgate, V. & Vernetti, A. (2014). Belief-based action prediction in preverbal infants. Cognition, 130, 1–10.
Spaepen, E. & Spelke, E. (2007). Will any doll do? 12-month-olds' reasoning about goal objects. Cognitive Psychology, 54, 133–154.
Spelke, E. S. & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10, 89–96.
Sperber, D. & Wilson, D. (1995). Relevance: Communication and Cognition. Oxford, UK: Blackwell Publishers.
Stadler, W., Ott, D. V., Springer, A., Schubotz, R. I., Schütz-Bosbach, S. & Prinz, W. (2012). Repetitive TMS suggests a role of the human dorsal premotor cortex in action prediction. Frontiers in Human Neuroscience, 6, 20.
Surian, L., Caldi, S. & Sperber, D. (2007). Attribution of beliefs by 13-month-old infants. Psychological Science, 18, 580–586.
Surian, L. & Geraci, A. (2012). Where will the triangle look for it? Attributing false beliefs to a geometric shape at 17 months. British Journal of Developmental Psychology, 30, 30–44.
Swettenham, J., Baron-Cohen, S., Charman, T., Cox, A., Baird, G., Drew, A., . . . Wheelwright, S. (1998). The frequency and distribution of spontaneous attention shifts between social and nonsocial stimuli in autistic, typically developing, and nonautistic developmentally delayed infants. Journal of Child Psychology and Psychiatry, 39, 747–753.
Symons, D. K., Peterson, C. C., Slaughter, V., Roche, J. & Doyle, E. (2005). Theory of mind and mental state discourse during book reading and story-telling tasks. British Journal of Developmental Psychology, 23, 81–102.
Tager-Flusberg, H. (2000). Language and understanding minds: Connections in autism. In S. Baron-Cohen, H. Tager-Flusberg and D. J. Cohen (Eds.), Understanding Other Minds: Perspectives from Autism and Developmental Cognitive Neuroscience, 2nd edition (pp. 124–149). Oxford: Oxford University Press.
Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632–1634.
Taumoepeau, M. & Reese, E. (2013). Maternal reminiscing, elaborative talk, and children's theory of mind: An intervention study. First Language, 33, 388–410.
Taumoepeau, M., Reese, E. & Gupta, E. (2012). Parent-child conversations about mental states and toddlers' social understanding: Evidence from a Pacific Island context. Poster presented at the 18th International Conference on Infant Studies, Minneapolis, MN.
Thompson, J. R. (2014). Signature limits in mindreading systems. Cognitive Science, 38, 1432–1455.
Träuble, B., Marinović, V. & Pauen, S. (2010). Early theory of mind competencies: Do infants understand others' beliefs? Infancy, 15, 434–444.
Vinden, P. G. (2002). Understanding minds and evidence for belief: A study of Mofu children in Cameroon. International Journal of Behavioral Development, 26, 445–452.
Wang, B., Hadi, N. S. A. & Low, J. (2015). Limits on efficient human mindreading: Convergence across Chinese adults and Semai children. British Journal of Psychology, 106, 724–740.
Wellman, H. M. (2010). Developing a theory of mind. In U. Goswami (Ed.), The Blackwell Handbook of Cognitive Development, 2nd edition (pp. 258–284). Oxford, UK: Blackwell.
———. (2014). Making Minds: How Theory of Mind Develops. Oxford, UK: Oxford University Press.
Wellman, H. M., Cross, D. & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72, 655–684.
Woodward, A. L. (1998). Infants selectively encode the goal object of an actor's reach. Cognition, 69, 1–34.
———. (2003). Infants' developing understanding of the link between looker and object. Developmental Science, 6, 297–311.
Woodward, A. L. & Guajardo, J. J. (2002). Infants' understanding of the point gesture as an object-directed action. Cognitive Development, 17, 1061–1084.
Yazdi, A. A., German, T. P., Defeyter, M. A. & Siegal, M. (2006). Competence and performance in belief-desire reasoning across two cultures: The truth, the whole truth and nothing but the truth about false belief? Cognition, 100, 343–368.
Yott, J. & Poulin-Dubois, D. (2012). Breaking the rules: Do infants have a true understanding of false belief? British Journal of Developmental Psychology, 30, 156–171.
10 CROSS-CULTURAL CONSIDERATIONS IN SOCIAL COGNITION Jane Suilin Lavelle
1. Introduction How do we understand other people’s behaviours? The standard answer, according to our best philosophy of mind textbooks and social cognition papers, is that we understand another’s behaviour by attributing beliefs and desires to them, and this practice of attributing beliefs and desires to others in order to explain and predict their behaviours is known as ‘commonsense’ or ‘folk’ psychology. These epithets were intended to capture the humdrum nature of such explanations: one need not be a professional psychologist to apply them; any person, without any formal training, can, and indeed does, create explanations of their own behaviours and those of others using the belief-desire framework. It is, as Fodor puts it, the daily psychological strategy deployed by the ‘Man on the Clapham Omnibus’ (1985/1993, p. 272). As readers of this volume will no doubt appreciate, folk psychology has dominated the social cognition research programme. Few have questioned the end point,1 namely, that everyone acquires the belief-desire framework; efforts have instead focussed on how that framework is acquired. Nowhere is this more apparent than in the huge literature examining young children’s ability to attribute false beliefs to others. As Daniel Dennett (1978) and Gilbert Harman (1978) famously observed, understanding that another person’s beliefs may not match with how the world actually is, is a hallmark of a mature folk psychology. 
We are all familiar with what followed: Wimmer and Perner's seminal finding that, prior to their fourth birthday, children systematically fail to attribute false beliefs to others (1983); Baron-Cohen and colleagues' work demonstrating that children on the autistic spectrum do not develop this ability until their sixth or seventh birthday, and sometimes much later (Baron-Cohen, Leslie, & Frith, 1985); and Onishi and Baillargeon's finding that 15-month-old infants appear to pass non-verbal versions of the false belief task (2005; see also Baillargeon et al. 2010). These findings have been reliably replicated over countless studies, resulting in an increasingly detailed account of a child's steps towards a mature folk psychology. The vast majority of these studies use participants from, to use Henrich and colleagues' acronym, WEIRD populations: groups that are Western, Educated, Industrialised, Rich and Democratic (Henrich, Heine, & Norenzayan, 2010). In fact, Henrich and colleagues report that an analysis of the top psychology journals from 2003–2007 revealed that 96% of participants
came from industrialised Western societies: in sum, that '96% of psychological samples come from countries with only 12% of the world's population' (ibid., p. 63). In other words, the Man on the Clapham Omnibus (and his children) is at best representative of 12% of the world's population. This naturally leads to the following questions: When we study the child's ability to attribute beliefs and desires to others, are we studying the development of a cognitive capacity peculiar to a small subset of the world's population? And do we have reason to believe that the considered end point – the ability to successfully explain and predict other people's behaviours by appeal to their psychological states – exists in non-WEIRD cultures? These questions need not be worrisome in themselves, provided authors are clear that their participants come from a subset of the world's population whose strategy for understanding others may be unique to that group. What is worrisome is a tacit assumption in the social cognition literature that experiments conducted with children from WEIRD populations have the potential to yield insights into a human-wide ability and that the belief-desire framework is a universal strategy for understanding others. Fodor is a paradigm example of the former claim, writing, 'There is, so far, no human group that doesn't explain behaviour by imputing beliefs and desires to the behaver. (And if an anthropologist claimed to have found such a group, I wouldn't believe him)' (1989, p. 132). This chapter presents some data, some of which comes from anthropologists, that might be taken to challenge Fodor's claim (sections two and five). But it will also present work that appears to demonstrate a significant degree of congruence in children's performance on various social cognition tasks (sections three and four).
The second part of the chapter explores how these findings affect two overlapping but distinct debates in cognitive science: the very large debate concerning Nativism and Empiricism; and the on-going disagreement about how to explain the difference between infants’ performance on implicit response social cognition tasks, and pre-schoolers’ failure in comparable tasks that require explicit responses. The overall moral will be that while cross-cultural data have undoubtedly contributed to more rigorous and detailed accounts of social cognition, they cannot in and of themselves arbitrate between the main contenders. However, the role these data play in forcing each position to become more specific and careful continues to be invaluable in advancing our understanding of the field.
2. Why there might be differences
There is huge variation in human culture: from what we eat and where we live, to moral and political systems, there are indefinitely many dimensions along which we vary. There is no obvious reason why our strategies for understanding and explaining other people's behaviour might not also vary across populations. Two broad examples serve to illustrate this point: more specific cases will follow later in the chapter. First, there is the well-documented difference between 'holistic' and 'analytic' systems of thought, demonstrated across East Asian (e.g. Chinese, Japanese, Korean) and European/American2 populations, respectively. Holistic cultures prioritise group harmony, resulting in a collective society where the flourishing of the group is considered more important than that of the individual.3 As a result, there is a strong emphasis on roles and hierarchy within society and family, which promotes group cohesion. By contrast, analytic cultures value the needs of the individual over group harmony, resulting in more frank discussions and a higher tolerance of disagreement between individuals. Michael Morris and Kaiping Peng (1994) have argued that this is reflected in the explanatory frameworks preferred by each group when it comes
to explaining another's behaviour, with collectivist groups preferring to explain other people's behaviours by reference to situational factors, e.g. that person's role in society, or how they relate to other people in that situation; and more analytic groups with a strong emphasis on the individual preferring explanations which reference the actor's character traits or inner psychological states. They support this claim with a study contrasting Chinese-language and American newspapers' accounts of two mass murders (ibid.). The first was a Chinese graduate student who had lost an award competition and subsequently failed to get an academic posting. He shot his advisor, the person handling the award process, several bystanders and then himself. The second was an Irish-American postal worker who lost his job and was unable to find employment. He shot his supervisor, the person handling his appeal, several bystanders and then himself. A comparison of different newspaper reports revealed that American papers referenced the psychological state of the killer in each case, e.g. 'a very bad temper', 'a darkly disturbed man who drove himself to success and destruction', 'mentally unstable' (ibid., p. 961). Chinese newspapers, by contrast, focussed on the relationships between the victims and the killer, e.g. 'did not get along with his advisor', and the status pressures of being a top Chinese student: 'Lu was a victim of the "Top Students" Education Policy' (ibid.). These differences in reporting were said to illustrate different priorities in East Asians' and Westerners' explanatory accounts of behaviour, with Western explanations highlighting inner psychological states as causes and East Asian accounts focussing on external, situational factors.
This is not to say that East Asians do not ever use or understand explanations that reference inner psychological states, but rather that this mode of explanation is not the default, which contrasts against Western cultures. A second point of comparison comes with different groups' willingness to talk about psychological states. For example, many Pacific Island societies maintain that 'it is impossible, or at least extremely difficult to know what other people think or feel' (Robbins & Rumsey, 2008, p. 407; see also Lillard 1998, Barrett et al. 2013a, b). As a consequence, commenting on someone else's thoughts is taboo, as one should not comment on what one cannot know. Gossip speculating on the contents of another's mind results in informal social punishment, e.g. ostracism. Other groups have a significantly smaller mental state vocabulary than Western cultures: for example, Angeline Lillard (1998, p. 13) cites anthropologists who claim that the Chewong in Malaysia have just five mental state terms in their vocabulary ('want', 'want very much', 'know', 'forget', 'miss/remember'). This linguistic feature is taken to reflect a broader cultural attitude to psychological states, namely, that they take less precedence as explanations or justifications of behaviours than other factors (e.g. strict societal rules). In short, practices concerning reasoning about mental states, and the precedence they are given in everyday explanations, vary considerably across human cultures. What exactly does this evidence show with regard to universal cognitive structures? A bias towards non-psychological explanations does not mean that psychological explanations are beyond the conceptual capacities of the populations in question. But it might suggest that mental state concepts are less central to their understanding of others, and in doing so challenges the tacitly accepted claim that social cognition consists primarily in the ability to attribute psychological states to others.
3. Synchrony in the development of false belief understanding
What kinds of factors might we expect to affect a group's everyday understanding of psychological states? Three have already been mentioned: the precedence given to psychological explanations, societal attitudes to commenting on psychological states, and linguistic resources available for describing psychological states. These factors are clearly related: groups which
consider it taboo to talk about psychological states must appeal to different types of explanation for behaviour; and a diminished societal emphasis on psychological explanations might reasonably correlate with a more minimal vocabulary for dealing with such explanations, as appears to be the case with the Chewong mentioned above. This section examines how such variation in social attitudes to psychological states might affect how children learn and develop psychological state concepts. Clark Barrett and colleagues offer some insight into this question in their 2013 work, which describes a series of implicit response false-belief trials with children from three non-Western cultures: Shuar (Ecuador), Salar (China) and Yasawan (Fiji). These groups were chosen in part because of their quite different attitudes to children, and particularly the exposure children have to talk about mental states. In North American/Western European cultures, children are usually cared for by an adult, and direct speech between adults and babies is commonplace. In particular, it is not unusual for young children to be asked about their own mental states or for caregivers to talk aloud about their mental lives with the child, e.g. 'Mummy wants a coffee, what do you want?' This contrasts with the groups in Barrett and colleagues' study, where child-directed speech is less common, and children's opinions are rarely sought. For example, the authors write of the Yasawan:

Yasawan children are typically regarded as the lowest ranking members of Yasawan society. Not only are children regarded as low ranking, parents also report that children do not 'understand' language and cannot 'think' or 'feel' pain or pleasure until well into their second year of life. [. . .] Yasawan parents' beliefs are also reflected in their behaviour, as they engage in very little face-to-face dialogue with infants and young children. [. . .]
It is also uncommon to have discussions about the feelings or thoughts of a child.
(2013b, p. 10)

Barrett and colleagues administered a variety of implicit response false belief tests with children in each of these cultures, using experimenters and artefacts which were familiar to the children. In each study an implicit measure was used, e.g. how long a child looked at a particular event, or where the child looked first. For instance, in one verbal, anticipatory-looking study, children were told a story, each sentence of which was illustrated with two pictures.4 The methodology of this study was based on the established phenomenon that children and adults prefer to look at pictures which match sentences they hear (e.g. Scott et al., Chapter 9 this volume; Scott et al., 2012; Tanenhaus et al., 1995).5 For example, the Shuar experimenter reads 'Noemi has an orange. See? Noemi has an orange' while two pictures, one of Noemi holding an orange, and another of Jacqueline holding a corncob, are displayed. Children reliably looked longer at the picture of Noemi holding an orange, which matches the sentence they heard. The story follows a typical false belief setup, each stage of which is illustrated with two pictures, one of which is congruent with the sentence read and another which is not. Noemi hides her orange then leaves the scene; Jacqueline moves the orange to a new location; Noemi returns to retrieve her orange. In this final, critical stage, children hear 'Noemi wakes up and is hungry. She looks for her orange', while being shown two pictures: one of Noemi looking in the container where she left her orange and another of her looking in the container that the orange has been moved to. North American toddlers (M = 31.6 months) looked reliably longer at the picture showing the protagonist looking where she left her object, that is, they looked longer at the picture which matches the sentence they have heard.
Jane Suilin Lavelle

This finding was replicated in all three field sites, with children aged between approximately 25.5 and 52 months.6 Similar results were reported for the other tasks: in each case the responses of infants and children from Salar, Yasawan and Shuar communities matched those of American infants and children on the same tasks. These data suggest that performance in false belief tasks that use implicit measures is the same across infants from diverse cultures. Infants and children raised in social environments with quite different attitudes to psychological states and to the status of children, in contrast to European-American cultures, nevertheless appear to perform in just the same way as their European-American peers in these social cognition tasks. Further discussion of these results will follow in section six.
4. Differences in the development of false belief understanding

When it comes to explicit, verbal false belief tasks, the cross-cultural story is significantly more complicated, and lively debates persist as to whether the data support synchrony in the development of false belief understanding or cultural differences. Tara Callaghan and colleagues (2005) argue for synchrony. These researchers conducted an interactive false belief paradigm with children in Peru, Canada, India, Thailand and Samoa. Here, the false belief task was set up as a game, where the child helped move the object from one location to another in the protagonist’s absence. Children were then asked to point to the location the protagonist would go to when she returned. Children’s performance was comparable across the groups, with few three-year-olds passing the task, 45% of four-year-olds passing, and 85% of five-year-olds passing. As in European-American cultures, age was a clear predictor of success in this particular false belief task. However, attempts to replicate these data have met with varied success. Mayer and Träuble (2013) tested Samoan children, aged 3–14 years, on a false belief task nearly identical to that used by Callaghan and colleagues, and found that it wasn’t until children were 8 years old that the majority (32 out of 58) were able to succeed (p. 25). Of the five-year-olds tested, just 11 out of 35 children passed. By contrast, Callaghan and colleagues reported that 13 out of 18 Samoan five-year-olds were able to pass the task (2005, p. 381). Like Callaghan and colleagues, Mayer and Träuble concluded that age was a significant predictor of children’s performance in false belief tasks; however, unlike Callaghan and colleagues, they also believe that the Pacific Island cultural reticence to discuss mental states affects children’s performance in false belief tasks, by delaying their ability to pass them relative to European/American children.
Passing a false belief task which involves pointing to or articulating where a character will look for their hidden toy is only one of many measures of understanding another’s mental states. In recognition of this, Henry Wellman and David Liu (Wellman & Liu, 2004; Wellman, Fang, Liu, Zhu, & Liu, 2006) developed a ‘theory of mind’ scale, describing a series of mental state attribution tasks in the order in which children are able to pass them. The pattern follows a Guttman scale, meaning that children who can pass the later tests can also pass all the earlier ones. Beginning with the task that the youngest children are able to pass, and ending with the most difficult, the scale is as follows (Wellman et al., 2006, p. 1075):

a) Diverse desires (people can have different desires for the same thing).
b) Diverse beliefs (people can have different beliefs about the same situation).
c) Knowledge-ignorance (something can be true, but someone might not know that).
d) False belief (something can be true, but someone might believe something different).
e) Hidden emotions (someone can feel one way, but display a different emotion).
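The cumulative structure that defines a Guttman scale can be stated precisely. The sketch below is my own illustration, not from the chapter: the function name and the pass/fail profiles are invented, and it simply checks whether a child’s responses over the five ordered tasks fit the pattern (every passed task preceded only by passed tasks).

```python
# Minimal illustration of Guttman-scale consistency over the five
# ordered theory-of-mind tasks. A profile is scale-consistent when
# no failed task is followed by a passed one.

TASKS = ["diverse desires", "diverse beliefs",
         "knowledge-ignorance", "false belief", "hidden emotions"]

def guttman_consistent(passes):
    """True if every passed task comes before every failed task."""
    seen_fail = False
    for p in passes:
        if seen_fail and p:
            return False   # a later task passed after an earlier failure
        if not p:
            seen_fail = True
    return True

# A typical four-year-old profile: earlier tasks passed, later ones not.
print(guttman_consistent([True, True, True, False, False]))  # True
# Passing false belief while failing diverse beliefs violates the scale.
print(guttman_consistent([True, False, True, True, False]))  # False
```

The cross-cultural finding discussed below (Chinese children passing knowledge-ignorance before diverse beliefs) amounts to a reordering of the scale’s items, not a violation of this cumulative structure.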
Cross-cultural considerations
Children in Australia, the United States and Germany follow this developmental pattern in their understanding of others’ mental states, beginning with grasping diverse desires at around 2 years 11 months, and ending with being able to attribute hidden emotions to others sometime around their fifth birthdays (Wellman, 2014, p. 95). However, one does observe differences in the order of the scale across cultures. For example, Chinese pre-schoolers’ success on the ‘knowledge-ignorance’ task precedes success on the ‘diverse beliefs’ task (Wellman et al., 2006). Wellman attributes this to the greater emphasis placed on knowledge acquisition in Chinese cultures, writing that while both Chinese and American parents talk to their children about other people’s mental states, ‘Chinese parents comment predominantly on knowing (Tardif & Wellman, 2000), whereas U.S. parents comment more on thinking (Bartsch & Wellman, 1995)’ (ibid., p. 99). Iranian children also manifest understanding of ‘knowledge-ignorance’ prior to grasping ‘diverse beliefs’, and in Iranian culture there is a similarly strong emphasis on knowledge acquisition (Shahaeian et al., 2011). Wellman’s use of Iranian, Chinese and Australian/European/American pre-schoolers serves to demonstrate a particular strength of three-culture comparisons. As is discussed in Norenzayan and Heine (2005), it is not always clear when comparing data from two cultures, A and B, what the relevant causal factor may be.
For example, in contrasting the Chinese and European/American samples, the change in the developmental pattern may be due to the majority of Chinese children being raised as only-children. There is robust evidence that children from larger families pass false belief tasks earlier than those from smaller families (Perner, Ruffman, & Leekam, 1994), and one might therefore argue that it is family size that causes Chinese children to pass the knowledge-ignorance task prior to the diverse beliefs task, and not a cultural emphasis on knowledge acquisition. However, by introducing the Iranian group, Wellman and colleagues bring a third culture into the mix, one that shares a feature with the Chinese group in having a strong cultural emphasis on knowledge acquisition, but unlike Chinese children (and more like the European/American group) Iranian children are usually raised with siblings. The discovery that Iranian children also succeed on knowledge-ignorance before diverse belief tasks strengthens the claim that it is the cultural emphasis on knowledge acquisition that is the main contributing factor, and not the size of the family.
5. Micro- or macro-cultural divergence?

This array of data on false belief understanding in children across the world is a step towards broadening the participant base for this research. But how should we begin to make sense of these data, and how do they affect our accounts of mindreading? This section highlights some of the messier issues raised by these findings.

Nisbett’s work describing the differences in thought between ‘collectivist’ and ‘individualist’ cultures has been highly influential in the cross-cultural mindreading debate (Nisbett et al., 2001; Nisbett & Miyamoto, 2005; Nisbett, 2010). These categorisations apply at a macro-level, describing the biases of entire populations. However, one may worry that by looking at differences between large-scale populations, more significant local factors may be overlooked. Just within the European/American groups it has been documented that children with siblings pass the classic ‘Maxi’ false belief test earlier than only-children (see above); that children from families with low socioeconomic status (SES) are slower to develop mastery of false belief tasks (Holmes, Black, & Miller, 1996); and that the frequency of mental state talk within the family affects false belief performance (Brown, Donelan-McCall, & Dunn, 1996). Although these children are raised in a broadly ‘individualistic’ culture with an emphasis on mental states, they
nevertheless display developmental differences in their understanding of mental states due to factors in their immediate family surroundings. One finds analogous results in non-Western cultures too. Mele Taumoepeau (2015) worked with families from the Pacific Islands living in New Zealand, examining the relation between the strength of caregivers’ ethnic identity and children’s use of mental state terms. While all the caregivers increased their use of mental state terms in their conversations with their toddlers as the children aged, those caregivers who strongly identified with Island culture were slower to do so than those who identified less with Island culture. This in turn predicted children’s performance on knowledge attribution and emotion understanding tasks, with children in families with strong Island identities passing these tasks slightly later. A broad focus on Pacific Islanders’ general tendency to use mental state terms less than other groups is in danger of blurring more subtle, but nevertheless important, differences between families, and the effects these have on children’s understanding of mental states. Similarly, a study conducted by Ike Anggraika Kuntoro and colleagues (2013) comparing children from a low SES background in Indonesia with middle-class Indonesian children found that the low SES children were slower to understand another’s knowledge access and emotional situation compared to the middle-class children (their understanding of another’s false belief, however, developed at the same time). While, broadly speaking, Indonesian parenting practices share the East Asian emphasis on knowledge acquisition, one sees local differences in children’s grasp of another’s access to knowledge dependent on SES-related factors (level of education of caregivers; amount of time spent with adults, etc.). Of course this is not to imply that macro- and micro-cultural influences are distinct.
Caregivers from Pacific Island cultures may well use fewer mental state terms in their interactions with children in contrast to other groups, and this is compatible with the finding that within the population there are differences in children’s development of mental state understanding dependent on the strength of their family’s ethnic identity. The point is, simply, that we should be careful not to be distracted by exotic broad distinctions between groups at the risk of missing more local, and possibly more salient, factors.
6. Universality, nativism and empiricism

The second part of this chapter explores some of the ways in which the data presented above might affect debates within cognitive science more generally, and social cognition more specifically. This section examines how the data affect the perennial tension between Nativists and Empiricists; the next looks more closely at how they affect on-going arguments about the cognitive architecture underlying social cognition. The terms ‘Nativism’ and ‘Empiricism’ are well-established in philosophy and psychology, with both Nativists and Empiricists agreeing that a trait is innate if it is not learned. There is, however, a growing movement of philosophers questioning the usefulness of the distinction (Griffiths, 2002; Oyama, 2000; see Samuels, 2007 in response). For example, Matteo Mameli and Patrick Bateson (2011) suggest that researchers use the concept ‘innate’ in a way that conflates several distinct properties, including (but not limited to): not learned; genetically coded; robustly developing; heritable. This is problematic, as there are tensions between these properties. To take one of Bateson and Mameli’s examples, one might find that in a species of songbird the ability to produce a specific song correlates with the presence of a particular gene. But there is also learning involved in acquiring the song, and only those birds with the specific gene are able to develop the learning processes required to acquire the song. It therefore seems as though the acquisition of the song is both genetically coded (meeting one of the criteria for innateness) and learned (contravening a broadly agreed criterion for a trait being innate).
It may be that the properties associated with the concept ‘innate’ have an underlying common feature: whether this is so is an empirical question. But it might just as easily be the case that this congeries of properties shares no common thread, in which case ‘innate’ ends up being what Mameli and Bateson refer to as a ‘clutter’ concept. A more careful discussion of whether ‘innate’ is a clutter concept is beyond the scope of this chapter. The lesson to draw from this discussion is that it is not always enough to ask whether a set of data points to a trait being ‘innate’; one must specify exactly which property associated with the innateness concept one is referring to, and judge the evidence on this basis.

We are now in a position to ascertain how the data presented in this chapter impact on the debate between Nativists and Empiricists with respect to mental state attribution. First, some psychological abilities appear to be universal across cultures, e.g. success in implicit false belief tasks, and the order in which one passes the various tasks on Wellman and Liu’s ‘theory of mind’ scale (with the exception of the diverse beliefs/knowledge attribution tasks, on which more presently). Which of the properties associated with innateness does this evidence? It is tempting to assume that if a cognitive feature appears in all human populations, and particularly in all human infants, then it is highly unlikely to be learned. There are two strands to this way of thinking:

1 What an infant learns will depend strongly on what she is exposed to in her immediate surroundings. Infants across the world have quite different environments; it is therefore infeasible to think that they would all learn the same concepts by the same age, given the quite different experiences shaping their learning.
2 If a relatively complex concept appears to be present in young infants, then Poverty of the Stimulus arguments urge us towards the view that it cannot be learned.

Those of an Empiricist leaning would challenge both strands of this argument. Their first point is that it is possible that a particular cognitive trait is found across human cultures because there are experiences common to those groups (Norenzayan and Heine, 2005, p. 778). Henry Wellman, for instance, warns against assuming that uniformity in infants’ performances on implicit false belief tasks is best explained by claiming that the mindreading concepts which facilitate this performance are innate (2014). Instead, he suggests that Bayesian learning mechanisms allow infants to learn low-level mindreading concepts from their experiences of the world. When faced with the counter that infants have very different experiences depending on their cultural environment, Empiricists can respond by suggesting that the differences are not as great as we think. People the world over look for objects where they believe them to be, have broadly similar perceptual limitations, and the things they reach for are nearly always things that they want. Infants do not face such a varied behavioural data-set as one may initially suppose. Wellman cites research with deaf infants to support this view. Meristo and colleagues (2012) conducted an anticipatory looking study of 17–26-month-old infants in Sweden, contrasting the performance of deaf infants from hearing families with that of hearing infants (in hearing families) on implicit non-verbal false belief tasks. They found that ‘all 10 of the hearing infants – but none of the 10 deaf infants – looked first at the correct location in the FB [false belief] condition’ (p. 636), but nearly all the infants in both groups looked at the correct location in the true belief condition.
It is well documented that deaf infants in hearing families face significant challenges in their access to communication and linguistic input (see Schick et al., 2007 for a review), and Wellman implies that these data suggest that one’s immersion in language can affect the acquisition of low-level mindreading concepts. This speaks to an
Empiricist mindreading account, for if the infant’s low-level mindreading concepts were innate one would not expect experience to play such a critical role in their manifestation. In response, Nativists would point out that ‘innate’ does not mean ‘blind to environmental factors’. Experience can and does play a critical role in Nativist views of mindreading, and Nativism would be incoherent if it did not accommodate this. For example, Peter Carruthers, known predominantly for his Nativist stance towards mindreading, maintains that there is a critical role for learning in his account of cognition. Specifically, he claims that we have cognitive modules, which are domain-specific ‘innate learning mechanisms’ (2011, p. 228), each of which evolved to learn about particular aspects of the world, e.g. number, minds, object permanence, spatial location, etc. Because these modules are learning systems, what they learn will of course be affected by the agent’s environment. A deaf infant in a hearing family lacks the environmental triggers of overhearing conversation and the aural cues that draw a hearing infant’s attention to daily mindreading events, and as a consequence does not acquire mindreading at the same time as her hearing peers.

The second strand of the Nativist’s argument – namely, that concerns growing from Poverty of the Stimulus arguments should push us towards Nativism – is challenged by a growing corpus of research based on computational models of Bayesian hierarchical learning (e.g. Goodman et al., 2011), which strongly suggests that theories, even theories of quite abstract concepts like causality, can be learned after far fewer trials than had originally been thought. While there are indefinitely many hypotheses available to explain the data, learners nevertheless converge on the same one after relatively few trials.
A Bayesian explanation for this is that there are priors which constrain the space of hypotheses available to learners, as Gopnik and Wellman explain:

While many structures may be possible, some of the structures are going to be more likely than others. Bayesian methods give you a way of determining the probability of the possibilities. They tell you whether some hypothesis is more likely than others given the evidence.
(2012, p. 1088)

There is space for disagreement within Empiricist views as to which priors are innate, in the sense of being robustly developing and not learned, and which are learned. Wellman (2014), for instance, maintains that there are likely non-learned domain-specific priors, e.g. prior probabilities assigned to hypotheses about the causal power of mental states, of physical forces, of addition and subtraction, etc. Mindreading priors shape the space of hypotheses for picking a particular type of mental state as a cause of a token behaviour. As the child develops, so too does the precision of her mindreading hypotheses, building from the very general (e.g. reaching is caused by a goal) to the more specific (e.g. reaching for that cup means the agent is thirsty).

Does the cross-cultural data documented in this chapter advance our understanding of the debate between Nativists and Empiricists? Both sides maintain that there is a critical role for learning in their accounts, and appeal to this to explain differences in children’s and adults’ social cognition across cultures. That both sides agree on this is indicative of a broader trend in cognitive science, namely, that the divide between Nativism and Empiricism is becoming increasingly blurred.
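The role Gopnik and Wellman assign to priors can be illustrated with a toy Bayesian update. Everything in this sketch is invented for illustration (the two hypothesis names, the prior and likelihood values); it simply shows how priors weight competing hypotheses about the cause of an observed behaviour.

```python
# Toy Bayesian update over two candidate explanations of a reaching
# event: the agent reaches because of a goal, or moves at random.
# All names and numbers are illustrative, not from the chapter.

def posterior(priors, likelihoods):
    """Bayes' rule: P(h|d) proportional to P(d|h) * P(h), renormalised."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

priors = {"goal-directed": 0.7, "random movement": 0.3}       # prior weights
likelihoods = {"goal-directed": 0.9, "random movement": 0.2}  # P(data | h)

post = posterior(priors, likelihoods)
print(post)  # the goal hypothesis dominates after one observation
```

The point of the sketch is structural: because the prior already favours some hypotheses, a learner need not entertain the indefinitely many alternatives equally, which is how Bayesian accounts answer Poverty of the Stimulus worries.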
Peter Carruthers writes of Gopnik’s Constructivist thesis that

In light of her most recent position, however, it might be possible for Gopnik to claim that there are multiple statistical-learning mechanisms in the human mind capable of extracting underlying causal structure (one for mindreading, one for physical forces, one for biological kinds, and so forth). [. . .] Notice, however, that the upshot would
be indistinguishable from a form of modular account. For it would postulate an innate specialized learning mechanism designed for mindreading.
(2011, p. 232)

While one may tweak the rhetoric to suit one’s own dialectical preferences (Nativists welcoming Empiricists over to their side, or vice versa), the overall moral is the same: the division between the two positions is a messy one, and likely to manifest itself in details over how learning progresses. If this is the case, then cross-cultural data will be but one small contribution to the debate, with more of the work stemming from computer models of learning and from testing the different predictions yielded by alternative accounts of learning.
7. The impact on theories of mind

A good account of social cognition must be able to explain both the similarities and the differences that we see in the ability to attribute mental states to others. As should now be clear, the phenomenon to be explained is complicated, and acknowledging this forms the first step towards a clearer understanding of it. This section focuses on one specific aspect of social cognition, namely, our ability to attribute psychological states to others, known as ‘mindreading’. A rough and ready summary of the data presented above brings out the following explanatory desiderata for a theory of mindreading:

IMPLICIT RESPONSE TASKS
Why infants’ responses in implicit response false belief trials seem to be the same across cultures.

DIFFERENCES IN THE THEORY OF MIND SCALE
Why we see some differences in the theory of mind scale (see section 4) across cultures, e.g. Chinese children pass ‘knowledge-ignorance’ tasks before the ‘diverse beliefs’ task, whereas the converse obtains for North American children.

SIMILARITIES IN THE THEORY OF MIND SCALE
Why some features of the ‘Scaling’ task can change, but others appear to remain consistent (e.g. there is evidence for ‘knowledge-ignorance’ and ‘diverse beliefs’ switching, but none as yet for understanding ‘diverse beliefs’ prior to ‘diverse desires’).

ADULT MINDREADING DIFFERENCES
How there can be such variation in mature mindreading, e.g. some groups prefer non-mentalistic to mentalistic explanations of behaviour (section 2).

One question that arises from these data is whether they all involve attributing psychological states to others. In particular, there are several philosophers and psychologists, call them ‘infant behaviourists’, who deny that success in implicit response false belief tasks actually requires a grasp of false belief (Heyes, 2014; Perner & Ruffman, 2005).
This contrasts with ‘mentalistic’ approaches, which claim that the best explanation of the ‘implicit response’ data is that infants can attribute false beliefs to others (Baillargeon et al., 2010; Carruthers, 2011, 2013; Onishi & Baillargeon, 2005; Scott & Baillargeon, 2009; Scott et al., Chapter 9 this volume). A third set of views are the ‘Conceptual Change’ approaches, which argue that infants’ success in implicit response tasks is best explained by an ability to attribute psychological states to others, but that the states they attribute are not representational, and therefore do not warrant the label ‘belief’. Both the ‘Conceptual Change’ and ‘infant behaviourist’ approaches maintain that success in
explicit, verbal false belief tasks does require that the child has the concept of false belief; where they differ from mentalistic approaches is in their shared claim that infants’ performance in implicit response trials does not evidence a grasp of the concept of false belief. The rest of this piece offers some thoughts on how two of the mindreading accounts mentioned, the ‘Conceptual Change’ and ‘mentalistic’ accounts, might address the data summarised above.

Mentalistic accounts

Mentalistic accounts of mindreading hold that infants can attribute false beliefs to others. Moreover, nearly all the advocates of this approach make the further claim that this ability is ‘innately channelled’.7 This naturally leads to the question: if infants can attribute false beliefs to others, why do pre-schoolers fail on the verbal false belief task? There is disagreement within the mentalistic camp over the best explanation for this phenomenon; however, the general view seems to be that explicit tasks require considerably more in the way of cognitive processing than implicit ones, as the child is required to produce a response (by pointing, or saying what the character will do next). Failure in verbal tasks is taken to show that the cognitive demands of the task hinder the proper use of the false belief concept; it is not indicative of the absence of that concept.

Rose M. Scott and colleagues (Chapter 9 this volume) are among those who take the mentalistic approach. They explain cross-cultural differences in children’s performances on explicit false belief tasks by reference to the impact the environment has on the cognitive demands that mask false belief competence in these tasks. On their view, three factors affect children’s performance on explicit tasks: attentional and motivational factors, inferential factors, and processing factors.
The first of these, attentional and motivational factors, refers to Scott and colleagues’ claim that having the capacity to attribute false beliefs to others does not guarantee that one is motivated to use this ability in social interactions. In addition, one must also have the inclination to attend to other people before any successful false belief reasoning can take place. Scott and colleagues suggest that attentional and motivational factors could explain why children in environments where there is less talk about mental states pass explicit false belief tasks later than their counterparts in environments where conversations about mental states are more frequent. In environments where mental states are not mentioned very much, children are not inclined to pay attention to them, and this is said to explain their delayed performance on explicit false belief tasks; their performance does not indicate a delay in acquiring the concept of false belief. It’s not clear that this can explain the difference in infant and pre-school performances in experiments, though, as one needs to explain why infants are motivated to attend to the stimuli whereas pre-schoolers are not.

The second feature Scott and colleagues mention is inferential factors: having the false belief concept is distinct from the ability to infer the contents of another’s false belief in all situations, and pre-schoolers’ performance in explicit false belief tasks could be indicative of a failure to infer the protagonist’s false belief. This response faces the same problem as mentioned above, namely, why infants are able to successfully infer false beliefs in tasks that are comparable in their set-up to the explicit tasks that pre-schoolers fail.
If one can overcome this, however, a neat response to cross-cultural differences becomes available: children learn new strategies to infer other people’s mental states, and the strategies that are learned, and how quickly they are acquired, will depend on their social environment. Plenteous exposure to references to mental states should correlate with faster learning of strategies to infer them; less exposure correlates with slower learning, and possibly fewer strategies that are learned.
The final explanans for pre-school performances on explicit tasks is ‘processing factors’. This is nicely explained by Peter Carruthers (2011, 2013), who observes that the verbal nature of such tasks places a triple burden on the mindreading module (2013, p. 153):

1 Keeping track of the character’s false belief
2 Interpreting the experimenter’s communicative intention behind her question
3 Generating an action that will communicate to the experimenter a prediction of the character’s behaviour
While a child’s executive functions are relatively immature, this load is too much to bear. The mindreading system has tracked the character’s false belief, but this representation is ‘lost’ under the considerable processing strain undergone by the rest of the mindreading system. As a consequence, the child defaults to reality when indicating where the character will look; a strategy that works in the majority of social cognition cases, but not in false belief tasks. Passing these tasks comes with ‘maturational expansion of the processing resources available to the mindreading faculty’, meaning that it is able to process the triple burden placed upon it (ibid.). Alternatively, or perhaps additionally, ‘increasing efficiency in the interactions between the thought attribution systems and executive systems’ also helps (ibid.), as the systems which generate the child’s communicative action have swifter access to the output of the mindreading system.

In order for this response to accommodate the data above, though, the mentalistic account must address the following issues. First, it needs to explain the ‘scaling’ phenomenon, that is, why giving verbal responses to some types of mindreading tasks is easier for younger children than responding to the false belief task. A first pass at an answer is that the other tasks do not involve inhibiting one’s own knowledge of reality to the same extent as the false belief task (e.g. in the diverse belief task the child does not know whether her own belief, or that of the protagonist, is true), and therefore require less in the way of executive function. But does the account have the potential to explain the differences in scaling phenomena? Carruthers claims that while ‘mindreading capacities are independent of language’ (2011, p. 253), ‘experience with language (and with communication more generally) might enhance the development of the mindreading system itself, helping to improve its efficiency’ (2013, p.
153). The precise details of the relation between the mindreading system and language comprehension and production have yet to be fully developed. However, there is enough here to sketch out the beginnings of an explanation for the differences perceived in children passing the ‘knowledge-ignorance’ and ‘diverse beliefs’ tasks on the theory of mind scale. In those groups where there is a strong emphasis on knowledge acquisition, and ignorance is stigmatised, both concepts will become more salient for children than other psychological concepts, like belief. The increased salience of these concepts means that interpreting questions about them becomes easier for the child, meaning that the cognitive burden of (2) is greatly reduced. By contrast, if beliefs are not so salient in the culture, then the cognitive burden of (2) is more significant, making the task more difficult to solve.

How do mentalistic accounts explain the differences perceived at an adult level in the types of explanation given for behaviour? The answer to this is not clear, but one interpretation is that such accounts maintain that infant and pre-school mindreading draw on conceptual resources different to those used to construct explanations for behaviour, and that the latter is therefore a quite different phenomenon. Carruthers touches on this (2006, pp. 203–210) by suggesting that children and adults acquire norms of explanation from their social group. The normative explanatory practices of a group may well be distinct from the psychological
mechanisms that facilitate successful interaction, with roots in the social history of the group rather than in the individual’s psychology. There is much more to be explored here, and I will return to this theme presently.

Mentalistic accounts appeal to the effects of the environment on processing demands, on inferring mental states, and on the attention and motivation to mindread in order to explain the differences perceived across cultures in passing explicit mindreading tasks. Critically, for this account, cross-cultural differences are not indicative of the absence of the false belief concept, but rather of the effect of the environment on manifesting an understanding of false belief. However, there is a competing explanation for the data in the form of the ‘Conceptual Change’ accounts.

Conceptual Change accounts

There are a variety of ‘Conceptual Change’ accounts of mindreading, but the one I focus on here is the ‘two systems’ account proposed by Stephen Butterfill and Ian Apperly (2009, 2013; Apperly, 2010). Two systems accounts maintain that humans have available to them two distinct mindreading strategies, one of which is metarepresentational, dubbed ‘high-level mindreading’, and another which is not, namely, ‘low-level mindreading’. Furthermore, low-level mindreading is the strategy that facilitates the majority of our social interactions, and indeed is the only strategy available to infants and non-human animals. Low-level mindreading entails attributing relational, rather than representational, states to others. Apperly and Butterfill introduce two relational states: encounterings and registrations. They also introduce the concept of a ‘field’: ‘An agent’s field at any given time is a set of objects’ (Butterfill & Apperly, 2013, p. 10). There are reliable physical and spatial constraints which determine an agent’s field, and these constraints are easily tracked by infant and animal cognitive systems using a few basic rules, e.g.
if there is an opaque barrier between the other’s eyes and an object, that object is not in the other’s field. The relation between an agent and an object in her field is captured by the ENCOUNTERING concept. An agent can only act on those objects she has encountered, so a cognitive system which has the ENCOUNTERING concept can track those objects which are in another’s field, and which the agent could potentially act on. This yields considerable manipulative and predictive power: for example, if you know that a competitor will take your food if she encounters it, and you can track what the other encounters, then you can safeguard your food by ensuring that the other does not encounter it. ‘Registrations’ are like encounterings, but they persist even when the object is no longer in an agent’s field. Apperly and Butterfill initially characterise registrations as holding between an agent, an object and a location: ‘One stands in the registering relation to an object and location if one encountered it at that location and if one has not since encountered it somewhere else’ (Apperly & Butterfill, 2009, p. 962). Registration relations therefore have correctness conditions: one can encounter and thus register an object at location A, but the object might move to B when one is not encountering it (e.g. one has one’s back turned); consequently one’s registration relation is incorrect. ‘High-level’ mindreading refers to the ability to grasp representational states like beliefs, desires, etc. It allows for significantly more flexibility than low-level mindreading, as it allows one to recognise how a particular state of affairs is represented to the other, e.g. that she sees the man over there as ‘Clark Kent’ whereas I see him as ‘Superman’. The concepts of high-level mindreading just are our folk psychological concepts, as used by other people to explain behaviour and to talk about psychological states.
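Although Butterfill and Apperly present minimal mindreading as a cognitive model rather than a program, the rule-based character of fields, encounterings, and registrations lends itself to a small illustrative sketch. The class and method names below are my own, not drawn from their papers, and the one-dimensional line-of-sight rule is a deliberate simplification of the ‘opaque barrier’ constraint mentioned above.

```python
class MinimalMindreader:
    """A toy model of low-level mindreading: it tracks where another
    agent last encountered an object, with no concept of belief."""

    def __init__(self):
        # Registration: object -> the location at which the agent last
        # encountered it (persists after the object leaves the field).
        self.registrations = {}

    def in_field(self, agent_pos, obj_pos, barriers):
        # 1-D toy rule: an object is in the agent's field unless an
        # opaque barrier lies between the agent's eyes and the object.
        lo, hi = sorted((agent_pos, obj_pos))
        return not any(lo < b < hi for b in barriers)

    def observe(self, agent_pos, obj, obj_pos, barriers):
        # Encountering: the object is currently in the agent's field,
        # so the registration is updated to its current location.
        if self.in_field(agent_pos, obj_pos, barriers):
            self.registrations[obj] = obj_pos

    def predict_search(self, obj):
        # Predict the agent acts at the registered location, even if
        # the object has since moved (a false-belief-like prediction).
        return self.registrations.get(obj)
```

Run on a standard change-of-location scenario (the agent sees a ball at one location, then the ball moves behind a barrier), the model predicts search at the registered, now-empty location, reproducing false-belief-like behaviour without representing anything as a belief.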
On this view, low-level mindreading is a human universal, and likely to ‘develop early, under relatively tight genetic and environmental constraints’ (Apperly, 2010, p. 137). There is very little
Cross-cultural considerations
variation across individuals in their low-level mindreading abilities, and this neatly accommodates the data demonstrating uniformity in infants’ responses in implicit response tests.8 The story with relation to high-level mindreading is more complicated. This is because Apperly maintains that high-level mindreading is a critical, but not necessarily dominant, ability for navigating the social world. On his view, the roles of scripts and situation-models are much under-represented in the social cognition literature. Situation-models are models one builds mapping the schemas of situations that most frequently recur in daily life, e.g. going to the shops, classrooms at school, playing with friends, etc. Children and adults recall these models when in the relevant situation, and they guide their actions and attention. The models a child builds are completely dependent on the situations she is exposed to, and how her caregiver guides her through these situations. High-level mindreading is still necessary to interpret details of the model, e.g. one’s ‘shop’ model can guide one’s attention to the check-out assistant when bagging and paying for shopping, but one needs high-level mindreading to interpret her instructions and questions. On this view, most of our daily social interactions rely on quick responses guided by low-level mindreading, and on conforming our behaviours to social scripts and norms. High-level mindreading does not underpin the majority of our social interactions. What then of the developmental gap between implicit response tasks and verbal social cognition tasks? Are situation-models required to pass the verbal false belief task but not the implicit task? The answer is no, for the same reasons rehearsed above: the implicit and explicit tasks are comparable in terms of the scenario the child watches.
However, Apperly suggests that it is highly possible that when a child is invited to comment on a situation, her high-level mindreading resources and situation-models are automatically invoked to solve the problem, even though it could easily be settled using low-level mindreading abilities.These resources are cognitively draining, and when building her model of the false belief situation she faces, the child initially is prone to egocentric biases, using her own model of the situation rather than the protagonist’s (Apperly, 2010, p. 154). She gradually overcomes these as she becomes more experienced in false belief and other social cognitive scenarios, and as her model of these situations becomes more detailed. The situation-models that a child acquires will clearly depend on the norms of her social environment. We should therefore expect to see considerable variation across individuals on any task that invokes high-level mindreading, as this involves the child’s ability to successfully model the situation and the protagonist’s perspective within it. Furthermore, the models she builds may incorporate folk psychological states to a greater or lesser extent depending on the practices of her culture. In this instance, the question ‘Is high-level mindreading a cultural universal?’ can seem misplaced. While children across the world eventually come to succeed in social cognitive tasks that require explicit responses, the extent to which they all do this because of competence with high-level mindreading concepts is questionable. Each society may have a schema or situation-model that can give an explanation or prediction of the protagonist’s behaviour, but different models may incorporate different psychological concepts, or assign more or less weight to psychological factors, resulting in significant differences across individuals’ performances on these tasks. 
Much more work on the role of situation-models in social cognition is required to fully address the questions raised in this chapter. Situation-models certainly cope easily with differences perceived in social cognitive practices, but more work needs to be done to delimit the scope of their influence. For example, do they play a role in the simple explicit response tasks documented in Wellman and Liu’s theory of mind scale, as Apperly suggests? Or are they only called into play when dealing with more complex social cognitive explanations as documented in section 2? Carruthers’ mention of social norms certainly leaves space for situation-models to play a critical role in the latter situation, but his
commitment to mentalistic accounts of mindreading would leave less of a role for such models in the former situations. Future work detailing the scope of high-level mindreading is required in order to progress on these issues.
8. Conclusions

This chapter has brought together some of the key findings in the psychological and anthropological literature concerning social cognition across the world. Collecting this data is an on-going challenge, but critical to informing a scientifically responsible account of social cognition. Whatever one’s preferred account of social cognition, it must explain why some mindreading phenomena remain robust across populations, while others manifest variation. Cross-cultural data impacts two significant debates in philosophy. The first is whether it can shed any light on the on-going debate between Nativism and Empiricism. I have argued that cross-cultural data alone cannot arbitrate this particular debate, mainly because there is a larger issue at stake, namely, what the terms of the Empiricist/Nativist debate are. It seems that both sides can accommodate cross-cultural data, and that both do so by appeal to learning. More work, therefore, needs to be done to properly define the scope of this debate before cross-cultural data can be very helpful in contributing to its resolution. The second debate discussed was between ‘Conceptual Change’ and ‘mentalistic’ accounts of social cognition. Once again, both offer explanations for the cross-cultural data. However, mentalistic accounts need to do more to justify why it is harder to successfully infer or pay attention to psychological states in explicit response tasks than implicit response tasks. They must also develop their ‘cognitive load’ response to accommodate Wellman and Liu’s theory of mind scale phenomenon. Apperly and Butterfill’s ‘two systems’ Conceptual Change account appears to accommodate cross-cultural differences and similarities more easily by appeal to a universally developing ‘first’ theory of mind system, and a high-level mindreading system that is closely tied to the culturally dependent ‘situation-models’ that children develop.
Further work is required to clarify the role of situation-models within social cognitive situations, and to work through the details of how these models interact with high-level mindreading. Is Fodor right, then, that folk psychology is a universal phenomenon? It depends how one understands folk psychology. If, in line with mentalistic accounts, you see it as congruent with the concepts infants use to understand other people’s psychological states, then you will answer in the affirmative: infants across the world ascribe representational states to others to predict their behaviour, and differences seen in children’s performances are best explained by difficulties in applying psychological concepts rather than an inability to grasp the concepts at all. By contrast, ‘Conceptual Change’ accounts could answer ‘no’ to this question. Non-folk psychological concepts like ‘registrations’ and ‘encounterings’ are universal, but high-level folk psychological concepts like ‘belief’ and ‘desire’ could look very different across individuals depending on the normative practices of one’s group. Once again, cross-cultural work is insufficient on its own to arbitrate between these positions. But it continues to play an essential role in guiding progress in these debates, and in forestalling the pitfalls that accompany the dominant focus on WEIRD populations in the psychological literature.
Notes

1 Although as Ian Apperly (2010) has observed, there are still significant lacunae in our understanding of adult mindreading abilities.
2 More specifically, North American and Northern/Western Europeans.
3 These differences and possible historical causes for them are comprehensively discussed in Richard Nisbett’s and Ara Norenzayan’s work (Nisbett, 2003; Nisbett, Peng, Choi, & Norenzayan, 2001; Choi, Nisbett, & Norenzayan, 1999).
4 This matched a procedure used earlier by Scott and colleagues (2012).
5 For more information on why longer looking time at the ‘correct’ false belief scenario is indicative of false belief understanding using this method, as opposed to looking longer at the ‘wrong’ false belief outcome, as is the case with the violation of expectation method, see Scott et al., this volume.
6 A larger age range was used to maximise the number of participants.
7 This is not a necessary commitment: one could claim that infants attribute false beliefs to others and that this concept is learned.
8 For a detailed account of how low-level mindreading explains infants’ performance in implicit response trials, see Butterfill and Apperly (2013).
Bibliography

Apperly, I. (2010). Mindreading. Psychology Press.
Apperly, I. & Butterfill, S. (2009). Do humans have two systems to track beliefs and belief-like states? Psychological Review, 116, 953–970.
Baillargeon, R., Scott, R. & He, Z. (2010). False-belief understanding in infants. Trends in Cognitive Sciences, 14, 110–118.
Baron-Cohen, S. & Leslie, A. (1985). Does the autistic child have a theory of mind? Cognition, 21, 37–46.
Barrett, H., Broesch, T., Scott, R., He, Z., Baillargeon, R., Wu, D., . . . Laurence, S. (2013a). Early false-belief understanding in traditional non-Western societies. Proceedings of the Royal Society B, 280, 1–5.
Barrett, H., Broesch, T., Scott, R., He, Z., Baillargeon, R., Wu, D., . . . Laurence, S. (2013b). Early false-belief understanding in traditional non-Western societies [Supplementary material]. Proceedings of the Royal Society B: Biological Sciences, 280, 1–29.
Bartsch, K. & Wellman, H. (1995). Children Talk About the Mind. Oxford University Press.
Brown, J., Donelan-McCall, N. & Dunn, J. (1996). Why talk about mental states? The significance of children’s conversations with friends, siblings and mothers. Child Development, 67, 836–849.
Butterfill, S. & Apperly, I. (2013). How to construct a minimal theory of mind. Mind and Language, 28, 606–637.
Callaghan, T., Rochat, P., Lillard, A., Claux, M., Odden, H., Itakura, S., . . . Singh, S. (2005). Synchrony in the onset of mental state reasoning. Psychological Science, 16, 378–384.
Carruthers, P. (2006). The Architecture of Mind. Oxford University Press.
Carruthers, P. (2011). The Opacity of Mind. Oxford University Press.
Carruthers, P. (2013). Mindreading in infancy. Mind and Language, 28, 141–172.
Choi, I., Nisbett, R. & Norenzayan, A. (1999). Causal attribution across cultures: variation and universality. Psychological Bulletin, 125, 47–63.
Dennett, D. (1978). Beliefs about beliefs. Behavioral and Brain Sciences, 1, 568–570.
Fodor, J. (1985/1993). Fodor’s guide to mental representation: The intelligent auntie’s Vade-Mecum. In A. Goldman (Ed.), Readings in Philosophy and Cognitive Science (pp. 271–296). MIT Press.
Fodor, J. (1989). Psychosemantics. MIT Press.
Goodman, N. D., Ullman, T. D. & Tenenbaum, J. B. (2011). Learning a theory of causality. Psychological Review, 118, 110–119.
Gopnik, A. & Wellman, H. M. (2012). Reconstructing constructivism: Causal models, Bayesian learning mechanisms, and the theory. Psychological Bulletin, 138, 1085–1108.
Griffiths, P. (2002). What is innateness? Monist, 85, 70–85.
Harman, G. (1978). Studying the chimpanzee’s theory of mind. Behavioral and Brain Sciences, 1, 515–526.
Henrich, J., Heine, S. & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–83.
Heyes, C. (2014). False belief in infancy: A fresh look. Developmental Science, 17, 647–659.
Holmes, H., Black, C. & Miller, S. (1996). A cross-task comparison of false belief understanding in a Head Start population. Journal of Experimental Child Psychology, 63, 263–285.
Kuntoro, I., Sarawati, L., Peterson, C. & Slaughter, V. (2013). Micro-cultural influences on theory of mind development: A comparative study of middle-class and pemulung children in Jakarta, Indonesia. International Journal of Behavioral Development, 37, 266–273.
Lillard, A. (1998). Cultural variations in theories of mind. Psychological Bulletin, 123, 3–32.
Mameli, M. & Bateson, P. (2011). An evaluation of the concept of innateness. Philosophical Transactions of the Royal Society B: Biological Sciences, 366, 436–443.
Mayer, A. & Träuble, B. (2013). Synchrony in the onset of mental state understanding across cultures? A study among children in Samoa. International Journal of Behavioral Development, 37, 21–28.
Meristo, M., Morgan, G., Geraci, A., Iozzi, L., Hjelmquist, E., Surian, L. . . . Siegal, M. (2012). Belief attribution in deaf and hearing infants. Developmental Science, 15, 633–640.
Morris, M. & Peng, K. (1994). Culture and cause: American and Chinese attributions for social and physical events. Journal of Personality and Social Psychology, 67, 949–969.
Nisbett, R. (2003). The Geography of Thought. Free Press.
Nisbett, R. & Miyamoto, Y. (2005). The influence of culture: holistic versus analytic perception. Trends in Cognitive Sciences, 9, 467–473.
Nisbett, R., Peng, K., Choi, I. & Norenzayan, A. (2001). Culture and systems of thought: holistic versus analytic cognition. Psychological Review, 108, 291–310.
Norenzayan, A. & Heine, S. J. (2005). Psychological universals: What are they and how can we know? Psychological Bulletin, 131(5), 763.
Onishi, K. & Baillargeon, R. (2005). Do 15-month-old infants understand false beliefs? Science, 308, 255–258.
Oyama, S. (2000). Evolution’s Eye: A Systems View of the Biology-Culture Divide. Duke University Press.
Perner, J. & Ruffman, T. (2005). Infants’ insight into the mind: How deep? Science, 308, 214–216.
Perner, J., Ruffman, T. & Leekam, S. (1994). Theory of mind is contagious: You catch it from your sibs. Child Development, 65, 1228–1238.
Robbins, J. & Rumsey, A. (2008). Cultural and linguistic anthropology and the opacity of other minds. Anthropological Quarterly, 81, 407–494.
Samuels, R. (2007). Is innateness a confused concept? In P. Carruthers, S. Stich and S. Laurence (Eds.), The Innate Mind, Volume 3 (pp. 17–36). Oxford University Press.
Schick, B., De Villiers, P., De Villiers, J. & Hoffmeister, R. (2007). Language and theory of mind: A study of deaf children. Child Development, 78, 376–396.
Scott, R. & Baillargeon, R. (2009). Which penguin is this? Attributing false beliefs about identity at 18 months. Child Development, 80, 1172–1196.
Scott, R., He, Z., Baillargeon, R. & Cummins, D. (2012). False-belief in 2.5 year olds: evidence from two novel verbal spontaneous response tasks. Developmental Science, 15, 181–193.
Scott, R., Roby, E. & Smith, M. (2017). False-belief understanding in the first years of life.
Shahaeian, A., Peterson, C., Slaughter, V. & Wellman, H. (2011). Culture and the sequence of steps in theory of mind development. Developmental Psychology, 47, 1239.
Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632–1634.
Tardiff, T. & Wellman, H. (2000). Acquisition of mental state language in Mandarin- and Cantonese-speaking children. Developmental Psychology, 36, 25.
Taumoepeau, M. (2015). From talk to thought: Strength of ethnic identity and caregiver mental state talk predict social understanding in preschoolers. Journal of Cross-cultural Psychology, 46, 1169–1190.
Wellman, H. (2014). Making Minds. Oxford University Press.
Wellman, H., Fang, F., Liu, D., Zhu, L. & Liu, G. (2006). Scaling of theory-of-mind understanding in Chinese children. Psychological Science, 17, 1075–1081.
Wellman, H. & Liu, D. (2004). Scaling of theory of mind tasks. Child Development, 75, 523–541.
Wimmer, H. & Perner, J. (1983). Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception. Cognition, 13, 103–128.
11
THE SOCIAL FORMATION OF HUMAN MINDS
Jeremy I. M. Carpendale, Michael Frayn, and Philip Kucharczyk
Human minds are social in various ways, such as in the ability to interact with others. A further question, however, concerns the role of social experience in the development of human minds. All approaches to human development must assume some role for both biological and social factors, but beyond this necessary agreement they differ radically in how such factors and their interaction are conceptualized. One useful way to group these divergent approaches is by the basic preconceptions they begin with. These sets of preconceptions, or worldviews, influence the way problems are conceptualized and evidence is interpreted. Therefore, it is essential to examine the influence of such worldviews on research programs. We examine two worldviews that differ in their starting points. One begins from the individual’s experience (Jopling, 1993) in setting up the problem infants face. This results in two solutions: (1) infants are born with the abilities to understand others, or (2) infants develop social skills by relying on their own experience, which they apply to understand others. We review difficulties with these proposed solutions and the assumed way of setting up the problem. We then introduce an alternative approach with a different starting point and, thus, a different way of framing the problem and resolving it. From this second perspective, human minds develop within social relations. That is, social activity is constitutive of human minds (Jopling, 1993; Müller & Carpendale, 2004).
Beginning from the individual

The theories we consider first begin from a set of preconceptions regarding the mind as private and accessible only to the self. This means that infants are assumed to encounter merely other bodies and have to figure out that others have minds – this way of framing the issue is known as “the problem of other minds”. It is based on a Cartesian worldview, and taking this problem as the starting point presupposes minds from the outset and thus cannot explain how minds develop. A question that arises from this perspective is what must the infant start with in the way of cognitive abilities in order to make social interaction possible? We consider two general solutions to this problem. One position is that infants are born with such abilities. A second is that such abilities develop on the basis of the individual’s own experience, which is then applied in order to understand others.
Claims of innate social abilities

One way of attempting to explain human minds is the proposal that the mind consists of a number of innate modules that have evolved to solve particular problems present in an ancestral environment (e.g., Pinker, 1997). Similarly, Onishi and Baillargeon (2005, p. 257) “assume that children are born with an abstract computational system that guides their interpretation of others’ behavior”. Leslie, Friedman, and German (2004, p. 531) also assert that the human ability to understand the mind “is part of our social instinct” and that “for a young brain” to learn “about invisible, intangible, abstract states like belief” it is necessary to “attend to such states in the first place”, which is made possible by an innate module. These nativist accounts appear to be biologically based because they rest on biological assumptions, which may give them a certain cachet, but, in fact, they do not seem biologically plausible given current knowledge (e.g., Lerner & Benson, 2013). Presumably, according to these accounts, thinking is based on neural interconnectivity – as neuroscientists certainly assume (e.g., Kandel, Barres, & Hudspeth, 2012). If thinking is claimed to be innate, then it follows that children must either be born with the neural interconnectivity necessary for such cognition or, alternatively, it must emerge via a genetically pre-determined process, largely independent of learning (e.g., Chomsky, 1993). However, there is currently no strong evidence in humans of genes or suites of genes for modules; instead, there is some evidence of the selection of genes for brain size (Linquist & Rosenberg, 2007).
Indeed, this unidirectional, genetically-controlled understanding of infant neural development does not fit with the significantly more nuanced view of the key role of plasticity in development that has emerged from the brain sciences over the past half century (e.g., Anderson, Spencer-Smith, & Wood, 2011; Lillard & Erisir, 2011). Donald Hebb first published his speculative ideas on how learning occurs in the brain in the 1940s (Hebb, 1949). Although his work has been subject to considerable neuroscientific revision since then, his core assumption that human nervous systems are fundamentally plastic, and are thus structurally alterable throughout a lifetime, now serves as the cornerstone of our most comprehensive neurodevelopmental theories (e.g., Song, Miller, & Abbott, 2000; Bi & Poo, 2001; Caporale & Dan, 2008; Babadi & Abbott, 2010; Shouval, Wang, & Wittenberg, 2010).1 The implication of Hebb’s discovery is that neural activity must be seen as driving structural change in the brain, which in turn alters subsequent neural activity, thus creating a bidirectional and reciprocal relationship of continuous structural and functional neural development. Of course, this should not be misinterpreted as implying that the brain is a type of “blank slate” sculpted entirely by experience, as the gross, macro-structure of the brain is principally formed prenatally via molecular guidance cues (Sanes & Jessell, 2012). Nevertheless, both the inter- and intra-regional connectivity patterns of the brain are subject to substantial elaboration, elimination, and reorganization after birth, suggesting that they develop largely in concert with the infant’s postnatal experiences and activities (Johnson, Jones, & Gliga, 2015; Karmiloff-Smith, 2015; Mareschal, Johnson, Sirois, Spratling, Thomas, & Westermann, 2007; Sanes & Jessell, 2012).
In contrast to the genetically-driven nativist accounts, according to developmental systems theory and neuroconstructivism, genetic activity must be seen as just one part of a complex developmental system that includes a great many factors at many levels including genes, factors in the cytoplasm, hormones, as well as social experience affecting gene expression and the formation and pruning of synapses (e.g., Fisher, 2006; Karmiloff-Smith, 2015; Lickliter & Honeycutt, 2009, 2010, 2013; Mareschal et al., 2007; Meaney, 2010; Overton, 2015; Slavich & Cole, 2013). These and other factors are so intertwined that it is not helpful to think of preexisting social and biological factors that then interact. Instead, developmental systems theory
is an attempt to do without this dichotomy and, instead, view development as occurring within a system of bidirectionally interacting factors (Gottlieb, 2007; Griffiths & Tabery, 2013; Jablonka & Lamb, 2005). We return to developmental systems theory below, but first consider a developmental approach to explaining human social abilities.
The development of social abilities based on individual experience

Other accounts begin from the same Cartesian starting assumptions regarding the mind: it is assumed that infants face the problem of other minds because experience of one’s own mind is private and direct, while other minds are concealed and inaccessible. But some approaches take development more seriously in attempting to explain how infants overcome this problem. The question is still what the infant brings in order to understand others, and the answer is their own experience, which is assumed to be drawn on in order to understand others. For example, Meltzoff assumes that infants receive sensory experience of other people’s physical movements but not their mental states (e.g., Meltzoff, Gopnik, & Repacholi, 1999). Thus, the problem is to figure out others’ mental states as inferred from their physical movements, and the proposed solution is to draw on infants’ knowledge of their own minds and apply this to others. Meltzoff (2011) has argued that infants are born with an innate ability to understand others as “like me”. This is based on research he interprets as evidence that newborn infants can imitate adults’ facial expressions. According to Meltzoff, this ability to detect similarity between their own actions and the actions of others gives infants the foundation for reasoning about others by analogy. Meltzoff argues that infants understand that when they perform a certain bodily act they have a particular experience. When they see others perform the same act, they assume that since the other is acting in the same way they must have the same experience (e.g., Meltzoff, 2011; Meltzoff et al., 1999). However, the empirical research that is interpreted as evidence of neonatal imitation, on which Meltzoff’s theory is based, is controversial.
It has been argued that the evidence is only reliable in the case of infants matching tongue protrusion, which can be more simply explained in terms of oral exploration of close-by objects (see Carpendale & Lewis, 2006, for discussion). The analogical argument Meltzoff draws on has a long history, and an equally long history of criticism. One difficulty is that infants experience their own lived body from within, yet they observe others from without (Scheler, 1913/1954). Even if this difficulty could be overcome (Meltzoff assumes the relevant capacity is innately given), then in addition to analogical reasoning infants must also be able to understand others’ feelings on the basis of feelings that the infant is not currently experiencing, which would appear to require counterfactual reasoning, and this seems unlikely for such young infants (Müller & Carpendale, 2004). Furthermore, the assumption is already made that other people’s movements are expressive and not just physical, yet how this understanding is reached is taken for granted and not spelled out (Scheler, 1913/1954; Zahavi, 2008). Finally, the analogical argument cannot logically lead to an understanding of others and other minds. It could only lead to the conclusion that there is another one of my mental states (Müller & Carpendale, 2004; Scheler, 1913/1954). The notion of self and other is smuggled into the conclusion, but that is exactly what the approach is meant to explain. This, of course, is an argument that analogical reasoning cannot be the source of understanding others. Once adults understand self and other they can use analogical reasoning. Although it is true that adults can and may occasionally reason by analogy based on their own experience, this does not mean that it is the primary way of understanding others. It is an outcome of development, and therefore cannot be the source of that development. This
approach already assumes the mind in order to figure out other minds – thus assuming what it is meant to explain (e.g., Carpendale & Lewis, 2004, 2015a; Müller & Carpendale, 2004; Zahavi, 2008). A somewhat similar simulation solution to the problem of other minds has been proposed by Tomasello and his colleagues. Tomasello (2014) argues for “shared intentionality” as a crucial difference between humans and other primates. He suggests that understanding and participating in cooperative interaction depends on a suite of cognitive abilities referred to as shared intentionality. Tomasello and Carpenter (2007, p. 121) describe shared intentionality as “collaborative interactions in which participants share psychological states”. But they also shift from a description of a form of human interaction to the cognitive abilities that are assumed to be required to engage in that form of interaction, and they state that shared intentionality is “an adaptation for participating in collaborative activities involving shared intentionality” (Tomasello, Carpenter, Call, Behne, & Moll, 2005, p. 690), “the underlying psychological processes that make these unique forms of cooperation possible” (Tomasello, 2009, p. xiii), or “a suite of social-cognitive and social-motivational skills that may be collectively termed shared intentionality” (Tomasello & Carpenter, 2007, p. 121). This is a redescription of cooperative human activity, because it is assumed that particular cognitive tools make the cooperative interaction possible. But the evidence that the child has developed the cognitive tools is that she can engage in the interaction. There is no independent way to assess the cognitive abilities. Thus all we have is the interaction. The assumption that this must be evidence for underlying cognitive processes is only that, an assumption based on a taken-for-granted worldview. As a form of interaction it is a description of the problem needing an explanation (Bibok, 2011).
Tomasello (2014) asks, “what does the individual bring to the interaction that enables her to engage in joint attention in a way that other apes and younger children cannot” (p. 152), and his answer is “that something like recursive mind-reading or inferring – still not adequately characterized, and in most instances fully implicit – has to be part of the story of shared intentionality” (p. 152). The question is how infants develop this. Tomasello claims that the communication 12-month-old infants engage in requires that they have an understanding of others as having attention and intentions, that is, as intentional agents. Infants engage in a process of inference in order to communicate, but this inference is “fully implicit”. The problem is how infants develop this insight. Tomasello et al. (2005) have proposed an ontogenetic hypothesis according to which “infants begin to understand particular kinds of intentional and mental states in others only after they have experienced them first in their own activity and then used their own experience to simulate that of others” (p. 688). More recently, Tomasello and Carpenter (2013) reiterate and clarify that,

when the infant understands that someone “sees” something, all she knows about seeing is her own experience of seeing, and so that is what she takes the other to be doing. There is no reflection on her own mental states involved. (p. 402)

Infants are not born with social cognitive abilities, but they have their own experience, such as seeing, and they see others as like themselves, and so they understand others in terms of their own experience (e.g., seeing). A problem with this claim is that it conflates two forms of experience. Baldwin (1906) warned psychologists about overlooking this distinction. Infants have immediate experience, just like adults do, e.g., a toothache, but adults can also have reflective
The social formation of human minds
experience (e.g., a dentist can ask if it is a dull throbbing pain or a sharp shooting pain and we can answer, infants could not). It is this second form of experience that is needed to apply to others in order to understand them (Carpendale & Lewis, 2015b). Infants do not have access to a form of experience they can apply to others (Carpendale, Atwood, & Kettner, 2013). We have briefly reviewed some problems with attempting to explain human minds by beginning from individuals, either through assuming innate abilities or through development based on infants’ own experience. These difficulties highlight the need for an alternative account based on the social origin of minds.
A social account: beginning from interaction

Rather than presupposing the mind as given, we begin from infants’ interaction with the world they experience and, thus, explain the development of minds. The starting point for infant development from this perspective does not assume that infants begin with knowledge or forms of thinking. We begin from a constructive view of knowledge, according to which infants learn about the interactive potential of their world through learning what they can do with it, and through this process they come to anticipate outcomes of their actions. They come to see the world in terms of what they could potentially do. This is a practical, sensorimotor knowledge – it is an understanding in action. Human ways of thinking become more complex and abstract through reflection on this early practical knowledge (Bibok, Carpendale, & Lewis, 2008). Clearly, a large part of infants’ world involves experience with other people. In describing the niche in which human infants develop, a number of authors have noted that the extended period of helplessness human infants experience results in the need for care, and thus in a necessarily social environment of relations in which infants develop (e.g., Portmann, 1944/1990). When social relations are emphasized in theoretical approaches, it is often assumed that this necessarily implies downplaying biological factors. We want to be clear that this is not the case. We do not swing from one extreme to another. In taking a developmental systems approach, the idea is to do without dichotomies between social and biological levels (Lewontin, 2001; Oyama, Griffiths, & Gray, 2001). This is because biological factors are so intertwined with social factors, and they mutually influence each other to such an extent, that it is not possible to draw a clear line between them in development. 
Biological characteristics of infants and parents create the potential for social experience, and infants’ brains are shaped within this experience, which then results in the potential for further forms of social experience (e.g., Johnson et al., 2015). We illustrate this by addressing the question of how the social process begins in early infancy, and by interpreting research in neuroscience from the perspective we endorse. Instead of assuming that infants start from their own mind and are faced with attempting to solve the (unsolvable) problem of other minds and figure other people out, the question is how the social process begins. Thus, it is important to study the biological characteristics of infants and adults that draw them into interaction with each other and result in social processes. One example of a biological characteristic that results in structuring the infant’s social environment is that human infants are interested in eyes. Even when tested at 2 to 5 days of age, infants show a sensitivity to gaze and prefer to look at a face with eyes open compared to eyes closed, and they prefer to look at a face that is looking toward them compared to a face with gaze averted from them (Farroni, Massaccesi, Pividori, & Johnson, 2004). In fact, the eye as a
Jeremy I. M. Carpendale et al.
sense organ enjoys a singular role in psychology because it is not merely receptive but plays a role in communicating intention, whether this is intended or not, because an individual’s reorientation can be followed by others, resulting in a point of contact in personal relations. Human eyes are unusual compared to other primate eyes because the iris is highly visible due to being surrounded by a particularly large area of white sclera. This means that human eyes are particularly salient, and gaze tracking is easier (Senju & Johnson, 2009). Kobayashi and Kohshima (2001) argue that it would be a disadvantage for less sociable animals to have such eyes because a conspecific’s gaze often signals a potential threat. A darker, less visible sclera would lower the probability that any accidental eye contact would be noticed and read as hostile. For humans, however, eye contact facilitates social bonding (Kobayashi & Kohshima, 2001). It has been suggested that human eyes evolved in this way because, in contrast to the eyes of other primate species, which conceal gaze direction, human eyes make it easier to follow others’ gaze directions (Kobayashi & Kohshima, 1997, 2001), and this could support cooperative social interaction (Tomasello, Hare, Lehmann, & Call, 2007). In addition, the salience of human eyes may facilitate infants’ entry into social relations by encouraging them to pay attention to an essential aspect of their environment through which they can learn about their social world. There is also evidence that typically developing infants are influenced by whether gaze is shifted toward versus away from them. Infants at 6 to 10 months show a difference in neural activity, assessed with event-related potentials, in response to a face shifting gaze toward them compared to shifting gaze away from them, but infant siblings of children with autism who were later diagnosed with autism themselves lacked this differential response (Elsabbagh et al., 2012). 
In contrast to typically developing infants, whose fixation on eyes increases from 2 months on, infants later diagnosed with autism declined in their fixation on eyes from 2 to 6 months, suggesting a derailment in typical social engagement resulting in difficulties in social understanding (Jones & Klin, 2013). Typically developing infants seem to be drawn to eyes, and mothers tend to hold their infants at about 20 cm away from their faces, which, when coupled with infants’ visual restrictions, may account for the salience faces have in infant development (Turkewitz & Kenny, 1982). Through experience in this environment, infants become experts on eyes and faces, which are crucial aspects of the social world. Parents may interpret infants looking at them as an expression of interest, and may respond by talking to and smiling at their infant. The multidirectional relationships between the biological characteristics of infants and parents, as well as parents’ tendency to respond to the infant’s behavior in social ways, are likely to have fundamental roles in starting the social process, the continuously changing developmental system in which infants develop. The interaction between parent and infant becomes increasingly complex as the infant develops further skills and becomes able to participate in and elicit everyday activities with parents. An additional characteristic that may draw infants into social relations is that they prefer to attend to biological motion compared to other forms of motion, even when tested at a few days of age, and this preference is also documented across other species such as monkeys, cats, and birds (Klin et al., 2009). Unlike typically developing children, however, 2-year-olds with autism do not preferentially orient towards biological motion in a display of point-lights (Klin et al., 2009). This difference in preference may result in different environments in which they develop and, consequently, different developmental trajectories. 
We have used vision as an example of how human infants are drawn into and develop in social interaction, but since blind children also develop social skills, other factors must be able to compensate for a lack of sight.
The human developmental system we are outlining is saturated with emotions. This emotional dimension of the human developmental system is an essential ingredient in further development, and it itself requires a developmental story (Bowlby, 1958; Suttie, 1935). Emotions structure a system of communication and coordination in early human interaction. Typically developing infants develop interest in others, and this leads to rewarding interaction in which infants come to enjoy activity with others. Emotions are an essential factor in beginning the social process, and they develop further within this process (Bowlby, 1958; Hobson, 2002/2004; Reddy, 2008; Shanker, 2004; Suttie, 1935). Infants’ smiling, for example, develops in the context of their close emotional engagement with their parents (Jones, 2008; Messinger & Fogel, 2007). Smiling is then a social skill that can be used to elicit further social interaction with adults. Early evidence of infants’ emerging social understanding can be seen when infants who have experienced contingent interaction, in which their mothers respond to their actions, smile at their mothers when the mothers have been told to hold a “still face” and not to respond to them. These smiles are social bids, attempts to get the pleasurable interaction going again (McQuaid, Bibok, & Carpendale, 2009). This reflects infants’ understanding of this form of interaction and their enjoyment in engaging in it. The fact that it is difficult for mothers to refrain from responding to their infants indicates how mutual this engagement is. Emotions are part of what gets infants engaged with others, and social skills also develop further within this bidirectional interactive system. 
Infants may differ in terms of emotional reactivity in the sense of being underreactive or overreactive to social experience, and if this causes infants to avoid social engagement, such a lack of social experience could result in different developmental trajectories and difficulties in further development (Shanker, 2004). Emotions structure human patterns of interaction that form the foundation for communication. Bates et al. (1975) point out that “The mutual joy taken in such interactions provides the first loop in the construction of declarative communication: the formulation of social interaction as a goal in itself” (p. 213). The objects that infants are engaged with can become a pivot around which interaction and communication occur. On the adults’ side, it has been found that parents’ use of pointing gestures to direct their infants’ attention is linked to their expression of positive affect (Leavens, Sansone, Burfield, Lightfoot, O’Hara, & Todd, 2014). We have given these examples to illustrate the relational developmental system approach we take. A complete account would also need to explicate how such preferences and characteristics develop in young infants, including development before birth.
Interaction and communication: joint activity

Starting from an action-based view of how knowledge develops, infants gradually learn about their world through learning the interactive potential of aspects of this world. They learn what they can do with it and how it reacts, that is, what happens as a result of their actions. They come to perceive their world in terms of potential actions. This concerns the physical world when they are learning to get around and explore things of interest, and it also involves the social world in which human infants necessarily develop (e.g., Portmann, 1944/1990). Infants learn about the activity of others around them and come to anticipate others’ actions (Fenici, 2015; Woodward, 2013). For example, two- to four-month-old infants learn about being picked up by their parents, and they learn to stiffen their bodies in anticipation in order to coordinate with this action (Reddy, Markova, & Wallot, 2013). This example of an early form of interaction is clearly not an adult
form of cooperation among equals in reaching a mutual, conscious, long-term goal, but this does involve the increasing complexity of coordinating actions with each other. Human development could be described as an increase in the complexity of forms of coordinating actions (Carpendale & Lewis, 2015a). The sort of evidence discussed above suggests that infants pay attention to the activities of others and through this experience learn to anticipate patterns of human activity. For example, infants, by 6 to 10 months, have learned that grasping is the end point of reaching and that different hand shapes are linked to grasping different objects. The ability to anticipate which object is being reached for depending on the size of the object and the hand shape is linked to the action already being in the infant’s own action repertoire (Ambrosini, Reddy, de Looper, Constantini, Lopez, & Sinigaglia, 2013). One indication of infants’ early social understanding is that they can pass tests apparently demonstrating some anticipation of others’ action based on the others’ previous actions. These are referred to as infant false belief tests. Although it is often assumed that understanding others requires attributing intentions, this is not necessary. From our perspective, infant false belief understanding is a practical level of understanding others through developing expectations about others’ actions and gaze, and it is not yet verbal. That is, this is action reading, and it does not require attributing mental states, so-called “mind reading” (cf. Fenici, 2015; Uithol & Paulus, 2014). The social world reacts differently compared to the physical world. Although the floor will not help infants as they struggle to learn to crawl, their parents may. When an infant reaches toward her father, her desire to be picked up by him is manifest in her action. He may respond by picking her up. This is the meaning the infant’s action has for the other. 
There is meaning in this sequence of actions and reactions. To begin with, infants are not aware of this meaning for others, but over time they come to anticipate the adult’s response to their action. As an infant comes to anticipate the whole sequence of interactivity, she can begin to initiate her action with the expectation of the outcome, and in this sense she becomes aware of the meaning that already existed in the social relations (Mead, 1934). For Mead (1934), human forms of communication develop through becoming aware of the meaning already pre-existing in social relations. Communication is a particular form of knowledge acquisition. This social skill develops within social relations, within the human developmental system, and so it involves a host of characteristics of the child and adult, such as emotional engagement, interest, levels of arousal, and learning. When infants lean or reach toward caregivers, this is a manifestation of their desire to be held or picked up. The meaning of their action is there in the interaction and is evident to the parent before the infant is aware of communicating, and through experience the infant learns how parents respond. That is, they learn the meaning their action has for others and then can anticipate the outcome and communicate intentionally with the expectation of others responding. This is now a new form of communication. It is what is seen in human communication, but this awareness of how one affects others is not necessary for the communication of ants and other social insects. This is a practical social knowledge involving anticipation of what will happen socially. It is the form of meaning on which much of human communication is based (Mead, 1934). Approaches that account for the development of communication and minds begin from social relations and, thus, explain, rather than presuppose, the mind. 
That is, in contrast to presupposing a private mind, which is inaccessible to others, and assuming that children face the unsolvable problem of other minds, we begin from interactivity in which intentions, desires, interest, directedness, and emotions are all manifest in action and interaction. Forms of
communication emerge within patterns of action and reaction, and the increasing coordination of parent–child interaction. Adults and infants learn to anticipate each other’s action within social activities, and communication is rooted in these social routines. Participants learn how others respond to their actions, which involves a process of the infants’ shifting orientation from their own perspective to the attitude of others. The other’s anticipated response becomes part of the infant’s action. This is action reading rather than “mind reading”. Self-awareness develops through infants learning how others respond to them, and so others’ attitudes toward them become part of their perspective. Self-awareness has a social origin because infants’ perspectives are shifted by experiencing others’ attitudes toward them. Thus, they can experience themselves through others’ reactions to them (Carpendale & Lewis, 2012; Mead, 1934).
From gestures to language and tools for thought

We have argued against the view that infants begin with experience of their own mind and are faced with figuring out that other people have minds and how to communicate with them, either through being born with such abilities or developing them based on access to their own experience extended by analogy to others. This way of conceptualizing the problem arises due to taking the first-person perspective of subjective experience and the third-person perspective of an external observer as primary. Instead, according to our approach, these perspectives are developmentally derived from the second-person perspective of active experience in relation to others, which is primary (Fuchs, 2013). That is, communication begins through infants and parents learning to coordinate their actions, and infants master the social skills of communicating with gestures as they learn how others respond to their actions. Children then learn to add words to social situations of shared experience. For example, once infants have learned to make requests with gestures they can then use a word such as “want” in these social situations as well as the gesture, and later in place of the gesture. Similarly, words like “look” and “see” can be used along with pointing gestures. In this way children begin to learn how to talk about human activity in psychological terms. This ability to talk in this way about others and the self may then be used as a tool for thinking about others in these terms and for reflecting on their own experience. This is what is required to pass verbal false belief tasks. This socio-genetic approach, according to which human forms of thinking are first observed in the social dimension before being mastered by individuals as forms of thinking, is familiar from Vygotsky as well as Piaget (Chapman, 1993). Language is first used for communication and then can be used by individuals as a form of thought (e.g., Fernyhough, 2008). 
What we have discussed here is the particular case of thinking about social matters. Vygotsky (1978) also discussed many other social cultural tools that are mastered by individuals as forms of thinking. These include “systems for counting; mnemonic techniques; algebraic symbol systems; works of art; writing; schemes; diagrams . . .”. These are social in the sense that they are used in social contexts and are also developed socially and modified by others. If human forms of reflective thinking are viewed as based on a system of meaning, then we require an adequate account of meaning. That is, humans think about the world and thus such thinking must be about the world, and it must be meaningful. Although a common view of meaning is that meaning is attached to representations, Wittgenstein (1968) and others have convincingly argued that meaning cannot be attached to representations such as words, utterances, or images (Goldberg, 1991). Instead, words, utterances, or images can be used to convey many different meanings in differing social contexts. It also follows from this discussion of meaning that the meaning of psychological words cannot be learned through mapping such words onto mental entities. An alternative view of meaning is that meaning is based on shared
social practices, routine social activities in which participants know what is coming up next (Canfield, 2007). This is the view of meaning from Mead that we have explicated above in the context of the development of infants’ gestures. This means that non-social processes cannot account for meaning on their own, and, instead, human forms of communication and cognition necessarily begin from social processes. That is, children do not learn words such as see, want, happy, sad, try, forget, remember, and decide by mapping the word onto an inner experience. Instead, adults can see when a young child has seen an object and is reaching toward it in an attempt to grasp it. The child’s desire and intention are manifest in her activity, and adults can talk about her activity in such psychological terms, and can also talk about others’ activity in such terms. The child can then learn these words on the basis of such shared experience of human activity. Once this language is mastered, then an older child may be able to reflect and think about what they want and the process of deciding, remembering, and so on. Language can then be used to give and ask for explanations. Mead (1934) argued that we are rational because we are social. Reasoning is first a social process because reasons are given to others, such as in situations when actions are not immediately understandable. This is a process with social origins that individuals master and can then use as an individual process in considering reasons for decisions in everyday reasoning (Chapman, 1993; Mead, 1934).
Lessons from systems neuroscience

We now briefly consider the neuroscientific foundation for the perspective taken in this chapter. This is an essential step for a complete account because otherwise neuroscience research tends to be unreflectively interpreted from the perspective we are critiquing. The traditional view of mental function is that of the sandwich model of information processing: perceptual input → cognition → motor output. This approach, strongly espoused by computational theory of mind (CTM) proponents, emphasizes a hierarchical yet serial processing framework in which information always flows in only one direction: sense information travels up the hierarchy, cognition occurs on that information, and then motor information travels down the hierarchy in the form of muscle commands. However, this is a psychological interpretation of neuroscience research. For several decades now, neuroscientific research has shown this understanding of our neural systems to be mistaken (Mountcastle, 1978, 1995; Young, 1993; Elston, 2003; Amaral & Strick, 2012; Olson & Colby, 2012). Although it is true that the cortex can be seen as being hierarchically organized, neural activity must be understood as traveling in at least a bidirectional fashion, as there are just as many, if not more, feedback connections projecting from higher regions to lower regions as there are feedforward connections. Furthermore, the cortex is not one single hierarchy but a vast number of hierarchies, most of which are highly interconnected, allowing for substantial lateral travel of neural activity as well. For some time now, a speculative, systems-level theory of the brain has been slowly emerging in the mind/brain sciences which suggests that the core process performed by the brain is that of anticipation, or prediction (Hebb, 1953; Mumford, 1992; Hawkins & Blakeslee, 2004; Mareschal et al., 2007; Bubic, von Cramon, & Schubotz, 2010). 
According to this theory, anticipation is “an inherent design characteristic of the brain. There is no anticipation module in the brain, and no focal lesion selectively abolishes the brain’s propensity to anticipate” (Kinsbourne & Jordan, 2009, p. 103). Following from the brain’s sprawling connectivity patterns, anticipation is necessarily multi-modal, as no one region, let alone an entire hierarchy, is isolated or cut off from the rest of the cortical and subcortical areas (Seger & Miller, 2010).
Indeed, a straightforward example of this reciprocal interdependency comes from Hebb (1980) and concerns our basic ability to perceive objects. Even simple objects require several fixations, or saccades of the eye, before they can be recognized, meaning that our retinas actually experience a temporal sequence of numerous spatial patterns, not one single spatial pattern: “That the whole is there as a whole, static and complete in detail like a picture before the mind’s eye, is illusion” (Hebb, 1980, p. 118). In other words, the sequence of visual patterns that give rise to whole object perception cannot be isolated from the sequence of motor patterns that allow those visual patterns to be experienced in proper temporal sequence. As Hebb (1980) states: “Perception is an active process, not mere passive reception of information. . . . What follows what, in this sequence, is determined by the intervening motor element – which therefore has an essential integrating function” (pp. 117–118). Particularly striking examples of higher-level, cross-modal anticipation can be found in the activity of canonical and mirror neurons (Jacob, 2008; Kinsbourne & Jordan, 2009). Canonical neurons are active not only when an individual acts upon an object but also when that same individual simply perceives an object that can be acted upon. Moving beyond the relatively low-level integration of visual and motor patterns in basic object recognition discussed above, here the visual patterns arising from perceiving an object are not in fact isolated from the high-level motor patterns typically performed with that object. In a similar fashion to canonical neurons, mirror neurons are active not only when an individual acts upon an object but also when that same individual simply perceives someone else performing that same action. 
Here it can be seen that by recognizing the action patterns of another, individuals are able to anticipate their subsequent movements, while at the same time anticipating their resultant perception of those movements. In contrast to the sweeping conclusions some researchers have drawn about canonical and/or mirror neurons, particularly regarding their uniqueness, we see here that they are simply two examples of how anticipation is a fundamental process of the brain.2 In summary, neuroscientific research demonstrates that the sandwich model of information processing is simply untenable, and this has been recognized for some time. Indeed, as Rizzolatti and Kalaska (2012) state:

Perception, cognition, and action have traditionally been considered distinct and serially ordered functions: An individual perceives the world, reflects on the resultant internal image of the world, and finally acts. This perspective relegates the motor system to the role of a passive apparatus that implements the decisions made by cleverer parts of the brain. Contemporary research indicates that perception, cognition, and action are neither functionally independent nor anatomically segregated. (p. 892)

Following from this, it is suggested that anticipation is not a unique process found only in the perceptual, cognitive, or motor areas of the brain. Instead it is proposed as a system-wide process, which can in fact serve to help explicate why it is ultimately misleading to attempt to draw rigid boundaries between the perceptual, cognitive, and motor areas of the brain in the first place. This view of neural activity fits with the constructivist view of knowledge introduced above, according to which children learn about their world through learning how they can act on it, which involves developing expectations about how it will respond. 
That is, children develop skills in interacting with their world and they come to understand the world in terms of what they can do with it (Bibok et al., 2008). For example, as children interact with balls they learn what they can do with them, such as roll them, and then they can perceive balls and other similar objects in terms of the potential to interact with them in this way. As infants
learn about interacting with other people they come to see others in terms of the potential for interaction. This is a gradual process of acquiring skills in particular practices. Knowledge is implicit in these developing skills in interacting with the physical and the social world.
The social roots of cooperation and morality

An essential aspect of the complex social world humans must successfully navigate is morality, which involves coordination and cooperation with others, and a sense of obligation regarding how others ought to be treated. We now turn to address the development of morality from the perspective we have outlined. Morality concerns right and wrong, and this normativity may appear challenging to account for in terms of a social origin, yet it is crucial to explicate the social roots of this aspect of being human. In order to provide a context for our view of the social origin of moral norms, we first consider two possible explanations for how morality develops. We suggest that both types of explanation are problematic and incomplete, and really attempt to explain away rather than explain moral norms. As an alternative, we argue that moral norms are based on and develop from interpersonal coordination. One view of the source of moral norms is that they are imposed from the outside, from previous generations through the socialization of children. Although this may be an aspect of social development, socialization does not provide a complete explanation because it does not account for the origin of, or changes in, moral norms. Furthermore, it focuses only on conformity as the source of social norms. If there is anything more to morality than conformity, it is important to consider other sources of social norms. A second possible account of social norms is from the inside, determined biologically, primarily directed by genes. We have argued that claims that human infants are born with some aspects of morality (e.g., Hamlin, 2013) are problematic for reasons discussed above: it is not biologically plausible to get from genes to even a rudimentary sense of justice. Instead, the complexity of the whole human developmental system in which human children develop must be considered. 
However, because a sense of moral obligation is a typical outcome of human development, it could be said that, in some sense, this evolved. That sense, however, differs radically between gene-centered versus developmental systems approaches (Saunders, 2013). In order to take development seriously, it is important to consider the biological characteristics of young infants that result in forms of interaction in which children learn about morality. Therefore, we suggest taking a relational developmental systems approach (Carpendale, Hammond, & Atwood, 2013). We argue that moral normativity is constituted within interaction, and that the roots of moral development are found in the emotional bonds developing in infancy. Caring about others must emerge in these early relationships of mutual affection; it cannot be added afterwards through reasoning. Humans develop and exist as persons through being treated that way by others (Spaemann, 1996/2006). Children are treated as persons in the sense that adults respond to their contributions. Valuing them is implicit in such responses, whether or not their contributions are intentional yet. This can lead to children also treating others as persons, which is the foundation of caring and makes the development of morality possible (Carpendale, Hammond, & Atwood, 2013). The developmental system, the niche in which infants develop, already has moral and ethical foundations built in. That is, being responsive to others is present within the social relations infants experience (Jopling, 1993). Morality, therefore, is not just something added later, but, rather, it is part of the system of interaction in which children develop (Carpendale, Hammond, & Atwood, 2013), although the interaction children may experience will vary in different families.
The social formation of human minds
Emotions are necessary in structuring the relationships children experience in which morality develops – that is, caring about others is required for the development of morality, and emotions develop further within such relationships. The emotional bonds with others provide the goals that individuals reason about, because reasoning is not separate from values. Relationships of mutual affection form the social situations that require resolving conflicts through considering all the perspectives involved (Forst, 2005; Habermas, 1990; Mead, 1934; Piaget, 1932/1965). Humans live embedded in social relationships, and children grow up with conflicts between their own goals and their emotional connections to others. Thus, their goals must be meshed with the goals of the others with whom they interact and to whom they are emotionally connected. They cannot treat others as mere material obstacles in the way of achieving their goals. Instead, others must be treated as persons with goals of their own.

Although relationships will vary in how cooperative they are, within cooperative relationships among equals based on mutual affection and respect, children can develop practical ways of interacting with each other. The principles that are implicit in this interaction are the constitutive rules. This process is first implicit in interaction, as children work out ways of interacting before they are able to articulate the principles that underlie their engagement with others. A second level concerns the rules that can be agreed upon within such relationships. These are constituted rules (Piaget, 1932/1965). This process is what Mead (1934) referred to as a “moral method” because it involves coordinating all of the conflicting perspectives in a moral conflict. It is also the process implicit in Habermas’s (1990) ideal speech situation, in which there is no coercion and everyone feels free to speak and present their perspectives.
This approach entails an implicit notion of development or progressivity (Chapman, 1988) in the sense that some moral decisions can be better or more complete than others if they include more perspectives. The notion of cooperative relationships is not a value imposed from outside of human ways of life. Instead, there is the potential for it to emerge from aspects of what it is to be human, including the typical development of mutual affection and enjoyment in joint engagement. Thus, clashes between individual desires can be balanced by concern for others due to mutual affection and caring. Therefore, “morality is the logic of interaction” given mutual concern (Piaget, 1932/1965). From this perspective, moral norms do not preexist either in the culture or in the genes. Instead, they emerge within particular forms of interaction involving cooperative relationships based on mutual respect and caring.

In taking a relational developmental systems approach, multiple characteristics of children, parents, and environments interact and can lead to different outcomes. Clearly, morality is not always the outcome – there is a great deal of injustice in our human world. However, the ability to recognize this as injustice must be based on the potential for justice and caring, and this is what must be explained.

There are diverse forms of cooperation in human forms of life, ranging from young infants learning to anticipate others’ actions to cooperation among equals working toward the same goal. Taking a developmental approach means describing these increasingly complex forms of interaction. Human development can be thought of as increasingly complex forms of cooperation, and these forms of interaction or cooperation can also be thought of as forms of communication. As an early example of cooperation, toddlers’ helpfulness is of particular interest.
Jeremy I. M. Carpendale et al.

One view of toddlers’ helpfulness is that it is early evidence of an evolved tendency for altruism, that is, helping others at a cost to the self (Warneken & Tomasello, 2009). Alternatively, from our perspective, toddlers’ activity that researchers label “helping” might arise from infants’ developing interest in being involved in the activities of adults (Carpendale, Kettner, & Audet, 2015). This may be a form of interaction that is a step in a process leading to more complex forms of moral action. A question that arises is whether particular forms of cognition are necessary for individuals to engage in cooperation and helping, as suggested by the idea of shared intentionality discussed above. Alternatively, from our perspective, we focus on the increasing complexity of the interaction that children experience and engage in.
Conclusion

We have briefly sketched an account of the social formation of human minds. We began by reviewing problems with explanations of human minds that begin with individuals. Such approaches tend to assume that infants face the problem of figuring out other minds and how to communicate with them, which presupposes the mind to begin with and thus does not explain it. This is an illustration of how philosophical preconceptions underlie and influence theories and empirical research (Overton, 2015). Therefore, it is important to be aware of such preconceptions and to examine them critically. In this chapter we question these typical starting assumptions and instead begin from a relational developmental systems approach, according to which we explain the development of human minds by beginning from the process of interaction (e.g., Carpendale, Atwood, & Kettner, 2013).

As an alternative based on a constructivist view of knowledge, we have outlined a social account according to which children learn to anticipate the potential for interaction with their physical and social world. We discussed some biological characteristics, such as the nature of human eyes, that help to start the social process. A further example is the helplessness of human infants, which results in a necessarily social environment because they must be cared for (Portmann, 1944/1990). This helplessness is also linked to forming strong emotional bonds – babies’ lives depend on love (Suttie, 1935). The problem space infants experience results in the emergence of communication such as requests. Infants learn about their social experience and develop expectations about interactivity, which could be described as early social understanding, but this is also coordination with others, or cooperation, and communication. The idea of others and the self is linked to learning about relationships.
As children learn how others respond to their actions in routine social situations, they can start to communicate intentionally, with expectations of how others will respond. This begins with gestures and is further extended with the addition of words in the context of shared social activities. The use of language is one means for further understanding more complex social situations. Human communication is adapted for others to understand. That is, speech must anticipate others’ perspectives, and so it involves taking others’ roles. And language must emerge within the experience of being treated as someone, not something. These are the same conditions required for morality to develop (Spaemann, 1996/2006). Thus, the roots of moral development can be seen in the emotional bonds developing in infancy and in human forms of interaction in which others are treated as persons. Caring must emerge in these early relationships; it cannot be added afterwards through reasoning. Morality is not added later; instead, the root of morality is already there in the responsiveness of early relationships, and this forms the structure for communication. Morality is already implicit in conversation (Habermas, 1990; Jopling, 1993), or at least potentially so in particular forms of interaction involving cooperation and treating others as equals (Piaget, 1932/1965).
Notes

1 Note that such perspectives should not be confused with the theory of equipotentiality (Zillmer, Spiers, & Culbertson, 2008) – i.e., the idea that there are practically limitless reorganizational capabilities in the cortex – as was done by Pinker (2002) when he felt the need to emphasize that neural plasticity is “not a magical protean power” (p. 100). For a particularly comprehensive and informative review of the limits of cortical reorganization and recovery following injury or disease, see Anderson, Spencer-Smith, & Wood (2011).

2 A related approach can be found in the “Bayesian brain” hypothesis (e.g., Friston, 2009; Clark, 2013). According to this hypothesis, what the brain is doing, at a systems level, is engaging in prediction. However, it diverges from the present account by embracing Bayesian inference as a method for modeling how the brain engages in prediction (i.e., by mathematically modeling how prior knowledge can be employed in order to evaluate the probability that new evidence fits with a particular hypothesis). In our view, this theoretical move is premature. It changes the problem from figuring out how the brain does prediction to attempting to figure out how the brain does Bayesian inference. This has the problematic effect of imposing a speculative, high-level conceptual scheme on concrete, lower-level neural processes.
References

Amaral, D. G. & Strick, P. L. (2012). The organization of the central nervous system. In E. R. Kandel, J. H. Schwartz, T. M. Jessell, S. A. Siegelbaum and A. J. Hudspeth (Eds.), Principles of Neural Science, 5th edition (pp. 337–355). New York: McGraw-Hill.
Ambrosini, E., Reddy, V., de Looper, A., Costantini, M., Lopez, B. & Sinigaglia, C. (2013). Looking ahead: Anticipatory gaze and motor ability in infancy. PLOS One, 8, 1–9.
Anderson, V., Spencer-Smith, M. & Wood, A. (2011). Do children really recover better? Neurobehavioral plasticity after early brain insult. Brain, 134, 2197–2221.
Babadi, B. & Abbott, L. F. (2010). Intrinsic stability of temporally shifted spike-timing dependent plasticity. PLOS Computational Biology, 6(11), 1–14.
Baldwin, J. M. (1906). Thoughts and Things, Vol. 1: Functional Logic. New York: The MacMillan Company.
Bates, E., Camaioni, L. & Volterra, V. (1975). The acquisition of performatives prior to speech. Merrill-Palmer Quarterly, 21, 205–226.
Bi, G. & Poo, M. (2001). Synaptic modification by correlated activity: Hebb’s postulate revisited. Annual Review of Neuroscience, 24, 139–166.
Bibok, M. B. (2011). Re-conceptualizing joint attention as social skills: A microgenetic analysis of the development of early infant communication. (Unpublished doctoral dissertation). Simon Fraser University, Burnaby.
Bibok, M. B., Carpendale, J.I.M. & Lewis, C. (2008). Social knowledge as social skill: An action based view of social understanding. In U. Müller, J.I.M. Carpendale, N. Budwig and B. Sokol (Eds.), Social Life and Social Knowledge: Toward a Process Account of Development (pp. 145–169). New York: Taylor & Francis.
Bowlby, J. (1958). The nature of the child’s tie to his mother. International Journal of Psychoanalysis, 39, 350–373.
Bubic, A., von Cramon, D. Y. & Schubotz, R. I. (2010). Prediction, cognition and the brain. Frontiers in Human Neuroscience, 4(25), 1–15.
Canfield, J. V. (2007).
Becoming Human: The Development of Language, Self, and Self-consciousness. New York: Palgrave Macmillan.
Caporale, N. & Dan, Y. (2008). Spike timing-dependent plasticity: A Hebbian learning rule. Annual Review of Neuroscience, 31, 25–46.
Carpendale, J.I.M., Atwood, S. & Kettner, V. (2013). Meaning and mind from the perspective of dualist versus relational worldviews: Implications for the development of pointing gestures. Human Development, 56, 381–400.
Carpendale, J.I.M., Hammond, S. I. & Atwood, S. (2013). A relational developmental systems approach to moral development. In R. M. Lerner and J. B. Benson (Eds.), Embodiment and Epigenesis: Theoretical and Methodological Issues in Understanding the Role of Biology within the Relational Developmental System. Volume 45 of Advances in Child Development and Behavior (pp. 125–153). New York: Academic Press.
Carpendale, J.I.M., Kettner, V. A. & Audet, K. N. (2015). On the nature of toddlers’ helping: Helping or interest in others’ activity? Social Development, 24, 357–366.
Carpendale, J.I.M. & Lewis, C. (2004). Constructing an understanding of mind: The development of children’s social understanding within social interaction. Behavioral and Brain Sciences, 27, 79–151.
———. (2006). How Children Develop Social Understanding. Oxford: Blackwell.
———. (2012). Reaching, requesting and reflecting: From interpersonal engagement to thinking. In A. Foolen, U. Lüdtke, J. Zlatev and T. Racine (Eds.), Moving Ourselves: Bodily Motion and Emotion in the Making of Intersubjectivity and Consciousness (pp. 243–259). Amsterdam: John Benjamins Publishing Company.
———. (2015a). The development of social understanding. In L. Liben and U. Müller (Vol. Eds.), Vol. 2: Cognitive Processes, R. Lerner (editor-in-chief), 7th edition of the Handbook of Child Psychology and Developmental Science (pp. 381–424). New York: Wiley Blackwell.
———. (2015b). Taking natural history seriously in studying the social formation of thinking: Critical analysis of A Natural History of Human Thinking by Michael Tomasello. Human Development, 58, 55–66.
Chapman, M. (1988). Contextuality and directionality of cognitive development. Human Development, 31, 92–106.
———. (1993). Everyday reasoning and the revision of belief. In J. M. Puckett and H. W. Reese (Eds.), Mechanisms of Everyday Cognition (pp. 95–113). Hillsdale, NJ: Erlbaum.
Chomsky, N. (1993). Language and Thought. Berkley, CA: Moyer Bell.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 181–253.
Elsabbagh, M., Mercure, E., Hudry, K., Chandler, S., Pasco, G., Charman, T., Pickles, A., Baron-Cohen, S., Bolton, P., Johnson, M. H. & the BASIS Team (2012). Infant neural sensitivity to dynamic eye gaze is associated with later emerging autism.
Current Biology, 22, 1–5.
Elston, G. N. (2003). Cortex, cognition and the cell: New insights into the pyramidal neuron and prefrontal function. Cerebral Cortex, 13, 1124–1138.
Farroni, T., Massaccesi, S., Pividori, D. & Johnson, M. H. (2004). Gaze following in newborns. Infancy, 5, 39–60.
Fenici, M. (2015). A simple explanation of apparent early mindreading: Infants’ sensitivity to goals and gaze direction. Phenomenology and the Cognitive Sciences, 14, 497–515.
Fernyhough, C. (2008). Getting Vygotskian about theory of mind: Mediation, dialogue, and the development of social understanding. Developmental Review, 28, 225–262.
Fisher, S. E. (2006). Tangled webs: Tracing the connections between genes and cognition. Cognition, 101, 270–297.
Forst, R. (2005). Moral autonomy and the autonomy of morality: Toward a theory of normativity after Kant. Graduate Faculty Philosophy Journal, 26, 65–88.
Friston, K. (2009). The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences, 13(7), 293–301.
Fuchs, T. (2013). The phenomenology and development of social perspectives. Phenomenology and the Cognitive Sciences, 12, 655–683.
Goldberg, B. (1991). Mechanism and meaning. In J. Hyman (Ed.), Investigating Psychology: Sciences of the Mind After Wittgenstein (pp. 48–66). New York: Routledge.
Gottlieb, G. (2007). Probabilistic epigenesis. Developmental Science, 10, 1–11.
Griffiths, P. E. & Tabery, J. (2013). Developmental systems theory: What does it explain, and how does it explain it? In R. M. Lerner and J. B. Benson (Eds.), Embodiment and Epigenesis: Theoretical and Methodological Issues in Understanding the Role of Biology within the Relational Developmental System. Volume 44 of Advances in Child Development and Behavior (pp. 65–94). New York: Academic Press.
Habermas, J. (1990). Moral Consciousness and Communicative Action. Cambridge, MA: The MIT Press. (Original work published 1983)
Hamlin, J. K. (2013).
Moral judgment and action in preverbal infants and toddlers: Evidence for an innate moral core. Current Directions in Psychological Science, 22, 186–193.
Hawkins, J. & Blakeslee, S. (2004). On Intelligence. New York: Henry Holt and Company.
Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. New Jersey: Lawrence Erlbaum Associates.
———. (1953). On human thought. Canadian Journal of Psychology, 7(3), 99–110.
———. (1980). Essay on Mind. New Jersey: Lawrence Erlbaum Associates.
Hobson, R. P. (2004). The Cradle of Thought. London: Macmillan/Oxford University Press. (Original work published 2002)
Jablonka, E. & Lamb, M. J. (2005). Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life. Cambridge, MA: MIT Press.
Jacob, P. (2008). What do mirror neurons contribute to human social cognition? Mind & Language, 23(3), 190–223.
Johnson, M. H., Jones, E.J.H. & Gliga, T. (2015). Brain adaptation and alternative developmental trajectories. Development and Psychopathology, 27, 425–442.
Jones, S. (2008). Nature and nurture in the development of social smiling. Philosophical Psychology, 21, 349–357.
Jones, W. & Klin, A. (2013). Attention to eyes is present but in decline in 2–6-month-old infants later diagnosed with autism. Nature, 504, 427–431.
Jopling, D. (1993). Cognitive science, other minds, and the philosophy of dialogue. In U. Neisser (Ed.), The Perceived Self (pp. 290–309). Cambridge, MA: MIT Press.
Kandel, E. R., Barres, B. A. & Hudspeth, A. J. (2012). Nerve cells, neural circuitry, and behavior. In E. R. Kandel, J. H. Schwartz, T. M. Jessell, S. A. Siegelbaum and A. J. Hudspeth (Eds.), Principles of Neural Science, 5th edition (pp. 21–37). New York: McGraw-Hill.
Karmiloff-Smith, A. (2015). An alternative to domain-general or domain-specific frameworks for theorizing about human evolution and ontogenesis. AIMS Neuroscience, 2, 91–104.
Kinsbourne, M. & Jordan, J. S. (2009). Embodied anticipation: A neurodevelopmental interpretation. Discourse Processes, 46, 103–126.
Klin, A., Lin, D. J., Gorrindo, P., Ramsay, G., & Jones, W. (2009).
Two-year-olds with autism orient to nonsocial contingencies rather than biological motion. Nature, 459, 257–261.
Kobayashi, H. & Kohshima, S. (1997). Unique morphology of the human eye. Nature, 387, 767–768.
———. (2001). Unique morphology of the human eye and its adaptive meaning: Comparative studies of external morphology of the primate eye. Journal of Human Evolution, 40, 419–435.
Leavens, D. A., Sansone, J., Burfield, A., Lightfoot, S., O’Hara, S. & Todd, B. K. (2014). Putting the “joy” in joint attention: Affective-gestural synchrony by parents who point for their babies. Frontiers in Psychology, 5, 1–7.
Lerner, R. M. & Benson, J. B. (Eds.) (2013). Embodiment and Epigenesis: Theoretical and Methodological Issues in Understanding the Role of Biology within the Relational Developmental System. Volume 44 of Advances in Child Development and Behavior. New York: Academic Press.
Leslie, A. M., Friedman, O. & German, T. P. (2004). Core mechanisms in ‘theory of mind.’ Trends in Cognitive Sciences, 8, 528–533.
Lewontin, R. C. (2001). Gene, organism and environment. In S. Oyama, P. E. Griffiths and R. D. Gray (Eds.), Cycles of Contingency: Developmental Systems and Evolution (pp. 55–66). Cambridge, MA: MIT Press. (Original work published 1983)
Lickliter, R. & Honeycutt, H. (2010). Rethinking epigenesis and evolution in light of developmental science. In M. Blumberg, J. Freeman and S. Robinson (Eds.), Oxford Handbook of Developmental Behavioral Neuroscience: Epigenetics, Evolution, and Behavior (pp. 30–47). New York: Oxford University Press.
———. (2013). A developmental evolutionary framework for psychology. Review of General Psychology, 17, 184–189.
Lillard, A. S. & Erisir, A. (2011). Old dogs learning new tricks: Neuroplasticity beyond the juvenile period. Developmental Review, 31, 207–239.
Linquist, S. & Rosenberg, A. (2007). The return of the Tabula Rasa. Philosophy and Phenomenological Research, 74(2), 476–497.
Mareschal, D., Johnson, M. H., Sirois, S., Spratling, M.
W., Thomas, M.S.C. & Westermann, G. (2007). Neuroconstructivism: How the Brain Constructs Cognition (Vol. 1). New York: Oxford University Press.
McQuaid, N., Bibok, M. & Carpendale, J.I.M. (2009). Relationship between maternal contingent responsiveness and infant social expectation. Infancy, 14, 390–401.
Mead, G. H. (1934). Mind, Self and Society. Chicago: University of Chicago Press.
Meaney, M. J. (2010). Epigenetics and the biological definition of gene x environment interactions. Child Development, 81, 41–79.
Meltzoff, A. N. (2011). Social cognition and the origin of imitation, empathy, and theory of mind. In U. Goswami (Ed.), Wiley-Blackwell Handbook of Childhood Cognitive Development, 2nd edition (pp. 49–75). Malden, MA: Wiley-Blackwell.
Meltzoff, A. N., Gopnik, A. & Repacholi, B. M. (1999). Toddlers’ understanding of intentions, desires, and emotions: Explorations of the dark ages. In P. D. Zelazo, J. W. Astington and D. R. Olson (Eds.), Developing Theories of Intention (pp. 17–41). Mahwah, NJ: Erlbaum.
Messinger, D. & Fogel, A. (2007). The interactive development of social smiling. In R. Kail (Ed.), Advances in Child Development and Behavior, 35 (pp. 327–366). Oxford: Elsevier.
Mountcastle, V. B. (1978). An organizing principle for cerebral function. In G. M. Edelman and V. B. Mountcastle (Eds.), The Mindful Brain (pp. 7–50). Cambridge, MA: MIT Press.
———. (1995). Evolution of ideas concerning the function of the neocortex. Cerebral Cortex, 5, 289–295.
Müller, U. & Carpendale, J.I.M. (2004). From joint activity to joint attention: A relational approach to social development in infancy. In J.I.M. Carpendale and U. Müller (Eds.), Social Interaction and the Development of Knowledge (pp. 215–238). Mahwah, NJ: Lawrence Erlbaum Associates.
Mumford, D. (1992). On the computational architecture of the neocortex II: The role of cortico-cortical loops. Biological Cybernetics, 66, 241–251.
Olson, C. R. & Colby, C. L. (2012). The organization of cognition. In E. R. Kandel, J. H. Schwartz, T. M. Jessell, S. A. Siegelbaum and A. J.
Hudspeth (Eds.), Principles of Neural Science, 5th edition (pp. 392–411). New York: McGraw-Hill.
Onishi, K. & Baillargeon, R. (2005). Do 15-month-old infants understand false beliefs? Science, 308, 255–258.
Overton, W. F. (2015). Processes, relations and relational-developmental-systems. In W. F. Overton and P.C.M. Molenaar (Eds.), Theory and Method. Vol. 1, Handbook of Child Psychology and Developmental Science, 7th edition (pp. 9–62). Hoboken, NJ: Wiley.
Oyama, S., Griffiths, P. E. & Gray, R. D. (2001). Introduction: What is developmental systems theory? In S. Oyama, P. E. Griffiths and R. D. Gray (Eds.), Cycles of Contingency: Developmental Systems and Evolution (pp. 1–11). Cambridge, MA: The MIT Press.
Piaget, J. (1965). The Moral Judgment of the Child. New York: The Free Press. (Original work published 1932)
Pinker, S. (1997). How the Mind Works. New York: W. W. Norton & Company.
———. (2002). The Blank Slate: The Modern Denial of Human Nature. USA: Penguin Books.
Portmann, A. (1990). A Zoologist Looks at Humankind. New York: Columbia University Press. (Original work published 1944)
Reddy, V. (2008). How Infants Know Minds. Cambridge, MA: Harvard University Press.
Reddy, V., Markova, G. & Wallot, S. (2013). Anticipatory adjustments to being picked up in infancy. PLOS One, 8, 1–9.
Rizzolatti, G. & Kalaska, J. F. (2012). Voluntary movement: The parietal and premotor cortex. In E. R. Kandel, J. H. Schwartz, T. M. Jessell, S. A. Siegelbaum and A. J. Hudspeth (Eds.), Principles of Neural Science, 5th edition (pp. 865–893). New York: McGraw-Hill.
Sanes, J. R. & Jessell, T. M. (2012). Experience and the refinement of synaptic connections. In E. R. Kandel, J. H. Schwartz, T. M. Jessell, S. A. Siegelbaum and A. J. Hudspeth (Eds.), Principles of Neural Science, 5th edition (pp. 1259–1283). New York: McGraw-Hill.
Saunders, P. T. (2013). Evolutionary psychology: A house built on sand. In R. M. Lerner and J. B.
Benson (Eds.), Embodiment and Epigenesis: Theoretical and Methodological Issues in Understanding the Role of Biology within the Relational Developmental System. Volume 44 of Advances in Child Development and Behavior (pp. 257–284). New York: Academic Press.
Scheler, M. (1954). The Nature of Sympathy (translated by P. Heath). Hamden, CT: Archon Books. (Original work published 1913)
Seger, C. A. & Miller, E. K. (2010). Category learning in the brain. Annual Review of Neuroscience, 33, 203–219.
Senju, A. & Johnson, M. H. (2009). The eye contact effect: Mechanisms and development. Trends in Cognitive Sciences, 13, 127–134.
Shanker, S. G. (2004). Autism and the dynamic developmental model of emotions. Philosophy, Psychiatry & Psychology, 11, 219–233.
Shouval, H. Z., Wang, S. S.-H. & Wittenberg, G. M. (2010). Spike timing dependent plasticity: A consequence of more fundamental learning rules. Frontiers in Computational Neuroscience, 4(19), 1–13.
Slavich, G. M. & Cole, S. W. (2013). The emerging field of human social genomics. Clinical Psychological Science, 1, 331–348.
Song, S., Miller, K. D. & Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9), 919–926.
Spaemann, R. (2006). Persons: The Difference Between ‘Someone’ and ‘Something.’ New York: Oxford University Press. (Original work published 1996)
Suttie, I. D. (1935). The Origins of Love and Hate. Harmondsworth, Middlesex: Penguin Books.
Tomasello, M. (2009). Why We Cooperate. Cambridge, MA: MIT Press.
———. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press.
Tomasello, M. & Carpenter, M. (2007). Shared intentionality. Developmental Science, 10, 121–125.
———. (2013). Dueling dualists: Commentary on Carpendale, Atwood, and Kettner. Human Development, 56, 401–405.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28, 675–735.
Tomasello, M., Hare, B., Lehmann, H. & Call, J. (2007). Reliance on head versus eyes in the gaze following of great apes and human infants: The cooperative eye hypothesis. Journal of Human Evolution, 52, 314–320.
Turkewitz, G. & Kenny, P. A. (1982). Limitations on input as a basis for neural organization and perceptual development: A preliminary theoretical statement. Developmental Psychobiology, 15, 357–368.
Uithol, S.
& Paulus, M. (2014). What do infants understand of others’ actions? A theoretical account of early social cognition. Psychological Research, 78, 609–622.
Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
Warneken, F. & Tomasello, M. (2009). The roots of human altruism. British Journal of Psychology, 100, 455–471.
Wittgenstein, L. (1968). Philosophical Investigations. Oxford: Blackwell.
Woodward, A. L. (2013). Infant foundations of intentional understanding. In M. R. Banaji & S. A. Gelman (Eds.), Navigating the Social World: A Developmental Perspective (pp. 75–80). Oxford: Oxford University Press.
Young, M. P. (1993). The organization of neural systems in the primate cerebral cortex. Proceedings of the Royal Society of London B: Biological Sciences, 252(1333), 13–18.
Zahavi, D. (2008). Simulation, projection and empathy. Consciousness and Cognition, 17, 514–522.
Zillmer, E. A., Spiers, M. V. & Culbertson, W. C. (2008). Principles of Neuropsychology, 2nd edition. Belmont, CA: Thomson/Wadsworth.
12

PLURALISM, INTERACTION, AND THE ONTOGENY OF SOCIAL COGNITION

Anika Fiebich, Shaun Gallagher, and Daniel D. Hutto
This chapter aims to provide an overview of the development of a variety of socio-cognitive processes and procedures1 throughout ontogeny. For many years the received view of social cognition took it for granted that social cognition was based, wholly or primarily, in mindreading abilities – the ability to attribute contentful mental states to others by deploying folk psychological theories or by running simulation routines on one’s own mental states. From the late 1970s through the 1990s it would be fair to say that the received view was that everyday social cognition was always based in mental state attribution, achieved by theorizing about other minds, simulating other minds, or some combination of the two. Today research is generally more pluralist in outlook, allowing that other kinds of non-attributional processes and practices can also play an important part in social cognition and, in some cases, are sufficient for it.

Some pluralist approaches do more than add to the existing options; they insist that we rethink the nature and basis of social cognition more fundamentally. For example, some versions of pluralism allow that theoretical inference and simulation can sometimes do important work in enabling an understanding of others, but understand the role that such theorizing and simulating plays in ways that are quite unlike traditional ‘theory of mind’ accounts of social cognition. In promoting pluralism of this stripe, our analysis in what follows lays stress on the interactive character of socio-cognitive skills and the intersubjective contexts in which they are acquired. In particular, we focus on processes and procedures that enable us to understand the behavior of others in terms of person-specific character traits, habits, and attitudes.
These processes and procedures range from person-specific associations that shape infants’ interactive behavior early in ontogeny to person-focused narratives that may presuppose mastery of language and special discursive practices. The latter capacities come into play when interpreting or explaining reasons – when we make sense of why another person has done something in light of his or her personality and personal history. All in all, we present a pluralist vision of social cognition – one that assumes that, rather than relying on a single or even default procedure in social cognition, individuals use a variety of methods to keep track of and understand other minds. We explore the idea that, as a rule of thumb, individuals use the processes and procedures that are cognitively least effortful for them, as appropriate to context. We conclude with a discussion of the costliness of various socio-cognitive processes and procedures, suggesting that those that emerge at the beginning
of ontogeny are cognitively ‘cheap’ and continue to play an important role in everyday social understanding in adulthood.
Traditional accounts of social cognition and alternatives

How do we understand the minds of others? What socio-cognitive processes come into play, and what procedures do we employ, when we seek to understand what other people think and feel in a given situation? Traditionally, the choice was between two main options: Theory Theory (TT) and Simulation Theory (ST), or some combination thereof. According to TT, understanding other minds requires employing a folk psychological theory containing laws about how mental states interrelate and motivate agents to act. Proponents of the empiricist version of TT (e.g., Wellman and Gopnik 1992) hold that we learn, modify, and revise our folk psychological theories in the course of development and interaction with the environment. Proponents of the nativist version of TT (e.g., Baron-Cohen 1995; Segal 1996) propose that distinct mindreading modules are innate but emerge on their own developmental timetable. ST, in contrast, claims that we put ourselves imaginatively ‘into the shoes’ of another person and simulate the thoughts and feelings we would experience in his or her situation (e.g., Goldman 2006) – thus we directly manipulate our own mental states, obviating the need to consult any laws about mental states. Despite accounting for different socio-cognitive procedures, TT and ST share a number of assumptions about social understanding: advocates of both mindreading theories assume that social cognition always, essentially, depends on the attribution of mental states.
Traditionally, proponents of TT and ST also subscribed to an 'unobservability principle' about mental states – which holds that other people's mental states – in particular beliefs and desires – are explanatory constructs and thus not directly perceivable (see Krueger 2012 for a discussion of this principle).2 Versions of TT and ST that accept this idea introduce a gap between ourselves and others that observers need to bridge by, say, inferring the thoughts and feelings of other people via theorizing or simulating processes. Traditional mindreading theories thus promote the idea of social understanding as an observational enterprise rather than one involving interaction. Hence, the majority of early studies investigating the development of children's understanding of other people's behavior in terms of mental states employed observational paradigms (see Wellman et al. 2001 for an overview). Relatedly, given this focus, classic mindreading theories had little to say about the role that knowledge of person-specific characteristics or attributes – such as character traits or habits – plays in understanding others. Times have changed. In recent years, several authors have claimed that mindreading theories are in fact compatible with a more interactive understanding of social cognition that tolerates the possibility of direct perception (Herschbach 2008; Lavelle 2012). For example, Carruthers (2011) – despite the fact that most theory theorists, including Carruthers himself, had been describing social cognition in terms of making inferences "when observing the other's behavior" (Carruthers 2002, p. 666) – tells us that "both Gallagher and Hutto are mistaken . . . in construing [the traditional mindreading theories] as purely third-person and observer based. . . . For everyone thinks that the primary use of mindreading is in face-to-face interactions with others" (p. 231).
Defenders of mindreading theories who allow for the direct perception of others' mental states still hold that the sub-personal processes that enable such face-to-face interactions are inferential – theoretical or simulative – in character. Arguably, phenomenological considerations will not settle this debate; to do so requires taking the dispute to a whole new level, namely, the sub-personal level.3
Anika Fiebich et al.
Proponents of mindreading theories of social cognition hold that, even though various non-mindreading processes and procedures can play roles in everyday social understanding, mindreading is always required. This is true even of hybrid theories that combine elements of TT and ST. For example, hybrid theories that take TT to be core hold that theoretical inference is always involved in social understanding – because we must consult the core laws of folk psychology, even if doing so needs to be supplemented by simulation procedures (e.g., Botterill and Carruthers 1999). Conversely, hybrid theories that take ST to be core hold that simulation is always in operation when understanding others, even if it needs to be supplemented or supported by theory (e.g., Goldman 2006). Pluralists reject such necessity claims, but they also reject the weaker assumption that there is a default procedure that is consistently deployed in every social cognitive task. They hold that people use a variety of socio-cognitive procedures and that no single procedure is typically employed as a default whenever attempts are made to understand other minds. On this view, social cognition involves a combination of capabilities or processes, "one of which may be appropriate or practical for one kind of situation, and another for a different kind of situation" (Gallagher 2015, p. 18); there is no default procedure that we tend to use in every situation (see, e.g., Fiebich and Coltheart 2015). Accordingly, we are quite flexible in how we deal with and understand others: should one procedure fail, for example, other procedures may come into play. Despite this degree of liberality, pluralism must not be confused with a smorgasbord approach to social cognition. Not just anything goes. For example, it is difficult (indeed impossible without making some strained theoretical adjustments) to accommodate standard TT and ST accounts – whether pure or hybrid – within a pluralist approach.
Pluralists can allow for theory or simulation to play a role in acts of social cognition. Yet that is quite different from assuming that the specialized cognitive mechanisms that TT or ST take to underwrite different forms of mindreading exist or are deployed, separately or in combination, in certain acts of social cognition.4 In rejecting the standard mindreading proposals, the kind of pluralism we endorse differs importantly from integrationist accounts (see, e.g., Bohl and van den Bos 2012). Integrationist accounts seek to reconcile TT, ST, and non-mindreading alternatives in a way that we think is problematic. The competing core assumptions of pure TT and ST, and the way these have to be accommodated in hybrid theories, make it difficult to understand how TT and ST could be combined in a truly integrated way in acts of social cognition. In contrast, theory and simulation might play different roles in social understanding as long as they are not understood under the auspices of TT or ST, which assume the existence of quite distinct kinds of mindreading mechanisms. A softer reading of 'theory' and 'simulation' that makes no such assumption is, by contrast, clearly compatible with a genuinely pluralist perspective. In general, pluralist approaches contend that third-person mental state attributions that may involve theory or simulation do not lie at the heart of everyday social understanding; they often come into play only peripherally, in ambiguous or unfamiliar contexts that leave individuals puzzled by another person's behavior (Gallagher 2001; Fiebich and Coltheart 2015). Other factors – such as trading second-person narratives; being sensitive to environmental contexts, norms, habits, and social conventions; and having knowledge of the character traits of familiar individuals – are far more central (see also Andrews 2012 for a take on pluralism similar to the one offered here).
Such pluralist accounts need to be distinguished from so-called 'multi-system accounts of mindreading' (e.g., Christensen and Michael 2015), which focus particularly on socio-cognitive procedures that rely on the attribution of mental states. Drawing on findings from developmental and social psychology, Fiebich and Coltheart (2015) argue that fluency and cognitive effort are the main factors in determining
which particular socio-cognitive procedure an individual will be prone to use in a given situation of social understanding. Rather than there being a standard default procedure of social understanding used in all contexts, people will – as a rule of thumb – make use of the socio-cognitive procedures that are cognitively least effortful for them in a given context; that is, 'fluency' plays a central role (call this the 'fluency assumption'). This account draws on a definition of 'fluency' in social psychology according to which fluency can be understood as "the subjective experience of ease or difficulty associated with completing a mental task" (Oppenheimer 2008, p. 237). A number of findings from social psychology in other domains (e.g., economic games) have shown that people typically make use of the cognitive procedure that is least effortful for them in a given context – unless they experience cognitive strain, which may push them to higher-order levels of reasoning (see Kahneman 2011 for an overview). Fiebich and Coltheart (2015) propose that the same may hold true in social cognition (see Fiebich 2014 for a discussion of the role of fluency in early social cognition).
Varieties of social understanding in ontogeny

What part might social interactions themselves play in the acquisition and use of various socio-cognitive procedures throughout ontogeny? Drawing on previous work, we present a refined pluralist account of social cognition that emphasizes the role of intersubjective interaction (Fiebich 2015; Gallagher 2001, 2005, 2007; Gallagher and Hutto 2008). There is ample evidence that children acquire various socio-cognitive skills that allow them to understand other people's behaviors, actions, intentions, and emotions. Moreover, there is reason to think that the great bulk of these basic socio-cognitive processes do not obviously or necessarily rely on any kind of mental state attribution. Interaction Theory (IT) was inspired by Trevarthen's (1979; 1998; Trevarthen and Aitken 2001) notions of primary and secondary intersubjectivity: it focuses on basic embodied, sensorimotor capacities for interacting with others and on the relevance of environmental contexts in order to account for non-linguistic and pre-reflective aspects of social understanding that are present early in ontogeny (Gallagher 2001, 2004, 2012). Hutto (2008) develops a view that focuses on linguistic and narrative competencies, arguing that the folk psychological narrative practices that people typically use when understanding other people's behavior in terms of beliefs and desires build upon socially supported story-telling activities and need to be understood as skillful know-how.
Combining the key insights of these approaches, Gallagher and Hutto (2008) offered an integrated embodied and narrative practices account of social cognition which distinguished between three different sets of socio-cognitive skills or competencies acquired throughout ontogeny: (i) primary intersubjective processes and skills in dyadic relations that presuppose sensitivity towards embodied emotions and interactive behavioral patterns; (ii) secondary intersubjective processes that involve triadic relations of joint attention and interaction, and allow for understanding embodied intentions in social (normative) and pragmatic contexts; and (iii) communicative and narrative skills that allow for sophisticated triadic relations involving linguistic and narrative competencies for understanding behaviors and reasons. In the present chapter we elaborate this account in the spirit of a pluralist approach to social cognition. This account is enhanced and made more precise by drawing upon literature from developmental and social psychology that provides details about the ages at which infants start to understand other people's behavior, not only on the basis of interaction and communicative-narrative competencies, but also by associating particular traits, attitudes, habits, and the like with specific familiar persons or members of specific social groups (see Fiebich and Coltheart 2015 for a discussion). Personal and social relationships between the interacting partners shape
interaction and social cognition in significant ways early in ontogeny as primary and secondary intersubjective relations develop. Moreover, once communicative and narrative competencies are acquired, the knowledge an individual has about another's person-specific characteristics and history will inform narrative practices in ways that ground our understanding of another's reasons for action (see Hutto 2008 for a discussion). After considering these issues, we conclude with a general discussion about the role of interaction and fluency in social cognition.

Primary intersubjectivity

Primary intersubjectivity (Trevarthen 1979; Trevarthen and Aitken 2001) is based upon sensorimotor abilities and perceptual capacities in processes of strong interaction, where 'strong interaction' is understood as

a co-regulated coupling between at least two autonomous agents, where (i) the co-regulation and the coupling mutually affect each other, constituting a self-sustaining organization in the domain of relational dynamics, and (ii) the autonomy of the agents involved is not destroyed.
(De Jaegher et al. 2010, pp. 442–443)

Social interaction is thus characterized as a reciprocal, two-way enterprise in which two persons enter into a dynamic coupling in which each affects the other's actions and experiences. Human beings experience primary intersubjectivity from birth onwards; it is characterized by infants' sensitivity to, and emotional regulation of, the other's bodily gestures, movements, and facial expressions perceived in a dyadic interaction. Trevarthen contends that even newborns are engaged in dyadic interactions (mostly with their mother) in an embodied and stimulus-dependent way, as evidenced by studies of neonatal imitation (Meltzoff and Moore 1977; Field et al. 1982).
Since neonates imitate gestures like tongue protrusion and mouth openings only when performed by persons, not by mechanical devices, neonatal imitation has been regarded as a genuinely social-interactive response (Legerstee 1991a, 1991b). Numerous studies show that infants strive for emotional engagement and contingent patterns of interaction early in ontogeny. For example, a recent follow-up study of the so-called 'still face paradigm', initially conducted by Tronick et al. (1978), revealed that even 4-day-old infants show patterns of distress when their social interaction partner disrupts the emotional connection in an ongoing interaction by meeting them with a neutral instead of an emotional and responsive facial expression (Nagy 2008). Two- to three-month-olds also display signs of distress when they are faced with a time-delayed video-taped interactive response in which the timing and embodied dynamics of interaction are missing (Murray and Trevarthen 1985). Neuroscientific research shows that being engaged in social interaction suffices to activate reward-related brain regions (Schilbach et al. 2006; Schilbach et al. 2013), and it is likely that such neuronal activity can already be found in young infants and that the experience of reward comes along with phenomenal experiences of pleasure. Other studies show that 3-month-olds are sensitive not only towards embodied emotions but to the particular dynamics of interaction and to goal-directed movements of another person (Sommerville et al. 2005). At that age, infants respond differentially to intentional (imperfectly contingent) relations versus causal (perfectly contingent) relations. Initially infants show a clear preference for looking at perfectly contingent images, but this preference changes around month 5 to images that are highly but imperfectly contingent (Watson 1979;
Bahrick and Watson 1985). Mirror neuron activation may be involved when infants observe goal-directed behavior such as grasping, and there is evidence for such activation at 6 months (Nyström 2008). Even earlier, however, 3-month-olds exhibit a sensitivity towards the structure of interactive play routines (Fantasia et al. 2014). Infants not only prefer to interact with animate over inanimate entities but also favor interactions with familiar individuals or members of familiar social groups. Even newborns are sensitive towards other people's person-specific identity, as indicated by their ability to recognize the faces of the persons they interact with most (e.g., their mother), as long as these persons' hairlines and outer facial contours are visible and not covered by a scarf (Pascalis et al. 1995). From month 3 onwards, infants are sensitive to the social identity of an individual, i.e., social group membership, and prefer to interact with people with own-race faces (Kelly et al. 2005). That is, early in ontogeny, personal and social relationships between the interaction partners shape (preferences for) social interaction itself. As we will see, associations with person-specific or group-specific traits and habits may be formed in further dimensions of intersubjectivity. In general, the socio-cognitive processes that come into play in primary intersubjective relations rely essentially on an emotional sensitivity to the gestures of the interaction partner and on the experienced quality of dyadic interactions, which depends on the interaction partner's person-specific or social identity as well as on the experienced contingency of the interaction dynamics.

Secondary intersubjectivity

In the first year of life, secondary intersubjectivity, characterized by the "systematic combining of purposes directed to objects with those that invoked interest and interaction from a companion" (Trevarthen 1998, p. 31), emerges.
The capacity for joint attention allows infants to enter triadic interactions, in which they can learn what things are called and what they are used for in situations of 'social referencing' (Striano and Rochat 2000). The defining feature of secondary intersubjectivity is that objects or worldly events become a focus between people; they are communicated about, and shared reference is made to the things in the surrounding world (Gallagher 2004; Hobson 2002). It has traditionally been assumed that the capacity for joint attention marks a 'revolution' in the ontogenetic development of human infants around month 9 (Tomasello 1999). However, recent studies suggest that infants are engaged in triadic interactions involving joint attention even earlier (see Reddy 2008 for a discussion). For example, Hamlin et al. (2008) found that 7-month-old infants prefer and reach for toys that they have previously seen an experimenter grasping. Notably, this study was carried out using an interactive experimental design. That is, the experimenter attracted the infant's attention:

she first ensured that the infant looked at each of the two toys, snapping behind each one if necessary to direct the infant's attention to it. Then, she called the infant's attention to herself, briefly making eye contact, and demonstrated the grasp or back of hand action, holding eye and hand contact with the toy for 5 seconds.
(p. 488)

All of this was accompanied by verbal expressions such as 'Look!' or 'Oooh!'. As pointed out by Csibra (2010), communicative settings are established by ostensive cues such as waving, calling, or making eye contact. In interactive settings, infants enter a social community by learning what an object means 'to us' – how 'we' use it or value it. These situations often involve social referencing, in which infants refer to another person, checking their reaction or facial expression, to learn
about the values and functions of objects. In general, sense-making activities can be understood as lying on a continuum; for example, some situations of social referencing are instances in which one agent learns from the other in an interactive context. In other interactive situations, in contrast, infants may be engaged in joint sense-making activities in which both agents make sense of the object together (De Jaegher and Di Paolo 2007; De Jaegher et al. 2010). That is, in interactive settings, infants may learn the values and functions of objects in the world. However, as pointed out by Egyed et al. (2013), we need to distinguish in such experimental paradigms between interactional contexts (where there are second-person engagements between at least two people) and observational contexts (where one person takes a third-person observational stance towards another). In both contexts, the knowledge that observing or interacting agents may have of another person's particular preferences, traits, and behaviors may play a role (Fiebich and Coltheart 2015). Egyed et al. (2013) conducted a study that compared infants' learning in interactive versus observational settings; they found that infants will hand an unknown adult one of two objects significantly more often if they have seen a positive emotional expression made by another adult towards this particular object in a communicative setting. The infant acts on a generalization about the value of the object itself (see also Moore, ch.2 this volume, for a discussion of social learning and natural pedagogy). Notably, this effect has not been found in observational contexts. When infants observe, from a third-person perspective, another person grasping one of two objects, they associate the preference with that very person rather than understanding the object as being likeable per se. No generalization is made about the object's value for others.
Such associations may shape infants' expectations of that particular person's behavior (but not of the behavior of others). For example, 6-month-olds may be surprised (as indicated by longer looking times) when they observe a person grasping an object he or she has not previously grasped (Woodward 1998). Finally, such associations may shape infants' interactive behavior with that very person; for example, they may hand over the object they expect the other person to prefer in helping contexts. In general, this socio-cognitive process should be understood as a type of associative learning. Around age 2, children associate particular behaviors with members of specific social groups, and they use that knowledge when engaged in pretend play such as 'mother soothes her baby' (O'Reilly and Bornstein 1993). Later in ontogeny, when linguistic skills are acquired, associations may become explicit and enter into the narratives we generate about others (see next section).5 Often the pragmatic context plays a central role in understanding another person's behavior in secondary intersubjective relations. Before one year of age, children start to attend to the pragmatic context to understand an agent's embodied intentions. Understanding social rules and roles may facilitate behavior understanding in specific play contexts. In other play contexts, knowledge of rules acquired via social referencing, or understandings of the situation built on the basis of associative learning, may support social understanding. From age 2 onwards, children comprehend rules and protest if another person does not observe these rules in the appropriate context, for example, when breaking the rules of a game (Rakoczy et al. 2008). Even before 3 years of age, in secondary intersubjective relations, children take into account socio-normative and pragmatic aspects of the contexts in which social interactions take place.
Between ages 3 and 4, children build scripts that classify the typical course of an event such as a birthday party (Fivush and Hamond 1990).

Communicative and narrative practices

According to empiricist theory theorists, individuals understand (or can explain) another person's behavior in terms of beliefs and desires via folk psychological theories that rely essentially
on mental state term acquisition and conceptual change. For example, although 2-year-old children understand that other people have their own experiences (e.g., desires or emotions such as wanting or fearing things), they do not yet have a concept of belief. That is, at this age children understand other people's behavior on the basis of a 'desire psychology' that includes an elementary conception of simple desires and emotions. Later on, children acquire a 'belief-desire psychology' according to which beliefs and desires are thought to jointly determine actions (Wellman and Gopnik 1992). It was assumed on this empiricist TT view that a full-fledged folk psychological understanding of other people's true and false beliefs is acquired between ages 4 and 5 (see Wellman et al. 2001 for a review). Yet recent studies suggest that such understanding is not acquired until the sixth year of life, for only then do children show signs of understanding true beliefs (Fabricius et al. 2010).6 As pointed out by Gopnik (1998), the structural features of folk psychological theories include (i) abstractness, (ii) coherence, (iii) causality, and (iv) ontological commitment – features also present in other kinds of theories such as causal theories or behavioral theories (see Baron-Cohen et al. 1985 for a discussion). In general, folk psychological rules have been understood as psychological generalizations. They have been construed as laws employed across many contexts, although their adequacy depends on their being hedged by ceteris paribus clauses; for example, if A wants to drink a beer and believes that there is a bottle of beer left in the fridge, then, ceteris paribus, A will go to the fridge to get that bottle. On an alternative view, it seems clear that social interaction is essential for language acquisition and for the development of any folk psychological competence that involves explicit and articulate mastery of mental state terms.
Once children have acquired linguistic skills, they are capable of engaging in verbal communication and narrative practices, i.e., story-telling activities that may refer to the reasons why an agent has behaved in a certain way. These are folk psychological narratives (Hutto 2008). Narrative practices appear to play a major part in the development of this competence. Taumoepeau and Ruffman (2006), for example, found in a longitudinal study that maternal 'mentalistic narrative practice', which involves explicit reference to another person's mental states, directed at 15- and 24-month-olds correlated with the children's later mental state language and emotion understanding (see also Slaughter et al. 2007 for similar findings). However, whereas 'mentalistic narrative practices' are prevalent in action explanations in Western cultures such as Germany or the US, members of Eastern cultures such as Japan or China are prone to engage in 'behavioral-contextual narrative practices', which involve explicit reference to the normative behavior of another person in a specific socio-situational context. Indeed, children from at least some Eastern cultures pass tests of the development of their understanding of other people's behavior in terms of beliefs and desires later than their Western peers (Naito and Koyama 2006; Liu et al. 2008; Lavelle, ch.10 this volume). This may be due, at least partially, to these culture-specific divergences in narrative practices (see Fiebich, in press, for a discussion).7 Various experimental studies investigate how children's understanding of other people's actions as guided by mental states unfolds throughout ontogeny. In one version of the so-called 'false belief task', for example, children observe the story character Maxi putting a chocolate bar into cupboard x. Then, in his absence, his mother displaces the chocolate bar from cupboard x to cupboard y.
When Maxi returns, the children are asked where Maxi will look for his chocolate bar. This task is typically passed by 4 or 5 years of age, and only those children who are capable of distinguishing Maxi's (false) belief ('the chocolate bar is in x') from their own belief ('the chocolate bar is in y') point correctly to the cupboard where Maxi falsely believes the chocolate bar to be located (see Wellman et al. 2001 for a review). Of course, TT explanations of the findings from these traditional, elicited false belief tasks presuppose the correctness of
the theory theory (see, e.g., Stich and Nichols 1992, p. 66). However, in such cases it is likely that children do not just predict an agent's behavior by applying folk psychological rules about how the agent's mental states motivate the agent to act; they may also need to invoke behavioral rules such as 'an agent who has not perceived an unexpected change in her environment will believe that her environment has remained the same' (see Perner 1999, p. 412, for a discussion). We argue that this experimental setting cannot count as a paradigm example of everyday social cognition. On our view, our facility with mental state explanations needs to be understood as a kind of skillful 'knowing-how' rather than a theory-based 'knowing-that' (see Hutto 2008 for a discussion). Accordingly, the mature folk psychological competence that we gain through engaging in narrative practices is not essentially theoretical in nature. On this view, folk psychology is the exercise of a special competence – a narrative practice – of making sense of another person's action by attributing reasons to her in a concrete socially structured context. In general, attributing 'beliefs' and attributing 'reasons' need to be conceptually distinguished from each other. The former is necessary but not sufficient for the latter. Thus while belief attribution might be achieved by simple inference, explaining reasons is more complex: it requires knowing how propositional attitudes, such as beliefs and desires, interrelate. The Narrative Practice Hypothesis contends that the latter knowledge is acquired through mastery of narrative practices (Hutto 2007; 2008). Considerations about autism lend some support to the idea that folk psychological competence requires more than, or something other than, theory. People with autism are typically impaired in understanding other people's behavior in terms of beliefs and desires.
Yet some manage to pass false belief tasks by explicitly learning the required folk psychological rules and applying them in a thoroughly conscious and deliberative manner. Even so, they remain impaired in applying such rules flexibly across the many and varied situations of social understanding in everyday life. The application of general rules is especially problematic in interactive contexts, in which other people's embodied intentions and feelings are often ambiguous and require quick reactions. Applying explicit folk psychological rules in such circumstances can be difficult or even impossible for autistic people (Zahavi and Parnas 2003; Gallagher 2004). This implies that normally developing individuals, who make such attributions fluently and fluidly, possess a competence that involves more than the possession of general rules. Understanding a person's reasons can also involve generating and consuming narratives in which mental states do not feature at all. In many cases we understand others by focusing on their particular attributes, history, roles, or situations. We associate particular character traits and habits with particular individuals or social groups that we are familiar with. Any and all of these factors may be highlighted in our narratives when offering what Malle et al. (2007) have called 'causal history explanations', such as "Bill gave a large tip because he is generous". The important point is that attributing generosity to Bill may help us to understand him even though we are not thereby attributing any mental states to him (Fiebich and Coltheart 2015). Moreover, there are serious limits to understanding others if we call on only general, theoretical knowledge. For example, when trying to understand why Laura is going to India, we might need to appeal to generalizations informed by general knowledge if we have never met Laura or don't know her personally.
For example, we might conjecture: "Laura, like many young, American college students, may believe that India is a romantic place and that she can learn about Eastern meditation practices there and have an adventure. So Laura might desire to go to India for such reasons." At best, such speculation will yield only a thin and unreliable understanding of why Laura may have taken her trip. It is more secure to draw on our personal knowledge of Laura to better understand her possible reasons. For example: "Laura is generally motivated to help people. She may be going to India to work in impoverished villages." But this too is
speculative, even if grounded in our knowledge about her. It is far more secure still, although by no means infallible, to rely on what Laura has told us about her reasons for going – that is, to attend to her first-person narrative. Of course, she may not be able to supply her true reason for acting: Laura may not know her reasons, she may lie, or she might be deceiving herself. Even so, it is still the case that asking her directly for her narrative is the richest, and most epistemically secure, way to get at her reasons (see Hutto 2004; Gallagher and Hutto 2008 for a discussion).
Summary and outlook

In this chapter, we have outlined a complex, pluralist account of social cognition. On this account, understanding other people’s behavior typically relies on developing a wide variety of diverse socio-cognitive processes, practices, and procedures. In any particular situation of social understanding, why is one process, procedure or practice employed rather than another? One possibility is that, as a rule of thumb, individuals are prone to make use of those socio-cognitive processes or procedures that are cognitively least effortful to them in a given context (Fiebich and Coltheart 2015). Another possibility is that the normative constraints (Andrews, in press) or other circumstantial factors of the given situation elicit one (or a combination of) socio-cognitive process(es) or procedure(s) rather than another. In identifying the various processes, procedures, and practices that may be involved in social cognition, we gave an overview of how and when some of these processes are likely to emerge throughout human ontogeny. In particular, we emphasized the various roles that social interaction may play in the ontogenetic development of such processes. More generally, we hypothesized that intersubjective processes that emerge near the beginning of ontogeny are also the least cognitively demanding. On this view, primary and secondary intersubjectivity continue to constitute our primary and most pervasive way of understanding other minds throughout the lifespan (Gallagher 2001). Empirical evidence for this can be found in behavioral studies of adult embodied interactions and joint actions in work situations (e.g., Lindblom 2015), and the role of gestures in communicative actions (e.g., Kendon 1990). Not just infants but also adults are able to perceive emotions and intentions on the basis of sensitivity to minimal behavioral information (see Gallagher 2012 for further discussion).
Also, rather than relying on the costly set of meta-cognitive operations that are likely involved in explicit third-person theoretical inference or simulation, people primarily understand the actions of others in a narrative fashion that involves attending to social roles, group traits, or the history and attributes of particular individuals (Hutto 2008). In such cases it may often be easier to draw on what we know about a person’s character traits or behaviors, gained via associations, which are assumed to be cognitively cheaper than trying to infer reasons from actions using general principles (Fiebich and Coltheart 2015). Research on adult social cognition speaks in favor of the ‘fluency assumption’. For example, Malle et al. (2007) found that people typically refer to beliefs and desires in ‘reason explanations’ when explaining the behavior of a foreigner, but refer to traits or other person-specific characteristics (e.g., a person’s attitudes, habits, and so on) in ‘causal history explanations’ when they are familiar with the individual whose behavior they need to explain. In line with the fluency assumption, making use of already established associations ought to be cognitively cheaper than, say, engaging in theorizing or simulation routines. Accordingly, appeal to the principle of least cognitive effort may help to explain Malle et al.’s (2007) findings. In line with this, Apperly et al. (2006) found that belief reasoning in adults does not function automatically, whilst a number of studies show that stereotype activation, i.e., the association of traits with members of a particular social group, does (see, e.g., Macrae and Quadflieg 2010, for an overview).
Anika Fiebich et al.
We’ve suggested that social cognition can involve a wide variety of socio-cognitive processes, practices, and procedures, which are acquired in ontogenetic development that involves primary and secondary intersubjective interactions, augmented by mastery of communicative and narrative practices. We rely on these same processes as adults, and we understand others by deploying such processes separately, or in conjunction or combination, depending on the situation. Some cases may involve only interactive, perception-based attending to the other’s embodied movements, gestures, facial expressions, and vocal intonations. Some may require us to focus on the physical, pragmatic, social, or cultural peculiarities of the context. Other cases may require us to appeal to general theoretical knowledge. In others still our knowledge about a particular person may be brought into play or we may need to appeal to a person’s background narrative. Social cognition is clearly not just one thing; and it’s not just a capacity that resides within individuals as individuals. It is always context dependent and draws on a number of capacities that involve the presence and/or participation of others.
Acknowledgments Thanks to Kristin Andrews and Julian Kiverstein for helpful comments on an earlier version of this chapter. This work has been supported by a Humboldt Feodor-Lynen fellowship awarded to Anika Fiebich for research stays abroad in 2015 to collaborate with Shaun Gallagher in Memphis (USA) and Dan Hutto in Wollongong (Australia) on a project “Varieties of Social Understanding: The Role of Interaction”, and by the Humboldt Anneliese Maier Research Prize awarded to Shaun Gallagher.
Notes

1 Methodologically, we distinguish between socio-cognitive processes that occur automatically and typically unconsciously and socio-cognitive procedures that may be subject to conscious and deliberative control.
2 Sometimes such theorists have even gone so far as to openly endorse an ‘inner world hypothesis’ (Wellman 1992).
3 For arguments against the assumptions of mindreading theories about the relevant sub-personal processes, see Gallagher (2015) and Hutto (2015).
4 The oddity of such a possibility is revealed by the fact that even those who favor hybrid TT-ST accounts of some sort do not assume that we have two entirely distinct, yet sufficient, mindreading devices that might be used on different occasions when making sense of others.
5 Note that forming associations about a particular individual needs to be distinguished from having a model of an individual person or group (Newen 2015; Andrews, ch. 7 this volume).
6 Intriguingly, and in line with the traditional findings, false belief tasks seem to be passed by means of belief reasoning already by 4- to 5-year-olds. That children at that age use different methods in false belief tasks compared to true belief tasks can be explained by cognitive dissonance and fluency (see Fiebich 2014 for a discussion).
7 Note, however, that narrative practices that focus on situational factors as opposed to mental states may also refer implicitly to the agent’s actions as being driven by mental states; for example, “She went to the café because [she thinks that] they have the best cappuccino” (Malle et al. 2007 call this an ‘unmarked reason explanation’).
References

Andrews, K. (2012). Do Apes Read Minds? London: MIT Press.
———. (2015). The folk psychological spiral. Southern Journal of Philosophy, 53(S1), 50–67.
———. (2016). Pluralistic folk psychology in humans and other apes. (this volume).
Apperly, I. A., Riggs, K. J., Simpson, A., Chiavarino, C. & Samson, D. (2006). Is belief reasoning automatic? Psychological Science, 17(10), 841–844.
Bahrick, L. R. & Watson, J. S. (1985). Detection of intermodal proprioceptive-visual contingency as a potential basis of self-perception in infancy. Developmental Psychology, 21, 963–973.
Baron-Cohen, S. (1995). Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.
Baron-Cohen, S., Leslie, A. M. & Frith, U. (1985). Does the autistic child have a “theory of mind”? Cognition, 21, 37–46.
Bohl, V. & Van den Bos, W. (2012). Towards an integrative account of social cognition: Marrying theory of mind and interactionism to study the interplay of Type 1 and Type 2 processes. Frontiers in Human Neuroscience, 6(274), 1–15.
Botterill, G. & Carruthers, P. (1999). The Philosophy of Psychology. Cambridge: Cambridge University Press.
Carruthers, P. (2002). The cognitive functions of language. Behavioral and Brain Sciences, 25, 657–726.
———. (2011). The Opacity of Mind: An Integrative Theory of Self-knowledge. Oxford: Oxford University Press.
Christensen, W. & Michael, J. (2015). From two systems to a multi-systems architecture for mindreading. New Ideas in Psychology, 30, 1–17.
Csibra, G. (2010). Recognizing communicative intentions in infancy. Mind & Language, 25, 141–168.
De Jaegher, H. & Di Paolo, E. (2007). Participatory sense-making: An enactive approach to social cognition. Phenomenology and the Cognitive Sciences, 6(4), 485−507.
De Jaegher, H., Di Paolo, E. & Gallagher, S. (2010). Can social interaction constitute social cognition? Trends in Cognitive Sciences, 14(10), 441−447.
Egyed, K., Király, I. & Gergely, G. (2013). Communicating shared knowledge in infancy. Psychological Science, 24(7), 1–6.
Fabricius, W. V., Boyer, T., Weimer, A. A. & Carroll, K. (2010). True or false: Do five-year-olds understand belief? Developmental Psychology, 46, 1402−1416.
Fantasia, V., Fasulo, A., Costall, A. & López, B. (2014). Changing the game: Exploring infants’ participation in early play routines. Frontiers in Psychology, 5(522). doi: 10.3389/fpsyg.2014.00522
Fiebich, A. (2014). Mindreading with ease? Fluency and belief reasoning in 4- to 5-year-olds. Synthese, 191(5), 929–944.
———. (2015). Varieties of Social Understanding. Paderborn: Mentis.
———. (2016). Narratives, culture, and social understanding. Phenomenology and the Cognitive Sciences, 15(1), 135–149. doi: 10.1007/s11097-014-9378-7
Fiebich, A. & Coltheart, M. (2015). Various ways to understand other minds: Towards a pluralistic approach to the explanation of social understanding. Mind and Language, 30(3), 238–258.
Field, T. M., Woodson, R. & Greenberg, R. (1982). Discrimination and imitation of facial expressions by neonates. Science, 218, 179−181.
Fivush, R. & Hamond, N. (1990). Autobiographical memory across the preschool years: Towards reconceptualizing childhood amnesia. In R. Fivush & J. A. Hudson (Eds.), Knowing and Remembering in Young Children (pp. 223–248). New York: Cambridge University Press.
Gallagher, S. (2001). The practice of mind: Theory, simulation, or primary interaction? Journal of Consciousness Studies, 8(5–7), 83–107.
———. (2004). Understanding interpersonal problems in autism: Interaction theory as an alternative to theory of mind. Philosophy, Psychiatry, and Psychology, 11(3), 199–217.
———. (2005). How the Body Shapes the Mind. Oxford: Oxford University Press.
———. (2007). Simulation trouble. Social Neuroscience, 2(3–4), 353–365.
———. (2012). In defense of phenomenological approaches to social cognition: Interacting with the critics. Review of Philosophy and Psychology, 3(2), 187–212.
———. (2015). The new hybrids theories of social cognition. Consciousness and Cognition, 36, 452–465. http://dx.doi.org/10.1016/j.concog.2015.04.002
Gallagher, S. & Hutto, D. (2008). Understanding others through primary interaction and narrative practice. In J. Zlatev, T. Racine, C. Sinha and E. Itkonen (Eds.), The Shared Mind: Perspectives on Intersubjectivity (pp. 17–38). Amsterdam: John Benjamins.
Goldman, A. (2006). Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading. Oxford: Oxford University Press.
Gopnik, A. (1998). The scientist as child. In A. Gopnik and A. Meltzoff (Eds.), Words, Thoughts, and Theories (pp. 13–47). Cambridge, MA: MIT Press.
Hamlin, J. K., Hallinan, E. V. & Woodward, A. L. (2008). Do as I do: 7-month-old infants selectively reproduce others’ goals. Developmental Science, 11(4), 487−494.
Herschbach, M. (2008). Folk psychological and phenomenological accounts of social perception. Philosophical Explorations, 11(3), 223–235.
Hobson, P. (2002). The Cradle of Thought. London: Macmillan.
Hutto, D. D. (2004). The limits of spectatorial folk psychology. Mind and Language, 19(5), 548–573.
———. (2007). The narrative practice hypothesis: Origins and applications of folk psychology. Royal Institute of Philosophy Supplement, 60, 43–68. Also in Hutto, D. (Ed.), Narrative and Understanding Persons (pp. 43–68). Cambridge: Cambridge University Press.
———. (2008). Folk Psychological Narratives: The Socio-Cultural Basis of Understanding Reasons. Cambridge, MA: MIT Press.
———. (2015). Basic social cognition without mindreading: Minding minds without attributing contents. Synthese. doi: 10.1007/s11229-015-0831-0
Kahneman, D. (2011). Thinking, Fast and Slow. London: Penguin Books.
Kelly, D., Quinn, P., Slater, A. M., Lee, K., Gibson, A., Smith, M., Ge, L. & Pascalis, O. (2005). Three-month-olds, but not newborns, prefer own-race faces. Developmental Science, 8, F31–F36.
Kendon, A. (1990). Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge: Cambridge University Press.
Krueger, J. (2012). Seeing mind in action. Phenomenology and the Cognitive Sciences, 11(2), 149–173.
Lavelle, J. S. (2012). Theory-theory and the direct perception of mental states. Review of Philosophy and Psychology, 3(2), 213–230.
———. (2016). Cross-cultural considerations in social cognition. (this volume).
Legerstee, M. (1991a). The role of person and object in eliciting early imitation. Journal of Experimental Child Psychology, 51, 423−433.
———. (1991b). Changes in the quality of infants’ sounds as a function of social and non-social stimulation. First Language, 11, 327−343.
Lindblom, J. (2015). Embodied Social Cognition. Heidelberg: Springer.
Liu, D., Wellman, H. M., Tardiff, T. & Sabbagh, M. A. (2008). Theory of mind development in Chinese children: A meta-analysis of false-belief understanding across cultures and languages. Developmental Psychology, 44, 523–531.
Macrae, C. N. & Quadflieg, S. (2010). Perceiving people. In S. Fiske, D. T. Gilbert and G. Lindzey (Eds.), The Handbook of Social Psychology, 5th edition (pp. 428–463). New York: McGraw-Hill.
Malle, B. F., Knobe, J. M. & Nelson, S. E. (2007). Actor-observer asymmetries in explanations of behavior: New answers to an old question. Journal of Personality and Social Psychology, 93(4), 491−514.
Meltzoff, A. N. & Moore, M. K. (1977). Imitation of facial and manual gestures by human neonates. Science, 198, 75−78.
Moore, R. T. (2016). Pedagogy and social learning in human development. (this volume).
Murray, L. & Trevarthen, C. (1985). Emotional regulations of interactions between two-month-olds and their mothers. In T. M. Field and N. A. Fox (Eds.), Social Perception in Infants (pp. 177–197). Norwood, NJ: Ablex.
Nagy, E. (2008). Innate intersubjectivity: Newborns’ sensitivity to communication disturbance. Developmental Psychology, 44(6), 1779−1784.
Naito, M. & Koyama, K. (2006). The development of false-belief understanding in Japanese children: Delay and difference? International Journal of Behavioral Development, 30, 290−304.
Newen, A. (2015). Understanding others: The person model theory. In T. Metzinger and J. M. Windt (Eds.), Open MIND: 26(T), 1–28. Frankfurt am Main: Mind Group. doi: 10.15502/9783958570320
Nyström, P. (2008). The infant mirror neuron system studied with high density EEG. Social Neuroscience, 3(3–4), 334−347.
Oppenheimer, D. M. (2008). The secret life of fluency. Trends in Cognitive Sciences, 12(6), 237−241.
O’Reilly, A. & Bornstein, M. (1993). Caregiver–child interaction in play. New Directions for Child and Adolescent Development, 59, 55–66.
Pascalis, O., de Schonen, S., Morton, J., Deruelle, C. & Fabre-Grenet, M. (1995). Mother’s face recognition by neonates: A replication and an extension. Infant Behavior and Development, 18, 79−85.
Perner, J. (1999). Metakognition und Introspektion in entwicklungspsychologischer Sicht: Studien zur “Theory of Mind” und “Simulation”. In W. Janke and W. Schneider (Eds.), Hundert Jahre Institut für Psychologie und Würzburger Schule der Denkpsychologie (pp. 411−431). Göttingen: Hogrefe.
Rakoczy, H., Warneken, F. & Tomasello, M. (2008). The sources of normativity: Young children’s awareness of the normative structure of games. Developmental Psychology, 44(3), 875−881.
Reddy, V. (2008). How Infants Know Minds. Cambridge, MA: Harvard University Press.
Schilbach, L., Timmermans, B., Reddy, V., Costall, A., Bente, G., Schlicht, T. & Vogeley, K. (2013). Toward a second-person neuroscience. Behavioral and Brain Sciences, 36, 393–462.
Schilbach, L., Wohlschläger, A. M., Newen, A., Krämer, N., Shah, N. J., Fink, G. R. & Vogeley, K. (2006). Being with others: Neural correlates of social interaction. Neuropsychologia, 44(5), 718–730.
Segal, G. (1996). The modularity of theory of mind. In P. Carruthers and P. Smith (Eds.), Theories of Theories of Mind (pp. 141–157). Cambridge: Cambridge University Press.
Slaughter, V., Peterson, C. C. & Mackintosh, E. (2007). Mind what mother says: Narrative input and theory of mind in typical children and those on the autism spectrum. Child Development, 78(3), 839–858.
Sommerville, J. A., Woodward, A. L. & Needham, A. (2005). Action experience alters 3-month-old infants’ perception of others’ actions. Cognition, 96, B1–B11.
Stich, S. & Nichols, S. (1992). Folk psychology: Simulation or tacit theory? Mind and Language, 7, 35−71.
Striano, T. & Rochat, P. (2000). Emergence of selective social referencing in infancy. Infancy, 1(2), 253−264.
Taumoepeau, M. & Ruffman, T. (2006). Mother and infant talk about mental states relates to desire language and emotion understanding. Child Development, 77(2), 465–481.
Tomasello, M. (1999). The Cultural Origins of Human Cognition. Cambridge, MA: Harvard University Press.
Trevarthen, C. (1979). Communication and cooperation in early infancy: A description of primary intersubjectivity. In M. Bullowa (Ed.), Before Speech: The Beginning of Interpersonal Communication (pp. 321–348). Cambridge: Cambridge University Press.
———. (1998). The concept and foundation of infant intersubjectivity. In S. Braten (Ed.), Intersubjective Communication and Emotion in Early Ontogeny (pp. 15–46). Cambridge: Cambridge University Press.
Trevarthen, C. & Aitken, K. J. (2001). Infant intersubjectivity: Research, theory, and clinical applications. The Journal of Child Psychology and Psychiatry, 42(1), 3–48.
Tronick, E., Als, H., Adamson, L., Wise, S. & Brazelton, T. B. (1978). The infants’ response to entrapment between contradictory messages in face-to-face interactions. Journal of the American Academy of Child Psychiatry, 17, 1−13.
Watson, J. S. (1979). Perception of contingency as a determinant of social responsiveness. In E. Thoman (Ed.), The Origins of Social Responsiveness (pp. 33−64). Hillsdale, NJ: Erlbaum.
Wellman, H. M. (1992). The Child’s Theory of Mind. Cambridge, MA: MIT Press.
Wellman, H. M., Cross, D. & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72, 655−684.
Wellman, H. M. & Gopnik, A. (1992). Why the child’s theory of mind really is a theory. Mind and Language, 7(1), 145−171.
Woodward, A. L. (1998). Infants selectively encode the goal object of an actor’s reach. Cognition, 69, 1−34.
Zahavi, D. & Parnas, J. (2003). Conceptual problems in infantile autism research: Why cognitive science needs phenomenology. Journal of Consciousness Studies, 10(9–10), 53−71.
13
SHARING AND FAIRNESS IN DEVELOPMENT

Philippe Rochat and Erin Robbins
Introduction

Issues of sharing and fairness pervade human life. They are a central piece of the human social mind puzzle. From a developmental perspective, natural observations of family life show that more than 80% of all conflicts among young siblings revolve around issues of possession and resource distribution (Dunn, 1988). Looking at political history, the same is true for all major conflicts among adults, across cultures and since ancestral times. So, what is the psychology behind such recurrent sources of conflict and group disharmony? What mechanisms allow humans to resolve recurrent conflicts around possession, values, and resource distribution? Aside from the sheer power of coercion attached to Darwinian selection and other lion’s-share principles pervading nature, a major question concerns the origins of the ways humans manage to cooperate in sharing values and resources without automatically resorting to force. How do we manage sometimes to agree on the value of things, and how do children learn to somehow compromise with others? That is the general question discussed in this chapter, in light of recent developmental research. Our goal is to review existing theoretical ideas and empirical evidence regarding the development that leads each individual child, across a large variety of cultural and often highly contrasted demographic contexts, to understand reciprocity in social exchanges. A second goal will be to understand how children systematically end up building some principled notions of equivalence and shared values with others, fostering cooperation and somehow transcending competition or any form of coercion. In the first part of the chapter we discuss sharing in development.
We review evidence on the primordial development of shared and reciprocated experience, documenting what would be the universal chronological development of three forms of intersubjectivity between birth and 24 months. This development leads the child toward the first expression of an ethical stance by the third year of life. In the second part of the chapter, we then turn toward the development of fairness proper, construed as equity norms that guide exchanges with peers and others. We view this development as co-emerging with self-consciousness, in particular with a new care for reputation and the onset of a propensity to construe oneself through the evaluative eyes of others (Rochat, 2009; 2014). Before concluding and providing a general summary of the main ideas, in a third part we consider the degree to which culture plays a role in the development of fairness in sharing, with
a particular focus on the expression of inequity aversion by young children from around the world growing up in highly contrasted socio-economic and cultural circumstances.
Sharing in development1

Progress in infancy research during the past 40 years has debunked many classical theoretical assumptions; assumptions revolving around the ill-informed intuition of a starting state characterized by un-differentiation and an initial state of emotional, social, perceptual, and cognitive incompetence in newborns. It is now well established that we are not born in a blooming, buzzing confusion, in some state of undifferentiated fusion with the environment, as proposed by William James over a century ago, and assumed also by many pioneer child psychologists such as Piaget, Wallon, Baldwin, and Freud and many of his followers like Mahler or Klein (Rochat, 2011). We now know that newborns perceive their own body as a differentiated entity among other entities. For example, they root significantly more toward the finger of someone touching their cheek (single touch) than toward their own fingers touching their cheek (double touch; Rochat & Hespos, 1997). Furthermore, research shows that hour-old infants are already sensitive to distal objects and not just proximal stimulations hitting the senses (Slater et al., 1990; Kellman & Arterberry, 2006). Infants from birth show remarkable attunement to particular features in the environment. They discriminate among and show a preference for animate as opposed to inanimate things; face vs. non-face entities (see Rochat, 2001 for a review); and familiar as opposed to unfamiliar people, based on even pre-natal experience of the maternal voice and the taste of maternal amniotic fluid (Marlier, Schaal, & Soussignan, 1998). In the following, we want to discuss and present some evidence regarding marked changes in the form and content of sharing in early development. By at least six weeks, if not earlier, infants are sensitive to eye gaze, ‘motherese’, and turn-taking contingency.
As pointed out by Csibra (2010), this sensitivity shows that infants are able, from a very early age, if not from birth, to recognize that they are being addressed by someone else’s communicative intentions long before they are able to specify what those intentions are. The basic ability by which the young child distinguishes between persons and inanimate things allows them to develop various levels of experiential sharing, which we would like to review first. This development follows the marked and rapid expansion of children’s awareness of being with others in the world. In what follows, we describe three major levels unfolding in development between birth and five years: in turn, the primary, secondary, and tertiary levels. Each of these levels emerging in development determines ways and forms of sharing that are fundamentally different in both content and function. At each level, and from the earliest age, children engage in dynamic co-regulation with others that amounts to an open-ended system of negotiation, where this includes the dynamic process of constant affect monitoring and emotional alignment with others, i.e., a mutual adjustment between self and others’ experience. As will be proposed next, each of these three basic levels adds a new layer of meaning to sharing, progressively expanding from the individual to the group. This enlargement follows a path that parallels and echoes the development of self-consciousness (cf. Rochat, 2009), leading children from the exchange of gazes and smiles (primary intersubjectivity), to the sharing of attention toward objects, including the actual offering and request for physical things (secondary intersubjectivity), and ultimately to the negotiation with others of the relative value of things, be they material (e.g. bartering exchanges of prized
possessions; see Faigenbaum, 2005), or immaterial (e.g. agreement on rules and what is right or wrong; see Rakoczy, Warneken, & Tomasello, 2008). This latter level of sharing (tertiary intersubjectivity; see Rochat & Passos-Ferreira, 2009 for further discussion) leads children from the second year toward the development of an ethical stance in reciprocal exchanges with others.

Level 1: affective sharing (2 months and up)

By approximately six weeks postpartum, a new kind of mutuality emerges that is distinct from the primeval biological and instinctive co-regulation we find already at birth. It is from this time onwards that infants engage in face-to-face interaction, and display the first socially elicited smiling. It is this first active sharing of affects in proto-conversation with others that amounts to so-called primary intersubjectivity (Trevarthen, 1980). It is the original ground for sharing in the literal sense of reciprocal exchanges. Infancy researchers have documented and characterized this sharing in terms of rhythmical turn taking (Gergely & Watson, 1999), and two-way shared mutual gaze (Stern 1985; Stern et al. 1985). It goes beyond mere affective mirroring or emotional contagion, as such exchanges take place for the first time within open-ended, co-created transactions made up of successive emotional bids. To share an experience with someone else is not to have an experience of one’s own and then simply to add knowledge about the other’s perspective on top; rather, a shared experience is a qualitatively new kind of experience, one that is quite unlike any experience one could have on one’s own. The other’s presence and reciprocation makes all the difference. Infants at birth open their eyes and orient their gazes toward faces, preferring faces to non-face objects.
Even though they are documented to imitate facial gestures and emotional expressions (like tongue protrusion or sad faces; Meltzoff & Moore, 1997; Field, 1984), the gaze of newborns remains often sluggish and hard to capture. Staring straight at a newborn with open eyes often gives the impression that the child is looking through you rather than at you. By six to eight weeks, however, the gaze becomes unmistakably shared and mutual, inaugurating a proto-conversational space of genuinely open-ended social exchanges made of turn taking and a novel sensitivity. Mothers commonly report that they now discover a person in their baby. Whereas eye-to-eye contact is often a threatening sign and tends to be avoided in other primate species, it is a major attractor in humans and becomes a critical index of engagement in proto-conversational and early intersubjective exchanges. It is a variable picked up by the child as a measure of the relative degree to which others are socially engaged and attentive, affectively attuned and effectively ‘with’ them. It gives rise to prototypical narrative envelopes co-constructed in interaction with others, made, for example, of tension build-ups and sudden releases of tension, like in peek-a-boo games that are universally compelling to infants starting in the second month (Stern, 1985; Rochat, 2001). Such exchanges are primarily scaffolded by strong affective marking and compulsive affective amplification on the part of the caretaker, producing high-pitched inflections of voice and exaggerated facial expressions (‘motherese’) that tap into the child’s attentional capacities and perceptual preferences (Gergely & Watson 1999; Stern et al. 1985; Rochat 1999, 2001).
The adult’s systematic tendency toward affective scaffolding and amplification, a running emotional commentary that is attuned to the child’s expressed emotions, combined with the novel attentional capacities of the child by the second month (Wolff, 1987), makes such proto-conversation more than mere complementary actions between adult and child. Play and sharing games give children privileged access to their own limits and possibilities as agents in their environment. It is in such affective, face-to-face, playful exchanges of gazes and smiles that infants first gauge their social
situation: the impact they have on others, and the quality of social attention they are able to generate and receive from others. It is from this point on that we can talk of sharing as a process that rests on reciprocation and putative co-creation of affects in interactions with others. Importantly, in relation to our topic, this is a process in which, for the first time, self and other are engaged together in building open-ended emotional bids. This emergence defines a novel horizon for development that leads the child toward symbolic functioning, explicit self-consciousness as opposed to implicit self-awareness, linguistic competence, and ultimately the development of an ethical stance toward others (i.e., strong reciprocity in sharing; see Robbins & Rochat, 2011). It also provides a basis for infants to become socially selective and sensitive to social identity markers like language, manifesting, from approximately three months, a relative preference for and affiliation with particular others that are more familiar. For example, recent research shows that by six months, infants prefer strangers who speak with no foreign accent (Kinzler, Dupoux, & Spelke, 2007), who respond to them in a familiar temporal manner (Bigelow & Rochat, 2006), or who act in prosocial as opposed to antisocial ways (Hamlin, Wynn, & Bloom 2007).

Level 2: referential sharing (7–9 months and up)

If by two months infants begin to share experience in face-to-face, open-ended proto-conversation with others, things change again by seven to nine months, when infants break away from mere face-to-face reciprocal exchanges to engage in referential sharing with others about things in the world outside of the dyadic exchange.
This transition is behaviourally indexed by the emergence of social referencing and triadic joint attention, whereby a triangular reciprocal exchange emerges between child and others in reference to objects or events in the environment (Striano & Rochat, 2000; Tomasello, 1995). By triangulation of attention, objects become jointly captured and shared. Objects start feeding into the exchange. This is the sign of a ‘secondary’ intersubjectivity (Trevarthen, 1980) adding to the first exchanges of 2–6-month-olds. Prototypical instances of triadic joint attention include not only cases where the child is passively attending to the other, but also cases where the infant, through acts of protodeclarative pointing, actively invites another to share its focus of attention. In either case, the infant will often look back and forth between adult and object and use the feedback from the adult’s face to check whether joint attention has been realized. Importantly, the jointness of the attention is not primarily manifest in the mere gaze alternation, but in the shared affect that, for instance, is expressed in knowing smiles. One proposal has been that interpersonally coordinated affective states may play a pivotal developmental role in establishing jointness (Hobson & Hobson, 2011). Another suggestion has been to see joint attention as a form of communicative interaction. On this proposal, it is communication, which for instance can take the form of a meaningful look (i.e., it does not have to be verbal), that turns mutually experienced events into something truly joint (Carpenter & Liebal, 2011). This new triangulation emerging by seven to nine months is also, and maybe more importantly, about social affiliation and togetherness. Like the optical parallax that gives depth cues to viewers, first signs of joint attention give children a new measure of their social affiliation, a novel social depth.
By starting to point to objects in the presence of others, and by presenting or offering grasped objects to social partners, infants bid for others’ mental focus by creating and advertising a shared focus of attention. Psychologically, it also corresponds to the first appropriation of an object as a topic of social exchange, in the same way that in the course of a conversation someone might spontaneously appropriate an object (pen, stick of wood, any small object) to help in the telling of a story. The object, used as a conversational prop in early bouts of joint
Philippe Rochat and Erin Robbins
attention, becomes the infants’ new ‘fishing hook’ to capture, gauge, and eventually possess others’ attention, against which they can further gauge their relative agentive role, control, and impact in relation to others: their situation and place in the social environment. It is reasonable to state that in joint attention we find the roots of the child’s first socially shared mental projection of control over an object (i.e., possession in the literal sense). In starting to bring other people’s attention onto things in the environment, the infant opens up the possibility of claiming ownership of both the initiation of a conversation about something and the thing itself. Pointing, offering, or presenting objects to others are all new social gestures becoming prominent in the healthy child from seven to nine months. An object that is presented or offered can now be retrieved or taken away by others, given back or ignored by them. It gives rise to all sorts of new, complex, and objectified social transactions. It is in these new objectified social transactions that the child consolidates the concept and idea of what, within a few months of developmental time and with the emergence of language, will become the explicit claim of ownership: the assertion of “that’s mine!” and “not yours!”; an explicit assertion of ownership that in turn allows for new forms of sharing. From this point on, and at this pre-linguistic stage of development, objectified and socially shared centrifugal and centripetal forces are the new playing field created by children (Tomasello et al., 2005; Rochat & Striano, 1999). It is a crucial step in the development of sharing. Feeding their basic affiliation need, children learn from then on that with objects, others’ attention and recognition can be earned and shared.
Note that what develops are new forms and objects of reciprocation, all presupposing the same basic self–other differentiation and empathic stance that appear to be expressed and maintained from the outset. By 11–12 months, the child adds a novel layer of meaning to referential sharing. This layer corresponds to a novel understanding of how sharing and exchange games are played. Children begin to modulate their ways of sharing and reciprocating, becoming more selective about whom they share with, trying to imitate or to coordinate actions in attempts at cooperation. From 12 months of age, infants also begin to show significantly greater modulation and flexibility by engaging spontaneously in role reversal imitation (Ratner & Bruner, 1978). For example, imagine a situation where an adult engages the infant in a collaborative game where the adult holds a basket and the infant throws toys into it. If the adult suddenly stops holding the basket and now wants to throw, 12-month-olds are able to switch roles to continue the joint game: the infant will spontaneously stop throwing, grab and hold the basket, and let the adult throw the toys (Carpenter et al., 2005). Typical development of social experience leads children toward an inclination to identify with others. Indeed, Hobson argues that in affective sharing the process of ‘identifying-with’ plays a very early and pivotal role in typical social development by structuring “social experience with polarities of self-other differentiation as well as connectedness” (Hobson 2008, p. 386). From 12 months, infants can follow through and maintain the sharing, collaborative game by taking the role of the other; that is, the child begins to show some rudiments of perspective-taking and the budding ability to get into the shoes of others. The investigation of joint attention suggests that we come to understand others to a large extent by sharing objects and events with them.
Moll, Carpenter, and Tomasello (2007) have argued that by the second year, infants in situations of joint engagement, where they are directly addressed by the adult and involved in her actions, are able to learn things and display skills they otherwise could not. Indeed, it has been suggested that infants come to learn about the social world, not “from ‘he’s’ or ‘she’s’ whom they observe dispassionately from the outside” but “from ‘you’s’ with whom they interact and engage in collaborative activities with
joint goals and shared attention” (Moll & Meltzoff, 2011, p. 294). By 14 months, the infant becomes explicit in discriminating the shared experience of an object as special. They are able to discriminate objects experienced by ‘we’ as opposed to ‘I’ alone (Tomasello et al., 2005; Moll et al., 2008).

Level 3: co-conscious sharing (21 months and up)

The exclamation “Mine!” that children utter from around the same age (approximately 21 months; Bates, 1990; Tomasello, 1998) is symptomatic of a major transition happening at this stage. The explicit assertion of ownership parallels the emergence of explicit self-recognition and self-objectification in the mirror (Rochat & Zahavi, 2011), but also novel expressions of self-conscious emotions like blushing, shame, envy, or pride. The awareness of being evaluated by others starts to shape toddlers’ social and affective lives. It is from this point on that children show first signs of systematic self-management, starting to care about their own reputation in relation to others as both individuals and groups of individuals (Rochat, 2013). Related to self-management and audience awareness, it is also from then on that children develop a renewed ability to conceal their mental states, manipulating what they expose of themselves to others. As part of this major developmental step, children become particularly sensitive to approbation or disapprobation from others, constantly gauging and promoting their own social affiliation. They probe and see what works and what doesn’t in sharing with others, starting a new era of bartering and endless negotiation of permissions that parents of two- and three-year-olds know too well. They properly start to have others in mind in the sharing process, while never confounding their own perspective with that of others.
This transition toward tertiary intersubjectivity is briefly illustrated below with empirical findings on (a) the development of an ethical stance taken by children toward others between three and five years, and (b) the parallel emergence of a sensitivity to group norms and affiliation, including explicit ostracism from six to seven years and beyond. (a) When asked to split a small collection of valuable tokens with another, three-year-olds tend to self-maximize in their distribution, becoming significantly more equitable by five years of age. This developmental phenomenon is robust and has been documented across at least seven highly contrasting cultures (Rochat et al., 2009; Robbins, Starr, & Rochat, 2016). Between three and five years, children start to act toward others according to some ethical principles of fairness that they internalize and seemingly hold for themselves. They become ‘principled’, sensitive to the moral and ethical dimension of sharing possession with others, and try to reach ‘just’ decisions. More generally speaking, children typically develop as autonomous moral agents as opposed to strict conformists who simply obey and abide by the largest, most powerful majority in order to feed a basic social affiliation need. From this point on, they start to show signs that they care about their moral identity and reputation by actively claiming their own view and perspective in moral space (Taylor, 1989). They begin to show clear signs that they try to maintain self-unity (i.e., their own moral stance) and coherence in instances of distributive justice. They start avoiding dissonance when facing a moral dilemma such as splitting an odd number of desirable goods between two individuals. From then on, children tend to show increasing signs of inequity aversion, including expressions of costly sacrifice to enforce principled equity (Robbins & Rochat, 2011).
(b) Parallel to the development of principled sharing, children also become progressively more sensitive to what people think of them. Sharing is the primary context in which children
establish their own moral perspective and moral identity in the evaluative eyes of others. Beyond six years of age, further layers are added, as children increasingly refer to and abide by trade rules and the pragmatics of what become ritualized exchanges sanctioned by institutions (group norms, collective ways of being, school or playground culture). They become progressively more sensitive to and aware of the cultural context: the institutional or consensual collective order that transcends and ultimately governs personal wants and inclinations (Rochat, 2014). In contrast to the referential sharing of secondary intersubjectivity, which remains a form of dyadic we-experience, the co-conscious sharing occurring at the tertiary level of intersubjectivity becomes normative at a larger collective scale. It is no longer limited to dyadic communication about objects and events (i.e., pointing and joint attention), but marks a new (ethical) level of we-experience that refers to collective rules and norms. This transition corresponds to the predictable development of two successive forms of shared intentionality that Michael Tomasello terms joint intentionality and collective intentionality, respectively (Tomasello, 2014). As children start manifesting an ethical stance between the ages of three and five, they also start to expand their experience of being part of a larger we by becoming sensitive to group affiliation and its necessary counterpart: the potential of being socially excluded. Entering institutions that extend the family environment to peers (i.e., preschools and kindergartens), children develop a new sense of group belongingness. They start to identify with the group, they show in-group biases, and they start to endorse group attitudes. They come to share the views and preferences of the group.
Classic instances of strong group conformity (Asch, 1956) are replicated in three- to four-year-old children, who tend to reverse their own objective perceptual judgments to fit a peer group majority opinion (Corriveau & Harris, 2010; Corriveau et al., 2013; Haun & Tomasello, 2011; Haun, van Leeuwen, & Edelson, 2013). From five years and beyond, sharing drastically expands, and begins to map onto the social psychology of individuals in their relation to the group, in particular the in-group/out-group dynamic described in adult social psychology experiments. Multiple experiments show that children are quick to affiliate with particular groups based on minimal criteria (blue team vs. red team). By four years, they are quick to manifest out-group gender or racial stereotypes and other implicit group attitude biases toward others (Cvencek, Meltzoff, & Greenwald, 2011). From approximately seven years, children also begin to manifest active ostracism and social rejection in order to affirm their own group affiliation and identity (Aboud, 1988; Nesdale, 2008). From the time children become aware of and start to internalize the other’s evaluative attitude towards themselves, the content of what they identify as their own characteristics (who they are as persons in the larger social context) becomes increasingly determined by how they compare to the perceived and represented (believed) characteristics of others as individuals but also as particular groups of individuals (e.g., siblings vs. peers, parents vs. strangers). This is evidenced by the inseparable development of self-conceptualizing and the early formation of gender identity and social prejudice: the way children construe their relative affiliation and manifest affinities to particular groups by way of self-inclusion and identification, as well as by social exclusion, the counterpart of any social identification, affiliation, or group alliance (Dunn, 1988; Nesdale et al., 2005).
Extending the original cognitive-developmental work of Kohlberg (1966) on sex-role concepts and attitudes, research shows that by the middle of the third year (i.e., 31 months), children correctly identify their own gender (Weintraub et al., 1984). Interestingly, the degree of gender identity expressed by three-year-olds depends on parental characteristics. Weintraub and colleagues found that, compared to other parents, fathers who have more conservative
attitudes toward women, who tend to engage less in activities that are stereotyped as feminine, and who score low on various femininity scales have children scoring higher on the gender identity task. These findings demonstrate the early onset of group identity (i.e., gender) and the role of social influences in the determination of early group categorization and identification. In relation to social prejudice, research investigating children’s social identity development suggests that, in contrast to gender, it is only by age four to five years that children are aware of their own ethnic and racial identity. Only then do they begin to show identification with and preference for their own ethnic group (see Gibson-Wallace, Robbins, & Rochat, 2015). Early on, children derive self-esteem, and hence a conception of self-worth, from group membership and group status. According to Nesdale (2004), for example, the ethnic and racial preference manifested by five-year-olds is based on a drive to assert their own in-group affiliation, and does not yet focus on the characteristics of out-group members whom they would eventually discriminate against or exclude. Social prejudices, whereby some children might find self-assertion in focusing on negative aspects of out-group members, are manifested in development no earlier than seven to eight years of age, according to Nesdale’s research and interpretation. From seven years on, the self and social identity begin to be conceptualized on the basis of combined social affiliation and exclusion processes. These combined processes contrast or ‘bring out’ the self positively by association with some persons and negatively by dissociation from others. From then on, children are subject to group norm influences. They begin to construe their social identity through the looking glass of the group they affiliate with, as well as the members of other groups they exclude.
In this dual complementary process, combining affiliation and contrast or opposition to selected others, children manifest new ways of asserting and specifying who they are as persons, for themselves as well as for others as individuals and groups of individuals.
Fairness in development

So far, we have examined three levels of sharing as they relate to the child’s experience of intersubjectivity. We demonstrated that sharing does not involve only persons and objects, but more broadly, the relationships between persons, and with respect to objects. Here we consider in further detail the cognitive capacities that might subtend children’s sharing at the level of tertiary intersubjectivity. The question we would like to explore is how children develop into moral agents, moving from the detection of sameness and inequity into a more prescriptive ‘ethical stance’ about how things ought to be shared. Central to this question is inequity aversion, a description of the malaise individuals experience when they have more (advantageous inequity) or less (disadvantageous inequity) than another.

What is inequity aversion?

The basic tenet of inequity aversion is that individuals may be motivated by both self-interest and other-regarding preferences. According to Fehr and Schmidt (1999), inequity aversion is characterized by two parameters: envy, or the distaste for disadvantageous outcomes (e.g., having less than one’s partner), and compassion, or the distaste for advantageous outcomes (e.g., having more than one’s partner). This position has been substantiated through the use of exchange games, where experimental evidence indicates that adults make offers that are very close to the equitable solution of an even split, and reject offers that are perceived as too stingy, typically less than 20%–30% of the shared good (Camerer, 2003; Murnighan & Saxon, 1998; Camerer & Thaler, 1995). This tendency is pervasive in Western settings, although
cross-cultural evidence suggests it might also depend on market inclusion and social context (Henrich et al., 2006; Dwyer, 2000). Such findings are not unique to humans. Growing evidence suggests that equity norms are important for cleaner fish (Raihani & McAuliffe, 2012), canines (Horowitz, 2012; Range et al., 2009), and non-human primates (Burkart et al., 2007; Lakshminarayanan, Chen, & Santos, 2008; Brosnan & de Waal, 2003; but see Silk et al., 2005, as well as Jensen, Call, & Tomasello, 2007, for examples of antisocial reactions to inequity in chimpanzees). The roots of inequity aversion extend deep into phylogeny and, as we shall demonstrate, ontogeny. Here we briefly review the developmental evidence regarding the socio-cognitive capacities that would support inequity aversion and that we conjecture are necessary prerequisites. These would include children’s general understanding of numeracy and proportionality (what constitutes the what of sharing); their understanding of self and other, including perspective-taking and social evaluation (what constitutes the who of sharing); and their reasoning about ownership, possession, and exchange relationships (what constitutes the how of sharing).

The ‘what’ of sharing

Inequity aversion presumes that there are quantifiable things that can be distributed. To understand the ‘what’ of children’s sharing, it is useful to address children’s understanding of quality as well as quantity. What are the dimensions that children value? Four- to five-year-old children attach value to perceptual features of objects, such as size, colour, and attractiveness (Fox & Kehret-Ward, 1990). These perceptual features can be graded, so that their quality becomes a relevant dimension by which children value objects.
Given the choice between stickers, for example, children will pick those that are the biggest or the most colourful, and not necessarily those that are the most numerous (see Rochat et al., 2009 for a similar manipulation of this kind). The value of an object may also be derivative of the relative effort it takes to produce it, and this valuation may be grounded in how children understand ownership. As early as three years, children recognize that creative labour implies ownership over objects (Kanngiesser, Gjersoe, & Hood, 2010). Valuation also stems from the attainment of objects. Three- to five-year-olds report liking objects that they already own better than identical objects that they do not own (Lucas, Wagner, & Chow, 2008), in what are signs of an early endowment effect. Between five and seven years, abstract properties feature in children’s determinations of value. These are often pragmatic affordances of an object (i.e., it is easy to use or play with; it is durable or strong), but associative affordances take on importance as well. At this age children value objects that create a shared sense of group (e.g., we are friends because we both have the same shirt; see Faigenbaum, 2005 for a comprehensive discourse analysis on the topic). Especially relevant to discussions about inequity aversion, the concept of ‘half’ seems to subtend children’s earliest understanding of proportion. By six years children are capable of computing proportions with both discrete and non-discrete quantities (Spinillo & Bryant, 1991), and by seven years children grasp the inverse relation between the number of parts into which a quantity is divided and the size of those parts (Sophian, Garyantes, & Chang, 1997).
Such competencies may be evident in even younger children (three to four years old) if they are presented as analogies between conceptual referents (e.g., a half pizza came from a whole pizza, therefore a half bar of chocolate must come from a whole bar of chocolate; Singer-Freeman & Goswami, 2001). Finally, judgments of equity and fairness often involve more than assessments of absolute quantity. Sharing can be relative, involving what one has in comparison to another. Adams’s (1963)
theory of equity, for example, maintains that egalitarian preferences depend on proportional reasoning in the sense that individuals compare and weight the relative wealth, contributions, or attributes of others (which need not necessarily be material) to determine what payoffs each person should receive. Whether young children are capable of this level of transitivity has been contested in the developmental literature. Studying 5–14-year-olds, Piaget (1970) argued that the ability to transform values in one domain (e.g., speed) to another (e.g., distance) did not emerge until relatively late, around 12 years. Accordingly, young children would be unable to make conversions between another’s initial wealth, need, or effort and their deserved amount of payoff, praise, or rebuke. Notably, in the social domain, it seems as though proportional reasoning emerges earlier, depending on which attributes of a child’s sharing partner are highlighted. In studies that manipulate the relative effort of a sharing partner, for example, data routinely demonstrate that children younger than nine years eschew sharing proportionally (e.g., giving the lion’s share to the party who has worked more) in favour of splitting resources in a strict egalitarian fashion (Leventhal & Anderson, 1970; Anderson & Butzin, 1978; Hull & Reuter, 1977; Nisan, 1984). In general, this developmental trend remains even after other factors are manipulated, including the nature of the shared resource (Peterson, Peterson, & McDonald, 1975; Larsen & Kellogg, 1974), the standards for determining relative effort, and the child’s status in the game (i.e., worker and potential recipient vs. observer deciding how to split goods between labouring third parties; see Olejnik, 1976; Sigelman & Waitzman, 1991; Thomson & Jones, 2005). In the cases where children do deviate from strict egalitarianism, some evidence suggests they allocate greater rewards for their own labour over that of a partner.
In other words, their considerations about individual effort may be constrained by a self-serving bias (Kanngiesser & Warneken, 2012). In contrast, if the relative need or prosociality of partners is manipulated, children are much more likely to engage in proportional sharing. Children associate value with the act of portioning things, and they factor proportional resource distribution into their social evaluation of sharing partners. Cooperation and collaboration seem to be particularly salient features upon which children assess merit: when children as young as three labour jointly toward an outcome, their sharing is significantly more likely to be proportionally equitable (Ng, Heyman, & Barner, 2011; Hamann, Bender, & Tomasello, 2014). Unlike manipulations of effort, appeals to relative wealth (“this person is poor”) or emotional status (“she is sad because she doesn’t have a lot of candy”) routinely produce consistent preferences for proportional equity, even in preschool children. Four- to eight-year-olds reliably distribute proportionally more of their resources to partners described as needy than to themselves in first-party sharing (Streater & Chertkoff, 1976; Malti et al., 2015), or to the more needy of two partners in a third-party context (McGillicuddy-de Lisi et al., 1994; Zinser, Starnes, & Wild, 1991; Paulus & Moore, 2014). Information about a partner’s prior prosocial acts is also relevant to young children, and by five years, they judge partners who give proportionally more resources as ‘nicer’, above and beyond the absolute number of goods given (e.g., 3 of 4 coins vs. 6 of 12 coins; McCrink, Bloom, & Santos, 2010). There are other kinds of computations children must consider in addition to relative wealth or deservingness. A certain amount of uncertainty and risk is inherent in exchange relationships.
In iterative exchange games, participants can weigh what they know of a partner’s behaviour against the probability that the partner will continue to act this way. And, of course, the very nature of indirect reciprocity – the notion that if I help you now, someone else may help me at some undetermined time in the future (Nowak & Sigmund, 2005) – is a gamble in the most abstract sense. The issue of uncertainty raises the question of who should shoulder the burden of risk in an exchange. It also brings to mind issues of reputation and trustworthiness.
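The stakes in such exchange games can be made concrete with the two-parameter Fehr and Schmidt (1999) model introduced earlier. The sketch below is illustrative only: the function names are our own, and the parameter values (alpha for envy, beta for compassion) are assumptions for the sake of the example, not estimates from the cited studies.

```python
def fs_utility(own, other, alpha=1.0, beta=0.25):
    """Fehr-Schmidt (1999) utility for one player in a two-player split.

    Material payoff, discounted by envy (alpha) when the other party has
    more, and by compassion/guilt (beta) when the self has more.
    """
    envy = alpha * max(other - own, 0)
    guilt = beta * max(own - other, 0)
    return own - envy - guilt


def accepts(offer, pie=10, alpha=1.0, beta=0.25):
    """Responder in an ultimatum game: accept only if the offer is no
    worse than rejection, which leaves both players with nothing."""
    return fs_utility(offer, pie - offer, alpha, beta) >= 0
```

With these illustrative parameters, offers below a third of a ten-unit pie are rejected (`accepts(3)` is false, `accepts(4)` is true), broadly in line with the 20%–30% rejection threshold reported above; a purely self-interested responder (alpha = beta = 0) would accept any non-negative offer.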
Such considerations of self and other are at the heart of social evaluation, in what we deem the who of sharing.

The ‘who’ of sharing

Social evaluation begins early in development. Infants and children both demonstrate signs of parochialism and in-group bias by preferring to interact with members of their own group. For example, ten-month-old infants prefer to engage with objects that have been modelled by or associated with a speaker of their native language (Kinzler, Dupoux, & Spelke, 2012). Preference for in-group members may translate to preferential distribution of resources. At 2.5 years, children will share toys with a speaker of their native language over another adult (Kinzler, Dupoux, & Spelke, 2012), while in third-party sharing, three-year-olds asked to assist a doll in distributing resources will give more to partners described as kin or friends than to strangers (Olson & Spelke, 2008). And in first-party sharing, three- to seven-year-olds all demonstrate signs of parochialism by sharing more equitably with anonymous partners described as classmates than with children labelled as peers from a different class (Fehr, Bernhard, & Rockenbach, 2008). As mentioned previously, from an early age children are also sensitive to how others elect to distribute resources or act prosocially. Three-month-olds who view a vignette in which an agent is helped or hindered in the attainment of a goal react more negatively (as indexed by longer looking) to an antisocial hinderer (Hamlin & Wynn, 2011). Between 6 and 12 months, infants shift focus and become more inclined toward the prosocial helper (as indexed by preferential reaching tasks; Hamlin, Wynn, & Bloom, 2007). Infants also seem to evaluate how adults interact with third parties.
At 19 months they look longer when adults have split resources inequitably between identical animate puppets, and by 21 months they anticipate that collaborators on a task should be equally rewarded by an experimenter (Sloane, Baillargeon, & Premack, 2012). This same negative appraisal of antisocial or unfair others is also evident during the preschool years. Three-year-olds show non-verbal signs of discomfort (i.e., negative affect, averted gaze) when sharing outcomes are inequitable (LoBue et al., 2011), and by five years children selectively share with partners who have previously shown them generosity (Robbins & Rochat, 2011; but see also Baumard, Boyer, & Sperber, 2010 and Kenward & Dahl, 2011 for examples of this in third-party sharing with younger cohorts). Social evaluation of others is ubiquitous. The question is the extent to which children also understand that they may likewise be socially evaluated. Concern for social evaluation (what is also sometimes referred to as reputation effects) has long been considered an important factor in models of prosociality and cooperation. Great apes, for example, prefer conspecifics who have demonstrated competency on a collaborative task (Melis, Hare, & Tomasello, 2006), as well as human experimenters who have been generous versus selfish in previous interactions (Subiaul et al., 2008; Russell, Call, & Dunbar, 2008), and there is evidence that such ‘reputation effects’ are present in canines (Kundey et al., 2011) and certain species of fish (Bshary & Grutter, 2006). As Axelrod (1984) notes, a reputation helps define the ‘shadow of the future’ by projecting information about prior behavioural consistency and expected future outcomes, including adherence to socially desirable norms for cooperation and reciprocity.
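Axelrod’s ‘shadow of the future’ admits a simple quantitative reading in the repeated prisoner’s dilemma. The sketch below, using the standard textbook payoffs (an assumed parameterization, not values taken from Axelrod’s tournaments), computes the minimum probability of meeting again that makes reciprocal cooperation pay against a retaliating partner.

```python
def cooperation_threshold(T=5, R=3, P=1):
    """Minimum continuation probability w for cooperation to be stable
    against a grim-trigger partner in a repeated prisoner's dilemma,
    with temptation T > reward R > punishment P.

    Cooperating forever is worth R / (1 - w); defecting yields the
    temptation T once and mutual punishment thereafter,
    T + w * P / (1 - w). Cooperation wins when w >= (T - R) / (T - P).
    """
    return (T - R) / (T - P)
```

With T = 5, R = 3, P = 1 the threshold is 0.5: cooperation pays only when another encounter is at least as likely as not, which is one way of cashing out the claim that reputation projects ‘expected future outcomes’.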
Many developmental studies of reputation effects have focused on peer perceptions of behavioural traits, such as friendliness and popularity (Hill & Pillow, 2006; Gifford-Smith & Brownell, 2003; but see also Zeller et al., 2003 for a review). Children as young as three evaluate others’ actions both in relation to normative appeals (e.g., for fairness; Dunn, 2006; Ingram & Bering, 2010) as well as descriptive rules (e.g., discriminating between doing something ‘naughty’ versus doing something ‘different’; see Cosmides, 1989; Harris & Núñez,
1996; Rakoczy, Warneken, & Tomasello, 2008). Young children also demonstrate an awareness of being evaluated by others. Around 21 months, the same age that they begin to manifest explicit understanding of ownership and reciprocal exchange, children increasingly call attention to their achievements during free-play situations (Stipek et al., 1992). In terms of self-presentation and evaluation, three- to seven-year-olds tell white lies in contexts that encourage politeness, such as neglecting to inform an adult experimenter that she has a potentially embarrassing mark on her face (Talwar & Lee, 2008), and have been shown to spontaneously inhibit negative affective displays in the presence of an experimenter who has established an expectation for positive affective reactions (Cole, 1986). Four- to nine-year-olds tend to judge their own behaviour more favourably compared to that of a sibling who has completed identical actions (Ross et al., 2004), and also show evidence of the ‘subtle eyes’ effect demonstrated in adults by sharing more generously in the presence of a mirror (Ross, Anderson, & Campbell, 2011). Recent work demonstrates that concern for reputation is explicitly linked to children’s egalitarian sharing (Robbins & Rochat, in prep; Leimgruber et al., 2012). Between five and seven years, children distribute resources more equitably if the outcome of their sharing is public. In contrast, if the outcome is private and unobservable to sharing partners, children at this age are more self-maximizing in their distribution of resources. (Note, however, that a sizable portion of five- and seven-year-olds do not show this effect and are egalitarian regardless of context.) With regard to intersubjectivity, evaluation and appraisal of the self in relation to others has been linked to the so-called moral or self-conscious emotions, including guilt, shame, and empathy (Eisenberg, 2000).
Guilt and shame, for example, may be elicited in response to unacceptable impulses and may therefore evoke feelings of responsibility in response to a perceived violation of a moral norm that is presumably shared with others (Ferguson & Stegge, 1998). Of the so-called moral emotions, empathy has arguably received the most attention (for a comprehensive review of its proximate and ultimate causes, see Preston & de Waal, 2002). Broadly defined, empathy is an affective response driven by the comprehension of another’s emotional state (Eisenberg, 2000), and so construed, it is associated with prosocial acts such as helping behaviour (particularly oriented toward distressed peers; Eisenberg, 2000; Holmgren, Eisenberg, & Fabes, 1998). Although infants initially respond to the pain of others in an attempt to mitigate their own distress or emotional contagion (Zahn-Waxler & Radke-Yarrow, 1982; Ungerer et al., 1990; Sagi & Hoffman, 1976), by 14 months personal distress is not required to motivate prosocial behaviours such as comforting (Eisenberg et al., 1998) or assisting an adult in the attainment of a goal, even when this assistance is not rewarded (Warneken & Tomasello, 2006). Later, around three to four years, this tendency toward helping is tied both to the child’s understanding of conventionality and to their ability to engage in perspective-taking (Gopnik & Wellman, 1992; Wellman, Cross, & Watson, 2001). By 34 months children not only discriminate between conventional and moral transgressions (Smetana & Braeges, 1990) but are also more likely to report feelings of guilt and remorse following their own moral transgressions (Stipek, Gralinski, & Kopp, 1990; Zahn-Waxler & Robinson, 1995). Children who report experiencing these emotions frequently are also more likely to accept responsibility and focus on reparation following a transgression event (Kochanska et al., 1994), suggesting that at some level they see themselves as accountable.
Philippe Rochat and Erin Robbins

Later in childhood, and with regard to fairness, when making hypothetical judgments about how a good should be distributed, children frequently provide rationales indicative of empathic concern, such as wanting to make a friend happy (Singh, 1997; Enright et al., 1984; Damon, 1975). In short, the ‘who’ of sharing depends on several factors. Social perspective-taking may provide three- to seven-year-olds a window into the needs and desires of their sharing partners. Children evaluate their sharing partners, and by five years are sensitive to the fact that they themselves are also evaluated. These evaluations carry affective overtones, the so-called self-conscious or moral emotions, that may be elicited in response to perceived inequity or transgressions.

The ‘how’ of sharing

As the child’s ability to consider multiple perspectives strengthens, value judgments and appeals to norms (i.e., to share equitably) begin to characterize how children determine the appropriateness of societal interactions. Faigenbaum (2005) notes that as children abandon a purely instrumental understanding of objects, negotiation (and particularly reciprocal exchange) features prominently in defining and re-defining the value of a good or an act. Here we briefly address how this understanding unfolds in early development. Although a rich literature describes concepts of possession and ownership in infancy, we next address the developmental changes that occur after the preschool years, when inequity aversion first begins to manifest in children’s own first-party sharing. In any exchange of resources, children must (at least implicitly) identify who has what. Whereas ownership is an intangible, invisible, and abstract property of objects, possession, insofar as it involves physical contact, is visible to others. Early conflicts over resources are therefore conflicts of possession (“who has it?”) rather than ownership (“whose is it?”). Prior to three years, children demonstrate a ‘first possessor bias’ by which the first person to possess or control an object is judged to retain ownership over it (Friedman & Neary, 2008; Friedman & Neary, 2009). Early conflicts about possession tend to be conventional in nature: disputes about how to use a toy or perform an activity (Dunn, 1988; Faigenbaum, 2005). Sharing entails both an understanding of ownership and of the transference of that ownership. Transfer of objects does not imply transfer of ownership.
In a sharing game, for example, many individuals may possess a toy, but this temporary state of having does not mean the current possessor owns the toy. Three-year-olds protest when partners do not return objects to their original owner (Hook, 1993) or usurp possession by claiming their own control over an object (Rossano, Rakoczy, & Tomasello, 2011). As a consequence, rules of transfer become important to children starting around age four, when children begin to protest illegitimate acquisition of objects (e.g., theft) or wrongful use of them (e.g., breaking a toy; Vaish, Carpenter, & Tomasello, 2009; Vaish, Carpenter, & Tomasello, 2010). By five years, this conventional understanding takes on normative overtones. Five-year-olds will appeal to rights that owners have over their objects and will describe transgressions of transference rules as ‘unfair’ (Blake & Harris, 2009; Kim & Kalish, 2009; Rossano, Fiedler, & Tomasello, 2015). By seven years, children engage in restitution following a transfer transgression by either punishing or compensating the wronged party (Hook, 1993). In brief, the developmental story regarding ownership is one in which children move from notions of possession that have their roots deep in infancy, focusing primarily on individual action like first contact, to an understanding of ownership that is more reciprocal, and in some cases contractual, in nature. Considering that the developmental niche of children around the world varies in significant ways, an important question is whether culture plays a role in the early development of sharing and fairness discussed so far. To tackle this question, the next section presents some relevant cross-cultural research that points to both universal and variable features in this development. In all, this research emphasizes the importance of factoring in culture, something that is not frequently done but is fortunately beginning to catch the attention of developmental researchers.
Factoring culture

We have advanced the hypothesis that sharing is both about resolving material disparity (e.g., inequity aversion) and about the creation of shared values and meanings. Such negotiations always occur within a larger framework of institutions, collective rules, and norms that govern exchanges in general. The studies reviewed above largely represent children from W.E.I.R.D. (Western, educated, industrialized, rich, and democratic) populations (Henrich et al., 2010), calling into question how generalizable these findings might be outside such contexts. In one of the most direct tests of this question to date, Rochat et al. (2009) presented three- to five-year-old children from seven highly contrasted cultures with a sharing game that manipulated the number of items shared (even or odd), the kinds of items shared (high or low value), and the child’s role in the distribution (recipient or non-recipient). In general, and across cultures, three-year-olds tended to be more self-maximizing in their sharing of the resources than five-year-olds. However, the magnitude of this developmental trend was culturally variable. Already by three years, heightened egalitarianism and generosity were more common in cultures broadly characterized by collectivism and small-scale subsistence living (e.g., Samoa or rural Peru) than in individualistic and highly urbanized cultures (e.g., the United States), which show a steeper developmental trend between three and five years. This general developmental trajectory of egalitarian sharing emerging by five years has been observed in other cross-cultural samples, including Colombian preschoolers (Pilgrim & Rueda-Riedle, 2002) as well as Indian and Chinese preschoolers (Rao & Stewart, 1999). In these free-play, spontaneous sharing games, the converging evidence seems to support the idea that inequity aversion emerges between three and five years.
The story is more nuanced when considering costly sharing, particularly in the context of forced-choice games that pit an equitable outcome against an inequitable outcome that is costly to either the child or her partner. Here, egalitarian behaviour comes at a personal expense, and recent evidence suggests that in such contexts children across highly contrasted populations all tend to be self-maximizing, and that fair-minded behaviour does not emerge until around seven to eight years, when children share in ways that are culturally variable and consistent with the sharing behaviour of adults in their community (House et al., 2013). Given such findings, what is more likely to vary across cultures may not be an aversion to inequity per se, but rather the means by which equitable outcomes are achieved. Spontaneous requests to share and protests of unfair outcomes are more common in Western contexts, and evidence also suggests that the frequency with which children sanction unfair behaviour may be culturally specific. Robbins and Rochat (2011) introduced American and Samoan children to a sharing game in which three- to five-year-olds split collections of tokens with identical puppets, one of which shared generously with the child and the other of which was selfish. Children were then given an opportunity to engage in costly punishment by sacrificing one of their own tokens to take five away from the puppet of their choosing. By five years, although children in both cultures selectively punished the stingy puppet, the frequency of such costly punishment was significantly greater in US children. What might be the driving force behind such cultural differences? One possibility is that collectivism and communal living predispose children to relatively egalitarian or generous ways of sharing. However, in their comparison of six highly contrasted cultures – which included hunter-gatherer, horticultural, foraging, and urban societies – House et al.
(2013) found that communal, small-scale populations fell on both sides of sharing norms, exhibiting both hyper-generosity and marked stinginess. Rather than communalism proper, extensive work on economic reasoning in adults suggests that market inclusion and population density may be more influential in shaping the equity norms of a particular population (Henrich et al., 2006; Dwyer, 2000), and that collectivism in and of itself does not necessarily entail egalitarian ways of resource distribution. Converging on this point, a recent replication and extension of the Rochat et al. (2009) study found that Tibetan children raised in a communal exile community in urban India did not significantly differ in their sharing behaviour from children in any of the other seven cultures. Despite being educated in a context heavily emphasizing traditional Buddhist practices of mindfulness and compassion, these Tibetan children showed levels of self-maximization comparable to those of children in the urban United States, China, and Brazil (Robbins, Starr, & Rochat, 2016). The most notable cross-cultural differences appeared to be driven by Peruvian children who, while they live in a communal context, also live in a region that is not as densely populated or as integrated into Western trade economies as the other societies sampled. Further research on the relative role of these demographic features is surely warranted. Finally, another good illustration of combined universal and variable features regarding sharing and possession is provided by data we collected cross-culturally on the development of reasoning about who owns what and why (Rochat et al., 2014). We asked three- and five-year-old children from seven cultures to determine ownership of a disputed object between two puppets. Following a simple script, the child was told that the two puppets were friends who, after taking a walk, find a coveted object and end up fighting about it (“This is mine! No, this is mine!”), all as enacted by the experimenter.
A series of conditions tested different ownership rationales by manipulating the backgrounds of the puppets before the fight: the friends were either rich or poor (equity principle); creator or non-creator of the object (labour principle); familiar or unfamiliar with the object (familiarity entitlement principle); or had or had not previously touched and controlled the object first (precedence principle). After the vignette, children were asked who should have the object of contention and who owned it. We sampled children at both ages of middle and low socio-economic status in North America as well as rich, poor, and very poor street children from Brazil; children growing up in rural and highly traditional small-scale societies of Vanuatu and Samoa in the South Pacific; and Chinese children from a communist preschool in Shanghai. When the object of contention could be split into two equal halves, close to 40% of the Chinese as well as the middle-class American children spontaneously split the object between the two puppets, independently of conditions – a significant cross-cultural variation that still awaits explanation and that future studies should investigate. However, overall and across cultures, we found that children were universally more inclined, from at least five years of age, to attribute the object to the puppet that created it (i.e., laboured for it), vindicating the labour principle put forth by John Locke in the 17th century as a primary principle of property attribution. Across cultures, there is a primacy of the labour principle in child development regarding ownership attribution, seemingly preceding familiarity, equity (rich vs. poor), and precedence (first contact with the object) (Rochat et al., 2014).
Summary and conclusion

We reviewed existing facts on the development of sharing and on emerging signs of a sense of fairness in children, trying also to factor culture into this presentation. Regarding sharing, it appears that its psychological meaning changes radically between birth and five years. We tried to qualify these changes along three major steps: from affective, to referential, and finally to co-conscious sharing, each corresponding to radically different levels of intersubjectivity (primary, secondary, and tertiary intersubjectivity). The tertiary level is inseparable from the emergence of a propensity to construe oneself through the evaluative eyes of others. In general, the idea proposed is that the development toward tertiary intersubjectivity parallels the emergence of self-consciousness, a special trait of our species, as well as a growing sense of self-reputation in relation to others. It is in this general context that children start to adopt an ethical stance, eventually resulting by five years in the manifestation of a principled and contractual sense of fairness. Within this general context, in the second part we tried to get closer to putative mechanisms driving this development. We reviewed facts regarding the socio-cognitive capacities supporting inequity aversion in development. We considered these capacities as necessary prerequisites. Among others that have yet to be uncovered, we pointed to developmental changes in the expression of inequity aversion linked to changes in the understanding of numeracy and proportionality (what constitutes the what of sharing). We also considered such changes in relation to the developing construal of self in relation to others in terms of perspective-taking and social evaluation (what constitutes the who of sharing). We then reviewed parallel facts on the development of reasoning around the determination of ownership, possession, and exchange relationships (what constitutes the how of sharing). In all, this review points to important socio-cognitive and self developments driving the expression of inequity aversion, in particular the emergence of an explicit ethical and normative stance from around five years of age.
In the last part of the chapter, we tried to factor culture into the development of sharing and fairness, pointing to remarkable invariance, at least up to five years, but also to subtle yet marked cross-cultural differences that we hope future research will investigate and that should help in articulating both the proximate and ultimate mechanisms that drive sharing and fairness in human development. In conclusion, sharing and the notion of fairness are both pillars of the social mind, the topic of this philosophically minded book. By focusing on the report of empirical facts and research, the overarching goal of the chapter was to demonstrate the benefits of trying to naturalize these complex aspects of the social mind. Looking at how children come to own and share, and how they eventually become cogent and assertive regarding what is fair in the face of sharing scarce resources, helps us to construe these highly elusive concepts, both at the core of human social life. We hope to have convinced the reader that by providing an empirical context, the developmental perspective can reveal what is actually at stake and what it takes to have a sense of fairness and equity as we share with others.
Note

1 See also Rochat (2014) and Zahavi and Rochat (2015).
References

Aboud, F. E. (1988). Children and Prejudice. New York: Blackwell.
Adams, J. S. (1963). Toward an understanding of inequity. Journal of Abnormal Psychology, 67(5), 422–436.
Anderson, N. H. & Butzin, C. A. (1978). Integration theory applied to children’s judgments of equity. Developmental Psychology, 14(6), 593–606.
Asch, S. E. (1956). Studies of independence and conformity. Psychological Monographs, 70(9), 1–76.
Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.
Bates, E. (1990). Language about me and you: Pronominal reference and the emerging concept of self. In D. Cicchetti and M. Beeghly (Eds.), The Self in Transition: Infancy to Childhood. Chicago: University of Chicago Press.
Baumard, N., Boyer, P. & Sperber, D. (2010). Evolution of fairness: Cultural variability. Science, 329(5990), 388.
Bigelow, A. E. & Rochat, P. (2006). Two-month-old infants’ sensitivity to social contingency in mother–infant and stranger–infant interaction. Infancy, 9(3), 313–325.
Blake, P. R. & Harris, P. L. (2009). Children’s understanding of ownership transfers. Cognitive Development, 24(2), 133–145.
Bshary, R. & Grutter, A. S. (2006). Image scoring and cooperation in a cleaner fish mutualism. Nature, 441(7096), 975–978.
Burkart, J. M., Fehr, E., Efferson, C. & van Schaik, C. P. (2007). Other-regarding preferences in a non-human primate: Common marmosets provision food altruistically. Proceedings of the National Academy of Sciences, 104(50), 19762–19766.
Brosnan, S. F. & de Waal, F. B. M. (2003). Monkeys reject unequal pay. Nature, 425(6955), 297–299.
Camerer, C. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton, NJ: Princeton University Press.
Camerer, C. & Thaler, R. H. (1995). Anomalies: Ultimatums, dictators, and manners. Journal of Economic Perspectives, 9, 209–219.
Carpenter, M., Call, J. & Tomasello, M. (2005). Twelve- and 18-month-olds copy actions in terms of goals. Developmental Science, 8(1), F13–F20.
Carpenter, M. & Liebal, K. (2011). Joint attention, communication, and knowing together in infancy. In A. Seemann (Ed.), Joint Attention: New Developments in Psychology, Philosophy of Mind, and Social Neuroscience (pp. 159–182). Cambridge, MA: MIT Press.
Cole, P. M. (1986). Children’s spontaneous control of facial expression. Child Development, 57(6), 1309–1321.
Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31(3), 187–276.
Csibra, G. (2010). Recognizing communicative intentions in infancy. Mind & Language, 25(2), 141–168.
Corriveau, K. H. & Harris, P. L. (2010). Preschoolers (sometimes) defer to the majority in making simple perceptual judgments. Developmental Psychology, 46(2), 437.
Corriveau, K. H., Kim, E., Song, G. & Harris, P. L. (2013). Young children’s deference to a consensus varies by culture and judgment setting. Journal of Cognition and Culture, 13(3–4), 367–381.
Cvencek, D., Meltzoff, A. N. & Greenwald, A. G. (2011). Math–gender stereotypes in elementary school children. Child Development, 82(3), 766–779.
Damon, W. (1975). Early conceptions of positive justice as related to the development of logical operations. Child Development, 46(2), 301–312.
Dunn, J. (1988). The Beginnings of Social Understanding. Cambridge, MA: Harvard University Press.
———. (2006). Moral development in early childhood and social interaction in the family. In M. Killen and J. G. Smetana (Eds.), Handbook of Moral Development (pp. 331–350). Mahwah, NJ: Lawrence Erlbaum.
Dwyer, P. D. (2000). Mamihlapinatapai: Games people (might) play. Oceania, 70(3), 231–251.
Eisenberg, N. (2000). Emotion, regulation, and moral development. Annual Review of Psychology, 51(1), 665–697.
Eisenberg, N., Fabes, R. A. & Spinrad, T. L. (1998). Prosocial development. In W. Damon and R. M. Lerner (Eds.), Handbook of Child Psychology (Vol. 3, pp. 646–718). Hoboken, NJ: John Wiley & Sons.
Enright, R. D., Bjerstedt, A., Enright, W. F., Levy, V. M., Lapsley, D. K., Buss, R. R., . . . Zindler, M. (1984). Distributive justice development: Cross-cultural, contextual, and longitudinal evaluations. Child Development, 55(5), 1737–1751.
Faigenbaum, G. (2005). Children’s Economic Experience: Exchange, Reciprocity and Value. Buenos Aires: Libros EnRed.
Fehr, E., Bernhard, H. & Rockenbach, B. (2008). Egalitarianism in young children. Nature, 454, 1079–1084.
Fehr, E. & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. Quarterly Journal of Economics, 114(3), 817–868.
Ferguson, T. J. & Stegge, H. (1998). Measuring guilt in children: A rose by any other name still has thorns. In J. Bybee (Ed.), Guilt and Children (pp. 19–74). San Diego, CA: Academic Press.
Field, T. M. (1984). Early interactions between infants and their postpartum depressed mothers. Infant Behavior and Development, 7(4), 517–522.
Fox, K. F. & Kehret-Ward, T. (1990). Naive theories of price: A developmental model. Psychology & Marketing, 7(4), 311–329.
Friedman, O. & Neary, K. (2008). Determining who owns what: Do children infer ownership from first possession? Cognition, 107(3), 829–849.
———. (2009). First possession beyond the law: Adults’ and young children’s intuitions about ownership. Tulane Law Review, 83, 1–12.
Gergely, G. & Watson, J. S. (1999). Early socio-emotional development: Contingency perception and the social-biofeedback model. Early Social Cognition: Understanding Others in the First Months of Life, 60, 101–136.
Gibson-Wallace, B., Robbins, E. & Rochat, P. (2015). White bias in 3–7 year-old children across cultures. Journal of Cognition and Culture, 15(3–4), 344–373.
Gifford-Smith, M. E. & Brownell, C. A. (2003). Childhood peer relationships: Social acceptance, friendships, and peer networks. Journal of School Psychology, 41(4), 235–284.
Gopnik, A. & Wellman, H. M. (1992). Why the child’s theory of mind really is a theory. Mind & Language, 7(1–2), 145–171.
Hamann, K., Bender, J. & Tomasello, M. (2014). Meritocratic sharing is based on collaboration in 3-year-olds. Developmental Psychology, 50(1), 121.
Hamlin, J. K. & Wynn, K. (2011). Young infants prefer prosocial to antisocial others. Cognitive Development, 26(1), 30–39.
Hamlin, J. K., Wynn, K. & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450(7169), 557–559.
Harris, P. L. & Núñez, M. (1996). Understanding of permission rules by preschool children. Child Development, 67(4), 1572–1591.
Haun, D. & Tomasello, M. (2011). Conformity to peer pressure in preschool children. Child Development, 82(6), 1759–1767.
Haun, D. B., Van Leeuwen, E. J. & Edelson, M. G. (2013). Majority influence in children and other animals. Developmental Cognitive Neuroscience, 3, 61–71.
Henrich, J., Heine, S. J. & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.
Henrich, J., McElreath, R., Barr, A., Ensminger, J., Barrett, C., Bolyanatz, A., . . . Ziker, J. (2006). Costly punishment across human societies. Science, 312(5781), 1767–1770.
Hill, V. & Pillow, B. H. (2006). Children’s understanding of reputations. Journal of Genetic Psychology, 167, 137–157.
Hobson, R. P. (2008). Interpersonally situated cognition. International Journal of Philosophical Studies, 16(3), 377–397.
Hobson, R. P. & Hobson, J. A. (2011). Joint attention or joint engagement? Insights from autism. In A. Seemann (Ed.), Joint Attention: New Developments in Philosophy, Psychology, and Neuroscience (pp. 115–136). Cambridge, MA: MIT Press.
Holmgren, R. A., Eisenberg, N. & Fabes, R. A. (1998). The relations of children’s situational empathy-related emotions to dispositional pro-social behavior. International Journal of Behavioral Development, 22, 169–193.
Hook, J. (1993). Judgments about the right to property from preschool to adulthood. Law and Human Behavior, 17(1), 135.
Horowitz, A. (2012). Fair is fine, but more is better: Limits to inequity aversion in the domestic dog. Social Justice Research, 25(2), 195–212.
House, B. R., Silk, J. B., Henrich, J., Barrett, H. C., Scelza, B. A., Boyette, A. H., . . . Laurence, S. (2013). Ontogeny of prosocial behavior across diverse societies. Proceedings of the National Academy of Sciences, 110(36), 14586–14591.
Hull, D. & Reuter, J. (1977). The development of charitable behavior in elementary school children. The Journal of Genetic Psychology, 131(1), 147–153.
Ingram, G. P. & Bering, J. M. (2010). Children’s tattling: The reporting of everyday norm violations in preschool settings. Child Development, 81(3), 945–957.
Jensen, K., Call, J. & Tomasello, M. (2007). Chimpanzees are rational maximizers in an ultimatum game. Science, 318(5847), 107–109.
Kanngiesser, P., Gjersoe, N. & Hood, B. M. (2010). The effect of creative labor on property-ownership transfer by preschool children and adults. Psychological Science, 21(9), 1236–1241.
Kanngiesser, P. & Warneken, F. (2012). Young children consider merit when sharing resources with others. PLoS ONE, 7(8), e43979.
Kellman, P. J. & Arterberry, M. E. (2006). Perceptual development. The Handbook of Child Psychology: Cognition, Perception, and Language, 6, 109–160.
Kenward, B. & Dahl, M. (2011). Preschoolers distribute scarce resources according to the moral valence of recipients’ previous actions. Developmental Psychology, 47(4), 1054.
Kim, S. & Kalish, C. W. (2009). Children’s ascriptions of property rights with changes of ownership. Cognitive Development, 24(3), 322–336.
Kinzler, K. D., Dupoux, E. & Spelke, E. S. (2007). The native language of social cognition. Proceedings of the National Academy of Sciences, 104(30), 12577–12580.
———. (2012). Native objects and collaborators: Infants’ object choices and acts of giving reflect favor for native over foreign speakers. Journal of Cognition and Development, 13(1), 67–81.
Kochanska, G., DeVet, K., Goldman, M., Murray, K. & Putnam, S. P. (1994). Maternal reports of conscience development and temperament in young children. Child Development, 65(3), 852–868.
Kohlberg, L. (1966). A cognitive-developmental analysis of children’s sex-role concepts and attitudes. In E. E. Maccoby (Ed.), The Development of Sex Differences (pp. 82–172). Stanford, CA: Stanford University Press.
Kundey, S. M., De Los Reyes, A., Royer, E., Molina, S., Monnier, B., German, R., . . . Coshun, A. (2011). Reputation-like inference in domestic dogs (Canis familiaris). Animal Cognition, 14(2), 291–302.
Lakshminaryanan, V., Chen, M. K. & Santos, L. R. (2008). Endowment effect in capuchin monkeys. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1511), 3837–3844.
Larsen, G. Y. & Kellogg, J. (1974). A developmental study of the relation between conservation and sharing behavior. Child Development, 45(3), 849–851.
Leimgruber, K. L., Shaw, A., Santos, L. R. & Olson, K. R. (2012). Young children are more generous when others are aware of their actions. PLoS ONE, 7(10), e48292. doi:10.1371/journal.pone.0048292
Leventhal, G. S. & Anderson, D. (1970). Self-interest and the maintenance of equity. Journal of Personality and Social Psychology, 15(1), 57.
LoBue, V., Nishida, T., Chiong, C., DeLoache, J. S. & Haidt, J. (2011). When getting something good is bad: Even three-year-olds react to inequality. Social Development, 20(1), 154–170.
Lucas, M. M., Wagner, L. & Chow, C. (2008). Fair game: The intuitive economics of resource exchange in four-year-olds. Journal of Social, Evolutionary, and Cultural Psychology, 2(3), 74–88.
McCrink, K., Bloom, P. & Santos, L. R. (2010). Children’s and adults’ judgments of equitable resource distributions. Developmental Science, 13(1), 37–45.
McGillicuddy-de Lisi, A. V., Watkins, C. & Vinchur, A. J. (1994). The effect of relationship on children’s distributive justice reasoning. Child Development, 65(6), 1694–1700.
Malti, T., Gummerum, M., Ongley, S., Chaparro, M., Nola, M. & Bae, N. Y. (2015). “Who is worthy of my generosity?” Recipient characteristics and the development of children’s sharing. International Journal of Behavioral Development, 40(1), 31–40. doi:10.1177/0165025414567007
Marlier, L., Schaal, B. & Soussignan, R. (1998). Neonatal responsiveness to the odor of amniotic and lacteal fluids: A test of perinatal chemosensory continuity. Child Development, 69(3), 611–623.
Melis, A. P., Hare, B. & Tomasello, M. (2006). Chimpanzees recruit the best collaborators. Science, 311(5765), 1297–1300.
Meltzoff, A. N. & Moore, M. K. (1997). Explaining facial imitation: A theoretical model. Early Development & Parenting, 6(3–4), 179.
Moll, H., Carpenter, M. & Tomasello, M. (2007). Fourteen-month-olds know what others experience only in joint engagement. Developmental Science, 10(6), 826–835.
Moll, H. & Meltzoff, A. N. (2011). Perspective-taking and its foundation in joint attention. In N. Eilan, H. Lerman, and J. Roessler (Eds.), Perception, Causation, and Objectivity: Issues in Philosophy and Psychology (pp. 286–304). Oxford: Oxford University Press.
Moll, H., Richter, N., Carpenter, M. & Tomasello, M. (2008). Fourteen-month-olds know what “we” have shared in a special way. Infancy, 13(1), 90–101.
Murnighan, J. K. & Saxon, M. S. (1998). Ultimatum bargaining by children and adults. Journal of Economic Psychology, 19, 415–445.
Sharing and fairness in development Nesdale, D. (2004). Social identity processes and children’s ethnic prejudice. In M. Bennett, F. Sani (Eds.), The Development of the Social Self (pp. 219–245). New York: Psychology Press. ———. (2008). Peer group rejection and children’s intergroup prejudice. In S. R. Levy and M. Killen (Eds.), Intergroup Attitudes and Relations in Childhood Through Adulthood (pp. 32–46). Oxford: Oxford University Press. Nesdale, D., Maass, A., Durkin, K. & Griffiths, J. (2005). Group norms, threat, and children’s racial prejudice. Child Development, 76(3), 652–663. Ng, R., Heyman, G. D. & Barner, D. (2011). Collaboration promotes proportional reasoning about resource distribution in young children. Developmental Psychology, 47(5), 1230. Nisan, M. (1984). Distributive justice and social norms. Child Development, 55(3), 1020–1109. Nowak, M. A. & Sigmund, K. (2005). Evolution of indirect reciprocity. Nature, 437(7063), 1291–1298. Olejnik, A. B. (1976). Effects of reward-deservedness on children’s sharing [Article]. Child Development, 47(2), 380–385. Olson, K. R. & Spelke, E. S. (2008). Foundations of cooperation in young children. Cognition, 108, 222–231. Paulus, M. & Moore, C. (2014). The development of recipient-dependent sharing behavior and sharing expectations in preschool children. Developmental Psychology, 50(3), 914–921. Peterson, C., Peterson, J. & McDonald, N. (1975). Factors affecting reward allocation by preschool children. Child Development, 46(4), 942–947. Piaget, J. (1970). The Child’s Conception of Movement and Speed. New York: Basic Books. Pilgrim, C. & Rueda-Riedle, A. (2002).The importance of social context in cross-cultural comparisons: First graders in Columbia and the United States. The Journal of Genetic Psychology, 163(3), 283–295. Preston, S. D. & de Waal, F. B. M. (2002). Empathy: Its ultimate and proximate bases. Behavioral and Brain Sciences, 25, 1–72. Raihani, N. J. & McAuliffe, K. (2012). 
Does inequity aversion motivate punishment? Cleaner fish as a model system. Social Justice Research, 25(2), 213–231. Rakoczy, H., Warneken, F. & Tomasello, M. (2008). The sources of normativity: Young children's awareness of the normative structure of games. Developmental Psychology, 44(3), 875. Range, F., Horn, L., Viranyi, Z. & Huber, L. (2009). The absence of reward induces inequity aversion in dogs. Proceedings of the National Academy of Sciences, 106(1), 340–345. Rao, N. & Stewart, S. M. (1999). Cultural influences on sharer and recipient behavior: Sharing in Chinese and Indian preschool children. Journal of Cross-Cultural Psychology, 30(2), 219–241. Ratner, N. & Bruner, J. (1978). Games, social exchange and the acquisition of language. Journal of Child Language, 5(3), 391–401. Robbins, E. & Rochat, P. (2011). Emerging signs of strong reciprocity in human ontogeny. Frontiers in Psychology, 2, 1–14. doi: 10.3389/fpsyg.2011.00353 Robbins, E. & Rochat, P. (in prep). Emerging care for reputation in young (3–7 year old) children. Robbins, E., Starr, S. & Rochat, P. (2016). Fairness and distributive justice by 3–5-year-old Tibetan children. Journal of Cross-Cultural Psychology, 47(3), 333–340. Rochat, P. (1999). Early Social Cognition. Mahwah, NJ: Lawrence Erlbaum Associates. ———. (2001). The Infant's World. Cambridge, MA: Harvard University Press. ———. (2009). Others in Mind: Social Origins of Self-Consciousness. New York: Cambridge University Press. ———. (2011). Possession and morality in early development. New Directions for Child and Adolescent Development, 132, 23–28. ———. (2013). The gaze of others. In M. Banaji and S. Gelman (Eds.), Navigating the Social World. New York: Oxford University Press. ———. (2014). Origins of Possession: Owning and Sharing in Development. Cambridge: Cambridge University Press. Rochat, P., Dias, M. D. G., Guo, L. P., Broesch, T., Passos-Ferreira, C., Winning, A. & Berg, B. (2009). 
Fairness in distributive justice by 3- and 5-year-olds across seven cultures. Journal of Cross-Cultural Psychology, 40(3), 416–442. Rochat, P. & Hespos, S. J. (1997). Differential rooting response by neonates: Evidence for an early sense of self. Early Development and Parenting, 6(3–4), 105–112.
Philippe Rochat and Erin Robbins
Rochat, P. & Passos-Ferreira, C. (2009). From imitation to reciprocation and mutual recognition. In J. A. Pineda (Ed.), Mirror Neuron Systems (pp. 191–212). New York: Humana Press. Rochat, P., Robbins, E., Passos-Ferreira, C., Donato Oliva, A., Dias, M. D. G. & Guo, L. P. (2014). Ownership reasoning in children of 7 cultures. Cognition, 132, 471–484. Rochat, P. & Striano, T. (1999). Emerging self-exploration by 2-month-old infants. Developmental Science, 2(2), 206–218. Rochat, P. & Zahavi, D. (2011). The uncanny mirror: A re-framing of mirror self-experience. Consciousness and Cognition, 20(2), 204–213. Ross, H. S., Smith, J., Spielmacher, C. & Recchia, H. (2004). Shading the truth: Self-serving biases in children's reports of sibling conflicts. Merrill-Palmer Quarterly, 50(1), 61–85. Ross, J., Anderson, J. R. & Campbell, R. N. (2011). Situational changes in self-awareness influence 3- and 4-year-olds' self-regulation. Journal of Experimental Child Psychology, 108(1), 126–138. Rossano, F., Fiedler, L. & Tomasello, M. (2015). Preschoolers' understanding of the role of communication and cooperation in establishing property rights. Developmental Psychology, 51(2), 176. Rossano, F., Rakoczy, H. & Tomasello, M. (2011). Young children's understanding of violations of property rights. Cognition, 121(2), 219–227. Russell, Y. I., Call, J. & Dunbar, R. I. (2008). Image scoring in great apes. Behavioural Processes, 78(1), 108–111. Sagi, A. & Hoffman, M. L. (1976). Empathic distress in the newborn. Developmental Psychology, 12(2), 175. Sigelman, C. K. & Waitzman, K. A. (1991). The development of distributive justice orientations: Contextual influences on children's resource allocations. Child Development, 62(6), 1367–1378. Silk, J. B., Brosnan, S. F., Vonk, J., Henrich, J., Povinelli, D. J., Richardson, A. S., . . . Schapiro, S. J. (2005). Chimpanzees are indifferent to the welfare of unrelated group members. Nature, 437(7063), 1357–1359. 
Singer-Freeman, K. E. & Goswami, U. (2001). Does half a pizza equal half a box of chocolates? Proportional matching in an analogy task. Cognitive Development, 16(3), 811–829. Singh, R. (1997). Group harmony and interpersonal fairness in reward allocation: On the loci of the moderation effect. Organizational Behavior and Human Decision Processes, 72(2), 158–183. Slater, A., Morison, V., Somers, M., Mattock, A., Brown, E. & Taylor, D. (1990). Newborn and older infants' perception of partly occluded objects. Infant Behavior and Development, 13(1), 33–49. Sloane, S., Baillargeon, R. & Premack, D. (2012). Do infants have a sense of fairness? Psychological Science, 23(2), 196–204. Smetana, J. G. & Braeges, J. L. (1990). The development of toddlers' moral and conventional judgments. Merrill-Palmer Quarterly, 36(3), 329–346. Sophian, C., Garyantes, D. & Chang, C. (1997). When three is less than two: Early developments in children's understanding of fractional quantities. Developmental Psychology, 33(5), 731–744. Spinillo, A. G. & Bryant, P. (1991). Children's proportional judgments: The importance of "half". Child Development, 62(3), 427–440. Stern, D. N. (1985). The Interpersonal World of the Infant. New York: Basic Books. Stern, D. N., Hofer, L., Haft, W. & Dore, J. (1985). Affect attunement: The sharing by means of intermodal fluency. In T. M. Field and N. A. Fox (Eds.), Social Perception in Infants (pp. 249–286). Norwood, NJ: Ablex. Streater, A. L. & Chertkoff, J. M. (1976). Distribution of rewards in triad: Developmental test of equity theory. Child Development, 47(3), 800–805. Striano, T. & Rochat, P. (2000). Emergence of selective social referencing in infancy. Infancy, 1(2), 253–264. Stipek, D. J., Gralinski, J. H. & Kopp, C. B. (1990). Self-concept development in the toddler years. Developmental Psychology, 26(6), 972. Stipek, D., Recchia, S., McClintic, S. & Lewis, M. (1992). Self-evaluation in young children. 
Monographs of the Society for Research in Child Development, 57(1), 1–95. Subiaul, F., Vonk, J., Okamoto-Barth, S. & Barth, J. (2008). Do chimpanzees learn reputation by observation? Evidence from direct and indirect experience with generous and selfish strangers. Animal Cognition, 11(4), 611–623. Talwar, V. & Lee, K. (2008). Social and cognitive correlates of children's lying behavior. Child Development, 79(4), 866–881. Taylor, C. (1989). Sources of the Self: The Making of the Modern Identity. Cambridge, MA: Harvard University Press.
Thomson, N. R. & Jones, E. F. (2005). Children's, adolescents', and young adults' reward allocations to hypothetical siblings and fairness judgments: Effects of actor gender, character type, and allocation pattern. Journal of Psychology, 139(4), 349–367. Tomasello, M. (1995). Joint attention as social cognition. In C. Moore and P. J. Dunham (Eds.), Joint Attention: Its Origins and Role in Development (pp. 103–130). Hillsdale, NJ: Erlbaum. ———. (1998). One child's early talk about possession. In J. Newman (Ed.), The Linguistics of Giving (pp. 349–373). Amsterdam: John Benjamins. ———. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press. Tomasello, M., Carpenter, M., Call, J., Behne, T. & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28, 675–735. Trevarthen, C. (1980). The foundations of intersubjectivity: Development of interpersonal and cooperative understanding in infants. In D. Olson (Ed.), The Social Foundations of Language and Thought: Essays in Honour of J. S. Bruner (pp. 316–342). New York: W.W. Norton. Ungerer, J. A., Dolby, R., Waters, B., Barnett, B., Kelk, N. & Lewin, V. (1990). The early development of empathy: Self-regulation and individual differences in the first year. Motivation and Emotion, 14(2), 93–106. Vaish, A., Carpenter, M. & Tomasello, M. (2009). Sympathy through affective perspective taking and its relation to prosocial behavior in toddlers. Developmental Psychology, 45(2), 534. ———. (2010). Young children selectively avoid helping people with harmful intentions. Child Development, 81(6), 1661–1669. Warneken, F. & Tomasello, M. (2006). Altruistic helping in human infants and young chimpanzees. Science, 311, 1301–1303. Weinraub, M., Clemens, L. P., Sockloff, A., Ethridge, T., Gracely, E. & Myers, B. (1984). 
The development of sex role stereotypes in the third year: Relationships to gender labelling, gender identity, sex-typed toy preference, and family characteristics. Child Development, 55, 1493–1503. Wellman, H. M., Cross, D. & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72(3), 655–684. Wolff, P. H. (1987). The Development of Behavioral States and the Expression of Emotions in Early Infancy: New Proposals for Investigation. Chicago: University of Chicago Press. Zahavi, D. & Rochat, P. (2015). Sharing ≠ Empathy. Consciousness and Cognition, 36, 543–553. Zahn-Waxler, C. & Radke-Yarrow, M. (1982). The development of altruism: Alternative research strategies. In N. Eisenberg (Ed.), The Development of Prosocial Behavior (pp. 109–137). New York: Academic Press. Zahn-Waxler, C. & Robinson, J. (1995). Empathy and guilt: Early origins of feelings of responsibility. In J. P. Tangney (Ed.), Self-Conscious Emotions: The Psychology of Shame, Guilt, Embarrassment, and Pride (pp. 143–173). New York: Guilford Press. Zeller, M., Vannatta, K., Schafer, J. & Noll, R. B. (2003). Behavioral reputation: A cross-age perspective. Developmental Psychology, 39(1), 129. Zinser, O., Starnes, D. M. & Wild, H. D. (1991). The effect of need on the allocation behavior of children. Journal of Genetic Psychology, 152(2), 35–46.
PART III
Mechanisms of the moral mind
14
DOING THE RIGHT THING FOR THE WRONG REASON
Reputation and moral behavior
Jan M. Engelmann and Christian Zeller
People routinely share valuable resources with needy others, spare their precious time to volunteer at homeless shelters, and donate money to noble causes. Philosophers from Plato to Hume and Kant to Rawls have speculated about the motivations underlying such behaviors. Generally speaking, people might act morally for two different reasons. On the one hand, people might behave morally because they judge it the right thing to do, because they identify with moral principles. Some have called this doing the right thing for the right reason (Scanlon, 2010). We will refer to such motivations as ‘genuine compliance’. On the other hand, people might behave morally for strategic reasons, and, more specifically, to improve their reputation. So people might also do the right thing for the wrong reason. We will refer to such motivations as ‘strategic compliance’. In the past decade or so, experimental research has started to seriously address the motivations underlying moral behavior. Numerous empirical investigations have found strong evidence for a strategic compliance perspective. Results from such diverse disciplines as biology, psychology, and economics convincingly show that reputational incentives provide a powerful motivation for moral behavior. However, some authors argue not only that strategic reasons for moral behavior exist, but that all moral behavior is in fact reducible to such self-interested motivations (e.g. Bateson, Nettle, & Roberts, 2006). In the following, we will refer to this perspective as the ‘reductionist view’. Our aim here is twofold. First, we give an overview of recent empirical work on strategic morality and discuss the set of cognitive abilities and behavioral dispositions necessary to engage in such forms of morality. Second, and based on this first part, we will argue that moral behavior is not reducible to reputational concerns. We propose an alternative interpretation of experimental findings taken as evidence for the ‘reductionist view’. 
While on the surface these experiments might provide sound evidence for ‘strategic compliance’, we will argue that the findings are more appropriately interpreted in terms of ‘genuine compliance’. We proceed as follows. In section 1, we present a short discussion of reputation. Section 2 summarizes representative empirical findings on strategic compliance. In addition, we present conclusions drawn from these findings by supporters of the ‘reductionist view’. In section 3, we analyze cognitive and behavioral prerequisites of both strategic and genuine agents. This will naturally lead us, in section 4, to an alternative model that better accounts for the existing empirical data.
1. What is reputation?
When we refer to someone’s reputation, what exactly are we referring to? The first important point is that a reputation always expresses a certain evaluation of a given person. So, in the moral domain, someone’s reputation reveals an assessment of the moral character of that person. It is important to stress, however, that someone’s reputation should not be confused with our personal judgment or opinion of that person. Rather, as highlighted by Tomasello (2014) and Sperber and Baumard (2012), a reputation represents a collective and public judgment of a given person. It is a shared representation based on common knowledge of how ‘we’ think of a given person. This implies that reputation and personal evaluation can diverge. It is perfectly possible for agents to individually have a positive opinion of a given person, while that same person, at the same time, suffers from a negative reputation. This is important because a reputation comes with the full weight of the collective, and thus with real normative force. In much the same way that moral principles are more than just personal opinions but collective claims, reputations are created and maintained (and often destroyed) by the public discourse in a given group. The point that reputations represent collective evaluations of a group with normative force is supported by research showing that when humans have access to both reputational information and personal observation of a given individual, they trust the reputational information even in situations where it contradicts their personal evaluation (Sommerfeld, Krambeck, Semmann, & Milinski, 2007). A person’s reputation thus has important consequences for her relations with others. People with a good reputation are sought and admired; people with a bad reputation are shunned and condemned. Since individuals are aware of this, they invest much time and energy in behaviors and strategies to improve their reputation. 
Investing in one’s reputation, on our account, is understood as a strategy to improve one’s relations with others and, in particular, to gain the trust of others. A large body of empirical work (reviewed in the next section) shows that from a young age onwards humans pay attention to their own reputation, strategically attempting to improve the way they are evaluated by those around them. Reputation thus has two sides to it. On the one hand, reputations are used as informational tools, which allow agents to enter into smooth relations with one another. Reputations allow cooperators to selectively identify and interact with other cooperators, to the exclusion of cheaters and social loafers. People constantly evaluate others and their reputations, show a keen interest in the latest gossip, readily pass on reputation-related information themselves, and thereby create the public and collective evaluation that is a person’s reputation. However, people not only evaluate other people. They are, simultaneously, aware that they are also being evaluated by those around them; this is the second aspect of reputation. Since a good reputation facilitates and improves relations with others, people spend a great deal of time and energy to build up and maintain their public persona. Proponents of the ‘strategic compliance’ view hold that humans often behave morally with the aim of attaining a good reputation. We now review the evidence for this view.
2. Doing the right thing for the wrong reason: Empirical findings and the ‘reductionist view’
Two key predictions of reputational theories of moral norm compliance are that agents should behave more morally when (i) they are observed compared to when they are alone and (ii) chances are high that others might hear about their actions through gossip. Both predictions are supported by a large body of empirical findings.
Typically, in these experimental studies, participants partake in controlled experiments in the laboratory or in more naturalistic settings in one of two different conditions. In private conditions, the behavior of participants is anonymous and no audience is present. In public conditions, on the other hand, the participant’s behavior is observed; an audience is present which might additionally be of strategic relevance to the participant (for example, by way of access to valuable resources). An increase in norm compliant behavior in the public relative to the private condition is taken as evidence for the view that individuals engage in strategic norm compliance with the ultimate goal of improving their relations with others via a good reputation. In a famous experiment, for example, Milinski and colleagues (2002) showed that people behave cooperatively in a public goods game (by contributing significantly to a common pool) if such a game is coupled with a simple version of direct reciprocity. Agents’ need to maintain positive relations with their partners during direct reciprocity rounds provides them with a strategic interest to invest in their reputation in public goods rounds and thus ‘helps to solve the tragedy of the commons’. Furthermore, agents strategically attempt to gain a better reputation than competitors, a phenomenon known as ‘competitive altruism’ (Barclay & Willer, 2007). Importantly, such forms of strategic compliance seem to emerge early in ontogeny. In a recent experiment, we presented 5-year-old children with two different tasks, helping and stealing (Engelmann, Herrmann, & Tomasello, 2012). What varied according to condition was whether participants were observed by a peer or engaged in the task on their own. Results suggest that even preschool-age children attempt to improve their relations with peers by strategically investing in their reputation. 
When children were observed they stole less and helped more compared to when they were in the room on their own. A second experiment showed that 5-year-old children’s concern for their reputation is sensitive to specific audiences and thus not motivated by a general and inflexible audience effect (Engelmann, Over, Herrmann, & Tomasello, 2013). Children were asked to divide ten stickers between themselves and an anonymous and absent third party. In one condition, they were observed by a peer who could share some of his high-value stickers with the participant afterwards. In a second condition, participants were again observed by a peer, but this time no opportunities for indirect reciprocity were presented. The results were clear-cut: children shared significantly more stickers in the indirect reciprocity condition, strategically improving their relations with important others despite their young age. The second key prediction of reputational theories of norm compliance, involving the expectation that agents should behave more morally when others are likely to hear about their actions through gossip, is supported by a set of recent experiments (Sommerfeld, Krambeck, & Milinski, 2008; Sommerfeld et al., 2007). In these studies, a participant observed the behavior of a second party and was then given the chance of passing on information about this behavior to a different participant. Crucially, this participant was in the process of deciding whether to engage in a cooperative game with the observed party. Results show that (i) participants readily gossip about others’ behavior, (ii) recipients of gossip use this information, and (iii) the use of gossip makes cooperation more stable, since it allows cooperators to selectively interact with other cooperators. 
A study by Beersma and Van Kleef (2011) furthermore showed that participants contribute more resources to a group when they are observed by individuals who have a high tendency to gossip compared to individuals who show no such tendency. Taken together, these experimental findings provide strong evidence for the two key predictions of reputational theories of norm compliant behavior. Humans pay close attention to potential witnesses of their actions and, additionally, are sensitive to whether observers have a tendency to spread important reputational information. In both cases, agents adjust their behavior accordingly and act in compliance with moral norms. In other words, there is robust
experimental evidence that people manage their reputations strategically – as the ‘strategic compliance’ view would predict. At the same time, evidence is also mounting for the ‘genuine compliance’ view and its key prediction that agents should act morally also in the absence of any strategic incentives. Converging experimental evidence from a diverse set of disciplines, from behavioral economics and political science to evolutionary biology and social psychology, demonstrates that individuals from across the globe are intrinsically motivated to comply with moral norms and thus do the right thing for the right reason (for example, Henrich et al., 2010). Support for this view comes mostly from an experimental paradigm called the dictator game. In this game, subjects prototypically receive an initial monetary endowment and can decide whether to share anything, and if so how much, with an anonymous and absent recipient. Any non-zero offers are taken as support for the view that participants are intrinsically motivated to comply with moral norms of sharing. After all, the fact that the game is one-shot (non-iterated) and played in fully anonymous conditions rules out any potential alternative explanations (e.g. strategic incentives such as reciprocity or reputation). Results of the dictator game show that individuals across the globe consistently produce non-zero offers, with the exact percentage varying from 20 to 50% across cultures (Henrich et al., 2010). Results of experimental studies thus provide evidence for proponents of the strategic compliance view as well as defenders of the genuine compliance view. It is undeniable that humans show greater levels of norm compliance when they are observed compared to when alone. However, they also comply with norms even in the absence of observers and, correspondingly, in the absence of strategic incentives. The story does not end here. 
A number of authors have recently proposed that strategic motives are not only one way, but in fact the only way to motivate norm compliant behavior (e.g. Bateson et al., 2006). They argue that purportedly genuine motivations to do the right thing are reducible to strategic concerns. Evidence for this ‘reductionist view’ comes from a series of clever experimental studies showing that even very subtle audience cues can regulate norm compliant behavior. If such manipulations indicate the absence of an audience, norm compliance decreases; if such manipulations indicate the presence of an audience, norm compliance increases. Consider a recent experiment by Haley and Fessler (2005). Participants played a standard dictator game. In one condition, participants were given noise-canceling earmuffs to create the illusion of anonymity. In a second condition they could hear, but were presented with stylized spots designed to look like eyes on a computer screen in front of them. Intriguingly, subjects donated less in the former and more in the latter condition (compared to a control condition), suggesting that even subtle cues of the absence or presence of an audience can have important effects on norm compliant behavior. Far from being an effect of an artificial laboratory setting, this finding has been replicated in a wide variety of naturalistic contexts. For example, in a famous ‘watching eyes’ experiment, Bateson et al. (2006) put up a picture of eyes in a university cafeteria and investigated its effects on contributions to a so-called honesty box. The results are startling: cafeteria visitors paid nearly three times as much for their drinks when the watching eyes image had been put up compared to when this image was replaced with a picture of flowers. 
Many authors have, with varying levels of explicitness, taken these studies as evidence for the ‘reductionist view’ (Barclay, 2004; Bateson et al., 2006; Burnham, 2003; Burnham & Hare, 2007; Haley & Fessler, 2005; van Vugt & Hardy, 2010). If subtle audience cues have a profound effect on norm compliant behavior, the argument goes, then supposedly private and anonymous experimental situations are in fact not perceived as such by participants. This has great repercussions for empirical investigations of genuine norm compliance, as it suggests that even donations
in the dictator game might be motivated by subtle audience cues such as hearing someone talk outside the testing room or seeing someone passing by the window. And since norm compliance in anonymous situations has been taken as evidence for a genuine motivation to comply with norms, we might as well abandon the view that two distinct motivations underlie compliant behavior: one is sufficient. Bateson and colleagues (2006, p. 413) are most explicit:

If even very weak, subconscious cues, such as the photocopied eyes used in this experiment, can strongly enhance cooperation, it is quite possible that the cooperativeness observed in other studies results from the presence in the experimental environment of subtle cues evoking the psychology of being observed. The power of these subconscious cues may be sufficient to override the explicit instructions of the experimenter to the effect that behaviour is anonymous. If this interpretation is correct, then the self-interested motive of reputation maintenance may be sufficient to explain cooperation in the absence of direct return.

Subtle audience cues such as the watching eyes have an effect on norm compliant behavior. This part is uncontroversial. However, are these results really sufficient to discard the ‘genuine compliance’ view? We don’t think so. On the contrary, we argue that the presence of an observer, or even just subtle audience cues such as photocopied eyes, can prompt agents to do the right thing for the right reason (i.e. genuine compliance). That is because observers prompt perspective taking, which leads agents to an evaluation of their current behavior, an internal investigation of whether their current behavior or impulse is consistent with valued ideals and standards. We make our case by first analyzing behavioral and cognitive prerequisites of the strategic and the genuine agent. 
This will then, in a second step, allow us to show how perspective-taking cues, such as watching eyes, can motivate agents to comply with moral norms.
3. Cognitive and behavioral prerequisites of norm-complying agents
In this section, we provide a general characterization of norm-complying agents. A strategic agent complies with a given norm with the aim of building up a good reputation. A genuine agent complies with a given norm with the aim of living up to her moral ideals as anchored in her moral identity. The genuine agent identifies with moral principles that constitute part of her self-conception, and so behaving according to a given norm is important to the agent’s sense of who she is, a phenomenon we will refer to as practical or, more specifically, moral identity (Korsgaard, 2009). Now, what sort of psychological and behavioral prerequisites does an agent need to possess in order to successfully comply with moral norms? To begin with, at a basic level, any type of norm compliance presupposes an agent who knows the relevant norms and, furthermore, is aware that others do so too. In other words, a norm complier is aware that the given norm represents a form of mutual knowledge (Clark, 1996). Moreover, a norm complier operates under the assumption that others not only have knowledge of the relevant norm, but also accept the claims originating from the norm as relevant to their behavior. Last but not least, she appreciates the conditions under which the given norm applies, and knows the actions necessary to comply with it. Norm compliance is, furthermore, critically dependent on self-regulation or inhibitory control, i.e. the ability to inhibit or override unwanted but dominant response tendencies (Tomasello, 2014). Such self-regulation with respect to norm compliance consists of three main constituents (see Inzlicht, Bartholow, & Hirsh, 2015): desired goals that carry motivational
power; psychological mechanisms that detect potential conflicts between desired goals and current impulses; and behavioral strategies that reduce such emotionally aversive conflicts. Finally, norm compliers have to answer to certain standards and as a consequence have to provide justifications for their behavior (Darwall, 2006). In the case of the strategic agent, the agent has to answer to her own self-interest. This implies that the strategic agent has to justify potential courses of action to her self-interest, choosing the course that maximizes the expected and perceived utility. In the case of the genuine agent, the individual has to answer to and provide justifications toward her moral identity. This means that potential courses of action are examined with respect to their congruence with morally relevant aspects of the agent’s identity. To sum up, norm compliant behavior can be characterized in terms of three component parts: knowledge of the norm, self-regulation, and justification. In what follows, we will provide a characterization of the two component parts that exhibit a difference between the strategic and the genuine agent: justification and self-regulation.

The strategic agent
To characterize a reputation-driven individual, let’s use the fictional example of one of the participants in the ‘watching eyes’ experiment by Bateson et al. (2006). It is safe to assume that this agent knows the relevant norm: if you consume a drink, you have to pay the corresponding amount into the honesty box. Now, what is the agent’s first impulse in this context, an anonymous university cafeteria lacking means to enforce payment? Given that we are dealing with a purely self-interested agent, it is safe to assume that the agent’s first impulse is to pay nothing. However, a picture of eyes is looking at the agent from above the honesty box. As suggested by Bateson et al., the effect of the watching eyes can be described as follows. 
By evoking the image of an observer, the eyes prompt the agent to act according to her goal of attaining a good reputation for reasons of self-interest. In the authors’ words: “ . . . we believe that images of eyes motivate cooperative behavior because they induce a perception in participants of being watched. [. . .] Our results therefore support the hypothesis that reputational concerns may be extremely powerful in motivating cooperative behavior” (p. 413). If we consider the three aforementioned constituents of self-regulation, what is the strategic agent’s desired goal? Attaining a good reputation. The second main constituent of self-regulation, a conflict between desired goals and current impulses, arises due to the agent’s first impulse regarding the honesty box, which is to pay nothing. The conflict lies between the agent’s long-term goal of building up a good reputation and her current impulse of taking a drink without paying for it. The watching eyes highlight the agent’s goal of attaining a good reputation, leading the agent to pay the adequate amount into the honesty box and thereby reducing the emotionally aversive conflict between the agent’s current impulse and her long-standing goal. Now, why does the conflict evoked by the watching eyes cause participants to act contrary to their current impulse? Numerous psychological studies show that such conflicts require a solution since they are experienced as aversive (Carver & Scheier, 1998, 2011). This is where the third component of norm compliance comes in, justification. The conflict (between the agent’s current impulse to pay nothing and her long-standing goal for a good reputation) is resolved due to the fact that the agent cannot justify this impulse to her self-interest. As suggested by the previous sentence, the strategic agent has to justify herself to her self-interest. 
Such justifications are a common feature of daily life. This is precisely what we engage in when reflecting on optimal courses of action to attain personal goals; in everyday life, we are confronted
Doing the right thing for the wrong reason
with a myriad of situations that call for deliberation about potential means in light of certain ends. In such cases, we choose the means that can be best justified instrumentally. In the case at hand, for example, the strategic agent who behaves prosocially only with the goal of attaining a good reputation might deliberate as follows: “If I give my fair share, this will help me to build up a good reputation. If I fail to pay, this might have bad consequences for my relations with other people.” The prospect of a good reputation justifies the costly prosocial act. Taken together, the strategic agent can be characterized as follows. He experiences a conflict between his ultimate goal of attaining a good reputation and his immediate desire of not paying for his drink, solves this conflict by way of justification to his authority, namely self-interest, and finally decides to pay for his drink. Now, what about the genuine agent, who does the right thing for the right reason?

The genuine agent

The basic prerequisite for norm compliant behavior, the knowledge of relevant norms, shows no difference between the strategic and the genuine agent. How can the effect of the watching eyes be described in the case of the genuine agent? Before we turn to this question, it is necessary to unpack the concept of a practical identity, given that it plays a key role in our characterization of the genuine agent (Korsgaard, 2009). First of all, why practical? Because it is a conception of our identity that is relevant to the choices and decisions we make. The point is that the ways we think of ourselves, whether as a woman, a friend, a parent, or a scientist, directly bear on the choices we make in a variety of situations. Or, to put the point slightly differently, identities or normative self-conceptions do not come for free; we cannot just say: “I am your friend” or “I am a scientist” and thereby become your friend or a scientist. In order to be someone’s friend or a scientist, we have to act like one.
If you want to think of yourself as someone’s friend, you have to take that person’s needs into account in ways that you don’t with just anyone’s needs. Likewise, if you want to think of yourself as a scientist, you cannot, for example, just put forward unsubstantiated theories, but have to carefully test your hypotheses, collect data, and rule out competing hypotheses. The point is simply that a specific identity comes with a unique set of what can be called identity-based obligations. The workings of a practical identity are extremely context-sensitive. In one situation, let’s say at work, my practical identity as a scientist might be constitutive of my decisions and actions, while in a different context, at home, my practical identity as a husband and father might matter most to me. This is not to say that the delineation is always so clear-cut: my practical identity as a scientist and a father might come into conflict when I both want to have a career as a researcher and spend time with my children. A particularly important practical identity in the case of the genuine agent is her moral identity (Blasi, 1983; Hardy & Carlo, 2005). Hardy and Carlo (2005) define moral identity as the degree to which being a moral person is important to an individual’s identity. A moral identity can include moral principles regarding sympathy, fairness, honesty, and compassion. The more central a given agent considers such principles to who she is, the more developed is her moral identity, and the more she is motivated to live up to her moral standards. In much the same way that the degree to which I consider myself a scientist influences just how seriously I take those obligations associated with the identity ‘scientist’, the degree to which I consider being moral a part of my identity determines how seriously I take moral claims.
It is important to stress, however, that a moral identity is not just like any other practical identity, especially when it comes to context-sensitivity. A person’s moral identity governs her other identities and provides boundary conditions for them. While other important practical identities, such as
Jan M. Engelmann and Christian Zeller
being a friend to Peter and being a scientist, might not regularly influence one another, a moral identity is pervasive with respect to other practical identities. Several experimental studies show that valued identities can motivate behavior as predicted by practical identity theory. For example, Bryan and colleagues (2011) found that even subtle linguistic cues appealing to a practical identity as a voter can have positive effects on voter turnout. Simply framing the process of voting as directly relevant to one’s practical identity not only increased interest in registering to vote but also actual voter turnout – compared to a situation where voting was simply framed as a behavior. More recently, Gino and Mogilner (2014) investigated the effects of priming time, rather than money, on participants’ moral behavior. Results suggest that priming time induced people to reflect on who they are and want to be, thereby presenting them with a chance to manage their practical identity, and consequently led to lower levels of anti-social behavior. How does this all relate to the watching eyes and the honesty box? While the watching eyes highlighted the instrumental goal of achieving a good reputation in the strategic agent, in the case of the genuine agent, the watching eyes prompt the agent’s moral identity, which includes the ideal of behaving honestly. From this it follows that the second component of norm compliant behavior, self-regulation, shows interesting differences between the two agents. Imagine that the genuine agent, in much the same way as the strategic agent, feels an initial temptation not to pay for her drink. This will evoke a conflict, which again instigates self-regulation.
However, while the conflict for the strategic agent consisted in a discrepancy between the ultimate goal of a good reputation (where behaving honestly only figured as a means to this goal) and the current impulse not to pay, the conflict for the genuine agent consists in a contradiction between her moral identity (including the ideal of behaving honestly) and the initial temptation not to pay. We can now ask the same question as before: why does this conflict lead the genuine agent to act according to her moral identity? This is the point where we have to consider the third component of norm compliant behavior, justification. For it is, just as in the case of the strategic agent, due to not being able to justify the current impulse that the conflict is resolved. Remember, at this very same point, the strategic agent deliberated: “Can I justify taking this drink?” He came to the conclusion that not paying for his drink could not be justified as it was contrary to his self-interest in attaining a good reputation. The genuine agent poses the very same question. The crucial difference consists in the type of authority toward which this call for justification is addressed. While the strategic agent’s authority consisted in her self-interest, the genuine agent’s authority consists in her moral identity. In the case at hand, the relevant part of the genuine agent’s practical identity might be norms such as fairness (‘pay for what you have consumed’) and/or solidarity (‘give your share’). The genuine agent thinks of herself as a fair person and cannot reasonably justify to herself behaviors that contradict this identity, such as taking a drink without paying for it. The genuine agent might initially be tempted, just like the strategic agent, but finds that it is inconsistent with her moral identity to simply take the drink.
Deliberating whether certain behaviors can be justified to one’s moral identity is, just like justifications towards one’s self-interest, a common feature of daily life. Imagine finding a wallet on the street or being given too much change in a shop. You deliberate whether you should keep the money, and the result of this process of attempted justification often takes the form of “I am just not the type of person that keeps others’ money.” In much the same way, in the current context of the honesty box, the genuine agent, having reflected on whether not paying can be justified, says to herself: “I am just not the type of person that takes things without paying for them.”
4. Do the watching eyes experiments really provide evidence for the ‘reductionist view’?

There is no doubt that humans show increased levels of norm compliance in the presence of an audience. There is doubt, however, regarding the interpretation of this behavior. Proponents of the ‘strategic compliance’ perspective hold not only that reputational incentives provide a powerful motivation for norm compliance, but that in fact all norm compliant behavior is reducible to such strategic incentives. The main evidence for this view, which we have called the ‘reductionist view’, comes from a set of experiments showing that even subtle audience cues have a strong effect on moral behavior: the watching eyes experiments. However, do these experiments really measure strategic behavior? Our alternative interpretation proceeds in two steps. We will argue first, as evidenced by a large body of psychological studies, that humans automatically compute others’ perspectives – even when the other is represented only by a cartoon figure or a picture of eyes. Second, following Mead (1925) and Tomasello (2016), we will argue that taking someone’s perspective constitutes the crucial prerequisite for any form of moral behavior. In Mead’s (1925) words: “Social control depends, then, upon the degree to which the individuals in society are able to assume the attitudes of the others who are involved with them in common endeavor” (p. 326).

Step 1: Automatic perspective taking

The claim is that participants in the Bateson et al. experiments automatically compute the perspective of the watching eyes. What is the evidence for this? First of all, a long tradition of phenomenological research has stressed that being observed can trigger perspective taking and a subsequent objectification of the self (for an overview, see Stack & Plant, 1982). It was none other than Sartre (1968) who maintained that we become objects of judgment and evaluation when we are looked at.
Second of all, a robust body of psychological studies provides evidence for automatic perspective taking in humans. To begin with, studies show that adults spontaneously follow others’ gazes (Frischen, Bayliss, & Tipper, 2007). In fact, this does not have to be a real person’s gaze – a picture of a face, or just eyes, is sufficient. And, more to the point, such gaze following does not merely amount to a basic attentional cueing effect (Qureshi, Apperly, & Samson, 2010; Samson, Apperly, Braithwaite, Andrews, & Bodley Scott, 2010; Surtees & Apperly, 2012). Rather, as shown most convincingly by Samson and colleagues (2010), following someone’s gaze also involves a representation of the person’s perspective. The authors provide support for this argument by showing that one’s own and someone else’s perspective influence each other, the so-called interference effect. When adults are asked to engage in self-perspective judgments in the presence of an individual with a different perspective, these judgments suffer. Said another way, humans spontaneously compute what another person can see – even if such perspective taking is not relevant to the current task. In fact, Samson and colleagues (2010) report that, at least in some circumstances, judgments about someone else’s perspective are made more easily than self-perspective judgments. What we can conclude from this body of work is that human adults engage in automatic and effortless computations of others’ perspectives. The presence of another human being, or just a picture of another person (as in the Samson et al. studies), is sufficient. The interference effect presents evidence for rapid and involuntary perspective taking – the sort of perspective taking, in other words, that is likely to take place in studies using pictures of eyes. It is thus reasonable to conclude that participants of the watching eyes experiment automatically compute the perspective of the human eyes.
And the human eyes, in this study, focus on the participant.
This interpretation of the effects of the watching eyes is consistent with self-awareness theory, which distinguishes between subjective and objective self-awareness (Duval & Wicklund, 1972). While a state of subjective self-awareness is constituted by a focus on the world outside of us, objective self-awareness is characterized by a focus on ourselves, looking inward as if seeing ourselves through the eyes of another person. Relevant to the current interpretation, self-awareness theory argues that objective self-awareness can be induced through tools that prompt perspective taking, such as mirrors or watching eyes.

Step 2: Perspective taking and social regulation

Given that participants in the Bateson et al. study compute the perspective of the watching eyes (which are focused on themselves), how can this lead to social regulation and the decision to pay for a drink rather than just taking it? According to Mead (1925), any form of social regulation (including morality) crucially involves assuming the perspectives of agents engaged with us in cooperative activities. Social action is possible only when interacting agents represent not only their own perspective, but also their collaborator’s perspective. Perspective taking is thus a fundamental prerequisite for social action, from simple activities like taking a walk together or throwing a ball to complex behaviors such as helping someone in need or following a social norm. How can social control depend on assuming others’ perspectives? Let’s start with a basic example: throwing a ball to each other. When I throw a ball for you to catch, I start off by representing your perspective. For current purposes, let us assume that the sun is setting right behind me. Taking your perspective as a catcher, I realize that you could not see (and so not catch) the ball if I threw it towards you in a high, arcing trajectory, as the sun would be blinding you. I thus decide to throw the ball to you in a horizontal line.
As can be seen from this simple example, perspective taking is a prerequisite for coordinated social action. However, Mead’s insight goes deeper than that. Mead shows that human social action is not a result of the addition and coordination of solitary asocial acts. Instead, each individual act is already social in nature in that it constitutively includes the entire social act. When I throw the ball for you to catch, I do not only represent my throwing but also your catching. This in turn means not only that perspective taking is integral to coordinated social action, but that taking your perspective in fact regulates or controls my behavior from its onset. Assuming your perspective causes me to throw the ball to you in a different way. How can we translate this simple coordinative case of ball throwing to the moral sphere? In the case of ball throwing, the only aspect of your perspective that is relevant to our joint goal is your visual perspective. In a moral context, the relevant aspects of an agent’s perspective might include her attitudes, feelings, dispositions, and normative standards. To become clearer about the ways in which perspective taking in the context of social interaction can regulate morally relevant behavior, let’s consider an everyday second-personal interaction. Assume that you and I are friends and we are having lunch together. I am enjoying my steak and, in particular, the Sauce Hollandaise that comes with it. Since I am not ready to miss a single drop of the sauce, I am about to lick it off my knife. However, at this very moment, I become aware that you are observing my behavior, anticipatory disgust flickering over your face. I represent your perspective on this action, along with your often expressed disgust at people engaging in precisely this action, and so I decide to place my knife back on my plate. In much the same way that I decided to throw the ball to you in horizontal fashion, I have now decided not to lick my knife.
In both cases my actions are regulated via taking your perspective. In one case, the relevant aspect of your perspective only involves the visual sphere. In the other case it involves, in addition, your
attitude towards a certain type of behavior. While this latter case might not amount to full-blown moral behavior, it can be considered a second-personal form of it (Tomasello, 2016), seeing that it involves, in the case of transgression, a proto-moral reactive attitude on your part, a reaction to my failure (in your eyes) to give adequate weight to your attitudes in deciding how to act. And why do I care about not hurting your feelings? Because being your friend is part of my practical identity, and being a good friend precisely involves respecting the other’s feelings, attitudes, and expectations. But how does perspective taking regulate behavior on a more obviously moral level? Let’s go back to the watching eyes experiment and assume for a moment that an agent is tempted not to pay for her drink. What effect will the picture of eyes have on her behavior? The agent will assume the perspective of the watching eyes. From this perspective, she identifies her potential actions – behaving honestly or not – as relevant to moral principles. This is reinforced by the fact that the agent faces an ‘honesty’ box. In other words, the agent’s moral identity is on the line. Analogous to the ball throwing case and the second-personal context, this representation of the moral perspective results in a regulation of the agent’s behavior. The agent decides to pay for her drink as the selfish impulse, not to pay, cannot be justified toward the agent’s moral identity as it is inconsistent with her normative self-conception. The agent might thus deliberate along the following lines: “I am not and don’t want to be the type of person that takes a drink without paying for it.” The argument that assuming someone’s perspective on oneself activates valued identity-based standards is, again, consistent with self-awareness theory (Duval & Wicklund, 1972).
According to this theory, once we focus our attention on ourselves in a state of objective self-awareness (triggered for example by a mirror or watching eyes), we compare and evaluate our current intention or behavior against our personal ideals and values. To resolve any emerging discrepancies, the objectively self-aware agent modifies her behavior so as to align it with her personal values and ideals. This fits with our interpretation of the effects of the watching eyes on the genuine agent. Prompted by a perspective-taking trigger (the watching eyes), the agent evaluates her current impulse (not to pay) against her moral identity, finds that she cannot justify the impulse to this authority, and pays for her drink.

Testing the alternative interpretation

This alternative interpretation of the watching eyes studies generates testable predictions. A first prediction of the current interpretation is that individuals living in cultures that routinely evaluate their behavior from a perspective of objective self-awareness should not modify their behavior as a function of the presence of perspective-taking tools such as pictures of eyes and mirrors. The ‘reductionist view’, on the other hand, predicts that audience cues such as eyes or mirrors should lead to higher levels of norm compliance independent of cultural context. After all, proponents of the ‘reductionist view’ argue that any audience cue should evoke an unconscious, evolved reputational concern. It is well known that cultures vary in regard to how concerned people are about their public presentation and being evaluated by others, or, to put it in Duval and Wicklund’s (1972) terminology, how objectively self-aware they are. More specifically, people from East Asian contexts have been described as particularly attentive to how they imagine they appear to others, and are thus particularly likely to assume a state of objective self-awareness.
In Japan, people are said to continually represent society’s gaze (seken), an internalized set of standards that highlights how individuals appear to society. According to our interpretation, audience cues should have no effect on individuals living in cultures that place such emphasis on the moral perspective. If
you are already engaging in perspective taking, a further perspective-taking cue should have no effect. The reductionist view, on the other hand, predicts an effect across cultures. Support for the former view comes from a study by Heine and colleagues (2008). Participants from Japan (representing a more ‘objectively self-aware’ culture) and North America (representing a less ‘objectively self-aware’ culture) completed a simple performance task either in front of a plain wall or in front of a mirror. Replicating past research, North American participants in the mirror condition were significantly less likely to cheat. For Japanese participants, on the other hand, the mirror did not affect cheating rates. The authors’ interpretation of these results is in line with our current interpretation: individuals from cultures that place emphasis on evaluating their behavior from a perspective of objective self-awareness are not affected by audience cues. A second prediction of the current hypothesis is that agents should act morally even at the cost of a good reputation. On our account, humans have both strategic and genuine motivations for moral behavior, and the latter are not reducible to the former. The ‘reductionist view’ argues that the latter are reducible to the former. Following Sperber and Baumard (2012), if moral behavior were indeed reducible to strategic concerns, one should never be tempted to act morally if by doing so one would damage one’s reputation. In a recent study, we have investigated a situation of this kind (Engelmann, Herrmann, Rapp, & Tomasello, 2016). Specifically, 5-year-old children were introduced to two same-aged peers who could decide whether participants would be given access to real rewards later on. Children thus had a strategic incentive to invest in a positive reputation with their peers.
In the crucial phase of the experiment, children could either conform to the immoral behavior of their peers (and thereby gain a good reputation with them), or, alternatively, do the right thing even in the face of peer pressure. Results show that across two studies and two moral contexts (helping and stealing), 58% of children do the right thing even at the cost of a good reputation with strategically relevant peers, suggesting that they are not exclusively motivated by strategic concerns but also have a strong disposition to do the right thing.
5. Discussion

In the current chapter we have discussed the motivations underlying moral behavior. People do the right thing for the right reason; they identify with moral principles and consider these principles part of their moral identity. People also do the right thing for the wrong reason; they want to improve their reputation and so their relations with others. On our account, both of these motivations present powerful reasons for moral behavior. Much of the moral behavior that we observe in daily life is inspired by a combination of these two motivations. We have reviewed the evidence for a reputational account of moral motivation. In the first part, we have summarized relevant empirical evidence showing that reputational concerns provide a powerful motivation for moral action. There is, in our view, no doubt that reputational concerns underlie much of the prosocial behavior observed in daily life, from helping old people cross the street and giving money to homeless people to public donations to charity and costly sacrifices in wartime. In the second part we have provided a critical account of the more extreme reading of the data, which suggests that all moral behavior is reducible to strategic concerns. We have said, positively, that supposed evidence for such a ‘reductionist view’ can plausibly be interpreted in terms of a genuine moral motivation to do the right thing. We have argued that genuine morality consists in living up to standards or ideals that form part of our moral identity. The sheer psychological force of moral demands, the inescapability of their claims, is a consequence of the fact that they come from deep within us and are linked
to what we think of ourselves and to why we find our lives worth living – in other words, they are anchored in our moral identity. Three objections might be raised to our conceptualization of the genuine moral agent. First, our view might be criticized for relying too heavily on careful reflective deliberation, including justifications toward our moral identities. Much work shows that such reflective agency is much rarer than we like to think and that our behavior in moral contexts is rather caused by unconscious and automatic intuitions (Haidt, 2007). However, describing the cognitive machinery underlying genuine moral action in terms of a normative self-conception does not mean that the current account cannot explain fast, intuitive judgments. Thoughts about our identity do not have to enter our moral reasoning in any conscious, explicit way. If we sufficiently deeply identify with a given identity, we won’t have to deliberate in a slow and systematic way before we engage in judgment and action. If we have worked as scientists for a long time and deeply identify with this identity, we don’t need to think twice about making up data: we just won’t do it (our judgment here will be fast and automatic, a moral intuitionist judgment). That identity-based motivation stemming from our moral identity in fact underlies our actions in morally relevant contexts (and not just a simple preference for e.g. fairness) becomes clear when we consider the slow, often cumbersome way our moral deliberation works when we are faced with a moral dilemma. Sometimes we just don’t know what to do when it comes to complex moral dilemmas – a frequent phenomenological experience that is not captured by the intuitionist picture.
In such cases we often weigh the pros and cons of a particular decision and then decide for the path that is most consistent with our practical identity, which in such cases can also enter our reasoning in an explicit, conscious way (“I have to support him, I am his best friend” or “I cannot leave her alone, I am her mother”). Thus the current account can explain both fast and automatic moral intuitions as well as slow, deliberate moral reasoning. A second objection to the current conceptualization of the genuine moral agent is that it ascribes an unrealistically strong motivational power to our practical identities and, in particular, our moral identity. This objection is warranted, at least to some extent. Thinking of oneself as a moral person, and considering compliance with moral principles as relevant to who one is, does not translate automatically and unfailingly into actual moral behavior. We are experts at constructing elaborate justifications and excuses aimed at rationalizing apparent moral shortcomings not in terms of individual moral responsibility but situational contingencies. The motivational force of identity-based obligations lies in the fact that these self-conceptions are important to who we are; they are the basis of our self-esteem, and so we are reluctant to give them up. The problem is that we can commit ‘moral mistakes’, acting contrary to the standards inherent in our moral identity, without having to give up this valued identity. However, it is also obvious that this strategy only gets us so far. Mild moral transgressions that happen irregularly might be reconcilable with our moral identity. But at the same time we are aware that this is a strategy whose application is limited in time and space, and we reliably recognize our own hypocrisy (Amir, Ariely, & Mazar, 2008). Thus, while we acknowledge the motivational shortcomings of a moral identity, we maintain that identity-based obligations are anything but fragile.
These obligations stem from conceptions of ourselves that make our lives worthwhile, and so routinely neglecting them is not an option. A third objection assumes a cross-cultural perspective and criticizes our notion of a practical identity as too individualistic and Western-centric (see also Haidt, 2012). Isn’t the concept of a person having a bounded, enclosed identity a construct of the Western world, one that is not transferable to the many of the world’s cultures that have a more sociocentric view of the self, where society comes first, selves are interdependent, and identities are defined not as individualistic entities but as other-relating constructs? We maintain that the current account
is not contingent on a specific cultural view of identity but is rather culturally neutral, and thus applicable to any cultural concept of identity. Following Korsgaard (2009), we conjecture that a form, indeed any form, of practical identity is necessary for engaging in human action. What specific form or content a given practical identity takes on is not crucial to the current account. An agent living in a sociocentric culture might thus give obligations towards his society a primary place in his practical identity. On the account that we are proposing, strategic and genuine moral agents self-regulate their behavior by something other than their immediate egoism. Agents regulate their behavior as a function of someone’s evaluation or expectation. In the reputational case, this someone is another person whom we wish to impress for whatever reason. In the true moral case, this someone is just our own evaluating self. We postulate a structural continuity from modifying our behavior as a function of others’ social evaluations to modifying our behavior as a function of our own evaluations; from managing the impressions we make on others to managing our self-concept. Strategic agents feel pressure from outside – they want to conform to others’ expectations. True moral agents, on the other hand, in addition to feeling pressure from outside also feel pressure from within – they want to conform to their own standards and ideals.
References

Amir, O., Ariely, D. & Mazar, N. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45, 633–644.
Barclay, P. (2004). Trustworthiness and competitive altruism can also solve the “tragedy of the commons”. Evolution and Human Behavior, 25(4), 209–220.
Barclay, P. & Willer, R. (2007). Partner choice creates competitive altruism in humans. Proceedings of the Royal Society of London B: Biological Sciences, 274(1610), 749–753. doi:10.1098/rspb.2006.0209
Bateson, M., Nettle, D. & Roberts, G. (2006). Cues of being watched enhance cooperation in a real-world setting. Biology Letters, 2(3), 412–414. doi:10.1098/rsbl.2006.0509
Beersma, B. & Van Kleef, G. A. (2011). How the grapevine keeps you in line: Gossip increases contributions to the group. Social Psychological and Personality Science, 2(6), 642–649.
Blasi, A. (1983). Moral cognition and moral action: A theoretical perspective. Developmental Review, 3, 178–210.
Bryan, C. J., Walton, G. M., Rogers, T. & Dweck, C. S. (2011). Motivating voter turnout by invoking the self. Proceedings of the National Academy of Sciences, 108, 12653–12656.
Burnham, T. C. (2003). Engineering altruism: A theoretical and experimental investigation of anonymity and gift giving. Journal of Economic Behavior & Organization, 50(1), 133–144.
Burnham, T. C. & Hare, B. (2007). Engineering human cooperation: Does involuntary neural activation increase public goods contributions? Human Nature, 18(2), 88–108. doi:10.1007/s12110-007-9012-2
Carver, C. S. & Scheier, M. F. (1998). On the Self-Regulation of Behavior. Cambridge: Cambridge University Press.
———. (2011). Self-regulation of action and affect. In K. D. Vohs and R. F. Baumeister (Eds.), Handbook of Self-Regulation: Research, Theory, and Applications (pp. 3–21). New York: Guilford Press.
Clark, H. H. (1996). Using Language. Cambridge: Cambridge University Press.
Darwall, S. (2006). The Second-Person Standpoint.
Cambridge, MA: Harvard University Press. Duval, T. S. & Wicklund, R. (1972). A Theory of Objective Self-Awareness. New York: Academic Press. Engelmann, J. M., Herrmann, E., Rapp, D. & Tomasello, M. (2016).Young children (sometimes) do the right thing even when their peers do not. Cognitive Development, 39, 86–92. Engelmann, J. M., Herrmann, E. & Tomasello, M. (2012). Five-year olds, but not chimpanzees, attempt to manage their reputations. PLoS ONE, 7, e48433. doi:10.1371/journal.pone.0048433 Engelmann, J. M., Over, H., Herrmann, E. & Tomasello, M. (2013). Young children care more about their reputation with ingroup members and potential reciprocators. Developmental Science, 16(6), 952–958.
260
Doing the right thing for the wrong reason Frischen, A., Bayliss, A. P. & Tipper, S. P. (2007). Gaze cueing of attention:Visual attention, social cognition, and individual differences. Psychological Bulletin, 133(4), 694–724. Gino, F. & Mogilner, C. (2014). Time, money, and morality. Psychological Science, 25(2), 414–421. Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998–1002. doi:10.1126/ science.1137651 ———. (2012). The Righteous Mind:Why Good People Are Divided by Politics and Religion. New York: Paragon. Haley, K. J. & Fessler, D. M. (2005). Nobody’s watching? Subtle cues affect generosity in an anonymous economic game. Evolution and Human Behavior, 26, 245–256. doi:10.1016/j.evolhumbehav.2005.01.002 Hardy, S. & Carlo, G. (2005). Identity as a source of moral motivation. Human Development, 48, 232–256. Heine, S. J.,Takemoto,T., Moskalenko, S., Lasaleta, J. & Henrich, J. (2008). Mirrors in the head: Cultural variation in objective self-awareness. Personality and Social Psychology Bulletin, 34, 879–887. Henrich, J., Ensminger, J., McElreath, R., Barr, A., Barrett, C., Bolyanatz, A., . . . Ziker, J. (2010). Markets, religion, community size, and the evolution of fairness and punishment. Science, 327(5972), 1480–1484. doi:10.1126/science.1182238 Inzlicht, M., Bartholow, B. D. & Hirsh, J. B. (2015). Emotional foundations of cognitive control. Trends in Cognitive Sciences, 19(3), 126–132. Korsgaard, C. M. (2009). Self-Constitution: Agency, Identity, and Integrity. Oxford: Oxford University Press. Mead, G. H. (1925). The genesis of the self and social control. International Journal of Ethics, 35(3), 251–277. Milinski, M., Semmann, D. & Krambeck, H.-J. (2002). Reputation helps solve the ‘tragedy of the commons’. Nature, 415, 424–426. doi:10.1038/415424a Qureshi, A., Apperly, I. A. & Samson, D. (2010). 
Executive function is necessary for perspective-selection, not Level-1 visual perspective-calculation: Evidence from a dual-task study of adults. Cognition, 117, 230–236. Samson, D., Apperly, I. A., Braithwaite, J. J., Andrews, B. J. & Bodley Scott, S. E. (2010). Seeing it their way: Evidence for rapid and involuntary computation of what other people see. Journal of Experimental Psychology Human Perception and Performance, 36(5), 1255–1266. Sartre, J. P. (1968). The Reprieve. New York:Vintage. Scanlon,T. (2010). Moral Dimensions: Permissibility, Meaning, Blame. Cambridge, MA: Harvard University Press Sommerfeld, R. D., Krambeck, H.-J. & Milinski, M. (2008). Multiple Gossip statements and their effect on reputation and trustworthiness. Proceedings of the Royal Society of London B: Biological Sciences, 275(1650), 2529–2536. doi:10.1098/rspb.2008.0762 Sommerfeld, R. D., Krambeck, H.-J., Semmann, D. & Milinski, M. (2007). Gossip as an alternative for direct observation in games of indirect reciprocity. Proceedings of the National Academy of Sciences of the United States of America, 104, 17435–17440. doi:10.1073/pnas.0704598104 Sperber, D. & Baumard, N. (2012). Moral reputation: An evolutionary and cognitive perspective. Mind & Language, 27, 495–518. Stack, G. J. & Plant, R. W. (1982). The phenomenon of “the look”. Philosophy and Phenomenological Research, 42(3), 359–373. Surtees, A.D.R. & Apperly, I. A. (2012). Egocentrism and automatic perspective taking in children and adults. Child Development, 83(2), 452–460. Tomasello, M. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press. ———. (2016). A Natural History of Human Morality. Cambridge, MA: Harvard University Press. van Vugt, M. & Hardy, C. L. (2010). Cooperation for reputation: Wasteful contributions as costly signals in public goods. Group Processes & Intergroup Relations, 13, 101–111.
15
IS NON-CONSEQUENTIALISM A FEATURE OR A BUG?
Fiery Cushman
Introduction
Human moral values mix about nine parts function with one part flop. As psychologists, most of our time is spent trying to understand the flops. It is not that we neglect the functions entirely; they are well identified in the literature. Morality can promote cooperation (Nowak, 2006), decrease violence (Pinker, 2011), foster monogamy (Henrich, Boyd, & Richerson, 2012), protect property (Maynard Smith, 1982), promote trade (Henrich et al., 2010), and even improve our personal health (Rozin, 1997). We psychologists pay homage to the functions of morality with the regularity of a priest reciting an honored liturgy, and at least half the enthusiasm. Then, of course, we flip back to the flops with the thrill of a priest hearing scandalous confessions. Famously, people say it is wrong to sacrifice a person’s life to save many others (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001), and they explain this judgment by saying that it is wrong to kill (Cushman, Young, & Hauser, 2006; Hauser, Cushman, Young, Jin, & Mikhail, 2007). This has all the trappings of a flop. Do these people somehow fail to notice that the bad part of killing – the dead people – is precisely the harm minimized? Or was there some part of killing they thought to be worse than that minor detail of death? In another well-worn flop, people claim it is wrong for siblings to engage in non-procreative sex (Haidt, Koller, & Dias, 1993). Yet, the best reason they can give is to avoid negative consequences for the child (Bjorklund, Haidt, & Murphy, 2000). Are we truly so fearful of miraculous conception? To choose a case that is less discussed but more commonplace, when I go to Europe I feel obligated to tip the waiters 20%. This is well above local moral norms, and I have no special fondness for European waiters. Why are my morals burning a hole in my pocket? Cases such as these share two common themes.
First, in each case a moral value is defined over some property of an action rather than the likely consequence of the action. For instance, people prioritize the injunction against killing (an action) over the minimization of death (the outcome), the injunction against incest over the stipulated harmlessness of a specific case, and the act of tipping over the expectations of a waiter. These examples illustrate an important lesson that has been carefully documented over dozens, perhaps hundreds, of psychological investigations: Human moral judgment is often non-consequentialist. But ours isn’t just any old non-consequentialism, which is the second feature that these cases share in common. Ours is “near miss” non-consequentialism: Each of the cases above comes tantalizingly close to consequentialism, as if they were cases of intended consequentialism gone
slightly awry. Killing is usually bad, tipping is usually good, and the costs of sibling incest tend to outweigh its benefits. A natural inference is that we were really designed to be consequentialist, and that deviations from this ideal are just bugs in the program. Yet, if you point out the bugs to people they most often treat them like features, doubling down on their non-consequentialist intuitions (Cushman et al., 2006; Haidt, 2001). So, which is it: Is non-consequentialism a feature or a bug? Why is near miss non-consequentialism a pervasive feature of human psychology? This essay reviews three kinds of explanations. First, the errors may just be errors – by-products of an imperfect adaptive search process. Second, the errors may reflect a tradeoff between cognitive efficiency and precision. Third, the errors may reflect a strategic commitment in the context of social interaction. These possibilities are not mutually exclusive. As we shall see, however, resolving their relative explanatory power holds important implications for the very definition of morality.

What is a consequentialist moral psychology?
In moral philosophy, consequentialism holds that moral value is a strict function of states of affairs of the world. Thus, the optimal moral behavior maximizes the desirable states of affairs. There are at least two natural ways to define consequentialism at a psychological level. One approach is to ask, “Did the individual choose an action by computing the expected value of resultant states of affairs in the world?” This approach defines consequentialism in terms of the psychological process of decision-making, and so I will refer to it as process consequentialism.
A second approach is to ask, “Did the individual engage in a value-maximizing behavior, given the information available to her at the time of action?” This approach is not concerned with how the individual chooses behavior, but instead with the actual behavior chosen – in other words, the policy of situation-behavior mappings that the individual enacts.1 I will therefore refer to it as policy consequentialism. These two senses of psychological consequentialism are independent. For instance, consider a computer program designed to play blackjack. One approach would be for the programmer to specify the optimal move given every state of play (e.g., “If a 14 is showing, hit”). If the programmer specified the optimal set of moves, then the resulting program would satisfy the criteria for policy consequentialism, but it would not involve process consequentialism because nothing about the program involves computations of expected value. As we shall see, this provides an effective analogy to some evolved reflexes, culturally inherited norms, and learned stimulus-response habits. Conversely, consider a chess player who has memorized certain heuristics concerning favorable configurations of the board (e.g., “It is good to be in control of the center”). When choosing her next move the player carefully considers the sequence of subsequent moves likely to play out, and she chooses a move that maximizes the likelihood of a favorable configuration. This involves process consequentialism because the mechanism that the player relies upon involves a computation of expected value derived from consideration of likely outcomes. Yet, it does not achieve perfect policy consequentialism, defined as the policy most likely to result in winning the game. The outcomes that the player considers are heuristic approximations of the value of states of the board, rather than exactly specified calculations for each state.
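The blackjack contrast can be made concrete. The following minimal sketch is my own illustration, not part of the chapter; the hand totals, the hit/stand threshold, and the toy outcome values are all hypothetical. It shows two agents that can enact similar policies through very different processes: one simply reads behavior off a fixed lookup table, while the other derives the same kind of choice by computing expected values at decision time.

```python
# Illustrative sketch (hypothetical rules and values): two blackjack
# "hit or stand" agents whose policies may coincide, but whose
# decision processes differ fundamentally.

# Policy-specified agent: behavior is read off a fixed lookup table.
# No expected values are computed at choice time, so this is not
# process consequentialism -- yet if the table were well specified in
# advance, it could approximate policy consequentialism.
LOOKUP_TABLE = {total: "hit" for total in range(4, 17)}
LOOKUP_TABLE.update({total: "stand" for total in range(17, 22)})

def table_agent(hand_total):
    return LOOKUP_TABLE[hand_total]

# Process-consequentialist agent: the choice is derived by comparing
# the expected value of resulting states (a crude one-step evaluation
# over a uniform 1-10 card draw, with a toy value for each outcome).
def ev_agent(hand_total):
    def state_value(total):
        return -1.0 if total > 21 else total / 21.0  # toy outcome value
    ev_hit = sum(state_value(hand_total + card) for card in range(1, 11)) / 10
    ev_stand = state_value(hand_total)
    return "hit" if ev_hit > ev_stand else "stand"

# For many hand totals the two processes agree on the enacted policy,
# even though only ev_agent computes expected values.
```

Only the second agent is process-consequentialist; the first can nonetheless enact an (approximately) value-maximizing policy, which is exactly the independence of the two senses described above.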
If we analogize between “winning chess” and “maximizing Darwinian fitness”, then cases of this latter kind are ubiquitous. For instance, our sense of taste for nutritious and calorie-rich food is an approximation of fitness-enhancing outcomes, not an exactly specified calculation. Yet, it enters into psychological processes of expected value maximization (e.g., when an animal forages, or a human chooses a dish at a restaurant). As this example makes salient, policy consequentialism
can only be evaluated after committing to a position on what states of affairs should be maximized (e.g., checkmate, or fitness). It also illustrates that true policy consequentialism will rarely be attained for complex tasks; a more pertinent question is how closely it is approximated. The utility of approximating policy consequentialism has bite, however, when the task in question is “maximizing fitness”. In this case policy consequentialism is simply defined as the optimally fitness-enhancing set of behaviors for each possible state of affairs. This is because natural selection itself is consequentialist, in the sense that its processes are determined exclusively by states of affairs in the world (as opposed, say, to the past causal history that brought those states of affairs about). Yet, while policy consequentialism can be defined as optimal from a fitness perspective, often process consequentialism cannot. Process consequentialism can guarantee a decision-maker the opportunity to make the optimal decision (given her beliefs), but it makes no guarantee about how much time or effort is involved. For tasks of real-world complexity, the computational burden of exhaustively considering all possible sequences of actions usually outweighs its benefits. We can now revisit the motivating question of this essay with greater specificity. Human moral values appear to be organized around the functional goal of maximizing certain states of affairs (e.g., the welfare of others, or one’s own fitness), yet imperfectly. Why are our moral values so often a near miss to policy consequentialism, including in circumstances where a “hit” seems attainable? And, given that process consequentialism is apparently best suited to the task of achieving policy consequentialism, why do we so often rely upon mechanisms that are non-consequentialist in the moral domain?
Incomplete search
Morals are shaped by natural selection operating both genetically and culturally. The basic logic of natural selection is that any random variation that enhances fitness will tend to increase in prevalence – not, of course, that only the ideal forms of random variation will flourish. To return to an example from above, our taste preferences are adapted to the task of providing sufficient calories and nutrients in our diet, but they do not operate perfectly: Aspartame has a pleasing taste even though it lacks both calories and nutritive value. An ideal system of taste preferences would track caloric and nutritive value with greater fidelity, but perhaps the necessary adaptive events simply have not occurred. We are left with a system that operates pretty well but not perfectly. Similarly, human moral values may approximate fitness maximization rather than achieving its ideal form simply because the search over the landscape of possible norms has not yet uncovered the single highest peak. The adaptive search is incomplete. This model of “incomplete search” is often invoked in the literature as part of a broader argument that our moral instincts evolved in an environment that differs from our contemporary environment. There is a direct analogy to the evolution of our tastes: The fact that our tastes are “susceptible” to aspartame is not particularly surprising because it wasn’t available in the environment in which we evolved. (More trenchantly, our prodigious appetite for high-calorie foods was tuned in an environment where they were hard to obtain, but may now be maladaptive in an environment where high-calorie foods are easily obtained in a short time, at a low cost, and in a convenient foil wrapper). This logic motivates one of Greene’s (2008, p. 43) explanations for our selective sensitivity to “up close and personal” forms of physical harm:

The rationale for distinguishing between personal and impersonal forms of harm is largely evolutionary. “Up close and personal” violence has been around for a very long
time, reaching back far into our primate lineage (Wrangham and Peterson, 1996). Given that personal violence is evolutionarily ancient, predating our recently-evolved human capacities for complex abstract reasoning, it should come as no surprise if we have innate responses to personal violence that are powerful, but rather primitive. That is, we might expect humans to have negative emotional responses to certain basic forms of interpersonal violence, where these responses evolved as a means of regulating the behavior of creatures who are capable of intentionally harming one another, but whose survival depends on cooperation and individual restraint. (Sober and Wilson, 1998; Trivers, 1971)

According to this element of Greene’s model, selective pressures favor the phenotype of not harming people (at least to a first approximation – if they are innocent, etc.). Our modern capacity for complex abstract reasoning allows us to maximize realization of this phenotype with greater reliance on process consequentialism, by estimating the harm that each of our available actions would cause to a person and then selecting the harm-minimizing action. In our deep evolutionary past, however, the only way that we tended to harm other people was through “up close and personal” violence – hitting, kicking, biting, etc. Although it would have been optimal to evolve an aversion to the consequences of the actions, we happen to have first evolved a different solution: to innately feel that the very actions themselves are aversive. This adaptation prevented many harmful acts in the ancestral environment. But, it is less equipped to succeed at this task in the modern world, where our actions can often cause great harm at a physical distance, after a temporal delay, or through a chain of intermediary events. Just as our evolved tastes are susceptible to burritos, our evolved morals are susceptible to bombs.
Similar arguments are invoked widely in the literature on moral psychology, both in the context of biological evolution (Slovic, 2007) and cultural evolution (Nisbett & Cohen, 1996). And although this argument is not often applied to morals learned through direct experience (itself a kind of adaptive process), it could be. Consider the possibility that learning morals proceeds by serial hypothesis testing about the optimal behavioral policy. In this case, a person might settle on an approximate, non-consequentialist policy that works well enough before ultimately alighting on true consequentialism. These diverse variants of the incomplete search hypothesis share two important features. First, they explain why people may sometimes define primitive moral values over actions (e.g., biting) rather than over consequences (e.g., harm). This would appear difficult to explain on the assumption that moral values are the product of natural selection, because it is only the consequences of an action that have implications for the fitness of the agent. Yet, a moral value defined over features of an action may sufficiently enhance fitness that it is favored in the absence of a more ideal variant. Second, they view non-consequentialism as fundamentally suboptimal – as a bug, not a feature. If random variation happened to have produced the appropriate process-consequentialist norm, selection would have favored it. In this respect the “incomplete search” explanation of non-consequentialism differs from the other two explanations considered next.
Cognitive efficiency
Above we made a simplifying assumption: Because fitness depends upon the consequences of actions, an ideal cognitive mechanism for fitness maximization would select actions by considering their expected consequences. In other words, a person would choose what to do by exhaustively considering all possible outcomes of all possible actions they could perform. From
the practical standpoint of information processing, however, there is nothing even remotely ideal about such a mechanism. The cognitive costs associated with this information processing task would almost certainly outweigh the benefits of subjectively optimal action selection, given the availability of a well-tuned heuristic. The intrinsic tradeoff between computation and accuracy in decision-making has long been a foundational principle in psychology (Anderson, 1996; Thorndike, 1898) and, indeed, throughout the behavioral sciences (Kahneman, 2011). Contemporary dual process theories of cognition emphasize the competition between controlled processes, which tend to favor accuracy at the cost of cognitive effort, and automatic processes, which tend to favor cognitive efficiency at the cost of inaccuracy. Dual process models are widely applied to moral judgment and behavior (Greene, 2008; Haidt, 2001), and it has been influentially argued that controlled processes tend to embody consequentialist principles, while automatic processes embody deontological principles. On this view, non-consequentialism is an adaptive response to information processing constraints. It is a feature, not a bug. What, precisely, is the relationship between non-consequentialism and cognitive efficiency? Much current research seeks to identify decision-making algorithms that effectively balance computation and accuracy (Botvinick, Niv, & Barto, 2009; Daw & Shohamy, 2008; Dolan & Dayan, 2013; Sutton & Barto, 1998; Sutton, Precup, & Singh, 1999), and several of these approaches embody non-consequentialist principles. Figure 15.1 contrasts stylized representations of three types of approaches to this problem in the form of a decision tree (of course,
[Figure 15.1 appears here. Panels: A. Full search; B. Truncated search; C. Reflex. The panels are annotated with the quadrant labels policy consequentialism vs. policy non-consequentialism and process consequentialism vs. process non-consequentialism.]
Figure 15.1 A schematic diagram of three approaches to learning and decision-making. An individual represents states of the world (circles), possible actions (arrows), and value (+/−). In order to employ full search the individual learns a representation of the world structure and then evaluates the long-term consequences of sequences of actions. In order to rely on habit the individual learns the value of specific actions by recording the historical rewards associated with those actions. In order to rely on truncated search the learner associates value with intermediate states and then plans ahead just until those intermediate states.
many other productive approaches exist). Nodes indicate decision states – discrete states of affairs for the agent. Arrows indicate actions available to the agent. Bolded areas of the decision tree are representations subject to consideration by the agent, while greyed areas of the decision tree are not explicitly considered. Thus, the greater the bolded area, the greater the cognitive demands on the agent. “Full search” of the decision tree involves exhaustively computing the value and probabilities of all expected outcomes of every available action (Figure 15.1A). This approach is maximally computationally intensive, but it also guarantees optimal action selection (subject to the accuracy of the agent’s representation of the world, and the appropriateness of their assignment of reward to states of affairs). One heuristic approach to action selection is to assign value to intermediate states that are predictive of subsequent value (Figure 15.1B). Above we encountered an example: A chess player might assign values to states of the board that do not correspond to checkmate, yet are associated with a relatively high probability of achieving checkmate. This allows the player to identify optimal moves without considering the full decision tree, reducing computational demands. This approach cannot guarantee optimal action selection, however, if the value assigned to intermediate states generalizes over variable circumstances. For instance, perhaps controlling the middle of the chessboard with a rook increases the probability of a subsequent checkmate for most, but not all, opponents. A player who saves cognitive effort by applying this value representation to all opponents will occasionally make suboptimal moves against the minority of opponents. Were she to instead search the full tree (with knowledge of each opponent’s strategy), she would achieve superior performance.
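The difference between full and truncated search can also be sketched computationally. The following is a minimal illustration of my own (the toy tree, its terminal values, and the heuristic are all hypothetical, not taken from the chapter): both planners expand the same decision tree, but the truncated planner stops at a depth limit and substitutes a cached heuristic value for the unexplored subtree, just as the chess player substitutes heuristic board values for unsearched continuations.

```python
# Illustrative sketch: full vs. truncated search over a toy decision tree.
# The tree, terminal values, and heuristic are hypothetical stand-ins.

def children(state):
    """Toy deterministic tree: states below 8 branch into two successors."""
    return [state * 2, state * 2 + 1] if state < 8 else []

def terminal_value(state):
    return state          # value of an actual outcome (a leaf)

def heuristic_value(state):
    return state + 0.5    # cached estimate for an intermediate state

def plan(state, depth_limit=None, depth=0):
    succ = children(state)
    if not succ:                                     # leaf: a real outcome
        return terminal_value(state)
    if depth_limit is not None and depth == depth_limit:
        return heuristic_value(state)                # truncated: stop early
    # full (or continued) search: evaluate every successor's subtree
    return max(plan(s, depth_limit, depth + 1) for s in succ)

full_value = plan(1)                  # exhaustive search down to the leaves
cheap_value = plan(1, depth_limit=1)  # one step, then heuristic estimates
```

The truncated planner does far less work, and its answer is only as good as its cached heuristic values; that is precisely the efficiency/accuracy tradeoff described above.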
This approach to action selection is consequentialist at a process level, but it does not achieve full policy consequentialism. Several other approaches share this characteristic. For instance, it is possible to plan towards an outcome that is correlated with but not identical to the desired outcome, as when following smoke to find fire. This corresponds to the heuristic of attribute substitution (Kahneman, 2011). An alternative family of heuristic approaches to action selection is non-consequentialist at the process level. These work by assigning value not to states of any kind – terminal or intermediate – but to state-action pairs. In other words, given each state an agent could be in, these methods assign value directly to candidate actions that the agent could perform. Reflexes and habitual actions have this characteristic. In each case, a basic association is encoded between a stimulus and potential behavioral responses. Although the assignment of value to state-action pairs is shaped by their historical association with consequences (for instance by natural selection, or by learning), the process of action selection proceeds without considering expected consequences. This is what makes reflexes and habits cognitively efficient.2

Innate action values
Several theories emphasize the innate assignment of moral value to specific actions or state-action pairs (Greene, 2013; Haidt, 2012; Hauser, 2006; Lieberman, Tooby, & Cosmides, 2007; Mikhail, 2011). For instance, humans have an innate aversion to physical intimacy with siblings. The adaptive value of this aversion is well understood: Genetic diversity is associated with increased fitness, and increased relatedness between parents is associated with decreased genetic diversity in their offspring. Yet, the psychological representations that give rise to our innate aversion to incest have nothing to do with these fitness costs. Rather, we assign negative value directly to the act itself.
The relevant alternative to an innate aversion to the act of incest would be innate aversion to its negative fitness consequences, together with innate knowledge of the relationship
between the act and those consequences (or, on a truncated search model, some intermediary consequence such as “pregnancy resulting from incest”). This alternative mechanism is consequentialist. It would lead to qualitatively different behaviors: For instance, all else being equal, people would be comfortable engaging in non-procreative acts of intimacy with siblings. (They are not (Haidt, Koller, & Dias, 1993; Lieberman et al., 2007; Westermarck, 1891)). Yet, it is plausible that a consequentialist adaptation to incest avoidance would not be favored over the empirically observed non-consequentialist adaptation, simply because the cost of the cognitive effort involved in computing the value of incestuous acts would not justify the negligible fitness gain (if any) of non-procreative intimacy with siblings. In other words, efficiency plausibly outweighs accuracy in this case. Greene (2008, p. 60) applies the logic of an efficiency/accuracy tradeoff to explain action-based moral principles, such as the aversion to personal harms, the attraction to spiteful revenge, and the taboo status of certain harmless actions:

Why should our adaptive moral behavior be driven by moral emotions as opposed to something else, such as moral reasoning? The answer, I believe, is that emotions are very reliable and efficient responses to re-occurring situations, whereas reasoning is unreliable and inefficient in such contexts. . . . Nature doesn’t leave it to us to figure out that saving a drowning child is a good thing to do. Instead, it endows us with a powerful “moral sense” that compels us to engage in this sort of behavior (under the right circumstances). In short, when Nature needs to get a behavioral job done, it does it with intuition and emotion wherever it can. Thus, from an evolutionary point of view, it’s no surprise that moral dispositions evolved, and it’s no surprise that these dispositions are implemented emotionally.
The same adaptive logic that favors biological inheritance of non-consequentialist moral values may also favor the cultural inheritance of such values. For instance, it is argued that the moral norm of monogamous marriage is a cultural adaptation, and that a key function of the norm is to reduce violence by reducing the number of sexually mature single males (Henrich et al., 2012).3 Clearly, however, this consequence does not play a significant mechanistic role in the psychological processes responsible for our moral commitment to the institution of monogamous marriage. Although in this case the moral value in question is culturally acquired rather than innate, in both cases greater efficiency may be attained by stating an action-based policy rather than building in causal knowledge and relying on expected value maximization. To choose an example from outside the moral domain, cultural knowledge of CPR is typically transmitted in the form of action-based rules (“Thirty chest compressions, two breaths, repeat until medics arrive”) rather than an exhaustive seminar on cardiopulmonary health. This is presumably because the efficiency benefits of simple action-based rules outweigh the theoretical and likely marginal accuracy benefits of exhaustive knowledge followed by planning. Indeed, planning in this case might actually be lethal to the patient. A recent theoretical model formalizes this cost-benefit tradeoff in a model of the evolution of cooperation (Rand & Bear, 2016). The basic result is that, for a large parameter space, there exists a stable strategy that uniformly applies the heuristic of cooperative behavior when the costs of full deliberation are high. This leads to many instances of rational cooperation (e.g., in an iterated prisoners’ dilemma) but also to instances of locally irrational cooperation (e.g., in a one-shot prisoners’ dilemma).
When the costs of deliberation are sufficiently low, however, the individual conditions her strategy on the particular features of the game she faces.
Learned action values
Of course, there are some circumstances where it is preferable to learn a behavioral policy from experience, rather than to inherit one shaped by biological or cultural selection. This occurs, for example, when environments are not sufficiently predictable on an evolutionary timescale to encode sufficiently optimal reflexes (i.e., innate stimulus-response mappings), or sufficiently accurate innate knowledge (i.e., a model from which optimal behaviors may be computed) (Littman & Ackley, 1991). What is the relationship between learning and moral (non)-consequentialism? It is useful to begin with current psychological and neuroscientific models of value-guided learning and decision-making. These are frequently structured using the formalism of reinforcement learning, which in this case refers not to the behaviorist tradition in psychology, but rather to a branch of computer science (Sutton & Barto, 1998). Research into reinforcement learning aims to find practical algorithmic solutions to the problem of choosing an optimal policy to maximize reward in a given environment. Remarkably, this branch of computer science converged on a distinction with a long-standing and pervasive role in psychology: the distinction between planning and habit (Dickinson, Balleine, Watt, Gonzalez, & Boakes, 1995; Montague, Dayan, & Sejnowski, 1996; Schultz, Dayan, & Montague, 1997; Thorndike, 1898). This marriage of computational and empirical approaches revolutionized the field (Dolan & Dayan, 2013; Glimcher, 2011). It also provides a natural explanation for the distinction between consequentialist and non-consequentialist moral values (Crockett, 2013; Cushman, 2013). When an agent engages in planning, she derives the expected value of candidate actions by considering the outcomes that those actions are likely to bring about.
This depends, of course, on the agent representing an internal model of the statistical associations between actions and outcomes. The agent may conduct an exhaustive search of their represented decision tree (“full planning”, illustrated in Figure 15.1A), or a partial search (“approximate planning”, illustrated in Figure 15.1B) – either way, her behavior is derived from an internally represented model of the world. In the reinforcement learning literature, therefore, such methods are referred to as model-based. It is easy to see how model-based reinforcement learning corresponds to process consequentialism. If social outcomes are defined as rewarding (e.g., minimizing suffering, or achieving revenge), then a model-based agent will choose actions by performing expected value computations over those outcomes. Habits also involve an assignment of value to actions. Rather than deriving these from an internally represented model, however, these are assigned based on the history of past reward. In other words, when an agent performs an action and it leads to a favorable state of affairs, it increases the value associated with that action (specific to that context), and the opposite for actions that lead to unfavorable states of affairs. Figure 15.1C depicts a representation of the resulting summary of value to action. Because it does not require an internally represented model linking actions to their expected outcomes, this family of approaches is referred to as model-free. It has the virtue of increased computational efficiency, because the agent does not have to engage in a costly search over a statistical model of action-outcome contingencies. On the other hand, it has the drawback of inflexibility, because an agent relying on model-free methods alone has no model with which to update her action values when circumstances change. Crucially, it also embodies a version of non-consequentialism.
Although habit learning tends to converge to reward maximization in the long run (i.e., approximate policy consequentialism), the underlying psychological processes are non-consequentialist in the sense that they assign value directly to actions, rather than deriving those values from a model of expected outcomes.
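The contrast between these two value computations can be sketched in a few lines of Python. This is a toy illustration under invented assumptions (a one-step environment with two arbitrary actions and made-up payoff probabilities), not the specific models cited above: the model-based agent derives expected values by searching its internal model, while the model-free agent caches one value per action from sampled reward alone.

```python
import random

random.seed(0)

# Toy one-step environment: each action leads stochastically to an outcome.
# The (probability, reward) pairs below are invented for illustration.
model = {
    "help": [(0.8, 1.0), (0.2, -0.5)],
    "harm": [(0.5, 2.0), (0.5, -3.0)],
}

def model_based_value(action):
    """Planning: derive expected value by searching the internal model."""
    return sum(p * r for p, r in model[action])

def sample_outcome(action):
    """The environment itself; the model-free learner never inspects `model`."""
    (p, r_common), (_, r_rare) = model[action]
    return r_common if random.random() < p else r_rare

# Habit (model-free) learning: cache one value per action, updated from
# experienced reward alone via a temporal-difference rule.
q = {"help": 0.0, "harm": 0.0}
alpha = 0.02  # learning rate
for _ in range(20000):
    action = random.choice(list(q))
    reward = sample_outcome(action)
    q[action] += alpha * (reward - q[action])

for action in q:
    print(action, round(model_based_value(action), 2), round(q[action], 2))
```

Both agents end up with similar values here, which mirrors the point in the text: habit learning tends toward reward maximization in the long run, but only the model-based agent could revalue an action immediately if the model itself changed.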
Fiery Cushman
Both of these mechanisms – model-based and model-free learning – are argued to play a key role in explaining the structure of moral cognition (Crockett, 2013; Cushman, 2013). Two elements of this proposal are provocative. The first is that it implies an overlap between the mechanisms supporting non-moral reward learning – indeed, non-social reward learning – and the mechanisms underlying moral judgment and behavior. In fact, much evidence supports this conclusion (Kvaran & Sanfey, 2010; Ruff & Fehr, 2014). For instance, empathy for others’ pain activates similar neural substrates to the first-hand experience of pain (Lamm, Decety, & Singer, 2011). Much the same is true for personally rewarding outcomes and the observation of rewards obtained by others (Braams et al., 2013; Mobbs et al., 2009). And when people are put in the position to choose goods for others, they tend to use the same mechanisms implicated in choosing goods for themselves (Cooper, Dunne, Furey, & O’Doherty, 2012; Harbaugh, Mayr, & Burghart, 2007; Shenhav & Greene, 2010), including in situations where this generosity carries a personal cost (Hare, Camerer, Knoepfle, O’Doherty, & Rangel, 2010; Janowski, Camerer, & Rangel, 2013; Zaki & Mitchell, 2011). The second provocative element of this proposal is the claim that habit learning, in particular, can help to explain moral non-consequentialism. The concept of a “moral habit” itself has a long and distinguished philosophical pedigree, dating back at least as far as Aristotle’s Nicomachean Ethics. Only recently, however, has the mechanism of habit learning been proposed to specifically support non-consequentialist judgments. In the trolley problem, for instance, an outcome-based assessment favors doing direct harm to a focal individual, but people find it difficult to endorse such harm. This can be understood as the consequence of negative value assigned habitually to an action: direct, physical harm (Cushman, 2013).
Indeed, research has shown that people persist in their intuitive aversion to typically harmful actions (e.g., pulling the trigger of a gun pointed at a person) even when they have explicit knowledge that, in this context, the action is harmless (e.g., because it is a non-functional replica gun) (Cushman, Gray, Gaffey, & Mendes, 2012). This form of “irrational” persistence is a key signature of habit learning (Dickinson et al., 1995). Other proposals in the literature account for moral behavior in a similar manner, but without explicitly invoking the psychological concept of a habit, or the associated category of model-free reinforcement learning algorithms. For instance, the Social Heuristics Hypothesis (SHH) explains intuitive moral behavior as a consequence of reward learning mechanisms that tend to exhibit contextual inflexibility:

According to the SHH, people internalize strategies that are typically advantageous and successful in their daily social interactions. . . . More reflective, deliberative processes may then override these generalized automatic responses, causing subjects to shift their behaviour towards the behaviour that is most advantageous in context. Thus, the SHH can be thought of as taking theories related to social emotions and norm internalization and making them explicitly dual process.
(Rand et al., 2014)

These approaches share the basic insight that the cognitive efficiency of non-consequentialism may outweigh its inferior accuracy.
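The “irrational” persistence described above has a simple computational signature: a cached action value changes only through new experience, no matter how thoroughly the agent’s explicit model has been updated. A minimal sketch with invented numbers (this is an illustration of the principle, not a model of the cited experiments):

```python
# Model-free (habitual) value: cached from experience, updated only by a
# temporal-difference rule. All payoffs here are invented for illustration.
reward = {"pull_trigger": -10.0}   # pulling the trigger harms someone
q = {"pull_trigger": 0.0}
alpha = 0.1                        # learning rate

def td_update(action):
    q[action] += alpha * (reward[action] - q[action])

for _ in range(100):               # habit acquisition with a real gun
    td_update("pull_trigger")

reward["pull_trigger"] = 0.0       # revealed to be a non-functional replica

model_based = reward["pull_trigger"]   # planning over the updated model
print("model-based value:", model_based)                   # 0.0 immediately
print("habitual value:   ", round(q["pull_trigger"], 2))   # still -10.0

for _ in range(5):                 # a few harmless experiences only
    td_update("pull_trigger")      # partially erode the cached aversion
print("after 5 harmless trials:", round(q["pull_trigger"], 2))
```

The model-based valuation tracks the change in the world instantly; the cached habitual value decays only gradually, trial by trial, just as the aversion to “shooting” a replica gun persists despite explicit knowledge that it is harmless.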
Adaptive commitment

In addition to incomplete search and cognitive efficiency, a third rationale for non-consequentialist moral judgment is its potential strategic advantage in the context of social interaction. A classic illustration is the Cold War doctrine of mutual assured destruction (MAD). Nuclear states
Is non-consequentialism a feature or a bug?
sometimes commit themselves to a policy of total and catastrophic retaliation in the event of an adversary launching a first strike. This behavior is non-consequentialist in the sense that it was widely perceived to imply the complete and mutual destruction of both participants in the conflict, whereas non-retaliation against a limited first strike could avoid complete destruction for both participants. Worse still, retaliation could rapidly escalate a non-catastrophic strike into a catastrophic one. Yet, credible commitment to the MAD policy imparts a strategic advantage by rendering any attack by potential adversaries strictly irrational. In other words, nations achieve the best possible outcome by committing themselves to a suboptimal policy. How does this work? The resolution to this apparent paradox lies at the intersection of a social dilemma and an inter-temporal dilemma (Rachlin, 2002). Let us make the problem concrete: President Kennedy is setting US strategic nuclear policy, and he is considering the possibility of a first strike by the USSR. In July, when Kennedy sets his policy, his aim is to dissuade the USSR from launching a first strike. The MAD policy is an effective deterrent only if the USSR knows and trusts that the US is truly committed to it. Now, suppose that Kennedy commits to this strategy in July, yet the USSR launches a first strike nevertheless in October. As the first salvo of missiles flies through the air, what is the optimal course of action for Kennedy? At this point, following through with the MAD policy is no longer preferred (assuming, as usual, that this would assure the complete destruction of both antagonists). Thus there is an inter-temporal dilemma: MAD, the policy that is optimal for Kennedy in July, is no longer optimal in October.
At first blush, it would appear that the optimal policy for the US is to project commitment to MAD in July, but never to act on it in the event of a future first strike. Yet, if the USSR has the capacity for accurate inference of the US’s commitment, this will not be a viable approach – the USSR will not view the US’s pseudo-MAD policy as a credible threat. This is the intersecting social dilemma: The US may favor a policy that occasionally produces suboptimal action if the expected costs are outweighed by the benefit of a social partner’s trust. In other words, a rational bid to attain trust may require committing yourself to a subsequently irrational action. This occurs when social partners have the capacity to accurately assess the strength of your strategic commitments – your willingness to stand by July’s logic even in a nuclear October. How could commitment be achieved at a mechanistic level? It is clear what will not work: The organism cannot use mechanistic consequentialism (i.e., model-based reasoning) over values that align with its own fitness interests. Such a mechanism would defeat its own attempts at commitment, backing out of policies that once maximized fitness as circumstances change. One possibility is to outsource commitment to a physical device, such as the “doomsday machine” envisioned in Dr. Strangelove. (This machine was designed to launch a retaliatory strike without the possibility of human override, foreclosing a rational override of July’s decisions in October, as it were.) Enforceable contracts may be used to similar effect. Another possibility, fully implemented psychologically, is to circumvent expected value calculations in October by blindly enacting a stimulus-response routine: If a first strike is launched, reply with a retaliatory strike. This method of action selection would clearly be non-consequentialist at a process level.
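The difference between these mechanisms can be made concrete with a toy sketch (the payoffs and function names are invented for this example). A consequentialist policy re-derives the best action from October’s payoffs and so backs out of retaliation; a hard-coded stimulus-response policy does not, and for that very reason it deters an adversary who can accurately read the defender’s policy:

```python
# Hypothetical payoffs to the defender in October, after a limited first
# strike. Retaliation triggers mutual annihilation; standing down does not.
payoff = {"retaliate": -100.0, "stand_down": -10.0}

def consequentialist_policy(state):
    """Re-derives the best action from current expected payoffs."""
    if state == "first_strike":
        return max(payoff, key=payoff.get)   # backs out of MAD
    return "no_action"

def committed_policy(state):
    """Blind stimulus-response rule: no payoff computation at all."""
    return "retaliate" if state == "first_strike" else "no_action"

def adversary_strikes(defender_policy):
    """An adversary with accurate inference: strike only if the defender
    would not actually retaliate."""
    return defender_policy("first_strike") != "retaliate"

print(consequentialist_policy("first_strike"))     # stand_down
print(adversary_strikes(consequentialist_policy))  # True: deterrence fails
print(adversary_strikes(committed_policy))         # False: MAD deters
```

The consequentialist defender ends up in the worse position precisely because it computes expected values at the moment of decision; the committed defender is never attacked at all.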
A third, related approach to assuring commitment to the MAD policy would be to assign intrinsic value to “irrational” behavior: For instance, placing great positive value on the annihilation of one’s antagonist and little negative value on the certain consequent annihilation of one’s self. Is such a mechanism consequentialist? On the one hand, decision-making following a first strike could conform algorithmically to process consequentialism (i.e., expected value maximization) while still resulting in implementation of the MAD policy. Commitment
would be achieved not by circumventing model-based reasoning, but instead by redirecting it towards the very goals of commitment. On the other hand, this approach manifestly violates policy consequentialism in that it functions precisely by omitting consideration of the specific outcomes most relevant to fitness – possibly including, in the case of MAD, one’s own survival. This is the sense in which placing intrinsic value on commitment – for instance, the commitment to retaliation – is an adaptive form of non-consequentialism. Employing this logic, seminal research by Frank (1988) linked the adaptive value of non-consequentialism to emotion. Frank argued that emotions exhibit two hallmark features of adaptive non-consequentialism. First, emotions often have the consequence of compelling actions that violate local subjective consequentialism. For instance, gratitude may commit us to costly reciprocity, anger may commit us to costly acts of spite, and the fidelity of love can commit us to the adaptive opportunity cost of foregone mating opportunities. Second, emotions tend to be observable by social partners; for instance, among the people you interact with, you could probably guess who is most likely to harbor gratitude, anger, or love towards you. When observable emotions credibly bind us to non-consequentialist action, they are ideal candidates for a psychological implementation of adaptive non-consequentialism. Current theoretical work applies this basic insight to several specific cases, providing rigorous support for the core logic based on a combination of game theory and evolutionary dynamics. For instance, in our own research we have investigated the way that this logic contributes to retributive punishment (Morris, McGlashan, Littman, & Cushman, in prep). We consider a setting in which one player (the thief) has the opportunity to steal from another (the victim), and then the victim has the opportunity to engage in costly punishment of the thief.
Both the thief and the victim may choose their behavioral response based on a rational calculation of expected value conditioned upon their experience with their social partner, or instead based on a heritable reactive strategy. We find that adaptive processes favor flexible expected value maximization on the part of thieves, but instead favor the reactive strategy of retributive punishment on the part of victims. This can lead to functionally non-consequentialist behavior – for instance, persistent punishment of thieves who are unable to learn from such punishment. It provides an adaptive advantage, however, in that the inviolable commitment to retributive punishment drives the population equilibrium away from theft.

Reputation, partner choice, and strategic non-consequentialism

The logic of adaptive non-consequentialism becomes especially powerful when combined with a family of evolutionary models involving reputation, or “partner choice”. The key idea behind mutualistic partner choice is that social agents participate in an open market for cooperative partners. Put simply, because your friends have a lot of other potential friends to choose from, it makes sense to be a good friend. If you aren’t good, your friends will leave you for greener pastures, and then you’ll miss out on the benefits of friendship (i.e., repeated non-zero-sum exchange) (Baumard, André, & Sperber, 2013; Noë & Hammerstein, 1994; Roberts, 1998). Similar logic applies, of course, to a wide variety of social relationships. And even in mandatory social relationships, where choosing a new partner is not possible, it can be adaptive to build and maintain a positive reputation by sticking to moral commitments. This encourages investment by the mandatory social partner, to the mutual profit of both parties. Insofar as consequentialist thought actually undermines commitment, then, the perception of consequentialist thought in others may undermine trust, reputation, friendship, and the like.
Consistent with this logic, both empirical and theoretical evidence indicates that good friends act nice without thinking – i.e., for non-consequentialist reasons. People afford greater trust,
friendship and moral worth to individuals whose prosocial actions are grounded in intuitive gut feelings, compared with those whose identical prosocial actions are grounded in deliberation (Critcher, Inbar, & Pizarro, 2013; Pizarro, Uhlmann, & Salovey, 2003). Current proposals suggest that partner choice models may therefore provide an explanation for many non-consequentialist values (Baumard & Sheskin, 2015; Everett, Pizarro, & Crockett, 2016). Consistent with this interpretation, people tend to make more positive personal attributions towards social partners who make characteristically deontological, versus consequentialist, moral judgments (Everett et al., 2016; Uhlmann, Zhu, & Tannenbaum, 2013), and also trust them more in the context of economic exchange (Everett et al., 2016). A recent theoretical model provides a compelling explanation for these empirical phenomena in terms of partner choice (Hoffman, Yoeli, & Nowak, 2015). The model centers on a game in which one person (the allocator) faces the option of cooperating with or defecting against a partner (the responder) in an iterated social dilemma. The allocator has a choice between considering the payoff consequences to herself on each round, or else deliberately neglecting these consequences. (Because the consequences to the self vary from round to round, cooperation will sometimes yield a greater payoff to the self, and other times a lesser one.) After each decision by the allocator, the responder may either continue the social interaction with the allocator or else terminate it unilaterally and irrevocably. This feature of the model corresponds to partner choice, in the sense that the social link between the players is either maintained or severed. For a sizable parameter space, there is a subgame perfect equilibrium in which the allocator always cooperates without considering the consequences to herself, and the responder continues the game only if the allocator maintains this behavior.
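This dynamic can be sketched in a drastically simplified form (the parameters, payoffs, and exit rule are invented for illustration and are simpler than the cited model, which also lets the responder react to looking itself): an allocator who “looks” defects whenever cooperation happens to be costly, and thereby forfeits the relationship; an allocator who cooperates without looking keeps it.

```python
import random

random.seed(1)

# Invented parameters: cooperating yields the allocator a benefit of 3 at a
# per-round cost of either 1 (cheap) or 4 (expensive); defecting pays 5 once,
# after which the responder ends the relationship forever.
ROUNDS, B_COOP, B_DEFECT = 200, 3.0, 5.0

def draw_cost():
    return random.choice([1.0, 4.0])

def play(looks_at_payoffs):
    total = 0.0
    for _ in range(ROUNDS):
        cost = draw_cost()
        if looks_at_payoffs and cost > B_COOP:
            return total + B_DEFECT   # defect once; responder exits for good
        total += B_COOP - cost        # cooperate (without looking)
    return total

looking_total = play(True)
blind_total = play(False)
print("looks, defects when costly:", looking_total)
print("cooperates without looking:", blind_total)
```

Despite absorbing the expensive rounds, the blind cooperator earns more over the repeated game, which mirrors the equilibrium logic described above: deliberately neglecting payoff-relevant information preserves a relationship that is profitable on average.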
Intuitively, this equilibrium is favored by the responder because she can rely on cooperation, and it is favored by the allocator because it maintains a social relationship that is, on average, profitable. The allocator’s behavior is non-consequentialist at the process level because it deliberately excludes payoff-relevant information from the decision (Baumard & Sheskin, 2015). As we have seen, the strategic equilibrium described could be realized using any of several different psychological mechanisms. One approach would be to eschew any consequence-based reasoning, and to simply assign value to the action of cooperating with social partners. This corresponds to process non-consequentialism. Another approach would be to engage in consequence-based reasoning but to systematically exclude payoffs to oneself from the reward function, maximizing exclusively over payoffs to the relevant social partners. This approach is process consequentialist. There is a mismatch, however, between the consequences that make the policy adaptively favored (positive fitness consequences for oneself) and the consequences that are maximized at the level of psychological mechanism, which apply exclusively to others. In this way, even the latter approach may be thought of as a kind of adaptive non-consequentialism.

Reputation and moral identity

As we have seen, non-consequentialism works best when it is perceived to arise from non-strategic motives. Thus you will find the best friends if you are perceived to be irrationally charitable, and not to be strategically angling for reciprocity. Likewise, you will dissuade potential adversaries best when you are perceived to be irrationally vengeful, not a strategic bluffer. On the crucial assumption that people have some ability to perceive others’ decision strategies (albeit a limited ability), this furnishes an adaptive rationale for actual non-consequentialist moral values, not just apparent ones.
This final point – the limitations of our ability to detect others’ motives – has crucial implications for the nature of moral thought and behavior. It means that we are forever in a position
of uncertainty, attempting to assess whether the social behavior we observe in others reflects a genuine and reliable commitment (i.e., process non-consequentialism), or whether instead it reflects veiled strategic motives and plans. Such worries must have weighed heavily on the minds of those world leaders who controlled nuclear arsenals during the height of the Cold War. They are probably just as familiar to anyone who has ever been to high school, fallen in love, gossiped over a water cooler, or wielded any of the other nuclear arms of our daily social lives. This inferential problem is fraught because many moral acts can be explained equally well by two hypotheses: true commitment, or strategic feigning. When a lover gives roses on Valentine’s Day, is this an expression of love or an act of manipulation? When a bully threatens, is this a manifestation of hair-trigger aggression or a shallow bluff? Often, we cannot know. There are some circumstances, however, that provide strong evidence in favor of a single hypothesis, such as when a lover binds himself to the contract of marriage (committed!), or is spied through a window hitting on his waitress (strategic!). Such episodes can trigger the reassessment of past behavior – suddenly last year’s Valentine’s Day roses don’t look so rosy. As a consequence, when a person is suspected of being strategic, it can be difficult to repair reputational damage. After all, future acts of apparent commitment may always be interpreted in light of one’s past departure from the primrose path. This introduces an imperative interest in maintaining one’s moral reputation (Aquino & Reed II, 2002; Gausel & Leach, 2011; Jordan, Hoffman, Bloom, & Rand, 2016; Sperber & Baumard, 2012; Wojciszke, 2005), and it suggests that even small deviations from moral behavior may be catastrophic (Tannenbaum, Uhlmann, & Diermeier, 2011).
It also suggests that we should have psychological mechanisms designed specifically for the task of assessing others’ moral character, with a powerful purchase on our judgments and behaviors (Martin & Cushman, 2015). It may also explain why moral features are considered a particularly fundamental dimension of personal identity (Strohminger & Nichols, 2014, 2015). Finally, it suggests a natural explanation for the motivating concepts and intuitions of philosophical virtue ethics. In summary, an account of act-based non-consequentialism married with the logic of partner choice provides a natural account of person-based moral judgment (Uhlmann, Pizarro, & Diermeier, 2015).
Conclusion: What is morality?

I have reviewed three distinct explanations for the non-consequentialism of human moral psychology: incomplete search, cognitive efficiency, and adaptive commitment. According to the first model, non-consequentialism is a bug. According to the second model, it is a feature, but it carries a distinct cost. The desirable property of non-consequentialist judgments is not that they are non-consequentialist, but merely that they are cognitively efficient. According to the third model, however, the very property of non-consequentialism embodies a key adaptive advantage. Presumably each of these explanations plays a role in explaining some subset of non-consequentialist morals. Future work must resolve whether one of them tends to be a more prevalent explanation than the others. Insofar as this dictates whether non-consequentialism tends more often to be a feature or a bug, it may carry implications for the normative status of non-consequentialist ethical systems. The relative scope of each explanation has a second implication, however, that is at once less obvious and yet more certain: It may help us to define morality. According to the models of incomplete search and cognitive efficiency, non-consequentialism is not at all special to morality. Rather, the same forces that give rise to moral non-consequentialism
would be expected to give rise to non-consequentialist mechanisms in the domain of personal choice and resource maximization. In fact, in some cases the very same mechanisms might be responsible for decision-making in both domains. This encapsulates a dominant view in social and cognitive neuroscience: Moral decision-making is a subset of ordinary decision-making that involves more-or-less identical mechanisms operating in more-or-less the same neural substrates (Ruff & Fehr, 2014). At a mechanistic level, there are two features of morality that stand out as semi-distinctive. First, the core architecture of decision-making interfaces with an architecture designed to represent and reason about others’ mental states (Buckner, Andrews-Hanna, & Schacter, 2008; Young & Dungan, 2012). Second, the architecture operates with distinct primitive forms of reward in the moral domain – for instance, the reward of reciprocity in a state of gratitude, or of revenge in a state of fury. At its heart, however, this perspective does not favor a definition of morality in terms of specific mechanism; rather, it invites a definition in terms of the adaptive function accomplished by relatively domain-general mechanisms. This perspective is widely adopted in the field. For instance, Haidt (2008) offers: “Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.” He refers to this as a “functionalist” definition of morality: “Rather than specifying the content of moral issues (e.g., “justice, rights, and welfare”), this definition specifies the function of moral systems” (Haidt & Kesebir, 2010, p. 800). Greene (2015) takes a similar approach, arguing that some natural kinds are “bound together, not at the mechanical level, but at the functional level. I believe that the same is true of morality.
So far as we can tell, the field of moral cognition does not study a distinctive set of cognitive processes (Greene, 2014; but see Mikhail, 2011). Instead, it studies a set of psychological phenomena bound together by a common function. As I (Greene, 2013) and others (Frank, 1988; Gintis, 2005; Haidt, 2012) have argued, the core function of morality is to promote and sustain cooperation.” The model of adaptive commitment invites quite a different perspective on the definition of morality, especially when combined with the logic of partner choice and the demands of maintaining a moral identity. This model furnishes a unique rationale for non-consequentialist mechanisms in moral judgment and behavior. Moreover, it predicts that people will be motivated to signal their moral identity by adhering generally to moral norms, and will be motivated to identify such signals in others. It therefore favors a definition of the moral domain that is more squarely focused on a set of psychological mechanisms than on their underlying functional rationale (although certainly a coherent functional rationale exists). In short, it suggests a definition of morality centered precisely on the property of non-consequentialism. This would be a controversial definition. For instance, it has been forcefully argued that at least some human moral judgments are basically consequentialist at a process level, and that these mechanisms are to be normatively preferred (Greene, 2013). How might this tension be resolved? One obvious solution is definitional pluralism, admitting a functional definition of morality (including judgments and behaviors derived from domain-general processes) alongside a mechanistic definition (embodying the logic of non-consequentialist commitment). How different are these definitions, in their practical effect?
A long time ago, or in places far away – when social interactions were governed by kinship, reputation, and coalition – the mechanisms of adaptive non-consequentialism often constituted the most effective means of reaping the adaptive benefits of cooperation. There and then, functional and mechanistic definitions of morality mostly picked out the same features of our psychology and behavior. In the contemporary West, however, social interaction is often governed by formal institutions such as states, corporations, and clubs. These largely undermine the rationale for adaptive
non-consequentialism. I do not have to ask about the reputation of the taxi driver whom I hail, nor does she need to ask about mine, because institutional structures align each of our interests towards a successful exchange of money for service. Here and now, the psychological mechanisms that fulfill the functional definition of morality often diverge from those that fulfill its mechanistic definition.
Notes

1 The version of policy consequentialism advocated here is not that the agent always chooses the behavior that, ex post, turned out to be optimal. Rather, it is that the agent always chooses the behavior that, ex ante, she would rationally have believed to be optimal. Thus, an agent is a policy consequentialist if she places a bet on a coin coming up heads for the 19th time out of 20 flips, even if the coin happens to come up tails.
2 In the stylized diagram of Figure 15.1 there appears to be little advantage to this approach, but in fact the advantage is large if the set of possible outcomes of any particular action is large (i.e., for stochastic state transitions). An example is the decision to hit vs. stick in blackjack. Although only the two available actions are considered, hundreds of outcomes must be considered to compute the expected value of each action.
3 This adaptive model posits selection at the level of groups rather than individuals, but this distinction does not bear on the present analysis.
References

Anderson, John R. (1996). ACT: A simple theory of complex cognition. American Psychologist, 51(4), 355.
Aquino, Karl & Reed II, Americus. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83(6), 1423.
Baumard, Nicolas; André, Jean-Baptiste & Sperber, Dan. (2013). A mutualistic approach to morality: The evolution of fairness by partner choice. Behavioral and Brain Sciences, 36(1), 59–78.
Baumard, Nicolas & Sheskin, Mark. (2015). Partner choice and the evolution of a contractualist morality. In J. Decety & T. Wheatley (Eds.), The Moral Brain: A Multidisciplinary Perspective (pp. 35–48). Cambridge, MA: MIT Press.
Bjorklund, Fredrik; Haidt, Jonathan & Murphy, Scott. (2000). Moral dumbfounding: When intuition finds no reason. Lund Psychological Reports, 2, 1–23.
Botvinick, Matthew; Niv, Yael & Barto, Andrew C. (2009). Hierarchically organized behavior and its neural foundations: A reinforcement learning perspective. Cognition, 113(3), 262–280.
Braams, Barbara R.; Güroğlu, Berna; de Water, Erik; Meuwese, Rosa; Koolschijn, P. Cédric; Peper, Jiska S. & Crone, Eveline A. (2013). Reward-related neural responses are dependent on the beneficiary. Social Cognitive and Affective Neuroscience, 9, 1030–1037.
Buckner, Randy; Andrews-Hanna, Jessica R. & Schacter, Daniel L. (2008). The brain’s default network. Annals of the New York Academy of Sciences, 1124(1), 1–38.
Cooper, Jeffrey C.; Dunne, Simone; Furey, Teresa & O’Doherty, John P. (2012). Human dorsal striatum encodes prediction errors during observational learning of instrumental actions. Journal of Cognitive Neuroscience, 24(1), 106–118.
Critcher, Clayton R.; Inbar, Yoel & Pizarro, David A. (2013). How quick decisions illuminate moral character. Social Psychological and Personality Science, 4(3), 308–315.
Crockett, Molly J. (2013). Models of morality. Trends in Cognitive Sciences, 17(8), 363–366.
Cushman, Fiery. (2013).
Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17(3), 273–292. doi:10.1177/1088868313495594
Cushman, Fiery; Gray, Kurt; Gaffey, Allison & Mendes, Wendy Berry. (2012). Simulating murder: The aversion to harmful action. Emotion, 12(1), 2–7.
Cushman, Fiery; Young, Liane & Hauser, Marc. (2006). The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science, 17(12), 1082–1089.
Daw, Nathaniel & Shohamy, Daphna. (2008). The cognitive neuroscience of motivation and learning. Social Cognition, 26(5), 593–620.
Dickinson, Anthony; Balleine, Bernard; Watt, Andrew; Gonzalez, F. & Boakes, Robert A. (1995). Motivational control after extended instrumental training. Learning & Behavior, 23(2), 197–206.
Dolan, Ray J. & Dayan, Peter. (2013). Goals and habits in the brain. Neuron, 80(2), 312–325.
Everett, Jim; Pizarro, David & Crockett, Molly. (2016). Inference of trustworthiness from intuitive moral judgments. Journal of Experimental Psychology: General, 145(6), 772.
Frank, Robert H. (1988). Passion Within Reason: The Strategic Role of the Emotions. New York: Norton.
Gausel, Nicolay & Leach, Colin W. (2011). Concern for self-image and social image in the management of moral failure: Rethinking shame. European Journal of Social Psychology, 41(4), 468–478.
Glimcher, Paul. (2011). Understanding dopamine and reinforcement learning: The dopamine reward prediction error hypothesis. Proceedings of the National Academy of Sciences, 108(Supplement 3), 15647–15654.
Greene, Joshua. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology (Vol. 3; pp. 35–79). Cambridge, MA: MIT Press.
———. (2013). Moral Tribes: Emotion, Reason and the Gap between Us and Them. New York: The Penguin Press.
———. (2015). The rise of moral cognition. Cognition, 135, 39–42.
Greene, Joshua; Sommerville, Brian; Nystrom, Leigh; Darley, John & Cohen, Jonathan. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–2108.
Haidt, Jonathan. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.
———. (2008). Morality. Perspectives on Psychological Science, 3, 65–72.
———. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon.
Haidt, Jonathan; Koller, Silvia H. & Dias, Maria G. (1993). Affect, culture, and morality, or is it wrong to eat your dog?
Journal of Personality and Social Psychology, 65(4), 613–628.
Haidt, Jonathan & Kesebir, Selin. (2010). Morality. In S. Fiske, D. Gilbert, and G. Lindzey (Eds.), Handbook of Social Psychology, 5th Edition (pp. 797–832). Hoboken, NJ: Wiley.
Harbaugh, William T.; Mayr, Ulrich & Burghart, Daniel R. (2007). Neural responses to taxation and voluntary giving reveal motives for charitable donations. Science, 316(5831), 1622–1625.
Hare, Todd; Camerer, Colin; Knoepfle, Daniel; O’Doherty, John & Rangel, Antonio. (2010). Value computations in ventral medial prefrontal cortex during charitable decision making incorporate input from regions involved in social cognition. The Journal of Neuroscience, 30(2), 583–590.
Hauser, Marc. (2006). Moral Minds: How Nature Designed a Universal Sense of Right and Wrong. New York: Harper Collins.
Hauser, Marc; Cushman, Fiery; Young, Liane; Jin, Kang-Xing & Mikhail, John. (2007). A dissociation between moral judgment and justification. Mind and Language, 22(1), 1–21.
Henrich, Joseph; Boyd, Robert & Richerson, Peter. (2012). The puzzle of monogamous marriage. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1589), 657–669.
Henrich, Joseph; Ensminger, Jean; McElreath, Richard; Barr, Abigail; Barrett, Clark; Bolyanatz, Alexander . . . Ziker, John. (2010). Markets, religion, community size, and the evolution of fairness and punishment. Science, 327(5972), 1480. doi: 10.1126/science.1182238
Hoffman, Moshe; Yoeli, Erez & Nowak, Martin. (2015). Cooperate without looking: Why we care what people think and not just what they do. Proceedings of the National Academy of Sciences, 112(6), 1727–1732.
Janowski, Vanessa; Camerer, Colin & Rangel, Antonio. (2013). Empathic choice involves vmPFC value signals that are modulated by social processing implemented in IPL. Social Cognitive and Affective Neuroscience, 8(2), 201–208.
Jordan, Jillian J.; Hoffman, Moshe; Bloom, Paul & Rand, David G. (2016). Third-party punishment as a costly signal of trustworthiness.
Nature, 530(7591), 473–476. Kahneman, Daniel. (2011). Thinking, Fast and Slow. New York: Farrar Straus & Giroux. Kvaran, Trevor & Sanfey, Alan. (2010). Toward an integrated neuroscience of morality: The contribution of neuroeconomics to moral cognition. Topics in Cognitive Science, 2(3), 579–595. Lamm, Claus; Decety, Jean & Singer, Tania. (2011). Meta-analytic evidence for common and distinct neural networks associated with directly experienced pain and empathy for pain. Neuroimage, 54(3), 2492–2502. Lieberman, Debra; Tooby, John & Cosmides, Leda. (2007). The architecture of human kin detection. Nature, 445(7129), 727–731.
Fiery Cushman
Littman, Michael & Ackley, David. (1991). Adaptation in constant utility non-stationary environments. Paper presented at the ICGA.
Martin, Justin & Cushman, Fiery. (2015). To punish or to leave: Distinct cognitive processes underlie partner control and partner choice behaviors. PLoS ONE, 10(4), e0125193.
Maynard Smith, John. (1982). Evolution and the Theory of Games. Cambridge: Cambridge University Press.
Mikhail, John. (2011). Elements of Moral Cognition: Rawls’ Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment. Cambridge: Cambridge University Press.
Mobbs, Dean; Yu, Rongjun; Meyer, Marcel; Passamonti, Luca; Seymour, Ben; Calder, Andrew . . . Dalgleish, Tim. (2009). A key role for similarity in vicarious reward. Science, 324(5929), 900.
Montague, Read; Dayan, Peter & Sejnowski, Terrence. (1996). A framework for mesencephalic dopamine systems based on predictive Hebbian learning. The Journal of Neuroscience, 16(5), 1936–1947.
Morris, Adam; McGlashan, James; Littman, Michael & Cushman, Fiery. (in prep). Flexible theft and resolute punishment.
Nisbett, Richard & Cohen, Dov. (1996). Culture of Honor: The Psychology of Violence in the South. Boulder: Westview Press.
Noë, Ronald & Hammerstein, Peter. (1994). Biological markets: Supply and demand determine the effect of partner choice in cooperation, mutualism and mating. Behavioral Ecology and Sociobiology, 35(1), 1–11.
Nowak, Martin. (2006). Five rules for the evolution of cooperation. Science, 314, 1560–1563.
Pinker, Steven. (2011). The Better Angels of Our Nature: Why Violence Has Declined. Penguin Books.
Pizarro, David; Uhlmann, Eric & Salovey, Peter. (2003). Asymmetry in judgments of moral blame and praise: The role of perceived metadesires. Psychological Science, 14(3), 267–272.
Rachlin, Howard. (2002). Altruism and self-control. Behavioral and Brain Sciences, 25(2), 239–296.
Rand, David & Bear, Adam. (2016). Intuition, deliberation, and the evolution of cooperation. Proceedings of the National Academy of Sciences, 113(4), 936–941.
Rand, David; Peysakhovich, Alexander; Kraft-Todd, Gordon; Newman, George; Wurzbacher, Owen; Nowak, Martin & Greene, Joshua. (2014). Social heuristics shape intuitive cooperation. Nature Communications, 5, 3677.
Roberts, Gilbert. (1998). Competitive altruism: From reciprocity to the handicap principle. Proceedings of the Royal Society of London. Series B: Biological Sciences, 265(1394), 427–431.
Rozin, Paul. (1997). Moralization. In A. Brandt & P. Rozin (Eds.), Morality and Health (pp. 379–401). New York: Routledge.
Ruff, Christian & Fehr, Ernst. (2014). The neurobiology of rewards and values in social decision making. Nature Reviews Neuroscience, 15(8), 549–562.
Schultz, Wolfram; Dayan, Peter & Montague, Read. (1997). A neural substrate of prediction and reward. Science, 275, 1593–1599.
Shenhav, Amitai & Greene, Joshua. (2010). Moral judgments recruit domain-general valuation mechanisms to integrate representations of probability and magnitude. Neuron, 67(4), 667–677.
Slovic, Paul. (2007). “If I look at the mass I will never act”: Psychic numbing and genocide. Judgment and Decision Making, 2(2), 79–95.
Sperber, Daniel & Baumard, Nicholas. (2012). Moral reputation: An evolutionary and cognitive perspective. Mind & Language, 27(5), 495–518.
Strohminger, Nina & Nichols, Shaun. (2014). The essential moral self. Cognition, 131(1), 159–171.
———. (2015). Neurodegeneration and identity. Psychological Science, 26, 1469–1479.
Sutton, Richard & Barto, Andrew. (1998). Introduction to Reinforcement Learning. Cambridge, MA: MIT Press.
Sutton, Richard; Precup, Doina & Singh, Satinder. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1), 181–211.
Tannenbaum, David; Uhlmann, Eric & Diermeier, Daniel. (2011). Moral signals, public outrage, and immaterial harms. Journal of Experimental Social Psychology, 47(6), 1249–1254.
Thorndike, Edward. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Monographs: General and Applied, 2(4), 1–109.
Uhlmann, Eric; Pizarro, David & Diermeier, Daniel. (2015). A person-centered approach to moral judgment. Perspectives on Psychological Science, 10(1), 72–81.
Uhlmann, Eric; Zhu, Luke & Tannenbaum, David. (2013). When it takes a bad person to do the right thing. Cognition, 126(2), 326–334.
Westermarck, Edward. (1891). The History of Human Marriage. London: Macmillan.
Wojciszke, Bogdan. (2005). Morality and competence in person- and self-perception. European Review of Social Psychology, 16(1), 155–188.
Young, Liane & Dungan, James. (2012). Where in the brain is morality? Everywhere and maybe nowhere. Social Neuroscience, 7(1), 1–10.
Zaki, Jamil & Mitchell, Jason. (2011). Equitable decision making is associated with neural markers of intrinsic value. Proceedings of the National Academy of Sciences, 108(49), 19761–19766.
16
EMOTIONAL PROCESSING IN INDIVIDUAL AND SOCIAL RECALIBRATION
Bryce Huebner and Trip Glazer
Herman Melville (1853) presents a bleak world in Bartleby, the Scrivener: A Story of Wall Street. His narrator, an anonymous Wall Street lawyer, describes his interactions with three scriveners that he employs to copy documents in his small legal office: Nippers, Turkey, and the titular Bartleby. At first, Bartleby seems like an eager employee. But when he falls behind on his work, and the narrator asks him to work harder, Bartleby responds, “I would prefer not to.” As the story proceeds, Bartleby declines into deep despair, answering each subsequent directive with a variant of this same phrase. He prefers not to vacate the office, and he eventually starves to death in a prison cell, having preferred not to eat. Late in the story, the narrator considers the possibility that Bartleby had an innate propensity toward hopelessness, which was heightened by working in a dead-letter office. Supposing that Bartleby’s despair was triggered by his unwillingness to meet the demands of the modern workplace, we might interpret his despair as an internal nudge toward social conformity. Other aspects of the story suggest that Bartleby’s despair was part of a clever, though ultimately unsuccessful, Machiavellian bargaining strategy. The narrator is flustered by Bartleby’s hopelessness and feels compelled to accommodate Bartleby’s preferences. When Bartleby prefers not to work, the narrator gets others to do his work for him. When Bartleby prefers not to leave the office upon being fired, the narrator relocates his office. Bartleby meets an unfortunate end, but he is never forced to act against his own preferences. Both of these interpretations focus on Bartleby’s failure to live up to the demands of his employer. But Bartleby may also be a story about Wall Street. Melville was troubled by the increasing class polarization and consolidation of property ownership in Manhattan (Foley 2000). 
John Jacob Astor, whom Melville mentions in the opening lines of Bartleby, had purchased much of the property in lower Manhattan; and a public spectacle had emerged around the construction of Trinity Church, a location that plays a prominent role in the story. The Episcopal diocese closed its missions, leaving many people with nowhere to live, while simultaneously providing low-rent leases to a few wealthy New Yorkers – including Astor. Perhaps Astor owned the law office described in Bartleby, or perhaps it was a low-rent lease from the Episcopal Church, but in either case it could represent this troubling situation. If so, we might treat Bartleby’s despair as alerting him to problems with his social and material environment. Turkey and Nippers get drunk to cope with their low wages and monotonous jobs;
and Bartleby engages in a practice familiar to urban radicals (including those in 19th-century Manhattan): he squats! On this reading, his refrain, “I would prefer not to”, is an act of passive resistance that reveals the absurdity of the narrator’s demands (Deleuze 1997). And the despair that pervades the story is a sign of a broken system, not evidence of a deviant psychology. This doesn’t change Bartleby’s misery. But it can help us to re-conceptualize it. In this chapter, we explore three social functions of emotion, which parallel these three interpretations of Bartleby. We argue that emotions can serve as commitment devices, which nudge us toward social conformity and thereby increase the likelihood of ongoing cooperation. We argue that emotions can play a role in Machiavellian strategies, which help us get away with norm violations. And we argue that emotions can motivate social recalibration, by alerting us to systemic social failures. Then, in the second half of the chapter, we argue that emotions guide behavior in all three ways by attuning us to our social environment through a process of error-driven learning. We suggest that their most basic social function is to reveal tensions between individuals and social norms, and that because they are socially scaffolded, they leave open multiple strategies for resolving such tensions. Specifically, we argue that differences in conceptualization, as well as differences in circumstance, can generate emotions that nudge us toward cooperation, lead us to adopt Machiavellian strategies, or motivate us to engage in forms of social recalibration. Bartleby’s despair signaled a mismatch between his internal dispositions and the expectations of Wall Street, and any of these three strategies could have moderated his misery.
The social functions of emotions
For the purposes of this chapter, we adopt a social functionalist perspective on emotion (Keltner & Haidt 1999; Keltner & Haidt 2001; Keltner, Haidt, & Shiota 2006). We treat emotions as coordinated changes in physiology, cognition, and behavior, which evolved because they tend to enhance the fitness of organisms (Nesse 1990). And while some emotions evolved to serve non-social functions, we focus on those that help organisms navigate the distinctive challenges of social living (Keltner & Haidt 2001) – problems of reproduction (e.g. finding and keeping mates, rearing offspring), problems of cooperation (e.g. rewarding altruism, punishing exploitation), and problems of group organization (e.g. signaling, conferring, and withdrawing social statuses). We also adopt an informational perspective, according to which emotions often provide those who experience them with situation-relevant information. They attune us to aspects of the world that demand our immediate attention, and they keep us apprised of our progress toward reaching our goals (Frijda 1988, 354; Clore, Gasper, & Garvin 2001; Schwarz & Clore 2007). Emotions also provide bystanders with situation-relevant information through their outward expressions. The experience of guilt informs an individual that they have violated a norm, prompting reform; the expression of guilt thereby informs others that the individual is troubled by this violation, which may lead them to forgive the wrongdoing. Consequently, we examine both the intrapersonal and interpersonal functions of emotions, focusing on the flow of action-guiding information within and between social actors (Keltner & Haidt 1999).
In this first section, we review three social functions of emotions: they can function as commitment devices, which motivate cooperation and other pro-social behavior (Frank 1988; Greene 2013; Sterelny 2012); they can play a role in Machiavellian strategies, which help us to get away with norm violations or to regain the trust of others if we are caught (Griffiths 2002; Griffiths & Scarantino 2009); and they can serve as tools for social recalibration, which motivate us to reform problematic social arrangements (Scheman 1980; Nietzsche 1998; Nussbaum 2013). Although these functions appear to be heterogeneous, we argue in the sequel that they are variations on a single theme. In each case, emotions convey information about conformity or
nonconformity with social norms, and motivate adaptive responses. The functions differ only with regard to the target of recalibration: individuals, bystanders, and society at large.
Emotions as commitment devices
Individuals can often benefit more by working together than they would by working alone, making cooperation rational. But as long as others are already cooperating, individuals can benefit the most by cheating, or by enjoying the benefits of cooperation without putting in their fair share of the work. Animals often find ways to take advantage of cooperation, cheating and defecting when they can get away with it, generating an anti-cooperative arms race (Dawkins 1976). Withholding cooperation thus becomes a dominant social strategy, making it unreasonable to trust others when the going gets tough. A precommitment to cooperation can solve this problem (Schelling 1966). If both parties can lock themselves into cooperation, such that defection carries a higher cost, then cooperation becomes the dominant strategy. And Robert Frank (1988) argues that emotions like guilt and indignation may have evolved in early hominin societies as precommitment devices, to prevent cheats and shirks from taking advantage of others. He contends that emotions can provide us with the motivation to follow through on our commitments when defection would be rationally preferable; and he suggests that emotional expressions are costly signals that allow us to garner trust from others. Guilt generates a disincentive against cheating, and since we have good reason to believe others will feel guilty when they defect, expressions of guilt can bolster trust in social situations. Indignation provides an incentive to punish wrongdoing, even when doing so seems rash. Since bullies will thrive when they aren’t punished, social anger may have evolved to motivate punishment in ways that increase cooperation and conformity in the long run (Boehm 2012).
Frank even suggests that love may have evolved to stabilize long-term partnerships. While it would be imprudent to stay with someone who would leave whenever a better option came along, he argues that love increases the likelihood that a partner will suffer if they lose you. Since loving partners tend to remain faithful, love weakens the threat of defection, making long-term relationships a more worthwhile investment. Joshua Greene (2013) develops a similar suggestion, though his claims are tied more closely to research on psychological mechanisms. Like Frank, he holds that some emotions evolved as solutions to commitment problems that arise in small groups, and he holds that these emotions generate reflexive action tendencies that foster cooperation and conformity within groups. On his approach, these emotions are “designed to promote cooperation among otherwise selfish individuals” (Greene 2013, 61–62). But unlike Frank, Greene acknowledges that these reactions are attuned to the norms of small group life, and they can generate disagreements between groups, as well as disagreements over how a group should be organized. Greene contends that we must rely on reflective and rational processes when commitment problems arise in such contexts – as they have for much of human history. This is a core commitment of this commonly held view of emotion: our emotions come up short when we face problematic social situations; so we must plan, deliberate, discuss, and decide how to move forward when conflicts arise between groups, and when we want to improve the structures of the groups that we inhabit. More reflective processes depend on mechanisms that are slow, computationally costly, and often open to ideological distortion. So they are only useful where we have the time and resources to think clearly – that is, when we are not dominated by emotion.
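The precommitment argument has the structure of a one-shot prisoner's dilemma in which an emotional cost changes which strategy dominates. The sketch below is our own illustration, not a model drawn from Frank or Greene: the payoff values and the `guilt` parameter are entirely hypothetical, chosen only to make the logic explicit.

```python
# Hypothetical payoffs for a one-shot prisoner's dilemma, with an assumed
# emotional cost ('guilt') subtracted whenever a player defects on a
# cooperative partner. All numbers are illustrative.

def payoff(my_move, their_move, guilt=0.0):
    """Return the row player's payoff; T > R > P > S is the standard ordering."""
    T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff
    if my_move == "C":
        return R if their_move == "C" else S
    # Defecting: guilt penalizes exploiting a cooperator.
    return (T - guilt) if their_move == "C" else P

def best_response(their_move, guilt=0.0):
    """The move that maximizes payoff against a given opponent move."""
    return max(["C", "D"], key=lambda m: payoff(m, their_move, guilt))

# Without guilt, defection dominates: the best reply to cooperation is "D".
assert best_response("C", guilt=0.0) == "D"
# With a guilt cost exceeding the temptation premium (T - R = 2), cooperating
# becomes the best reply to cooperation, so mutual cooperation is stable.
assert best_response("C", guilt=3.0) == "C"
```

On these assumed numbers, a guilt-prone agent can credibly commit to cooperating once the emotional cost outweighs the gain from cheating, which is the point of treating emotions as commitment devices.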
We should accept the claim that commitment problems played a significant role in the social lives of early hominins, and we bear the mark of these evolutionary pressures. But it’s hard to see why emotions should continue to play precisely the same roles that they played
in the Pleistocene. As Kim Sterelny (2016) argues, while guilt and indignation may have been necessary to eliminate cheats and bullies at some points in our evolutionary history, we inhabit a world that makes cooperation easier and defection more costly. So there is reason to believe that many emotions may serve other purposes (Rozin, Haidt, & McCauley 2009; Griffiths 2002). This is not to deny that we experience emotions “that are motivationally powerful, emotions that are triggered by perceived violations of trust and fairness, emotions whose motivational saliences are relatively insensitive to utilitarian calculation, emotions whose occurrence are easily recognized and difficult to fake” (Sterelny 2012, 109). The changes in our social world over the past 75,000 years have not made norm transgressions less aversive or norm conformity less satisfying, and we are still motivated to punish norm transgressions in our current evolutionary niche. In general, we are happier to follow the rules than to break them (Klucharev et al. 2009, 2011; Milgram & Sabini 1978; Prinz 2006), and emotions play an important role in amplifying the salience of norm transgressions and enhancing the benefits of norm conformity. So even if emotions are no longer necessary to solve commitment problems, it is important to remember that emotional lessons are easily learned. Guilt can provide us with a painful reminder that we ought not to cheat. Gratitude can provide us with a pleasurable reminder that we should cooperate. And despair can motivate us to re-think our relationship to the social world. Where things are going well, we tend to increase social effort; but dampened motivation becomes adaptive when it helps to “inhibit dangerous or wasteful actions in situations characterized by committed pursuit of an unreachable goal, temptations to challenge authority, insufficient internal reserves to allow action without damage, or lack of a viable life strategy” (Nesse 2005, 18).
Put differently, despair may arise in response to noncompliance, make our actions salient and painful, and provide us with a nudge back toward social conformity.
Emotions as Machiavellian strategies
Suppose that some emotions evolved as solutions to commitment problems and continue to enhance the salience of norm conformity. It might seem clear that their social function is to lead us to accept the norms that govern our society or our immediate social group. Indeed, this seems to be the suggestion advanced by Frank and Greene, though in slightly different ways. Emotional alarm bells alert us to the deviance of particular behaviors, and motivate re-alignment with an existing normative order. This claim should be familiar to philosophers, who often note that reactive attitudes play a critical role in structuring the normative domain (Strawson 1962; Watson 2008). And we concede that emotions often facilitate living and acting together, in ways that allow us to take advantage of the significant benefits of group life. But this isn’t the end of the story. While emotions often lead us toward conformity, they do not universally lead to cooperation (Sterelny 2012). As Paul Griffiths (2002) and Andrea Scarantino (Griffiths & Scarantino 2009) argue, this approach to the social function of emotions needs to be amended. Building on the work of Jean-Paul Sartre (1939), Griffiths and Scarantino argue that emotions often serve a Machiavellian function. Instead of fostering a drive toward conformity and cooperation, they sometimes allow us to juke the system for our own advantage. Even granting that some emotions initially evolved to weed out cheats and shirks, emotions can also help us become better cheats and more sophisticated shirks, especially once a set of cooperative practices is in place.
In our current evolutionary context, where most people are cooperative, individuals can sometimes flourish by expressing emotions that allow them to gain fitness benefits they would not otherwise have access to. And this introduces a more local variant of the anti-cooperative arms race that is common among social animals.
Within a cooperative enterprise, expressions of guilt might secure forgiveness from others when we are caught transgressing social norms. As Frank notes, if someone wrongs you but appears to be sorry, it is easier to forgive them and to cooperate with them in the future. But if enough expressions of guilt have been followed by cooperation in the past, and if the controlled expression of guilt isn’t too costly to cultivate, some people might be able to use this expression to secure re-acceptance into a collaborative community, thereby continuing to garner the benefits of social life. This is not to deny that guilt may have served a more cooperative role in the past; but in a world designed to foster cooperation, expressions of guilt may serve as components of a Machiavellian strategy for improved cheating and shirking (Griffiths 2002). Similarly, pace Frank, Griffiths suggests that romantic love can help people get away with adultery in committed long-term relationships. Having been caught in an adulterous act, it may be possible to plead (with varying levels of success), “I am so sorry! I was overcome by emotion! I still love you!” And where this works, it may allow for an increase in reproductive fitness without a corresponding loss. So perhaps emotions do not provide unambiguous guidance in line with dominant social norms, and perhaps they have socially significant functions that outstrip their role as precommitment strategies. Some emotions may alert us to possibilities for improving our own situation, undergirding forms of Machiavellian intelligence. And their expression might serve as manipulative signals, which lead others to believe that we are more cooperative than we really are.
To do this, emotions must still alert us to our deviations from social norms, but they must do so in a way that doesn’t directly motivate conformity with an existing normative order. The tension between individual dispositions and social expectations is still present, but the individual copes with this tension by prompting others to recalibrate their perceptions, manipulating them into thinking that the tension has been resolved.
Emotions as motivation for social recalibration
Thus far, we have suggested that emotions can sometimes sustain cooperation and that they can sometimes promote manipulation. But emotions also seem to have another important social function. Philosophers have often noted that emotions can help us identify social arrangements that hinder individual flourishing, and motivate us to reform those arrangements. However, this suggestion has not yet been explored from a social functionalist perspective. Consider two influential accounts of emotions that play such a role:
• In the Genealogy of Morality, Nietzsche (1887/1998) argues that feelings of ressentiment played an important role in the emergence of Judeo-Christian morality. He claims that people resented their mistreatment at the hands of more powerful leaders; but instead of motivating them to punish those leaders, using resources available within the existing normative order (which couldn’t have been accomplished successfully), their resentment motivated them to impose a new normative order that demonized the strong and valorized the weak. By reforming the norms of their society, they secured a novel form of flourishing. Crucially, for the weak, resentment motivated a creative imposition of a novel normative order, and not a re-alignment with an existing normative order.
• In “Anger and the Politics of Naming”, Naomi Scheman (1980) describes Alice’s experience of strong negative emotions with regard to her role as a housewife.
She believes that she feels guilty, and takes this to signal that she needs to accept and embrace her social role. But after attending a feminist consciousness-raising group, Alice learns to perceive her emotion as anger. She discovers that she was angry all along, because she was relegated to
a subordinate social role. By learning to interpret her experience as anger, she came to see that her emotion wasn’t nudging her back into her designated social role, but motivating her to protest it. Her emotion revealed a problem with existing social expectations, not a problem with her own psychology. This genealogical account of morality and this feminist account of social injustice suggest that emotions can sometimes motivate forms of social resistance and social imagination, instead of just motivating social conformity or individual resistance. But how is this possible? On the one hand, we might suppose that some emotions evolved to motivate social (and not just individual) recalibration, and that they reveal situations where social disruptions are likely to be beneficial to individual interests. But there are few similarities between the situations in which it would be reasonable to resist an existing normative order. So it is unlikely that such capacities would have evolved and persisted in social animals. As Greene argues, socially relevant emotions do not seem to play the psychological roles that would be required for prospective reasoning and planning for the future (Greene 2013). On the other hand, complex normative judgments could guide forms of political resistance; but they are unlikely to be targets of biological or social evolution. So while it is clear that evolved emotional mechanisms could play an important role in the conservative and cooperative aspects of human cognition, it is substantially less clear how they play a role in the (perhaps rare) cases where people attempt to reconfigure their environments to better suit their needs and interests. Nonetheless, some emotions do seem to motivate resistance. Our aim in the next section is to explain how this can be the case, and we do this by focusing on the role of learning in grounding the social function of emotions.
Re-conceptualizing the social functions of emotion
So far, we have assumed that some emotions evolved as capacities for regulating social life, that emotions provide situation-relevant information about our world, and that emotions can evoke socially significant action tendencies – approaching, fleeing and excluding, attacking, and ignoring (Fischer & Van Kleef 2010, 210). But notice that none of these assumptions entails that emotions must specify determinate responses to particular social situations. Even if emotions alert us to tensions between our current actions and social expectations, and even if they motivate us to resolve these tensions, they may not specify precise or situation-invariant behavioral responses. Human cognition is flexible, and our affective capacities play a critical role in our ability to adapt to different social and ecological niches. To skillfully cope with environments that we pass through, we must learn what possibilities those environments afford. And this process relies on the dynamic integration of affectively valenced and action-guiding neural processes (Barrett 2014, 292). In the remainder of this chapter, we argue that variations in our social and material situation can evoke emotions that will motivate individuals to re-establish cooperation, to adopt Machiavellian strategies, or to strive for social change. But to understand how emotions can serve all of these functions, we need to dig deeper into the nature of emotional learning. It is common to situate discussions of emotion within a particular theory, such as Basic Emotions Theory, Appraisal Theory, or Psychological Constructivism. But our focus on learning suggests a different approach. We begin from a well-supported theory of the role of learning in human cognition.
The details emerge below, but we propose to treat the brain as a hierarchically organized predictive machine (Clark 2015; Colombo this volume), and we argue that this view is compatible with an account of biological preparedness (Cummins & Cummins 1999; Seligman 1971) that provides a plausible foundation for emotional processing. We
sometimes appeal to research that has been presented in support of a particular theory of emotion; but we don’t intend to tie our own view to any of these horses. For example, we follow Psychological Constructivists in arguing that while approach and avoidance motivations are a typical part of our evolutionary inheritance, emotions with precise social content require the integration of sensory input, background beliefs, and biologically grounded expectations with self-referential information to yield a unified representation of what the world affords (cf. Sellars 1978; Clark 2015). But we also acknowledge that sophisticated defenses of Basic Emotions and Appraisal Theories can accommodate the learning and flexibility we are most interested in (Scarantino & Griffiths 2011; Scarantino 2012, 2014). Similarly, our discussion of the affordances generated by emotional expressions draws on work carried out by Appraisal Theorists (e.g. Van Kleef 2009; Van Kleef, de Dreu, & Manstead 2010). But we believe these claims are compatible with Psychological Constructivist theories of emotion. Over the course of the next two sections, we hope to show that our approach is consistent with what we know about evolution, and about the operation of cognitive systems more broadly. And we show that it is not implausible to draw all of these resources together to yield an integrated approach to the social functions of emotions.
Emotions as forms of social learning
Over the past two decades, research across the cognitive sciences has converged on the hypothesis that the brain is a predictive machine, which integrates the representations produced by numerous error-driven learning systems operating in parallel (Clark 2015; Colombo this volume; Hohwy 2013; Rescorla 1988). Earlier systems process more precise and concrete sources of information, while later systems process information that is more abstract and categorical.
Although earlier systems provide inputs to later systems, this network of processes does not construct a representation of the world through a feed-forward process. Instead, each system attempts to predict what the incoming data will be, given its model of the world; each system generates an error-signal when these predictions are mistaken; and each system revises its future predictions to minimize such errors. Subjective experience, including emotional experience, is produced by a recurrent cascade of these ‘top-down’ predictions, based on what each system ‘knows’ about the world and about the current situation (Dennett 2015; Clark 2015). There is an intriguing twist on this story in the domain of affective processing. Like all other animals, we are biologically prepared to respond to motivationally salient features of the world. Pavlovian capacities allow us to rapidly learn which things we should approach and which we should avoid. Rats rapidly acquire a taste aversion after exposure to a mildly toxic substance in sweetened water; but it takes a long time for them to learn to avoid sweetened water if they are reinforced with moderate electric shocks (Garcia & Koelling 1966). Likewise, people rapidly acquire aversions to biologically significant threats (e.g. predators, unfamiliar places, and outgroup members), but sustained reinforcement is necessary for them to learn that an arbitrary cue indicates danger (Seligman & Hager 1972). Most humans also attune to the distinctive challenges of group life, in part because conformity feels rewarding while deviance feels like an error to be corrected (Milgram & Sabini 1978; Milgram et al. 1986). Moreover, reflexive inclinations to behave pro-socially are species-typical (Zaki & Mitchell 2013); and the suffering of others is motivationally significant (Crockett et al. 2014). 
Finally, humans possess evolved tendencies to punish cheats and bullies, even at a cost to themselves, and even when it is unlikely that they will interact with a particular partner again (Bowles & Gintis 2011; Henrich et al. 2005; Sterelny 2016). These forms of biological preparedness provide the shared and basic foundation upon which all affective learning takes place.
Emotional processing

Bryce Huebner and Trip Glazer
This being the case, the social functionalist is right to think that our affective responses have been shaped by biological and social evolution. But these basic affective responses (specified in terms of particular approach and avoid motivations, as well as levels of arousal) do not exhaust our emotional lives. While these forms of core affect constrain the range of possible affective states we can come to acquire, and while biological preparedness does make it easier for us to learn about what the world affords, each of these affective responses plays multiple roles in many emotional states. In a typical situation, multiple expectations will be computed in parallel to guide situation-relevant behavior. And as predictive approaches to cognition suggest, even high-level conceptual information can play a role in structuring how affectively significant information shows up to self-conscious agents. So, for affective inclinations to guide socially-sensitive forms of behavior, three things must happen:

1 Some form of valuational learning must occur, which shapes our initial drives in accordance with the social phenomena we have frequently encountered. Like all mammals, we possess a network of evaluative systems that allow us to achieve this kind of affective attunement (Huebner 2012). Dopaminergic neurons in the basal ganglia generate predictions about the value and distribution of primary rewards, updating these predictions when rewards are better or worse than expected (Schultz 1998). Over time, if the value and distribution of rewards remains fairly constant, such predictions converge on accurate representations of the evaluative landscape. But since the world is complex and unpredictable, brains must also track fluctuations in the value of rewards, monitor changes in the probability of gains and losses, and adjust subjective estimates of risk and uncertainty (Montague et al. 2012; Adolphs 2010); and human brains attune us to local norms, treating conformity as rewarding and deviance as an error to be corrected (Klucharev et al. 2009, 2011). All of this occurs with little effort; and in each case, we rely on error-driven learning systems, whose content is made more precise and more accurate by the top-down cascade of expectations that guides goal-directed behavior.

2 Just as importantly, emotional processing must be action-oriented, and the information encoded by evaluative systems must be packaged to guide socially-sensitive forms of behavior (Cushman 2013). But crucially, the link between evaluative learning and behavior does not involve an additional step in processing. As we pass through different environments, we construct evaluative maps that allow us to place particular actions in evaluative contexts. Connections between systems engaged in action-planning and systems dedicated to evaluative learning allow us to generate situation-relevant action-outcome representations.

3 Top-level expectations, and the downward cascade of expectations they produce, must be able to resolve ambiguities in the data that are present at lower levels of processing. In Scheman’s example, for instance, Alice reflexively processes information relevant to her social position and evaluative state, but this information underdetermines what her current emotional state is. For her experience to crystallize as an emotion, she must categorize her situation as one that affords guilt or one that affords anger. This allows her to resolve the ambiguity that is inherent in her experience (but there is nothing that requires resolving it in one way rather than another). Likewise, the Machiavellian strategists must represent their social situation as one where cooperation is expected, and adjust their explicit expectations to experience their emotional reaction as evidence that a situation affords the opportunity to juke the system.
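The reward-prediction-error learning invoked in the first of these steps (in the spirit of Schultz 1998) can be sketched with a simple delta rule. The learning rate and reward values below are our own illustrative assumptions, not empirical parameters.

```python
# A hedged sketch of reward-prediction-error learning: the value
# estimate is updated only when rewards are better or worse than
# expected, and it re-converges when the evaluative landscape shifts.

def value_update(value, reward, alpha=0.2):
    rpe = reward - value        # reward prediction error
    return value + alpha * rpe  # better than expected raises the estimate

# While rewards stay constant, the prediction converges on the true value.
value = 0.0
for _ in range(100):
    value = value_update(value, reward=1.0)
stable_estimate = value

# When the world changes, the same rule tracks the fluctuation in the
# value of the reward, re-attuning the estimate.
for _ in range(100):
    value = value_update(value, reward=0.2)
shifted_estimate = value
```

The point of the sketch is that attunement and re-attunement fall out of a single error-minimizing update, with no additional mechanism needed for tracking change.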
None of this requires robust self-consciousness, and much of the relevant processing occurs sub-personally. Self-consciousness can play a critical role in top-level processing, as it does after consciousness raising. But at minimum,
each adjustment requires action-oriented processing that places an agent in social space, and informs them about what the world affords. Drawing this all together, we should see the brain as a system that is continually producing representations of what we did in similar situations, how we acted, and how we felt about the possibilities that the world affords. This information plays a critical role in helping us guess what will happen next, not just in the world, but in our thoughts, feelings, and perceptions. While it is easy to assume that we first see and then believe, according to this picture we see what we expect; and what we expect is based largely on what we feel (Barrett & Simmons 2015). To make this point concrete, suppose a person is walking through a neighborhood in southeast DC. They become more aware of their surroundings and more alert, because the last time they were there, someone told them that it was a dangerous neighborhood. Racialized expectations about danger are also brought online, as they have strong associations between race and violence. So as they walk, they reflexively generate expectations about the actions they will have to take in the near future. As a result, their heart rate and breathing quicken, and motor-systems are prepared for fighting-or-fleeing. In this process, racialized expectations and mistaken beliefs about violent crimes play a critical role in the production of action tendencies, as well as the initiation of search strategies that are guided by these expectations. Because this person expects to see a gun, even a small amount of gun-consistent information (embodied in a cell-phone) can resonate with these action-oriented motivations; as a result, affectively valenced expectations can shape perception, making it more likely that this person will see a cell-phone as a gun, and react fearfully (Barrett 2015).
In this case, expectations impose structure on experience at multiple levels of processing: the awareness of potential threats is increased, negative valence is generated, the accessibility of associations between race and violence is enhanced, and the construction of action plans for addressing potential threats is triggered (cf. Wilson-Mendenhall et al. 2011). In this context, embodied expectations that evoke feelings of tension are also rapidly recoded as experiences of fear, which generate the further motivations and expectations that collectively guide ongoing behavior. There is a great deal of complexity to this story. But the most salient thing to notice is that the structure of incoming data will always underdetermine the possibilities that the world affords in this situation. Until expectations are imposed, and until fear-based conceptualizations are triggered, visual information cannot reveal a threat (Black males with cell phones are not dangerous). We believe that most real-world situations are affectively ambiguous in a similar way, even if the social relevance of this fact is not always as obvious. Ambiguities in our current state, as well as ambiguities in the social structures of our world, are resolved in accordance with our expectations, and this shifts our perceptions of the world. Once ambiguities are resolved, approach- and avoidance-responses that are generated by Pavlovian and model-free learning systems can guide behavior.1 But situated emotional experiences depend on a complex network of predictive processes. People rapidly compute evaluative representations of their current situation (LeDoux 2007; Montague 2006). But conceptual knowledge, scripts, and schemata also play a significant role in the production of emotional experience. Put differently, emotional representations are computed on-the-fly, integrating internal and external sources of information in ways that are sensitive to current task demands as well as long-term goals.
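The way prior expectations can resolve ambiguous lower-level data, as in the cell-phone/gun case above, can be illustrated with a toy Bayesian calculation. The priors and likelihoods below are entirely our own invention, chosen only to make the structural point vivid.

```python
# A toy Bayesian illustration of expectation-driven disambiguation:
# the same weakly gun-consistent cue is interpreted very differently
# depending on the perceiver's prior expectation of danger.

def posterior_gun(prior_gun, likelihood_gun, likelihood_phone):
    """P(gun | ambiguous cue), over the two hypotheses gun vs. phone."""
    numerator = likelihood_gun * prior_gun
    denominator = numerator + likelihood_phone * (1.0 - prior_gun)
    return numerator / denominator

# The cue is only weakly gun-consistent (likelihood 0.6 vs. 0.5), but
# the posterior depends dramatically on the prior:
relaxed = posterior_gun(prior_gun=0.01, likelihood_gun=0.6, likelihood_phone=0.5)
fearful = posterior_gun(prior_gun=0.50, likelihood_gun=0.6, likelihood_phone=0.5)
```

Under a low prior the cue barely moves the perceiver; under a fear-laden prior the very same cue makes the gun hypothesis dominant, which is the sense in which incoming data underdetermine what is perceived.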
As a result, classes of emotions cluster as family-resemblance kinds, bound together with emotional labels but varying quite widely within these classes (Barrett 2014). Presumably, this is why neuroimaging results routinely reveal that capacities as diverse as representing bodily states, tracking the salience of events, remembering past situations, shifting attention, perceiving, and conceptualizing are
operative in the production of emotional experiences (Wilson-Mendenhall et al. 2011, 2013). And this is probably the reason why attempts to localize emotional processing in specific neural circuits, and to articulate boundaries around the experience of particular emotional states, routinely fail to reveal anything like stable natural kinds (Barrett, Wilson-Mendenhall, & Barsalou 2015; Griffiths 1997; Lindquist et al. 2012). To be clear, we do not intend to deny that recognizing potential threats is evolutionarily advantageous. Nor do we intend to deny that evolved responses to threats (e.g. fleeing, fighting, and freezing) might arise reflexively for some evolutionarily stable phenomena. But ‘fear’, as a high-level representation that guides behavior in a diverse range of situations, has a variety of looks and feels, suggesting that it may not be a unified state:

When you fear a flying cockroach, you might grab a magazine and swat it; when you fear disappointing a loved one, you might think of other ways to make them feel good about you; when you fear a mysterious noise late at night, you might freeze and listen; when you fear giving a presentation, you might ruminate about the audience reactions or over-prepare; when you fear getting a flu shot, you might cringe anticipating the pain; when you fear hurting a friend’s feelings, you might tell a white lie. Sometimes you will approach in fear, and sometimes you will avoid. Sometimes your heart rate will go up, and sometimes it will go down. (Wilson-Mendenhall et al. 2011, 1108)

We must learn how to use our fear responses in ways that are sensitive to the demands of our current situation. And part of what it means to be a successful emotional agent is to be able to adapt our reflexive responses to suit our current situation. Like fear, guilt looks and feels different when it is a response to forgetting a friend’s birthday, forgetting to respond to an email, or falling behind on one’s research commitments.
Sometimes guilt requires demonstrating that you are sorry and asking for forgiveness; sometimes it requires knuckling down, and getting to work on something you are behind schedule on. Similarly, experiences of love vary widely across the course of a relationship, and they are expressed differently depending on what has happened recently. Sometimes demonstrating your love requires approaching someone, but sometimes it requires giving them space. Sometimes it feels good (when things are going well), but sometimes it feels awful. Finally, as Scheman’s story of Alice suggests, strong negative affect in a social situation can help to sustain conceptualizations that transform an avoidance motivation like guilt into a transformative motivation like indignation. How affect is experienced, and how it is used in online cognition often depends on what conceptualizations are available, given a person’s learning history and given their ideological perspective. And this is why emotions that play a critical role in social transactions require a dynamic sensitivity to our current situation, and it is to these social factors that we now turn.

Emotions as social transactions

Socially relevant decisions must often be made even when we are uncertain about the beliefs, motivations, and interests of our social partners (Van Kleef et al. 2010). And we often rely on emotional cues to reduce this uncertainty (Manstead & Fischer 2001; Parkinson, Fischer, & Manstead 2005; Griffiths & Scarantino 2009). But how we use the interpersonal information embodied in emotional expressions depends on our expectations regarding the current situation: in contexts we judge to be cooperative, we often treat emotional information as evidence about the world – this leads to emotional contagion, responses consistent with emotions
expressed by our social partners, and attempts at social mood management; in contexts we judge to be competitive, by contrast, we are more likely to use emotional information strategically, as it provides evidence about how our social partner feels, what is important to them, and actions they are likely to take in the near future – emotional contagion appears to be reduced in such contexts, and it seems to play a less significant role in social decisions (Van Kleef 2009; Van Kleef et al. 2010, 49). The immediate affective responses triggered by the uptake of emotional information can be captured by four kinds of action tendencies:

• the motivation to move toward or to approach a social partner;
• the motivation to move away from or to exclude a social partner;
• the motivation to move against or to attack a social partner; and
• the motivation not to move, to ignore, or to hide from a social partner.
As Van Kleef and his colleagues (2010) argue, inferential processes play a significant role in the uptake of emotional information in competitive situations and in situations where we have epistemic motivation to deliberate strategically. By contrast, the action tendencies and appraisal patterns that are commonly taken to characterize emotional content take precedence in cooperative contexts, where the need for strategic reasoning may be lower. Positive affect typically leads us to consider a wider range of factors in evaluating a situation (Fredrickson & Branigan 2005, 315), to increase our willingness to adopt a novel strategy (Isen 2001), and to increase our willingness to revise initial judgments in light of new information (Ashby, Isen, & Turken 1999; Estrada, Isen, & Young 1997; Isen, Rosenzweig, & Young 1991). In cooperative contexts, feelings of happiness also lead us to move toward social partners in ways that foster collaboration, promoting trust, liking, and cooperation (Van Kleef et al. 2010, 65), and evoking a drive toward affiliation, and a perception of our world as filled with opportunity (Van Kleef et al. 2010, 59). In competitive contexts, however, expressions of happiness provide us with evidence that we are being duped or outdone, and this can increase competition. More intriguingly, where such expressions signal that our social partners are doing well, we sometimes assume that they will be more likely to make concessions; and this can evoke a Machiavellian search for better ways to exploit our opponent’s weaknesses (Van Kleef et al. 2010, 60). All told, the expression of positive affect in a competitive context appears to trigger a motivation to move against social partners, increasing competitive motivations and decreasing cooperative ones (Van Kleef et al. 2010, 72). A similar set of phenomena arises in the case of anger.
In cooperative contexts, anger tends to increase vigilance, producing antisocial or aggressive reactions that can motivate us to disengage, to refuse to make concessions, and to undermine cooperation (Van Kleef et al. 2010, 60). The uptake of anger is different in competitive contexts, at least where the angry party is perceived as having a higher or equal social status to us. Here, expressions of anger lead us to make concessions, in hopes of minimizing the risk of more serious forms of punishment. Paradoxically, anger increases the willingness to cooperate within a competitive context. But people are less willing to re-engage with an angry social partner in the future – while they may defer to an angry partner now, they prefer, in the long run, to move away from people who express anger (Van Kleef et al. 2010, 75). Expressions of sadness and distress convey the information that others need aid or support, and this can sometimes yield an increase in cooperation (Van Kleef et al. 2010, 68). This isn’t particularly surprising, as even young children engage in helping behavior when someone expresses sadness or distress. But in competitive contexts, expressions of sadness or distress are often ignored, unless there is a high degree of epistemic motivation, and a robust belief that
ignoring their sadness will lead to more deleterious outcomes. Put bluntly, signals of supplication trigger a desire not to move in competitive contexts. Finally, appeasement emotions like guilt, regret, and embarrassment often signal a willingness to make amends or engage in ameliorative behavior, and a desire to resume conformity with social norms (Van Kleef et al. 2010, 61). The tendency toward appeasement can foster re-engagement in cooperative contexts, and this can reduce competitive tendencies. But in competitive settings these emotions can invite increased competition and drive us to search for ways to exploit a social partner’s weaknesses: since “such emotions signal that the transgressor is willing to compensate, it becomes less necessary to make concessions oneself; instead, one can wait for the other to give in and exploit the situation to further one’s own goals” (Van Kleef et al. 2010, 61). The upshot of this discussion is that our affective responses to others are often shaped by our own appraisal of a situation, and by social information about what kind of situation we are in. In cooperative contexts, expressions of happiness and sadness both increase the likelihood of cooperation, and this is true in spite of the fact that these emotions have different affective valences. In competitive contexts, the interpersonal effects of emotional information are primarily strategic. The key insight, then, is that the structure of our current social situation plays a significant role in the social uptake of emotion: People become more competitive when their counterpart shows signs of happiness because they infer that the other is undemanding and ready to concede more. People become more cooperative when faced with expressions of anger because they infer that the other has ambitious goals and is a tough player. 
People exploit their counterpart when s/he shows guilt or regret, because such appeasement emotions are taken as a sign that the other feels s/he has already claimed too much and is willing to make concessions. (Van Kleef et al. 2010, 78) So what are we to make of these facts as regards the possibility of using emotions as interpersonal signals that can foster conformity, prompt the adoption of Machiavellian strategies, or signal a need to revise the operative social norms? In cooperative contexts, we can use the emotions expressed by others as evidence about how to adjust our own behavior. Seeing a partner as happy leads us to invest more effort in preserving our ongoing cooperative activity; seeing our partner as sad leads us to offer help, to rebuild a cooperative setting; seeing our partner as angry leads us to abandon our current actions, in hopes of preserving cooperation; and seeing our partner as guilty or embarrassed leads us to work toward rebuilding a foundation of trust and cooperation. Put much too simply, in cooperative situations, social emotions tend to nudge us toward further cooperation. Inferential processes and strategic reasoning seem to play a more significant role in the uptake of social emotions in competitive situations (Van Kleef et al. 2010), and this allows us to treat social emotions as interpersonal signals for individual or social recalibration. But the way we use them will depend on our conceptualization of our circumstance and our available options. Seeing our competitors as happy can trigger a motivation to move against them, but whether we do so individually or socially will depend on whether we have a social group that can help us to overturn the problematic normative order that their illicit happiness represents. Similarly, while seeing our social opponents as angry can often lead us to make concessions, in hopes of minimizing the risk of more serious forms of punishment, it need not do so.
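The context-dependent uptake reviewed in this section can be summarized schematically. The mapping below is our own simplification of the broad patterns reported by Van Kleef and colleagues (2010), not their model, and the tendency assigned to each cell is an interpretive assumption on our part.

```python
# A schematic summary of how the same emotional expression can be taken
# up differently depending on whether the perceiver construes the
# situation as cooperative or competitive. The tendency labels follow
# the four motivations listed earlier in this section; the mapping
# itself is an illustrative assumption.

UPTAKE = {
    ("happiness", "cooperative"): "move toward",   # affiliation, trust
    ("happiness", "competitive"): "move against",  # exploit perceived leniency
    ("anger",     "cooperative"): "move away",     # disengage, refuse concessions
    ("anger",     "competitive"): "move toward",   # concede to avoid punishment
    ("sadness",   "cooperative"): "move toward",   # offer help or support
    ("sadness",   "competitive"): "not move",      # ignore signals of supplication
    ("guilt",     "cooperative"): "move toward",   # re-engage, rebuild trust
    ("guilt",     "competitive"): "move against",  # wait for the other to give in
}

def uptake(expressed_emotion, perceived_context):
    """Predicted action tendency given an expression and a construal."""
    return UPTAKE[(expressed_emotion, perceived_context)]
```

The table makes the key structural point vivid: the construal of the situation, not the expressed emotion alone, determines the action tendency that is triggered.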
Seeing a social competitor as unjustifiably angry can also lead us to work together to undercut their
position of power, or serve to clarify the absurdity of their demands. Of course, we tend to perceive the people we interact with as cooperative partners, but as feminists, critical race theorists, and disability theorists have long argued, learning to see the ways that others are exploiting or manipulating us can transform the ways that we interpret their behavior. With this view of social interaction in hand, we now return to our claims about emotional learning to explain how our perception of a social situation as cooperative or competitive can help to transform the ways that we interpret the interpersonal signals that social emotions convey, and then act in light of this information to re-establish cooperation, to dupe competitive social partners, or to strive to produce another world.

From feeling to acting

Drawing together the insights of the previous two sections, we can summarize our primary claim as follows. Emotions reveal – both intra- and interpersonally – the existence of a tension between an individual and their social surroundings. They do this because of conflicts that arise at multiple levels of the computational hierarchy. We reflexively attune to affectively salient patterns that we encounter frequently. And conflicts between our learned expectations and incoming flows of socially salient information tend to trigger an affective response. In general, the initial recruitment of affect runs by way of attuned and elaborated forms of response, which have been built upon forms of biological preparedness. But for these approach and avoidance motivations to become emotions, with precise social content, and precise tendencies to move toward, away from, or against a social partner, these states must be integrated with sensory input, background beliefs, and self-referential information to yield a unified representation of what the current situation affords.
In many cases, social information, filtered through our construal of a situation as competitive or cooperative, exerts an important pressure on the representation that we construct. As a result, what a person does with the information embedded in a socially situated error-signal often depends on how they conceptualize their place in society, or within a smaller social group. Put differently, emotions call attention to “the importance of events to relevant concerns, help prioritize goals, and generate a state of action readiness that prepares the individual to respond to changes in the environment” (Van Kleef et al. 2010, 47). And since conceptual representations often depend on ongoing patterns of social engagement, the interpersonal information an emotion provides can sometimes be integrated into our conceptualization of an affective experience, yielding downstream effects on the emotions we experience. This is the critical upshot of the data on social transactions that we addressed in the previous section. In many cases, emotions that signal a mismatch between our current state and our current situation can motivate a person to uphold the prevailing norms in their society; and where this occurs, it is often because the available conceptual information, as well as the broader background of cooperative interactions, suggest that this is the most plausible behavioral response to the current situation. But at other times, such emotions can motivate a person to take advantage of a breakdown in cooperative norms, in part because they may perceive their current situation as competitive, or as a situation where a sucker can be hoodwinked. In neither case does the conceptual information need to be represented explicitly or consciously, and the precise structure of a person’s conceptualization of their emotion will always depend on contingent facts about their social history.
But expecting a situation to be competitive, and treating it as a situation where a benefit can be gained by defecting from a cooperative enterprise, can shift the import of the signal that is conveyed by an affective response to a situation. Finally, and to our minds most importantly, emotions that signal a mismatch between a person’s current state
and their social situation can sometimes be interpreted as evidence of the necessity to revise the operative social norms, especially where a novel conceptual framework can open up new possibilities for engagement with the world that were previously unnoticed – from the outside by way of consciousness raising, from the inside by way of deliberative reasoning, or through creative and shared explorations of novel conceptual possibilities. Importantly, no particular response is built into the structure of an emotion. Evolutionary forces have provided us with affective responses that are objectless and directionless; but “when affect is conceptualized and labeled with emotional knowledge, it becomes associated with an object in a specific situation, providing the experiencer with information about how best to act in that specific context” (Kashdan, Barrett, & McKnight 2015, 12). This is a deeply Darwinian point (though not one that Darwin himself would have made). The experience of an affective state always underdetermines the conceptualization of an emotion, as well as how an agent should take up the information embodied in the affective response. From this perspective, socially salient emotions are adaptive not because they convey a fixed kind of information, but because they help to generate flexible strategies for remaining attuned to a fast-moving and highly fluctuating social environment. Emotions couldn’t do this if they didn’t allow us to incorporate conceptual information about our current situation into our ongoing, affectively valenced responses to the world that we experience. In light of this fact, we contend that it is a mistake to think that the information conveyed by an emotion is always fixed by some facts about our evolutionary history – and this remains true even on the assumption that any plausible theory of emotion must be bound to an evolutionary story. But why does any of this matter?
We contend that the information conveyed by an emotion is the result of a complex set of interactions between evolved affect responses, conceptual representations, and information embodied in the material structure of the world we inhabit. As a result, we believe that the meaning of an emotion can shift as a result of changes at many different points in this distributed informational system. Bottom-up signals can evoke discomfort, leading us to retreat from novel possibilities, and to conceptualize our retreat as warranted by the current situation. And as a result of the dynamic feedback relations between our immediate responses and our construal of a situation, we can come to experience emotions that nudge us back toward social conformity. By contrast, the perception of novel possibilities can reshape our immediate experience of aversive affect, sometimes in ways that are highly productive. In this regard, we think that Scheman is right to think that we can re-conceptualize our emotions, taking ourselves to feel guilty because we have failed to live up to a norm, or taking ourselves to have been angry because the norm was unjust. But it would be a mistake to think that there is one correct conceptualization to an affective signal. Instead, we suggest that different conceptualizations of affective responses reveal different possibilities for socially significant action. By conceptually labeling our affective responses, we can exploit information that is present in what we feel, and do so in a way that conveys information about the current situation and possible courses of action (Barrett 2012). Emotional states that are labeled become easier for us to regulate, especially where they pull us away from our long-term goals, and where they must be incorporated into our ongoing understanding of the world (Kashdan, Barrett, & McKnight 2015). 
In her recent book, Political Emotions, Martha Nussbaum (2013) argues that while anger is often the first and strongest response we have to unjust social arrangements, love and compassion are sometimes better suited to creating a social order that minimizes the suffering of society’s most vulnerable members. But while agents might tend to respond to anger by becoming more antagonistic, this is at least partly due to our current cultural understanding of what it means to be angry (Stearns & Stearns 1986). In recognizing that our understanding of anger is socially contingent, and conceptually malleable, we can begin to see that different
cultural scripts and prescriptions can shift the behavioral responses that tend to be evoked by the losses that elicit anger, including responses that are more conducive to social change. Similarly, the affective response of loss that turns us inward, and leads us to focus on our own feelings, does not always need to be detrimental to our immediate or long-term well-being. By prompting us to focus on what we did wrong, the immediate inhibitory response that is characteristic of sadness can nudge us into a deliberative mindset; in controlled experimental settings, the experience of sadness can lead people to focus more directly on the outcomes of their actions, and to think about the big picture that they want to achieve. Put differently, while sadness inhibits goal striving, it can also enhance goal setting (Maglio, Gollwitzer, & Oettingen 2013). No doubt, it is difficult to focus on the process of setting goals when feelings of sadness wash over us; but if we can shift our conceptualization of a situation, and use negative affect in a positive and creative way, even the desire to say “no” can be made productive. Nietzsche got this much right. He observed that creativity is often required to enable ressentiment to serve as motivation for social recalibration, rather than as a commitment device. Without the ability to establish a critical distance from the norms of one’s society, it is impossible to take an emotion to signal that there is something wrong with those norms, and that the best way to resolve the tension created by that emotion is to change society itself. Our conformist tendencies make it hard for us to succeed in this regard. But there is nothing that requires us to descend into despair when we contravene the norms of our society. That said, it is often a bad idea to attempt to overturn a normative order on your own. 
And this is part of the reason why consciousness-raising groups can be so important to the transformation of guilt into anger in the case described by Scheman. Often, the ability to re-conceptualize our affective state requires a little bit of help from our friends; and it’s a good thing, because collective attempts at liberation also require help from our friends (Huebner forthcoming). This brings us back to Bartleby. Regardless of Melville’s intentions, the resources he provided in his story can sustain interpretations of Bartleby as hopeless, manipulative, or socially prescient. What we do with those resources depends on the knowledge and expectations we bring to bear on the story. Possessing a different understanding of Melville, 19th-century Manhattan, or the nature of social change can have a significant impact on the information that the story affords. And our conceptualization of the story will depend on the top-down factors that shape how the story shows up to us. Over the course of this chapter, we hope to have shown that something similar holds for the dynamics of social emotions. Conceptualizations and social uptake shape the information that our emotions provide, and it is the interactions between our affective response to the world and our conceptualization of our situation that opens up and closes off various options for socially situated action. As it is with Bartleby, so it is with humanity.
Note
1 The upshot is that stored associations as well as situated expectations must be changed to address the moral problem here. This means changing the world we live in to suit our values, not just changing our morally problematic on-line coping strategies. For an extended discussion of this issue, see Huebner (2016).
References
Adolphs, R. (2010). What does the amygdala contribute to social cognition? Annals of the New York Academy of Sciences, 1191, 42–61.
Ashby, F.G., Isen, A.M. & Turken, A.U. (1999). A neuropsychological theory of positive affect and its influence on cognition. Psychological Review, 106, 529–550.
Emotional processing Barrett, L.F. (2012). Emotions are real. Emotion, 3, 413–429. ———. (2014). The conceptual act theory: A précis. Emotion Review, 6, 292–297. ———. (2015). When a gun is not a gun. New York Times, 17 April 2015; Retrieved 12 August 2015 from http://goo.gl/PvXLDt Barrett, L.F. & Simmons, W.K. (2015). Interoceptive predictions in the brain. Nature Reviews Neuroscience, 16, 419–429. Barrett, L.F., Wilson-Mendenhall, C.D. & Barsalou, L.W. (2015). The conceptual act theory: A road map. In L.F. Barrett and J.A. Russell (Eds.), The psychological construction of emotion, pp. 83–110). New York: Guilford. Boehm, C. (2012). Costs and benefits in hunter-gatherer punishment. Behavioral and Brain Sciences, 35(1), 19–20. Bowles, S. & Gintis, H. (2011). A Cooperative Species – Human Reciprocity and Its Evolution. Princeton, NJ: Princeton University Press. Clark, A. (2015). Surfing uncertainty. New York: Oxford University Press. Clore, G. L., Gasper, K. & Garvin, E. (2001). Affect as information. In J.P. Forgas (Ed.), Handbook of affect and social cognition (pp. 121–144). Mahwah, NJ: Erlbaum. Colombo, M. (2016). Social norms: a reinforcement learning perspective. (this volume) Crockett, M.J., Kurth-Nelson, Z., Siegel, J.Z., Dayan, P. & Dolan, R.J. (2014). Harm to others outweighs harm to self in moral decision making. PNAS 111(48): 17320–17325. Cummins, D. & R. Cummins (1999). Biological preparedness and evolutionary explanation. Cognition, 73, B37–B53. Cushman, F. (2013). Action, outcome, and value. Personal and Social Psychological Review, 17(3), 273–292. Dawkins, R. (1976). The Selfish Gene. New York: Oxford University Press. Deleuze, G. (1997). Bartleby; Or, the formula. In D.W. Smith and M.A. Greco (Trans.) Essays Critical and Clinical (pp. 68–90). Minneapolis: University of Minnesota Press. Dennett, D. C. (2015). Why and how does consciousness seem the way it seems? Open MIND: 10(T). Frankfurt am Main: MIND Group. 
doi: 10.15502/9783958570245 Estrada, C.A., Isen,A.M. &Young, M.J. (1997). Positive affect facilitates integration of information and decreases anchoring in reasoning among physicians. Organizational and Human Decision Processes, 72, 117–135. Fischer, A.H. & Van Kleef, G.A. (2010). Where have all the people gone? A plea for including social interaction in emotion research. Emotion Review, 2, 208–211. Foley, B. (2000). From Wall Street to Astor Place: Historicizing Melville’s ‘Bartleby’. American Literature, 72(1), 87–109. Frank, R. (1988). Passions Within Reason. New York: W. W. Norton & Company. Fredricksen, B.L. & Branigan, C. (2005). Positive emotions broaden the scope of attention and thoughtaction repertoires. Cognition and Emotion, 19(3):,313–332. Fridja, N. (1988). The Laws of Emotion. American Psychologist, 43, 349–358. Garcia, J. & Koelling, R.A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4(1), 123–124. Greene, J. (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: Penguin. Griffiths, P. (1997). What Emotions Really Are. Chicago: University of Chicago Press. ———. (2002).Towards a Machiavellian theory of emotional appraisal. In P. Cruse and D. Evans (Eds.), Emotion, Evolution, and Rationality (pp. 89–105). Oxford: Oxford University Press. Griffiths, P. & Scarantino, A. (2009). Emotions in the Wild. In P. Robbins and M. Aydede (Eds.), The Cambridge Handbook of Situated Cognition (pp. 437–453). New York: Cambridge University Press. Henrich, J., Boyd, R., Bowles, S., Gintis, H., Fehr, E., Camerer, C., . . . Henrich, N. (2005). “Economic man” in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behavior and Brain Sciences, 28(6), 795–781. Hohwy, J. (2013). The Predictive Mind. New York: Oxford University Press. Huebner, B. (2012). Surprisal and valuation in the predictive brain. Frontiers in Psychology, 3, 415 ———. (2016). 
Implicit bias, reinforcement learning, and scaffolded moral cognition. In Michael Brownstein and Jennifer Saul (Eds.), Implicit Bias and Philosophy (Vol. 1; pp. 47–79). Oxford: Oxford University Press.
Bryce Huebner and Trip Glazer ———. (forthcoming). Planning and prefigurative politics. In B. Huebner (Ed.), Essays on Dennett. New York: Oxford University Press. Isen, A.M. (2001). An influence of positive affect on decision making in complex situations:Theoretical issues with practical implications. Journal of Consumer Psychology, 11(2), 75–85. Isen, A.M., Rosenzweig, A.S. & Young, M.J. (1991).The influence of positive affect on clinical problem solving. Medical Decision Making, 11, 221–227. Kashdan, T.B., Barrett, L.F. & McKnight, P.E. (2015). Unpacking emotion differentiation by transforming unpleasant experience by perceiving distinctions in negativity. Current Directions in Psychological Science, 24(1), 10–16. Keltner, D. & Haidt, J. (1999). Social functions of emotions at four levels of analysis. Cognition and Emotion, 13, 505–521. ———. (2001). Social functions of emotions. In T. Mayne and G.A. Bonanno (Eds.), Emotions: Current Issues and Future Directions (pp. 192–213). New York: Guilford Press. Keltner, D., Haidt, J. & Shiota, L. (2006). Social Functionalism and the Evolution of Emotions. In M. Schaller, D. Kenrick and J. Simpson (Eds.) Evolution and Social Psychology (pp. 115–142). New York: Psychology Press. Klucharev,V., Hytonen, K., Rijpkema, M., Smidts, A. & Fernandez, G. (2009). Reinforcement learning signal predicts social conformity. Neuron, 61(1), 140–151. Klucharev, V., Munneke, M.A., Smidts, A. & Fernandez, G. (2011). Downregulation of the posterior medial frontal cortex prevents social conformity. Journal of Neuroscience, 31, 11934–11940. LeDoux, J. (2007). The amygdala. Current Biology, 17(20), R868–R874. Lindquist, K.A., Wager, T.D., Kober, H., Bliss-Moreau, E. & Barrett, L.F. (2012). The brain basis of emotion: A meta-analytic review. Behavior and Brain Sciences, 35(3), 121–143. Maglio, S. Gollwitzer, P. & Oettingen, G. (2013). Action Control by Implementation Intentions:The Roll of Discrete Emotions. New York: Oxford University Press. 
Manstead, A.S.R. & Fischer, A.H. (2001). Social appraisal: The social world as object of and influence on appraisal processes. In K.R. Scherer and A. Schorr (Eds.) Appraisal Processes in Emotion: Theory, Methods, Research (pp. 221–232). New York: Oxford University Press. Melville, H. (1853). Bartleby, the scrivener: A story of Wall Street. Putnam’s Magazine, 2(11) (November), 546–557; and 2(12) (December), 609–615. Milgram, S., Liberty, H. J.,Toledo, R. & Wackenhut. J. (1986). Response to intrusion into waiting lines. Journal of Personality and Social Psychology, 51(4), 683–689. Milgram, S. & Sabini, J. (1978). Advances in environmental psychology 1, the urban environment. In A. Baum, J.E. Singer and S. Valins (Eds.), On Maintaining Social Norms: A Field Experiment in the Subway (pp. 31–40). Mahwah, NJ: Erlbaum Associates. Montague, P. R., Dolan, R. J., Friston, K. J. & Dayan P. (2012). Computational psychiatry. Cognitive Science, 16, 72–80. Montague, R. (2006). Why Choose This Book? New York: Dutton Press. Nesse, R.M. (2005). Twelve crucial points about emotions, evolution, and mental disorders. Psychology Review, 11(4), 12–14. ———. (1990). Evolutionary explanations of emotion. Human Nature, 1(3), 261–289. Nietzsche, F. (1998). On the Genealogy of Morality. Indianapolis: Hackett Publishing Company. Nussbaum, M. (2013). Political Emotions. Cambridge, MA: Harvard University Press. Parkinson, B., Fischer, A. H. & Manstead, A.S.R. (2005). Emotion in Social Relations. New York: Psychology Press. Prinz, J. (2006). The emotional basis of moral judgments. Philosophical Explorations, 9(1), 29–43. Rescorla, R. (1988). Pavlovian conditioning: It’s not what you think it is. American Psychologist, 43(3), 151–160. Rozin, P., Haidt, J. & McCauley, C. (2009). Disgust. In D. Sander and K. Scherer (Eds.), Oxford Companion to Affective Sciences (pp. 121–122). New York: Oxford University Press. Sartre, J.P. (1939). Esquisse d’une Théorie des Emotions. Paris: Hermann. Scarantino, A. 
(2012). Discrete emotions: From folk psychology to causal mechanisms. In Peter Zachar and Ralph Ellis (Eds.), Categorical and Dimensional Models of Affect: A Seminar on the Theories of Panksepp and Russell (pp. 135–154). Amsterdam: John Benjamins.
Emotional processing ———. (2014). Basic emotions, psychological construction and the problem of variability. In James Russell and Lisa Barrett (Eds.), The Psychological Construction of Emotion (pp. 334–376). New York: Guilford Press. Scarantino, A. & Griffiths, P. (2011). Don’t Give Up on Basic Emotions. Emotion Review, 3(4), 1–11. Schelling, T.C. (1966). Arms and Influence. New Haven, CT:Yale University Press. Scheman, N. (1980). Anger and the politics of naming. In S. McConnell-Ginet, R. Borker and N. Furman (Eds.), Women and Language in Literature and Society (pp. 174–187). New York: Praeger. Schultz, W. (1998). Predictive reward signal of dopamine neurons. Journal of neurophysiology, 80(1), 1–27. Schwarz, N. & Clore, G.L. (2007). Feelings and phenomenal experiences. In E. T. Higgins and A. W. Kruglanski (Eds.), Social Psychology: Handbook of basic principles, 2nd edition (pp. 385–407). New York: Guilford. Seligman, M. (1971). Phobias and preparedness. Behavior therapy, 2, 307–320. Seligman, M.E.P. & Hager, J. L. (1972). Biological Boundaries of Learning. New York: Appleton-Century-Crofts. Sellars, W. (1978). The role of imagination in Kant’s theory of experience. Retrieved on 19/01/16 from http://www.ditext.com/sellars/ikte.html. The paper was originally published in Categories: a Colloquium (Ed. H.W. Johnstone, University Park, PA: Pennsylvania University Press. p. 231–45. Stearns, C.Z. & Stearns, P. (1986.) Anger. Chicago: University of Chicago Press. Sterelny, K. (2012). The Evolved Apprentice: Cambridge, MA: MIT Press. ———. (2016). Cooperation, culture, and conflict. The British Journal for the Philosophy of Science, 67(1), 31–58. Strawson, P.F. (1962). Freedom and Resentment. Proceedings of the British Academy, 48, 1–25. Van Kleef, G.A. (2009). How emotions regulate social life:The emotions as social information (EASI) model. Current Directions in Psychological Science, 18, 184–188. Van Kleef, G., De Dreu, C. & Manstead, A. (2010). 
An interpersonal approach to emotion in social decision making: The emotions as social information model. Advances in Experimental Social Psychology, 42, 45–96. Watson, G. (2008). Responsibility and the limits of evil: Variations on a Strawsonian theme. In M. McKenna and P. Russell (Eds.), Free Will and Reactive Attitudes (pp. 115–142). Surrey, UK: Ashgate. Wilson-Mendenhall, C.D., Barrett, L.F. & Barsalou, L.W. (2013). Situating emotional experience. Frontiers in Human Neuroscience, 7(164), 1–16. Wilson-Mendenhall, C.D., Barrett, L.F., Simmons, W.K. & Barsalou, L.W. (2011). Grounding emotion in situated conceptualization. Neuropsychologia, 49, 1105–1127. Supplemental Materials. Zaki, J. & Mitchell, J. (2013). Intuitive prosociality. Current Directions in Psychological Science, 22(6), 466–470.
17 IMPLICIT ATTITUDES, SOCIAL LEARNING, AND MORAL CREDIBILITY1 Michael Brownstein
1. Spontaneity and credibility
Dichotomous frameworks for understanding human decision-making often distinguish between spontaneous or intuitive judgments, on the one hand, and deliberative, reasoned judgments, on the other. The precise qualities thought to characterize these two kinds of judgments – sometimes aggregated under the headings "System I" and "System II" – change from theory to theory.2 Recent years, however, have seen a shift in dual systems theorizing from attempts to specify the precise qualities that characterize these two kinds of judgments to descriptions of the distinct neural and computational mechanisms that underlie them. In turn, these mechanisms are coming to be a focal point for the current incarnation of a long-standing debate about whether and why spontaneous judgments are ever good guides for decision-making and action.3 Do our intuitions, emotional reactions, and unreasoned judgments ever have authority for us? Are they morally credible? On the one hand, one might think that the nature of the neural and computational systems underlying spontaneous judgments demonstrates that they are paradigmatically short-sighted and morally untrustworthy. On the other hand, the nature of these mechanisms might vindicate at least a defeasible authority accorded to our spontaneous judgments in some circumstances.
This debate can be articulated in terms of the mechanisms of evaluative learning. What kinds of processes are involved in forming our spontaneous judgments? Are those processes responsive to past experience, situationally flexible, etc. in ways that make them good guides for decision-making and action? In what follows, I'll first describe a general argument others have made for the defeasible moral credibility of spontaneous judgments, based on the neural and computational mechanisms that underlie them (§2).4 I'll then focus on the particular case of implicit social attitudes (§3). I do so for two reasons.
First, these attitudes can be understood as subserved in large part by the same neural and computational learning mechanisms that I describe in §2. This shows that implicit social attitudes count as an instance of spontaneous judgments in the relevant sense, and can in principle be good guides for decision-making and action. Second, the cases in which implicit social attitudes are good guides for action are extremely hard to distinguish on the ground, so to speak, from the cases in which they amount to morally deplorable implicit biases. This illuminates what I call the "credibility question": under
what conditions are one’s spontaneous judgments good guides for action, compared with the conditions under which one ought to override one’s immediate inclinations?5 Peter Railton (2014) identifies certain traits and experiences of people whose spontaneous judgments have putative moral authority. Focusing on implicit social attitudes as a test case, I consider the evidence for his suggestion in the final section of the chapter (§4).
2. Value and attunement
Research in computational neuroscience suggests the existence of two distinct mechanisms for value-based decision-making. These are typically called "model-based" and "model-free" systems (Blair et al., 2004, 2013; Crockett, 2013; Cushman, 2013).6 I briefly describe each (§2.1), then consider evidence suggesting that model-free systems can produce morally credible spontaneous judgments (§2.2).
2.1. Model-based and model-free evaluative learning
Model-based evaluative learning systems produce map-like representations of the world. They represent the actions that are available to an agent, along with the potential outcomes of those actions and the values of those outcomes. This comprises what is often called a "causal model" of the agent's world. In evaluating an action, a model-based system runs through this map, calculating and comparing the values of the outcomes of different choices, based on the agent's past experiences, as well as the agent's abstract knowledge. A simple example imagines a person navigating through a city, computing and comparing the outcomes of taking one route versus another, and then choosing the fastest route to their destination.7 The agent's internal map of the city comprises a causal model of the agent's action-relevant "world". This model can also be thought of as a decision-tree. Running through the "branches" of a decision-tree is what many commonly refer to as "reasoning" (Cushman, 2013). That is, model-based systems are thought to subserve the process in which agents consider the outcomes of various possible actions, and compare those outcomes in light of what the agent cares about or desires. This sort of reasoning is inherently prospective, since it requires projecting into the likely outcomes of hypothetical actions. For this reason, model-based systems are sometimes referred to as "forward-models".
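To make the contrast concrete, the map-like evaluation just described can be sketched in a few lines of Python (a toy illustration; the routes, probabilities, and values below are invented for the example, not drawn from the literature):

```python
# Toy model-based evaluation: an explicit "causal model" maps each action
# to its possible outcomes, each tagged with (probability, value).
city_model = {
    "take_highway":      {"light_traffic": (0.3, 10), "heavy_traffic": (0.7, 2)},
    "take_side_streets": {"clear": (0.9, 6), "roadwork": (0.1, 1)},
}

def expected_value(action, model):
    # Walk one branch of the decision-tree: weight each outcome's
    # value by its probability and sum.
    return sum(p * v for (p, v) in model[action].values())

def choose(model):
    # "Reasoning": compare every branch of the tree and pick the best action.
    return max(model, key=lambda a: expected_value(a, model))

print(choose(city_model))  # -> take_side_streets (5.5 vs. 4.4 for the highway)
```

Running through every branch like this is what makes model-based evaluation flexible but costly: each added action or outcome multiplies the comparisons to be made.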
In contrast, a model-free system computes the value of one particular action, based on the agent's past experience in similar situations. Model-free systems enable value-based decision-making without representing complex map-like causal links between actions and outcomes. Essentially, model-free systems compute the value of one particular action, without modeling the "world" as such.8 The computation is done on the basis of a comparison between the agent's past experience in similar situations and whether the current action turns out better or worse than expected. For example, faced with a decision of turning left or right at a familiar corner, a model-free system can generate a prediction that turning one direction (e.g., left) will be valuable to the agent. The system does this based on calculations of the size of any discrepancy between how valuable turning left was in the past and whether turning left this time turns out better than expected, worse than expected, or as expected. Suppose that in the past, when the agent turned left, the traffic was better than she expected. The agent's "prior" in this case for turning left would be high. But suppose this time the agent turns left, and the traffic is worse than expected. This generates a discrepancy between the agent's past reinforcement history and her current experience, which is negative given that turning left turned out worse than expected. This discrepancy will feed into her future predictions; her prediction about the
value of turning left at this corner will now be lower than it previously was. Model-free systems rely on this "prediction-error" signaling, basing new predictions on comparisons between past reinforcement history and the agent's current actions. While there is no obvious folk psychological analogue for model-free processing, the outputs of this system are commonly described as gut feelings, spontaneous inclinations, and the like. This is because these kinds of judgments do not involve reasoning about alternate possibilities, but rather they offer agents an immediate positive or negative sense about what to do. Model-based systems are often described as flexible, but computationally costly, while model-free systems are described as inflexible, but computationally cheap. Navigating with a map enables flexible decision-making, in the sense that one can shift strategies, envision sequences of choices several steps ahead, utilize "Tower of Hanoi"-like tactics of taking one step back in order to take two more forward, etc. But this sort of navigation is costly in the sense that it involves computing many action-outcome pairs, the number of which expands exponentially even in seemingly simple situations. On the other hand, navigating without a map, based on the information provided by past experiences for each particular decision, is comparatively inflexible. Model-free systems only enable one to evaluate one's current action, without considering options in light of alternatives or future consequences. But navigating without a map is easy and cheap. The number of options to compute is severely constrained, such that one can make on-the-fly decisions, informed by past experience, without having to consult the map (and without risking tripping over one's feet while one tries to read the map, so to speak).
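The left-turn example can likewise be sketched in code: a model-free system caches a single value per action and nudges it by the prediction error, with no model of the world at all (the learning rate and the reward numbers here are invented assumptions, chosen only for illustration):

```python
# Toy model-free learning: one cached value per action, updated by the
# discrepancy between expected and experienced outcome (prediction error).
alpha = 0.5                 # learning rate (assumed)
V = {"turn_left": 8.0}      # high "prior": turning left has gone well before

def update(action, outcome):
    error = outcome - V[action]      # prediction error
    V[action] += alpha * error       # V <- V + alpha * (outcome - V)
    return error

err = update("turn_left", 2.0)       # today the traffic is worse than expected
print(err, V["turn_left"])           # -> -6.0 5.0: a negative error lowers
                                     #    the prediction for next time
```

Note how cheap the computation is: a single subtraction and addition per decision, at the price of evaluating only the action actually taken.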
If one accepts the rough generalization that the outputs of model-based processing are deliberative judgments and the outputs of model-free processing are spontaneous judgments,9 then one might also think that deliberative judgments are flexible but inefficient and spontaneous judgments are inflexible but efficient. Indeed, this is what many people think, whether they are focused on the more general level of System I and System II or whether they are focused on the specific model-free and model-based learning systems that appear to subserve System I and II.10 But there is reason to question the putative inflexibility of spontaneous judgments and to do so based on what model-free systems can do. Moreover, it is not just situationally flexible behavior that model-free learning can support, but socially attuned, experience-tested behavior and decision-making as well.
2.2. Wide competence and model-free learning
Consider three cases in which agents' spontaneous reactions tend to outperform their deliberative judgments. In the "Iowa Gambling Task" (Bechara et al., 1997), participants are presented with four decks of cards and $2000 in pretend gambling money. They must choose facedown cards, one at a time, from any of the decks. Two of the decks are "good" in the sense that choosing from them offers an overall pattern of reward, despite only small rewards offered by the cards at the top of the deck. Two of the decks are "bad" in the sense that picking from them gives the participant a net loss, despite large initial gains. It takes subjects on average about 80 card-turns before they can say why they prefer to pick from the good decks. After about 50 turns, most participants can say that they prefer the good decks, even if they aren't sure why. But even before this, during what Bechara and colleagues call the "pre-hunch" phase, most participants prefer the good decks (as revealed by their actual choices).
But when stopped and asked about their preferences and beliefs about the decks every 10 turns, participants don't report having any preferences or strategic beliefs during this phase. Most intriguingly, after about only 20 turns, most participants have higher anticipatory skin conductance responses before picking from the bad decks.11
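A hedged simulation can illustrate how a simple prediction-error learner comes to track the deck statistics well before anything like an explicit strategy is available. The payoff schedules below are invented stand-ins for Bechara and colleagues' decks (bad decks offer large gains but a negative expectation; good decks offer small gains and a positive one), and the learner keeps nothing but a running, error-driven value estimate per deck:

```python
import random

random.seed(0)

def draw(deck):
    # Invented payoffs: "bad" decks A/B have big wins but bigger losses
    # (negative expectation); "good" decks C/D have modest wins and
    # modest losses (positive expectation).
    if deck in ("A", "B"):
        return 100 if random.random() < 0.5 else -300   # EV = -100
    return 50 if random.random() < 0.5 else -25         # EV = +12.5

V = {d: 0.0 for d in "ABCD"}   # cached value per deck
N = {d: 0 for d in "ABCD"}     # times each deck has been sampled

for _ in range(400):           # sample the decks, updating by prediction error
    deck = random.choice("ABCD")
    N[deck] += 1
    V[deck] += (draw(deck) - V[deck]) / N[deck]   # decaying learning rate

print(max(V, key=V.get))       # a "good" deck (C or D) ends up preferred
```

The learner "prefers" the good decks without ever representing why they are good, a rough analogue of the pre-hunch phase.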
It's not just in tracking statistical regularities that spontaneous judgments have the potential to outperform deliberative judgments. A second set of examples stems from cases of expert action, particularly in sports. Oftentimes, expert athletes do not have greater conscious or declarative access to the reasons for which they make the choices they do, such as playing a particular shot or swinging at a particular pitch (Beilock, 2010; Brownstein, 2014; Michaelson & Brownstein, 2015). This is perhaps why the best athletes don't necessarily make good coaches; while experts' performances can be extraordinary, their understanding of what distinguishes their abilities is often just ordinary. Instead, expert athletes appear to have a special ability to make nearly instantaneous action-guiding predictions about the relevant variables in the sport (Yarrow et al., 2009). In ball sports like baseball, for example, experts' motor performance (e.g., hitting) is tied to their ability to accurately predict when and where the ball will cross the plate.12 Baseball expertise, it seems, is not just determined by greater physical strength, a more determined will, and better coordination, but also by the ability to make spontaneous and accurate on-the-fly predictions about the outcomes of valued events under ambiguous conditions.13 A final set of examples has to do with interpersonal social fluency, which is commonly recognized as "people skills" or "tact". Interpersonal social fluency requires one's spontaneous gestures and "micro-expressions" to be attuned to others and to the general demands of the situation.14 Consider, for example, then President-Elect Obama's inauguration in 2009.
In front of millions of viewers, Obama and Chief Justice of the Supreme Court John Roberts both fumbled the lines of the Oath, opening the possibility for a disastrously awkward moment.15 But after hesitating for a moment, Obama smiled widely and nodded slightly to Roberts, as if to say, "It's okay, go on." These gestures received little explicit attention, but they defused the awkwardness of the moment, enabling the ceremony to go on in a positive atmosphere. Despite his nervousness and mistakes, Obama's social fluency was on display. Most of us know people with similar skills, which require real-time, fluid spontaneity, and can lead to socially valuable ends (Manzini et al., 2009).16 There is reason to think that model-free learning mechanisms can do substantial work in explaining agents' decisions and behavior in cases like these – in estimating statistics, in expert athletics, and in interpersonal interaction. In some cases, there is direct evidence. Statistical competence, for example, has been traced to model-free learning mechanisms (Daw et al., 2011).17 These findings are consistent with wide-ranging research suggesting that agents' – even nonhuman agents' – spontaneous judgments are surprisingly competent at tracking regularities in the world (e.g., Kolling et al., 2012; Preuschoff et al., 2006). Yarrow and colleagues (2009) combine this with research on motor control to understand expertise in athletics. They focus on experts' ability to make predictive, rather than reactive, decisions on the basis of values generated for particular actions. More research is clearly needed, but these findings suggest that model-free learning is essential to the skilled spontaneous judgment that distinguishes experts in sports from beginners and even skilled amateurs.
It is relatively uncontroversial, however, to say that model-free learning helps to explain spontaneous judgment and behavior in cases in which the relevant variables are repeatedly presented to the agent in a relatively stable and familiar environment. In cases like batting in baseball, this repeated presentation of outcomes in familiar situations enables the agent to update her predictions on the basis of discrepancies between previous predictions and current actions. But what about cases in which an agent spontaneously displays appropriate, and even skilled, behavior in unfamiliar environments? Interpersonal social fluency requires this. Offering a comforting smile can go terribly awry in the wrong circumstance; interpersonal fluency requires deploying the right reaction in changing and novel circumstances. The question
then is whether model-free systems can explain a kind of “wide” competence in spontaneous decision-making and behavior.18 Wide competencies are not limited to a particular familiar domain of action. Rather, they can manifest across a diverse set of relatively unfamiliar environments. One reason to think that model-free systems can subserve wide, rather than narrow (i.e., context-bound), abilities is that these systems can treat novel cues which are not rewarding as predictive of other cues which are rewarding. Huebner (2016, 55) describes this process: For example, such a system may initially respond to the delicious taste of a fine chocolate bar. But when this taste is repeatedly preceded by seeing that chocolate bar’s label, the experience of seeing that label will be treated as rewarding in itself – so long as the label remains a clear signal that delicious chocolate is on the way. Similarly, if every trip to the chocolate shop leads to the purchase of that delicious chocolate bar, entering the shop may come to predict the purchasing of the chocolate bar, with the label that indicates the presence of delicious chocolate; in which case entering the shop will come to be treated as rewarding. And if every paycheck leads to a trip to the chocolate shop . . .19 This kind of “scaffolding” of reward prediction is known as “temporal difference reinforcement learning” (TDRL; Sutton, 1988; Cushman, 2013). It enables model-free systems to treat cues in the environment which are themselves not rewarding, but are predictive of rewards, as intrinsically rewarding. The chocolate shop is not itself rewarding, but is predictive of other outcomes (eating chocolate) that are rewarding. (Better: the chocolate shop is predictive of buying chocolate which is predictive of eating chocolate which is predictive of reward.) The key point is that the chocolate shop itself comes to be treated as rewarding. 
The agent need not rely on a causal map that abstractly represents A leading to B, B leading to C, and C leading to D. This helps to explain how a spontaneous and socially attuned gesture like Obama's grin can be generated by a model-free learning system. Smiling-at-Chief-Justices-during-Presidential-Inaugurations is not itself rewarding. Or, in any case, Obama did not have past experiences that would have reinforced the value of this particular action in this particular context. But presumably Obama did have many experiences that contributed to the fine-tuning of his micro-expressions, such that these spontaneous gestures have come to be highly adaptive. Indeed, this adaptiveness is unusually salient in Obama's case, where he displayed a high degree of interpersonal social fluency. And yet he might have very little abstract knowledge that he should smile in situations like this one (Brownstein & Madva, 2012b). As Cushman (2013) puts it, a model-free algorithm knows that some choice feels good, but it has no idea why. The upshot is that the outputs of model-free learning ought to be accorded some kind of defeasible authority for us. That is, the wide competence and experience-tested qualities of model-free learning suggest that there are times when we ought to trust our spontaneous judgments. Seligman and colleagues (2013) count four related reasons for thinking that the learning system subserving paradigmatic spontaneous judgments should be accorded some practical authority in decision-making. First, these systems enable agents to learn from experience, given some prior expectation or bias. Second, they enable prior expectations to be overcome by experience over time, through the "washing out" of priors. Third, they are set up such that expected values will, in principle, converge on the "real" frequencies found in the environment, so that agents really do come to be attuned to the world.
And fourth, they adapt to variance when frequencies found in the environment change, enabling relatively successful decision-making in both familiar and relatively novel contexts. Together, these features of model-free learning underlie what I mean by the defeasible moral credibility of our spontaneous judgments.
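The reward "scaffolding" behind this wide competence can be sketched as a temporal-difference update over Huebner's chocolate-shop chain described above. This is a toy illustration: the states, the reward size, the learning rate, and the discount factor are all invented assumptions. Only the taste is rewarding in itself, yet each earlier cue acquires value because it reliably predicts the next:

```python
# Toy TD(0) learning along the chocolate-shop chain. Only the final state
# carries reward; earlier cues inherit ("scaffold") value because each
# predicts the next, without any causal map of A -> B -> C.
chain = ["enter_shop", "see_label", "taste_chocolate"]
reward = {"taste_chocolate": 10.0}  # only the taste is rewarding itself
V = {s: 0.0 for s in chain}
alpha, gamma = 0.5, 0.9             # learning rate, discount (assumed)

for _ in range(100):                # repeated trips to the shop
    for i, s in enumerate(chain):
        next_value = V[chain[i + 1]] if i + 1 < len(chain) else 0.0
        # TD(0): move V(s) toward immediate reward + discounted next value
        V[s] += alpha * (reward.get(s, 0.0) + gamma * next_value - V[s])

for s in chain:
    print(s, round(V[s], 2))        # entering the shop itself ends up
                                    # valued (about 8.1 here)
```

The shop's value is wholly derivative, yet the system stores it as if the shop were rewarding in its own right, which is exactly why, as Cushman puts it, the algorithm knows that some choice feels good without knowing why.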
Of course, our spontaneous judgments are only defeasibly good guides for action, and for several reasons. As Huebner (2009, 2016) emphasizes, these learning systems will only be as good as the environment in which they are trained. An agent whose past experiences are morally blinkered, perhaps due to being raised in an isolated and xenophobic environment, is likely to have morally blinkered spontaneous judgments. Likewise, people who live in an unjust world (like us), suffused with prejudice and negative stereotypes, are likely to become attuned to common prejudicial attitudes and to reflect social stereotypes in their reward predictions. In cases like these, agents’ spontaneous social judgments may still be thought of as attuned to the social world, but just to the wrong features of it. Worries like these give rise to the idea that a decision-making system that ought to hold authority for us must do more than just represent first-order values. One might think, for example, that a morally credible action-guidance system must be responsive to values that an agent reflectively endorses, and not just to predictors of good feelings. The fact that our spontaneous judgments lack features like these gives rise to the credibility question, that is, the question for practical agents of knowing when the defeasible authority of their spontaneous judgments has indeed been defeated.20
3. Implicit attitudes

Proponents of the view that model-free learning systems can have moral credibility have developed this claim using examples such as interpersonal social fluency (Railton, 2014), judgments involving moral luck (Kumar, ms; Martin & Cushman, 2016), and even the “Statistical Victim Effect” (Railton, 2015). Here I consider the claim in light of a distinct but related set of phenomena. Research on “implicit attitudes” has grown rapidly over the past 25 years, and there is good reason to believe that model-free learning can explain substantial features of how these states function (§3.1). If this is right, then implicit attitudes should be defeasibly credible guides to action. That is, they should act as valuable social tuning devices, but also be highly susceptible to bias (§3.2). Given this, how can we tell when to trust them? I raise some worries about the difficulty of answering this question (§3.3), then consider one kind of solution (§4).

3.1. Implicit attitudes and model-free learning

People hold implicit attitudes toward food, clothing, brands, alcohol, and, most notably, social groups. Implicit attitudes are generally understood as preferences that need not enter into focal awareness and are relatively difficult to control. Virtually all theoretical models of implicit attitudes understand them to be the product of a complex mix of cognitive and affective processes.21 Elsewhere I have offered an account of how these cognitive and affective processes cohere into a particular kind of (implicit) mental state (Madva & Brownstein, forthcoming). Here I consider what kind of learning mechanisms might subserve these states. I suggest that model-free evaluative learning systems explain more about implicit attitudes than others might suppose. This makes possible the idea that implicit attitudes are defeasibly good guides to action. Of course, amassing sufficient evidence for this claim would require a paper of its own.
My aim instead is to sketch a conceptual architecture on the basis of which this is plausible and then to consider the credibility question about implicit attitudes. My view is much indebted to Huebner (2016), who argues that implicit attitudes are constructed by the aggregate “votes” cast by basic “Pavlovian” stimulus-reward associations, model-free reward predictors, and model-based decision-trees. Pavlovian stimulus-reward associations are distinguished from model-free reward predictors in that the former passively bind innate responses to biologically salient rewards, whereas the latter compute decisions
Michael Brownstein
based on the likelihood of outcomes and incorporate predictors of predictors of rewards, as described in §2. This process of decision-system voting is substantiated by previous research (Crockett, 2013; Daw et al., 2011; Huys et al., 2012). In short, on Huebner’s view, Pavlovian associative mechanisms track cues in the environment that are biologically salient, such as signs of danger or sexual reward. In a world such as ours, which is suffused with images and stories tying particular social groups to signs of danger, sex, etc., these basic associative mechanisms will attune agents to these racialized and sexualized representations. These socially saturated stimulus-response reactions aren’t tantamount to implicit attitudes just as such, however. This is because implicit attitudes aren’t only responsive to threats and biological rewards, but also to social norms. Tracking and updating according to the demands of social norms is the work of model-free systems, which can treat social norms as stand-ins for expected rewards. Huebner here draws on research showing that model-free prediction-error signaling is largely responsible for norm conformity (e.g., Klucharev et al., 2009). Finally, Huebner draws on evidence showing that, in some cases, implicit attitudes are responsive to inferential processing and argument strength (Mandelbaum, 2016). This responsiveness relies upon model-based representations of alternate possibilities, competing goals, and abstract values. Implicit attitudes, then, reflect the potentially conflicting pull of these three decision-making systems. Huebner (2016, 64) summarizes: these systems could produce conflicting pulls toward everything from the positive value of norm conformity (understood as attunement to locally common patterns of behavior), to the aversive fear associated with an out-group, and the desire to produce and sustain egalitarian values, among many other situation relevant values. 
Where the outputs of these systems diverge, each will cast a vote for its preferred course of action . . . This account is compelling, particularly for its ability to accommodate a large range of otherwise seemingly conflicting data. There are two different ways to interpret Huebner’s view. One is that it represents a computational explanation of implicit attitudes as such. On this interpretation, Pavlovian, model-free, and model-based learning mechanisms cast “votes”, the aggregated outcome of which represents the content of an agent’s particular implicit attitude. Implicit attitudes are the product of the competition between these three evaluative systems, in other words. A second interpretation is that these three learning systems cast votes, the aggregated outcome of which determines an agent’s behavior. On this second interpretation, implicit attitudes represent a component of the competition between these three systems. It is possible on this second interpretation that the agent’s implicit attitude is exclusively or primarily the product of one kind of learning system, the output of which then competes with the outputs of the agent’s other learning systems. Huebner seems to have both of these interpretations in mind. His stated aim is to provide a computational account of implicit biases (51), but he also suggests that both our implicit and our explicit attitudes represent the combined influences of these three types of evaluative systems (58), and that our implicit attitudes are themselves regulated by model-based evaluations (64). My investigation into the defeasible credibility of model-free learning systems might seem wrongheaded if implicit attitudes as such are the product of competition between Pavlovian, model-free, and model-based evaluative learning systems. For why focus on model-free learning alone if implicit attitudes are the product of multiple overlapping systems?
However, if behavior is the result of the competition between these systems, and implicit attitudes represent the output of one component of this competition, then it might not be so wrongheaded to
think that we can learn about the potential moral credibility of implicit attitudes by considering model-free learning in particular. On this interpretation, on which behavior is the result of the competition between learning systems, implicit attitudes may be thought of as the output of model-free learning mechanisms in particular (though perhaps not exclusively). That is, behavior is the result of the combined influence of our reflexive reactions to biologically salient stimuli (which are paradigmatically subserved by Pavlovian mechanisms), our implicit attitudes (paradigmatically subserved by model-free mechanisms), and our explicit attitudes (paradigmatically subserved by model-based mechanisms). Now, this picture is surely too simplistic. As Huebner rightly emphasizes, these processes mutually influence each other. For example, one’s implicit attitudes are likely to be affected by one’s “desire to produce and sustain egalitarian values”, a desire which Huebner suggests is the product of model-based mechanisms.22 But to accept this mutual penetration of cognitive and affective processes is not tantamount to the view that these systems mutually constitute one’s implicit attitudes. It is one thing to say that implicit attitudes are mental states paradigmatically produced by model-free evaluative learning systems, which are in important ways influenced by other learning systems. It is another thing to say that implicit attitudes are mental states produced by the competition between these learning systems themselves. Huebner can in fact embrace both of these interpretations – that implicit attitudes are the product of these three evaluative learning systems and also that implicit attitudes are a component of the competition between these three systems – because he holds a dispositional view of attitudes. 
On this dispositional view, attitudes (in the psychological sense of likings and dislikings) are not mental states that can occur; rather, they are multi-track dispositions to behave in particular ways across varied situations.23 Since the dispositional view denies that implicit attitudes are a unified cognitive state, holding instead that they are stable dispositional traits, there is no problem in saying that implicit attitudes are both a product and a component of the competition between evaluative learning systems. In other words, there is no problem, on the dispositional view, in saying that the competition between learning mechanisms issues in both attitudes and behavior, since this view denies that there is a meaningful distinction between attitudes and behavior. As mentioned above, I have argued elsewhere for a particular conception of implicit attitudes as a relatively unified kind of mental state. I won’t focus here on the debate between mental state and dispositional views of attitudes. Rather, I consider the right way to think of the competition between learning systems given a mental state view of implicit attitudes. In short, it is hard to understand how a competition between learning mechanisms could issue in both attitudes and behavior on a mental state view. Rather, it is more parsimonious, on the assumption that implicit attitudes are mental states, to think that what the competition between learning systems helps to explain is how particular decisions and behavior are produced by the interaction of biologically attuned reflexes, implicit attitudes, and explicit attitudes. What then remains open is how best to understand biologically attuned reflexes, implicit attitudes, and explicit attitudes in terms of the learning mechanisms that subserve them. My view is that model-free learning explains more about implicit attitudes than Pavlovian or model-based mechanisms do.
Architecturally, a reasonable, albeit loose, way of thinking (as described above) is that biologically attuned reflexes are the paradigmatic causal outputs of Pavlovian mechanisms, implicit attitudes are the paradigmatic outputs of model-free learning mechanisms, and explicit attitudes are the paradigmatic outputs of model-based learning mechanisms. Implicit attitudes are certainly affected by biologically salient stimuli – for example, those that elicit aversive fear – as well as by an agent’s explicit values, but the attitude itself is the association between two attitude objects. Of course, much more would need to be said to substantiate this.
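The division of labor just sketched – Pavlovian reflexes, model-free implicit attitudes, model-based explicit attitudes, each “voting” on behavior – can be pictured with a toy aggregation scheme. Everything here (the function, the candidate actions, the scores, and the weights) is invented for illustration; it is not an implementation of Huebner’s account.

```python
# Hypothetical sketch of the three-system "voting" picture discussed above
# (after Huebner 2016). Each evaluative system scores the candidate actions;
# behavior goes to the action with the highest weighted total. The systems'
# scores and weights below are invented for illustration only.

def choose(evaluations, weights):
    """Aggregate each system's weighted "votes" and return the winning action."""
    totals = {}
    for system, action_values in evaluations.items():
        for action, value in action_values.items():
            totals[action] = totals.get(action, 0.0) + weights[system] * value
    return max(totals, key=totals.get)

# Three systems scoring two candidate actions in an elevator-style encounter.
evaluations = {
    "pavlovian":   {"keep_distance": 0.6, "greet": 0.1},  # threat-cued reflex
    "model_free":  {"keep_distance": 0.2, "greet": 0.7},  # learned social fluency
    "model_based": {"keep_distance": 0.1, "greet": 0.9},  # explicit egalitarian goal
}
weights = {"pavlovian": 1.0, "model_free": 1.0, "model_based": 1.0}
print(choose(evaluations, weights))  # prints "greet": the fluent and reflective votes outweigh the reflex
```

Boosting the Pavlovian system’s weight (as one might picture happening under threat or cognitive depletion) can flip the outcome toward "keep_distance" – one crude way to visualize how behavior can diverge from an agent’s implicit and explicit attitudes even when those attitudes agree.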
My aim, though, is to establish sufficient centrality of model-free systems in understanding implicit attitudes in order to show that implicit attitudes have a defeasible moral credibility, due to the learning mechanisms that subserve them. I now turn to give an example of how the defeasible credibility of implicit attitudes works in practice. In some situations, one and the same set of implicit attitudes seems to be both authoritative and morally disastrous. This leads to what I call the credibility question.

3.2. The gift and the curse of fear

Victims of violent assault often say, after the fact, that “something just felt wrong” about the person walking on the other side of the street or offering to help carry the groceries into their apartment. But to their great regret, they dismissed these feelings, thinking that they were just being paranoid or suspicious. In The Gift of Fear, Gavin de Becker argues that the most important thing a person can do to avoid becoming a victim of violent assault is to trust their intuition when something about a person or situation seems amiss. He writes,

A woman is waiting for an elevator, and when the doors open she sees a man inside who causes her apprehension. Since she is not usually afraid, it may be the late hour, his size, the way he looks at her, the rate of attacks in the neighborhood, an article she read a year ago – it doesn’t matter why. The point is, she gets a feeling of fear. How does she respond to nature’s strongest survival signal? She suppresses it, telling herself: ‘I’m not going to live like that; I’m not going to insult this guy by letting the door close in his face.’ When the fear doesn’t go away, she tells herself not to be so silly, and she gets into the elevator. Now, which is sillier: waiting a moment for the next elevator, or getting into a soundproofed steel chamber with a stranger she is afraid of?
(1998, 30–31)

De Becker offers trainings promising to teach people how to notice their own often very subtle feelings of fear and unease – their “Pre-Incident Indicators” – in potentially dangerous situations. These indicators, he argues, are responsive to nonverbal signals of what other people are thinking or planning. For example, we may feel unease when another’s “micro-expressions”, like a quick sideways glance, or rapid eye-blinking, or slightly downturned lips, signal their intentions, even though we might not notice these cues consciously. De Becker’s trainings have been adapted for police officers too, who also often say, after violent encounters, that they could tell that something was wrong in a situation, but they overrode those feelings because they didn’t seem justified at the time. De Becker’s Pre-Incident Indicators are good candidates for valuable social tuning devices that are produced by implicit attitudes.24 They typically emerge into an agent’s peripheral, rather than focal awareness, which is a hallmark of implicit attitudes (Gawronski et al., 2006; Brownstein & Madva, 2012b). They are also relatively automatic, in the same way in which outcomes of measures of implicit attitudes, like the Implicit Association Test (IAT; Greenwald et al., 1998), are relatively automatic. This is evident in the way in which de Becker describes one’s intuitions as often in conflict with one’s reflective judgments (as in the case in which a person feels that something is amiss but can’t find any reason to justify the feeling). Finally, these Pre-Incident Indicators seem likely to be generated by model-free learning systems. Agents who are enculturated in typical ways make and refine predictions about subtle social signaling, such as posture and eye gaze, and presumably update these predictions on the basis of discrepancies between those predictions and outcomes.
Assuming this is right, and that de Becker’s approach is indeed a valuable tool for protecting oneself, then we seem to have reason to treat our Pre-Incident Indicators as good guides for decision-making.25 But there is a problem – perhaps an obvious one – with de Becker’s advice. Consider, for example, research on “shooter bias”. In a computer simulation, participants are quickly shown images of people holding guns or harmless objects (e.g., cell phones) and must try to shoot all and only those people shown holding guns. The results are deeply unsettling. Participants are more likely to shoot an unarmed black man than an unarmed white man and are more likely to fail to shoot an armed white man than an armed black man (Correll et al., 2002). Measures of implicit bias like the IAT predict these results. People who demonstrate strong implicit racial biases (in particular, strong implicit associations between “black” and “weapons”) are more likely to make these race-based mistakes than people who demonstrate weaker or no implicit racial biases (Glaser & Knowles, 2008). Moreover, while some experimental results have been mixed, a recent meta-analysis finds that police officers fare no better on the shooter bias simulations compared to civilians in terms of unbiased performance (Mekawi & Bresin, 2015). These findings are ominous in light of continued and recent police shootings of unarmed black men in the United States. Findings like these show that the way we perceive and act upon our perceptions of microexpressions and subtle social signals is often influenced by stereotypes and prejudices that most of us disavow. Shooter bias involves acting on the basis of subtle feelings of fear that most white Americans are more likely to feel (but not necessarily notice themselves feeling) when they are in the presence of a black man compared to a white man.26 Research shows that these feelings are indeed race-based. 
For example, shooter bias is exacerbated after participants read newspaper stories about black criminals, but not after they read newspaper stories about white criminals (Correll et al., 2007). These subtle feelings of fear pervade many mundane situations, too, often in ways that only victims of prejudice notice. George Yancy (2008), for example, describes the purse-clutching, averted gazes, and general unease of some white women when they are in an elevator with a black man, such as himself. In commenting on the death of Trayvon Martin, Barack Obama made a similar point:

. . . there are very few African-American men who haven’t had the experience of walking across the street and hearing the locks click on the doors of cars. That happens to me, at least before I was a senator. There are very few African-Americans who haven’t had the experience of getting on an elevator and a woman clutching her purse nervously and holding her breath until she had a chance to get off. That happens often.27

So while one’s Pre-Incident Indicators might be justifiably set off by a potential assailant’s posture or gestures, they might also be set off by an innocent person’s skin color, hoodie, or turban. This means that it might be both true that subtle feelings and intuitions can act like social antennae, tuning us into what’s happening around us, and also true that these very same feelings can be profoundly affected by prejudice and stereotypes. Our Pre-Incident Indicators might be a valuable source of attunement to the world, in other words, but they might also be a tragic source of moral failing. This is a grave point, particularly given de Becker’s recommendations to police officers to trust their intuition about potential criminal suspects.

3.3. The difficulty of the credibility question

How, then, can we tell our morally good implicit attitudes from our morally bad ones? Are there conditions under which one’s implicit attitudes are likely to have moral credibility? This
is harder to answer than it might at first seem. There are three reasons for this difficulty: a feasibility worry, a conceptual worry, and a “relearning” worry.28 The feasibility worry is that, even if we could recognize, through deliberation, the properties of morally good implicit attitudes, in mundane, real-time social interaction we often have to rely on our spontaneous judgment. It is simply not feasible for creatures like us to rely on deliberation to evaluate our spontaneous actions and reactions most of the time. There are several reasons for this. One is that real-time interaction does not offer agents the time required to deliberate about what to say or do. This is evident when you think of a witty comeback to an insult, but only once it’s too late. Most of us have neither Oscar Wilde’s spontaneous wit nor George Costanza’s dogged willingness to fly to Ohio to deliver a desired retort. To be witty requires rapid and fluent assessments of the right thing to say in the moment. In addition to time pressure, social action inevitably relies upon implicit attitudes because explicit thinking exhausts us. We are simply not efficient enough users of cognitive resources to reflectively check our spontaneous actions and reactions all the time. When we try to do this, we become “cognitively depleted” (i.e., we become mentally tired; Baumeister et al., 1998). And when we become cognitively depleted, the quality of our social interactions is likely to suffer. People who are cognitively depleted are quicker to act aggressively, for example, and are more likely to act on the basis of implicit biases.29 This points to a final element of the feasibility worry. The minor actions and reactions subserved by our implicit attitudes promote positive and prosocial outcomes when they are fluently executed.
Hagop Sarkissian (2010, 10) describes some of the ways in which these seemingly minor gestures can have major ethical payoffs:

For example, verbal tone can sometimes outstrip verbal content in affecting how others interpret verbal expressions (Argyle et al. 1971); a slightly negative tone of voice can significantly shift how others judge the friendliness of one’s statements, even when the content of those statements are judged as polite (Laplante & Ambady 2003). In game-theoretic situations with real financial stakes, smiling can positively affect levels of trust among strangers, leading to increased cooperation (Scharlemann et al. 2001). Other subtle cues, such as winks and handshakes, can enable individuals to trust one another and coordinate their efforts to maximize payoffs while pursuing riskier strategies. (Manzini et al., 2009)

The conceptual worry is that some prosocial and even ethical actions are constitutively tied to acting spontaneously on the basis of one’s implicit attitudes. If this is the case, then we are really stuck with the credibility question, since it is not just that acting deliberatively is sometimes not feasible, but that sometimes acting deliberatively undermines things we value. Some of the phenomena Sarkissian describes fall into this category. A smile that reads as calculated or intentional – a so-called “Pan-Am” smile – is less likely than a genuine – or “Duchenne” – smile to positively affect levels of trust among strangers. Here it seems as if it is the very spontaneity of the Duchenne smile – precisely that it is not a product of explicit thinking – which we value. More broadly, research suggests that people value ethically positive actions more highly when those actions are performed spontaneously rather than deliberatively (Critcher et al., 2013). Related to this are cases in which it seems as if the only way to act well in a given situation is to act spontaneously, on the basis of implicit, rather than explicit, attitudes.
Bernard Williams’ (1981) famous example of saving one’s drowning spouse without having “one thought too many” can be interpreted this way.30 Jason D’Cruz (2013) has more directly described cases of what he calls “deliberation-volatility”. For example, one might have reasons to eat ice cream for dinner on a whim every once in a while. One of the reasons
to do this is that doing things spontaneously can be joyous. Were one to deliberate about whether to eat ice cream for dinner tonight, one would no longer be acting spontaneously. Thus one would no longer have reasons to eat ice cream for dinner on a whim. Deliberating itself is reasons-destroying, in a case like this, in which the distinctive value of the act is tied to the action being done spontaneously. Of course, eating ice cream for dinner too often is a bad policy. The risk of just acting without deliberation is that one will act irresponsibly. This illustrates the conceptual worry, writ small. The value of acting on a whim is sometimes constitutively tied to acting on a whim. The agent can either deliberate about whether to be spontaneous, and thus risk forfeiting what is valuable about being spontaneous, or the agent can just act spontaneously, and thus risk acting poorly.31 Finally, the relearning worry is that, were we so fortunate as to have morally credible implicit attitudes, we must find a way to maintain their moral status over time in a world in which they are constantly threatened by exposure to injustice. One’s implicit attitudes might come to reflect egalitarian values through practice and effort, for example, but in pretty much any society, one will constantly be bombarded with sexist and racist images, slogans, narratives, and so on that push one’s implicit attitudes back toward the status quo.32 So the practical difficulty for agents in real-time is not only knowing how to act on the basis of their morally good implicit attitudes while avoiding acting on the basis of their morally bad ones, but also recognizing when one’s formerly morally credible implicit attitudes have become compromised by living in an unjust world. The difficult task is ongoing.33
4. Moral authority

Of course, it’s not as if scholars in the 20th and 21st centuries have all of a sudden woken up and identified a heretofore unrecognized challenge of acting virtuously yet spontaneously. Railton, for instance, proposes an answer to the credibility question by drawing upon classic Aristotelian virtue ethics. He suggests that we look to the conditions that give rise to ethical exemplars:

With the help of anecdotes, supplemented by some evidence from genuine research done by others, I have made a few, tentative suggestions about when intuitive moral assessments might be expected to have greater credibility – even when they oppose one’s own considered judgment: for example, when individuals have wider and more representative experience, a better-developed ability to imagine what things would be like from the standpoints of others, a better ‘feel’ for the underlying dynamics in personal and social situations, or greater foresight in imagining alternatives. These are also, I think, characteristics of those people whose intuitive moral responses we especially value or trust. What is it about these people that gives their intuitive responses greater authority for us? Is it that they hold moral principles we share? Many people who share our principles are decidedly not individuals to whom we would turn in difficult decisions. I suspect that we seek out people who strike us as having well-developed implicit social and emotional competencies in virtue of which they are better attuned to the evaluative landscape of concerns, values, risks, and potentialities inherent in the actual, messy situations we face. These are individuals whose intuitive assessments are, by our own lights, likely to be more reasons-responsive than our own. (2014, 858)

I find this answer to the credibility question appealing. It is clearly influenced by longstanding views – in both Western and Eastern virtue ethics traditions – about how to cultivate
virtuous dispositions. I also think that it comprises a straightforwardly empirical claim.34 Do people with wider and more representative experience, better-developed ability to take the perspectives of others, better “feel” for social dynamics, and greater foresight in imagining alternatives really have more morally credible attitudes? This is a broad claim, perhaps too broad to assess just as such.35 In order to make the question more tractable, in what remains I’ll examine whether the evidence supports Railton’s claim with respect to implicit attitudes in particular. Should we expect agents’ “intuitive moral assessments” – that is, their implicit attitudes – to have greater moral credibility under these four conditions? The four characteristics Railton proposes must be “operationalized” in order to locate the relevant experimental data (where it exists). This is relatively straightforward in the case of agents who have “wider and more representative experience” and “a better-developed ability to imagine what things would be like from the standpoints of others”. In intergroup psychology, wide and representative experience is known as “social contact”, and the claim that intergroup social contact promotes moral ends is known as the “contact hypothesis”. Seeing things from the standpoint of others also has a relatively clear analogue in intergroup psychology; it is called “perspective-taking”. Researchers have examined whether, and under what conditions, both social contact and perspective-taking change and/or improve agents’ implicit biases. Things are murkier, however, when it comes to “social feel” and “imagining alternatives”.

4.1. Social contact

The contact hypothesis has a long history in the study of intergroup prejudice.
Originally proposed by Gordon Allport (1954), the core claim of the contact hypothesis is that intergroup contact promotes positive social relationships and helps to undo prejudice. The contact hypothesis is well-supported in the study of explicit prejudice.36 There is also evidence supporting the notion that intergroup contact promotes unprejudiced implicit attitudes. Higher levels of social contact with members of the LGBTQ community are associated with lower levels of implicit anti-gay bias, for example (Dasgupta & Rivera, 2008). Even regardless of one’s past experiences, mere exposure to (i.e., contact with) pictures, stories, and information about admired gay men and women appears to lower anti-gay implicit biases (Dasgupta & Rivera, 2008; Dasgupta, 2013). In the domain of implicit racial attitudes, Shook and Fazio (2008) found in a field study that random assignment to a black roommate led white college students to have more positive implicit attitudes towards black people. There are, however, seemingly necessary conditions under which social contact promotes intergroup harmony. Pettigrew and Tropp’s (2006) review of the literature on explicit intergroup attitudes suggests that, for social contact to work, members of different groups must be of relatively equal status, involved in pursuing shared goals, cooperative, and engaged in activities that are sanctioned by a mutually recognized source of authority.37 One important point is that it is not yet clear whether these same conditions are necessary for social contact to promote unprejudiced implicit attitudes. It is often not a safe assumption that the observed effects on explicit attitudes will translate to equivalent changes to implicit attitudes.38 A second important point may be obvious: intergroup contact is not always – or perhaps not often – experienced under these conditions.
Due to vast stratification in occupations, wealth, and socio-economic status, racial intergroup contact is very often experienced under conditions of inequality. In some cases, “local” status may be more salient. College roommates, for example, may perceive each other as of equal status despite coming from different socio-economic backgrounds. But not always. And when the conditions Pettigrew and Tropp identify for positive intergroup contact don’t obtain, or aren’t salient, things can backfire.39
Having wider and more representative experience, understood as having greater intergroup social contact, is a promising path toward promoting unbiased implicit attitudes. The two central open questions for future research are (1) whether the evidence for the conditions under which social contact promotes desirable explicit attitudes is equivalent to the conditions under which social contact promotes desirable implicit attitudes; and (2) whether these necessary conditions are problematically rare or relatively attainable.

4.2. Perspective-taking

Perspective-taking involves actively contemplating the psychological experiences of others. As with the contact hypothesis, there is evidence suggesting that perspective-taking leads to more unbiased attitudes. Blatt and colleagues (2010), for example, asked white physician-assistant students to contemplate the experience of black patients prior to clinical interactions, and found that patient satisfaction with these interactions increased. Todd and colleagues (2011) report that perspective-taking leads to more “approach-oriented” behavior toward black people (e.g., placing one’s chair closer to a black partner in an interracial interaction) and causes people to be less likely to deny the existence of discrimination. Todd and colleagues (2011) also demonstrate an effect of perspective-taking on implicit attitudes: diminished bias on the standard race-evaluation IAT. Follow-up studies suggest that this effect lasts, at least up to 24 hours after intervention (Todd & Burgmer, 2013). Todd and Galinsky (2014) have suggested that perspective-taking might work by creating a self–outgroup associative merger. By actively contemplating others’ psychological experiences, we strengthen the link between our self-concept and our conception of the outgroup, thereby making members of outgroups more “self-like”. This proposal helps to explain a noteworthy finding.
The positive effects of perspective-taking appear to be limited to people with positive self-evaluations. People who like themselves, in other words, are more likely to benefit from perspective-taking. Because these people have positive self-evaluations, when their self and outgroup concepts merge, the outgroup is thought to take on the positive evaluation that one holds of oneself. It is not yet clear if this proposal about a self–outgroup associative merger is correct. If it is, it suggests that perspective-taking promotes more morally credible implicit attitudes only for relatively confident people.

Other studies on perspective-taking also warrant caution. Bruneau and Saxe (2012) find that perspective-taking can have negative effects in situations of longstanding intergroup conflict, for instance. Given that many intergroup conflicts – such as relations between black and white Americans, or relations between Israelis and Palestinians – are indeed long-standing, we must be hesitant to endorse perspective-taking whole hog. My point in raising these worries isn’t to cast doubt on the importance of perspective-taking as such. As above, in the discussion of social contact, the point is that further conditions seem to be required in order for Railton’s proposal to be secure. Moreover, more research is simply needed.

4.3. Social feel

There is no extant research (to my knowledge) examining the relationship between implicit intergroup bias and having a “feel” for the underlying dynamics of social relations. “Social feel” seems to be not one particular thing, but rather a collection of skills and traits. One thought is that people who are extroverted are likely to have high levels of social feel. Unfortunately, the relationship between extroversion and implicit bias is unclear in the current
Michael Brownstein
empirical literature. Another possibility is that social feel is related to “social tuning”, or acting in such a way as to create a shared reality with another. One study by Sinclair and colleagues (2005) found that people who are more likely to exhibit social tuning are also more likely to have diminished implicit biases. But the effect was found only when the person being tuned into exhibited explicitly egalitarian attitudes. Other possibilities for getting an empirical grip on social feel include considering people with large friend networks or people who score high on social sensitivity tasks. The relationship between these skills and traits and implicit intergroup attitudes should be studied.

As with social contact and perspective-taking, I think there is reason to be cautious when endorsing the effects of social feel on implicit attitudes. There is the possibility of a double-edged sword.40 Social skill may result, in part, from a greater-than-usual ability to recognize common social attitudes and subtle social stereotypes. People with social feel, in other words, may have a feel for “picking up on” social biases. They may be more in touch with, or attuned to, these biases. And why presume that, if they are, this will result in rejecting those biases rather than endorsing them or acting upon them?41 Indeed, evidence suggests that the relative accessibility of social stereotypes – how easily they come to mind – plays a central role in the likelihood that they will affect one’s behavior (Madva, 2016). A related thought is that social feel, while tuning one into the dynamics of social relations between others, may have little to do with the ability to recognize one’s own biases. Thus those with social feel might succumb just as much as others to what is known as the “bias blind spot” (i.e., the fact that it is easier to spot others’ biases than one’s own; Pronin et al., 2002).
Perhaps social feel is a necessary but not sufficient element of having prosocial implicit attitudes. Again, more research is clearly needed.

4.4. Imagining alternatives

Finally, it is also difficult to know exactly how to test the relationship between imaginative foresight and implicit attitudes. The ability to imagine alternatives is perhaps related to fluid intelligence, but no research (of which I am aware) speaks to the relationship between fluid intelligence and implicit bias. Hofmann and colleagues (2008) report that automatic (i.e., implicit) attitudes toward temptations, such as unhealthy food, have a stronger influence on participants with high working memory capacity. But this finding doesn’t necessarily translate to either imaginative foresight or to implicit social attitudes. A related alternative is that imaginative foresight is expressed in the “need for cognition”, or the tendency to engage in and enjoy thinking (Cacioppo & Petty, 1982). A few studies have considered the relationship between need for cognition and implicit bias, but none have done so directly.42 A final possibility is that imaginative foresight is related to creativity. And at least one study suggests that inducing a creativity mindset lowers implicit stereotype activation (Sassenberg & Moskowitz, 2005).
5. Conclusion

I’ve argued that the neural and computational mechanisms underlying our implicit attitudes give us reasons to think that the spontaneous judgments these attitudes create can have moral credibility. By examining research on implicit bias, I’ve shown how the defeasible moral authority of our spontaneous judgments leads to what I’ve called the credibility question. This question focuses on identifying conditions under which our implicit attitudes ought to have authority for us. Using Peter Railton’s proposal as a launching pad, I’ve considered four plausible conditions. At present, evidence for the salutary effects of these conditions is mostly
incomplete, and is at times mixed. There is significant support for the first two of Railton’s suggestions – social contact and perspective-taking – although the role of significant mediating and moderating conditions must be examined. There is less evidence for the second two of his suggestions. Of course this does not mean that these suggestions are wrong, but rather that more research is needed. This research must also include longitudinal studies with multiple attitude and behavioral measures, in order to see how durable and “multi-track” the effects of the traits and skills presumably engendered by these conditions are. Future research must also examine in general whether what works for improving the moral credibility of our explicit attitudes also works for improving the moral credibility of our implicit attitudes. Ultimately, I am in agreement with Railton (2015, 37) when he likens the cultivation of moral implicit attitudes to skill learning:

People, we know, can acquire greater competency in a given domain when they gain more extensive and variegated experience, can make use of what they learn, and benefit from clear feedback. That is the moral of skill-learning generally, from language acquisition to playing championship bridge.

Skill learning does provide a good model for improving the credibility of our implicit attitudes. Future research must focus on the particulars of which skills we must learn and how best to acquire them.
Notes

1 Many thanks to Jason D’Cruz, Bryce Huebner, Julian Kiverstein, Victor Kumar, and Alex Madva, members of the Minorities and Philosophy chapter at SUNY-Albany, and members of the Manhattan College philosophy department for invaluable feedback on this chapter. I am also grateful to the Leverhulme Trust for funding the Implicit Bias and Philosophy workshops at the University of Sheffield, from which some of the ideas in this chapter sprang.
2 See Stanovich (1999) and Stanovich and West (2000) for review.
3 See Railton (2009, 2014, 2015); Brownstein and Madva (2012a, b); Paxton et al. (2012); Greene (2013); Seligman et al. (2013); Kumar and Campbell (2016); Kumar (ms); Martin and Cushman (2016).
4 My concern is epistemic, not moral as such. I take it for granted that some spontaneous judgments are morally good and others are morally bad, and that much of the time, our normative theories will agree about which are which. Notwithstanding particular salient (and important) cases of moral disagreement, in other words, we tend to agree that judgments that promote happiness, prosociality, cooperativeness, and so on are morally good, and that judgments that promote suffering, anti-sociality, selfishness, and so on are morally bad. My concern is epistemic in the sense that it is about knowing which of these moral ends our spontaneous judgments are going to promote when we are in the flow of action and decision-making. This is to say that my concern is about practical reason in precisely those cases of judgment and behavior when explicit reasoning does not or cannot happen.
5 In asking this question, I take up the worry left unresolved in Brownstein and Madva (2012a). Note also that I use “credible” in the broad sense of being trusted, not the narrower epistemic sense of being believable.
6 See Kishida et al. (2015) and Pezzulo et al. (2015) for evidence of information flow between model-free and model-based evaluative learning systems.
I discuss some of the upshots of integration between these learning systems in §3.1.
7 I am indebted to Fiery Cushman for this example.
8 But see Kishida et al. (2015) for evidence of fluctuations in dopamine concentration in the striatum in response to both actual and counterfactual information.
9 But see §3.1 for discussion.
10 For instance, see Greene (2013).
11 Maia and McClelland (2004) argue that participants in the Iowa Gambling Task may, in fact, have conscious knowledge of the most advantageous strategies as soon as they behave according to these strategies.
See also Dunn et al. (2006) for critique of Bechara et al. (1997). See Bechara et al. (2005) for a response; they show, for example, that anticipatory skin conductance response occurs before participants have conscious knowledge of advantageous strategies.
12 This is shown using two related experimental scenarios. In a temporal occlusion scenario, athletes are shown the first part of a scenario – for example, a pitcher winding up and releasing the pitch – but then the action is cut off (Müller et al., 2006). In a spatial occlusion scenario, athletes’ vision is obscured (Müller & Abernethy, 2006).
13 I am not advancing the strong claim that model-free learning mechanisms are sufficient for explaining how agents make good spontaneous judgments, as in the case of the Iowa Gambling Task, or perform skilled spontaneous behavior, in cases like batting in baseball. My claim is that model-free mechanisms are surprisingly explanatory. Moreover, in some cases, model-free learning can help to explain what distinguishes experts from novices. It is likely that competent but not expert baseball players have complex model-based representations of potential outcomes of potential actions, perhaps even as complex as experts’ model-based representations. What seems to distinguish experts, however, is the quality of their model-free representations of the value of particular actions.
14 The following example was originally discussed in Brownstein and Madva (2012a). See Madva (2012) for further discussion of the concept of interpersonal fluency.
15 For a clip of the event, and an “analysis” of the miscues by CNN’s Jeanne Moos, see:
16 Moreover, research suggests that poor social sensitivity is, in some cases, a result of “choking”, much as in athletics and high-stakes testing. See Knowles et al. (2015).
17 However, as Daw and colleagues (2011) find, statistical learning typically involves integration of predictions made by both model-based and model-free systems.
See discussion in §3.1 on the integration of multiple learning systems.
18 So far as I know, Railton (2014) first discussed wide competence in spontaneous decision-making in this sense.
19 See also Cushman (2013, 280).
20 Philosophers enamored of the concept of “reasons-responsiveness” are invited to understand me as saying that model-free learning helps to explain why our spontaneous judgments often seem to be responsive to reasons, but can nevertheless run afoul of our overriding moral reasoning.
21 See, for instance, Fazio (1990); Gawronski and Bodenhausen (2006); Amodio and Ratner (2011).
22 For example, one’s motivation to act in egalitarian ways “for its own sake” (rather than to appear unprejudiced in the eyes of others) strongly moderates the effects of the implicit attitude on one’s behavior and judgment (Plant & Devine, 1998). See also Glaser and Knowles (2008).
23 Huebner (2016) endorses Machery’s (2016) dispositional account of implicit attitudes.
24 In discussing model-free learning, Railton makes a similar claim, writing that “spontaneous yet apt responsiveness to reasons for belief and action has at its core the operation of implicit affective processes” (2014, 847). This is a telling remark, suggesting that the model-free learning structures I’ve been discussing are akin to the processes that generate what social psychologists call implicit attitudes. But in the same paper, Railton also speaks of implicit knowledge and understanding, implicit social and cultural competence, implicit models of social situations, and implicit attentional and motivational processes. This suggests that his use of the term “implicit” is not specifically meant to refer to what social psychologists call implicit attitudes.
25 This approach has been influential.
De Becker designed the MOSAIC Threat Assessment System, which is used by many police departments to screen threats of spousal abuse, and is also used to screen threats to members of the United States Congress, the CIA, and Federal Justices, including the Justices of the Supreme Court. See:
26 One source of evidence for the fact that shooter bias involves acting on the basis of subtle feelings of fear stems from the fact that shooter bias can be mitigated by practicing the plan, “If I see a black face, I will think ‘safe!’ ” (Stewart & Payne, 2008). But planning to think “quick!” or “accurate!” doesn’t have the same effect on shooter bias.
27
28 For original discussion of the feasibility and conceptual worries, see Tamar Gendler’s talk “Moral Psychology for People with Brains”. See also my discussion of these worries in Brownstein (2016).
29 For cognitive depletion and anger, see Stucke and Baumeister (2006), Finkel et al. (2009), and Gal and Liu (2011). For cognitive depletion and implicit bias, see Richeson and Shelton (2003) and Govorun and Payne (2006).
30 Williams’ direct target is the idea that moral principles are required to justify actions such as preferring to save one’s drowning wife before saving a drowning stranger.
31 See Brownstein (ms) for further discussion of deliberation-volatility.
32 See Huebner (2009) for elaboration of this worry. See also Huebner’s reply to Railton (2014) on the Pea Soup blog (). Also see Madva (ms) for discussion.
33 Victor Kumar (personal correspondence) suggests that people might be able to structure their environments in such a way as to facilitate good implicit judgment, perhaps thereby avoiding at least the feasibility and relearning worries. To whatever extent it is possible to restructure one’s environment in this way, I support it. I am not so sure how possible it is, however. Without becoming a hermit, it is hard to see how one could insulate oneself from being bombarded by racist and sexist stereotypes in a culture like ours. Perhaps rather than hiding from the world, then, the solution is to change it. Here the likelihood of success – of sufficiently changing the world around us so that our implicit attitudes are better – seems even slimmer (although I don’t take this as a reason not to try). See Brownstein (2016) for more discussion of this question.
34 Railton’s view actually comprises two empirical claims. One is that people with these characteristics have implicit attitudes with greater moral credibility. Another is that people with these characteristics have moral authority (i.e., they are, in fact, treated as moral exemplars). I will ignore this second empirical claim.
Instead, I will treat it as a normative upshot of the first claim. That is, I will interpret Railton to be saying that people with these four characteristics will tend to have more moral implicit attitudes, and for that reason, we ought to treat people with these characteristics as having moral authority (not to make moral commandments, of course, but to lead by example as moral exemplars).
35 Although the wave of philosophical literature on the “situationist” critique of virtue ethics can be seen as looking for empirical validation (or a lack thereof) of related broad claims about ethical exemplars. See, for instance, Doris (2002).
36 For a review of the evidence, see Pettigrew and Tropp (2006) and Pettigrew et al. (2011).
37 Pettigrew and Tropp inherit these conditions from Allport (1954). Pettigrew and colleagues (2011) report an updated view that anxiety and empathy are the major mediators of the effects of intergroup contact on social attitudes. Conditions that diminish anxiety and promote empathy appear to improve intergroup relations.
38 For discussion, see Bodenhausen and Gawronski (2014).
39 See Al Ramiah and Hewstone (2013). Calvin Lai (p.c.) also reports ironic effects of intergroup contact in unpublished data.
40 Thanks to Alex Madva for this suggestion.
41 The professional poker player Annie Duke talks about using other players’ gender biases against them in this sense, by picking up on the predictions they make about her decisions. For example, some men – almost all the other players are men – seem to expect that she will play meekly, while others seem to expect that she will go easy on them if they flirt with her. Duke can then use these expectations to her advantage. See
42 Florack and colleagues (2001) show that implicit and explicit prejudiced judgments are more likely to correlate in participants who score low in need for cognition.
Briñol and colleagues (2002) find that argument strength affects the implicit attitudes of people who score high in need for cognition, compared with people who score low in need for cognition.
References

Allport, G. (1954). The Nature of Prejudice. Reading: Addison-Wesley.
Al Ramiah, A. & Hewstone, M. (2013). Intergroup contact as a tool for reducing, resolving, and preventing intergroup conflict: Evidence, limitations, and potential. American Psychologist, 68(7), 527–542.
Amodio, D. & Ratner, K. (2011). A memory systems model of implicit social cognition. Current Directions in Psychological Science, 20(3), 143–148.
Argyle, M., Alkema, F. & Gilmour, R. (1971). The communication of friendly and hostile attitudes by verbal and non-verbal signals. European Journal of Social Psychology, 1(3), 385–402.
Baumeister, R., Bratslavsky, E., Muraven, M. & Tice, D. (1998). Ego depletion: Is the active self a limited resource? Journal of Personality and Social Psychology, 74(5), 1252.
Bechara, A., Damasio, H., Tranel, D. & Damasio, A. R. (1997). Deciding advantageously before knowing the advantageous strategy. Science, 275(5304), 1293–1295.
———. (2005). The Iowa Gambling Task and the somatic marker hypothesis: Some questions and answers. Trends in Cognitive Sciences, 9(4), 159–162.
Beilock, S. (2010). Choke: What the Secrets of the Brain Reveal about Getting It Right When You Have To. New York: Free Press.
Blair, J., Mitchell, D., Leonard, A., Budhani, S., Peschardt, K. & Newman, C. (2004). Passive avoidance learning in individuals with psychopathy: Modulation by reward but not by punishment. Personality and Individual Differences, 37(6), 1179–1192.
———. (2013). The neurobiology of psychopathic traits in youths. Nature Reviews Neuroscience, 14(11), 786–799.
Blatt, B., LeLacheur, S., Galinsky, A., Simmens, S. & Greenberg, L. (2010). Does perspective-taking increase satisfaction in medical encounters? Academic Medicine, 85, 1445–1452.
Bodenhausen, G. & Gawronski, B. (2014). Attitude change. In D. Reisberg (Ed.), The Oxford Handbook of Cognitive Psychology (pp. 957–969). New York: Oxford University Press.
Briñol, P., Horcajo, J., Becerra, A., Falces, C. & Sierra, B. (2002). Implicit attitude change. Psicothema, 14(4), 771–775.
Brownstein, M. (2014). Rationalizing flow: Agency in skilled unreflective action. Philosophical Studies, 168, 545–568.
———. (2016). Implicit bias, context, and character. In M. Brownstein and J. Saul (Eds.), Implicit Bias and Philosophy: Volume 2, Moral Responsibility, Structural Injustice, and Ethics (pp. 215–234). Oxford: Oxford University Press.
Brownstein, M. & Madva, A. (2012a). Ethical automaticity. Philosophy of the Social Sciences, 42(1), 67–97.
———. (2012b). The normativity of automaticity. Mind and Language, 27(4), 410–434.
Bruneau, E. & Saxe, R. (2012). The power of being heard: The benefits of ‘perspective-giving’ in the context of intergroup conflict. Journal of Experimental Social Psychology, 48, 855–866.
Cacioppo, J. & Petty, R. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116.
Correll, J., Park, B., Judd, C. & Wittenbrink, B. (2002). The police officer’s dilemma: Using race to disambiguate potentially threatening individuals. Journal of Personality and Social Psychology, 83, 1314–1329.
———. (2007). The influence of stereotypes on decisions to shoot. European Journal of Social Psychology, 37(6), 1102–1117.
Critcher, C., Inbar, Y. & Pizarro, D. (2013). How quick decisions illuminate moral character. Social Psychological and Personality Science, 4(3), 308–315.
Crockett, M. (2013). Models of morality. Trends in Cognitive Sciences, 17(8), 363–366.
Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17(3), 273–292.
Dasgupta, N. (2013). Implicit attitudes and beliefs adapt to situations: A decade of research on the malleability of implicit prejudice, stereotypes, and the self-concept. Advances in Experimental Social Psychology, 47, 233–279.
Dasgupta, N. & Rivera, L. (2008). When social context matters: The influence of long-term contact and short-term exposure to admired group members on implicit attitudes and behavioral intentions. Social Cognition, 26, 112–123.
Daw, N., Gershman, S., Seymour, B., Dayan, P. & Dolan, R. (2011). Model-based influences on humans’ choices and striatal prediction errors. Neuron, 69, 1204–1215.
D’Cruz, J. (2013). Volatile reasons. Australasian Journal of Philosophy, 91(1), 31–40.
De Becker, G. (1998). The Gift of Fear. New York: Dell.
Doris, J. (2002). Lack of Character: Personality and Moral Behavior. Cambridge: Cambridge University Press.
Dunn, B., Dalgleish, T. & Lawrence, A. D. (2006). The somatic marker hypothesis: A critical evaluation.
Neuroscience & Biobehavioral Reviews, 30(2), 239–271.
Fazio, R. (1990). Multiple processes by which attitudes guide behavior: The MODE model as an integrative framework. Advances in Experimental Social Psychology, 23, 75–109.
Finkel, E., DeWall, C., Slotter, E., Oaten, M. & Foshee, V. (2009). Self-regulatory failure and intimate partner violence perpetration. Journal of Personality and Social Psychology, 97(3), 483–499.
Florack, A., Scarabis, M. & Bless, H. (2001). When do associations matter? The use of automatic associations toward ethnic groups in person judgments. Journal of Experimental Social Psychology, 37(6), 518–524.
Gal, D. & Liu, W. (2011). Grapes of wrath: The angry effects of self-control. Journal of Consumer Research, 38(3), 445–458.
Gawronski, B. & Bodenhausen, B. (2006). Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132(5), 692–731.
Gawronski, B., Hofmann, W. & Wilbur, C. (2006). Are “implicit” attitudes unconscious? Consciousness and Cognition, 15, 485–499.
Glaser, J. & Knowles, E. (2008). Implicit motivation to control prejudice. Journal of Experimental Social Psychology, 44, 164–172.
Govorun, O. & Payne, B. K. (2006). Ego-depletion and prejudice: Separating automatic and controlled components. Social Cognition, 24(2), 111–136.
Greene, J. D. (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: Penguin.
Greenwald, A., McGhee, D. & Schwartz, J. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464–1480.
Hofmann, W., Gschwendner, T., Friese, M., Wiers, R. W. & Schmitt, M. (2008). Working memory capacity and self-regulatory behavior: Toward an individual differences perspective on behavior determination by automatic versus controlled processes. Journal of Personality and Social Psychology, 95(4), 962.
Huebner, B. (2009). Trouble with stereotypes for Spinozan minds.
Philosophy of the Social Sciences, 39, 63–92.
———. (2016). Implicit bias, reinforcement learning, and scaffolded moral cognition. In M. Brownstein and J. Saul (Eds.), Implicit Bias and Philosophy: Volume 1, Metaphysics and Epistemology (pp. 47–79). Oxford: Oxford University Press.
Huys, Q., Eshel, N., O’Nions, E., Sheridan, L., Dayan, P. & Roiser, J. (2012). Bonsai trees in your head: How the Pavlovian system sculpts goal-directed choices by pruning decision trees. PLoS Computational Biology, 8(3), e1002410.
Kishida, K., Saez, I., Lohrenz, T., Witcher, M., Laxton, A., Tatter, S., . . . Montague, P. R. (2015). Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1513619112.
Klucharev, V., Hytönen, K., Rijpkema, M., Smidts, A. & Fernández, G. (2009). Reinforcement learning signal predicts social conformity. Neuron, 61, 140–151.
Knowles, M., Lucas, G., Baumeister, R. & Gardner, W. (2015). Choking under social pressure: Social monitoring among the lonely. Personality and Social Psychology Bulletin, 41(6), 805–821.
Kolling, N., Behrens, T. E. J., Mars, R. B. & Rushworth, M. F. S. (2012). Neural mechanisms of foraging. Science, 336, 95–98.
Kumar, V. (Manuscript). Empirical vindication of moral luck.
Kumar, V. & Campbell, R. (2016). Honor and moral revolution. Ethical Theory and Moral Practice, 19, 147–159.
Laplante, D. & Ambady, N. (2003). On how things are said: Voice tone, voice intensity, verbal content, and perceptions of politeness. Journal of Language and Social Psychology, 22(4), 434–441.
Machery, E. (2016). DeFreuding implicit attitudes. In M. Brownstein and J. Saul (Eds.), Implicit Bias and Philosophy: Volume 1, Metaphysics and Epistemology (pp. 104–129). Oxford: Oxford University Press.
Madva, A. (2012). The Hidden Mechanisms of Prejudice: Implicit Bias and Interpersonal Fluency. PhD dissertation, Columbia University.
———.
(2016). Virtue, social knowledge, and implicit bias. In M. Brownstein and J. Saul (Eds.), Implicit Bias and Philosophy: Volume 1, Metaphysics and Epistemology (pp. 191–215). Oxford: Oxford University Press.
———. (Manuscript). Biased against de-biasing: On the role of (institutionally sponsored) self-transformation in the struggle against prejudice.
Madva, A. & Brownstein, M. (Manuscript). Stereotypes, prejudice, and the taxonomy of the implicit social mind.
Maia, T. V. & McClelland, J. L. (2004). A reexamination of the evidence for the somatic marker hypothesis: What participants really know in the Iowa Gambling Task. Proceedings of the National Academy of Sciences of the United States of America, 101(45), 16075–16080.
Mandelbaum, E. (2016). Attitude, association, and inference: On the propositional structure of implicit bias. Nous, 50(3), 629–658.
Manzini, P., Sadrieh, A. & Vriend, N. (2009). On smiles, winks and handshakes as coordination devices. The Economic Journal, 119(537), 826–854.
Martin, J. & Cushman, F. (2016). The adaptive logic of moral luck. In J. Sytsma and W. Buckwalter (Eds.), The Blackwell Companion to Experimental Philosophy (pp. 190–202). Chichester, UK: Wiley-Blackwell.
Mekawi, Y. & Bresin, K. (2015). Is the evidence from racial bias shooting task studies a smoking gun? Results from a meta-analysis. Journal of Experimental Social Psychology, 61, 120–130. doi: 10.1016/j.jesp.2015.08.002.
Michaelson, E. & Brownstein, M. (2015). Doing without believing: Intellectualism, knowledge-how and belief-attribution. Synthese. doi: 10.1007/s11229-015-0888-9.
Müller, S. & Abernethy, B. (2006). Batting with occluded vision: An in situ examination of the information pick-up and interceptive skills of high- and low-skilled cricket batsmen. Journal of Science and Medicine in Sport, 9(6), 446–458.
Müller, S., Abernethy, B. & Farrow, D. (2006). How do world-class cricket batsmen anticipate a bowler’s intention? The Quarterly Journal of Experimental Psychology, 59(12), 2162–2186.
Paxton, J., Ungar, L. & Greene, J. (2012). Reflection and reasoning in moral judgment. Cognitive Science, 36(1), 163–177.
Pettigrew, T. & Tropp, L. (2006). A meta-analytic test of intergroup contact theory. Journal of Personality and Social Psychology, 90, 751–783.
Pettigrew, T., Tropp, L., Wagner, U. & Christ, O. (2011). Recent advances in intergroup contact theory. International Journal of Intercultural Relations, 35(3), 271–280.
Pezzulo, G., Rigoli, F. & Friston, K. (2015). Active inference, homeostatic regulation and adaptive behavioural control. Progress in Neurobiology, 134, 17–35.
Plant, E. & Devine, P. (1998). Internal and external motivation to respond without prejudice. Journal of Personality and Social Psychology, 75(3), 811.
Preuschoff, K., Bossaerts, P. & Quartz, S. (2006). Neural differentiation of expected reward and risk in human subcortical structures. Neuron, 51, 381–390.
Pronin, E., Lin, D. Y. & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28(3), 369–381.
Railton, P. (2009). Practical competence and fluent agency. In D. Sobel and S. Wall (Eds.), Reasons for Action (pp. 81–115). Cambridge: Cambridge University Press.
———. (2014). The affective dog and its rational tale: Intuition and attunement. Ethics, 124(4), 813–859.
———. (2015). Dual-process models of the mind and the “Identifiable Victim Effect”. In I. Cohen, N. Daniels and N. Eyal (Eds.), Identified versus Statistical Lives: An Interdisciplinary Perspective (pp. 24–40). Oxford: Oxford University Press.
Richeson, J. & Shelton, J. (2003). When prejudice does not pay: Effects of interracial contact on executive function. Psychological Science, 14(3), 287–290.
Sarkissian, H. (2010). Minor tweaks, major payoffs: The problems and promise of situationalism in moral philosophy. Philosopher’s Imprint, 10(9), 1–15.
Sassenberg, K. & Moskowitz, G. (2005). Don’t stereotype, think different! Overcoming automatic stereotype activation by mindset priming. Journal of Experimental Social Psychology, 41(5), 506–514.
Scharlemann, J., Eckel, C., Kacelnik, A. & Wilson, R. (2001). The value of a smile: Game theory with a human face. Journal of Economic Psychology, 22, 617–640.
Seligman, M., Railton, P., Baumeister, R. & Sripada, C. (2013). Navigating into the future or driven by the past. Perspectives on Psychological Science, 8(2), 119–141.
Shook, N. & Fazio, R.
(2008). Interracial roommate relationships: An experimental field test of the contact hypothesis. Psychological Science, 19(7), 717–723.
Sinclair, S., Lowery, B. S., Hardin, C. D. & Colangelo, A. (2005). Social tuning of automatic racial attitudes: The role of affiliative motivation. Journal of Personality and Social Psychology, 89(4), 583.
Stanovich, K. (1999). Who Is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Psychology Press.
Stanovich, K. & West, R. (2000). Advancing the rationality debate. Behavioral and Brain Sciences, 23(5), 701–717.
Stewart, B. & Payne, B. (2008). Bringing automatic stereotyping under control: Implementation intentions as efficient means of thought control. Personality and Social Psychology Bulletin, 34, 1332–1345.
Stucke, T. & Baumeister, R. (2006). Ego depletion and aggressive behavior: Is the inhibition of aggression a limited resource? European Journal of Social Psychology, 36(1), 1–13.
Sutton, R. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3(1), 9–44.
Todd, A., Bodenhausen, G., Richeson, J. & Galinsky, A. (2011). Perspective taking combats automatic expressions of racial bias. Journal of Personality and Social Psychology, 100(6), 1027.
Todd, A. & Burgmer, P. (2013). Perspective taking and automatic intergroup evaluation change: Testing an associative self-anchoring account. Journal of Personality and Social Psychology, 104(5), 786.
Todd, A. & Galinsky, A. (2014). Perspective-taking as a strategy for improving intergroup relations: Evidence, mechanisms, and qualifications. Social and Personality Psychology Compass, 8(7), 374–387.
Williams, B. (1981). Moral Luck: Philosophical Papers 1973–1980. Cambridge: Cambridge University Press.
Yancy, G. (2008). Black Bodies, White Gazes: The Continuing Significance of Race. Lanham, MD: Rowman & Littlefield.
Yarrow, K., Brown, P. & Krakauer, J. (2009). Inside the brain of an elite athlete: The neural processes that support high achievement in sports. Nature Reviews: Neuroscience, 10, 585–596.
319
18 SOCIAL MOTIVATION IN COMPUTATIONAL NEUROSCIENCE (Or, if brains are prediction machines, then the Humean theory of motivation is false) Matteo Colombo
1. Introduction

Scientific and ordinary understanding of human social behaviour assumes that the Humean theory of motivation is true. The Humean theory of motivation is committed to two claims. Desires and beliefs are ‘distinct existences’ – they are distinct mental states, where the existence of one does not imply the existence of the other. And being motivated to act is never merely a matter of having certain beliefs, but also always requires having some desire. In Hume’s words: “reason alone can never be a motive to any action of the will. . . . Reason is, and ought only to be, the slave of the passions, and can never pretend to any other office than to serve and obey them” (Hume 1978, 413, 415). Much work directed at understanding the social mind in disciplines like behavioural economics, social psychology, and social neuroscience appears to be consistent with Humeanism.1 But an ambitious and increasingly popular neurocomputational model of brain and mind called ‘predictive processing’ (Clark 2013a; Clark 2015a; Hohwy 2013; Friston 2010; Seth 2013) seems to be at odds with the Humean theory. For this model threatens to introduce alien substitutes for the everyday notion of desire, in which we conduct our moral and social lives. In the present chapter, I begin to examine the relationship between the predictive processing model and the Humean theory of motivation in the context of social norm compliance. I would like to suggest that if the predictive processing model is correct, then the Humean theory is false, and we’d better begin scouting alternative ways of conceiving of social mind and action. Along the way, I elucidate the Humean theory, and I relate it to social norms and to different modelling approaches in computational neuroscience. Neurocomputational modelling should be of special interest for researchers of social norms because of its unifying power. Neurocomputational approaches allow the integration of data and concepts from the psychology and
neuroscience of motivation with methods and concepts from the social sciences, as well as from machine learning and computer science. In the last few years, this blend of ideas and methods has advanced understanding of some of the basic computational features of social norm compliance, promoting integration across several disciplines (Wolpert et al. 2003; Behrens et al. 2009; Colombo 2014b).2 The chapter is in four sections. Section 2 clarifies the commitments of the Humean theory of motivation. Focusing on social norm compliance, Section 3 reviews two types of modelling approaches in computational neuroscience: Reinforcement Learning and Bayesian decision theory. Empirical results within these modelling approaches provide support to the Humean theory in the social domain. Section 4 makes the most significant contribution. It introduces the predictive processing theory, and shows that it is inconsistent with the Humean theory of motivation. Section 5 explores how our understanding of social motivation and social norm compliance may change, if the Humean theory of motivation is false and brains are probabilistic prediction machines all the way down.
2. The Humean theory of motivation

According to the Humean theory of motivation, belief and desire are distinct kinds of mental states, and belief alone has no motivational force. Believing that the world is thus and so is insufficient for being moved to act. Some desire is always necessary for action, and no mental states other than a desire and an instrumental belief are necessary for acting and for rationalizing action (Smith 1994; for a historically informed discussion of Hume’s theory and its impact on moral psychology, see Radcliffe 2008).3 A corollary to this claim is that conative states can be changed by the output of some cognitive process only if a conative state features somewhere in this process: “desires can be changed as the conclusion of reasoning only if a desire is among the premises of the reasoning” (Sinhababu 2009, 465). For practical reasoning to have any motivational force, some desire must feature among the premises of the reasoning. If there were no conative state featuring somewhere in the reasoning chain, the conclusion of the reasoning could never affect any of our conative states and could never move us to action. In order to elucidate the commitments of the Humean theory of motivation, we should clarify the notions of motivation and desire, and provide some criterion to distinguish desire from belief. A motivation is any disposition to initiate, direct, or maintain behaviour. Social motivations are dispositions to interact with others or in ways that are relevant to others. If you have a motivation to leave a tip at a restaurant after a dinner, then you have a disposition to initiate a certain action directed at certain people in a certain context. The motivation to tip at a restaurant is an example of social motivation. If this motivation is not opposed, it will cause you to leave a tip at a restaurant after dinner.
If this motivation is opposed by other motivations that dispose you to act differently, or is neutralized by external circumstances, then you may act differently and not leave a tip. Whether you tip or do not tip will impact your waiter, the patron of the restaurant, and probably other customers nearby. The causal power of a motivation to control action depends on the strength of the motivation and on its relation to other mental states. You will be moved to act one way or another by your motivations that have greater aggregate strength, and that play an appropriate role in the causal network of mental states leading you to act. If you have a stronger motivation towards leaving a tip at a restaurant after dinner, or if this motivation is appropriately connected to other mental states that lead you to tipping, then you will very probably tip.
Desire is a species of motivation; but not all motivations are desires. Other species of motivation include intentions, interests, appetites, cravings, urges, emotions, and perhaps mental states like the belief that an object or a state of affairs is good or the judgement that a certain action is morally right. If desire is a species of motivation, then to have a desire entails having a disposition to act. But to have a desire typically involves more than just being disposed to act in a certain way. It also involves feeling a certain way and thinking in a certain way. If you desire to leave a tip for your waiter after dinner, then you may feel good when you tip him, you may think that tipping is right, or you may have your attention drawn towards information that bears on tipping. Some desires are intrinsic desires. They are desires that agents have for things for their own sake. Other desires are instrumental. They are desires that agents have for things as a means to achieve another end. For example, people may comply with some social norms for their own sake. They may comply with some social norms “even when there is little prospect for instrumental gain, future reciprocation or enhanced reputation, and when the chance of being detected for failing to comply with the norm is very small” (Sripada & Stich 2007, 285). In these cases, people have an intrinsic desire: they desire to comply with a social norm as an ultimate end, rather than as a means to other ends. Examples include complying with a norm of fairness in one-shot, anonymous dictator games,4 returning a lost wallet containing a good amount of cash, or punishing norm violators at substantial costs to oneself. Social desires are desires whose possession or realization is relevant to other agents. In behavioural economics, social preference is a related notion that has been extensively investigated.
Agents with social preferences do not care exclusively about their own material outcomes; they also care about the material outcomes, well-being, beliefs, and preferences of other agents. A large body of empirical evidence has shown that people and some other mammals display social preferences in a great many contexts (Fehr 2009). There are several theories of desire (Schroeder 2014, Sec. 1). One of the most influential is the reward theory of desire (Dretske 1988, Ch. 5; Schroeder 2004; Schroeder & Arpaly 2014). According to this account, to desire that something is the case is to use representations of that something to drive reward-based learning. Desire satisfaction would bring about rewards, while desire frustration would bring about punishments. Within this account, ‘reward’ and ‘punishment’ should be understood as reinforcement learning signals, where rewards strengthen the causal relations between certain mental states and actions, and punishments weaken the causal relations between certain mental states and actions. The dopaminergic system in the midbrain would realize reward-based learning, and its activity would be the common cause of various phenomena traditionally associated with desire, such as action, pleasure, and some aspects of attention and thought. Drawing on the computational framework of Reinforcement Learning, the reward theory of desire is consistent with the Humean ideas that desire differs from belief, and that there is a unique causal connection between desire, motivation, and action. Other accounts of desire are inconsistent with the Humean theory of motivation. For example, according to desire-as-belief theories of desire, agents desire something to the extent that they believe it to be good. Accordingly, to the extent you believe it good to comply with a social norm of tipping, you will be motivated to comply with the norm (e.g., Smith 1987; Pettit 1987; Lewis 1988; Broome 1991).
This view is inconsistent with the Humean theory because it claims that, for at least some belief, to have a belief is just to have a desire, while the Humean theory holds that belief and desire are ‘distinct existences’, and there is no belief such that to have that belief is to have a desire. One feature that is commonly used to distinguish desire from belief is their different direction of fit. Desire is said to have a direction of fit, and this direction of fit is the opposite to the
direction of fit of belief (Anscombe 1957; Humberstone 1992). Beliefs are supposed to fit the world. The world is supposed to fit desires. Beliefs aim at truth, whereas desires aim at realization. Beliefs aim at informing us of how things are in the world, whereas desires aim at informing us of how things ought to be in the world. A belief is true if and only if the world is as the belief represents it to be. A desire is realized if and only if the world changes in conformity with the desire. Beliefs must be responsive to evidence that bears on their truth or falsity; they are subject to norms of confirmation and updating. Desires need not be responsive to evidence in the same way as beliefs. If your evidence indicates that the world is not currently as you believe it to be, then you are rationally required to revise your beliefs accordingly. If your evidence indicates that the world is not currently as you desire it to be, then you are not rationally required to change any of your desires. In summary, for a mental state to count as a desire, it should possess a world-to-mind direction of fit. The world ought to be changed to fit its content. For a mental state to count as a belief, it should have a mind-to-world direction of fit. Its content ought to match the world. With this distinction in place, we can formulate the central commitments of the Humean theory as follows:

• People possess different kinds of mental states with different directions of fit.
• States and processes with a mind-to-world direction of fit have no motivational force on their own.
• States and processes with a world-to-mind direction of fit are always necessary for action.
I now turn to review two modelling approaches to social norms in computational neuroscience, and show that empirical work within these frameworks supports the Humean theory of motivation.
3. Two neurocomputational frameworks for social norms

Social norms constitute a grammar of society (Bicchieri 2006). In nearly all social situations, people’s actions are influenced by some social norm.5 Like a grammar, social norms consist of rules governing social behaviour that need not be written, or legally enforced. Courting and mating, food sharing and dressing, worshipping and mourning, but also tipping, queueing, playing, and taking vengeance are all paramount cases where some social norm governs people’s social behaviour. Like a grammar, social norms specify what is appropriate and what is not appropriate, setting normative boundaries in the social environment. Like a grammar, we often follow social norms without realizing it, or without considering alternative lines of action. In these cases, norm compliance has become second nature to us, helping us to navigate the social environment smoothly and effortlessly. And just like a grammar, social norms are generally not the product of human intentional design. Social norms can emerge (and fade) rapidly and without planning. Many facts are known about social norms. Social norms are partly constituted by shared expectations about a certain type of behaviour in a certain context (Bicchieri 2006; Pettit 1990; Sugden 1986). The capacities to learn and follow social norms are developmentally robust and emerge early on in life (Rakoczy & Schmidt 2013). All human societies have social norms, and some non-human animals possess social norms too (Sober & Wilson 1999; Henrich et al. 2004; Colombo 2013). This does not entail, however, that the content of social norms is invariant across time and space, or that the capacities to learn and to follow norms are underlain by one single specialized mechanism (Casebeer & Churchland 2003).
Rewards, punishments, and emotional sanctions play major roles in social norm compliance. People typically feel anger, contempt, or blame towards norm violators. Norm violators may feel shame and have a desire to hide. Behavioural reactions to norm violators include avoidance, ostracism, gossip, verbal abuse, and physical harm (Andreoni et al. 2003; Clutton-Brock & Parker 1995). Nonetheless, norm violations can have political, cultural, and expressive functions. Sometimes people violate social norms in order to express their attitudes, to raise consciousness, or to create a public debate (Sunstein 1996). In these cases, norm violators are willing to incur sanctions in order to change the society in which they live, making salient alternative lines of action to which norm compliers may be blind (Hlobil 2015). Philosophers, social psychologists, anthropologists, and economists have offered different theories of social norms (Bicchieri 2006; Binmore 1994; Elster 1989; Gintis 2010; Lewis 1969; Pettit 1990; Sripada & Stich 2007; Sugden 1986; Ullmann-Margalit 1977). Most of these treatments consist in what Bicchieri calls “rational reconstructions”, where a rational reconstruction “specifies in which sense one may say that norms are rational, or compliance with a norm is rational” (2006, 10–11). Some other accounts provide us with ‘boxological’ models aimed at describing a set of functionally individuated components (‘black boxes’) of the mechanisms of norm compliance (Sripada & Stich 2007). While rational reconstructions and boxological models of social norms are valuable for different purposes, neurocomputational approaches have more unificatory power, and they bring more precision to the table when it comes to understanding what goes on in people’s heads when they comply with norms (Colombo 2014b).
Two neurocomputational approaches that have been helping researchers to draw precise and unified accounts of social norm compliance are Reinforcement Learning and Bayesian decision theory. Both approaches distinguish between conative states and cognitive states, where values and probabilities are computed and represented independently. Both approaches assume that conative states (or value signals) are always necessary for action. Support for the commitments of the Humean theory requires empirical evidence that signals in the brain transmitting information about rewards and value are distinct from signals in the brain transmitting information about the probability that a certain state obtains in the environment. It also requires evidence that signals in the brain transmitting information about rewards and value are always causally involved in the production of action. The good fit demonstrated by Reinforcement Learning and Bayesian models with an impressive body of behavioural and neural data concerning social norm compliance provides this evidence (Glimcher et al. 2009).

3.1. Reinforcement Learning and social norms

Reinforcement Learning (RL) offers models of learning and decision-making in the face of uncertainty and rewards (Sutton & Barto 1998). The type of problem that RL addresses is defined by four basic ingredients. First, a set of states S = {s1, . . . , sn}, where each state is one configuration of the environment. Second, a set of actions A = {a1, . . . , an}, which the agent can execute in the environment. Actions can influence the next state of the environment and have different costs and payoffs. Third, a function that governs the transitions between states T: S × A × S → [0, 1]. Given the current state st and an action at executed by the agent, T(st, at, st+1) specifies the probability P(st+1 | st, at) of moving to state st+1.
Fourth, a reward function R: S × A → ℝ, which determines the reward r (or payoff) obtained by the agent for executing a certain action in the current state. Specifically, the reward function determines
the immediate costs (negative rewards or punishments) and payoffs (positive rewards) incurred by performing different actions in the environment. Obtaining a positive reward generates a signal that increases the probability and intensity of the actions that brought about the reward. Instead, obtaining a negative reward (or punishment) decreases the probability and intensity of the actions that brought about the negative reward. Rewards generate approach and consummatory behaviour; punishments generate avoidance. RL agents’ behaviour is captured by a policy function π, which defines a mapping from perceived states to actions to be taken when in those states, on the basis of the value of an action in a state. Agents should learn a policy function that makes it most likely that they maximize the total amount of reward they receive in the long run starting from a certain state. RL modelling has advanced understanding of the decision-making processes that animals and humans employ when they select actions in the face of reward and punishment (Niv 2009). Recent work on social norm compliance makes this role particularly salient, and illustrates that results within RL support the Humean theory.

Gu and colleagues (2015) used an RL model to investigate the neurocomputational mechanisms of the capacities to detect norm violations and to change social behaviour accordingly. Their hypothesis was that the insula and the ventromedial prefrontal cortex (vmPFC) play causally necessary, but dissociable, roles in humans’ capacities to represent and to adapt to changes in social norms. The focus on these two neural circuits was motivated by previous work indicating that norm compliance recruits the ventral and dorsal regions of the prefrontal cortex, as well as the cingulate and insular cortex (Anderson et al. 1999; Sanfey 2007; Spitzer et al. 2007).
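Before turning to the details of this study, the four RL ingredients above can be assembled into a toy simulation. Everything in the sketch below is invented for illustration – the two-state environment, the reward values, the transition probabilities, and the learning parameters – and the tabular temporal-difference learner it uses is one standard way, not the only way, of computing a policy π:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# The four RL ingredients: states S, actions A, transition function T,
# and reward function R. All values are invented for this example.
S = ["s1", "s2"]
A = ["a1", "a2"]
# T[(s, a)] lists (next_state, probability); R[(s, a)] is the immediate reward.
T = {(s, a): [("s1", 0.5), ("s2", 0.5)] for s in S for a in A}
R = {("s1", "a1"): 1.0, ("s1", "a2"): 0.0,
     ("s2", "a1"): 0.0, ("s2", "a2"): 1.0}

def step(s, a):
    """Sample the next state from T(s, a, .) and return it with the reward."""
    next_states, probs = zip(*T[(s, a)])
    return random.choices(next_states, weights=probs)[0], R[(s, a)]

def learn(steps=3000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning: estimate long-run action values from experience."""
    Q = {(s, a): 0.0 for s in S for a in A}
    s = "s1"
    for _ in range(steps):
        # epsilon-greedy behaviour: mostly exploit, occasionally explore
        a = random.choice(A) if random.random() < eps else max(A, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        # temporal-difference update towards reward + discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in A) - Q[(s, a)])
        s = s2
    return Q

Q = learn()
# The learned policy pi maps each state to its highest-valued action.
policy = {s: max(A, key=lambda a: Q[(s, a)]) for s in S}
```

After enough experience, the learner's policy selects the rewarded action in each state, illustrating how a policy can be acquired purely from reward signals and state transitions.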
Gu and colleagues examined the behaviour of four different groups of experimental participants playing the role of responder in an ultimatum game.6 One group of participants had a focal lesion in the insula; another had vmPFC lesions; a third group included patients with lesions other than vmPFC and insula; and a control group presented no brain damage. All participants played several rounds of the ultimatum game. In each round they were offered a split of 20 Chinese yuan. The offers were drawn from a normal distribution, and presented in a randomized order. Some offers were fair (e.g., 10 yuan) and others unfair (e.g., 1 yuan). For each offer, participants could either reject or accept it. Participants with vmPFC lesions were less sensitive to fairness norms in comparison to the other groups of participants. Specifically, vmPFC patients were more likely to accept unfair offers than the other participants. To characterize quantitatively the nature of these effects, Gu et al. (2015) used a Rescorla–Wagner RL model (Rescorla & Wagner 1972). The model distinguished between cognitive and conative signals. In particular, it assumed that participants had an internally represented social norm of fairness, and that they could detect norm violations and update the norm representation as follows:

fi = fi-1 + ε (si – fi-1)   [1]

where fi was the social norm of fairness at time step i; si was the offer received at time step i; and ε was the norm adaptation rate, which determined the extent to which the social norm could change based on the immediately preceding offer. The utility Vi(si) of an offer for a participant at a time i was modelled as:

Vi(si) = si – α max {fi – si, 0}   [2]
where α determined the extent to which an agent was averse to offers that deviated from the social norm. Based on Vi(si), the probability of accepting an offer was modelled according to a softmax function (Sutton & Barto 1998, Ch. 2.3):

Pi(si) = e^(γVi(si)) / (1 + e^(γVi(si)))   [3]
where γ is the inverse temperature parameter, which controls the stochasticity of the policy π. With γ = 0, agents’ decisions about whether or not to accept an offer were totally random; higher settings of γ resulted in reliable acceptance of offers associated with the largest expected value. Fitting their model to each participant’s choices, Gu and colleagues found that patients with a lesion in the insula had a lower adaptation rate ε and a higher sensitivity α to unequal offers than other participants. These results indicate that both vmPFC and insula play causally necessary but distinct roles in social norm compliance. Specifically, the insula would play a causally necessary role in learning to adapt to novel social environments, while the vmPFC would play a necessary role for valuing fairness. Given the good fit of their model with participants’ behavioural and neural data, Gu et al.’s (2015) study shows that social norm compliance might be produced by RL mechanisms, which provides support to the Humean theory of motivation. Consistent with the Humean theory, RL assumes a neat separation between representations of the value of a state and representations of its probability, viz.: a separation between value representations and state representations. These representations play different functional roles in social motivation, and have different directions of fit. Value representations correspond to conative states and are always necessary for action. State representations are representations of the probability of a state, and correspond to cognitive states. In Gu et al.’s (2015) study, value representations concerned the positive (or negative) charge of different payoff distributions in the ultimatum game. Participants’ value representations were captured by the value function Vi(si) in [2], which described what the ‘good’ and ‘bad’ offers were for a participant in the long run.
[2] also described how the ‘goodness’ of an offer for a participant would change as a function of her previous experience and her attitudes towards unfair offers. The good fit shown by their model indicated that value representations captured by equation [2] always featured in participants’ action-selection processes towards realizing high-valued states. The functional role of value representations corresponded to the role of motivating reasons. Value representations caused participants to comply with a social norm, providing a rationalizing explanation of what participants did. Since RL agents’ objective is to maximize the total reward they receive in the long run, the value function Vi(si) picked out participants’ goal. If participants had a goal in the task they were facing, then, in making their decisions, they were motivated by a state of mind with the direction of fit of a desire. Different decisions in different states had to make the social situation at hand fit participants’ goals. Distinct from value representations were representations of a social norm of fairness fi, and representations of offers si. These were state representations that jointly defined different configurations of the social situation faced by the experimental participants. Representations of norms fi depicted that a certain social norm was in place with a certain probability. Representations of norms changed as a function of observed offers si. Since the offers were explicitly revealed to participants, there was no need to infer si from some other observation, or to assign a probability to si.
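Equations [1]–[3] can be put to work in a few lines of code. The sketch below is illustrative only: the parameter values (an initial norm f0, an adaptation rate eps standing for ε, an inequity-aversion weight alpha, and an inverse temperature gamma) and the offer sequence are invented for the example, not the values Gu et al. (2015) fitted to their participants.

```python
import math

def responder(offers, f0=10.0, eps=0.2, alpha=0.8, gamma=0.5):
    """Return, for each ultimatum-game offer, the current fairness norm f,
    the utility V of the offer, and the softmax probability of accepting it.
    All parameter values are invented for illustration."""
    f, trace = f0, []
    for s in offers:
        v = s - alpha * max(f - s, 0.0)                              # equation [2]
        p_accept = math.exp(gamma * v) / (1 + math.exp(gamma * v))   # equation [3]
        trace.append((f, v, p_accept))
        f = f + eps * (s - f)                                        # equation [1]
    return trace

# Repeated unfair offers drag the internal norm downwards, so an identical
# low offer becomes less aversive (higher utility, higher acceptance
# probability) as the norm adapts.
trace = responder([10, 10, 2, 2, 2])
```

Running this on a sequence of two fair offers followed by three unfair ones shows the norm-adaptation dynamic the model is meant to capture: each successive unfair offer deviates less from the (now lowered) norm, and so carries higher utility.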
In order to adapt to the social situation, participants had to keep track of the discrepancy between the fair offer fi and the offer si they received, and to update their representations of what counted as a fair offer on the basis of this discrepancy. If the evidence provided by si indicated that the social norm fi was not what participants believed, then they were rationally required to revise their norm representation accordingly. The norm representations fi of the experimental participants were supposed to fit the changing distribution of offers they received. So, participants’ representations of social norms of fairness corresponded to states of mind with the direction of fit of a belief.

3.2. Bayesian decision theory and social norms

Bayesian inference is a type of statistical inference, where data are used to update the probability that a hypothesis is true. Probabilities are used to represent degrees of belief in different hypotheses. At the core of Bayesian inference is a rule of conditionalization, which prescribes how to revise degrees of belief in different hypotheses in response to new data. Consider an agent who is trying to infer the process that generated some data, d. Let H be a set of (exhaustive and mutually exclusive) hypotheses about this process (i.e., the hypothesis space). For each hypothesis h ∈ H, P(h) is the probability that the agent assigns to h being the true generating process, prior to observing the data d. P(h) is known as the ‘prior’ probability. The Bayesian rule of conditionalization prescribes that, after observing data d, the agent should update P(h) by replacing it with P(h | d) (known as the ‘posterior probability’). To execute the rule of conditionalization, the agent multiplies the prior P(h) by the likelihood P(d | h) as stated by Bayes’ theorem:7

P(h | d) = P(d | h) P(h) / Σh′∈H P(d | h′) P(h′)   [4]
where P(d | h) is the probability of observing d if h were true (known as ‘likelihood’), and the sum in the denominator ensures that the resulting probabilities sum to one. According to [4], the posterior probability of h is directly proportional to the product of its prior probability and likelihood, relative to the sum of such products for all alternative hypotheses in the hypothesis space H. The rule of conditionalization prescribes that the agent should adopt the posterior P(h | d) as a revised probability assignment for h: the new probability of h should be proportional to its prior probability multiplied by its likelihood. Bayesian conditionalization alone does not specify how an agent’s beliefs should be used to generate a decision or an action. How to use the posterior distribution to generate an action is described by Bayesian decision theory (BDT), and requires the definition of a loss (or utility) function L(A, H). For each action a ∈ A – where A is the space of possible actions or decisions available to the agent – the loss function specifies the relative cost of taking action a for each possible h ∈ H. To choose the best action, the agent calculates the expected loss for each a, which is the loss averaged across the possible h, weighted by the degree of belief in h. The action with the minimum expected loss is the best action that the agent can take given her beliefs. In the last 20 years or so, BDT has been playing a tremendous role in advancing understanding of the computational processes underlying a wide variety of cognitive phenomena including social norm compliance (Tenenbaum et al. 2011; on Bayesian unification in cognitive science, see Colombo & Hartmann 2015).
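The conditionalization step in [4], followed by BDT's expected-loss rule, can be sketched as follows. The two-hypothesis space, the likelihood values, and the loss table are invented purely for illustration:

```python
def conditionalize(prior, likelihood, d):
    """Return P(h | d) over a discrete hypothesis space, as in equation [4]."""
    unnormalized = {h: likelihood[h][d] * prior[h] for h in prior}
    total = sum(unnormalized.values())        # the denominator of [4]
    return {h: p / total for h, p in unnormalized.items()}

def best_action(posterior, loss, actions):
    """Pick the action with minimum expected loss under the posterior."""
    expected = {a: sum(posterior[h] * loss[(a, h)] for h in posterior)
                for a in actions}
    return min(expected, key=expected.get)

# An invented example: two hypotheses, one datum, two candidate actions.
prior = {"h1": 0.5, "h2": 0.5}
likelihood = {"h1": {"d": 0.9}, "h2": {"d": 0.1}}
posterior = conditionalize(prior, likelihood, "d")    # P(h1 | d) = 0.9
loss = {("a1", "h1"): 0.0, ("a1", "h2"): 1.0,
        ("a2", "h1"): 1.0, ("a2", "h2"): 0.0}
choice = best_action(posterior, loss, ["a1", "a2"])
```

Since the datum is much more likely under h1, the posterior concentrates on h1, and the action that is cheap if h1 is true minimizes expected loss.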
Xiang, Lohrenz, and Montague (2013) used a Bayesian model to investigate the neurocomputational mechanisms of social norm compliance. The hypothesis under test was that activity in the anterior insula and in the ventral striatum supports the capacities to detect social norm violations, and to change behaviour accordingly. Xiang and colleagues examined the behaviour of healthy experimental participants playing the role of responder in the ultimatum game while undergoing fMRI scanning. All participants played several rounds of the ultimatum game; in each round they were offered a split of 20 American dollars. The offers were drawn from three different normal distributions with a Low (4 dollars), Medium (8 dollars), and High (12 dollars) mean, and the same standard deviation. To elicit different expectations of fairness, Xiang and colleagues divided their participants into four training groups. Group High-Medium received offers drawn from the distribution with High mean in the first half of their experimental session, and offers from the distribution with Medium mean in the second half. Group Low-Medium received low offers first, and then medium ones. Conversely, group Medium-High and group Medium-Low received medium offers in the beginning, and then high or low offers, respectively. Participants in different groups exhibited a different pattern of rejection rates for otherwise identical offers. Specifically, in the second half of the experimental session, group Low-Medium rejected medium offers less frequently than group High-Medium. Changes in expectations of fairness led to changes in rejection rates, which correlated with changes in activity in the striatum and anterior insula.
Participants were modelled as Bayesian agents, whose beliefs about the distribution of offers u corresponded to a Gaussian probability distribution P(u) with variance σ2 and mean μ. When participants observed an offer xt at time t, Bayesian conditionalization was carried out to compute a posterior:

P(ut | xt) = P(xt | ut) P(ut-1) / P(xt)   [5]
The mean μt corresponded to the expected offer at time t. The expected offer corresponded to the social norm of fairness regulating behaviour in the game. A utility function U(xt) was defined that quantified the value associated with accepting an offer. According to this function, the value of an offer depended not only on the amount of money offered, but also on the extent to which the offer deviated from the social norm. Utilities were used by a softmax action-selection mechanism to determine choices. In order to quantify changes in belief about fairness, Xiang and colleagues computed two parameters of their model. Deviations between the expected offer and the offer xt actually received consisted in a ‘norm prediction error’ parameter:

δt = xt – μt-1   [6]

A ‘variance prediction error’ parameter captured errors in predictions about the variance σ2 of the distribution P(ut). These prediction errors were used in the imaging analysis in order to uncover the neural signals involved in social norm violations. Norm prediction errors were positively correlated with activity in the ventral striatum and vmPFC, while variance prediction errors correlated with activity in the insula and anterior cingulate cortex. Xiang and colleagues’ (2013) Bayesian model showed a good fit with behavioural and neural data. This provides evidence for the existence of two distinct signals in the brain that transmit
328
Social motivation
information about social rewards, and about the degree of uncertainty that the social environment is in a certain state. As in Gu et al.’s (2015) study, Xiang and colleagues (2013) distinguished between norm representations, captured by the mean of the distribution P(ut), and value representations, captured by U(xt). Representations of norms consisted of prior beliefs about offers in the game, which would get updated as a function of the observed offers xt via Bayesian conditionalization, as per [5]. Representations of social norms were sensitive to available evidence, and were supposed to fit the changing distribution of offers so that participants had true beliefs about the social situation they were facing. Posterior distributions were computed independently of expected utilities. Value (or utility) representations picked out what the ‘good’ or ‘bad’ offers were for a participant. The values of different offers defined participants’ goal in the task, viz. maximizing expected utility. Participants’ representations of value always featured in their decision-making processes, motivating them to realize states with the highest expected utility. In summary, Gu et al.’s (2015) and Xiang et al.’s (2013) studies, along with an impressive body of literature in computational neuroscience (Glimcher et al. 2009), demonstrate that empirical results within the frameworks of RL and BDT support the tenets of the Humean theory of motivation. In particular, these results indicate that social motivation is causally dependent on distinct cognitive and conative states and processes, and that conative states are always involved in social norm compliance.
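The norm-learning scheme just summarized can be made concrete in a short sketch. The following is an illustrative simplification, not Xiang and colleagues’ (2013) actual model: the learning rate alpha, the deviation penalty k (here applied only to offers below the norm), the softmax temperature, and the training histories are hypothetical values chosen for the demonstration.

```python
import numpy as np

# Illustrative sketch of a norm-learning responder in the ultimatum game.
# NOT Xiang et al.'s (2013) exact model: alpha, k, temp, and the offer
# histories below are hypothetical values chosen for this demo.

def update_norm(mu, offer, alpha=0.3):
    """Move the expected offer (the fairness norm) towards the observed
    offer, using the norm prediction error delta_t = x_t - mu_{t-1} ([6])."""
    delta = offer - mu
    return mu + alpha * delta

def utility(offer, mu, k=0.5):
    """Value of accepting an offer: the money received, minus a penalty for
    how far the offer falls below the learned norm (one simple choice)."""
    return offer - k * max(mu - offer, 0.0)

def p_accept(offer, mu, temp=1.0):
    """Softmax choice between accepting (utility U) and rejecting (utility 0)."""
    u = np.array([utility(offer, mu), 0.0])
    p = np.exp(u / temp)
    return p[0] / p.sum()

# Two training histories, as in the High-Medium and Low-Medium groups.
mu_hm = mu_lm = 8.0
for offer in [12, 12, 12]:
    mu_hm = update_norm(mu_hm, offer)   # norm drifts up towards 12
for offer in [4, 4, 4]:
    mu_lm = update_norm(mu_lm, offer)   # norm drifts down towards 4

# After a high-offer history, a medium (8-dollar) offer violates the learned
# norm and is accepted with lower probability than after a low-offer history.
print(round(mu_hm, 2), round(mu_lm, 2))
print(p_accept(8, mu_lm) > p_accept(8, mu_hm))  # True
```

With these toy settings, the High-Medium history leaves a higher learned norm than the Low-Medium history, so the same medium offer deviates below the norm only in the first case and is accepted less often, in line with the rejection-rate pattern reported above.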
4. Desiring predictions: the predictive processing theory

In this section, I first lay out the basic ideas of the probabilistic, predictive processing (PP) theory of brain and cognition, and then I show that, in its most precise formulation, PP is inconsistent with the Humean theory of motivation.

4.1. PP: some nuts and bolts

PP claims that brains are homeostatic, active, prediction mechanisms, the central activity of which is to minimize the errors of their predictions about the sensory data they receive from their local environment. Brains would produce perceptual, cognitive, and motor phenomena by minimizing prediction errors courtesy of various monitoring and manipulation operations on hierarchical, probabilistic models of the causal structure of the world within a bidirectional cascade of cortical message passing (Friston 2009; Friston 2010; Hohwy 2013; Clark 2013a, b; Clark 2015a; Seth 2013). Before explaining how PP conceives of the relation between belief, desire, and action, several concepts deserve clarification (cf. Colombo & Wright 2016). PP defines prediction within the context of probability theory and statistics as the weighted mean of a random variable. The term prediction error refers to magnitudes of discrepancies between predictions about the value of a certain variable and its observed value. Within the PP framework, prediction errors quantify the mismatch between expected and observed sensory data. If predictions about sensory data are not met, then prediction errors are generated, which tune brains’ probabilistic models of the causal structure of the environment, and reduce the discrepancy between what was expected and what obtained. By minimizing prediction errors, cognitive agents are said to steer clear of surprising physiological states that may disrupt their homeostatic properties.
Surprisal, which is a term from information theory referring to the negative log probability of an outcome, is a function of the sensory data received by an agent,
329
Matteo Colombo
and of a generative model, which is a probabilistic mapping from causes in the environment to observed data. The most precise formulation of PP posits that computationally bounded agents minimize surprisal indirectly, by minimizing free energy. In the context of PP, free energy can be directly evaluated and minimized. Since free energy bounds (by being greater than) the surprisal on sampling some sensory data given a generative model of these data, minimizing free energy minimizes the probability that agents occupy surprising, maladaptive physiological states outside homeostatic ranges (Friston 2009). “This means that minimizing surprisal is the same as maximizing the sensory evidence for an agent’s existence, if we regard the agent as a model of its world” (Friston 2010, 128). The basic ideas are that every self-organizing agent is an embodied model of its sensory exchanges with the environment, and that every self-organizing agent should maximize the evidence for (or, equivalently, minimize the uncertainty of) its own model in order to survive and thrive (Friston 2011). The quantities that define the free energy of an agent at a given time include:

• the external state of the environment, which generates the agent’s sensory data, and depends on the action taken by the agent;
• the agent’s sensory data, which depend on the external state of the environment and on the action taken by the agent;
• the agent’s action, which depends on the agent’s sensory data and on the internal state of the agent’s brain;
• the internal state of the agent’s brain, which causes action and depends on sensory data.

Free energy is a function of sensory data, and of internal states that encode probabilistic representations of the causes of sensory data.
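To make these definitions more concrete, here is a minimal numerical sketch. The two-state generative model and all its probabilities are invented for illustration; the sketch shows that free energy upper-bounds surprisal, that the bound becomes tight when the internal representation equals the true posterior, and that an agent holding its beliefs fixed would prefer to sample the datum its model most expects.

```python
import numpy as np

# Toy discrete generative model (all numbers invented for illustration):
# two hidden causes u, two possible sensory data x.
prior = np.array([0.7, 0.3])                    # p(u): the agent's beliefs
lik = {'familiar': np.array([0.9, 0.2]),        # p(x | u) for each datum x
       'novel':    np.array([0.1, 0.8])}

def surprisal(x):
    """Negative log probability of the datum x under the generative model."""
    return -np.log((lik[x] * prior).sum())      # -log p(x)

def free_energy(q, x):
    """F(q) = E_q[log q(u) - log p(x, u)] = surprisal(x) + KL(q || p(u | x))."""
    joint = lik[x] * prior                      # p(x, u)
    return float((q * (np.log(q) - np.log(joint))).sum())

# 1. Free energy bounds surprisal from above for any approximate posterior q;
#    the bound is tight exactly when q is the true posterior p(u | x).
x = 'novel'
posterior = lik[x] * prior / (lik[x] * prior).sum()
print(free_energy(np.array([0.5, 0.5]), x) > surprisal(x))   # True
print(np.isclose(free_energy(posterior, x), surprisal(x)))   # True

# 2. Changing internal states: setting q to the posterior minimizes the free
#    energy of the current datum. Changing action: with beliefs fixed,
#    sampling the datum the model most expects keeps surprisal low.
print(min(lik, key=surprisal))                               # 'familiar'
```

Because surprisal requires the evidence p(x), which is generally intractable for rich models, while F(q) requires only the joint distribution and the agent’s own approximate posterior, minimizing free energy is an indirect way of minimizing surprisal, as the text describes.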
Given a generative model comprising a likelihood and a prior distribution defined over the set of sensory data received by an agent over its lifetime, and given the set of possible external states of the environment, the free energy of an agent can be minimized by changes in internal brain state, and by changes in action. Changes in internal states of the brain can reduce surprisal by minimizing the free energy of current sensory data. This will reduce the divergence between the probabilistic representation of the causes of sensory data encoded by brain states and the true conditional probability distribution of the causes of sensory data. This is perception, which reduces an agent’s uncertainty about its sensory exchanges with the environment by making the agent’s representations of the environment closer to the truth. If agents’ representations of the environment are true, then they are best equipped to avoid surprising states that are potentially noxious. Changes in action can reduce surprisal by selectively sampling the next sensory datum that, on average, has the smallest free energy. This will ensure that the agent will observe sensory data that are most likely to fit its model of the environment. The basic idea is that action reduces an agent’s uncertainty about its exchanges with the environment by selecting courses of action whose sensory consequences are ‘familiar’ to the agent. If we selectively sample sensory inputs that we most expect, then our expectations cause themselves to become true, viz. they become self-fulfilling prophecies (Friston et al. 2010). Thus far, no experimental work has been conducted within the framework of PP that could be compared with Gu et al.’s (2015) or Xiang et al.’s (2013) studies of social norm compliance. Based on a diverse body of evidence from theoretical and simulation studies, PP claims that psychological effects spanning perception, action, cognition, and social behaviour are produced by the same kind of process, viz.
by the interplay of downward-flowing predictions and upward-flowing sensory signals in the cortical hierarchy in the brain. At each cortical layer, inputs from
the previous layer are processed in terms of their degree of deviation from predicted features, and only unexpected features are signalled to the next layer up the hierarchy. Applied iteratively, this processing scheme leads to a two-way direction of processing, where feed-forward connections convey information about the difference between what was expected and what actually obtained, while feedback connections convey predictions from higher processing stages to suppress prediction errors at lower levels (Friston 2010; Hohwy 2013; Clark 2013a).

4.2. PP is inconsistent with the Humean theory of motivation

PP is inconsistent with the Humean theory of motivation for three reasons. First, PP displays action, cognition, and perception as unified by a common functional principle, which erodes the distinction between cognitive and conative states and processes. Second, in its free energy formulation, PP ‘absorbs’ utility functions and reward into prior belief, which eliminates conative states as recognizable motivational states. Third and finally, it is not obvious that within PP the direction of fit of cognitive states and processes differs from the direction of fit of states and processes underlying action and motor behaviour. According to PP, the task of cognitive agents is to represent states of affairs in the world in order to maximize the sensory evidence for their own existence, and reduce surprisal (Friston 2011). In order to accomplish this task, perception, cognition, and action all play the same basic functional role: reduction of sensory prediction error. This common functional role furnishes PP with a unifying principle of mind and brain. PP is said to offer “a deeply unified theory of perception, cognition, and action” (Clark 2013a, 186), and even to acquire “maximal explanatory scope” (Hohwy 2013, 242).
In Friston’s words, “if one looks at the brain as implementing this scheme [i.e., free-energy minimization], nearly every aspect of its anatomy and physiology starts to make sense” (2009, 293). If perception, cognition, and action all play the same functional role in producing behaviour, and if kinds of mental states and processes are individuated in terms of the functional role they play in producing behaviour, then PP should imply that perception, cognition, and action are fundamentally the same kinds of states and processes. This is exactly what advocates of PP have asserted. For example, Adams, Shipp, and Friston (2013, 4) write: “The perceptual and motor systems should not be regarded as separate but instead as a single active inference machine that tries to predict its sensory input in all domains.” And further:

The primary motor cortex is no more or less a motor cortical area than striate (visual) cortex. The only difference between the motor cortex and visual cortex is that one predicts retinotopic input while the other predicts proprioceptive input from the motor plant. (Friston et al. 2011, 138)

If the processes of perceptual and motor systems, which have been traditionally associated with cognitive and conative functions respectively, do not actually fulfil different functions, but both fulfil the ongoing pursuit of surprisal minimization, then they are identical kinds of processes. This conclusion erodes the distinction between cognitive and conative processes, and is therefore inconsistent with the Humean idea that belief and desire are distinct mental states. Some may object that this conclusion is too quick. For it does not take account of fine-grained differences between perceptual and motor processes. Clark (2013a, 200) explains that perception and action are “different but complementary means to the reduction of (potentially affect-laden and goal-reflecting) prediction error in our exchanges with the world”. On the
one hand, perception is said to minimize surprisal by minimizing the free energy of currently observed states. Perception would reduce free energy by optimizing model-based predictions about the causes of sensory data, where predictions are optimal if they are correctly tuned to the actual causes of sensory data. On the other hand, action is said to minimize surprisal by sampling sensory data that have, on average, the smallest free energy. Action would reduce free energy by selecting sensory data that are most likely to fit model-based sensory predictions. Yet, this fine-grained description will not help to ground a relevant difference between cognitive and conative processes. From the point of view of the scientific taxonomy employed by PP, action and perception differ, but only in ways that are relevant to their common function to minimize free energy. Action and perception are differently the same from the point of view of the theoretical framework of PP that classifies them. Their functional role is the same. How this functional role gets physically realized differs. If the processes of perceptual and motor systems are differently the same in this way, then these processes should not be said to fulfil different functions. This conclusion is consistent with the idea that no state or process is motivationally neutral from the perspective of PP. Following this idea, one may suggest that all mental states posited within PP consist of pushmi-pullyu signals, which “are undifferentiated between presenting facts and directing activities appropriate to those facts. They represent facts and give directions or represent goals, both at once” (Millikan 2004, 157). This suggestion is intriguing, but is inconsistent with the Humean tenet that desire and belief are ‘distinct existences’.
So, if all states posited within PP consist of pushmi-pullyu representations, or of some other undifferentiated mixture of conative and cognitive states, then PP is inconsistent with the Humean theory. A second reason suggesting that PP and the Humean theory are inconsistent is the following. PP, at least in its free-energy formulation, rejects the distinction between the probability and the utility (or value) of a state. While this distinction maps onto the distinction between cognitive and conative states, nowhere in the definition of free energy, or in the definitions of action and perception within PP, are utility, value, or reward involved. According to PP, agents make decisions by minimizing a free-energy bound on the marginal likelihood of observed states given a model of the environment. Agents do not make decisions by maximizing expected reward. Assuming that hidden states with high probability just are states with high utility, Friston and colleagues show that optimal decision-making is tantamount to “fulfilling prior beliefs about exchanges with the world . . . [while] cost functions are replaced (or absorbed into) prior beliefs about state transitions” (Friston et al. 2012, 524; see also Friston et al. 2013; for critical assessments of this proposal, see Gershman & Daw 2012, Sec. 5.1, and Colombo & Wright 2016).8 Free energy is the only quantity that is optimized, and the utility (or value) of a state is not a cause of action; it is not a motivating reason. Utility is at best epiphenomenal. In other words, PP, at least in its free-energy formulation, reduces decision-making problems to inference problems. Action aims at producing the most likely sensory data given beliefs about state transitions, instead of bringing about valuable outcomes. This means that within PP “desiring a muffin, for example, is having an expectation of a certain flow of sensory input that centrally involves eating a muffin” (Hohwy 2013, 89).
What leads us to eat the muffin is not its associated expected reward. Rather, it is our expectation about the ‘familiarity’ of the train of sensory inputs associated with eating the muffin. “The ‘motivator’ ” – explains Hohwy (ibid.) – “is the urge to minimise prediction error” based on expectations concerning sensory input. If this is the ‘motivator’ of action, and reward and value are eliminated and replaced by prior belief, then PP disqualifies desire as a recognizable mental state that motivates us to act on the basis of expected reward.
Even if PP eliminates reward and value, advocates have another option to salvage desire as a recognizable state distinct from belief. They can appeal to the notion of direction of fit. Indeed, Hohwy (2013, 89) asserts that “[w]hat makes the desire for a muffin a desire and not a belief is just its direction of fit.” Clark (2015b, 21) similarly claims that “there remains . . . an obvious (and important) difference in direction of fit [between perception and action].” The idea is that perception and cognition have a mind-to-world direction of fit because they “match neural hypotheses to sensory inputs”, while action and motor control have a world-to-mind direction of fit because they “bring unfolding proprioceptive inputs into line with neural predictions” (Clark 2015b, 21; see also Shea 2013).9 This way of putting the difference, however, confuses the direction of fit of a mental state (or process) with the direction of causation of a mental state (or process). Once this confusion is dispelled, it is unclear that within PP conative processes and cognitive processes have a different direction of fit. As introduced by Elizabeth Anscombe (1957), the notion of direction of fit is a normative one. Anscombe illustrates the distinction with the different aims of two agents: a man who is guided in his shopping by a shopping list, and a detective who makes a list of the man’s shopping items as the man buys them. If we were asked what distinguishes the shopping list from the detective’s list, Anscombe explains that

It is precisely this: if the list and the things that the man actually buys do not agree, and if this and this alone constitutes a mistake, then the mistake is not in the list but in the man’s performance . . . whereas if the detective’s record and what the man actually buys do not agree, then the mistake is in the record. (Anscombe 1957, 56)

One’s beliefs have a mind-to-world direction of fit because they ought to conform to the world.
If they do not conform to the world, then the mistake is in the beliefs. One’s desires have a world-to-mind direction of fit because the world ought to conform to them. If desires do not conform to the world, the mistake is not in the desires. This normative notion should not be confused with the notion of the direction of causation of a mental state. PP advocates seem to have the latter in mind. They seem to believe that PP makes room for a genuine distinction between cognitive and conative states because in perception the world tends to cause internal changes in one’s expectations, whereas in action internal states and processes tend to cause changes in the world that conform to those internal states. However, this is not the notion of direction of fit introduced by Anscombe. Within PP, agents are regarded as models of their own world, and they aim at fitting the world (Friston 2010, 127; Friston 2011). Their actions and perceptions are geared towards minimizing surprising sensory exchanges with the environment. Thereby, agents’ physiological and biochemical states are maintained within their homeostatic range. As probabilistic models, agents are equipped with a space of hypotheses that are tested against data. Just as scientists’ fortunes depend on sound evaluation of scientific models, agents’ existence depends on testing hypotheses against sensory data sampled from the world. While agents’ hypothesis space is determined by species- and niche-specific evolutionary trajectories (Clark 2013a, 193), the sampling method is dictated by the relative uncertainty of the predictions that the neurally encoded generative model makes about incoming sensory data. Comparing the degree of agreement between sampled sensory data and model-based predictions provides evidence about the extent to which one’s embodied model fits the world.
If model-based predictions do not conform to the world, then the mistake is in the model, and the model should be updated. If the model is grossly mistaken, and updating is unfeasible, then the agent’s survival is threatened. This lack of fit would mean that the agent’s physiological and biochemical states are likely to be outside their homeostatic range. Instead, to the extent that world and model have a good fit, agents would “maximize the evidence for their own existence” (Friston 2010, 136). In this case, physiological and biochemical states would be within their homeostatic range; and so the agent’s cells, tissues, organ systems, and body could function optimally. In summary, PP is inconsistent with the Humean theory of motivation. PP erodes the difference between cognitive and conative states by displaying them as unified by a common functional principle. In its free-energy formulation, PP eliminates utility and reward, which are notions distinctive of desire. Within PP, the direction of fit of cognitive states and processes is not obviously different from the direction of fit of states and processes underlying action and motor behaviour.
5. Social motivation and norm compliance for predictive brains

Now, suppose that the Humean theory of motivation is false, and that PP provides us with a correct picture of how the mind works. What follows about our understanding of social norms and social motivation? It follows that social norms are best understood as entropy-minimizing devices co-constituted by feedback relationships between mind, action, and world. Human minds would have initial biases towards sampling sensory data that are reliably tied to familiar social unfoldings. Examples are sensory data generated by happy facial expressions, open body postures, and cheerful tones of voice. These sensory data are reliably tied to ‘familiar’, low-uncertainty, social unfoldings that will maintain one’s internal milieu within adaptive, homeostatic bounds. For example, an extended hand affords a handshake, which is typically tied to co-adaptive social unfoldings like greeting, congratulating, expressing gratitude, or completing an agreement. By acting on our initial biases, we begin to sample more and more of the sensory space in our social environment. We learn how others generally act in certain contexts, and how they react to our own behaviour. In the same way in which we use models and empirical observation to pick up, refine, and revise hypotheses about the regularities governing the natural world, we would rely on embodied models and observed sensory data to uncover the regularities in our social world. We assume certain hypotheses about social unfoldings, we behave as if these hypotheses are true, and the outcomes of our behaviour provide us with evidence that bears on the truth of the hypotheses. As a function of this continuous process of hypothesizing, sampling, testing, and updating, our social biases change along with our embodied models of the social environment.
As explained by Muldoon, Lisciandra, and Hartmann (2014, 4428), in the social world

we react to the real or presumed regularities we identify. However, the social situation is unique in that by looking for regularities, regularities are created. . . . For beings like us, with the psychological tendencies we exhibit in the social world, rule discovery triggers an interest in following the rule. Once this process begins, norms can start to emerge. In this sense, they are created out of nothing, but become real enough once the individuals start to believe they hold true.

When we discover a social regularity, we are attracted to comply with it. Compliance provides us with evidence that confirms our embodied model of the social world, and guarantees that
agents engage in ‘normal’, expected behaviour. When we comply with social norms, we know what to expect from one another, and we are more likely to occupy ‘familiar’ sensory states. Along with natural language, material symbols, and social institutions like written laws, political structures, and religions, social norms would help us structure and manage expected sensory uncertainties involved in our social exchanges. If PP is true, then social institutions are best conceived of as uncertainty-minimizing scaffolds that constrain and channel people’s behaviour, cueing expected types of cognitive routines and actions (Schotter 1981; Smith 2007). Social institutions would contribute to ‘normalizing’ human behaviour by making it reliably predictable. In the words of Mary Douglas:

Institutional structures [can be seen as] forms of informational complexity. Past experience is encapsulated in an institution’s rules, so that it acts as a guide to what to expect from the future. The more fully the institutions encode expectations, the more they put uncertainty under control, with the further effect that behavior tends to conform to the institutional matrix. . . . They start with rules of thumb, and norms; eventually, they can end by storing all the useful information. (Douglas 1986, 48)

From the perspective of PP, social norms exist because they help agents minimize the uncertainty of their embodied models of the world. By minimizing uncertainty over their social interactions, agents will tend to become fitting models of the social environment in which they are embedded. The social environment then becomes more and more transparent, and social threats easier to avoid (but see Colombo 2014b, 74–76, for some qualifications about norm violations, norm ambiguity, and normative clash). In summary, if PP is true, then social motivation is not grounded in desire and value.
Social motivation is grounded in minimization of social uncertainty, and in hypothesis testing carried out in different ‘experiments of social living’. The functionality of the social mind would be best captured by viewing human agents not as intuitive lawyers (Haidt 2001), but as intuitive scientists. Human social minds would be continuously and actively engaged in gathering evidence that is most likely to confirm expectations about the behaviour and mental states of other agents. Social minds’ active cycle of hypothesizing, sampling, testing, and updating would co-constitute and continuously re-shape our social landscapes.
Acknowledgements

I am grateful to Julian Kiverstein, Jakob Hohwy, Bryce Huebner, and Chiara Lisciandra for their generous comments on previous versions of this chapter.
Notes

1 Philosophers have paid relatively little attention to the empirical adequacy of the Humean theory of motivation in the context of this body of work. Philosophers’ attention has been focused on the related but distinct view of motive internalism, according to which moral judgements are intrinsically motivating. Arguments against motive internalism have appealed to evidence about ventromedial patients and sociopaths, who seem to glide through their social environment with full knowledge of social norms, but without being moved by that knowledge (e.g., Roskies 2003; Kennett & Fine 2008; Colombo 2014a).
2 My focus on neural computation should not suggest that the best (or only) explanation of social norm compliance lies at the level of neural processes. Neurocomputational explanation always spans several levels of analysis. It does not only look at neural processes, it also requires consideration of psychological
function, and consideration of the relation between the whole mechanism and its environment across different temporal scales (Casebeer & Churchland 2003).
3 Belief and desire are the paradigmatic examples of a cognitive state and a conative state, respectively. Unless otherwise noted, I use ‘belief’ and ‘cognitive state’, and ‘desire’ and ‘conative state’, interchangeably.
4 In the dictator game, a sum of money m is provided to player 1, ‘the dictator’, who determines that x units of the money (x ≤ m) be offered to player 2. Player 1 retains (m − x). Player 2 simply receives x, without having any input into the outcome of the game.
5 In what follows, the expression ‘social norm’ is used without distinguishing social norms from other kinds of norms that regulate our social/moral life (for a taxonomy of norms, and for an empirical study of the extent to which different norms are resistant to conformism, see Lisciandra et al. 2013).
6 The ultimatum (or ‘take-it-or-leave-it’) game is one of the simplest forms of bargaining. This two-stage, two-person game is defined as follows. A sum of money m is provided. Player 1 proposes that x units of the money (x ≤ m) be offered to player 2. Player 1 would retain (m − x). Player 2 responds by either accepting or rejecting the offer x. If player 2 accepts, player 1 is paid (m − x) and player 2 is paid x; if she rejects, each player receives nothing (0, 0). In either case the game is over.
7 Bayes’ theorem is a provable mathematical statement that expresses the relationship between conditional probabilities and their inverses. Bayes’ theorem expressed in odds form is known as Bayes’ rule. The rule of conditionalization is instead a prescriptive norm that dictates how to reallocate probabilities in light of new evidence or data.
8 One may object as follows: “PP is not really a competitor of BDT and RL because PP can ‘subsume’ both, as shown by Friston and colleagues.
If the Humean theory is supported by results within BDT and RL, it should also be supported by results within PP. So, PP cannot be inconsistent with the Humean theory.” The problem with this objection is that rewards and cost functions are in fact eliminated within free-energy formulations of PP (e.g., Friston et al. 2013; Colombo & Wright 2016). If rewards are eliminated and replaced by prior expectations about occupying different states in the environment, then the Humean theory can be supported by results within BDT and RL – which posit rewards – and be inconsistent with PP – which does not posit rewards.
9 If the states and processes underlying perception and action have different directions of fit, as asserted by Clark and by Hohwy, then these states cannot be pushmi-pullyu representations as suggested above. Pushmi-pullyu representations have both mind-to-world and world-to-mind directions of fit at the same time (Millikan 2004, Ch. 6).
References

Adams, R. A., Shipp, S. & Friston, K. J. (2013). Predictions not commands: Active inference in the motor system. Brain Structure and Function, 218(3), 611–643.
Anderson, S. W., Bechara, A., Damasio, H., Tranel, D. & Damasio, A. R. (1999). Impairment of social and moral behavior related to early damage in human prefrontal cortex. Nature Neuroscience, 2, 1032–1037.
Andreoni, J., Harbaugh, W. & Vesterlund, L. (2003). The carrot or the stick: Rewards, punishments, and cooperation. American Economic Review, 93, 893–902.
Anscombe, E. (1957). Intention. Oxford: Blackwell.
Behrens, T. E., Hunt, L. T. & Rushworth, M. F. (2009). The computation of social behavior. Science, 324, 1160–1164.
Bicchieri, C. (2006). The Grammar of Society: The Nature and Dynamics of Social Norms. New York: Cambridge University Press.
Binmore, K. (1994). Game Theory and the Social Contract, Vol. I. Playing Fair. Cambridge, MA: MIT Press.
Broome, J. (1991). Desire, belief and expectation. Mind, 100, 265–267.
Casebeer, W. D. & Churchland, P. S. (2003). The neural mechanisms of moral cognition: A multiple-aspect approach to moral judgment and decision-making. Biology and Philosophy, 18(1), 169–194.
Clark, A. (2013a). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 131–204.
———. (2013b). The many faces of precision. Frontiers in Psychology, 4, 270. http://dx.doi.org/10.3389/fpsyg.2013.00270
———. (2015a). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford: Oxford University Press.
———. (2015b). Embodied prediction. In T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T), 1–22. Frankfurt am Main: MIND Group. doi: 10.15502/9783958570115
Clutton-Brock, T. H. & Parker, G. A. (1995). Punishment in animal societies. Nature, 373, 209–216.
Colombo, M. (2013). Leges sine moribus vanae: Does language make moral thinking possible? Biology & Philosophy, 28(3), 501–521.
———. (2014a). Caring, the emotions, and social norm compliance. Journal of Neuroscience, Psychology, and Economics, 7(1), 33–47.
———. (2014b). Two neurocomputational building blocks of social norm compliance. Biology & Philosophy, 29(1), 71–88.
Colombo, M. & Wright, C. (2016, forthcoming). Explanatory pluralism: An unrewarding prediction error for free energy theorists. Brain and Cognition. doi: 10.1016/j.bandc.2016.02.003
Colombo, M. & Hartmann, S. (2015). Bayesian cognitive science, unification, and explanation. The British Journal for the Philosophy of Science. doi: 10.1093/bjps/axv036
Douglas, M. (1986). How Institutions Think. New York: Syracuse University Press.
Dretske, F. (1988). Explaining Behavior: Reasons in a World of Causes. Cambridge, MA: MIT Press.
Elster, J. (1989). Social norms and economic theory. Journal of Economic Perspectives, 3(4), 99–117.
Fehr, E. (2009). Social preferences and the brain. In P. W. Glimcher, C. Camerer, R. A. Poldrack & E. Fehr (Eds.), Neuroeconomics: Decision Making and the Brain (pp. 215–232). New York-Amsterdam: Elsevier Academic Press.
Friston, K. (2009). The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences, 13(7), 293–301.
———. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
———. (2011). Embodied inference: Or I think therefore I am, if I am what I think. In W. Tschacher & C. Bergomi (Eds.), The Implications of Embodiment (pp. 89–125). Exeter: Imprint Academic.
Friston, K., Daunizeau, J., Kilner, J. & Kiebel, S. J. (2010). Action and behavior: A free-energy formulation. Biological Cybernetics, 102(3), 227–260.
Friston, K., Mattout, J. & Kilner, J. (2011). Action understanding and active inference. Biological Cybernetics, 104(1–2), 137–160.
Friston, K., Samothrakis, S. & Montague, R. (2012). Active inference and agency: Optimal control without cost functions. Biological Cybernetics, 106(8–9), 523–541.
Friston, K., Schwartenbeck, P., FitzGerald, T., Moutoussis, M., Behrens, T. & Dolan, R. J. (2013). The anatomy of choice: Active inference and agency. Frontiers in Human Neuroscience, 7, 1–18.
Gershman, S. J. & Daw, N. D. (2012). Perception, action and utility: The tangled skein. In M. I. Rabinovich, K. Friston & P. Varona (Eds.), Principles of Brain Dynamics: Global State Interactions (pp. 293–312). Cambridge, MA: MIT Press.
Gintis, H. (2010). Social norms as choreography. Politics, Philosophy and Economics, 9(3), 251–264.
Glimcher, P. W., Camerer, C. F., Fehr, E. & Poldrack, R. A. (Eds.) (2009). Neuroeconomics: Decision Making and the Brain. New York-Amsterdam: Elsevier Academic Press.
Gu, X., Wang, X., Hula, A., Wang, S., Xu, S., Lohrenz, T. M., . . . Montague, P. R. (2015). Necessary, yet dissociable contributions of the insular and ventromedial prefrontal cortices to norm adaptation: Computational and lesion evidence in humans. The Journal of Neuroscience, 35(2), 467–473.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814.
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E. & Gintis, H. (2004). Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies. Oxford: Oxford University Press.
Hlobil, U. (2015). Social norms and unthinkable options. Synthese, 1–19. doi: 10.1007/s11229-015-0863-5
Hohwy, J. (2013). The Predictive Mind. Oxford: Oxford University Press.
Humberstone, I. L. (1992). Direction of fit. Mind, 101, 59–83.
Hume, D. (1978). A Treatise of Human Nature, 2nd edition, L. A. Selby-Bigge and P. H. Niditch (Eds.). Oxford: Clarendon Press.
337
Matteo Colombo Kennett, J. & Fine, C. (2008). Internalism and the evidence from psychopaths and ‘acquired sociopaths.’ In W. Sinnott-Armstrong (Ed.), Moral Psychology: Vol. 3: The Neuroscience of Morality (pp. 173–190). Cambridge, MA: MIT Press. Lewis, D. K. (1988). Desire as belief. Mind, 97, 323–332. ———. (1969). Convention: A Philosophical Study. Cambridge MA: Harvard University Press. Lisciandra, C., Postma-Nilsenová, M. & Colombo, M. (2013). Conformorality. A study on group conditioning of normative judgment. Review of Philosophy and Psychology, 4(4), 751–764. Millikan, R. G. (2004). Varieties of Meaning:The 2002 Jean Nicod Lectures. Cambridge, MA: MIT Press. Muldoon, R., Lisciandra, C. & Hartmann, S. (2014). Why are there descriptive norms? Because we looked for them. Synthese, 191(18), 4409–4429. Niv,Y. (2009). Reinforcement learning in the brain. Journal of Mathematical Psychology, 53(3):139–154. Pettit, P. (1990).Virtus normativa: Rational choice perspectives. Ethics, 100, 725–755. ———. (1987). Humeans, anti-Humeans, and motivation. Mind, 96, 530–533. Radcliffe, E. (2008). The Humean theory of motivation and its critics. In E. Radcliffe (Ed.), A Companion to Hume (pp. 477–492). Oxford: Blackwell Publishing. Rakoczy, H. & Schmidt, M. F. (2013). The early ontogeny of social norms. Child Development Perspectives, 7(1), 17–21. Rescorla, R. A. & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black and W. F. Prokasy (Eds.), Classical Conditioning II: Current Research and Theory (pp. 64–99). New York: Appleton Century Crofts. Roskies, A. L. (2003). Are ethical judgments intrinsically motivational? Lessons from acquired sociopathy. Philosophical Psychology, 16, 51–66. Sanfey, A. G. (2007). Social decision-making: Insights from game theory and neuroscience. Science, 318(5850), 598–602. Schotter, A. (1981). The Economic Theory of Social Institutions. 
Cambridge, MA: Cambridge University Press. Schroeder, T. (2004). Three Faces of Desire. New York: Oxford University Press. ———. (2014). “Desire,” The Stanford Encyclopedia of Philosophy (Spring 2014 Edition), Edward N. Zalta (ed.), URL = . Schroeder, T. & Arpaly, N. (2014). The reward theory of desire in moral psychology. In Justin D’Arms and Dan Jacobson (Eds.), Moral Psychology and Human Agency: Philosophical Essays on the New Science of Ethics (pp. 186–214). Oxford: Oxford University Press. Seth, A. K. (2013). Interoceptive inference, emotion, and the embodied self. Trends in Cognitive Sciences, 17(11), 565–573. Shea, N. (2013). Perception vs. action: The computations may be the same but the direction of fit differs: Commentary on Clark. Behavioral and Brain Sciences, 36(3), 228–229. Sinhababu, N. (2009). The Humean theory of motivation reformulated and defended. Philosophical Review, 118(4), 465–500. Smith, M. (1987). The Humean theory of motivation. Mind, 96, 36–61. ———. (1994). The Moral Problem. Oxford: Blackwell Press. Smith,V. (2007). Rationality in Economics: Constructivist and Ecological Forms. New York: Cambridge University Press. Sober, E. & Wilson, D. S. (1999). Unto others: The Evolution and Psychology of Unselfish Behavior. Cambridge, MA: Harvard University Press. Spitzer, M., Fischbacher, U., Herrnberger, B., Grön, G. & Fehr, E. (2007).The neural signature of social norm compliance. Neuron, 56, 185–196. Sripada, C. & Stich, S. (2007). A framework for the psychology of norms. In P. Carruthers, S. Laurence and S. Stich (Eds.), The Innate Mind: Culture and Cognition (pp. 280–301). Oxford: Oxford University Press. Sugden, R. (1986) The Economics of Rights, Cooperation and Welfare. Oxford: Blackwell. Sunstein, C. R. (1996). Social norms and social roles. Columbia Law Review, 96(4), 903–968. Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press. Tenenbaum, J. B., Kemp, C., Griffiths, T. L. & Goodman, N. D. 
(2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–1285.
338
Social motivation Ullmann-Margalit, E. (1977). The Emergence of Norms. Oxford: Oxford University Press. Wolpert, D. M., Doya, K. & Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 358, 593–602. Xiang,T., Lohrenz,T. & Montague, P. R. (2013). Computational substrates of norms and their violations during social exchange. Journal of Neuroscience, 33, 1099–1108.
339
PART IV
Naturalistic approaches to shared and collective intentionality
19
JOINT DISTAL INTENTIONS
Who shares what?
Angelica Kaufmann1
Some have conjectured that our capacities for certain forms of shared activity set us apart as a species. (Bratman, 2014, p. 3)
1. Introduction

Like a team of soccer players, individuals react opportunistically to the present situation while taking into account the shared goal of the team. Some players will rarely make a goal, like defenders and goalies, but the success of the team will critically depend upon their contribution. This is very reminiscent to group hunting in chimpanzees where synchronisation of different coordinated roles, role reversal, and performance of less successful roles favour the realisation of the joint goal. (Boesch 2005, p. 629)

Thus Christophe Boesch canvasses the similarities between a case of human joint action, such as that instantiated by a soccer team, and a case of nonhuman joint action, such as chimpanzees' group hunting. This chapter compares views on the human and nonhuman capacities to entertain joint distal intentions, where these are here defined as the mental representations of goals to which joint plans are directed,2 and, hence, as the core mental elements of shared intentionality. This is the ability to think for cooperating (Tomasello, 2014, p. 125). Joint distal intentions3 have the following representational features: (1) they represent personal-level states, i.e. states of the other individual(s) rather than her subsystems; (2) they represent some unobservable goal of the other individual(s); and (3) they have fulfilment conditions, which are satisfied if the intention causes the action that ultimately achieves the goal represented by the other individual(s) (see Kaufmann, 2015). The question is: how much do nonhuman animals need to mentally share in order to be acknowledged as having this capacity for shared intentionality? In the animal kingdom, especially among primates, there are populations of individuals that exhibit complex purposeful group behaviour.
Notable examples include: hand-clasp grooming (Bonnie & de Waal, 2006; McGrew & Tutin, 1978; Nakamura, 2002), nut-cracking (Matsuzawa, 2001), ant-dipping (Boesch & Tomasello, 1998), and snowball play-fight (Eaton, 1972). Some
researchers (Boesch & Boesch-Achermann, 2000; Bonnie & de Waal, 2006; de Waal & Ferrari, 2010; Horner et al., 2005, 2010, 2011) support a Rich Account, and argue that nonhuman animals can have a common distal goal and take complementary roles in their joint activities. Other researchers (Tomasello et al., 2005; Tomasello & Carpenter, 2007; Tomasello et al., 2012; Tomasello, 2014) defend a Lean Account, and claim that in these group activities, each participant is only attempting to maximize its own chances to pursue its individual and proximal goal. The advocates of the so-called Shared Intentionality Hypothesis maintain that this capacity to think cooperatively is constituted by a set of distinctively human skills, for humans possess a foundational ability to ascribe distal intentions to conspecifics, and to share distal intentions courtesy of this capacity. Humans appear to possess an exceptionally sophisticated ability to coordinate action plans over time. I investigate to what extent such a capacity can be observed to emerge in nonhuman animals as well. Taï chimpanzees' group hunting, extensively documented by Christophe and Edwige Boesch (2000), in particular, is thought to be the most sophisticated nonhuman group activity; thus it represents the major challenge to the current stance of the Shared Intentionality Hypothesis. In what follows, I shall present the Lean Account and the Rich Account. The first and most fundamental difference between the two is methodological. The Lean Account is built on the research methods of comparative psychology, that is, behavioural experiments run in artificial settings; the Rich Account is the result of the research methods of cognitive ethology, that is, research in the field, mostly done as observation of animal behaviour in the wild.
These two approaches lead to very different conclusions: the Lean Account argues for a lack of motivation to pursue joint action on the side of nonhuman animals, and from this lack of motivation it infers a lack of the cognitive faculties that are needed in order to act jointly towards a shared distal goal. Conversely, the Rich Account distinguishes evidence of a lack of motivation from interpretations about a lack of cognitive capacities. As a result of this mixture of motivational drive and cognitive skills, from the point of view of psychology and ethology, the debate between the Lean and the Rich Accounts is about the competitive versus the collaborative forces that dictate the rules in the social world of nonhuman animals. When the comparative psychologist asks the question of what is shared in the social mind, he is investigating the motivational aspect of instances that can reflect this purposeful behaviour. When the ethologist asks the question of what is shared in the social mind, he is investigating the behavioural criteria that can account for this cluster of action plans. Following Boesch (2005, p. 692), I argue that the two claims of the Lean Account can and ought to be kept separate: evidence that nonhuman animals are pursuing individual goals is not evidence that they are only capable of pursuing proximal goals. Evidence that nonhuman animals appreciate the individual nature of the goals of others is not evidence that they are only capable of ascribing proximal goals to others. I agree with the supporters of the Lean Account that the vast majority of nonhuman animals' group activities is driven by individual intentions and by the capacity to ascribe individual intentions to others; what I disagree with in the context of the Lean Account is the assumption that this capacity to form and ascribe individual intentions is limited to proximal goals.
For the empirical evidence in support of the Rich Account points to the fact that nonhuman animals are capable of forming and ascribing joint distal intentions. The chapter is structured as follows: I introduce the Shared Intentionality Hypothesis; then I review, in this order, the Lean and the Rich Account. I conclude that the explanatory power of the empirical evidence on which the Rich Account draws is underestimated in the picture sketched by the advocates of the Shared Intentionality Hypothesis.
2. The question: Who shares what?

The Shared Intentionality Hypothesis rests on the findings of a comparative investigation into the cognitive mechanisms that underpin primates' and young infants' social interactions (Tomasello et al., 2005; Tomasello & Carpenter, 2007; Tomasello et al., 2012; Tomasello, 2014). The aim of the research paradigm is to identify the evolutionary and developmental paths that led to the unique trait of the human species: hyper-sociality. The proposal is that human beings' exceptional cultural, institutional and technological achievements are made possible by the species-unique capacity of human primates to share distal intentions and pursue action plans over time. This is the twofold capacity to attribute distal intentions to conspecifics, and to share distal intentions. But what makes the content of a distal intention a shareable content? To date, this question has been left unanswered. The question of "Who shares what?" is prompted by the need for this conceptual clarification: mutually ascribing distal intentions does not imply having shared distal intentions. Philosophical theories of shared intention can be distinguished based on whether or not they conceive of shared intention as an ontologically primitive mental state (Searle, 1995). On Searle's view the content of shared intentions consists of the representation of a future goal that is shared by two or more individuals. There is a more parsimonious notion of shared intention that denies shared intentions are ontologically primitive. Instead, shared intentions are analysed in terms of representations of the goals of others that are complementary to the representation of my individual goal and that aim at achieving a future common goal (Bratman, 2014). On Bratman's account we can make a distinction between ascribing individual distal intentions and forming shared distal intentions.
My claim is that mutually ascribing distal intentions does not mean that two or more individuals have joint distal intentions with the exact same mental content, because this cannot be empirically determined in language-less animals. What mutually ascribing distal intentions demands is just the awareness of the content of the distal intentions of the others. This awareness can be described as an interpersonal, ascriptive and perspectival feature of distal intentions. In order to be aware of the content of the mental states of another subject, an agent has to represent the other subject as a self-representing subject that can, similarly, represent others as self-representing subjects themselves (see Peacocke, 2014). And this latter condition is all that is required to pursue joint distal intentions and plan joint actions. Until the late 1990s there was a lively debate regarding the question of whether only humans could create and participate fully in culture. Some researchers thought so, arguing that only humans were capable of recognizing others as intentional agents. This capacity would have enabled them to cooperate, to communicate and to socially learn from their group mates in unique ways (Premack & Woodruff, 1978; Heyes, 1998; Gómez, 1996; Tomasello & Call, 1997; Tomasello et al. 2003; Povinelli & Vonk, 2003). However, in the years since, ethologists and, especially, primatologists have discovered that great apes also understand others as intentional agents: they understand that others have goals, they understand that others perceive the world and they understand that overt behaviour is driven by unobservable mental processes (Tomasello et al., 2003 for work on chimpanzees; see also Bloom & Veres, 1999; Bloom & German, 2000; see also Tomasello & Call, 2008 for a review). This capacity is broadly labelled 'perspective taking' (Hare et al. 2000, 2001, 2006, Tomasello et al.
2003, Call & Tomasello, 2005, Tomasello & Call, 2006; Bräuer et al., 2007).4 The notion of shared intentionality first appeared in the philosophical literature in the late '80s (see Searle, 1990).5 In comparative psychology, shared intentionality was initially very broadly defined as "the ability to understand conspecifics as being like themselves who have intentional and mental lives like their own" (Tomasello, 1999, p. 4). Later on the idea was
narrowed down and structured to serve as a conceptual umbrella to cover a set of social-cognitive and social-motivational skills that provide the capacity to share psychological states, about unobservable states of affairs, i.e. possible future scenarios, among individuals that interact collaboratively (Tomasello et al., 2012). The social-cognitive skills that characterize this capacity are divided into four groups: (i) joint attention, (ii) cooperative communication, (iii) collaboration, and (iv) instructed learning. Each of these social-cognitive skills describes a type of collaborative interaction, and each of them has a phylogenetic precursor found in the behaviour of pre-linguistic infants and other nonhuman primates that describes a type of individualistic interaction: (i/1) gaze following, (ii/2) social manipulation, (iii/3) group activity, and (iv/4) social learning. The skills characteristic of full-blown shared intentionality first emerge in the behaviour of 1–2-year-old human infants (for the full account, see Tomasello & Carpenter, 2007). I will focus on the third one, which concerns the distinction between collaboration and group activities: (iii/3) group activity is defined as the capacity to simply coordinate present-directed action with others. (iii) Collaboration is understood as the capacity to coordinate action with others on the basis of joint goals and plans. As we have seen, there is no agreement about whether this capacity to act jointly belongs to animals other than humans (Boesch, 2005; Tomasello et al., 2005; Melis et al., 2006; Hamann et al., 2011; Warneken et al., 2011). One controversial angle of this debate is about how a creature has to represent the content of the mental states of others, i.e. the content of their distal intentions.
The answer to the question of whether some nonhuman animals can ascribe distal intentions to each other depends on which theoretical notion of distal intention is adopted to understand their group behaviour. Both full-blown collaboration and more modest forms of group activities require successfully ascribing psychological states to others, but of a different type and with a different mental content. It seems that this distinction has not been clearly drawn by Tomasello and colleagues. I trace much of the disagreement between the Lean and the Rich Accounts to a lack of clarity concerning this distinction. The Shared Intentionality Hypothesis in its latest formulation draws on Michael Bratman's view of shared agency (Tomasello, 2014). Bratman (2014) argues that his planning theory for individual agency is sufficiently rich for there not to be any need to add new elements to make the step towards understanding shared agency. This view consists in an augmented version of his theory of individual planning (Bratman, 1992). However, Tomasello and colleagues also appeal to a different notion of shared intention than the one offered by Bratman, as I shall explain shortly. I suggest that we follow Bratman's view, and that we state clearly and simply that his view is different from other views on shared intentionality found in Tomasello's writings. By following this approach, we can make room for a large body of empirical evidence from cognitive ethology, and reconcile the Lean with the Rich Account. According to Bratman, shared intentions "consist in relevant plan-embedded intentions of each of the individual participants, in a suitable context and suitably interconnected" (Bratman, 2014, p. 11). Bratman's shared intentions are constitutively different from the shared intentions proposed in other influential accounts, most notably John Searle's 'we intentions' and Margaret Gilbert's 'joint commitment'.
Indeed, Bratman says, "my approach to shared intention is part of an effort to forge the path between two extremes – a model of strategic equilibrium within common knowledge, and a model of distinctive interpersonal obligations and entitlements" (Bratman, 2014, pp. 10–11). This form of 'augmented individualism' – as he calls it – sees individual agents as planning agents whose agency is: (1) temporally extended, which means that an agent has the capacity to appreciate the place that one's own acting has within a broadly structured action, plus the acknowledgement that one's own activity is practically committed
to that action; (2) regulated by self-governance, which means that an agent has the ability to take a practical standpoint as a guiding principle for one's own acting; and (3) able to justify the capacity for sociality, that is, the capacity to interlock individual distal intentions, and to ascribe distal intentions with meshing sub-plans. Bratman shares with Searle and Gilbert the idea that sociality involves more than strategic equilibrium and common knowledge. But, contrary to Bratman, Searle and Gilbert argue that sociality involves new basic practical resources: 'we intention' for Searle, and 'joint commitment' for Gilbert. These two notions describe attitudes that are irreducible in content and distinctive – attitudes of the sort that, as Tomasello (2014) points out, we should resist attributing to nonhuman animals in the light of a lack of evidence. However, as we will see in section 4, sticking purely to Bratman's view justifies ascribing joint distal intentions to the group activities of some nonhuman animals.
3. The Lean Account Tomasello et al. (2012) explain that the capacity for shared intentionality emerges in human infants around their first year of life, through the following skills: joint attention, cooperative communication, collaboration and instructed learning. There are two interacting developmental trajectories that guide this emergence: (1) the development of the understanding of intentional action and perception; and (2) the development of the capacity for sharing psychological states with others. The latter – Tomasello and colleagues say – is fundamental for the reach of full-blown shared intentionality. This is called the ‘two transitions argument’. Mutually ascribing distal intentions consists in sharing psychological states, since this is how distal intentions are conceived. I focus on the analysis of studies from the laboratory of Tomasello and coworkers that are based on experiments that question the motivational drive of chimpanzees’ social interactions. They conclude that chimpanzees are more skillful in a competitive context, and that this is evidence that they lack the cognitive capacity for shared intentionality. This experimental evidence suggests that chimpanzees understand some of the psychological states of others (Hare et al., 2000, 2001), and that they can even take measures to manipulate those states when it is to their own advantage. But when it comes to a scenario that requires trusting someone else (i.e. the experimenter or another chimpanzee) in order to achieve a goal, chimpanzees show surprisingly weak social cognitive skills. Hare and Tomasello (2004) suggest this might be based on a misunderstanding induced by competitive motivations (Fitch et al., 2010). These kinds of motivations are induced by food-related interactions, in which the successful outcome (i.e. obtaining the food) is calculated on the basis of the satisfaction of a contingent need. 
In agreement with this line of thought, Hare and Tomasello (2004) demonstrated that chimpanzees perform more skilfully when competing than when cooperating, leading to the assumption that there must be a facilitative cognitive effect for competition. Hare et al. (2001) have proposed the ‘competitive cognition hypothesis’: chimpanzees have been shown to be more skilled and motivated when engaged in competitive, rather than in collaborative, cognitive tasks. The model that has been used to corroborate this assumption is a series of variations of the ‘object choice paradigm’ task (designed by Tomasello et al., 1997).6 This experiment, based on hidden objects (mainly food), represents a single task setting within which chimpanzees experience social cues in either a competitive or a collaborative context. The setting of the experiments is the following: on a table there is some food and two cups. The experimenter hides the food under one of the cups and the chimpanzee, who does not know under which cup the food is hidden, has to choose the right cup. The choice is meant to be informed by the movement performed by the experimenter, which is either a collaborative attitude (i.e. ‘pointing to’) or 347
a competitive attitude (i.e. 'trying to reach') towards the cup that hides the object. The results have shown that chimpanzees are not highly responsive to collaboration in this experimental setting. This seems to support the claim of the 'competitive cognition hypothesis', in that the experiment shows the limited range of possible interpretations of the meaning of an action that a chimpanzee can master. From these findings, the conclusion can be drawn that chimpanzees do not appreciate the intentional significance of the partner's action (Hare & Tomasello, 2004). In addition, it can be argued that this is consistent with a motivational hypothesis, which means that the motivations to succeed were stronger during competition. One reading of the experiment draws on Nicholas Humphrey's 'social intelligence hypothesis' (Humphrey, 1976), according to which the major engine of primate cognitive evolution was social competition. He explored this claim by analysing the way in which primates develop and deploy strategies about the selection of partners (and timing) in cooperation, defection and deception. This idea, and those that followed from it (e.g. Byrne & Whiten, 1988), focused on the evolutionary advantages of competitive behaviour. According to Humphrey, primates' evolution was characterized by group activities in which individual payoffs are the motivational input towards collaboration. Lev Vygotsky (1978) also emphasized the social dimension of intelligence, but he focused on human primates and cultural artifacts. The resolution of this apparent conflict – competition versus cooperation in the formation of primate cognitive skills – is consistent with the results of the previously reported experiments. In fact, while Humphrey applied his ideas to nonhuman primates, Vygotsky referred, mostly, to human infants.
The so-called 'Vygotskian Intelligence Hypothesis' (Tomasello et al., 2005) defends the proposal that primate cognition in general was driven mainly by social competition, but beyond that the unique aspects of human cognition were driven by, or even constituted by, social cooperation. This is a position that is gathering increasing consensus in evolutionary anthropology (e.g. Dunbar, 1998). Evidence has been provided for this Vygotskian intelligence hypothesis by comparing the social-cognitive skills of great apes with those of young human children in several domains of activity involving cooperation and communication with others (Moll & Tomasello, 2007). It has been argued that regular participation in cooperative, cultural interactions during ontogeny leads children to construct uniquely powerful forms of perspectival cognitive representations. Evidence suggests that human beings cooperate with one another and help one another (including non-kin) in ways not found in other animal species (Warneken & Tomasello, 2006). This is almost certainly so, and the current results demonstrate that even very young children have a natural tendency to help other people achieve a goal, even when the other is a stranger and they receive no benefit at all (by benefit meaning, in this particular case, an immediate reward or one in the near future). Chimpanzees do not seem to engage in the kinds of collaborative activities that humans do. The fact that chimpanzees act individually is consistent with what I have previously explained about their competitive attitudes in group activities, by describing how nonhuman primates seem to be incapable of understanding that a movement like pointing with a hand is totally different from grasping in order to try to reach something, even though both movements consist in an arm's extension.
The fact that the hidden object that constitutes the reward of a successful selection in the 'object choice paradigm' is food is not negligible. Quite the opposite: it is a crucial condition to take into account when discussing the results of the experimental trials. And we can, perhaps, ask ourselves whether, in an experimental scenario, the possible presence of something other than food would motivate the chimpanzees to make the effort to choose at all. Experiments like these lead to the shaping of the Lean Account. On the subject of what it takes to live in a complex social world, this view has it that, as far as nonhuman primates are concerned, developing individuals need to acquire skills that will enable them to either reach
a dominant position or obtain the needed resources through ways other than genuine shared agency. These other ways consist in immediate cooperative–competitive interactions (Tomasello & Call, 1997) that establish situation-specific coalitions and alliances. These coordinated social interactions depend on the ability of individuals to predict others' behaviour at a given time. This ability requires a certain amount of social knowledge, which in nonhuman primates, as shown, appears to be limited to the domains of food and mating. In these circumstances some species of primates regularly form coalitions and alliances, defined as "cooperation for competition" (Tomasello & Call, 1997, p. 206). As explained by Tomasello and Hamann (2012, p. 8):

During the hunting, whereas each chimpanzee is trying to get the monkey for itself, and not helping the others in their roles at all, human hunters do such things for their partners as giving them weapons, clearing trails for them, carrying their child or weapon, repairing a weapon, instructing collaborators in best techniques, and so forth (Hill, 2002). The short story is thus that chimpanzees have no joint goal that "we" capture it and share it, helping the other in his role as needed, and no sense of commitment in either direction.

A recent study highlights that the limitations that Tomasello and colleagues are worrying about are more likely to be motivational than cognitive. Melis and Tomasello (2013) present the results of the first experimental study showing chimpanzees manifesting collaborative tendencies when engaging in the same coordinated action. Chimpanzees display attention to the partner, and some form of knowledge about the role played by the partner. Melis and Tomasello are cautious about over-interpreting these findings.
But, at the very least, this study shows that chimpanzees have some appreciation of the fact that the presence of an interacting partner is related to their own goal, and, crucially, that the engagement of the partner will determine the success or failure of the intended action (they also observe a number of limitations that I will not discuss here, but that are analysed at length in Melis & Tomasello, 2013). As I shall stress again, what is interesting for present purposes is that the limitations in the collaborative performances of the chimpanzees tested by Melis and Tomasello might be more motivational than cognitive. This means that if chimpanzees choose not to cooperate, this is due not to a lack of understanding of the social dynamics on their side, but to a lack of interest in the offer: given the choice, chimpanzees prefer to act alone, but, if needed, they interact collaboratively. Thus, from the perspective of the Lean Account, the ‘can’ question becomes a ‘want’ question. The motivational question is a fascinating one, but it is not the one addressed here, and not the one we need to worry about in order to explore a richer account. A richer account aims to give more credit to the cognitive capacities of nonhuman animals, and to disentangle them from the motivational capacities. Following the Rich Account, I maintain that cooperation for competition is nicely explained as being triggered by the mutual ascription of distal intentions.
4. The Rich Account

Arguably, Taï chimpanzees’ group hunting is the most sophisticated group activity in the animal kingdom outside of human societies (Tomasello, 2014, p. 35). This section analyses whether group hunting is a specific type of joint planned action, and thus whether this practice is an instance of shared intentionality. Towards the end of the section additional evidence from the field is presented in support of the Rich Account.
Angelica Kaufmann
According to Boesch, from an evolutionary perspective it is important to stress the value of the studies on group hunting because they highlight the difference between captive studies and studies in the wild. Captive studies “inform us about the potential of some individuals within the special situation of their upbringing and social interactions”, but studies of wild individuals “inform us about the abilities that contribute to their biological success” (Boesch and Boesch-Achermann, 2000, p. 226). Boesch is convinced that captive and wild chimpanzees possess different cognitive abilities, and that their intelligence may be domain-specific. The Lean Account is challenged by many years of field studies on wild chimpanzees, and by fieldwork in semi-natural environmental conditions. According to Boesch (2005, p. 292):

Ignoring most published evidence on wild chimpanzees, Tomasello et al.’s claim that shared goals and intentions are uniquely human amounts to a faith statement. A brief survey of chimpanzee hunting tactics shows that group hunts are compatible with a shared goals and intentions hypothesis. The disdain of observational data in experimental psychology leads some to ignore the reality of animal cognitive achievements. . . . Their proposition that the ability to share goals and intentions is a uniquely human capacity rests squarely on the assumption that no other species can do so.

Boesch and colleagues maintain that group hunting among Taï chimpanzees constitutes compelling evidence for the claim that the capacity for mutually ascribing distal intentions is spread among nonhuman primates. Boesch acknowledges that the capacity for shared intentionality should not be over-attributed. As he notes, group hunting is not the rule in social animals: it is cognitively very demanding and it involves the coordinated actions of many individuals.
Various instances of sophisticated social interactions are well documented among different animal species, such as hyenas (Mills, 1990), lions (Stander, 1992), wild dogs (Fanshawe & Fitzgibbon, 1993) and wolves (Mech, 1970). However, it is not a settled issue to what extent these behaviours are evidence of a capacity to share joint distal intentions. For this reason, group hunting practices are divided into three types: synchronic, cooperative and collaborative (Boesch & Boesch-Achermann, 2000). Synchronic hunting is described as the mutual and simultaneous responsiveness of two or more individuals to each other’s actions. Cooperative hunting occurs when two or more individuals are capable of achieving spatio-temporal action coordination. Collaborative hunting consists of synchronic and cooperative attitudes towards the same target, together with the appreciation that each participant in the activity covers a distinctive role which is complementary to the roles covered by the other members of the group. Compared with the group hunting behaviour observed in other animal species, chimpanzee group hunting has a distinctive quality: it is collaborative, and defined as such because in 77% of the 274 group hunts followed, Taï chimpanzees performed the four hunting roles that the successful outcome of the hunting practice demands. This very high percentage distinguishes the hunting practice of chimpanzees from those of other animal species. It is a very specialized activity among Taï chimpanzees and it affects the evolution of their social behaviour. Boesch and colleagues have reconstructed the optimal hunting scenario and have thus defined the practice as follows: it is a coalition of individuals, normally up to four, that requires a strategic coordination of the action of each member towards the same target: “individuals actively take part in a hunt by placing themselves in positions where they could perform a capture” (Boesch and Boesch-Achermann, 2000, p. 174).
They explain that chimpanzees are selective toward a target prey, the red colobus monkey. These middle-sized mammals are very abundant in the highest strata of the Taï Forest canopy. Their hunting practice lasts
approximately 16 minutes and is a two-step procedure: first, localization (by auditory cues), and second, capture. The search for red colobus has been reported to be intentional half of the time (Boesch and Boesch, 1989). Crucially, the success of the coordinated action is determined by the stability of the roles covered by the driver, the blocker, the chaser and the ambusher. When the hunt starts the driver tries to force the escaping red colobus monkey into a specific trajectory through the canopy. Simultaneously, on the one hand, the blocker has to keep track of the prey so that it will not deviate from the route imposed by the driver; on the other hand, the chaser, climbing the trees under the colobus monkey, monitors the movements of the prey and attempts a catch. If this first try fails, the ambusher eventually blocks the last escape left to the red colobus monkey, and traps the prey by climbing in front of it. It is plausible that the roles are assigned on the basis of the current position of a chimpanzee at the moment when the hunt is planned to happen. Interestingly, when the hunt is successful, the reward is shared in proportion to the difficulty and the exposure to risk that each chimpanzee has undertaken.7 Whoever is more experienced works harder, and whoever works harder receives a larger share of the spoils. This has frequently been mistaken for a dominance mechanism, in which the distribution of the plunder always favours the elder members of the community (Boesch & Boesch-Achermann, 2000, p. 189). But the existence of such a sophisticated system of reciprocity at play in these hunting practices seems to have disproven such an interpretation (for competing evidence, see Melis et al., 2006; Greenberg et al., 2010; Warneken et al., 2011). This suggests that there might be a relatively stable normative or, we can even say, proto-institutional component.
But I do not intend to pursue this issue further here (see Boesch & Boesch-Achermann, 2000). To sum up, the description of the optimal scenario for the performance of a successful hunt is the result of a collaborative attitude within a hunting group. This activity is regulated by the following features: (a) a mechanism for individual recognition, (b) temporary memory for actions in the recent past, (c) attribution of value to those actions, and (d) social enforcement of those values. These features result in the capacity to understand the behaviour of another species, i.e. the red colobus monkey, and in the capacity to coordinate actions among individuals towards a common goal. The features that account for the establishment of genuine joint actions can be nicely traced in these reports from the field. The hunter, Christophe Boesch (2009, pp. 100–101) says,

not only has to anticipate the direction in which the prey will flee (recorded as a half anticipation), but also the speed of the prey so as to synchronize his movements to reach the correct height in the tree before the prey enters it (recorded as a full anticipation). . . . We also recorded a double anticipation when a hunter not only anticipates the actions of the prey, but also the effect the action of other chimpanzees will have on the future movements of the colobus, that is he does not anticipate what he sees (the escaping colobus), but how a future chimpanzee tactic will further influence the escaping monkeys.

As reported by Boesch and Boesch-Achermann (2000, p. 189):

Taï chimpanzees hunt very regularly and have developed a sophisticated system of reciprocity in which hunters are rewarded for their contribution, not only for their participation in the hunt, but also for the type of contribution they make during the hunt. This indicates that they evaluate the contribution of different individuals and have a fine graded control of cheaters that pretend to hunt by adopting low-cost
tactics. Learning hunting tactics requires years, and they are fully acquired only by the older males within the community.

Lastly, it should be noted that Taï chimpanzees’ success in group hunting increases with a higher number of participants in the hunt. During the hunt each participant has to synchronize and spatially coordinate with the others, and sometimes anticipate their actions. This is crucial. Hunters can perform different complementary roles and change roles during the same hunt. This is evidence that they are capable of perspective taking and role reversal (see section 2). Boesch claims that group hunting behaviour fulfils the criteria set by Tomasello et al. for a social interaction to count as established by the capacity to mutually ascribe distal intentions. Tomasello pointed out that Boesch does not distinguish between goal and intention; however, I attempted to draw this distinction by appealing to Bratman’s view, showing that joint distal intentions can drive shared agency in chimpanzees. The research on group hunting among the chimpanzee populations of the Ivory Coast suggests that certain nonlinguistic animals have a rather rich understanding of the social nature of their joint successes. And the explanation for this rich understanding may lie in their capacity for a rudimentary form of shared intentionality. Recalling the discussion of the set of skills for shared intentionality (section 2), there is arguably more than the defenders of the Lean Account concede with respect to the third skill – the capacity for joint action planning. Another compelling case study, which I mentioned at the beginning of the chapter, is snowball play-fighting among Japanese macaques (Macaca fuscata), also known as ‘snow monkeys’.
Japanese macaques living in Nagano can establish distinctively sophisticated group activities: they have been observed to construct snowballs and engage in play-fights (Eaton, 1972; Huffman & Quiatt, 1986; more recent evidence for joint activities comes also from van Hooff & Lukkenaar, 2015). The reason to pick this evidence is that it is one of the few instances of joint action where the purpose of the interaction is overt: it is a game. And the same goal of playing with manufactured snowballs is shared by each one of the agents taking part in the group activity. It would be redundant to invoke other psychological explanations, such as motivation to compete or dominance display. The conditions in which this behaviour occurs are not triggered by any vital needs; it is spontaneous and the playing scenario is most likely a genuine joint venture. If this is the case, we should explain this group behaviour as instantiated by a mutual appreciation of the distal intentions of others. Snowball play-fighting practices, in particular, play a crucial role in revealing the nature of nonhuman animals’ social interactions. For this reason the social character of these activities should not be overlooked (Leca et al., 2012). To recapitulate, the debate on shared intentionality can be reduced to two main views, which I have called the Rich Account and the Lean Account. According to the Rich Account, animals have a common goal and take complementary roles in their hunting (Boesch & Boesch-Achermann, 2000; Tomasello, 2008, pp. 173–174). To date, this account lacks support from a theoretical background. According to the Lean Account, instead, in a hunting scenario each participant is attempting to maximize its own chances of catching the prey without any kind of prior joint plan or agreement or assignment of roles (Tomasello, 2008, pp. 174–178; Tomasello, 2014).
More specifically, while human infants understand joint activity from a ‘bird’s-eye view’, with the joint goals and complementary roles all in a single representational format, chimpanzees understand their own actions from a first-person perspective and that of the partner from a third-person perspective (Hare & Tomasello, 2004). The Lean Account is theoretically supported by the philosophical assumption (Bratman, 1992; Gilbert, 1989) that: “the sine qua non for collaborative action is a joint goal and a joint commitment among participants to pursue it together, with a
mutual understanding among all that they share this joint goal and commitment” (Tomasello, 2008, p. 181).
5. Conclusion

This chapter has confronted two views about the debated species-specific uniqueness of shared intentionality, the capacity for cooperation that multiple agents display upon a mutual appreciation of joint distal intentions. These psychological states are understood as mental representations of goals to which joint plans are directed. According to the leading philosophical view on action planning, Bratman’s theory of shared agency, mutually ascribing distal intentions does not imply having shared distal intentions with the exact same mental content. This chapter has defended the idea that mutually ascribing distal intentions does not mean that two or more individuals have joint distal intentions with the exact same mental content, because this is empirically undeterminable in language-less animals. What mutually ascribing distal intentions requires is just awareness of the content of the distal intentions of the others. And this latter condition is all that is required to pursue joint distal intentions and plan joint actions. In particular, the evidence on hunting practices suggests that lacking motivation for spontaneous collaboration does not imply lacking the cognitive capacities for articulated joint actions.
Notes

1 Corresponding author: Angelica Kaufmann, [email protected]
2 This definition draws on a specific view that I endorse: the causal theory of action, which I take to be the dominant view, at least among the advocates of dual or threefold intention theories (Anscombe, 1963; Davidson, 1963; Bach, 1978; Searle, 1983; Brand, 1984; Mele, 1992; Pacherie, 2000). The causal theory can be nicely framed in Alfred Mele’s description: “The causal contribution of intention is traceable both to motivational aspects of intention and to representational features. Intentions move us to act in virtue of their motivational properties and guide our intentional behavior in virtue of their representational qualities” (Mele, 1990, p. 289).
3 The distinction between two kinds of intention has been appreciated and illustrated by many philosophers, who have defended various “dual-intention theories” (Searle, 1983; Brand, 1984; Bratman, 1987; Mele, 1992; Pacherie, 2008 – here a threefold-intention theory is presented). The cognitive function of intentions that guide proximal actions is the monitoring and guidance of ongoing bodily movements (Brand, 1984), while the cognitive function of intentions that guide future actions is the monitoring and guidance of a plan. I am only concerned with the latter, which I here call distal intentions.
4 The following authors disputed the perspective-taking interpretation: Povinelli and Vonk (2003); Povinelli and Eddy (1996); Penn and Povinelli (2007); Reaux et al. (1999); Karin-D’Arcy and Povinelli (2002).
5 See especially: Bratman (1992, 2014); Gilbert (1989, 1990, 2010); Searle (1990, 1995); Tuomela (1995); Gallotti (2011); Gold and Sugden (2007); Kutz (2000); Ludwig (2007); Miller (2001); Schmid (2009); Seemann (2009); Smith (2011); Tuomela and Miller (1988); Tuomela (2005).
6 For a different experimental model see, for instance, the “Ultimatum Game” paradigm in Jensen et al. (2006).
7 It is worth saying that hunting behaviour is learned in a process that starts around 8–10 years of age, and takes about 20 years of practice to master. Different roles in the hunt require different levels of expertise and can be performed by more or less experienced individuals.
References

Anscombe, G.E.M. (1963). Intention. Oxford: Blackwell.
Bach, K. (1978). A representational theory of action. Philosophical Studies, 34(4), 361–379.
Bloom, P. & German, T. (2000). Two reasons to abandon the false belief task as a test of theory of mind. Cognition, 77, B25–B31.
Bloom, P. & Veres, C. (1999). The perceived intentionality of groups. Cognition, 71, B1–B9.
Boesch, C. (2009). Complex cooperation among Taï chimpanzees. In F.B.M. de Waal and P. L. Tyack (Eds.), Animal Social Complexity: Intelligence, Culture, and Individualized Societies (pp. 99–110). Cambridge, MA: Harvard University Press.
———. (2005). Joint cooperative hunting among wild chimpanzees: Taking natural observations seriously. Behavioral and Brain Sciences, 28, 692–693.
Boesch, C. & Boesch, H. (1989). Hunting behavior of wild chimpanzees in the Taï National Park. American Journal of Physical Anthropology, 78, 547–573.
Boesch, C. & Boesch-Achermann, H. (2000). The Chimpanzees of the Taï Forest. Oxford: Oxford University Press.
Boesch, C. & Tomasello, M. (1998). Chimpanzee and human cultures. Current Anthropology, 39(5), 591–614.
Bonnie, K. E. & de Waal, F.B.M. (2006). Affiliation promotes the transmission of a social custom: Handclasp grooming among captive chimpanzees. Primates, 47, 27–34.
Brand, M. (1984). Intending and Acting: Toward a Naturalized Theory of Action. Cambridge, MA: Bradford-MIT.
Bratman, M. E. (1987). Intentions, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
———. (1992). Shared cooperative activity. The Philosophical Review, 101(2), 327–341.
———. (2014). Shared Agency: A Planning Theory of Acting Together. Oxford: Oxford University Press.
Bräuer, J., Call, J. & Tomasello, M. (2007). Chimpanzees really know what others can see in a competitive situation. Animal Cognition, 10, 439–448.
Byrne, R. W. & Whiten, A. (1988). Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford: Clarendon Press.
Call, J. & Tomasello, M. (2005). What chimpanzees know about seeing revisited: An explanation of the third kind. In C. Hoerl, N. Eilan, T. McCormack and J. Roessler (Eds.), Joint Attention (pp. 45–64). Oxford: Oxford University Press.
Davidson, D. (1963).
Actions, Reasons, and Causes. Journal of Philosophy, 60(23), 685–700.
de Waal, F.B.M. & Ferrari, P. F. (2010). Towards a bottom-up perspective on animal and human cognition. Trends in Cognitive Sciences, 14, 201–207.
Dunbar, R.I.M. (1998). Grooming, Gossip and the Evolution of Language. London: Faber & Faber.
Eaton, G. (1972). Snowball construction by a feral troop of Japanese macaques (Macaca fuscata) living under seminatural conditions. Primates, 13(4), 411–414.
Fanshawe, J. H. & Fitzgibbon, C. D. (1993). Factors influencing the hunting success of an African wild dog pack. Animal Behaviour, 45, 479–490.
Fitch, T., Huber, L. & Bugnyar, T. (2010). Social cognition and the evolution of language: Constructing cognitive phylogenies. Neuron, 65, 1–21.
Gallotti, M. (2011). Review: Why We Cooperate. Economics and Philosophy, 27, 183–190.
Gilbert, M. (1989). On Social Facts. London: Routledge.
———. (1990). Walking together: A paradigmatic social phenomenon. Midwest Studies in Philosophy, 15, 1–14.
———. (2010). Collective action. In T. O’Connor and C. Sandis (Eds.), A Companion to the Philosophy of Action (pp. 67–73). Oxford: Blackwell.
Gold, N. & Sugden, R. (2007). Collective intentions and team agency. Journal of Philosophy, 104(3), 109–137.
Gómez, J.-C. (1996). Non-human primate theories of (non-human primate) minds: Some issues concerning the origins of mind-reading. In P. Carruthers and P. K. Smith (Eds.), Theories of Theories of Mind (pp. 330–343). Cambridge: Cambridge University Press.
Greenberg, J. R., Hamann, K., Warneken, F. & Tomasello, M. (2010). Chimpanzee helping in collaborative and noncollaborative contexts. Animal Behaviour, 80, 873–880.
Hamann, K., Warneken, F., Greenberg, J. R. & Tomasello, M. (2011). Collaboration encourages equal sharing in children but not in chimpanzees. Nature, 476, 328–331 (Letter, 18 August 2011).
Hare, B., Call, J., Agnetta, B. & Tomasello, M. (2000). Chimpanzees know what conspecifics do and do not see.
Animal Behaviour, 59, 771–785.
Hare, B., Call, J. & Tomasello, M. (2001). Do chimpanzees know what conspecifics know and do not know? Animal Behaviour, 61, 139–151.
———. (2006). Chimpanzees deceive a human competitor by hiding. Cognition, 101(3), 495–514.
Hare, B. & Tomasello, M. (2004). Chimpanzees are more skilful in competitive than in cooperative cognitive tasks. Animal Behaviour, 68, 571–581.
Heyes, C. (1998). Theory of mind in nonhuman primates. Behavioral and Brain Sciences, 21, 101–134.
Hill, K. (2002). Altruistic cooperation during foraging by the Ache, and the evolved human predisposition to cooperate. Human Nature, 13(1), 105–128.
Horner, V., Bonnie, K. E. & de Waal, F.B.M. (2005). Identifying the motivations of chimpanzees: Culture and collaboration. Behavioural and Brain Sciences, 28, 704–705.
Horner, V., Carter, J. D., Suchak, M. & de Waal, F.B.M. (2011). Spontaneous prosocial choice by chimpanzees. PNAS, 108(33), 13847–13851.
Horner, V., Proctor, D., Bonnie, K. E., Whiten, A. & de Waal, F.B.M. (2010). Prestige affects cultural learning in chimpanzees. PLoS ONE, 5(5), 1–5.
Huffman, M. A. & Quiatt, D. (1986). Stone handling by Japanese macaques (Macaca fuscata): Implications for tool use of stones. Primates, 27, 413–423.
Humphrey, N. K. (1976). The social function of intellect. In P.P.G. Bateson and R. A. Hinde (Eds.), Growing Points in Ethology (pp. 303–317). Cambridge, UK: Cambridge University Press.
Jensen, K., Hare, B., Call, J. & Tomasello, M. (2006). What’s in it for me? Self-regard precludes altruism and spite in chimpanzees. Proceedings of the Royal Society B: Biological Sciences, 273(1589), 1013–1021.
Karin-D’Arcy, M. & Povinelli, D. J. (2002). Do chimpanzees know what each other see? A closer look. International Journal of Comparative Psychology, 15, 21–54.
Kaufmann, A. (2015). Animal mental action: Planning among chimpanzees. Review of Philosophy and Psychology, 6(4), 745–760.
Kutz, C. (2000). Acting together. Philosophy and Phenomenological Research, 61(1), 1–31.
Leca, J.-B., Gunst, N. & Huffman, M. A.
(2012). Thirty years of stone handling tradition in Arashiyama-Kyoto macaques: Implications for cumulative culture and tool use in non-human primates. In J.-B. Leca, M. A. Huffman and P. L. Vasey (Eds.), The Monkeys of Stormy Mountain: 60 Years of Primatological Research on the Japanese Macaques of Arashiyama (pp. 223–257). Cambridge: Cambridge University Press.
Ludwig, K. (2007). Collective intentional behavior from the standpoint of semantics. Nous, 41(3), 355–393.
Matsuzawa, T. (2001). Primate Origins of Human Cognition and Behavior. Hong Kong: Springer-Verlag.
McGrew, W. C. & Tutin, C.E.G. (1978). Evidence for a social custom in wild chimpanzees? Man (n.s.), 13, 234–251.
Mech, D. L. (1970). The Wolf. New York: Natural History Press.
Mele, A. (1990). Exciting intentions. Philosophical Studies, 59, 289–312.
———. (1992). Springs of Action: Understanding Intentional Behaviour. New York: Oxford University Press.
Melis, A. P., Hare, B. & Tomasello, M. (2006). Engineering cooperation in chimpanzees: Tolerance constraints on cooperation. Animal Behaviour, 72, 275–286.
Melis, A. P. & Tomasello, M. (2013). Chimpanzees’ (Pan troglodytes) strategic helping in a collaborative task. Biology Letters, 9, 20130009.
Miller, S. (2001). Social Action: A Teleological Account. Cambridge: Cambridge University Press.
Mills, M.G.L. (1990). Kalahari Hyenas: The Comparative Behavioural Ecology of Two Species. London: Chapman and Hall.
Moll, H. & Tomasello, M. (2007). Cooperation and human cognition: The Vygotskian intelligence hypothesis. Philosophical Transactions of the Royal Society (Biological Sciences), 362(1418), 639–648.
Nakamura, M. (2002). Grooming-hand-clasp in Mahale M group chimpanzees: Implication for culture in social behaviors. In C. Boesch, G. Hohmann and L. F. Marchant (Eds.), Behavioural Diversity in Chimpanzees and Bonobos (pp. 71–83). Cambridge: Cambridge University Press.
Pacherie, E. (2000). The content of intentions.
Mind and Language, 15(4), 400–432.
Peacocke, C. (2014). Interpersonal self-consciousness. Philosophical Studies, 170(1), 1–24.
Penn, D. C. & Povinelli, D. J. (2007). On the lack of evidence that non-human animals possess anything remotely resembling a ‘theory of mind’. Philosophical Transactions of the Royal Society B, 362, 731–744.
Povinelli, D. J. & Eddy, T. J. (1996). What young chimpanzees know about seeing. Monographs of the Society for Research in Child Development, 61, 1–152.
Povinelli, D. J. & Vonk, J. (2003). Chimpanzee minds: Suspiciously human? Trends in Cognitive Sciences, 7, 157–160.
Premack, D. & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1, 515–526.
Reaux, J., Theall, L. & Povinelli, D. J. (1999). A longitudinal investigation of chimpanzees’ understanding of visual perception. Child Development, 70, 275–290.
Schmid, H. B. (2009). Plural Action: Essays in Philosophy and Social Science. Dordrecht, NL: Springer.
Searle, J. R. (1990). Collective intentions and actions. In P. Cohen, J. Morgan and M. Pollack (Eds.), Intentions in Communication (pp. 90–105). Cambridge, MA: MIT Press. Reprinted in Searle, J. R. (2002). Consciousness and Language. Cambridge: Cambridge University Press, 90–105.
———. (1995). The Construction of Social Reality. New York: The Free Press.
Seemann, A. (2009). Why we did it: An Anscombian account of collective action. International Journal of Philosophical Studies, 17(5), 637–655.
Smith, T. H. (2011). Playing one’s part. Review of Philosophy and Psychology, 2(2), 213–244.
Stander, P. E. (1992). Cooperative hunting in lions: The role of the individual. Behavioural Ecology and Sociobiology, 29, 445–454.
Tomasello, M. (1999). The Cultural Origins of Human Cognition. Cambridge, MA: Harvard University Press.
———. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press.
Tomasello, M. & Call, J. (1997). Primate Cognition. Oxford: Oxford University Press.
———. (2006). Do chimpanzees know what others see – or only what they are looking at? In S. L. Hurley and M. Nudds (Eds.), Rational Animals? (pp. 371–384). Oxford: Oxford University Press.
———. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192.
Tomasello, M., Call, J. & Gluckman, A. (1997). Comprehension of novel communicative signs by apes and human children. Child Development, 68, 1067–1080.
Tomasello, M., Call, J.
& Hare, B. (2003). Chimpanzees understand psychological states – the question is which ones and to what extent. Trends in Cognitive Sciences, 7(4), 153–156.
Tomasello, M. & Carpenter, M. (2007). Shared intentionality. Developmental Science, 10(1), 121–125.
Tomasello, M., Carpenter, M., Call, J., Behne, T. & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28, 675–735.
Tomasello, M. & Hamann, K. (2012). Collaboration in young children. The Quarterly Journal of Experimental Psychology, 65(1), 1–12.
Tomasello, M., Melis, A., Tennie, C., Wyman, E. & Herrmann, E. (2012). Two key steps in the evolution of human cooperation: The interdependence hypothesis. Current Anthropology, 53(6), 673–692.
Tuomela, R. (2005). We-intentions revisited. Philosophical Studies, 125(3), 327–369.
Tuomela, R. & Miller, K. (1988). We-intentions. Philosophical Studies, 53(3), 367–389.
van Hooff, J.A.R.A.M. & Lukkenaar, B. (2015). Captive chimpanzee takes down a drone: Tool use toward a flying object. Primates, 56(4), 289–292. DOI 10.1007/s10329-015-0482-2.
Vygotsky, L. S. (1978). Mind in Society. Cambridge, MA: Harvard University Press.
Warneken, F., Lohse, K., Melis, A. P. & Tomasello, M. (2011). Young children share the spoils after collaboration. Psychological Science, 22(2), 267–273.
Warneken, F. & Tomasello, M. (2006). Altruistic helping in human children and young chimpanzees. Science, 311, 1301–1303.
20
JOINT ACTION
A minimalist approach

Stephen Butterfill
Introduction

There are phenomena, call them joint actions, paradigm cases of which are held to involve two people painting a house together (Bratman 1992), lifting a heavy sofa together (Velleman 1997), preparing a hollandaise sauce together (Searle 1990), going to Chicago together (Kutz 2000), and walking together (Gilbert 1990). In developmental psychology paradigm cases of joint action include two people tidying up the toys together (Behne, Carpenter, and Tomasello 2005), cooperatively pulling handles in sequence to make a dog-puppet sing (Brownell, Ramani, and Zerwas 2006), bouncing a block on a large trampoline together (Tomasello and Carpenter 2007), and pretending to row a boat together. Other paradigm cases from research in cognitive psychology include two people lifting a two-handled basket (Knoblich and Sebanz 2008), putting a stick through a ring (Ramenzoni et al. 2011), and swinging their legs in phase (Schmidt and Richardson 2008, 284). What feature or features distinguish joint actions such as these from events involving multiple agents who are merely acting in parallel? This is a useful question to pursue because joint action raises a tangle of scientific and philosophical questions. Psychologically and neuroscientifically we want to know which mechanisms make it possible (Sebanz, Bekkering, and Knoblich 2006; Vesper et al. 2010; Sacheli et al. 2015). Developmentally we want to know when joint action emerges, what it presupposes, and whether it might somehow facilitate socio-cognitive, pragmatic, or symbolic development (Moll and Tomasello 2007; Hughes and Leekam 2004; Brownell, Ramani, and Zerwas 2006). Phenomenologically we want to characterise what (if anything) is special about experiences of action and agency when collective agency is involved (Pacherie 2010). Metaphysically we want to know what kinds of entities and structures are implied by the existence of joint action (Helm 2008; Searle 1994).
And normatively we want to know what kinds of commitments (if any) are entailed by joint action and how these commitments arise (Roth 2004; Gilbert 2013). For investigating any of these questions it would be useful to understand which feature or features distinguishes joint actions from events involving multiple agents who are merely acting in parallel. Could coordination be this distinguishing feature? Compare two sisters cycling to school together in a way that sisters characteristically cycle together with two strangers who, some way behind the sisters, happen to be cycling the same route side-by-side. Both pairs of cyclists need to coordinate their actions in order to avoid colliding, but only the former is a joint action. So
the bare fact that actions are coordinated, even very tightly coordinated in ways that require expertise, cannot be what distinguishes joint action from events involving multiple agents who are merely acting in parallel. Another initially tempting idea is that common effects distinguish joint actions. When members of a flash mob in Café Central respond to a pre-arranged cue by noisily opening their newspapers, they perform a joint action with a common effect. But when someone not part of the mob just happens to noisily open her newspaper in response to the same cue, her action is not part of any joint action. Yet her action together with the actions of the flash mob members have a common effect in startling the people around them. So what distinguishes joint actions from events involving multiple agents who merely act in parallel can’t be just that joint actions have common effects. At this point it is natural to appeal to intention. Perhaps joint action occurs when there is an act-type, ϕ, such that each of several agents intends that they, these agents, ϕ together, and their actions are appropriately related to these intentions. Does the appeal to togetherness make this proposal circular? Not as long as we understand ‘together’ only in the sense in which the three legs of a tripod can support a flask together. Can people ever have intentions which concern not only their own actions but also each other’s? Yes, at least if whether each person’s intention persists depends on the others’ intentions persisting (Bratman 1993). Appealing to intention seems to take us further than the first two ideas (coordination and common effects). Consider the cycling sisters again. Cycling together in the way that sisters characteristically cycle together plausibly involves each sister intending that they, the two sisters, cycle to school together.
But nothing like this is characteristic of strangers who just happen to be cycling side-by-side – neither is likely to have intentions concerning whether the other gets to school. So have we already identified what distinguishes joint action? Not yet. Imagine two sisters who, getting off an aeroplane, tacitly agree to exact revenge on the unruly mob of drunken hens behind them by positioning themselves so as to block the aisle together. This is a joint action. Meanwhile on another plane, two strangers happen to be so configured that they are collectively blocking the aisle. The first passenger correctly anticipates that the other passenger, who is a complete stranger, will not be moving from her current position for some time. This creates an opportunity for the first passenger: she intends that they, she and the stranger, block the aisle together. And, as it happens, the second passenger’s thoughts mirror the first’s. So the feature under consideration as distinctive of joint action is present: each passenger is acting on her intention that they, the two passengers, block the aisle. But the contrast between this case and the sisters exacting revenge suggests that these passengers are not taking part in a joint action – at least, theirs is not the kind of joint action associated with the paradigm cases mentioned at the start of this chapter. Apparently, then, our being involved in a joint action can’t be a matter only of there being something such that we each intend that we, you and I, do it together. What are we missing? It’s just here that, in philosophy at least, things get a little wild. Attempts to provide the missing ingredient in characterising joint action include introducing novel kinds of intentions (Searle 1990) or modes (Gallotti and Frith 2013), novel kinds of agents (Helm 2008), and novel kinds of reasoning (Gold and Sugden 2007).
Others suggest embedding intentions in special kinds of commitment (Gilbert 2013), or creating special nested structures of intention and common knowledge (Bratman 2014). Perhaps some or all of these innovations are in some way useful. But are they really needed just to understand how joint actions differ from events involving multiple agents who are merely acting in parallel? The dominant assumption is that they are. To illustrate, consider Gilbert’s position. According to her, all joint action involves shared intention, and our having a shared intention that
we ϕ involves our being jointly committed to emulate a single body with an intention that it ϕ. In order to create the joint commitment necessary for us to have a shared intention, we must each openly express readiness to participate in this commitment. Further, it must be common knowledge among us that we each express such readiness. Her account thus implies that, in order for us to share a smile or carry a two-handled picnic basket together, each of us must know that the other is ready to form a joint commitment to emulate a single body with an intention to share a smile or carry the basket (Gilbert 2013, 334). Few would agree with Gilbert that exactly this nesting of mental states and commitments is necessary for joint action. (This is no reflection on Gilbert – few philosophers would agree in any detail with anyone’s view on what is necessary for joint action.) But many do follow Gilbert in thinking that distinguishing the kind of joint action involved in the examples given at the start of this chapter requires either comparably complex nested structures or novel ingredients. By contrast, Bratman has recently observed, in effect, that introducing such complex structures or novel ingredients is not obviously needed just to distinguish joint action from events involving multiple agents who are merely acting in parallel (Bratman 2014, 105). For all anyone has yet shown, there may be a way of capturing what is distinctive of the kind of joint actions mentioned at the start of this chapter without invoking novel ingredients or structures. This chapter explores the possibility that there is, with the twofold aim of constructing a minimalist theoretical framework for understanding at least simple forms of joint action and illuminating the nature of the intentions and commitments involved in the most sophisticated forms of joint action.
A minimalist approach: first step

Taking a minimalist approach means finding a simplest possible starting point, adding ingredients only as needed, and avoiding as far as possible ingredients which would require the agents to have abilities additional to those already required.1 What determines whether an additional ingredient is needed? The aim is to distinguish joint actions like those mentioned at the start of the chapter from events involving multiple agents who are merely acting in parallel. As a promising starting point, consider a claim from Ludwig’s semantic analysis:

A joint action is an event with two or more agents. (Ludwig 2007, 366)

To illustrate, suppose two hunters each attack a deer. Neither attack was individually fatal but together they were deadly. In this case the hunters are agents of the killing of the deer, so the event counts as a joint action on Ludwig’s proposal. To fully understand Ludwig’s proposal we need to understand what it is for an individual to be among the agents of an arbitrary event and not just an action. This can be done in terms of a notion of grounding which I adapt from a discussion of action by Pietroski (1998). Pietroski identified a simple and elegant way of generalising from the idea that an individual can be the agent of an action to the idea that an individual can be the agent of a larger event. (His account does require a minor correction, but this is not relevant here.) This can be generalised to allow for any number of agents. Let us stipulate that events D1, …, Dn ground E if: D1, …, Dn and E occur; D1, …, Dn are each (perhaps improper) parts of E; and every event that is a proper part of E but does not overlap D1, …, Dn is caused by some or all of D1, …, Dn. Then let us say that for an individual to be among the agents of an event is for there to be actions A1, …, An which ground this event, where the individual is an agent of some (one or more) of these actions.
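The grounding stipulation and the associated notion of being among the agents of an event can be restated compactly. The notation below is mine, not Pietroski’s or Ludwig’s: P(x, y) abbreviates ‘x is a part of y’ (possibly improperly), and O(x, y) abbreviates ‘x overlaps y’; this is a sketch of the prose definition, not a definitive regimentation.

```latex
% Grounding, restated from the prose definition above (notation mine).
% D_1, ..., D_n ground E iff clauses (i)-(iii) all hold:
\begin{align*}
\mathrm{Ground}(D_1,\dots,D_n;\,E) \;\equiv\;{}
  & \text{(i)}\;\; D_1,\dots,D_n \text{ and } E \text{ occur}; \\
  & \text{(ii)}\;\; \forall i\; P(D_i, E)
      \quad\text{(parthood may be improper)}; \\
  & \text{(iii)}\;\; \forall e\,\bigl[\, P(e, E) \wedge e \neq E
      \wedge \neg\exists i\; O(e, D_i) \\
  & \qquad\;\; \rightarrow\; e \text{ is caused by some or all of } D_1,\dots,D_n \,\bigr].
\end{align*}
% Agency over an event: an individual a is among the agents of E iff
% there are actions A_1, ..., A_n which ground E and a is an agent of
% at least one A_i.
```

Clause (iii) captures the requirement that everything in E beyond the grounding events themselves is causally downstream of them; the hunters’ case below satisfies it because the deer’s death and the linking events are caused by the attacks.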
To illustrate, consider the hunters again. Let THE EPISODE be an event comprising only the hunters’ actions, the
deer’s death, and the events causally linking these. Since, for each hunter, there is a set of events including this hunter’s attacking which ground THE EPISODE, we can conclude that THE EPISODE is a joint action on Ludwig’s proposed definition. This definition is too broad. To see why, first recall our premise that one requirement on any account of joint action is this: it should distinguish joint actions like those mentioned at the start of this chapter from events involving multiple agents who are merely acting in parallel. Now consider two ways of elaborating the story about the hunters. In one they are best friends who have set out together with the aim of killing this deer, and they are exhibiting many features associated with paradigm cases of joint action. In the other elaboration, the hunters are bitter rivals completely unaware of each other’s presence. In fact, were either to have suspected the other was present, she would have abandoned the deer in order to target her rival. In both elaborations, Ludwig’s proposal entails that THE EPISODE is a joint action. But whereas the ‘best friends’ elaboration resembles paradigm cases of joint action, the bitter rivals are merely acting in parallel. By itself, Ludwig’s attractively simple proposal is insufficient.2 What is missing from this first attempt to capture joint action? Many joint actions are goal-directed in the sense that, among all their actual outcomes, there is an outcome to which they are directed. Perhaps we can make progress by integrating goal-directedness into our theoretical framework.
Goal-directed joint action

A goal of some actions is an outcome to which they are directed. A goal-state is a state of an agent which specifies an outcome and which is, or could be, related to the agent’s actions in such a way that these actions are directed to the outcome represented. Intentions are perhaps the most familiar kind of goal-state; goals are things like the shooting of a deer or the carrying of a basket. Note that, confusingly, the term ‘goal’ is sometimes used for what I am calling goal-states. Whatever terminology is used, it is essential to distinguish goals from goal-states. From the fact that an action is directed to a particular goal it does not follow that the agent of the action has a goal-state representing this goal (Bratman 1984). It doesn’t even follow that the agent has any goal-states at all if, as some have argued, it is possible to understand what it is for at least some actions to be directed to goals without appeal to goal-states (e.g. Bennett 1976; Taylor 1964). In almost all of the events offered as paradigm cases of joint action in philosophy and psychology, there is a single goal to which all the agents’ actions are directed. To illustrate, return once more to the sisters cycling to school together. Cycling together in the way sisters characteristically cycle together plausibly involves there being a single goal to which the sisters’ actions are all directed. Perhaps the goal is the arrival of the two sisters at their school. By contrast, there is plausibly no goal to which all the actions of the two strangers cycling side-by-side are directed. Note that distinguishing the sisters from the strangers in this way depends on distinguishing same goal from same type of goal. The two strangers’ actions may have goals of the same type; for instance, each stranger’s actions may be directed to her own arrival somewhere. But this does not amount to there being a single goal to which the strangers’ actions are all directed.
After all, if one stranger falls into a hole and is taken to hospital in an ambulance, the other’s actions may still succeed relative to the goal of reaching her destination. Consider a second attempt to characterise joint action, which builds on the earlier attempt by incorporating goals:

A joint action is an event with two or more agents where there is a single goal to which all the actions grounding that event are directed.
Although narrower than the earlier attempt (which was introduced in a previous section [A minimalist approach]), this attempt is still not narrow enough. To see why, consider the deer hunters again. The best friends’ actions are clearly directed to a single goal, that of killing the deer. But so are the bitter rivals’ actions. So this second attempt to characterise joint action still fails to distinguish the kind of joint action characteristic of best friends hunting together from the parallel but merely individual actions of the bitter rivals who end up hunting the same deer. More is needed.
Collective goals

Some predicates can be interpreted either distributively or collectively (see Linnebo 2005 for an introduction). Consider the statement, ‘The injections saved her life.’ This could be true in virtue of her receiving several injections on different occasions, each of which saved her life. In this case, the injections saving her life is just a matter of each injection individually saving her life; this is the distributive interpretation. But the statement is also true if she was given two injections on a single occasion where each injection was necessary but not sufficient to save her life. In this case the injections saving her life is not, or not just, a matter of each injection individually saving her life; this is the collective interpretation. The difference between distributive and collective interpretations is clearly substantial, for on the distributive interpretation the statement can only be true if her life has been saved more than once, whereas the truth of the collective interpretation requires only one life-threatening situation. Just as some injections can be collectively life-saving, so some actions can be collectively directed to a goal. For some actions to have a collective goal is for there to be a single outcome to which the actions are directed where this is not, or not just, a matter of each of the actions being individually directed to that outcome. To illustrate, there is a sense in which some of the actions of swarming bees are directed to finding a nest and this is not, or not just, a matter of each bee’s actions being individually directed to finding a nest. So finding a nest is a collective goal of the bees’ actions. Likewise, when two people use a rope and pulleys to lift a heavy block between them, lifting the block will typically be a collective goal of their actions. In virtue of what could any actions ever have a collective goal? One possibility involves coordination.
To illustrate, return to the deer hunters who are best friends again. Knowing the difficulty of killing a deer, they coordinate so that when one of them startles it, the other is positioned along the deer’s escape route. The coordination ensures that their actions being directed to the killing of the deer is not just a matter of each action’s being directed to this outcome, and so entails that killing the deer is a collective goal of their actions. In general, for an outcome to be a collective goal of some actions it is sufficient that all the actions are coordinated as a means to bringing about this outcome. It is natural to assume that the hunters’ coordination is a consequence of their intentions (at least if they are human rather than, say, lyncine). This may make it tempting to assume that what ultimately determines which actions have which collective goals is not coordination but intention. But this temptation should be resisted. The coordination needed for multiple individuals’ behaviours to have a collective goal can be provided by entirely non-psychological mechanisms, as popular findings about bees (Seeley 2010) and ants (Hölldobler and Wilson 2009, 178–183, 206–221) indicate. It is also likely that not all coordination between humans involves intention (see, for example, Repp and Su 2013, §3, and Knoblich, Butterfill, and Sebanz 2011 on emergent coordination). Where some actions are coordinated in order to bring about an outcome, the actions are collectively directed to that outcome in virtue of being so coordinated.
Staying with the deer hunters for a moment, note that appealing to collective goals enables us to distinguish the bitter rivals who are merely acting in parallel from the best friends who are performing a joint action. The actions of the bitter rivals are directed to a single outcome, that of killing a particular deer; but this is just a matter of each hunter’s actions being directed to this outcome. So killing this deer is not a collective goal of the bitter rivals’ actions. By contrast, the best friends’ actions are (by stipulation) coordinated in a way that would normally increase the probability of their killing the deer. So their actions do have a collective goal. Could appealing to collective goals eventually enable us to distinguish more generally between joint actions and events involving multiple agents who are merely acting in parallel? Consider invoking collective goals to narrow the previous attempt to characterise joint action:

A joint action is an event with two or more agents where the actions grounding that event have a collective goal.

This attempted characterisation is a step forward. But it does not enable us to distinguish all joint actions from events involving multiple agents who are merely acting in parallel. To see why, return to the contrast between two ways of blocking the aisle of an aeroplane (which was mentioned in the introductory section [Introduction]). First, two sisters tacitly agree to block the aisle by positioning themselves side-by-side; this is a paradigm case of joint action. Second, two strangers also succeed in blocking the aisle, although neither guesses the other’s intention and each knows of the other only that she is unlikely to move from her current position. The strangers appear not to be involved in a joint action because from either stranger’s point of view the other is merely a conveniently stationary and sufficiently bulky object.
Yet the strangers’ actions are clearly coordinated: it is no coincidence that they are positioned as they are. Further, their actions are coordinated in order to block the aisle: each seeks to position herself relative to the other in such a way as to prevent passengers getting past. This implies that the strangers’ actions have a collective goal. Once again our attempted characterisation of joint action appears to be too broad. What to do?
Some intentions specify collective goals

We have seen that the contrast between two ways of blocking the aisle, one involving sisters performing a joint action and the other involving strangers performing parallel but merely individual actions, cannot be captured either by a simple appeal to intention (see introductory section [Introduction]) or by a simple appeal to collective goals (see previous section [Collective goals]). But perhaps we can distinguish the joint action from its merely parallel counterpart by invoking intentions which specify collective goals. How might intentions specify collective goals? An obvious possibility is for collective goals to feature in what agents intend. For instance, the sisters blocking the aisle might in principle each intend that they, the two sisters, perform actions which have the collective goal of blocking the aisle and succeed relative to this intention. If we were to suppose that intentions about collective goals are a characteristic feature of joint action, then we would have a simple way of distinguishing joint actions from parallel actions like those of the strangers blocking the aisle. After all, the strangers blocking the aisle cannot rationally intend that their actions have blocking the aisle as a collective goal (because each believes the other is not performing actions directed to this goal).
But are intentions about collective goals a characteristic feature of joint action? Whereas joint action is a pervasive feature of everyday life, it would be surprising to discover that intentions about collective goals are similarly pervasive. After all, having the intention about the collective goal appears to require understanding quite generally what collective goals are. This motivates considering whether intentions might specify collective goals other than by virtue of being part of what agents intend. Start by thinking about ordinary, individual action. Consider events in which a vertical stroke is made with a pencil. On some occasions this is realised by an action directed to the goal of making a vertical stroke with a pencil. On other occasions it is realised by something not directed to any such goal; perhaps you are jolted while holding a pencil over some paper. It is a familiar idea that, often at least, someone who intends to make a stroke with a pencil has an intention that would not be fulfilled if, say, she was jolted while holding the pencil in such a way as to make a vertical stroke. Instead, fulfilling this particular intention requires performing an action directed to the goal of making a vertical stroke with a pencil. Should we infer that what she intends is not simply to make a vertical stroke but to perform an action directed to the goal of making a vertical stroke and to succeed relative to this goal? No. Either intending to make a vertical stroke just is intending the thing about the goal, or else the thing about the goal enters the satisfaction conditions of the intention without being part of what she intends. Either way, having an intention that locks onto a particular type of goal-directed action such as drawing a stroke with a pencil does not require having intentions (or any thoughts) about actions and goals generally. A related point holds for collective goals. 
Consider events in which two or more agents move a fallen tree that is blocking a road. Some such events involve actions for which moving the tree is a collective goal. Other such events involve actions which are merely individually directed to moving the fallen tree (the tree is so big and the storm so intense that the several agents are unaware of each other until after having moved the tree). Suppose Ayesha intends that she and Beatrice move a fallen tree blocking their path, and that fulfilling this particular intention requires moving the tree to be a collective goal of Ayesha’s and Beatrice’s actions. As in the related case of ordinary, individual goals, this does not require that collective goals feature in what Ayesha intends, or not in a way that goes beyond her intending that they, she and Beatrice, move the tree. Having an intention that locks onto a particular type of event involving a collective goal, such as the moving of a fallen tree, does not require having intentions about collective goals generally. Why is this relevant? Earlier (in section [Introduction]) we briefly considered the simple idea that joint action occurs when there is an act-type, ϕ, such that each of several agents intends that they, these agents, ϕ, and their actions are appropriately related to these intentions. This simple idea seemed inadequate for distinguishing joint actions from events involving multiple agents who are merely acting in parallel. Why? Because the strangers blocking the aisle of an aeroplane are not involved in a joint action but do each intend that they, the two strangers, block the aisle. But we are now in a position to improve on the simple idea. Note that the strangers’ intentions do not require for their fulfilment that they, the strangers, perform actions with the collective goal of blocking the aisle. Indeed, by stipulation each stranger falsely believes that the other is not performing actions directed to blocking the aisle.
So what each stranger believes is straightforwardly incompatible with blocking the aisle being a collective goal of their actions. This suggests that we can improve on the simple idea by requiring that the relevant intentions must require for their fulfilment actions with a corresponding collective goal. And we have just seen that imposing such a requirement would not entail tacitly imposing an implausible further requirement on abilities to think about collective goals generally.
Consider a further attempt to characterise joint action, one which builds on earlier efforts by requiring intentions that specify collective goals:

A joint action is an event with two or more agents where:

1 the actions grounding that event are appropriately related to intentions on the part of each agent that they, these agents, ϕ together; and
2 each intention requires for its fulfilment that all the actions have a collective goal concerning ϕ-ing.

Call this attempted characterisation the Flat Intention View. It improves on the earlier attempts insofar as, unlike them, it distinguishes all the joint actions so far considered from the counterpart events involving multiple agents who are merely acting in parallel. The Flat Intention View is so-called because it relies on a single, unnested intention where some other approaches require intentions nested in intentions. To motivate invoking nested intentions, Bratman (1992, 333; 2014, 49) introduces a pair of contrasting cases in which two people intend that they, the two people, go to New York City. One case involves the sort of situation that best friends planning a holiday might be in. The other involves two members of competing gangs. Each gangster intends that they, the two gangsters, go to New York City by means of her ‘throwing the other into the trunk of the car and driving to NYC’ (Bratman 2014, 49). Bratman takes this contrast between how intentions to go to NYC typically unfold in friendly situations and how intentions to go there ‘in the mafia sense’ unfold to motivate the view that distinguishing these situations requires not just intentions but intentions about intentions. But the Flat Intention View provides a way of distinguishing the friendly case from the mafia case without introducing higher-order intentions. If one gangster succeeds in bundling the other into the trunk of their car, the two gangsters will not perform actions with the collective goal of their going to NYC. For this reason, neither gangster can rationally and knowingly have both an intention whose fulfilment requires them to perform such actions and also an intention to bundle the other into the trunk.
So the Flat Intention View excludes going to NYC ‘in the mafia sense’ without any need for intentions nested in intentions. While Bratman’s aims extend far beyond the issues considered in this chapter, the success of the Flat Intention View in distinguishing the ordinary case from the mafia case suggests that further arguments are needed to show that characterising joint action requires nested intentions. The Flat Intention View can be contrasted with views which invoke intentions with novel kinds of subjects, namely plural subjects (see, for example, Schmid 2009), novel kinds of attitudes such as ‘we-intentions’ as Searle (1990) characterises them, or novel kinds of commitments such as Gilbert’s (2013) joint commitments. By contrast, the Flat Intention View follows Bratman’s view in requiring neither a novel kind of subject nor a novel kind of attitude. On the Flat Intention View, the only special feature of the intentions associated with joint action is that their fulfilment conditions involve collective goals. Does the Flat Intention View need supplementing with requirements to the effect that each agent knows or believes something about the other’s (or others’) intentions? A positive answer would be consistent with the arguments of this chapter, and there is no obvious obstacle to supplementing the Flat Intention View in some such way. But, as far as I know, philosophers have yet to show that requirements about knowledge or belief are necessary (compare Blomberg 2015). It may be true, of course, that successful joint action often requires knowledge of others’ intentions. But it may be possible to explain this fact (if it is a fact) by appeal to rational requirements on having intentions such as those specified by the Flat Intention View. After all,
you cannot rationally intend that, say, we, you and I, make a pizza together unless many background requirements are met. These background requirements may include requirements on your beliefs or knowledge about my intentions. So while successfully performing a joint action may often require that each agent knows or believes something about the other’s (or others’) intentions, this could be a consequence of the knowledge or belief being a requirement on the rationality of having the intentions specified by the Flat Intention View. As things stand, then, it appears we have yet to see sufficient reasons to complicate the Flat Intention View by adding requirements concerning knowledge of, or beliefs about, others’ intentions. Should we conclude that our latest attempt, the Flat Intention View, enables us to distinguish generally between joint actions and events involving multiple agents who are merely acting in parallel? This seems implausible. There may be further contrasts between joint actions and merely parallel actions which the Flat Intention View fails to discriminate. This would motivate further narrowing the Flat Intention View by adding additional ingredients. But at this point there is a more pressing reason not to simply accept the Flat Intention View: it is too narrow. The Flat Intention View does not enable us to distinguish between joint actions and merely parallel actions when the agents lack intentions concerning the joint actions. This matters not just if joint actions can occur in agents such as bees or ants which probably lack intentions altogether. It also matters because, arguably, some forms of joint actions in humans need not involve intention. Since the aim of the minimalist approach is to distinguish joint actions from events involving multiple agents who are merely acting in parallel, the Flat Intention View cannot be where the story ends.
The agents’ perspective

Recall our earlier attempt to characterise joint action using collective goals only and no intentions (from section [Collective goals]):

A joint action is an event with two or more agents where the actions grounding that event have a collective goal.

This attempt is adequate for distinguishing joint actions from their merely parallel counterparts in many cases where intentions are not considered, such as the flash mob (see section [Introduction]) and the deer hunters (see section [A minimalist approach]). It was only a contrast case involving intentions, namely that of blocking the aisle of an aeroplane (see section [Collective goals]), which showed us that this attempt is inadequate for distinguishing all joint actions from events involving multiple agents who are merely acting in parallel. So the grounds for finding this earlier attempt to characterise joint action inadequate do not motivate the further claim that all joint action involves having intentions. What then can we conclude from the failure of this earlier attempt to characterise joint action? The case of strangers blocking the aisle of the aeroplane involves features that create a tension. The fact that their actions are coordinated in such a way as to be collectively directed to the outcome of blocking the aisle indicates that they are performing a joint action. But the fact that the truth of the beliefs informing their intentions would require their actions to lack a collective goal indicates that they are not performing a joint action. These conflicting indicators should not be weighed against each other because they involve different perspectives. Seen from the outside there appears to be a joint action, whereas from the perspective of either agent there does not. It turns out that a minimalist approach to joint action needs to be pluralist. To capture joint action from the agents’ perspective, we sometimes need to invoke intentions or other
goal-states which specify collective goals. And to capture joint action while being neutral on the agents’ perspective, we need to avoid invoking any such intentions or other goal-states. We must therefore recognise that there are multiple contrasts and multiple kinds of joint action.

This conclusion is unlikely to be controversial. Our question about what distinguishes joint actions from events involving multiple agents who are merely acting in parallel is analogous to one about ordinary, individual action. On ordinary, individual action, Davidson (1971) and others have asked: which feature or features distinguish actions from events otherwise involving an agent? The variety of different agents, and the variety of control structures within relatively complex agents such as humans, suggest that answering this question will probably involve recognising that there are multiple kinds of action. The claim that there are multiple kinds of joint action is no more controversial.

Recognising that there are multiple kinds of joint action suggests a simple answer to Schweikard and Schmid’s (2013) challenge to views, such as Bratman’s (2014) and the Flat Intention View (see section [Some intentions specify collective goals]), on which facts about individual agents’ intentions and other mental states are taken to explain how joint actions differ from events involving multiple agents who are merely acting in parallel. Schweikard and Schmid suggest that such views must presuppose the very distinction to be explained and ask, apparently rhetorically, ‘[H]ow can an individual refer to a joint activity without the jointness [. . .] already being in place?’ On the Flat Intention View, the intentions that constitute one kind of ‘jointness’ refer to joint actions characterised without intention – these are the joint actions we attempted to capture with the proposal that a joint action is an event with two or more agents where the actions grounding that event have a collective goal.
Conclusion

What feature or features distinguish joint actions from events involving multiple agents who are merely acting in parallel? This chapter introduced a minimalist approach to answering this question, one which involves finding a simplest possible starting point, adding ingredients only as needed, and avoiding as far as possible adding ingredients which would require the agents to have abilities additional to those already required. This yielded attempted characterisations for two kinds of joint action.

One characterisation involves collective goals only, and no intentions or other mental states: A joint action is an event with two or more agents where the actions grounding that event have a collective goal.

The consideration that actions can have a collective goal even though each agent conceives of herself as merely exploiting a conveniently stationary and sufficiently bulky object (see section [Collective goals]) motivated introducing a further attempt, labelled the Flat Intention View: A joint action is an event with two or more agents where:

1 the actions grounding that event are appropriately related to intentions on the part of each agent that they, these agents, ϕ together; and
2 each intention requires for its fulfilment that all the actions have a collective goal concerning ϕ-ing.
Joint action: a minimalist approach
These attempted characterisations raise more questions than they answer. For each attempted characterisation of joint action, what are the counterexamples to its adequacy and what additional ingredients are needed to refine it? Offering a pair of characterisations involves commitment to a form of pluralism about joint action (see section [The agents’ perspective]). But is it pluralist enough? The pair of characterisations assumes there is a single distinction between those joint actions that do, and those that do not, essentially involve intention. Are further distinctions necessary? For instance, are there multiple kinds of joint action which essentially involve intention?

Even without answers to these basic questions, a minimalist approach to joint action is clearly useful in at least two related ways. First, the minimalist approach yields ingredients such as the notion of a collective goal that can be used in specifying the contents of intentions and other states. Some philosophers tacitly assume that characterising the kind or kinds of joint action which essentially involve intention is possible without reflection on kinds of joint action which do not essentially involve intention. For instance, Bratman (2014, 46) relies exclusively on activities which are ‘neutral with respect to shared intentionality’ in constructing an account of a kind of joint action that involves intention. The category of activities which are neutral with respect to shared intentionality is extremely broad. It includes actions with collective goals as well as actions with no collective goal (such as those intended by the strangers blocking the aisle in section [Collective goals], and Bratman’s gangsters who each intend that they go to NYC by means of her bundling the other into the trunk, from section [Some intentions specify collective goals]).
On Bratman’s construction, then, the agents of a joint action essentially involving intentions might as well have no conception of the possibility of joint actions not involving intentions; from their point of view, it might as well be that all joint actions essentially involve intentions to do things together. But, as we saw in section [Some intentions specify collective goals], there may be intentions whose fulfilment requires not merely activities which are neutral with respect to shared intentionality but, more demandingly, actions with collective goals. This suggests that kinds of joint action which essentially involve intention may be constructed on top of, and perhaps even emerge from, kinds of joint action which do not essentially involve intention.

The minimalist approach is also useful as a way of testing claims that particular ingredients are needed to characterise joint action and related notions. To illustrate, in section [Some intentions specify collective goals] we saw that reflection on the Flat Intention View creates difficulties for an attempt to argue that characterising a kind of joint action which essentially involves intentions requires not merely intentions but intentions about intentions. This line of argument generalises. Many researchers agree that all joint action requires shared intention: for them, the central problem in giving an account of joint action is how to characterise shared intention.3 But since the Flat Intention View enables us to distinguish many paradigm cases of joint action from events involving multiple agents who are merely acting in parallel, it is at least unclear that shared intention is needed in characterising even forms of joint action that essentially involve intention.

Is joint action fundamentally a matter of collective goals and the ordinary, individual intentions which specify collective goals? This seems unlikely. More ingredients are surely needed.
And challenges to the further development of a minimalist approach surely lie ahead. Even so, as an alternative to currently dominant attempts to characterise joint action by appeal to complex nested structures or conceptually novel ingredients, a minimalist approach is promising.
Notes

1 An approach might be minimalist without being conceptually conservative in Bratman’s sense (see Bratman 2014, 14–15), and conversely. Minimalism concerns what a theory demands of the agents
whose joint actions it characterises; conceptual conservatism is about what a theory demands of theoreticians.

2 Should we have considered the idea that a joint action is an action (rather than an event) with two or more agents? This question raises several issues beyond the scope of the present chapter. The short answer is no, because primitive actions (whether bodily movements or tryings) are ‘all the actions there are’ (Davidson 1971, 59), and in many paradigm cases of joint action there are clearly no primitive actions with multiple agents. In painting a house, walking together, or lifting a two-handled basket, we each move only our own bodies directly. The notion of a joint action as an action with two or more agents is therefore too narrow relative to our aim of theorising about a range of cases taken to be paradigmatic joint actions. (This is not to say that no actions have two or more agents; see Blomberg 2011.)

3 See, for instance, Gilbert (2006, 5): ‘I take a collective [joint] action to involve a collective [shared] intention’; and Alonso (2009, 444–445): ‘the key property of joint action lies in its internal component . . . in the participants’ having a “collective” or “shared” intention.’ See also Tomasello (2008, 181) and Carpenter (2009, 381). Clearly these authors’ claims should be restricted to the kind, or kinds, of joint action that essentially involve intention.
References

Alonso, Facundo M. (2009). Shared intention, reliance, and interpersonal obligations. Ethics, 119(3), 444–475. doi:10.1086/599984
Behne, Tanya, Carpenter, Malinda & Tomasello, Michael. (2005). One-year-olds comprehend the communicative intentions behind gestures in a hiding game. Developmental Science, 8(6), 492–499.
Bennett, Jonathan. (1976). Linguistic Behaviour. Cambridge: Cambridge University Press.
Blomberg, Olle. (2011). Socially extended intentions-in-action. Review of Philosophy and Psychology, 2(May), 335–353. doi:10.1007/s13164-011-0054-3
Blomberg, Olle. (2015). Common knowledge and reductionism about shared agency. Australasian Journal of Philosophy, July, 1–12. doi:10.1080/00048402.2015.1055581
Bratman, Michael E. (1984). Two faces of intention. The Philosophical Review, 93(3), 375–405.
———. (1992). Shared cooperative activity. The Philosophical Review, 101(2), 327–341.
———. (1993). Shared intention. Ethics, 104, 97–113.
———. (2014). Shared Agency: A Planning Theory of Acting Together. Oxford: Oxford University Press.
Brownell, Celia A., Ramani, Geetha B. & Zerwas, Stephanie. (2006). Becoming a social partner with peers: Cooperation and social understanding in one- and two-year-olds. Child Development, 77(4), 803–821.
Carpenter, Malinda. (2009). Just how joint is joint action in infancy? Topics in Cognitive Science, 1(2), 380–392.
Davidson, Donald. (1971). Agency. In Robert Binkley, Richard Bronaugh and Ausonia Marras (Eds.), Agent, Action, and Reason (pp. 3–25). Toronto: University of Toronto Press.
Gallotti, Mattia & Frith, Chris D. (2013). Social cognition in the we-mode. Trends in Cognitive Sciences, 17(4), 160–165. doi:10.1016/j.tics.2013.02.002
Gilbert, Margaret P. (1990). Walking together: A paradigmatic social phenomenon. Midwest Studies in Philosophy, 15, 1–14.
———. (2006). Rationality in collective action. Philosophy of the Social Sciences, 36(1), 3–17.
———. (2013). Joint Commitment: How We Make the Social World. Oxford: Oxford University Press.
Gold, Natalie & Sugden, Robert. (2007). Collective intentions and team agency. Journal of Philosophy, 104(3), 109–137.
Helm, Bennett W. (2008). Plural agents. Nous, 42(1), 17–49. doi:10.1111/j.1468-0068.2007.00672.x
Hölldobler, B. & Wilson, E. O. (2009). The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies. New York: W. W. Norton.
Hughes, Claire & Leekam, Sue. (2004). What are the links between theory of mind and social relations? Review, reflections and new directions for studies of typical and atypical development. Social Development, 13(4), 590–619.
Knoblich, Günther, Butterfill, Stephen & Sebanz, Natalie. (2011). Psychological research on joint action: Theory and data. In Brian Ross (Ed.), Psychology of Learning and Motivation (Vol. 51, pp. 59–101). San Diego, CA: Academic Press.
Knoblich, Günther & Sebanz, Natalie. (2008). Evolving intentions for social interaction: From entrainment to joint action. Philosophical Transactions of the Royal Society B, 363, 2021–2031.
Kutz, Christopher. (2000). Acting together. Philosophy and Phenomenological Research, 61(1), 1–31.
Linnebo, Øystein. (2005). Plural quantification. In Edward N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2005 edition). Stanford, CA: CSLI.
Ludwig, Kirk. (2007). Collective intentional behavior from the standpoint of semantics. Nous, 41(3), 355–393. doi:10.1111/j.1468-0068.2007.00652.x
Moll, Henrike & Tomasello, Michael. (2007). Cooperation and human cognition: The Vygotskian intelligence hypothesis. Philosophical Transactions of the Royal Society B, 362(1480), 639–648.
Pacherie, Elisabeth. (2010). The phenomenology of joint action: Self-agency vs. joint-agency. In Axel Seemann (Ed.), Joint Action: New Developments in Psychology, Philosophy of Mind and Social Neuroscience (pp. 343–389). Cambridge, MA: MIT Press.
Pietroski, Paul M. (1998). Actions, adjuncts, and agency. Mind, New Series, 107(425), 73–111.
Ramenzoni, Verónica C., Davis, Tehran J., Riley, Michael A., Shockley, Kevin & Baker, Aimee A. (2011). Joint action in a cooperative precision task: Nested processes of intrapersonal and interpersonal coordination. Experimental Brain Research, 211(3–4), 447–457. doi:10.1007/s00221-011-2653-8
Repp, Bruno H. & Su, Yi-Huang. (2013). Sensorimotor synchronization: A review of recent research. Psychonomic Bulletin & Review, 20(3), 403–452. doi:10.3758/s13423-012-0371-2
Roth, Abraham Sesshu. (2004). Shared agency and contralateral commitments. The Philosophical Review, 113(3), 359–410.
Sacheli, Lucia M., Candidi, Matteo, Era, Vanessa & Aglioti, Salvatore M. (2015). Causative role of left aIPS in coding shared goals during human-avatar complementary joint actions. Nature Communications, 6(July). doi:10.1038/ncomms8544
Schmid, Hans Bernhard. (2009). Plural Action: Essays in Philosophy and Social Science (Vol. 58). Dordrecht: Springer.
Schmidt, Richard C. & Richardson, Michael J. (2008). Dynamics of interpersonal coordination. In Armin Fuchs and Viktor K. Jirsa (Eds.), Coordination: Neural, Behavioral and Social Dynamics (pp. 280–308). Berlin, Heidelberg: Springer.
Schweikard, David P. & Schmid, Hans Bernhard. (2013). Collective intentionality. In Edward N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2013 edition). Stanford, CA: CSLI.
Searle, John R. (1990). Collective intentions and actions. In P. Cohen, J. Morgan and M. E. Pollack (Eds.), Intentions in Communication (pp. 90–105). Cambridge: Cambridge University Press.
———. (1994). The Construction of Social Reality. New York: The Free Press.
Sebanz, Natalie, Bekkering, Harold & Knoblich, Guenther. (2006). Joint action: Bodies and minds moving together. Trends in Cognitive Sciences, 10(2), 70–76.
Seeley, T. D. (2010). Honeybee Democracy. Princeton, NJ: Princeton University Press.
Taylor, Charles. (1964). The Explanation of Behaviour. London: Routledge.
Tomasello, M. (2008). Origins of Human Communication. Cambridge, MA: MIT Press.
Tomasello, Michael & Carpenter, Malinda. (2007). Shared intentionality. Developmental Science, 10(1), 121–125.
Velleman, David. (1997). How to share an intention. Philosophy and Phenomenological Research, 57(1), 29–50.
Vesper, Cordula, Butterfill, Stephen, Knoblich, Günther & Sebanz, Natalie. (2010). A minimal architecture for joint action. Neural Networks, 23(8–9), 998–1003. doi:10.1016/j.neunet.2010.06.002
21
COMMITMENT IN JOINT ACTION

John Michael
1. Introduction

Joint action is a pervasive and important feature of human sociality. From cooking meals to dancing, playing music, painting houses, and taking walks, humans routinely do things together – sometimes to achieve shared goals that could not otherwise be achieved, and sometimes because acting together is intrinsically pleasurable. In a broad sense, joint action can be defined as ‘any form of social interaction whereby two or more individuals coordinate their actions in space and time to bring about a change in the environment’ (Sebanz et al., 2006: 70; Butterfill, 2012). Although many species engage in joint action in this broad sense, such as chimpanzees hunting and bees swarming, it has been argued that humans are uniquely able and motivated to coordinate their actions in space and time in order to bring about shared goals, and that they do so more flexibly and in a wider variety of contexts than other species (e.g. Melis & Semmann, 2010; Tomasello, 2009; Silk, 2009; Searle, 1990).

But joint action can also be risky insofar as it entails a dependency on the contributions of other agents, whose desires or interests, after all, may fluctuate. One way to deal with the uncertainty arising from dependency on other agents is to make commitments. Commitments, if they are credible, can shield agents’ plans from such fluctuations in their desires and interests, and thereby stabilize expectations about their behavior. In this way, commitments can facilitate the planning and coordination of joint actions involving multiple agents. Moreover, commitments can facilitate cooperation by making people willing to perform actions that they would not otherwise be willing to perform. For example, Ludmilla may be hesitant to go with her colleagues to the pub because she knows herself well enough to anticipate that she would find it difficult to leave early enough to get a good night’s sleep, and she has a presentation to give in the morning.
Her colleague Stephen is also hesitant because he has a flight to catch in the morning. Both of them would like to go along for just one drink, but are justifiably concerned that, if they go along at all, they may well come to regret it. One way in which they may solve the problem is to make a joint commitment that they will both go home after one round and that each will ensure that the other does so as well. With this joint commitment in place, each of them feels partially responsible for the other’s well-being the following day, so they can now more confidently head to the pub for one round.
2. Conceptualizing commitment

While it is common in several areas of philosophy and psychology to refer to commitments, there is surprisingly little research devoted to analyzing the concept of commitment, or to clarifying the similarities and differences between different kinds of commitment. An important exception is Herbert Clark’s (2006) typology, which distinguishes different types of commitments according to their recipient. Thus, one can make a commitment to oneself (self-commitments) or one can make a commitment to another agent (interpersonal commitments).1 In what follows, I will put aside self-commitments and focus on interpersonal commitments. Among interpersonal commitments, one can distinguish unilateral commitments (in which one agent makes a commitment to a second agent but the second agent is not committed to anything) from mutual commitments (in which the second agent is also committed to something). Furthermore, mutual commitments can be either complementary (as when Peter is committed to digging a hole as long as Jim is committed to paying him for it) or joint (as when Peter and Jim are committed to a shared goal, such as painting the house together).

What is the common core of these different kinds of commitments? According to a standard philosophical conception, commitment in the strict sense can be conceptualized as a relation among at least one committed agent,2 at least one agent to whom the commitment has been made, and an action which the committed agent is obligated to perform because she has given an assurance to the second agent that she will do so, and the second agent has acknowledged this under conditions of common knowledge (Austin, 1962; Scanlon, 1998; Searle, 1969; Shpall, 2014). For example, Susie has an obligation to Jennifer to pick up the kids from school because she (Susie) has expressed her willingness to do so, and Jennifer has acknowledged this.
In the canonical case, the expression is effectuated by means of the speech act of promising. As Searle (1969) puts it: ‘the utterance . . . predicates some future act A of the speaker S . . . [and] counts as the undertaking of an obligation to do A’ (Searle, 1969: 63). Of course, one can make a commitment (and indeed perform the speech act of promising) without explicitly saying ‘I promise’, but whether one says ‘I promise’ or simply ‘yes’, the expression ‘will count as and will be taken as a promise in any context where it is obvious that in saying it I am accepting (or undertaking, etc.) an obligation’ (Searle, 1969: 68).3 According to this conception, the creation of Susie’s commitment depends upon Jennifer hearing and acknowledging Susie’s statement of her intention, and upon this being common knowledge between Susie and Jennifer. The concept of ‘common knowledge’ is a complex and contested one:4 according to more stringent analyses (e.g. Lewis, 1969; Schiffer, 1972), P is common knowledge for Susie and Jennifer if and only if Susie and Jennifer know that P, Susie knows that Jennifer knows that P, and Susie knows that Jennifer knows that Susie knows that P, and so on; and similarly for Jennifer.5 Thus, there is no commitment in the strict sense if Susie mistakenly believes that Jennifer has not heard her assurance that she will pick up the kids, or if Jennifer mistakenly believes that Susie mistakenly believes this, etc.6 More recently, however, many researchers have articulated less stringent analyses, which are intended to avoid the potentially infinite regress engendered by traditional analyses. Margaret Gilbert, for example, offers the following working definition: if some fact is common knowledge between A and B (or between members of population P, described by reference to some common attribute), then that fact is entirely out in the open between them – and, at some level, all are aware that this is so. Among
other things, it would not make sense for any one of these persons to attempt to hide the fact from another of their number.
(Gilbert, 2006b: 121)

Similarly, Malinda Carpenter (2009) conceptualizes common knowledge in the context of developmental psychology as ‘what is known or has been experienced together’ (383), and suggests that having jointly attended to P is sufficient in a broad range of cases. While this standard conception presents a clear characterization of paradigm cases of commitment in the strict sense (i.e. commitments arising through promises or other explicit verbal assurances), Michael, Sebanz, and Knoblich (2016) have recently argued that it fails to illuminate three important puzzles.

Puzzle 1: motivation

The first puzzle arises from the fact that commitments foreclose options that may maximize an agent’s interests. It appears irrational to follow through on commitments when alternative options arise that may be more attractive than the action to which one is committed. Thus, if one makes a commitment to perform a particular action, and one’s interests or desires subsequently change, it is not clear why one should remain motivated to fulfill the commitment. Indeed, this issue is even more serious than it appears at first glance, insofar as the flipside of motivation is credibility: why should one agent expect a second agent to remain committed to a particular action if that second agent’s desires or interests change? In some cases, this problem is solved by externalizing commitments, as when alternative options are removed (e.g. when a general burns her bridges behind her) or when the payoff values of options are altered (e.g. by signing a contract that entails penalties for reneging). In many other cases, however, people engage in and follow through on commitments without such externalized motivational support.
Recall the example, discussed above, of Ludmilla and Stephen making a joint commitment to have just one drink at the pub and then to go home. Given that there is no external device to enforce this commitment, what is to stop them from simply releasing each other from the commitment and then having a second and a third round? And, given this possibility, what is the point of making a commitment in the first place?

Puzzle 2: implicit commitment

Many commitments work not only without contracts but also without explicit agreements or promises – i.e. they are implicit. But in the absence of an explicit agreement or promise, or even any expression of one’s willingness to pursue a shared goal, it is unclear how people determine when commitments are in place, and how they assess the appropriate degree of commitment. To illustrate, consider the following example, adapted from one discussed by the philosopher Margaret Gilbert (2006a: 9): Two factory workers, Polly and Pam, are in the habit of smoking a cigarette and talking together on the balcony during their afternoon coffee break. The sequence is broken when one day Pam waits for Polly but Polly doesn’t turn up.

In this case, there has been no explicit agreement to smoke a cigarette and talk together every day, and yet one might nevertheless have the sense that an implicit commitment is in place, and that Polly has violated that implicit commitment. This will depend on further details about the case. For example, if Polly and Pam have smoked and talked together every day for two or three weeks, Polly might feel only slightly obligated to offer an explanation, but she would likely feel more strongly obligated if the pattern had been repeated for two or three years. Thus, it seems that
mere repetition can give rise to implicit commitment. Similarly, one agent’s reliance on a second agent may give rise to implicit commitment on the part of the second agent. If, for example, Polly and Pam always use Polly’s lighter, and Pam at some point even stopped bringing her own lighter, then Polly’s absence will completely undermine Pam’s goal of enjoying a pleasant cigarette break. In such a case, both parties are likely to think that an explanation, and perhaps even an apology, is all the more in order. Thirdly, one agent’s investment of effort or resources in a joint action may also give rise to an implicit commitment on the part of a second agent. If Pam, for example, must walk up five flights of stairs to reach the balcony where she and Polly habitually smoke together, Polly’s implicit commitment may be greater than if Pam only had to walk down the hall. In sum, there are many situational factors which can give rise to implicit commitments, and the standard philosophical characterization of commitment in the strict sense does not provide any basis for identifying these factors.

Puzzle 3: development

The third puzzle pertains to the onto- and phylogenetic origins of commitment. Specifically, if one adopts the standard philosophical conception of commitment, it is questionable whether commitment is applicable to young children or to non-human animals. This is because the standard conception of commitment presupposes an understanding of conditional obligations and common knowledge: an agent only has an obligation to do X if she has expressed her willingness to do so to some other agent, who has acknowledged that expression under conditions of common knowledge.
Although one should be wary of ascribing to very young children the cognitive sophistication required to understand these kinds of conceptual relations (Brownell et al., 2006; Butterfill, 2012), there is evidence that very young children identify and respond to commitments in some sense (Warneken et al., 2006; Gräfenhain et al., 2009; Hamann et al., 2012). In one recent study, for example, Gräfenhain and colleagues (2009) compared a condition in which the experimenter made an explicit commitment to a joint action with a condition in which she simply entered into the action without making any commitment. Interestingly, 3-year-olds, but not 2-year-olds, protested significantly more when a commitment had been violated than when there had been no commitment. In experiment 2 of the same study, the tables were turned and the children were presented with an enticing outside option that tempted them to abandon the joint action. The children were less likely to succumb to the temptation if a commitment had been made. In cases in which they did succumb, they were more likely to ‘take leave’, to look back at the experimenter nervously, or to return after a brief absence. In a study by Hamann and colleagues (2012), one child received her part of a joint reward from a joint task before her partner received the other part, thus tempting her to abandon the joint task before her partner received her reward. Most of the children nevertheless remained engaged, suggesting that they sensed an obligation to remain engaged until both achieved their goal.7 One interpretation of these findings is that children, contra the aforementioned theoretical reservations, do understand commitments in the strict sense by around 3 years of age.
However, there is also countervailing empirical evidence.8 Consider a study conducted by Mant and Perner (1988), in which children were presented with vignettes describing two children on their way home from school, Peter and Fiona, who discuss whether to meet up and go swimming later on. In one condition, they make a joint commitment to meet at a certain time and place, but Peter decides not to go after all, and Fiona winds up alone and disappointed. In the other condition, they do not make a joint commitment, because Fiona believes that her parents will
not let her. She is then surprised that her parents do give her permission, and she goes to the swimming hole to meet Peter. In this condition, too, however, Peter decides not to go after all, so again Fiona winds up alone and disappointed. The children in the study, ranging from 5 to 10 years of age, were then asked to rate how naughty each character was. The finding was that only the oldest children (with a mean age of 9.5) judged Peter to be more naughty in the commitment condition than in the no-commitment condition. This may seem late, but it is in fact consistent with the findings of a study by Astington (1988), who reported that children under 9 fail to understand the conditions under which the speech act of promising gives rise to commitments.9

In view of these conflicting findings, Michael, Sebanz, and Knoblich (2016) have recently proposed the following minimalist approach. Rather than taking commitment in the strict sense as a starting point, and interpreting the findings of Gräfenhain and colleagues (2009; cf. also Hamann et al., 2012) as evidence that 3-year-olds understand and respond to commitments in the strict sense, they identify a less complex phenomenon that young children may understand and respond to even in the absence of a sophisticated understanding of common knowledge, conditional obligations, and the speech act of promising. This minimalist approach resonates with the view of many theorists that a simplified conception of joint action is needed in order to account for young children’s engagement in joint actions (Brownell et al., 2006; Butterfill, 2012; Tollefsen, 2005).10 This approach has the advantage of helping to explain how an understanding of commitments could emerge through engagement in joint actions, rather than assuming that it is present as soon as children engage in joint actions.11
3. A minimal framework

3.1. The minimal structure of commitment and the sense of commitment

In addressing the three puzzles identified in the previous section, Michael, Sebanz, and Knoblich propose a characterization of the minimal structure of situations in which a sense of commitment can arise. This minimal structure can be expressed as follows:

(i) There is an outcome (O) which one agent (ME) either desires to come about, or which is the goal of an action which ME is currently performing or intends to perform.
(ii) The contribution (X) of a second agent (YOU) is crucial12 to bringing about O.

Clearly, conditions (i) and (ii) specify a broader category than that of commitment in the strict sense. Nevertheless, situations with this structure may elicit a sense of commitment on the part of one or both agents. Michael, Sebanz, and Knoblich (2016) conceptualize the sense of commitment as follows: ME has a sense that YOU is committed to performing X to the extent that ME expects X to occur because (i) and (ii) obtain. YOU has a sense of being committed to performing X to the extent that YOU is motivated by her belief that ME expects her to contribute X.

While the minimal structure is specified such that only one agent (ME) desires O and/or has the goal of bringing about O, there are many cases in which both agents desire O and/or have the goal O. In those cases, the commitment may be mutual, with each agent having a sense of being committed as well as a sense that the other agent is committed.
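The minimal structure and the two definitions of the sense of commitment admit of a schematic summary. The following rendering is offered purely for illustration; the predicate symbols (Des, Goal, Crucial, Exp, Bel, Motiv) are shorthand introduced here and are not part of Michael, Sebanz, and Knoblich’s own formulation:

```latex
% Schematic summary of the minimal structure of the sense of commitment.
% All predicate symbols are illustrative shorthand, not the authors' notation.
\begin{align*}
\text{(i)}\quad  & \mathrm{Des}(\mathit{ME}, O) \;\lor\; \mathrm{Goal}(\mathit{ME}, O)
  && \text{ME desires or has the goal } O\\
\text{(ii)}\quad & \mathrm{Crucial}(X_{\mathit{YOU}}, O)
  && \text{YOU's contribution } X \text{ is crucial to } O\\[6pt]
\text{ME's sense that YOU is committed:}\quad
  & \mathrm{Exp}(\mathit{ME}, X) \text{ grounded in (i) and (ii)}\\
\text{YOU's sense of being committed:}\quad
  & \mathrm{Motiv}(\mathit{YOU}, X) \text{ grounded in } \mathrm{Bel}(\mathit{YOU}, \mathrm{Exp}(\mathit{ME}, X))
\end{align*}
```

On this rendering it is easy to see why the two senses can come apart: the first turns only on ME’s expectation given (i) and (ii), the second only on YOU’s motivating belief, and neither entails the other.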
Commitment in joint action
It is also worth emphasizing that the two agents (ME and YOU) may differ with respect to their sense of commitment. For example, ME may have a sense that YOU is committed even though YOU does not have a sense of being committed. Or YOU may have a sense of being committed even though ME does not have a sense that YOU is committed. As already stated, conditions (i) and (ii) specify a broader category than that of commitment in the strict sense. In particular, while commitments in the strict sense arise intentionally (Gilbert, 1989), an agent can come to have a sense of commitment to doing X without performing any intentional action at all. Consider the following example. If Carla is running to catch the elevator and the door is beginning to close, and Victor is standing in the elevator, Carla may have a sense that Victor is committed to pressing the button to keep the door open simply because he is standing next to the button and pressing it would be a crucial contribution to her goal. And Victor may have a sense that he is committed to doing so simply because he believes that Carla expects him to. Moreover, there are also many cases in which a sense of commitment is triggered as a side effect of an intentional action. For example, Sam is cleaning up the living room and picks up a ball that had been lying on the floor. As it happens, his dog Woofer notices this and bounds over to him, apparently ready to play fetch. Sam was not intending to play fetch and does not particularly desire to, but may now feel obliged to, because he has generated an expectation on the part of Woofer that they will now play fetch together. Thus the unintentional generation of expectations can lead individuals to sense that a commitment is in place. Of course, if Sam intentionally makes eye contact with Woofer and waves the ball around in the air, he thereby generates a high degree of commitment to playing fetch. 
And if Woofer is sensitive to these cues, they may lead him to have a high expectation that Sam is now going to play fetch with him. Another necessary feature of commitment in the strict sense which can be absent in instances in which a sense of commitment is elicited is common knowledge. As already noted above, commitment in the strict sense requires that it be common knowledge that at least one agent (i.e. whoever is taking on the commitment) has expressed her willingness to perform an action. A sense of commitment, in contrast, can arise without any such expression becoming common knowledge. Recall Gilbert’s example of Polly and Pam, who are in the habit of smoking and chatting on the terrace each day during the coffee break, though they have never made an agreement to do so:

The sequence is broken when one day Pam waits for Polly but she doesn’t turn up. The day after this, Polly comes up to her and apologizes for her absence: ‘I was off sick’. ‘I wondered what happened’, says Pam, accepting her apology. ‘Glad you’re back’. By this time, it would seem, it is common knowledge between the parties that each has expressed to the other her readiness jointly to commit with the other to uphold the practice of their meeting daily outside the factory for a smoke and a chat. At no point did the parties agree to start or engage in this practice. Yet their interchange suggests enough has passed between them jointly to commit them to uphold it.
(Gilbert, 2006a: 9)

The exchange on the day after Polly’s absence is required in order to generate a commitment in the strict sense, since it is only through this exchange that each party’s willingness to participate in the joint action has been expressed under conditions of common knowledge. However, the example also illustrates that a sense of commitment may be in place prior to that exchange (and that it may lead Polly to consider it appropriate to offer an explanation and Pam to consider it appropriate that Polly do so).
John Michael
As the situation has been described, it is natural to think that it is in fact clear to both parties that both are willing to sustain the interaction pattern. But we may also imagine a scenario in which this is not the case, and in which one or both parties nevertheless have a sense of commitment. For example, if we imagine that they have only smoked and chatted together two or three times, Polly may be unsure as to whether Pam wants to continue the pattern and/or as to whether Pam thinks that Polly wants to continue the pattern, etc. In this scenario, Polly may have a sense that she is committed to showing up despite the absence of common knowledge about each other’s willingness to continue the pattern. This modification of Gilbert’s example also serves to highlight a further important difference between commitment in the strict sense and the sense of commitment: the latter, in contrast to the former, is a graded phenomenon which can be modulated in a continuous fashion by subtle factors, such as repetition, reliance, and the investment of costs. Having characterized the sense of commitment in terms of expectations and motivations,13 we are now in a position to see why the sense of commitment is a graded phenomenon: expectations and motivations come in degrees. Thus, any factors which raise ME’s expectation that YOU will perform X and/or YOU’s motivation to perform X in instances in which the minimal structure is implemented, raise ME’s and/or YOU’s sense of commitment. In addition, the factors which are necessary for commitment in the strict sense but not for the sense of commitment (e.g. YOU having intended to raise ME’s expectation of X, this being common knowledge, etc.) may also serve to raise motivations and/or expectations, and thus function as factors modulating the sense of commitment.
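The graded character of this proposal can be made vivid with a toy computational rendering of the two defining clauses. Everything here – the function names, the [0, 1] scales, the multiplicative weighting of motivation – is my own illustrative assumption, not part of Michael, Sebanz, and Knoblich’s proposal:

```python
# A toy sketch of the minimal structure of commitment and the graded
# sense of commitment. Names and numeric scales are illustrative
# assumptions only, not the authors' own formalism.

from dataclasses import dataclass


@dataclass
class MinimalStructure:
    me_desires_outcome: bool  # condition (i): ME desires or pursues outcome O
    x_is_crucial: bool        # condition (ii): YOU's contribution X is crucial to O

    def instantiated(self) -> bool:
        return self.me_desires_outcome and self.x_is_crucial


def me_sense_that_you_committed(structure: MinimalStructure,
                                me_expectation_of_x: float) -> float:
    """ME's sense that YOU is committed: graded with ME's expectation of X
    (a degree in [0, 1]), and zero when the minimal structure is absent."""
    if not structure.instantiated():
        return 0.0
    return me_expectation_of_x


def you_sense_of_being_committed(structure: MinimalStructure,
                                 you_belief_me_expects_x: float,
                                 you_motivation_weight: float) -> float:
    """YOU's sense of being committed: graded with YOU's motivation, which
    here is assumed to scale with YOU's belief that ME expects X."""
    if not structure.instantiated():
        return 0.0
    return you_belief_me_expects_x * you_motivation_weight


# The elevator case: Carla (ME) runs for the door, Victor (YOU) stands by
# the button. Neither has performed any intentional act of commitment, yet
# both quantities can be non-zero -- and they can come apart.
elevator = MinimalStructure(me_desires_outcome=True, x_is_crucial=True)
carla_sense = me_sense_that_you_committed(elevator, me_expectation_of_x=0.8)
victor_sense = you_sense_of_being_committed(elevator,
                                            you_belief_me_expects_x=0.8,
                                            you_motivation_weight=0.5)
```

The sketch also captures the asymmetry noted above: since the two quantities depend on different inputs, ME may have a high sense that YOU is committed while YOU has little sense of being committed, and vice versa.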
In sum, a sense of commitment may be elicited in many situations which instantiate the minimal structure specified above but in which there is no commitment in the strict sense.14 Furthermore, the sense of commitment, in contrast to commitment in the strict sense, is a graded phenomenon, and may be modulated by various factors (such as repetition, reliance, and the investment of costs) which serve to raise ME’s expectation of X and/or to make that expectation more salient to YOU. In this section, I have introduced a way of conceptualizing the sense of commitment in terms of agents expecting external contributions (i.e. X) to be made because the minimal structure is in place (i.e. conditions (i) and (ii)), and/or being motivated to make contributions because they believe they are expected to. In order to establish the plausibility of this proposal, it will be necessary to explain why anyone would have such expectations and/or motivations. In the next subsection, I will address the question as to why some agents may sometimes expect X to occur because (i) and (ii) obtain. The next step, in subsections 3.2 and 3.3, will be to address the question as to why some agents may sometimes be motivated to contribute X because they believe that they are expected to. Then, in subsection 3.4, I will explain how these expectations and motivations can reinforce each other over time, and thereby fulfill the social function of commitment, namely to stabilize agents’ expectations about other agents’ making contributions to their goals or to outcomes they desire.

3.2. Why would ME expect X because the minimal structure is instantiated?

One conjecture is that the expectation that X will be contributed in cases instantiating the minimal structure has the status of a default in some agents, in particular in humans. When an agent comes to believe that X is a crucial contribution to an outcome which she desires or which is a goal of hers (i.e.
O), it may trigger a default expectation that X will occur. This is because such outcomes are represented fundamentally in an agent-neutral manner – i.e. as outcomes that are to be brought about, irrespective of whose goals they are or who desires them (Butterfill, 2012; Vesper et al., 2010).15 In situations instantiating the minimal structure, then,
the default assumption is that O will be brought about in the most efficient way possible, with all crucial contributions being made. In other words, an agent will not initially consider the possibility that O may be only her own goal, or an outcome that only she desires to be brought about. Hence, such a default expectation may generate or reinforce specific expectations that ME would not otherwise have about contributions (X) to be made to ME’s goals or to outcomes which ME desires to be brought about (O). A default expectation that others will contribute X in cases in which the minimal structure is instantiated would be consistent with many experiences that infants and young children have in their first years of life. Indeed, as soon as infants begin pursuing goals, there is typically at least one parent who is motivated to support them in their desires and goals. Moreover, infants experience distress or conflict when their desires are not fulfilled or their goals are not met. Once they are able to detect that they are dependent on external contributions for some outcomes, instances in which they fail to attain that outcome because a crucial contribution is not made may also elicit signs of conflict. Furthermore, the notion of agent-neutral outcome representation also suggests a more general reason why a default expectation of crucial contributions to O may be sustained throughout childhood and adulthood. This is because when YOU perceives ME acting to bring about O, YOU may also come to represent O in an agent-neutral fashion. If YOU does this, then she may simply treat it as being equivalent to other outcomes she is acting to bring about rather than assigning it specifically to ME. As a result, O may ‘slip’ from perception into action, and YOU may perform X simply because O is now the goal to which she is currently contributing, not because it is ME’s desire or goal. I will use the term goal slippage to refer to this process.
A slightly different way of thinking about goal slippage is that YOU’s identification of ME’s goal or desired outcome (O) may lead her to expect that O will be brought about, and she may have a preference for things to go as she expects them to go. Although an agent’s motivation to bring about O in such cases may generally be lower than her motivation to bring about internally generated goals, goal slippage could nevertheless increase the likelihood of YOU doing X. Though I am not aware of any evidence directly bearing upon the hypothesis of goal slippage, there is one body of research that is relevant to consider in this context – namely, the work done on spontaneous instrumental helping behavior in young children and non-human primates (e.g. Liszkowski et al., 2007; Warneken & Tomasello, 2007). In a typical scenario, an agent attempts to reach for an object, such as a pencil, which is out of her reach but within reach of the test participant. In Warneken and Tomasello’s (2007) study, it was found that 18-month-old infants and chimpanzees tended to help the agent in this type of scenario. This seems to indicate that infants and chimpanzees have a tendency to engage in spontaneous instrumental helping – i.e. they may have an altruistic preference to support others in their goals. Nevertheless, the notion of goal slippage indicates an alternative (or complementary) explanation for these findings. Specifically, it raises the possibility that the test participants may simply represent the goal in an agent-neutral manner, and thus treat it equivalently to other goals of their own. In Bayesian terms, one might say that they predict that the agent will reach the pencil and help in order to reduce the prediction error. One way to test this hypothesis would be to investigate whether the children would persist in contributing to the goal even if the other agent ceased to pursue the goal or became distracted by an alternative option.

3.3. Why would YOU be motivated to contribute X because ME expects YOU to?

In the previous section, I offered an explanation of why ME may sense that YOU is committed in instances in which the minimal structure is instantiated, i.e. why ME may sometimes expect
X to occur because (i) and (ii) obtain. Now I will turn to the question as to why YOU may sense that she herself is committed in instances in which the minimal structure is instantiated, i.e. why YOU may sometimes be motivated to contribute X in instances in which the minimal structure is instantiated because YOU believes that she is expected to. One conjecture is that a tendency to be motivated to fulfill others’ expectations about one’s contributions to their goals or to outcomes which they desire (i.e. a preference for expectation fulfillment) has the status of a default in humans. Such a preference may serve as a proximal mechanism for reputation management. Moreover, insofar as YOU believes that ME expects X to occur, YOU may expect ME to show signs of conflict if X does not occur, and indeed to address YOU directly with these signs of conflict. For example, if ME, who is YOU’s fellow passenger, has tossed her book onto the window seat and then backed up into the aisle to clear space for YOU to stand up and get out of her way, then YOU may infer that ME has a specific expectation about what YOU will do and sense that the path of least resistance is to fulfill that expectation. There is some evidence that is consistent with the hypothesis of a default preference for fulfilling others’ expectations. First of all, it is one possible explanation of the finding that people behave more generously in economic games when images of faces or eyes are present (Francey & Bergmüller, 2012; cf. also Bateson et al., 2006). It is also a plausible explanation of the robust finding that people tend to give away money in anonymous one-shot dictator games (i.e. when an experimenter seems to expect them to) but do not just go around handing out money in everyday life (Camerer, 2003). This suggestion fits well with the findings from a classic study by Gaertner (1973), in which a confederate called people on the telephone asking for money to help him out of a difficult situation.
Political liberals were more likely to help than political conservatives – but only if they stayed on the phone long enough to hear his request, and in fact liberals were more likely to hang up sooner. These findings support two important claims: first of all, that people have a tendency to feel pressured into fulfilling others’ expectations; and secondly, that they accordingly try to avoid learning of others’ expectations in order to avoid being pressured into carrying out actions they do not want to carry out. Of course, this study involved a direct and explicit request for money, which is different from the cases of implicit expectations that we have been focusing on here. It could be interesting to modify this paradigm in order to investigate whether and to what degree people also feel pressured to fulfill implicit expectations. More recently, Dana and colleagues (2006) designed a dictator game in which the participant playing the role of dictator could pay $1 in order to exit from the dictator game, i.e. accepting a $9 payoff instead of being in a situation in which they could choose either to keep $10 for themselves or to give away as much as they wanted to. Many of the participants did indeed choose this option, but not in a condition in which they were told that the other person (the receiver) was unaware that she was a potential receiver in a dictator game. This suggests that making people aware of others’ expectations makes them more likely to be cooperative. The hypothesis of a default preference for expectation fulfillment also suggests a further possible interpretation of the spontaneous instrumental helping behavior that we discussed in the previous section. Specifically, the children in these scenarios may infer that they are expected to help and have a default preference to fulfill expectations that they take others to have of them.
In Warneken and colleagues’ experiments, the adult experimenter performed actions that were not only highly unlikely to lead to their apparent goals but also highly inefficient. So it would be rational for the infants to infer that the experimenter is expecting them to help. This interpretation would be supported if it could be shown that making the other agent’s expectation more salient increased the helping behavior (e.g. if the agent announced to some third party that she expected the participant to help, or if she made eye contact with the participant).
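The payoff logic behind Dana and colleagues’ (2006) exit option, discussed above, can also be made explicit in a toy decision model. The psychological cost parameter below is my own illustrative assumption, not a parameter from their study; the point is simply to show why a $9 exit can beat facing a $10 dictator game when a receiver’s expectations are in play:

```python
# Toy decision model inspired by Dana et al.'s (2006) exit option.
# The "expectation_cost" parameter is an illustrative assumption meant
# only to show the arithmetic of the exit choice.

def prefers_exit(keep_amount: float, exit_payoff: float,
                 expectation_cost: float, receiver_aware: bool) -> bool:
    """Exit is preferred when the material loss (keep - exit) is smaller
    than the psychological cost of disappointing an aware receiver."""
    cost = expectation_cost if receiver_aware else 0.0
    return exit_payoff > keep_amount - cost

# With an aware receiver and a cost of 2, taking $9 beats keeping $10:
prefers_exit(10, 9, 2, receiver_aware=True)    # True
# With an unaware receiver, no expectation is violated, so keep the $10:
prefers_exit(10, 9, 2, receiver_aware=False)   # False
```

On this rendering, the condition in which the receiver is unaware of the game removes the expectation-based cost, and with it the reason to pay for the exit, matching the pattern in the participants’ choices.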
3.4. How the sense of commitment can stabilize expectations

In the previous two subsections, I explained why some agents may sometimes expect X to occur because (i) and (ii) obtain, and why some agents may sometimes be motivated to contribute X because they believe that they are expected to. In this section, I will explain how these expectations and motivations can reinforce each other over time, and how the sense of commitment can thereby stabilize agents’ expectations about other agents’ making contributions to their goals or to outcomes they desire. On the one hand, ME’s default expectation that others (such as YOU) will contribute to ME’s goals or desired outcomes will be likely to be met and reinforced if other agents (such as YOU) are indeed likely to contribute because of the processes referred to in the previous two subsections (goal slippage and expectation fulfillment). On the other hand, YOU will be more likely to contribute X if YOU believes that ME expects this (expectation fulfillment). This does not imply that children or adult humans always expect others to contribute X in situations instantiating the minimal structure, nor that they always contribute X when they think they are expected to. Indeed, even infants’ and young children’s parents don’t always support their goals or fulfill their desires. So, in order to differentiate among various degrees of likelihood that X will occur, children must develop a more nuanced sensitivity to features of interactions that carry information about the reliability of various kinds of cues to X in various situations. By the same token, it would be inefficient for an agent always to contribute to others’ goals or desired outcomes whenever she believed that she was expected to. Hence, it would be useful for an agent to develop nuanced criteria for making contributions to others’ goals or desired outcomes.
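The mutual reinforcement just described can be sketched as a pair of coupled update rules. The particular rule and rate below are my own illustrative assumptions; the sketch shows only that a default expectation (ME) and an expectation-fulfilling motivation (YOU) can pull each other toward a stable common level:

```python
# Toy dynamics for the mutual reinforcement of expectation and motivation.
# The linear update rule and its rate are illustrative assumptions, not a
# model proposed in the text.

def simulate_reinforcement(rounds, rate=0.3, expectation=0.9, motivation=0.3):
    """Return the trajectory of (ME's expectation, YOU's motivation)."""
    history = []
    for _ in range(rounds):
        # YOU's average rate of contributing X tracks YOU's motivation
        # (expectation fulfillment); ME updates toward what she observes ...
        expectation += rate * (motivation - expectation)
        # ... and YOU's motivation updates toward ME's perceived expectation.
        motivation += rate * (expectation - motivation)
        history.append((expectation, motivation))
    return history

trajectory = simulate_reinforcement(20)
```

Under these rules the gap between expectation and motivation shrinks by a factor of (1 - rate)² each round, so the two settle on a shared intermediate level – one schematic way in which the sense of commitment could stabilize interaction patterns over repeated encounters.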
Thus, it is likely that children progressively become sensitive to the factors referred to in section 3.1 as being necessary for commitment in the strict sense (Did the other agent do something to raise the child’s expectation of X? If so, was it intentional? Is it common knowledge that the agent has expressed her willingness to do X?), and that these factors increasingly come to modulate children’s sense of commitment. In addition, their motivations and expectations are likely to become increasingly sensitive to other modulating factors, some of which were discussed briefly in section 3.1 (e.g. How often has the agent repeated the contribution of X so far? To what extent is the agent relying on X for the achievement of O?). Finally, the processes underpinning the sense of commitment (goal slippage and expectation fulfillment) are likely to become calibrated through experience to match those of other people in their culture, and to conform to cultural norms concerning when it is considered appropriate to make contributions to others’ goals and to expect contributions from others. As a result, people’s expectations about the extent to which others will be motivated by such processes will roughly match the extent to which others really are so motivated. Moreover, this makes it possible to distinguish between idiosyncratic emotional reactions and reactions which are considered justified within a culture, and to use the latter as a guide in judging the appropriate level of commitment. If this suggestion is on the right track, then the anticipation of emotional consequences of actions (e.g.
guilt aversion; Charness & Dufwenberg, 2006) might serve as a heuristic for assessing the reliability of expectations.16 For example, in a case in which it would be considered justified for YOU to feel guilt or shame for neglecting to contribute to ME’s goal, or for ME to express anger, ME can be confident that YOU will make the contribution and thus avoid unpleasant emotional consequences. Such an emotion-based heuristic may in fact serve as a proximal mechanism in connection with the maintenance of reputation. One consequence of this would be that individuals who do not experience moral emotions in a typical manner or do not understand them as others in their culture do, may also exhibit an anomalous
understanding of commitments. In the context of development, this would imply that children’s understanding of commitments should depend upon the development of their ability to anticipate moral emotions. Let us briefly consider some data that bears upon this conjecture. First of all, the predominant view in developmental psychology is that children begin to exhibit pride and embarrassment around their second birthdays, showing public elation when performing well at difficult tasks, and blushing and hiding their faces when they do not do well at some task or other. It is noteworthy that this is around the time when they first pass the mirror test (Bischof-Köhler, 1991), given that these emotions depend upon a self–other distinction and an understanding of how one appears from the outside, i.e. to the gaze of other people. As Philippe Rochat puts it:

In such secondary or self-conscious emotions, children demonstrate unambiguously that what they hold as representation of themselves (i.e. self-knowledge) factors the view of others. They begin to have others in mind in how they construe who they are. With secondary emotions such as embarrassment, pride, or contempt, the child further demonstrates the triadic nature of self-knowledge, a knowledge that is co-constructed with others.
(2008: 249)

Furthermore, it is also worth noting that Rakoczy and colleagues (Rakoczy, 2008; Rakoczy & Schmidt, 2013; Schmidt et al., 2011) have provided evidence that children are sensitive to social norms by around 24 months, and even inclined to enforce them by protesting against violators. But, of course, exhibiting or experiencing such emotions is different from understanding or anticipating them. And this is consistent with the finding that an understanding of complex moral emotions, such as guilt, pride, and shame, continues to undergo fundamental development until at least around 7 or 8 (Harris, 1989; Nunner-Winkler & Sodian, 1988).
Interestingly, children under this age rarely refer to such complex emotions in their speech (Ridgeway et al., 1985), and when presented with vignettes in which an agent succeeds or fails at some morally significant action because of her effort, her luck, or outside intervention, children younger than 7 or 8 are not proficient at inferring the resultant moral emotions, such as shame, guilt, pride, and anger (Thompson, 1987; Thompson & Paris, 1981; Weiner et al., 1982). Barden et al. (1980) also reported that 4–5-year-olds predicted that a person would be ‘happy’ if they had committed an immoral act but not been caught, whereas 9–10-year-olds predicted that they would be scared or sad. When asked to predict their own emotions in such situations, the children exhibited the same pattern. If the anticipation of moral emotions serves as an important heuristic for tracking commitments, then we should expect that children under about 9 should evince difficulties in some cases – in particular in making explicit judgments about violations of commitments. Given this general timetable for the development of moral emotions and the ability to understand and anticipate them, we should expect children to begin to act in accordance with commitments, and to protest when others fail to, around their second birthdays. But we should not expect them to reliably anticipate whether people are likely to honor commitments, or to make reliable judgments about commitment violations, until around 9.
4. Back to the three desiderata

As I have already emphasized, the minimal structure of commitment is less restrictive than that of commitments in the strict sense. Thus, it accommodates many cases that do not qualify as
commitments in the strict sense. With this conception of a sense of commitment in hand, let us now revisit the three desiderata identified in section 2 and explore what the minimal approach enables us to say about each (i.e. motivation, implicit commitment, development).

4.1. Revisiting desideratum 1: motivation

The minimal approach raises two important points about motivation. Firstly, the processes of goal slippage and expectation fulfillment can go some way toward increasing motivation as well as credibility. On the one hand, the default expectation that others will contribute (X) to one’s goals or desired outcomes will be more likely to be met and reinforced if other agents are indeed more likely to contribute, either through goal slippage or through expectation fulfillment (or through any other means, be it altruism, explicit reputation management, mere habit, etc.). And on the other hand, agents will be more likely to believe that one expects them to contribute X if one indeed expects by default that they will do so. Thus, the very detection of the minimal structure would tend to reduce both the real uncertainty and the perceived uncertainty about crucial contributions being made. Secondly, the sense of commitment can derive motivational force by engaging moral emotions and sentiments. For example, if an agent (YOU) does not contribute X in a situation instantiating the minimal structure, this may cause her to feel ashamed or guilty, and it may cause others to become angry or contemptuous. And since she and everyone else anticipate these emotional consequences, and everyone knows that they are undesirable outcomes that she is motivated to avoid, her commitment is credible, so she succeeds in generating expectations.
The anticipated emotional consequences of honoring or reneging on commitments change the payoff structure of action options in a way that parallels contracts: just as contracts reduce uncertainty by making particular action options highly unattractive, the anticipated emotional outcomes of commitment violations make particular action options unattractive and thereby reduce uncertainty about what actions agents will choose.17

4.2. Revisiting desideratum 2: implicit commitment

The minimal approach offers a clear explanation of how agents identify, and assess the level of, their own and others’ implicit commitments: they track the minimal structure of commitments (as well as various modulating factors, such as those discussed in section 3.1). In some cases, agents not only have a sense that an implicit commitment is in place but in fact judge that it is appropriate to expect contributions to be made – as may be the case, for example, if Polly and Pam have smoked and chatted together every afternoon for many years. When this is the case, then that judgment may serve to stabilize expectations and motivations further. In other cases, people act as though they or others were committed even though they would not explicitly judge that a commitment is in place – as with Sam reluctantly playing fetch with Woofer. By situating such cases along a continuum with cases of commitment in the strict sense, the minimalist approach presents this tendency as an important key to identifying the processes that lead people to engage in and to expect cooperative behavior.

4.3. Revisiting desideratum 3: development

According to the approach offered here, an understanding of commitment in the strict sense is constructed on the basis of the minimal structure through an increasingly nuanced sensitivity to subtler factors. Developmental findings provide support for the assumption that children are
sensitive to the minimal structure by the second year of life (they are motivated to contribute X in such situations and they protest when others fail to). Later on, children also become sensitive to the other factors referred to in section 3.1, and thereby develop an increasingly sophisticated understanding of commitments over the course of childhood. It is interesting to note that in teaching children about commitments, it is common for parents to point out precisely these factors to children (e.g. ‘But didn’t you know that Susie was counting on you to do X?’ ‘Didn’t you expect that Mommie would be angry if you did not do X?’). In addition, the minimal approach sheds light on an otherwise mysterious pattern in the developmental findings. Recall that, in the Gräfenhain et al. (2009) study discussed above, the main finding was that 3-year-olds, but not 2-year-olds, protest more over the experimenter’s abandonment of the joint action when the experimenter has made an explicit agreement to play together (commitment condition) compared to when she has not made an explicit agreement (no commitment condition). Interestingly, it is not the case that the 2-year-olds do not protest at all, and only the 3-year-olds understand the situation well enough to feel entitled to protest. On the contrary, the 2-year-olds protest just as much in both conditions as the 3-year-olds do in the commitment condition. This suggests that the sense of entitlement that inspires protest over an unfulfilled expectation is not the product of developmental changes over the third year but, rather, a default that is already in place by 2 or earlier. What changes in the third year is that children learn that they are not always entitled to expect contributions to their goals. In other words, the developmental process chips away at, rather than adds to, the cognitive architecture that underlies the protest behavior.
Moreover, in the Mant and Perner (1988) study discussed above, one interesting detail is that 22 of the 46 six-year-olds actually rated the protagonist as being naughty in both conditions (while 11 rated him as neutral in both conditions), i.e. when Peter had violated a commitment and thereby caused Fiona to be disappointed and sad, and when he had not made any commitment in the first place and Fiona was disappointed and sad. It is as though whenever a goal is not achieved and actors are left disappointed, the default is to assign blame, and to work out the details later. Indeed, this is just the pattern that one would expect on the basis of the minimal approach presented here. The minimal approach spells out several factors that could drive children’s emerging sensitivity to commitment. Investigating these factors with existing developmental tasks (e.g. Gräfenhain et al., 2009, 2013; Hamann et al., 2012) will allow us to explain in more detail the nature of children’s emerging understanding of commitment. For example, the minimal approach generates the prediction that children’s tendency to remain engaged and to expect engagement can be influenced by ostensive cues such as eye contact and motherese (Csibra & Gergely, 2009). Crucially, these cues were present in Gräfenhain et al.’s (2009) ‘joint commitment condition’ but not in the ‘no joint commitment condition’. Therefore, it cannot be ruled out that 3-year-olds’ differential responses in those two conditions may have been due to such ostensive cueing rather than to any verbal expression of commitment per se. Moreover, the minimal account also suggests the possibility that children’s motivation to remain engaged and expectation of engagement from others may be modulated by various other cues that typically signal an intention to cooperate. 
For example, young children’s tendency to cooperate and to expect cooperation from others may be enhanced just as much when another agent simply announces to a third party that she intends to share the spoils of a joint action as when she makes an explicit verbal commitment to the child to do so. In sum, the minimalist approach helps to explain how an understanding of commitments emerges through engagement in joint actions, rather than assuming that it is present as soon as children engage in joint actions.
Commitment in joint action
5. Conclusion

I began by identifying three desiderata: to identify the motivational factors that lead agents to feel and act committed, to pick out the cognitive mechanisms and situational factors that lead agents to sense that implicit commitments are in place, and to illuminate the development of an understanding of commitment in ontogeny. I have suggested that the minimal framework makes it possible to provide satisfactory responses to all three desiderata. The core of the minimal framework is an analysis of the minimal structure of situations which can elicit a sense of commitment. I then proposed a way of conceptualizing and operationalizing the sense of commitment, and discussed cognitive and motivational processes which may underpin it. Finally, we saw how the expectations and motivations making up the sense of commitment can reinforce each other over time, and thereby fulfill the social function of stabilizing agents’ expectations about other agents’ contributions to their goals or to outcomes they desire.
Acknowledgments

John Michael was supported by a Marie Curie Intra European Fellowship (PIEF-GA-2012–331140).
Notes

1 The notion of a commitment made only to oneself might seem odd: since one can release oneself from a commitment made to oneself simply by changing one’s mind, it may not be immediately clear how a commitment to oneself to do X differs from a simple desire to do X. Thus, if one changes one’s mind and decides not to quit smoking after all, what difference does it make that one has made a commitment to oneself to quit smoking? But commitments made to oneself can be rendered more compelling in various ways, for example by making them public, so that one’s reputation is at stake. In this manner, commitments to oneself can serve to stabilize one’s goals in the face of fluctuating desires and interests.
2 For simplicity’s sake, I will speak of one agent making a commitment. Thus, I will bracket out the interesting question whether there are any systematic differences between cases in which individuals enter into commitments and cases in which groups do so.
3 This standard conception helps to make clear how commitment differs from the related phenomenon of trust: unlike trust, commitments are linked to specific actions. Jennifer may trust Susie in general to behave in a responsible manner, to honor her commitments, etc. But her trust in Susie does not yet suffice for her to expect Susie to pick up the kids today or for her to judge that Susie is obligated to do so. It is not until Susie has agreed to perform this particular action that a commitment in the strict sense arises to perform this particular action. Thus, while the concept of trust picks out a general disposition to expect an agent to behave in a manner that supports one’s interests and well-being, the concept of commitment picks out specific obligations to perform specific actions, which arise because of agreements that have intentionally been made.
4 There is a vast literature to document this (for a recent review, see Vanderschraaf & Sillari, 2009).
5 This working definition is borrowed from Hohwy and Palmer (2014).
6 In the case of joint commitments, the situation is slightly more complicated insofar as a joint commitment in the strict sense is created when two (or more) individuals express their willingness to engage in a joint action conditionally upon the other(s) also expressing their willingness, and it is common knowledge among them that they have expressed their willingness (Gilbert, 2006a: 8–11; cf. also Gilbert, 1989). When a joint commitment is in place, both parties are obligated to perform the joint action and entitled to expect the other to do so as well.
7 This was compared to a condition in which the first child received the reward before the joint task even began, i.e. there was no collaboration at all, and therefore no sense of commitment. In this condition, the children were significantly less likely to assist the second child in attaining her reward.
John Michael

8 It must be noted that these earlier studies cannot be directly compared with Gräfenhain and colleagues’ study: not only did they use different measures, but they also implemented scenarios in which the children were asked to make judgments from a third-person perspective, which may be intrinsically more difficult than a first-person perspective. For present purposes, however, the relevant point is that the findings from these earlier studies give us reason to be cautious, and thus provide initial motivation for a thinner interpretation of Gräfenhain and colleagues’ findings.
9 For example, many of the children mistakenly judged that one can promise to bring about an event over which one has no control, and thereby commit oneself to bringing that event about.
10 It is also consistent with the minimalist approach to the development of theory of mind developed by Butterfill and Apperly (2013).
11 The following discussion partly draws on and expands on points made in Michael, Sebanz, and Knoblich (2016).
12 In saying that the contribution is crucial, what is meant is that it is a necessary component of a particular strategy for bringing about O.
13 It is important to emphasize that the account does not identify the sense of commitment with just any expectations and motivations but with expectations and motivations pertaining to the contribution of X in situations instantiating the minimal structure.
14 As Michael and Salice (forthcoming) have argued, this opens the door to the possibility of designing robots that elicit and/or exhibit a sense of commitment – i.e. such that (i) human agents are motivated by a sense of commitment toward them, (ii) human agents expect them to be motivated by a sense of commitment toward human agents, (iii) they are motivated by a sense of commitment toward human agents, and/or (iv) they expect human agents to be motivated by a sense of commitment toward them.
15 The ideomotor theory (Sebanz et al., 2003; Hommel et al., 2001) offers one way of articulating and motivating this idea. For present purposes, I remain neutral with respect to this theory in general, but endorse this particular aspect of the theory.
16 This is consistent with the idea, recently articulated by Szigeti (2013), that moral emotions serve as heuristics for assessing the moral status of actions.
17 This idea is adapted from Frank (1988); cf. also Michael and Pacherie (2014).
References

Astington, J. W. (1988). Children’s understanding of the speech act of promising. Journal of Child Language, 15(1), 157–173.
Austin, J. (1962). How to Do Things with Words. Cambridge, MA: Harvard University Press.
Barden, R. C., Zelko, F. A., Duncan, S. W. & Masters, J. C. (1980). Children’s consensual knowledge about the experiential determinants of emotion. Journal of Personality and Social Psychology, 39(5), 968.
Bateson, M., Nettle, D. & Roberts, G. (2006). Cues of being watched enhance cooperation in a real-world setting. Biology Letters, 2, 412–414.
Bischof-Köhler, D. (1991). The development of empathy in infants. In M. E. Lamb and H. Keller (Eds.), Infant Development: Perspectives from German Speaking Countries (pp. 245–273). Hillsdale, NJ: Erlbaum.
Brownell, C., Ramani, G. & Zerwas, S. (2006). Becoming a social partner with peers: Cooperation and social understanding in one- and two-year-olds. Child Development, 77(4), 803–821.
Butterfill, S. (2012). Joint action and development. Philosophical Quarterly, 62(246), 23–47.
Butterfill, S. & Apperly, I. (2013). How to construct a minimal theory of mind. Mind and Language, 28(5), 606–637.
Camerer, C. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton, NJ: Princeton University Press.
Carpenter, M. (2009). Just how joint is joint action in infancy? Topics in Cognitive Science, 1(2), 380–392.
Charness, G. & Dufwenberg, M. (2006). Promises and partnership. Econometrica, 74, 1579–1601.
Clark, H. H. (2006). Social actions, social commitments. In N. J. Enfield and S. C. Levinson (Eds.), Roots of Human Sociality: Culture, Cognition, and Interaction (pp. 126–150). Oxford: Berg.
Csibra, G. & Gergely, G. (2009). Natural pedagogy. Trends in Cognitive Sciences, 13, 148–153.
Dana, J., Cain, D. M. & Dawes, R. M. (2006). What you don’t know won’t hurt me: Costly (but quiet) exit in dictator games. Organizational Behavior and Human Decision Processes, 100(2), 193–201.
Francey, D. & Bergmüller, R. (2012). Images of eyes enhance investments in a real-life public good. PLoS ONE, 7(5): e37397. doi:10.1371/journal.pone.0037397.
Frank, R. H. (1988). Passions Within Reason: The Strategic Role of the Emotions. New York: W. W. Norton & Co.
Gaertner, S. L. (1973). Helping behavior and racial discrimination among Liberals and Conservatives. Journal of Personality and Social Psychology, 25, 335–341.
Gilbert, M. (1989). On Social Facts. London: Routledge and Kegan Paul.
———. (2006a). Rationality in collective action. Philosophy of the Social Sciences, 36(1), 3–17.
———. (2006b). A Theory of Political Obligation. Oxford: Oxford University Press.
Gräfenhain, M., Behne, T., Carpenter, M. & Tomasello, M. (2009). Young children’s understanding of joint commitments. Developmental Psychology, 45(5), 1430–1443.
Gräfenhain, M., Carpenter, M. & Tomasello, M. (2013). Three-year-olds’ understanding of the consequences of joint commitments. Public Library of Science ONE, 8(9), e73039. doi:10.1371/journal.pone.0073039
Hamann, K., Warneken, F. & Tomasello, M. (2012). Children’s developing commitments to joint goals. Child Development, 83(1), 137–145.
Harris, P. L. (1989). Children and Emotion: The Development of Psychological Understanding. Oxford: Blackwell.
Hohwy, J. & Palmer, C. (2014). Social cognition as causal inference: Implications for common knowledge and autism. In M. Gallotti and J. Michael (Eds.), Social Ontology and Social Cognition, Springer Series Studies in the Philosophy of Sociality, 4, 167–189.
Hommel, B., Müsseler, J., Aschersleben, G. & Prinz, W. (2001). Codes and their vicissitudes. Behavioral and Brain Sciences, 24(05), 910–926.
Lewis, David K. (1969). Convention: A Philosophical Study. Cambridge, MA: Harvard University Press.
Liszkowski, U., Carpenter, M. & Tomasello, M. (2007). Pointing out new news, old news, and absent referents at 12 months of age. Developmental Science, 10, F1–7.
Mant, C. M. & Perner, J. (1988). The child’s understanding of commitment. Developmental Psychology, 24(3), 343–351.
Melis, A. P. & Semmann, D. (2010). How is human cooperation different? Philosophical Transactions of the Royal Society, 365, 2663–2674.
Michael, J. & Pacherie, E. (2014). On commitments and other uncertainty reduction tools in joint action. Journal of Social Ontology, https://www.degruyter.com/view/j/jso.2015.1.issue-1/jso-2014-0021/jso2014-0021.xml
Michael, J. & Salice, A. (forthcoming). The sense of commitment in human-robot interaction. International Journal of Social Robotics.
Michael, J., Sebanz, N. & Knoblich, G. (2016). The sense of commitment: A minimal approach. Frontiers in Psychology (Jan.). http://dx.doi.org/10.3389/fpsyg.2015.01968
Nunner-Winkler, G. & Sodian, B. (1988). Children’s understanding of moral emotions. Child Development, 59(5), 1323–1338.
Rakoczy, H. (2008). Taking fiction seriously: Young children understand the normative structure of joint pretend games. Developmental Psychology, 44, 1195–1201.
Rakoczy, H. & Schmidt, M. (2013). The early ontogeny of social norms. Child Development Perspectives, 7(1), 1750–8606.
Ridgeway, D., Waters, E. & Kuczaj, S. A. (1985). Acquisition of emotion-descriptive language: Receptive and productive vocabulary norms for ages 18 months to 6 years. Developmental Psychology, 21(5), 901.
Rochat, P. (2008). ‘Know Thyself!’ . . . But what, how and why? In F. Sani (Ed.), Individual and Collective Self-Continuity: Psychological Perspectives (pp. 243–251). New York: Lawrence Erlbaum.
Scanlon, T. (1998). What We Owe to Each Other. Cambridge, MA: Harvard University Press.
Schiffer, S. (1972). Meaning. Oxford: Oxford University Press.
Schmidt, M., Rakoczy, H. & Tomasello, M. (2011). Young children attribute normativity to novel actions without pedagogy or normative language. Developmental Science, 14, 530–539.
Searle, J. (1990). Collective intentions and actions. In P. Cohen, J. Morgan and M. E. Pollack (Eds.), Intentions in Communication (pp. 401–416). Cambridge, MA: Bradford Books, MIT Press.
———. (1969). Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press.
Sebanz, N., Knoblich, G. & Prinz, W. (2003). Representing others’ actions: Just like one’s own? Cognition, 88(3), B11–B21.
Shpall, S. (2014). Moral and rational commitment. Philosophy and Phenomenological Research, 88(1), 146–172.
Silk, J. (2009). Nepotistic cooperation in non-human primate groups. Philosophical Transactions of the Royal Society B, 364, 3243–3254.
Szigeti, A. (2013). No need to get emotional? Emotions and heuristics. Ethical Theory and Moral Practice, 16(4), 845–862.
Thompson, R. A. (1987). Empathy and emotional understanding: The early development of empathy. In N. Eisenberg and J. Strayer (Eds.), Empathy and Its Development (pp. 119–145). Cambridge: Cambridge University Press.
Thompson, R. A. & Paris, S. G. (1981). Children’s inferences about the emotions of others. Paper presented at the biennial meeting of the Society for Research in Child Development, Boston.
Tollefsen, D. (2005). Let’s pretend: Children and joint action. Philosophy of the Social Sciences, 35(75), 74–97.
Tomasello, M. (2009). Why We Cooperate. Cambridge, MA: MIT Press.
Vanderschraaf, P. & Sillari, G. (2009). Common knowledge. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.
Vesper, C., Butterfill, S., Sebanz, N. & Knoblich, G. (2010). A minimal architecture for joint action. Neural Networks, 23(8/9), 998–1003.
Warneken, F., Chen, F. & Tomasello, M. (2006). Cooperative activities in young children and chimpanzees. Child Development, 77(3), 640–663.
Warneken, F. & Tomasello, M. (2007). Helping and cooperation at 14 months of age. Infancy, 11, 271–294. doi: 10.1111/j.1532–7078.2007.tb00227.x
Weiner, B., Graham, S., Stern, P. & Lawson, M. E. (1982). Using affective cues to infer causal thoughts. Developmental Psychology, 18, 278–286.
22
THE FIRST-PERSON PLURAL PERSPECTIVE
Mattia Gallotti1
1. Introduction

In A Rose for Emily (1930), William Faulkner tells the story of Emily Grierson’s life and death in the town of Jefferson. We hear about the ups and downs of her idiosyncratic relationship with the townspeople, neighbours and town leaders, about her father and first love, the oddities of a secluded life and the mysteries of a once-grandiose house. The story is short and intense, and it captures the community’s collective memory of its most peculiar member. It is narrated in the first-person plural; however, the reader is left with the impression that we will never know whom the relevant “we” denotes. It could be the whole of the townspeople, or perhaps a sub-group of them. Most likely, the narrator is a person speaking on behalf of all those who might turn out to have been closer to Emily than anyone else. As the readers find out more and more about the secrets of the main character, and about what it meant and felt like for the people of Jefferson to have Emily as one of them, we also realize that the “we-s” of the story mask a complex dialectic of collective and individual perspectives. The transition is fluid and graded, and it is made vivid by subtle shifts in focus in the minds of the narrator, and of the reader too.2 The dialectic is captured by different uses of the first-person plural pronoun. One is the social3 we, used loosely to express the undifferentiated point of view of the group. The other is the individual version of first-person plural narration, the differentiated we voiced by individuals when their minds tune into one another through processes of reciprocal “alignment”. What makes it a differentiated we is the recognition, not that the townspeople of Jefferson have the exact same opinion of Emily, but that they can make sense of the facts of her troubled and mysterious life together, jointly. They can still see things differently, but things can also appear before their individual minds as theirs (“ours”).
In other words, the social we is predicated on the assumption that there is a common world around us, a world of things that we inhabit and share in virtue of being part of the same community. The individual we reflects the idea that we can also, each individually, represent things and situations as common to us, as something we think about and act upon together, by exchanging information relating to each other in a reciprocal manner. I shall refer to the former as the “perspective of the we” and to the latter as the “we-perspective”, or the first-person plural perspective. The former has enjoyed wide currency in the reflection on sociality, at least since the study of plural subjects and group dynamics gained centre stage in
social science research (for a recent review, see Jarymowicz 2015). The latter, instead, has made its way into the philosophy and science of sociality only fairly recently. The “we-perspective” refers to the mental attitudes of people when they each individually represent things, including themselves and others, “as a we” (Gallotti and Frith 2013). My aims in this paper are to provide a suitable formulation of the first-person plural perspective; to characterize the kind of mental experience accompanying it, and to examine its mode of existence in the broader mental life of people; and to sketch a theoretical framework for articulating an account of the we-mode consistent with the best cognitive science of the day.

Historically, we concepts have developed across various lines of thinking about society, and its foundations. A theory of society ought to account for the way in which persons relate to one another, and to the social facts they constitute together. The first difficulty that one encounters in investigating social ontology concerns the appropriate level of description. Society is, at bottom, a world of human interactions, and hence the interaction of individual people should be the key to understanding the formation of social facts, broadly understood.4 Yet, the connection between the facts of society and the facts about the individuals who take part in social interactions has always proved difficult to capture beyond intuition. The tension, which is traditionally presented in the literature as a reduction problem, has constrained the range of explanatory options and tools available for theorizing across the social and the cognitive sciences.5 In recent studies of social cognition, for example, it is argued that, when the minds and bodies of individual people adjust in the course of interaction and become shared, they generate distinctive patterns of behaviour that are best described in first-person plural terms – i.e.
as involving the ability of individuals to attune to one another’s mind and attend to things together.6 However, we-attitudes and we-representations typically resist description in terms of the concepts employed in describing the representational structure of individual thought and action. Accordingly, it has become customary to posit socially relevant features of the intentionality of agents in interaction, or collective intentionality,7 to account for the fact that the minds of individual people can be directed at objects and states of affairs in the world jointly (Schweikard and Schmid 2013). If we-representations are mental representations held by individual people, what is the we postulated by the we-perspective?

Thinking about the first-person plural perspective has evolved through a rich and increasingly inter-disciplinary intellectual trajectory. In order to identify its defining features, in section 2 I will examine some of the philosophical questions and issues facing theorists of the we-perspective. This will provide us with the relevant background to refine the notion of the we-perspective in terms of what the philosopher Wilfrid Sellars (1963) dubbed the “we-mode”, which I will discuss in section 3. Not only did Sellars put forward what was to become the first formulation of a theory of collective intentionality in social ontology, he also pointed to a set of pressing methodological issues, which paved the way for later discussions about the we-mode in social cognition. In section 4, I will set out a framework for analyzing emerging naturalistic approaches to aspects of the social mind, which openly make we-mode claims, and I will conclude with some remarks on future directions of research.
2. Definitional issues

Uses of we fall within the area concerned with the study of the intentional attitudes of the individual people involved in forms of collective behaviour. Before we make distinctions, we should be clear about the role of collective intentionality in social ontology. The study of collective intentionality is the study of the properties of the mental states and processes of individuals, in interaction, which suffice to make them shared mental representations.8 Theories of collective
intentionality have historically sprung from the confluence of long-standing assumptions about the emergent, i.e. group-level, traits of phenomena initiated and sustained by the interaction of people; a stock of folk psychological intuitions about the nature and identity of mental states; and considerations about the difficulty of reconciling a priori analyses of social concepts with the tenets and methods of causal-scientific explanations of (collective) behaviour. In this section, I discuss the concepts of we associated with two families of theories of collective intentionality. Since these theories stand at opposite ends of a spectrum of theoretical possibilities, the goal is to compare them critically in order to shed light on the motives and intuitions feeding current discussions of the first-person plural pronoun.

The facts of the social are grounded in rules anchored to a variety of causal factors, some of which are not psychological. This consideration has led an increasing number of philosophers to question the view that the social world should be conceived of as a projection of the collective attitudes of people onto worldly entities and states of affairs (Epstein 2015). Why, then, suppose that the key to understanding society, and its foundations, is the mental attitudes of individual people? And why talk of collective attitudes? Indeed, society is more than a mere collection of individuals. But acknowledging this simple fact of everyday social life has only strengthened the conviction that the starting point of an inquiry into society should be to ask what, exactly, the individuals contribute to the formation of social entities and kinds. The point of asking this question is that, when individuals pool their mental resources together and engage in common activities with other people, they can achieve a result that is qualitatively different9 from the sum of the single contributions.
Scientific research has shown at length that the alignment10 of minds and bodies with one another at different levels of interaction generates neural and behavioural patterns of individual-level activity not seen when the subjects act on their own (Dale et al. 2013). Since society is a world of human interactions, a theory of society will have to encompass an account of the facts about the operations of the minds of interacting people, which sustain the formation and persistence of mental states jointly directed at worldly objects and states of affairs.

The following operational definition of “shared” enjoys wide currency in the collective intentionality literature. As the story goes, the outcome of an activity is shared when the single individuals who take part in it pool their mental resources – knowledge, thoughts, experiences and so forth – in such a way that the outcome can no longer be partitioned over the single contributions. If the intentional structure of a shared activity cannot be made fully intelligible in terms of the contributions of the single individuals, what can, then? I take this question as the point of departure for defining the scope and relevance of we concepts in the literature on the social mind.

Uses of the first-person plural pronoun have crystallized as explanatory resources in attempts to bring to light the shared aspect of collective intentional behaviour. As an illustration, consider the notion of we generated in the historically influential strand of research on crowd behaviour. The crowd is a paradigmatic example of a social phenomenon (Borch 2012). Its existence depends on the participation of single agents yet, when the group is formed, the crowd exerts a collective effect on the psychology of the agents involved in it.
I use “collective” in the present context to indicate a set of emergent, group-like traits and dispositions observed when individual people join forces and engage in socially organized forms of behaviour (Smaldino 2014). As is well documented, the attention of early social theorists and psychologists was directed at the remarkable force of crowds to transform the behaviour of their individual units. The language of transformation was informed by the ideological trends of the time, as proponents of the so-called “collective psychology” tradition of thought11 operated in a politically charged environment, marked by the debate on the promise of collectivities to open up unprecedented
avenues for social action and change. This motivation, together with the positivistic commitment to the claims and methods of social-scientific enquiry, yielded a peculiar we concept, one that externalizes the psychology of human sociality and locates its causal-functional history in a supra-individual realm. The crowd was conceived of as an undifferentiated plural subject, a “we” capable of changing the dispositions and traits of the single agents – the single “Is” – mostly in a pejorative fashion. This use of we thus denotes an irreducibly plural subject of thought and agency, capable of exerting psychological activity over and above the heads of the single individuals. “Irreducible” hints at the assumption that there is a fact of the matter – that is, a fact about the nature of an alleged group, or social, mind – responsible for the patterns of behaviour exhibited by the individuals when their behaviour is transformed through crowd membership. As such, the we of collective-psychology theorists stands out as an ontological response to the question of what enables the crowd to shape and constrain the behaviour of the individuals.

Note that the undifferentiated perspective of a plural subject – a we capable of determining the mentality and conscience of the individual agents from outside (above) their own individuality – is not the same as the differentiated, inter-subjective perspective of the individuals when they think and enact things together. Although there are interesting points of contact between the perspective-of-the-we and the we-perspective, the two uses of we entail remarkably different conceptions of the social mind. In the recent literature on collective intentionality and the cognitive preconditions of social behaviour in philosophy and science, the first-person plural pronoun is made salient in claims that presuppose none of the ontologically problematic assumptions of the collective psychology tradition.
By and large, these strands of research are committed to versions of ontological individualism, the underlying assumption being that there cannot be other forms of mentality than the individual’s. The argumentative line is to claim that group-like psychological features can be explained away in terms of the attitudes of the individuals prior to engaging in interaction with other people. However, the question remains about the status of first-person plural claims. I address this question and elucidate the meaning of the first-person plural perspective by way of an example from the work of the psychologist Michael Tomasello on The Origins of Human Communication (2008).

To provide some background, Tomasello identifies the emergence of a distinctive kind of social engagement as the decisive evolutionary adaptation enabling early humans to successfully adapt to pressing environmental challenges and develop species-specific forms of social cognition. The comparative advantage of this new form of social existence manifested itself in early forms of gestural communication based on pointing and pantomiming, which paved the way for the emergence of more complex forms of shared intentionality. Interestingly, Tomasello articulates the concept of the (shared) representation of a joint goal in terms of an individual-level representation of the goal as shared. Shared representations are sustained by a “dual-level structure of simultaneous sharedness and individuality – a joint goal but with individual roles” attained by means of recursive mindreading and perspective taking (Tomasello 2014: 43). To elucidate the point, consider the following example:

Suppose that you and I are walking to the library, and out of the blue I point for you in the direction of some bicycles leaning against the library wall. Your reaction will very likely be “Huh?”, as you have no idea which aspect of the situation I am indicating or why I am doing so, since, by itself, pointing means nothing.
But if some days earlier you broke up with your boyfriend in a particularly nasty way, and we both know this mutually, and one of the bicycles is his, which we also both know mutually, then the exact same pointing gesture in the exact same physical situation might mean something very complex like “Your boyfriend’s already at the library (so
perhaps we should skip it).” On the other hand, if one of the bicycles is the one that we both know mutually was stolen from you recently, then the exact same pointing gesture will mean something completely different. Or perhaps we have been wondering together if the library is open at this late hour, and I am indicating the presence of many bicycles outside as a sign that it is. (p. 3)

If we are not on the same page, as it were, I cannot understand what you mean by pointing to that bicycle. But if we are aware of the story behind your gesture, then the message will get across smoothly. Sharing the relevant piece of information is by all means a prerequisite for the fixation of meaning (Sperber and Wilson 1995). However, there is more to the sort of meeting of minds hinted at by Tomasello than the fact that we understand each other and share some common ground. Tomasello emphatically remarks that the things we must both be in a position to know for the communication to work out must also be known by us mutually. Why mutually? Suppose that each of us does know that your bicycle was stolen last week, and each realizes that the bicycle leaning against the wall of the library is strikingly similar to yours. Wouldn’t this be enough for us to conclude that that bicycle is your bicycle? It would, but only if you knew that I know that your bicycle was stolen last week and, also, if I knew that you know that I know the story – and so forth. Things would have certainly been different if we had not lived in the moment together, as it were. All the pieces of information essential for the exchange should be out in the open and transparent to both of us for the message to get across effectively. Communication does not work if there is some asymmetry in the distribution of information.
Therefore, to know something mutually is for any one of us involved in a social interaction to exchange and process information relating to each other in a reciprocal manner (Gallotti et al., manuscript). Notice that the key to reciprocity is that we know we are on the same page, not that the page is the same. In fact, the thought that the object is common to us, something we know about together, is not the same as the thought that there is a common object. Compare: if we each individually know about the relevant object of thought being out in the open and transparent to us, you attending to it with me, together, becomes part of the experience I have of the object attended to, and vice versa. We then use the first-person plural pronoun we to capture something different from the fact that there is a common object. Indeed, if there is a common object of thought, we might have some sort of replica in our individual minds, yet this does not capture the concept of we at stake.12 In more general terms, the difference is between the thought that there is a common world, a world of objects and states of affairs that we share and enact as a community of people, and the thought that this shared realm of things and events is a world we participate in together. In this paper, I am concerned with this particular meaning of we. The we-perspective is the attitude entertained by individual people when they see things, including themselves and others, from the perspective of the group they belong to. The group can be any socially identified category that one happens to be a member of at any given time, but it need not be so. What is essential for people to entertain first-person plural content, to think and act upon things “as a we”, is that each person individually attunes to one another’s mind mutually. 
As we will see in the course of the discussion, it is these attitudes, the way in which people exchange and process information relating to each other, that warrant talk of a distinctively we-perspective. In fact, one important motivation for studying processes of alignment is the recognition that individual people manifest remarkable psychological properties when their minds and bodies adjust to each other in the appropriate manner. These properties can then be conceived of and
Mattia Gallotti
presented as the traits of a differentiated we, but only to the extent that the relevant we-claims are claims about the features of the attitudes of individuals which become manifested through social participation and membership.
3. The we-mode

What is the status of the claims that individuals can entertain a we-perspective? What is the individual we, if appeal to group minds is emphatically ruled out, on ontological grounds, from a scientifically plausible account of collective behaviour? In order to address these questions, we have to clarify what justifies talk of a distinctively we-perspective in the first instance. The concept of the “we-mode”, introduced by Sellars in his writings on the nature of morality, will help here. However cursory Sellars’ account of collective intentionality might appear nowadays, it provided a blueprint for later developments in the research on we-intentionality (Tuomela 1984). The problem of the we-mode is traditionally presented as a reduction problem in the literature on collective intentionality. When philosophers debate the conditions of existence and identity of collective or we-attitudes, most often they refer to the question whether the relevant conditions are exhausted by a reductive explanation of collective intentional concepts in terms of those which are employed in describing individual intentional behaviour (for a review, see Schweikard and Schmid 2013). The answers are arrived at via sophisticated analyses of plural action sentences and common intuitions as to what counts as shared in paradigmatic instances of sociality.
Reductivists hold that all that is needed to share attitudes is that the mental states of the individual agents be properly configured and supplemented with something like mutual knowledge (Bratman 1992, 2014; Ludwig 2007; Gold and Harbour 2012). What makes a shared cognitive (conative, affective) activity shared is nothing but features of the relevant state of affairs, where the mental states of people become connected in the proper manner.13 For nonreductivists, on the contrary, there are plenty of counterexamples showing that collective intentional predicates entail more than just the proper configuration of first-person singular mental states (Searle 1990, 1995; Tuomela 2013). The facts that sustain the formation and persistence of shared mental states are facts about the mental ontology of individuals. Arguments for the irreducibility of collective intentionality make appeal to the fundamental idea of the we-mode. To illuminate this idea, let us turn to Sellars’ insights on the we-mode. In his writings on morality, Sellars was particularly interested in the question as to how to articulate an account of the collective character of moral and normative judgments in line with the principles and methods of scientific inquiry. Consider judgments expressed by statements like “People shall pay taxes”, for example. Social norms prescribe rules of conduct for the members of a community. Such rules display an inter-subjective force, which transcends the individuality of people and constrains their life and behaviour at the collective level. One way to make sense of the binding force of moral judgments, within a scientifically reasonable framework, is to examine their collective dimension in terms of a chain of individual-level dispositions, notably intentions. Sellars (1963) noticed that intention-based discourse offers a useful framework for linking moral decision making to practical reasoning; however, it also reveals a puzzling feature.
Although one’s obligations and commitments to others can be examined by appeal to one’s own intentions, mental-state discourse postulating “egocentric” intentions fails to encompass the universality and inter-subjective bond of norms. This becomes all the more apparent when collective norms of behaviour clash with the preferences of individuals – as in the case: “We should not try to avoid paying taxes, but I do.” If conflicting attitudes are truthfully ascribed to the same thinking subject, we would expect the relevant statement to be genuinely contradictory.
But what is puzzling about statements involving a disjunction between collective and individual attitudes is that they do make sense, even though both collective and individual attitudes are ascribed to one and the same subject. According to Sellars, if the collectively oriented attitude, qua intention, were conceived of as a representation with the same, or similar, content across all the single agents – say an intention held in parallel by all members of the community – there would indeed be a genuine conflict between putatively collective and individual intentions. As we saw in the previous section, only one of two explanatory options was available to theorists facing the prospects of a theory of society consistent with the generalizations and methods of science. The laws and principles of late-nineteenth century behavioural science provided them with a framework which only allowed externalizing the locus of the collective in a realm of facts over and above the heads of individuals. As Olen and Turner (2015) argue, Sellars was almost certainly aware of the theoretical issues facing these attempts to naturalize social ontology. The issues presented themselves to Sellars, as well as to Durkheim and his followers, under the guise of the classic problem of reduction in the philosophy of society – the problem of explaining the seemingly irreducible collective force of social facts within the broader view that social facts emerge from facts about the interaction of individuals. But Sellars took a different route than Durkheimians and, in the attempt to overcome the ontological extravagances of group-mind accounts, he introduced the notion of the we-mode. So, he concluded that we-attitudes must be attitudes of a different type, and that the “we” clause in plural action sentences is an expression of representationally peculiar intentional facts, that is, facts directed at objects and states of affairs collectively, rather than in parallel. 
In one of several passages where the idea of a we-perspective is laid out, Sellars gestures to the possibility that people’s mental life can in fact be enriched by a psychological attitude characterized as – in his own language – the primitive state of consciousness held by one person thinking and acting “as-one-of-us” (1963: 204–205; emphasis in original). Thus, the inter-subjectivity of socially relevant forms of thinking, like moral thinking, derives from truly collective attitudes (“we-intentions”) which are irreducible to the mere summation of individuals’ own preferences (“I-intentions”) at the appropriate level of description. Appropriate means that what makes we-representations of a putatively different kind from I-representations is that their conditions of individuation are not conceptually exhausted by claims about the psychology of the individuals in the first-person singular. Claims about the we-mode can thus be characterized as a version of what philosopher Robert Wilson has dubbed the “social manifestation thesis” – the thesis that “socially manifested psychological traits are properties of individuals, but since they occur only in certain group environments, they cannot be understood in purely individualistic terms” (2004: 301; emphasis mine). In this respect, there is a stark contrast between claims about the we-perspective and claims about the perspective-of-the-we advocated by proponents of the collective psychology tradition of thought. Sellars and the philosophers of the we-mode, who have followed closely in his footsteps, are ontological individualists. They only postulate individual-level intentional attitudes, but they hold that there is a difference in the intentional structure of thought between representations held in the first-person singular mode, and representations the content of which can be reconstructed under a description of the relevant psychological attitude as irreducibly collective.
What emerges from the discussion is that we-attitudes are irreducible to I-attitudes conceptually. Yet, the irreducibility of the we-mode is compatible with the individualist’s rejection of any claim that there are supra-individual forms of collective mentality and conscience. But, then, if claims about the we-mode are to be understood as claims about properties and traits of individual minds, how can they be investigated empirically? In the last section, I set out a
theoretical framework for addressing issues of scientific reduction in light of emerging naturalistic trends in the study of the social mind, which make appeal to the first-person plural perspective.
4. Reciprocal alignment

Sellars’ analysis set the concept of the we-mode on a fresh new trajectory. His grand picture of an empiricist philosophy of mind also gestured to a set of methodological issues which have later motivated, and also problematized, the prospects of embarking on a scientific study of the first-person plural perspective. In fact, the naturalness of the we-mode is simply granted on liberal grounds in the philosophical literature on collective intentionality. In spite of the ease with which we commonly think of individual people in interaction as capable of transcending their individuality and experiencing things from a we-perspective, the question as to what it takes for at least two individuals to be in a we-mode state has only been addressed as a matter of conceptual possibility until recently. Yet, first-person plural claims now figure in patterns of scientific reasoning and discourse pertaining to the causation of individual behaviour in social contexts. So, it is important and timely to ask what form a scientifically plausible account of the factual basis of the we-perspective will take, provided that claims about the we-mode are empirical claims of social cognitive science. To address this question, in this last section I will touch upon recent work on the mechanisms and processes of mental alignment in social cognition.
This emerging strand of research promises to shed light on the basis of the we-perspective in a way that is consistent with non-reductivist approaches to the nature and mechanisms of social cognition, which give pride of place to the active role of the social environment, without giving up on the intuitions about the inter-subjective relevance and causal-functional role of thoughts and experiences in the first-person plural (Gallotti and Frith 2013). To make a start on this task, I set out a framework for analyzing claims about we-representations in current strands of scientific discourse. This framework will help us to uncover and examine some common assumptions of we-mode claims in the literature, and to point to an alternative direction of research. For a whole-hearted naturalist, science at its best establishes the proper form of explanation of things. Outside of social theory and philosophy, a paradigmatic natural-scientific approach is to discover some natural mechanism that illuminates aspects of the cognitive capacity to be explained in a testable manner. Now, suppose that we have access to a body of highly confirmed and reliable data with explanatory and predictive power, about the organization and implementation of internal states of the organism, which could illuminate the conditions that must obtain for any two people to be in the we-mode. What would be required by the effort of articulating a scientifically plausible characterization of the underpinnings of the we-mode in some presumably more fundamental language? We would probably articulate a theory aimed at providing insights into the relevant conditions, using terms that will make the experience of mutuality characteristic of the first-person plural perspective somehow self-explanatory.
In so doing, we would make claims about internal facts of the organism as well as claims about the sort of practices whereby we identify and attribute first-person plural intentional predicates to people, including ourselves. Interestingly, both sets of facts – which Peter Godfrey-Smith (2004: 149) has aptly called the “wiring-and-connection” and the “interpretation” facts – are empirical facts, that is, facts of which we can reasonably expect to achieve scientific knowledge in the future. Many empirical efforts to study the cognitive and neural basis of social cognition involve claims which make prominent the use of the first-person plural pronoun. These claims, explicitly or implicitly, suggest that the relevant experimental studies assume that the subjects understand the structure of the task, or aspects of it, as a we.
The scope and significance of this assumption for scientific theorizing, I contend, can be analyzed in terms of the distinction between so-called “top-down” and “bottom-up” approaches to explanation in cognitive research. In an illuminating discussion of the topic, Egan and Matthews (2006) present the thrust of a top-down approach as the effort to explain how the mapping of cognitive mechanisms and processes onto internal states of the organism should be hierarchically structured around levels, which constrain theorizing from the top-most level downwards. The top-most level is typically represented by a well thought-out description of the capacity to be explained. This description specifies features of the capacity in ways that, ideally, make its structure and function explanatorily transparent. As is clear, the key concept is epistemic transparency: top-downers are in some way committed to the view that a scientific account of the capacity to be explained should entail the features that they (qua theorists) take to be explanatorily perspicuous. In the case of the we-mode, for example, the best strategy for letting epistemic transparency steer the mapping would be to posit some sort of psychological infrastructure of internally represented states with semantic features that suffice to make them appear before the minds of the subjects as we-representations. When articulated in these terms, the scientific concerns of top-down theorists gesture to an interesting connection with the intuitions feeding classic philosophical views of the irreducibility of the we-mode. Sellars’ reflection on the we-mode as a primitive state of consciousness is a good case in point. So is Searle’s oft-quoted passage that the capacity of collective intentionality – to be understood in terms of the we-mode – is a biologically primitive phenomenon “that cannot be reduced to or eliminated in favor of something else” (1995: 24).
The thought behind these claims, and the concern as to how to make space for them in the realm of empirical-science explanations, clearly resonate with the approach of a top-downer. Across science and philosophy, the top-down strategy is widely exemplified by experimental studies of the mechanics of shared intentionality, which describe the capacity of people to attune to one another’s mind as involving individual-level representations with first-person plural content – something involving “flavors of togetherness” (Tollefsen et al. 2014). This capacity is standardly presented as the ability of interacting individuals to internally recognize aspects of the representational structure of the state they are in as shared. But if being in a we-mode state consists in having this capacity, then some knowledge of sharedness will be required as a constitutive condition for being in the relevant state. In other words, people would only be able to represent things as-a-we when they know they do. If one takes this to be the conclusion of a top-downer, then there will be good reasons for resisting it on considerations that are usually advocated by bottom-up theorists. Studies of low-level processes responsible for the formation of shared mental states point to a variety of cognitive phenomena at the level of perceptual processing and motor representation, synchronization and coupling, co-representation and entrainment (Knoblich et al. 2011). These accounts reflect a conception of the theory and practice of social cognition that is more modest in scope and less ontologically ambitious than top-down explanations. If any reference to the we-perspective is made by bottom-up theorists, it carries only a minimal commitment to the idea that the investigation of the factual basis of shared intentionality ought not be guided by pre-theoretical ideas of how the capacity works.
Rather, the mapping proceeds upward and theories are formulated by adding single explanatory components as they turn out to be necessary for theorizing. This strategy would allow cognitive scientists to address several issues that classic analyses of collective intentionality informed by considerations about the we-mode have proved unable to meet, or simply ignored, like implementation and over-intellectualization problems (Tollefsen and Dale 2012). However, it may turn out to be difficult to come up with a complete explanation of the underpinnings of the we-mode without any idea of what would count as explanatorily transparent in the broad picture. After all, a top-downer would never say that the account which does the best job in describing a cognitive capacity, in a way that satisfies ordinary concepts of everyday language and thought, is also the one which tells us how the capacity really is. To characterize the structure of a certain capacity in terms of a high-level description of the intentional content of the internal state, which the capacity is supposed to generate, would work as a guide to theorizing about its implementing mechanisms at lower levels – a guide, to be clear, not to full understanding of what the capacity is (this will depend on scientific advances), but to what counts as explanatorily illuminating. Whether a certain (top-down) formulation turns out to be the brain’s solution is beside the point. In fact, the form of a scientific explanation follows from the discovered nature of the relevant capacity, rather than from merely examining the appearance of the capacity a priori.14 But whatever explanatory route one might opt for, what is particularly striking in this way of framing the debate is the narrowness of the question at stake – the question as to whether for any two people to be in a we-mode state, they must be aware that they are. Why should one assume that awareness of mutuality is a precondition of the we-mode? What sort of concern is this precondition supposed to address, which can only be met by positing the specific notion of we-mode discussed by top-downers and bottom-uppers? The point of analyzing we-claims through the lens of the dialectic between top-down and bottom-up approaches is to show that, despite relevant differences, both sets of approaches are premised on the view that social interaction should be theorized about and operationalized in terms of joint action. To bring this view to light, consider the assumption that the meeting of minds of the we-mode is a state attained by people who are aware of it as a we-mode state.
This assumption implies that the agents are acting together intentionally. As is often argued in philosophy, agents acting intentionally know what they are doing; they are aware of their intentions (Anscombe 1959). Similarly, when any two people engage in instances of collective intentional behaviour, or joint action, they are aware, not necessarily of the fact that their perspective on a common target is the same, or even similar, but of the fact that they are attending to it together, that whatever the object of thought is, it is known by each of them individually as theirs (“ours”). This way of presenting the kernel of the we-mode reveals that references to the first-person plural perspective, especially in the scientific literature, are very often motivated by questions that turn out to be about the nature and working of joint action. When individual agents pool their wants and plans together in pursuing a jointly orchestrated activity, they often experience their attitudes and bodily experiences in ways that make talk of the first-person plural perspective salient. Furthermore, the experience of we-ness held by the participants in a joint action is possibly foundational to other and more complex forms of social cognition and collective intentionality (Tomasello 2014). However, the awareness of mutuality that accompanies the unfolding of joint action is not the same as the mutuality of awareness characteristic of the we-mode. There are important reasons for arguing that a scientific account of the conditions that must obtain for any two people to be in the position to reciprocate their thoughts and experiences as-a-we will not amount to an account of the conditions that must obtain for them to know that they are (Gallotti et al., manuscript). These are two different scientific projects. The parties to an exchange in the we-mode can certainly be consciously aware that they are exchanging information relating to each other mutually, but this need not be so.
Indeed, taking joint action as the precondition of the we-mode turns out to limit the space of explanatory options and to deliver a low-level resolution of the range of phenomena in which the we-mode occurs. For instance, there are cases of social interaction where the parties to an interaction see things in the world as a we even though they are not involved in joint action. One such case entails the switch to
a we-perspective essential for solving social dilemmas in competitive scenarios, which involve antagonistic behaviour rather than the intention of the participants to act jointly (Bacharach 2006). Another interesting example is social interactions involving “unnecessary” coordination, such as the coordination of heart rate in people taking part in a ritual, which, however, brings about identification and participation in a we-perspective (Konvalinka et al. 2010). If joint action is not the key to a shared cognitive activity in the we-mode, what is then? One promising route is to shift the focus of explanation, from the type of interaction, i.e. joint action, to the nature of the information processes whereby the agents adjust minds and bodies to make use of self–other information. The concept of alignment, in particular, has become a valuable resource for describing the dynamic and graded nature of the mechanisms and processes underpinning the sharing of mental attitudes and dispositions in social interactions (Dale et al. 2013). From the point of view of someone interested in the steps through which the minds and bodies of individual people become aligned, independently of the task and the type of interaction at stake, it will become clear that interacting people take one another into account in many different ways, and hence there are different types of social interaction. This formulation seems to go against the commonsense view that all social interactions are joint actions. But, while the voluntary, intentional and mutual exchange of information occurring between jointly interacting agents is a paradigmatic case of human sociality, it is only one among several possible ways in which people’s mental attitudes and resources can adjust to each other. Likewise, not all forms of information exchange sustaining the formation and persistence of shared mental states will turn out to generate a we-mode state. 
If social interactions occur when people align their thoughts and experiences in various manners, then thinking and experiencing things as a we can be identified with the occurrence of a special kind of alignment, which involves the reciprocal exchange of information between the agents. Mutual, or reciprocal, alignment unfolds in space and time when two systems have access to information about the thoughts and behaviours of each other in a bi-directional and dynamic manner. A detailed discussion of the prerequisites for the minds and bodies of individuals to become reciprocally aligned would take us too far afield. But it is worth noting that the approach to sociality based on a dimensional and graded picture of mental alignment promises to yield novel operational definitions of what counts as social in a social interaction. The richness and variety of forms of mental alignment thus become the clue to the richness and variety of forms of human sociality. Naturalization projects are explanatory projects motivated by the conviction that an account of the nature of a certain phenomenon exhausts all there is to know about the reality of the phenomenon itself. In the case of collective intentionality in the we-mode, this purpose can be achieved by bridging the gap between folk psychological explanations of collective intentional predicates and theories of the states and processes, within and between the heads of interacting individuals, that suffice to align them mutually. Top-down and bottom-up theorists undertake this task by exploring the nature and structure of joint action, but success is likely to depend on taking the phenomenon of reciprocity, rather than joint action, as the starting point for the investigation of the first-person plural perspective.
Notes
1 School of Advanced Study, University of London, Senate House, Malet Street, London WC1E 7HU, United Kingdom. Email: [email protected]. The ideas in this paper developed in the course of many useful conversations with, and benefited from the generous comments of, Chris Frith, Bryce Huebner, Julian Kiverstein, and Dan Zahavi.
2 I thank Raphael Lyne for sharing his thoughts on the topic with me and for insightful conversations on the importance of first-person plural narration in poetry and literature.
3 I use “social” to refer to the facts pertaining to the study of the social reality, and hence the subject matter of social ontology, including social interactions. Instead, I will use “collective” to refer to the properties of the mechanisms and processes of the mind responsible for the formation of collective, or shared, mental states.
4 The meaning of “social facts” depends on what counts as social. A tentative list includes, though it is not limited to, facts such as social kinds and tokens; socially meaningful occurrences and regularities; purposeful plural actions; phenomena of collective intelligence; and, to some extent, the mechanisms and processes of social cognition.
5 Wilson (2004) provides a good point of entry into the individualism debate.
6 For a review, see the collection of essays edited by Gallotti and Michael (2014).
7 I mean intentionality in the philosophical sense of aboutness. Collective intentionality does not consist only in the instrumental fact that individuals act upon certain intentions to engage in voluntary and deliberate actions with others but, more generally, the fact that they can entertain mental representations with first-person plural content, i.e. mental states directed at states of affairs in the world jointly.
8 There is a tendency in the literature to use “collective” and “shared” interchangeably; hence theories of how the mental states of people become shared in social interactions are subsumed under the “collective intentionality” label (for a fresh overview, see Chant et al. 2014).
9 Examples of group-level traits include: “The music rather than the rock band, the election of a leader who reflects the public interest rather than the democratic voting system, the sailing ship’s voyage rather than the crew positions, the economic surplus rather than the market economy” (Smaldino 2014: 244).
10 I borrow the term “alignment” from early studies of the processes in which the parties to a conversation adjust their words and thoughts to each other and integrate information relating to each other (for an application to philosophical theories of collective intentionality, see Tollefsen and Dale 2012).
11 I borrow the term from the insightful discussion contained in Wilson (2004).
12 In a lively discussion of we-intentionality, Dan Zahavi captures this point by way of the following story: “My son and I return from a trip and, when seeing a common friend, I shout ‘We saw it! We found the hedgehog!’ In such a case, the experience isn’t simply given to me as my experience, but as ours; the action isn’t simply given to me as my action, but as our action” (2014: 243).
13 Philosophers of collective intentionality, by and large, are realists about the nature of intentional predicates in general. My reconstruction should not be read as implying that reductivist accounts entail a rejection of realism about the nature of intentional predicates. Nor am I saying that mental states are shared only in the eyes of an observer, be it folk or theorists, or that for any two people to have shared minds is merely for an account of collective intentionality to be compelling under a reconstruction of the relevant thoughts and experiences as shared.
14 How cognition and the brain work can turn out to be radically different from the way we, qua theorists, think they do.
Still, it does make a difference for the success of top-down naturalization that the processes and operations indicated as genuinely insightful are those which are ultimately found in the brain (Egan and Matthews 2006: 383).
References
Anscombe, G. E. M. (1959). Intention. Oxford: Blackwell.
Bacharach, M. (2006). Beyond Individual Choice. Princeton, NJ: Princeton University Press.
Borch, C. (2012). The Politics of Crowds: An Alternative History of Sociology. Cambridge: Cambridge University Press.
Bratman, M. (1992). Shared cooperative activity. Philosophical Review, 101, 327–341.
———. (2014). Shared Agency: A Planning Theory of Acting Together. New York: Oxford University Press.
Chant, S., Hindriks, F. & Preyer, G. (Eds.) (2014). From Individual to Collective Intentionality. Oxford: Oxford University Press.
Dale, R., Fusaroli, R., Duran, N. D. & Richardson, D. C. (2013). The self-organization of human interaction. In B. Ross (Ed.), Psychology of Learning and Motivation (pp. 43–95). Amsterdam, NL: Academic Press.
Egan, F. & Matthews, R. (2006). Doing cognitive neuroscience: A third way. Synthese, 153, 377–391.
Epstein, B. (2015). The Ant Trap. New York: Oxford University Press.
Faulkner, W. (1930). A Rose for Emily. Reprinted in W. T. Stafford (Ed.), Twentieth Century American Writing (pp. 654–665). Indianapolis: Odyssey Press, 1965.
Gallotti, M., Fairhurst, M. & Frith, C. D. (Manuscript). Alignment in social interactions.
Gallotti, M. & Frith, C. D. (2013). Social cognition in the we-mode. Trends in Cognitive Sciences, 17(4), 160–165.
Gallotti, M. & Michael, J. (2014). Perspectives on Social Ontology and Social Cognition. Dordrecht: Springer.
Godfrey-Smith, P. (2014). On folk-psychology and mental representation. In H. Clapin, P. Staines and P. Slezak (Eds.), Representation in Mind: New Approaches to Mental Representation (pp. 147–162). Amsterdam: Elsevier Publishers.
Gold, N. & Harbour, D. (2012). Cognitive primitives of collective intentions: Linguistic evidence of our mental ontology. Mind & Language, 27(2), 109–134.
Jarymowicz, M. (2015). Mental barriers and links connecting people of different cultures: Experiential vs. conceptual bases of different types of the we-concepts. Frontiers in Psychology, 6(1950). doi: 10.3389/fpsyg.2015.01950
Knoblich, G., Sebanz, N. & Butterfill, S. (2011). Psychological research on joint action: Theory and data. In B. H. Ross (Ed.), The Psychology of Learning and Motivation (pp. 59–101). Amsterdam, NL: Academic Press.
Konvalinka, I., Vuust, P., Roepstorff, A. & Frith, C. D. (2010). Follow you, follow me: Continuous mutual prediction and adaptation in joint tapping. The Quarterly Journal of Experimental Psychology, 63(11), 2220–2230.
Ludwig, K. (2007). Collective intentional behavior from the standpoint of semantics. Noûs, 41(3), 355–393.
Olen, P. & Turner, S. (2015). Durkheim, Sellars, and the origins of collective intentionality. British Journal for the History of Philosophy, 23(5), 954–975.
Schweikard, D. P. & Schmid, H. B. (2013). Collective intentionality. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2013 Edition).
Searle, J. (1990). Collective intentions and actions. In P. Cohen, J. Morgan and M. E. Pollack (Eds.), Intentions in Communication (pp. 401–416). Cambridge, MA: Bradford Books.
———. (1995). The Construction of Social Reality. New York: Free Press.
Sellars, W. (1963). Imperatives, intentions, and the logic of 'ought'. In H. N. Castañeda and G. Nakhnikian (Eds.), Morality and the Language of Conduct (pp. 159–214). Detroit: Wayne State University Press.
Smaldino, P. (2014). The cultural evolution of emergent group-level traits. Behavioral and Brain Sciences, 37, 243–295.
Sperber, D. & Wilson, D. (1995). Relevance: Communication and Cognition, 2nd edition. Oxford: Blackwell.
Tollefsen, D. & Dale, R. (2012). Naturalizing joint action: A process-based approach. Philosophical Psychology, 25, 385–407.
Tollefsen, D., Kreuz, R. & Dale, R. (2014). Flavors of togetherness: Experimental philosophy and theories of joint action. In J. Knobe and S. Nichols (Eds.), Oxford Studies in Experimental Philosophy (pp. 232–252). Oxford: Oxford University Press.
Tomasello, M. (2008). The Origins of Human Communication. Cambridge, MA: MIT Press.
———. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press.
Tuomela, R. (1984). A Theory of Social Action. Dordrecht, Holland: D. Reidel Publishing Co.
———. (2013). Social Ontology: Collective Intentionality and Group Agents. New York: Oxford University Press.
Wilson, R. (2004). Boundaries of the Mind: The Individual in the Fragile Sciences. Cambridge: Cambridge University Press.
Zahavi, D. (2014). Self and Other: Exploring Subjectivity, Empathy, and Shame. Oxford, UK: Oxford University Press.
23
TEAM REASONING
Theory and evidence

Jurgis Karpus and Natalie Gold
Introduction

Orthodox game theory identifies rational solutions to interpersonal and strategically interdependent decision problems, games, using the notion of individualistic best-response reasoning. When each player's chosen strategy in a game is a best response to the strategies chosen by other players, they are said to be in a Nash equilibrium – a point at which no player can benefit by unilaterally changing his or her strategy. Consider the Hi-Lo and the Prisoner's Dilemma two-player games illustrated in Figures 23.1 and 23.2. The strategies available to one of the two players are identified by rows and those available to the other by columns. The numbers in each cell represent payoffs to the row and the column players respectively in each of the four possible outcomes in these games. There are two Nash equilibria in the Hi-Lo game, (Hi, Hi) and (Lo, Lo), since, for either player, the strategy Hi is the best response to the other player's choice of Hi and the strategy Lo is the best response to the other's choice of Lo.1 As such, individualistic best-response reasoning identifies two rational solutions of this game but does not resolve it definitively for the interacting players. For many people, however, (Lo, Lo) does not appear to be a rational solution and it seems that the outcome (Hi, Hi) is a clear definitive resolution of this game. In the case of the Prisoner's Dilemma game, there is only one Nash equilibrium, (D, D), since, for either player, the strategy D is the best response to whatever the other player is going to do. As such, individualistic best-response reasoning resolves this game definitively.
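The best-response test behind these observations can be made concrete with a minimal Python sketch (ours, not the authors'); the payoff dictionaries mirror the games of Figures 23.1 and 23.2:

```python
from itertools import product

# game[(row_strategy, column_strategy)] = (row_payoff, column_payoff)
hi_lo = {("Hi", "Hi"): (2, 2), ("Hi", "Lo"): (0, 0),
         ("Lo", "Hi"): (0, 0), ("Lo", "Lo"): (1, 1)}
prisoners_dilemma = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
                     ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def pure_nash_equilibria(game):
    """Outcomes in which each player's strategy is a best response to the other's."""
    rows = {r for r, _ in game}
    cols = {c for _, c in game}
    equilibria = []
    for r, c in product(rows, cols):
        row_best = all(game[(r, c)][0] >= game[(r2, c)][0] for r2 in rows)
        col_best = all(game[(r, c)][1] >= game[(r, c2)][1] for c2 in cols)
        if row_best and col_best:
            equilibria.append((r, c))
    return sorted(equilibria)

print(pure_nash_equilibria(hi_lo))              # [('Hi', 'Hi'), ('Lo', 'Lo')]
print(pure_nash_equilibria(prisoners_dilemma))  # [('D', 'D')]
```

As in the text, Hi-Lo has two pure-strategy Nash equilibria while the Prisoner's Dilemma has exactly one.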
However, due to the inefficiency of the outcome (D, D) compared to the outcome (C, C) – both players are better off in the latter than they are in the former – for some the outcome (C, C) is not obviously irrational and there is a division of opinion (at least outside the circle of professional game theorists) about what a rational player ought to do in this game. Orthodox game theory’s inability to resolve these games satisfactorily, in particular its inability to definitively resolve the Hi-Lo game, motivated the development of the theory of team reasoning.2 According to the theory of team reasoning, people may not always be employing individualistic best-response reasoning in games. The theory allows that people may, instead, identify rational solutions from the perspective of a team, a group of individuals acting together in the attainment of the best outcome(s) for that group. This, in turn, enables team reasoning
          Hi        Lo
  Hi     2, 2      0, 0
  Lo     0, 0      1, 1

Figure 23.1 The Hi-Lo game.

          C         D
  C      2, 2      0, 3
  D      3, 0      1, 1

Figure 23.2 The Prisoner's Dilemma game.
to show how the Hi-Lo game can be rationally resolved definitively and how the outcome (C, C) in the Prisoner's Dilemma game can be rationalized. The theory of team reasoning gives a new account of why coordination and cooperation can be rational by introducing the possibility of multiple levels of agency into classical game theory. But it is also supposed to tell us something about how people reason. It is a model of decision-making, which abstracts and simplifies, but "it captures salient features of real human reasoning" (Sugden 2000, p. 178). We might think of team reasoning as operating at Marr's (1982) computational level, specifying the goal of the system and the logic behind the output, but leaving open how the computation is implemented and how it is realized in the brain (Gold, in press). A number of different versions of the theory of team reasoning have been proposed and developed. These differ with respect to what triggers decision-makers' adoption of the team mode of reasoning and what team-reasoning individuals try to achieve. We review these developments in the first part of this chapter (Sections 1 to 3). There is also a nascent but growing body of experiments that attempt to test the theory. We review some of these studies in the second part (Sections 4 to 6). Finally, in Section 7 we conclude and present a suggestion for further experimental work in this field.
I. Theory

1. What is team reasoning?

The individualistic best-response reasoning of orthodox game theory is based on the question of which of the available strategies in a game a particular player should take, given his or her individual preferences and his or her beliefs about what the other players are going to do. Each player's personal motivations in games are represented by the payoff numbers they associate with the available outcomes, and the optimal strategy is that which gives the player in question the highest expected payoff. In this light, the best strategy for an individualistically reasoning player in the Hi-Lo game (see Figure 23.1 above) is conditional on that player's belief about what the other player is going to do: play Hi or play Lo. In the Prisoner's Dilemma game (see Figure 23.2) the best strategy is unconditionally to play D. Team reasoning, on the other hand, is based on the question of what is optimal for the group of players acting together as a team. A team reasoner first identifies an outcome of a game that best promotes the interests of the team and then chooses the strategy that is his or her part of attaining that outcome. If the outcome (Hi, Hi) is identified as uniquely optimal for the team, then team reasoning resolves the Hi-Lo game definitively. Similarly, if any of the outcomes associated with the play of C in the Prisoner's Dilemma game, e.g., the outcome (C, C), are ranked at the top from the point of view of the team, the strategy C can be rationalized. It is important to note that reasoning as a member of a team is not a mere transformation of players' personal payoff numbers associated with the available outcomes in games. To see this,
consider again the Hi-Lo game. Suppose that, from the point of view of the team, the outcome (Hi, Hi) is deemed to be the best, the outcome (Lo, Lo) is deemed to be the second-best and the outcomes (Hi, Lo) and (Lo, Hi) the worst. Replacing the two players' original payoff numbers with numbers that correspond to the team's ranking of the four outcomes in the game does not change the payoff structure of the original game in any way, since the players' individual payoffs are already in line with the valuation of outcomes from the team's perspective. The key difference here is that individualistic reasoning is based on evaluating and choosing a particular strategy based on the associated expected personal payoff, whereas team reasoning is based on evaluating the outcomes of the game from the perspective of the team, and then choosing a strategy that is associated with the optimal outcome for the team. There are two important questions that the theory of team reasoning needs to address: "when do people reason as members of a team?" and "what do people try to achieve when they reason as members of a team?" In other words, is it possible to identify circumstances or types of games in which the interacting players are likely to adopt team reasoning and the mechanism by which they adopt it, and, once they team reason, is it possible to specify a functional representation of what they take the goals of the team to be? We turn to reviewing the various proposals for answering these questions in the following two sections.

2. What triggers team reasoning?

Different versions of the theory of team reasoning have different answers to the question of when people team reason.
One answer, mainly associated with Bacharach (2006), is that the mode of reasoning an individual uses is a matter of that decision-maker’s psychological makeup, which in turn may depend on certain features of the context in which decisions are made, but otherwise lies outside of the individual’s conscious control. A second answer, proffered by Sugden (2003), is that an individual may choose to endorse a particular mode of reasoning based on considerations about the potential benefits of one or another possible mode of reasoning and his or her beliefs about the modes of reasoning endorsed by other players, but this choice is outside of rational evaluation. A third possibility, proposed by Hurley (2005a, 2005b), is that individual decision-makers come to choose the team mode of reasoning as a result of rational deliberation itself. The first position, the idea that the adoption of team reasoning is outside of an individual’s control, can be found in the version of the theory of team reasoning presented by Bacharach (2006) and Smerilli (2012). Here the mode of reasoning that an individual adopts is a matter of a psychological frame through which he or she sees a decision problem. The idea is similar to that of Tversky and Kahneman (1981, p. 453), who define a frame as “the decision-maker’s conception of the acts, outcomes, and contingencies associated with a particular choice”. In Tversky and Kahneman’s Prospect Theory, framing a decision in terms of losses or gains affects the part of the value function that decision-makers apply, thus affecting their choices. In Bacharach’s theory of team reasoning, framing a decision in terms of “we” or “I” affects the goals that decision-makers aim to achieve, which one might think of as using the individual or the group value function, and the mode of reasoning that they apply. 
If the individual frames the decision problem as a problem for him or her individually, i.e., in terms of individualistic best-response reasoning, then he or she identifies solutions offered by that mode of reasoning alone. If, on the other hand, the individual frames the decision problem as a problem for a group of players acting together as a team, i.e., in terms of team reasoning, then he or she identifies solutions offered by team reasoning and not by individualistic reasoning. This idea can be likened to that of seeing either a goblet or two faces in the goblet
illusion picture illustrated in Figure 23.3 (also known as Rubin's vase). Looking at this picture, it is possible to see either a goblet or two faces opposite from each other, but only one of these images at a time and not both of them simultaneously. In the same way, a decision-maker is said to frame a decision problem either from the point of view of individualistic best-responding or from the point of view of reasoning as a member of a team, but not in terms of both these perspectives at the same time.3 The psychological frame through which an individual analyzes a particular decision problem may depend on factors that lie outside of the description of a game, but it can be influenced by the payoff structure of the game itself: Bacharach mentions as possible triggers the strong interdependence and double-crossing features. Roughly speaking, strong interdependence occurs when there is a Nash equilibrium that is worse than some other outcome in the game from every player's individual point of view. Both the Hi-Lo and the Prisoner's Dilemma games have this feature: in Hi-Lo, the Nash equilibrium (Lo, Lo) is worse for both players than the outcome (Hi, Hi); in the Prisoner's Dilemma, the outcome (D, D) is worse for both than the outcome (C, C). This means that the outcomes (Lo, Lo) and (D, D) are not Pareto efficient. (An outcome of a game is said to be Pareto efficient if there is no other outcome available that would make some player better off without at the same time making any other player worse off.) According to Bacharach, strong interdependence increases the likelihood that an individual would frame a decision problem as a problem for a team. The double-crossing feature is the possibility of an individual personally benefiting from a unilateral deviation from the team reasoning solution. It is the incentive to act on individual reasoning when one believes that the other player is acting on team reasoning.
This feature is present in the Prisoner’s Dilemma but not the Hi-Lo game. In the Prisoner’s Dilemma, each individual would personally benefit from a unilateral deviation from the cooperative play of (C, C). There is an incentive to double-cross the other player, playing D if the other player is expected to play C. According to Bacharach, the possibility of double-crossing decreases the likelihood of a particular decision-maker framing a decision problem as a problem for a team. Smerilli (2012) formalizes this intuition, providing a model where the double-crossing feature causes players to vacillate between frames.
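Both structural features admit a simple mechanical reading, sketched below under our own simplifying assumptions (not Bacharach's formalism): strong interdependence holds when some Nash equilibrium is worse for every player than another outcome, and double-crossing holds when some player would gain by unilaterally deviating from a given team outcome, which we pass in by hand:

```python
hi_lo = {("Hi", "Hi"): (2, 2), ("Hi", "Lo"): (0, 0),
         ("Lo", "Hi"): (0, 0), ("Lo", "Lo"): (1, 1)}
prisoners_dilemma = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
                     ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def strongly_interdependent(game, equilibria):
    """True if some Nash equilibrium is worse than another outcome for every player."""
    return any(
        all(game[other][i] > game[eq][i] for i in (0, 1))
        for eq in equilibria
        for other in game if other != eq
    )

def has_double_crossing(game, team_outcome):
    """True if some player gains by unilaterally deviating from the team outcome."""
    r, c = team_outcome
    rows = {r2 for r2, _ in game}
    cols = {c2 for _, c2 in game}
    return (any(game[(r2, c)][0] > game[(r, c)][0] for r2 in rows) or
            any(game[(r, c2)][1] > game[(r, c)][1] for c2 in cols))

# Both games are strongly interdependent; only the Prisoner's Dilemma
# offers an incentive to double-cross the cooperative outcome.
print(strongly_interdependent(hi_lo, [("Hi", "Hi"), ("Lo", "Lo")]))  # True
print(strongly_interdependent(prisoners_dilemma, [("D", "D")]))      # True
print(has_double_crossing(hi_lo, ("Hi", "Hi")))                      # False
print(has_double_crossing(prisoners_dilemma, ("C", "C")))            # True
```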
Figure 23.3 The goblet illusion.
Another possibility, suggested by Bardsley (2000, Ch. 5, Section 6), is that payoff differences within cells introduce an inter-individual aspect to game situations, while Pareto-superior outcomes introduce a collective one; these respectively inhibit or promote team reasoning. Zizzo and Tan (2007) introduce the notion of "game harmony", a generic game property describing how conflictual or non-conflictual the players' interests are, and suggest some ways of measuring it, the simplest one being just the correlation between the players' payoffs across outcomes. They show that game harmony measures can predict cooperation in some 2 × 2 games (i.e., two-player games with two strategies available to each player). Note that this is a different idea from Bacharach's strong interdependence and double-crossing features (as noted by Bacharach 2006, p. 83). The measures agree in pure coordination games, where players' interests are perfectly aligned, and in zero-sum games, where players' interests are perfectly opposed. However, in mixed motive games, the two ideas do not always point in the same direction. Bacharach is clear that common interest is strong when the possible gains from coordination are high or the losses from coordination failure are great, which leaves open how consensual players' interests are in general, whereas game harmony is simply a measure of how consensual players' interests are and does not take into account the size of the potential gains from cooperation. In addition to the structural features of games themselves, priming group or individualistic thinking in decision-makers could also be expected to play an important role in determining which frame of mind the individuals would be in and which mode of reasoning they would use in games. Bacharach (2006) surveys the literature from social psychology on group identity, the effect of social categorization and the minimal group paradigm, and took himself to be contributing to that literature.
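Zizzo and Tan's simplest measure — the correlation between the two players' payoffs across outcomes — can be sketched directly (a minimal illustration of ours; the function name is not theirs):

```python
hi_lo = {("Hi", "Hi"): (2, 2), ("Hi", "Lo"): (0, 0),
         ("Lo", "Hi"): (0, 0), ("Lo", "Lo"): (1, 1)}
prisoners_dilemma = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
                     ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
matching_pennies = {("H", "H"): (1, -1), ("H", "T"): (-1, 1),
                    ("T", "H"): (-1, 1), ("T", "T"): (1, -1)}

def game_harmony(game):
    """Pearson correlation between the two players' payoffs across outcomes."""
    xs = [p[0] for p in game.values()]
    ys = [p[1] for p in game.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(game_harmony(hi_lo), 6))              # 1.0  (pure coordination)
print(round(game_harmony(matching_pennies), 6))   # -1.0 (zero-sum)
print(round(game_harmony(prisoners_dilemma), 6))  # -0.8 (mixed motives)
```

The extreme values illustrate the agreement noted in the text: perfectly aligned interests give harmony 1, perfectly opposed interests give -1, with mixed-motive games in between.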
Group identity may be triggered by players' recognition of belonging to the same social group or a particular category, having common interests, being subject to a common fate or simply having face-to-face contact. For Bacharach, group identity is a "framing phenomenon" (2006, p. 81). To group identify is to conceive of oneself as a group member: to represent oneself as a group member and have group concepts in one's frame. Hence, for him, all these factors that trigger group identity may cause a shift from the "I frame" to a "we frame" (see, in particular, Bacharach 2006, pp. 76–81). Sugden (2003, 2011, 2015) takes the second position described above: an individual decision-maker may choose to endorse team reasoning, but there is no basis for rational evaluation of this choice. For Sugden, there may be numerous modes of valid reasoning and an individual decision-maker may choose to endorse any one of them, but none of these modes of reasoning are privileged over others on the basis of instrumental rationality. Instrumental practical reasoning allows an agent to infer the best means to achieve its goals. Therefore instrumental rationality must presume both the unit of decision-making agency as well as its goals, and neither of these are amenable to evaluation by the theory of rationality itself. However, Sugden discusses a number of conditions that may need to be satisfied in order for an individual to endorse team reasoning. He sees team reasoning as cooperation for mutual advantage. Hence whether or not a person team reasons will depend on whether it is beneficial for that decision-maker to do so individually (in terms of his or her individual preferences and goals).4 Further, team play by a particular decision-maker may be conditional on the assurance that other players are reasoning as members of a team as well. Sugden can still accept a lot of what Bacharach says about the circumstances in which people team reason, as can any theory of team reasoning (Gold 2012).
Sugden’s agents still need to conceive of the decision problem as a problem for the team, rather than as a problem for them as individuals, before they can team reason (Sugden 2000, pp. 182–183). The difference is that, in Sugden’s theory, people make a choice to team reason and assurance plays a part in this, whereas for Bacharach, team reasoning is the result of a psychological process and may lead 404
Team reasoning
team reasoners to be worse off than they would have been if they had reasoned as individuals (for instance they may cooperate in a Prisoner’s Dilemma when the other player defects; for more on how this can happen, see Gold 2012). Bacharach and Sugden agree that all goals are the goals of agents and that it is not possible to evaluate those goals without first specifying the unit of agency. Thus, even though Sugden allows for the unit of agency to be chosen, it is not a matter of instrumentally rational choice. In contrast, Hurley (2005a, 2005b) suggests that there is no need to identify the unit of agency with the source of evaluation of outcomes and that we can identify personal goals prior to identifying the unit of agency. Hurley says that, “As an individual I can recognise that a collective unit of which I am merely a part can bring about outcomes that I prefer to any that I could bring about by acting as an individual unit” (Hurley 2005a, p. 203). Hence, Hurley suggests that principles of practical rationality can govern the choice of the unit of agency; one should choose the unit of agency that best realizes one’s personal goals. If that unit is the team, then one should team reason as a matter of practical rationality.5 The problem for theories that allow rational choice of the unit of agency is how to specify the goals that we should be striving for, independent of the unit of agency. Hurley suggests that we should privilege personal goals but, once we recognize that there are other possible units of agency (and evaluation), we might question why it is the case that the personal level takes priority. For a decision-maker in Regan’s (1980) theory of cooperative utilitarianism, for example, the goal is always utilitarian and the question is what unit of agency one should be adopting given this goal. 
Further, taking goals as given to us by our theory of value, or moral theory, turns team reasoning from a theory of rational choice into a theory of moral choice, which is not intended by many of its proponents. The problem is brought out in recent work by Gauthier (2013). Gauthier has long held that it can be instrumentally rational to cooperate in the Prisoner’s Dilemma game (Gauthier 1986). In a recent re-working of his theory, Gauthier (2013) contrasts two opposed conceptions of deliberative rationality: maximization (equivalent to individualistic best-response reasoning) and Pareto-optimization. He suggests that Pareto-optimization is a necessary condition for rationality in multi-player games. A Pareto-optimizing theory “provides only a single set of directives to all the interacting agents, with the directive to each premised on the acceptance by the others of the directives to them” (Gauthier 2013, p. 607). The outcome selected must be both efficient and fair in how it distributes the expected gains of cooperation. Although he does not explicitly use the term “team reasoning”, it is clear that Gauthier’s theory is similar to ideas of team reasoning for mutual gain. His justification for team reasoning is that it would pass a contractarian test whereby it is “eligible for inclusion in an actual society that constitutes a cooperative venture for mutual fulfilment” (Gauthier 2013, p. 618). As Gauthier (2013, p. 624) puts it, his goal is to show that “social morality is part of rational choice, or at least, integral to rational cooperation”. However, whilst he has sketched out what Pareto-optimization would involve, Gauthier has not provided any argument for its rationality; he concludes that he has not yet been successful in bridging the two and that more needs to be done regarding the connection to rationality (in other words, how instrumental rationality may require us to cooperate in social interactions). 
But it is hard to see how Gauthier could bring Pareto-optimization within instrumental rationality. If he goes the same route as Hurley and privileges the individual's perspective and goals, then he needs to explain why it is instrumentally rational to cooperate when the individual could do better by deviating in situations that have the double-crossing feature. Or, if the idea is that there is some addition to instrumental rationality for choosing the level of agency, then it is hard to see how to characterize such a process. A reasoning process already seems to presume an agent who is doing the reasoning. As
Bardsley (2001, p. 185) puts it, "the question 'should I ask myself "what am I to do?" or "what are we to do?"?' presupposes a first person singular point of view."

3. What do teams strive for?

We now turn to reviewing different proposals about a team's goals. The approaches presented differ in whether they require individual decision-makers to sometimes sacrifice their personal interests for the benefit of other members of a team and whether they rely on making interpersonal comparisons of the interacting players' payoffs. Bacharach (2006) mentions Pareto efficiency as a minimal condition, i.e., that if a strategy profile is superior in terms of Pareto efficiency, then it is preferred by the team to the strategy profiles that it is superior to. The exclusion of all Pareto inefficient strategy profiles, however, says nothing about how a team should rank the remaining strategy profiles where there is a conflict of personal interests, such as presented by the pair of outcomes (C, D) and (D, C) in the Prisoner's Dilemma game. In some of the early developments of the theory, e.g., Bacharach (1999, 2006), as well as some of the more recent papers, e.g., Colman et al. (2008, 2014) and Smerilli (2012), the maximization of the average of the interacting players' payoffs is used as an example of a team payoff function. This function – that is, a mathematical representation of a team's goals in an interpersonal interaction – is consistent with the strong interdependence feature and the related Pareto efficiency criterion discussed in the previous section, and it is easy to see that it selects the outcomes (Hi, Hi) and (C, C) as uniquely best for a team in the Hi-Lo and in the above Prisoner's Dilemma games respectively. (Specifically, maximizing the average payoff will select (C, C) in any Prisoner's Dilemma game where the average of the payoffs from (C, C) is higher than the average from any other outcome.)
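Since, for two players, maximizing the average payoff is equivalent to maximizing the sum, the averaging criterion can be sketched in a few lines (our illustration, with names of our own):

```python
hi_lo = {("Hi", "Hi"): (2, 2), ("Hi", "Lo"): (0, 0),
         ("Lo", "Hi"): (0, 0), ("Lo", "Lo"): (1, 1)}
prisoners_dilemma = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
                     ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def team_optimal_by_average(game):
    """Outcome(s) maximizing the average (equivalently, the sum) of payoffs."""
    best = max(sum(p) for p in game.values())
    return [o for o, p in game.items() if sum(p) == best]

print(team_optimal_by_average(hi_lo))              # [('Hi', 'Hi')]
print(team_optimal_by_average(prisoners_dilemma))  # [('C', 'C')]
```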
This function, however, sometimes fails with respect to the notion of mutual advantage. Consider a slight variation of the Prisoner's Dilemma game illustrated in Figure 23.4. Here the maximization of the average of the two players' personal payoffs would prescribe the attainment of the outcome (D, C). As such, it would advocate a complete sacrifice of the column player's personal interests for the benefit of the row player alone. The averaging function also relies on making interpersonal comparisons of the interacting players' payoffs, which suggests, for example, that the row player prefers the outcome (D, C) to (C, C) to a greater extent than the column player prefers the outcome (C, D) to (D, D) in Figure 23.4. Strictly speaking, such comparisons go beyond the orthodox assumptions of expected utility theory, which make numerical representations of the interacting players' preferences possible but do not automatically grant their interpersonal comparability. As such, a theory of team reasoning that uses this function as a representation of a team's goals is only applicable in contexts when such interpersonal comparisons of payoffs are possible.6

          C         D
  C      2, 2      0, 3
  D      5, 0      1, 1

Figure 23.4 A variation of the Prisoner's Dilemma game.

Although not many alternative functional representations of a team's goals have been proposed (perhaps partly because many works on the theory of team reasoning have so far considered examples where team-optimal outcomes seem evident, such as the outcomes (Hi, Hi) and (C, C) in the Hi-Lo and the Prisoner's Dilemma games respectively), a number of properties that representations of a team's goals should satisfy have been put forward. One of them is
the notion of mutual advantage discussed by Sugden (2011), which suggests that the outcome selected by a team should be mutually beneficial from every team member's perspective. Although he does not present an explicit function of a team's goals, in a recent paper Sugden (2015) proposes to measure mutual advantage relative to a particular threshold. The threshold is each player's personal maximin payoff level in a game – the payoff that he or she can guarantee him- or herself independently of the other players' chosen strategies. In the Hi-Lo game this is 0 for both players. In the Prisoner's Dilemma game of Figures 23.2 and 23.4, this is 1, since it is the lowest possible payoff that either player can attain by playing D. A strategy profile is said to be mutually beneficial if (a) it results in each player receiving a payoff that is greater than his or her maximin payoff level in a game, and (b) each player's participation in team play is necessary for the attainment of those payoffs.7 Karpus and Radzvilas (2016) propose a formal function of a team's goals that is based on the notion of mutual advantage similar to the one above whilst also incorporating the Pareto efficiency criterion (in a weak sense of Pareto efficiency, which means that an outcome of a game is efficient if there is no alternative that is strictly preferred to it by every player in the game). It suggests that an outcome that is optimal for a team is one that is associated with the maximal amount of mutual benefit. The extent of mutual benefit is measured by the number of payoff units by which an outcome advances every player's personal interests relative to some threshold points, such as the players' maximin payoff levels in games as suggested by Sugden (2015).
For example, if both players’ maximin payoffs (in a two-player game) are 0, an outcome associated with a payoff of 3 to Player 1 and payoff of 2 to Player 2 offers 2 units of mutual benefit (the additional unit of individual benefit to Player 1 is not mutual).8 As such, the function identifies the outcome (Hi, Hi) as uniquely optimal for a team in the Hi-Lo game and prescribes the attainment of the outcome (C, C) in all the versions of the Prisoner’s Dilemma game discussed above.9
II. Evidence

4. The difficulties of empirical testing

There is a major difficulty that any empirical test of team reasoning will unavoidably face: the fact that a number of separate hypotheses are being tested at once. The main hypothesis to be tested is whether people reason as members of a team in a particular situation. This, however, is intertwined with two additional auxiliary hypotheses. The first is whether the particular situation at hand is one in which people might reason as members of a team in general, and the second is whether the experimenter has correctly specified the goals that the members of the team try to achieve. These may involve assuming particular answers to the "when do people reason as members of a team?" and the "what do people do when they reason as members of a team?" questions that we identified above. Also, if decision-makers do not follow individualistic best-response reasoning in certain situations, we need to be able to distinguish team reasoning from other possible modes of reasoning that they may choose to endorse, e.g., regret minimization or ambiguity aversion, or from factors that influence decisions, like risk aversion. Despite these difficulties, a number of relatively recent empirical studies have been carried out in an attempt to test the theory of team reasoning. Since the aim is to test the theory of team reasoning tout court, the experiments use situations where it is naturally invoked as an explanation of actual play. They can be broadly divided into two groups: those that focus on team reasoning where it resolves a Nash equilibrium selection problem (coordination problems) and those that focus on team reasoning where it selects outcomes that are not Nash
equilibria (as in the Prisoner’s Dilemma). We will review both types of studies in turn. The focus is on pitting team reasoning against other explanations of coordination and cooperation in these games, so experimenters hope that the outcome that they identify as the team goal is uncontroversial, although we will see that sometimes there is room for dispute. 5. Tests based on Nash equilibrium selection The first category of experiments involves games with multiple Nash equilibria where nonequilibrium outcomes yield no payoffs to the interacting players. As such, they are Nash equilibrium coordination games in which players try to coordinate their actions on one of the available equilibria in order to attain positive payoffs.Team reasoning is said to single out one of the equilibria as uniquely optimal for a team and is tested against other possible modes of reasoning that may be at play. The dominant alternative explanation of behaviour in these experiments (to that of the theory of team reasoning) is assumed to be cognitive hierarchy theory, which posits the existence of individualistic best-response reasoners who differ in their beliefs about what other players are going to do in games. The level-0 decision-makers are said not to reason much at all when playing games and choose any of the available options at random, i.e., they play each available option with equal probability.10 The level-1 reasoners assume everybody else to be cognitive level-0 and best-respond to the level-0 decision-makers’ strategy.The level-2 reasoners assume everybody else to be cognitive level-1 and, similarly, best-respond to the expected strategies of a level-1 player, and so on for higher level cognitive types. Although in principle the cognitive hierarchy theory allows for any number of cognitive types (where each type assumes other players to be of one level lesser type than themselves), in practice it is usually assumed that most decision-makers are level-1 or level-2 reasoners. 
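The best-response chain that cognitive hierarchy theory describes can be sketched as follows (an illustrative sketch, not any experiment's actual implementation; the example game has three options paying 10 and one paying 9 on the diagonal, the structure used in the experiments discussed next):

```python
def level_k_choice(payoffs, k):
    """Option chosen by a level-k reasoner (k >= 1) in a two-player game.

    payoffs[i][j] is the row player's payoff from playing option i against a
    co-player playing option j. Level-0 plays every option with probability
    1/n; each higher level best-responds to the level below it.
    """
    n = len(payoffs)
    beliefs = [1.0 / n] * n  # level-0 co-player: uniform random play
    choice = None
    for _ in range(k):
        expected = [sum(p * b for p, b in zip(row, beliefs)) for row in payoffs]
        choice = max(range(n), key=expected.__getitem__)
        # the next level up believes the co-player plays this best response
        beliefs = [1.0 if i == choice else 0.0 for i in range(n)]
    return choice

# Coordination game with diagonal payoffs 10, 10, 10 and 9 (zero otherwise):
game = [[10, 0, 0, 0], [0, 10, 0, 0], [0, 0, 10, 0], [0, 0, 0, 9]]
print(level_k_choice(game, 1))  # expected payoffs 2.5, 2.5, 2.5, 2.25 -> option 0
```

As the text goes on to explain, no level-1 (or higher) reasoner ever chooses the option paying 9, since its expected payoff against a randomizing co-player is lowest.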
Bardsley et al. (2010) conducted a similar experiment at two separate locations – one in Amsterdam and one in Nottingham – using a set of Nash equilibrium coordination games of the type described above. An example is given in Figure 23.5.

       A        B        C        D
A    10, 10    0, 0     0, 0     0, 0
B     0, 0    10, 10    0, 0     0, 0
C     0, 0     0, 0    10, 10    0, 0
D     0, 0     0, 0     0, 0     9, 9

Figure 23.5 An example of a game from the Amsterdam experiment in the form of a game matrix.

In this game, the best response to a player who chooses any of the options with equal probability is to pick one of the options associated with the payoff of 10. This is because somebody who chooses at random is expected to play each of the four available strategies with equal probability of ¼. As such, the expected payoff from choosing A, B or C (when the co-player chooses at random) is 10 × ¼ = 2.5, while the expected payoff from choosing D is 9 × ¼ = 2.25. Therefore, a level-1 reasoner would never choose D. From this it follows that level-2 reasoners would never choose D either, since they are best-responding to the choice of level-1 types, and would, hence, also pick one of the options associated with the payoff of 10. Bardsley et al. (2010) hypothesized that team reasoners would choose option D due to the uniqueness of the outcome (D, D) and the indistinguishability of the outcomes (A, A), (B, B) and (C, C), which makes it easy for players to coordinate their actions on (D, D).

In the experiment, games were not presented to participants in the form of a matrix as shown in Figure 23.5, and there was no way to distinguish between the available strategies and outcomes other than in terms of the payoffs that the players would attain if they managed to coordinate their choices. For example, the outcome (A, A) could not be identified as unique due to its top-left position in the matrix or because its choice options are labelled with the first letter of the alphabet.

Notice that (D, D) is not Pareto efficient: it is inferior to the outcomes (A, A), (B, B) and (C, C). The reasoning behind the suggestion that team-reasoning decision-makers would opt for (D, D) is that, in the case of the three indistinguishable outcomes (A, A), (B, B) and (C, C), a player can only "pick" one of them and hope that the other player picks the same one, whereas a player "chooses" strategy D because of the uniqueness of the outcome (D, D). If both players pick one of the three indistinguishable outcomes, there is a 1/3 chance that they will pick the same one, whereas if they both choose strategy D, they can be sure of attaining (D, D). So the expected payoff from trying to coordinate on one of (A, A), (B, B) or (C, C) for a team-reasoning decision-maker is 3⅓, while the certain payoff from coordinating on (D, D) is 9. (See Gold and Sugden's introduction to Bacharach (2006) for more on this idea.) To put this differently, ex ante – before the uncertainty about the other player's action is resolved, and when players take the likelihood of coordination into account in computing their expected payoffs – the optimal outcome in terms of Pareto efficiency is (D, D).
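The ex ante comparison just described amounts to the following calculation (a sketch reproducing the text's arithmetic for the Figure 23.5 game):

```python
# Ex ante expected payoffs for a team reasoner in the Figure 23.5 game.
p_match = 1 / 3                     # chance both players pick the same one of A, B, C
ev_pick_among_abc = p_match * 10    # "pick one and hope" yields 3 1/3 in expectation
ev_choose_d = 9                     # coordinating on (D, D) is certain
print(ev_pick_among_abc < ev_choose_d)  # True: ex ante, (D, D) is team-optimal
```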
Ex post, once the game has been played, the three outcomes (A, A), (B, B) and (C, C) Pareto dominate (D, D).11

The experimental results, though showing a clear deviation from individualistic best-response reasoning (assuming that it would not discriminate among the available Nash equilibria), differ between the Amsterdam and the Nottingham experiments. The results from Amsterdam seem to suggest the presence of team reasoning rather than cognitive hierarchy reasoning, whereas the results from Nottingham tend to suggest the opposite. In addition to making choices in numerical coordination games, such as the one illustrated above, participants in both experiments also completed non-numerical "text" tasks. These differed between the two experiments, and the authors speculate that there may have been spillover effects from the text tasks on the modes of reasoning used in the numerical coordination tasks. In Amsterdam, the text tasks involved picking the odd one out, so participants may have tended to pick strategies associated with outcomes that appeared as odd ones out in the number tasks, while in Nottingham the text tasks gave more scope for picking favourites, so participants may have tended to focus on outcomes associated with their favourite payoffs.

Another pair of experiments that focus on the Nash equilibrium selection problem was carried out by Faillo et al. (2013, 2016). Both experiments presented participants with two-player games in which they had to pick one of three options presented as segments of a pie. See Figure 23.6 for an example. Upon successfully coordinating on one of the three pie segments, participants received positive payoffs, though these were not always the same for the two players.
In the game of Figure 23.6, if we call the top left slice R1, the top right slice R2, and the bottom slice R3, then the outcomes (R1, R1), (R2, R2) and (R3, R3) yielded the pairs of payoffs (9, 10), (10, 9) and (9, 9) to the two players respectively. A representation of this game using a game matrix is given in Figure 23.7.

Figure 23.6 An example of a 3 × 3 pie game as seen by two interacting players.

       R1       R2       R3
R1    9, 10    0, 0     0, 0
R2    0, 0    10, 9     0, 0
R3    0, 0     0, 0     9, 9

Figure 23.7 An example of a 3 × 3 pie game in the form of a game matrix.

Like the experiments of Bardsley et al. (2010), these experiments were designed to pit the theory of team reasoning against cognitive hierarchy theory. Faillo et al. (2013, 2016) also followed Bardsley et al. in hypothesizing that team reasoners would take into account the probability of successful coordination when working out the expected payoffs associated with the available options. Pairs of Nash equilibria counted as indistinguishable from the perspective of team reasoning when they were symmetric in terms of payoffs to the two players, such as the pair of outcomes (R1, R1) and (R2, R2) in the above example. In fact, the outcomes (R1, R1) and (R2, R2) were indistinguishable in all games in the two experiments, and the team-optimal choice was always associated with the attainment of the outcome (R3, R3). (The labels R1, R2 and R3 were hidden from participants and the positions of the pie slices were varied across three different treatment groups. The statistical analysis of the results showed no significant effects of pie slice positions on the choice of R3 versus R1 or R2.)

Table 23.1 summarizes the results of Faillo et al. (2013).12 Team reasoning is a good predictor in 7 out of the 11 games, where the modal choice was the option R3. The observed choices in the remaining four games (in addition to three of the games in which the theory of team reasoning is a good predictor) can be explained by cognitive hierarchy theory.13 As such, the results of the experiment are somewhat mixed. Faillo et al. (2013) conclude that team reasoning fails when it predicts the choice of a slice that is ex post Pareto dominated by the other two and this is not compensated by greater equality (games G3, G5 and G7), as well as when the team-optimal outcome yields less equal payoffs than the other options and this is not compensated by Pareto superiority (G10). They suggest that we need a more general theory of team reasoning and offer two ways in which the theory could be amended to explain their results. One is to incorporate the circumstances of group identification (one of the auxiliary hypotheses in any test of team reasoning, as explained above). Ex post Pareto dominance and equality may play an important role in group identification, in which case ex ante Pareto dominance will not be sufficient to trigger team reasoning by itself.
The other is to accept that people may not achieve the level of reasoning "sophistication" that would allow them to identify the optimality
of the ex ante Pareto-efficient outcome. "Naïve" team reasoners may want to pursue the group interest but, because they do not identify the uniqueness of the outcome (R3, R3), they use only ex post Pareto efficiency and equality of payoffs (when it is not dominated in terms of Pareto efficiency) when determining what the group should do.

Table 23.1 Summary of Faillo et al. (2013) results, showing the percentage of subjects making each choice in each game; in all games, team reasoning is assumed to predict the choice of R3; choices predicted by cognitive hierarchy theory are indicated by CH.

Game   (R1, R1)   (R2, R2)   (R3, R3)   R1 %     R2 %     R3 %
G1     (9, 10)    (10, 9)    (9, 9)     14       11       74
G2     (9, 10)    (10, 9)    (11, 11)   0        1        99
G3     (9, 10)    (10, 9)    (9, 8)     51 CH    45 CH    4
G4     (9, 10)    (10, 9)    (11, 10)   16       4        80 CH
G5     (10, 10)   (10, 10)   (9, 9)     48 CH    34 CH    18
G6     (10, 10)   (10, 10)   (11, 11)   1        3        96 CH
G7     (10, 10)   (10, 10)   (9, 8)     51 CH    31 CH    18
G8     (10, 10)   (10, 10)   (11, 10)   26       22       52 CH
G9     (9, 12)    (12, 9)    (10, 11)   16 CH    11 CH    73
G10    (10, 10)   (10, 10)   (11, 9)    43 CH    27 CH    30 CH
G11    (9, 11)    (11, 9)    (10, 10)   6 CH     7 CH     86

Although the aim of this experiment was not to test the claim that game harmony predicts team reasoning, it is clear that the results do not support that idea. Whilst there is a high level of team reasoning in game G6, which has perfect alignment of payoffs, G5 also has perfect alignment of payoffs but relatively little team reasoning. In contrast, G4 and G9 have lower levels of payoff alignment but high levels of team reasoning. So the predictions that payoff alignment leads to team reasoning and that payoff conflicts mitigate team reasoning are not supported by this set of games.

There is another way to explain the results of Faillo et al. (2013), which challenges their assumption about what the team takes as its goals. Suppose that team-reasoning decision-makers first establish the optimal outcomes from the perspective of the team by identifying those outcomes that maximize the extent of mutual advantage, as suggested by Karpus and Radzvilas (2016). These outcomes are always efficient in the weak sense of Pareto efficiency. (Recall that an outcome of a game is Pareto efficient in the weak sense if there is no alternative that is strictly preferred to it by every player in the game.)
The players then seek ways to coordinate their actions on one of the outcomes in the identified set, using unique features of some outcome (if an outcome with unique features exists) as a possible coordinating device. This approach could explain the choices observed in games G3, G5 and G7 in addition to the seven explained originally.14 For example, in the game G5, the outcomes (R1, R1) and (R2, R2) are strictly preferred to the outcome (R3, R3) by both players and, hence, by Karpus and Radzvilas' approach, they are deemed optimal from the perspective of the team. Since there is no further way to discriminate between these two outcomes, team-reasoning decision-makers, according to this interpretation, end up playing a mixture of the two. In the game G1, on the other hand, none of the three equilibria can be excluded from the set of team-optimal outcomes, since they all provide the same extent of mutual benefit to the two players. The outcome (R3, R3), however, is unique in this set, and team-reasoning decision-makers therefore opt for this outcome.
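The first step of this approach, discarding outcomes that fail the weak Pareto criterion, can be sketched as follows (illustrative only; the outcome labels are ours):

```python
def weakly_pareto_optimal(outcomes):
    """Outcomes that no alternative makes *every* player strictly better off than.

    outcomes: dict mapping an outcome label to its tuple of payoffs.
    This is the weak sense of Pareto efficiency recalled in the text.
    """
    def strictly_dominated(x):
        return any(all(b > a for a, b in zip(outcomes[x], outcomes[y]))
                   for y in outcomes if y != x)
    return {x for x in outcomes if not strictly_dominated(x)}

# Game G5 from Table 23.1: (R1, R1) = (10, 10), (R2, R2) = (10, 10),
# (R3, R3) = (9, 9). The last outcome is strictly dominated and drops out,
# leaving the team-optimal set {(R1, R1), (R2, R2)} described in the text.
g5 = {"(R1, R1)": (10, 10), "(R2, R2)": (10, 10), "(R3, R3)": (9, 9)}
print(sorted(weakly_pareto_optimal(g5)))  # ['(R1, R1)', '(R2, R2)']
```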
6. Tests involving non-Nash equilibrium play

We now turn to tests of team reasoning where a team selects outcomes that are not Nash equilibria. Although any empirical study of games in which team reasoning prescribes non-equilibrium play can be seen as a test of the theory (e.g., any test involving the Prisoner's Dilemma game), reviewing all historical studies of games of this type is beyond the scope of this chapter. Instead, we will focus on two relatively recent experiments that specifically refer to the theory of team reasoning in their hypotheses.

Colman et al. (2008) conducted an experiment (Experiment 2 in their paper) with five one-shot, 3 × 3, two-player games with symmetric payoffs (i.e., each game was played once, there were three strategies available to each player, and the payoffs to the two players were symmetric). All games had a unique Nash equilibrium and a unique non-equilibrium outcome that was optimal from the perspective of a team. The study assumed team play to be the maximization of the average of players' payoffs. The predicted outcomes, however, would be the same using any of the accounts of a team's goals discussed in Section 3. An example of one of their games is given in Figure 23.8, where the unique Nash equilibrium is the outcome (E, E) and the optimal outcome for a team is (C, C).

The results of the experiment show that in four games (out of five) slightly more than half of the participants chose strategies associated with the team-optimal outcome, and in one of the games this share was higher (86%). An important feature of all games in the experiment was that the team-optimal outcome was superior to the Nash equilibrium in terms of Pareto efficiency (which makes these cases somewhat similar to the Prisoner's Dilemma game). This may suggest that in one-shot interactions with unique Nash equilibria that are not Pareto efficient, about half of decision-makers reason as members of a team and play accordingly.
In a different experiment, Colman et al. (2014) used another set of one-shot, two-player games, eight of them 3 × 3 and four of them 4 × 4, where every game (with the exception of one) contained a unique Nash equilibrium and distinct but also unique non-equilibrium predictions based on the theory of team reasoning and cognitive hierarchy theory.15 Examples of the games are given in Figures 23.9 and 23.10. The study assumed team play to be associated with the maximization of the average of players' payoffs (the corresponding outcomes are indicated in bold in Figures 23.9 and 23.10).
       C       D       E
C    8, 8    5, 9    5, 5
D    9, 5    7, 7    5, 9
E    5, 5    9, 5    6, 6

Figure 23.8 An example of a 3 × 3 game from Colman et al. (2008).

       A       B       C
A    3, 3    1, 1    0, 2
B    1, 1    1, 4    3, 0
C    0, 0    2, 1    2, 5

Figure 23.9 An example of a 3 × 3 game from Colman et al. (2014).

       A       B       C       D
A    4, 4    2, 0    3, 2    1, 5
B    2, 2    2, 2    2, 0    3, 3
C    4, 3    2, 4    2, 5    3, 2
D    5, 2    0, 3    0, 0    1, 1

Figure 23.10 An example of a 4 × 4 game from Colman et al. (2014).
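The two benchmark outcomes that the text identifies for the Figure 23.8 game, the unique Nash equilibrium (E, E) and the team-optimal outcome (C, C) under the maximize-the-average assumption, can be checked mechanically (a sketch of the definitions, not the authors' procedure):

```python
from itertools import product

# Row player's payoffs in the Figure 23.8 game; the game is symmetric,
# so the column player's payoff at cell (i, j) is row[j][i].
row = [[8, 5, 5],
       [9, 7, 5],
       [5, 9, 6]]
labels = ["C", "D", "E"]
n = len(row)

def col(i, j):  # column player's payoff at cell (i, j)
    return row[j][i]

# Pure-strategy Nash equilibria: neither player gains by deviating alone.
nash = [(i, j) for i, j in product(range(n), repeat=2)
        if row[i][j] == max(row[k][j] for k in range(n))
        and col(i, j) == max(col(i, k) for k in range(n))]

# Team-optimal outcome under the "maximize the average payoff" assumption.
team = max(product(range(n), repeat=2), key=lambda c: row[c[0]][c[1]] + col(*c))

print([(labels[i], labels[j]) for i, j in nash])  # [('E', 'E')]
print((labels[team[0]], labels[team[1]]))         # ('C', 'C')
```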
Sugden's (2015) notion of mutual benefit (see Section 3) and the function of teams' goals discussed by Karpus and Radzvilas (2016) would yield different predictions in some of these games (for example, in Figure 23.9 the optimal outcome for the team based on the notion of maximal mutual advantage would be the outcome (A, A)). The results of the experiment are mixed, with at least two out of three or three out of four available strategies played quite frequently. This, combined with uncertainty about which outcome is the team-reasoning solution, makes it difficult to identify which mode of reasoning predominates. Furthermore, many of these results could be explained by a combination of level-0 and level-1 reasoning, which simply corresponds to random picking and best-responding to a random choice of the other player (see also Sugden 2008, who suggests that these results would be obtained with a population consisting of 50% team reasoners, 40% level-1 and 10% level-0 types). There is some evidence that increasing the difficulty of a task increases the amount of randomizing (Bardsley and Ule 2014).16 Since the games that Colman et al. (2014) used had numerous strategies and non-symmetric, variable payoffs, and appear to be quite complex and cognitively demanding in the identification of rational outcomes, random picking and the principle of insufficient reason (which amounts to best-responding to a random choice) may provide a good explanation of the actual choices.

7. Conclusion and further directions

In this chapter we reviewed some of the recent developments of the theory of team reasoning in games. Since its early developments, which were triggered by orthodox game theory's inability to definitively resolve certain types of games with multiple Nash equilibria (such as the Hi-Lo game) and to explain out-of-equilibrium play in others (such as the Prisoner's Dilemma game), the theory has advanced in a number of different directions.
From the theoretical point of view, different answers have been proposed to the two fundamental questions that the theory of team reasoning needs to address: "when do people reason as members of a team?" and "what is it that they do when they reason in this way?" In response to the first question, it has been suggested that the mode of reasoning that an individual decision-maker adopts may depend on that decision-maker's psychological make-up; that it may be endorsed by the decision-maker depending on a number of conditions that need to be satisfied, such as the assurance of others' participation in team play and the notion of mutual benefit; or that it may be a result of rational deliberation about which mode of reasoning is instrumentally most useful in any given situation. In response to the second question, one aspect that differentiates the suggested answers is whether they allow team play to advocate a potential sacrifice of some members of a team for the benefit of others.

The results of the nascent developments in empirical testing of the theory, a number of which we reviewed in the second part of this chapter, are, at best, mixed, and further research in this field is needed. The studies start from the assumption that the games they use are situations where people could be expected to team reason. Nevertheless, some of them can be seen as providing indicative answers to the first question, "when do people reason as members of a team?", because they arguably identify circumstances in which people are likely to team reason. One interpretation of Faillo et al. (2013) is that ex post Pareto dominance and equality play an important role in group identification. One interpretation of Colman et al. (2014) is that the team reasoning outcome needs to be simple and clear, as complex or cognitively demanding games lead people to randomize. However, these are speculative hypotheses which were developed post hoc to explain the experimental results, and they still need to be put to the test. None of the experiments aims to test the mechanism by which people adopt team reasoning:
whether it is caused by a psychological process or a decision, and the role of assurance and players' beliefs about what others will do.

With regard to the second question, "what is it that team-reasoning decision-makers strive for?", in some of the games that have been studied the predictions of the various functional representations of team interests coincide. This is often so in Nash equilibrium coordination games. But even then some differences are possible (recall, for example, the interpretation of the results of Faillo et al. (2013) based on ex ante versus ex post optimality of the considered outcomes and the idea of coordination among outcomes that are maximally mutually beneficial). In the more complex scenarios studied by Colman et al. (2008, 2014), the differences between the various predictions of team play loom larger, which may therefore offer better ground for testing the competing assumptions about team-reasoning decision-makers' goals, keeping in mind that if games get too complex, that may militate against team reasoning.

Any experimental test of the theory of team reasoning is complicated by the multiplicity of hypotheses that are tested simultaneously in connection with the above questions. It may thus be necessary to apply methods that go beyond mere observation of decision-makers' choices in games, e.g., asking participants to explain the reasons behind their choices, or encouraging the adoption of one or another mode of reasoning through the use of additional pre-play tasks. One possibility for further experimental work is to study how priming group or individualistic thinking affects people's choices in simple Nash equilibrium coordination games where the team-optimal outcome seems to be obvious. Such a test would accord with a number of versions of the theory with respect to what is assumed to trigger the shift in individuals' adopted mode of reasoning, as well as with a number of suggested functional representations of a team's goals.
Acknowledgements

We are extremely grateful to Nicholas Bardsley, Julian Kiverstein, Guglielmo Feis and Mantas Radzvilas for their invaluable suggestions, which we used to improve earlier versions of this work. We are also grateful to James Thom for a large number of insightful discussions on the topic during the course of preparing this chapter. Our work on this chapter was supported by funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007–2013) / ERC Grant Agreement n. 283849.
Notes

1 These are Nash equilibria in pure strategies. There is a third Nash equilibrium in mixed strategies, in which both players randomize between the two available strategies by playing Hi with probability 1/3 and Lo with probability 2/3.

2 For some of the early and later theoretical developments, see Bacharach (1999, 2006), Sugden (1993, 2000, 2003, 2011, 2015), Gold and Sugden (2007a, 2007b), and Gold (2012).

3 However, see Bacharach (1997) for a model where, as well as the "I frame" and the "we frame", there is also an "S (superordinate) frame", which is active when someone manages to see a problem from both the "I" and the "we" perspectives. Someone in the "S frame" is still compelled only to evaluate the outcomes from either an "I" or a "we" perspective, and the cooperative option is chosen by a player in the "S frame" if it is the best option from the perspective of team reasoning and not worse than any other rational solution in terms of individualistic best-responding.

4 To understand the idea of mutual benefit it is important to note that the payoff numbers associated with different outcomes in games are meant to represent the interacting players' preferences, which, in some sense, mirror their goals and motivations in these games. In this light, higher payoff values represent higher levels of preference satisfaction. This interpretation of payoffs, however, causes a general difficulty in experiments, in which we need to assume that the payoffs presented to participants are correctly aligned with their true motivations and preferences. If games used in experiments are incentivized using monetary payoffs, for example, we need to assume that the interacting participants' true motivations are aligned with the maximization of personal monetary payoffs.

5 Hurley (2005b) follows Kacelnik (2006) in distinguishing two conceptions of rationality: rationality as consistent patterns of behaviour and rationality as processes of reasoning that underlie that behaviour. Hurley subscribes to the first conception; therefore the processes in an agent that actually generate his or her rational behaviour need not be isomorphic with the theoretical account of why the behaviour counts as rational. Hence she investigates local procedures and heuristics from which collective units of agency can emerge. According to her picture, choices can be instrumentally rational even if they result from a crude, low-level heuristic.

6 If the numbers in game matrices, for example, represent monetary payoffs and all players value money in the same way (that is, an additional unit of currency is subjectively worth just as much to one player as it is to another), then interpersonal comparisons of payoffs are not problematic. If, however, the payoff numbers in games represent players' personal motivations as von Neumann-Morgenstern utilities, then such comparisons are tricky.

7 Note that according to the above definition, both (Lo, Lo) and (Hi, Hi) are mutually beneficial outcomes in the Hi-Lo game, since even (Lo, Lo) guarantees both players more than their maximin payoff. Hence, the definition of mutual advantage does not, by itself, exclude Pareto inefficient outcomes and, for Sugden (2015), which of the mutually beneficial outcomes will be sought by a team depends on which outcome each player in a game will have "reciprocal reason to believe" will be sought by every other player. See also Cubitt and Sugden (2003) for more details on "reason to believe", which is based on a reconstruction of Lewis' (1969) game theory.

8 In Karpus and Radzvilas' function, payoffs are first normalized so that, for each player, the least and the most preferred outcomes in a game are associated with payoff values 0 and 100 respectively.

9 There is a connection between the notion of mutual benefit in team play and Gauthier's (2013) idea of rational cooperation discussed earlier. For Gauthier, rational cooperation is attained by maximizing the minimum level of personal gains across players relative to threshold points beyond which individuals would not cooperate. This is similar to the way the maximally mutually beneficial outcomes are identified using the function of teams' goals presented by Karpus and Radzvilas (2016). Gauthier, however, does not provide a clear characterization of what the aforementioned threshold points are, and his justification for rational cooperation is based on the idea of "social morality" (see earlier discussion in Section 2) rather than on the interacting players attempting to resolve games in mutually advantageous ways.

10 This is assumed in the most frequently occurring version of cognitive hierarchy theory. For a slightly different version, in which level-0 decision-makers randomize between all of the available options but assign slightly higher probability to the strategy associated with the highest personal payoff or that with the most salient label, see, for example, Crawford et al. (2008).

11 Note that this idea is based on an implicit assumption that decision-makers are not extremely risk-loving. If the interacting decision-makers both preferred a 1 in 3 chance of receiving a payoff of 10 to the certainty of a payoff of 9 (i.e., if they both were extremely risk-seeking), then the team-optimal choice may be to pick one of A, B or C in the hope of coordinating on one of the outcomes (A, A), (B, B) or (C, C) respectively.

12 The type of pie games used and the conclusions drawn in the two experiments are quite similar. We here focus on the results reported in the first study.

13 For example, in the game G3 cognitive hierarchy theory predicts that level-1 reasoners will play the strategy associated with the highest personal payoff. This is the option R1 for the player who receives the payoff of 10 from the outcome (R1, R1) and the option R2 for the player who receives the payoff of 10 from the outcome (R2, R2). Thus, the level-2 reasoners' best-response strategies to the choices of level-1 types will be a mixture of options R1 and R2, depending on which player they are. As a result, cognitive hierarchy theory predicts a mixture of R1 and R2 choices with no play of R3.

14 In the game G10 this approach establishes the team-optimal outcomes to be (R1, R1) and (R2, R2), thus predicting no play of (R3, R3).

15 The study also refers to a mode of reasoning called strong Stackelberg reasoning, but, since the latter always predicts the play of a Nash equilibrium, in all but one of the studied cases it is indistinguishable from individualistic best-responding.

16 Bardsley and Ule (2014) test for team reasoning vs. cognitive hierarchy and the principle of insufficient reason in a "risky" coordination game, where players may experience losses as well as gains. Their results favour team reasoning. (We learned of this paper too late to review it in detail in this chapter.)
References

Bacharach, M. (1997). 'We' Equilibria: A Variable Frame Theory of Cooperation. Oxford: Institute of Economics and Statistics, Oxford University.
———. (1999). Interactive team reasoning: A contribution to the theory of co-operation. Research in Economics, 53, 117–147.
———. (2006). Beyond Individual Choice: Teams and Frames in Game Theory. Princeton, NJ: Princeton University Press.
Bardsley, N. (2000). Theoretical and Empirical Investigation of Nonselfish Behaviour: The Case of Contributions to Public Goods. PhD Dissertation, University of East Anglia.
———. (2001). Collective reasoning: A critique of Martin Hollis's position. Critical Review of International Social and Political Philosophy, 4, 171–192.
Bardsley, N., Mehta, J., Starmer, C. & Sugden, R. (2010). Explaining focal points: Cognitive hierarchy theory versus team reasoning. The Economic Journal, 120, 40–79.
Bardsley, N. & Ule, A. (2014). Focal points revisited: Team reasoning, the principle of insufficient reason and cognitive hierarchy. Munich Personal RePEc Archive (MPRA) Working Paper No. 58256.
Colman, A. M., Pulford, B. D. & Lawrence, C. L. (2014). Explaining strategic cooperation: Cognitive hierarchy theory, strong Stackelberg reasoning, and team reasoning. Decision, 1, 35–58.
Colman, A. M., Pulford, B. D. & Rose, J. (2008). Collective rationality in interactive decisions: Evidence for team reasoning. Acta Psychologica, 128, 387–397.
Crawford, V. P., Gneezy, U. & Rottenstreich, Y. (2008). The power of focal points is limited: Even minute payoff asymmetry may yield large coordination failures. The American Economic Review, 98, 1443–1458.
Cubitt, R. P. & Sugden, R. (2003). Common knowledge, salience and convention: A reconstruction of David Lewis' game theory. Economics and Philosophy, 19, 175–210.
Faillo, M., Smerilli, A. & Sugden, R. (2013). The Roles of Level-k and Team Reasoning in Solving Coordination Games. Cognitive and Experimental Economics Laboratory Working Paper No. 6–13, Department of Economics, University of Trento, Italy.
———. (2016). Can a Single Theory Explain Coordination? An Experiment on Alternative Modes of Reasoning and the Conditions Under Which They Are Used. CBESS [Centre for Behavioural and Experimental Social Science] Working Paper 16–01, University of East Anglia.
Gauthier, D. (1986). Morals by Agreement. Oxford: Oxford University Press.
———. (2013). Twenty-five on. Ethics, 123, 601–624.
Gold, N. (2012). Team reasoning, framing and cooperation. In S. Okasha and K. Binmore (Eds.), Evolution and Rationality: Decisions, Co-operation and Strategic Behaviour (Ch. 9, pp. 185–212). Cambridge: Cambridge University Press.
———. (in press). Team reasoning: Controversies and open research questions. In K. Ludwig and M. Jankovic (Eds.), Routledge Handbook of Collective Intentionality. Routledge.
Gold, N. & Sugden, R. (2007a). Collective intentions and team agency. Journal of Philosophy, 104, 109–137.
———. (2007b). Theories of team agency. In F. Peter and H. B. Schmid (Eds.), Rationality and Commitment (pp. 280–312). Oxford: Oxford University Press.
Hurley, S. (2005a). Rational agency, cooperation and mind-reading. In N. Gold (Ed.), Teamwork: Multi-Disciplinary Perspectives (pp. 200–215). Basingstoke: Palgrave Macmillan.
———. (2005b). Social heuristics that make us smarter. Philosophical Psychology, 18, 585–612.
Kacelnik, A. (2006). Meanings of rationality. In S. Hurley and M. Nudds (Eds.), Rational Animals? (pp. 87–106). Oxford: Oxford University Press.
Karpus, J. & Radzvilas, M. (2016). Team reasoning and a rank-based function of team interests. Manuscript under review.
Lewis, D. (1969). Convention: A Philosophical Study. Cambridge, MA: Harvard University Press.
Marr, D. (1982). Vision: A Computational Approach. New York: W. H. Freeman and Company.
Regan, D. (1980). Utilitarianism and Co-operation. Oxford: Clarendon Press.
Smerilli, A. (2012). We-thinking and vacillation between frames: Filling a gap in Bacharach's theory. Theory and Decision, 73, 539–560.
Sugden, R. (1993). Thinking as a team: Towards an explanation of nonselfish behavior. Social Philosophy and Policy, 10, 69–89.
———. (2000). Team preferences. Economics and Philosophy, 16, 175–204.
———. (2003). The logic of team reasoning. Philosophical Explorations: An International Journal for the Philosophy of Mind and Action, 6, 165–181.
———. (2008). Nash equilibrium, team reasoning and cognitive hierarchy theory. Acta Psychologica, 128, 402–404.
———. (2011). Mutual advantage, conventions and team reasoning. International Review of Economics, 58, 9–20.
———. (2015). Team reasoning and intentional cooperation for mutual benefit. Journal of Social Ontology, 1, 143–166.
Tversky, A. & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458.
Zizzo, D. J. & Tan, J. H. (2007). Perceived harmony, similarity and cooperation in 2 × 2 games: An experimental study. Journal of Economic Psychology, 28, 365–386.
24
VIRTUAL BARGAINING
Building the foundations for a theory of social interaction

Nick Chater, Jennifer B. Misyak, Tigran Melkonyan, and Hossam Zeitoun
It is a familiar observation that we ascribe beliefs, desires, and actions not just to each other, but also to couples, families, sports teams, organizations, and entire nations. But it is by no means clear whether such ascriptions merely represent careless metaphor, or capture a genuine insight. Does it really make sense to ask not only what do I think, but what do we, the couple, sports team, or nation, think? This question becomes particularly pertinent in the context of understanding social interaction, the focus of this chapter, where it seems that making sense of we-thinking, in some form, may be of crucial theoretical importance.

Suppose, for example, that, sitting on opposite sides of the dining table, two people, Ms Stuart and Mr MacDonald, wonder which of two so-far untouched wine glasses is 'theirs' – i.e., which they are entitled to pick up and drink from. If both people identify the same glass as theirs, then conflict is likely to arise; as long as they choose to reach for different glasses, whichever they choose, conflict will be avoided. From the standpoint of each person, i.e., from the point of view of traditional I-thinking, even this trivial social interaction seems problematic. Stuart can reason that it is in her best interests to choose the glass that MacDonald does not choose. Stuart also knows that MacDonald is likely to be reasoning along precisely the same lines: MacDonald will be attempting to choose whichever glass Stuart has not chosen. But, of course, Stuart has not yet chosen any glass: figuring out which glass to choose is precisely the problem with which Stuart started. So Stuart's reasoning has gone round in a circle.

Now, of course, either player could think: I'll take the glass nearest me or, for that matter, I'll take the glass furthest away from me. Either strategy will succeed as long as the other adopts the same strategy. So now we have a recurrence of the same type of problem, but at a more abstract level.
Rather than coordinating on whose glass is whose, they instead have to coordinate on which strategy to use to decide whose glass is whose. And this problem is no easier than the original.1

From the point of view of I-thinking, then, we have deadlock. Stuart wishes to choose differently from MacDonald; MacDonald wishes to choose differently from Stuart. However much they each reason about the other, reasoning about them, and so on, there seems to be no way out. This is where we-thinking may come to the rescue. Suppose that, rather than asking what should I do?, Stuart and MacDonald instead ask themselves What should we do? Now, even the
subtlest asymmetry in the arrangement of the glasses may provide a way of breaking the deadlock. Suppose, for example, that one glass is slightly nearer than the other to Stuart’s side of the table and further from MacDonald’s side of the table. Or, instead, suppose both glasses are precisely in the middle of the table, but one glass happens, whether by chance or design, to be slightly nearer to Stuart’s name card (the dinner is somewhat formal). Or perhaps the layout of the glasses is utterly unhelpful, but one wine glass sits on a mat bearing the tartan of the Stuarts; the other is on a mat bearing the tartan of the MacDonalds. In the light of these asymmetries, it is easy to answer for Stuart and MacDonald the question: What should we do? Clearly, considered as a pair, it would make more sense for each to take the glass nearest them (rather than each to reach awkwardly across the table to the furthest glass); also on grounds of economy of movement, and perhaps also in the light of the perceptual tendency to group together nearby or related objects, it will clearly be less confusing for both if Stuart chooses the glass near the Stuart name card, and similarly for Stuart choosing the glass on the Stuart tartan. Social interactions are replete with coordination problems of this type. For example, consider one person reaching out to another with an apple. Is the aim to show or offer the other person the apple? As long as both parties have the same interpretation of the action, social harmony will ensue; but if they have different interpretations, then there is potential for considerable irritation. Or consider the interpretation of ambiguous facial expressions, gestures, pronouns, names, and so on. As long as both participants in a communicative interaction agree what is meant, then communication proceeds successfully; if they do not, then communication will misfire. 
As before, I-thinking doesn't resolve the question of which interpretation to alight on; each party can infer only that they should choose the same interpretation as the other, but which that may be they cannot tell. But, potentially at least, we-thinking offers hope where it is clear that one interpretation is better in some way: more efficient, less confusing, or more informative.

One interesting challenge for the we-thinking perspective is to develop the details of these and other cases, and subject the resulting accounts to analysis and experimental tests. Here, however, our focus differs: to provide an account of we-thinking and, in particular, to show how the group mind, or at least the group decision-making, may emerge from the beliefs and desires of the individuals within the group. We shall return to this challenge shortly. First, though, we explore a type of interaction, which seems to pose a puzzle for many versions of both I- and we-thinking accounts of social behaviour, and which provides motivation for aspects of the virtual bargaining account of we-thinking developed below.
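In game-theoretic terms, the wine-glass problem is a simple (anti-)coordination game, and the deadlock can be made concrete in a short sketch. The payoff numbers below are purely illustrative assumptions of ours, not part of the chapter's argument:

```python
# Toy model of the wine-glass problem (payoff numbers are illustrative):
# each diner picks a glass, and they succeed exactly when they pick
# different glasses.
LEFT, RIGHT = 0, 1

def u(stuart, macdonald):
    """(Stuart's payoff, MacDonald's payoff): 1 each if they reach for
    different glasses, 0 each if they clash over the same one."""
    return (1, 1) if stuart != macdonald else (0, 0)

def is_nash(s, m):
    """True if neither player gains by unilaterally switching glasses."""
    return u(s, m)[0] >= u(1 - s, m)[0] and u(s, m)[1] >= u(s, 1 - m)[1]

# Both mismatching profiles are equilibria, and they are payoff-identical,
# so I-thinking alone gives neither player a reason to pick one of them.
assert is_nash(LEFT, RIGHT) and is_nash(RIGHT, LEFT)
assert u(LEFT, RIGHT) == u(RIGHT, LEFT)
assert not is_nash(LEFT, LEFT) and not is_nash(RIGHT, RIGHT)
```

The two equilibria are perfectly symmetric, which is exactly the regress described above: the payoffs themselves cannot select between them.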
The booby-trap game

Suppose two entirely self-interested and amoral robbers each contemplate stealing the contents of the other's safe. Each safe contains $1,000,000 of artworks. If stolen, these artworks can only be sold on the black market for $100,000 – still, of course, a considerable temptation. But there is a possible defence: for just $100, either or both robbers can buy and fit a booby-trap to their safe. Then, if the other robber attempts to break in, the booby-trap will be detonated, killing them in the attempt, but also destroying the artworks. Therefore, for concreteness, let us suppose that each robber must choose from three mutually exclusive options (and they must choose simultaneously, and without knowledge of the actions of the other player):

1 Do nothing
2 Attempt to break into the other robber's safe
3 Buy and fit a booby-trap
This setup, which we call the booby-trap game (Misyak & Chater, 2014), is designed so that, whatever the other player does, it is always better to do nothing than to buy the booby-trap.2 If the other player does not attempt to break into one's safe (i.e., chooses options 1 or 3), then the purchase of the booby-trap is clearly a waste of $100. And if the other player does attempt to break into one's safe, having bought the booby-trap achieves nothing: the $1,000,000 of artwork is lost in any case. (We assume that the robbers are entirely unconcerned about other people, deriving neither delight nor displeasure in the fate of the other.) In the terminology of game theory, option 3 is dominated by option 1.3 According to almost any theory of I-thinking, dominated options cannot rationally be chosen.4 But if, from an I-thinking perspective, the dominated option cannot be chosen by either player (we assume that both robbers are fully rational), then that option can be eliminated from consideration, so that both players must choose between just two options:

1 Do nothing
2 Attempt to break into the other robber's safe
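The two-step elimination argument can be checked mechanically. In the sketch below the dollar payoffs are read off from the story, but the matrix itself, and in particular the large negative number standing in for being blown up, is our own illustrative assumption:

```python
# Illustrative payoff matrix for the booby-trap game. Dollar values follow
# the story; DEATH is an assumed stand-in for being blown up.
NOTHING, STEAL, TRAP = 0, 1, 2
DEATH = -10**9

# payoff[a][b] = row player's payoff when she plays a and the other plays b
# (the game is symmetric, so the column player's payoff at (a, b) is
# payoff[b][a])
payoff = [
    [1_000_000, 0,       1_000_000],  # 1: do nothing
    [1_100_000, 100_000, DEATH],      # 2: steal
    [999_900,   -100,    999_900],    # 3: buy and fit a booby-trap
]

def dominates(a, b, opponent_actions):
    """True if a is at least as good as b against every opponent action,
    and strictly better against at least one."""
    return (all(payoff[a][c] >= payoff[b][c] for c in opponent_actions)
            and any(payoff[a][c] > payoff[b][c] for c in opponent_actions))

all_actions = [NOTHING, STEAL, TRAP]
# Step 1: doing nothing dominates buying the booby-trap...
assert dominates(NOTHING, TRAP, all_actions)
# Step 2: ...and once the trap is eliminated, stealing dominates doing
# nothing, the Prisoner's Dilemma structure discussed in the text.
assert dominates(STEAL, NOTHING, [NOTHING, STEAL])
```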
This simple game has the structure of the classic Prisoner's Dilemma, where doing nothing corresponds to cooperating, and attempting to break into the other's safe corresponds to defecting. Given just these two options, option 2 dominates option 1. That is, whatever the other person does, it is better to attempt to break into the other's safe – indeed, better to the tune of $100,000. We thus conclude that, from an I-thinking perspective, both players should play option 2: that is, they both break into each other's safe. But this seems to be a rather disastrous outcome: each robber's wealth collapses from the $1,000,000 of artwork in their own safe to the $100,000 value of the artwork stolen from the safe of the other, which can only be disposed of on the black market at a knock-down price.

Does we-thinking offer a way out? Intuitively, it seems that it may: surely both robbers will realize that it is better for both of them to buy the booby-trap for $100; then neither has the temptation to try to steal, because they know they will be blown up. So each knows that her artworks are safe. Their wealth declines only by the cost of buying and fitting the booby-trap, leaving their fortunes at a healthy $999,900.

But one prominent viewpoint on we-thinking, team reasoning (see Chapter 23, Jurgis Karpus and Natalie Gold, this volume; Bacharach, 2006; Colman, 2003; Colman, Pulford & Rose, 2008; Sugden, 1993, 2003), neither applies nor predicts this intuitive solution. First, team reasoning is typically assumed to apply when the players are, in a sense, members of the same team: that is, if they are attempting to achieve some common interest. But this is not true of the robbers. Each would be delighted to keep their own art, and steal that of the other, if they could get away with it: they are adversaries rather than teammates.
Second, if team reasoning were, nonetheless, to apply, then it seems that option 1 (no action) would be chosen, as this is best for both players (they both keep their $1,000,000 of artwork, without spending $100 on the booby-trap). But while the robbers could credibly both buy a booby-trap, they will not credibly both do nothing: indeed, if one robber believes that the other has not bought the booby-trap, then she will steal. Both players doing nothing would indeed be best for the team (and best for both team members), but it is simply not feasible.

The virtual bargaining account (introduced in Misyak, Melkonyan, Zeitoun & Chater, 2014) provides a different model of we-thinking, which addresses these two difficulties. The key idea is that we-thinking should be construed as what we would agree, if we were able to bargain. If it is 'obvious' which bargain we would agree to, then we do not need to actually go
through the bargaining process. We can simply implement that bargain directly. So, in the case of the booby-trap game, we can imagine the robbers discussing what they might do. Given the complete lack of trust between them, an attempt to reach a bargain in which neither does anything (i.e., they both select option 1) would be impossible: each would anticipate that the other would double-cross her. In the absence of some third party to enforce any agreement, that bargain is simply not credible. However, it would be entirely possible for both robbers to agree to buy and fit the booby-trap. Robber A knows that, as long as she buys the booby-trap, Robber B has no incentive to steal; and conversely, Robber B knows that, as long as he buys the booby-trap, Robber A has no incentive to steal. Moreover, both players can figure out that this would be a good agreement to reach, without actually having to go through the bargaining process. So, according to the virtual bargaining account of social behaviour, the players may implement this 'agreement' in the absence of any actual bargaining, or indeed opportunity to bargain. The agreement involves both players choosing dominated strategies, which seem counter to individual rationality. Nonetheless, the resulting outcome for both players is good, as their million-dollar collections of artworks remain intact. Notice, too, that neither player requires trust or reliance on the other's goodwill: indeed, one might suspect that booby-traps will be bought especially in cases where trust is very low.

We will describe one way of formalizing virtual bargaining shortly; but, first, note that the account neatly extends to deal with the coordination problems, such as not choosing the same wine glass, that we described earlier.
It might, for example, be obvious to both parties that if we could explicitly discuss which wine glass to choose, we would economize on the physical effort of reaching, by assigning my wine glass to be that which is nearest to me, and your wine glass to be that which is nearest to you. Or we might agree that it is less likely to cause confusion, given how perception and memory operate, and reduce the risk of inadvertently picking up the wrong glass, if my glass is the one nearest my name card or resting on my tartan, rather than the other way round. According to the virtual bargaining account, the success of coordination will depend precisely on the degree to which one solution is obviously better, for all parties, than the others. Sometimes, there will be no obvious solution and coordination will likely fail; in practice, one or more parties may communicate, perhaps by word or gesture, to clarify which solution is best (e.g., one person may nod, questioningly, towards one of the wine glasses, and the other may smile, in assent). The degree of communication can be fairly minimal, as long as virtual bargaining can be relied upon to do the rest.
Virtual bargaining: an informal sketch of a formal account

The virtual bargaining account of social behaviour is primarily a general theoretical claim: that social interactions follow what would be agreed by the participants, were they able to bargain, reach agreement, or negotiate explicitly. Thus, any factors that would influence an explicit bargaining process should, other things being equal, be likely to influence virtual bargaining. For example, if the robbers in the booby-trap game happen to be good friends, or members of the same family, it might be that, if they were able to bargain, they would swiftly agree that neither would steal the other's art collection, since trust would be especially high and neither would feel the need to buy the booby-trap. In such situations, we should anticipate that virtual bargaining is likely to lead to the same outcome. Equally so, if there is a sharp asymmetry in power between the participants in a social interaction, and this would influence the outcome of any actual bargaining, then the same influence would be felt for virtual bargaining. So, for
example, if an admiral and a naval rating have to coordinate on a choice of a brand-new versus battered life jacket when abandoning ship, it is likely that they will immediately coordinate on the brand-new life jacket being worn by the admiral. The admiral does not have to give the order, precisely because it is ‘obvious’ what the order would be, and that it would be carried out (here, the bargaining positions of the two parties are extremely asymmetrical). Similarly, reputational facts about the parties to a social interaction may play a decisive role. If two admirals must decide between the brand-new and battered life jackets, and one is known to be excessively vain and status conscious, then both may spontaneously coordinate so that the vain admiral has the smarter life jacket, as it is known that, were bargaining actually to take place, the vain admiral would be utterly intransigent, even as the ship began to sink. Indeed, it is typically only necessary to engage in explicit bargaining when it is not clear what the result of such bargaining would be:5 where the result is clear, the process of virtual bargaining allows both parties to implement that bargain directly. In the light of the wide range of factors that can influence real bargaining, and the fact that these will typically also influence virtual bargaining, it is clear that a complete formal theory of virtual bargaining is overly ambitious. Nonetheless, key aspects of the theory can be made mathematically precise using the tools of game theory, albeit in a non-standard way. The first challenge is to define what counts as a possible bargain. For two-person interactions, these bargains correspond to a particular pair of moves by the two individuals (or, slightly more generally, strategies, which correspond to probability distributions over moves; but we will leave aside such complexities here). 
One approach towards identifying which pair(s) of 'agreed' moves may count as virtual bargains has the flavour of cautiousness: we consider the potential agreement's worst-case scenario for each social participant – that is, we identify the lesser of the two outcomes that results from either both participants adhering to the agreement or from the other participant 'breaking' the agreement by unilaterally departing from it for greatest possible personal gain (if there is such a possibility). We then define a (weakly) feasible bargain as the pair of moves such that the worst-case scenario for either participant is at least as good, if not better, than the worst case associated with any of the other potential bargains in which one participant adopts a different move.6 (Weakly) feasible agreements, as such, have the consequence of staving off 'exploitation' (i.e., one person unilaterally departing from a bargain in such a way that he or she benefits to the other's detriment). The exceptions (that follow by virtue of the feasible bargain's definition) are situations where some degree of potential exploitation is tolerable (and minimized) since it still leaves one better off relative to other possible agreements.

In particular, for example, virtual bargaining does not allow cooperate-cooperate as a solution to the Prisoner's Dilemma – because, intuitively, virtual bargains have to be self-enforcing (there is no third party to enforce the bargain; indeed, the bargain is entirely virtual and hence not available for the scrutiny of a third party). More concretely, cooperate-cooperate is not a weakly feasible equilibrium, because the worst-case scenario for Player A is very bad, if Player A cooperates as agreed but Player B decides to 'best-respond' to the bargain by defecting. In particular, Player A can guarantee a less bad worst-case scenario by switching to defecting herself.
So, being cautious and hence concerned about making the worst-case scenario as good as possible, each player chooses to defect. So, in the Prisoner's Dilemma, the only weakly feasible agreement is the Nash equilibrium.

By contrast, in the booby-trap game, there is a weakly feasible agreement in addition to the Nash equilibrium, one which is better for both players. If both players agreed to choose the booby-trap, then both players do have a potential incentive to 'break' the agreement by deciding to do nothing instead, thus saving themselves the $100 expense of the booby-trap. But
while one player may unilaterally depart from the bargain for personal benefit, this outcome causes no relative detriment to the other player. The other player is assured $999,900 either way (this is technically the 'worst' that can happen with this bargain – which, of course, is not bad at all). Furthermore, given any pair of moves in which one robber 'agrees' to booby-trap, it will do no good for the second robber to 'agree' to commit to a different course of action: agreeing to steal could foolhardily prove fatal (in the very event that both adhere to their agreed actions), and agreeing to do nothing would invite theft (as the first robber may be encouraged to break from the agreement to steal). Clearly, neither of these two worst-case scenarios improves upon the bargain wherein both robbers 'agree' to booby-trap: hence, the agreement wherein both players mutually booby-trap is a weakly feasible virtual bargain of the game. By contrast, cooperate-cooperate is not a feasible equilibrium in the Prisoner's Dilemma, given that either player can exploit the other by defecting. For the player who is 'betrayed' by the potential defection, this outcome is much more detrimental than that which would arise from adopting a different agreement – say, if he or she had attempted instead to 'agree' to defect; in the latter scenario, the worst-case (and most likely) outcome arises from both players then defecting, which is nonetheless better than the 'payoffs' from being unilaterally 'cheated'.7

Having established what bargains are possible (i.e., weakly feasible), the second step is to adjudicate between these bargains: i.e., to decide which of these would actually be chosen, if explicit bargaining were allowed. Various formal theories of bargaining can be imported from game theory to play this role; the most straightforward is Nash bargaining (Nash, 1950).
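Before moving to that second, adjudicative step, the worst-case test just described can be made concrete for symmetric two-player games. This sketch follows one natural reading of the informal definition; the payoff matrices (and the stand-in value for death) are illustrative assumptions of ours, and the formal definition in Misyak et al. (2014) may differ in detail:

```python
# One reading of the worst-case feasibility test, for symmetric two-player
# games. Payoff numbers (and DEATH as a stand-in for being blown up) are
# illustrative only.
DEATH = -10**9
BOOBY_TRAP = [              # row player's payoff; 0 = nothing, 1 = steal, 2 = trap
    [1_000_000, 0,       1_000_000],
    [1_100_000, 100_000, DEATH],
    [999_900,   -100,    999_900],
]
PRISONERS_DILEMMA = [       # 0 = cooperate, 1 = defect
    [3, 0],
    [5, 1],
]

def worst_case(u, a, b):
    """Row player's worst case for the agreement (a, b): the lesser of the
    payoff when both adhere and the payoff when the other player breaks
    the agreement by best-responding to the row player's agreed move a."""
    best_break = max(range(len(u)), key=lambda c: u[c][a])
    return min(u[a][b], u[a][best_break])

def weakly_feasible(u, a, b):
    """(a, b) is weakly feasible if neither player could secure a better
    worst case by agreeing to a different move of her own."""
    return (all(worst_case(u, a, b) >= worst_case(u, x, b) for x in range(len(u)))
            and all(worst_case(u, b, a) >= worst_case(u, y, a) for y in range(len(u))))

assert weakly_feasible(BOOBY_TRAP, 2, 2)        # mutual booby-trapping
assert weakly_feasible(BOOBY_TRAP, 1, 1)        # the Nash equilibrium (mutual theft)
assert not weakly_feasible(BOOBY_TRAP, 0, 0)    # mutual inaction: not credible
assert not weakly_feasible(PRISONERS_DILEMMA, 0, 0)  # cooperate-cooperate fails
assert weakly_feasible(PRISONERS_DILEMMA, 1, 1)      # defect-defect survives
```

On these numbers, mutual booby-trapping and mutual theft both pass the test, while mutual inaction and cooperate-cooperate fail, matching the informal analysis.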
Informally, Nash bargaining proposes that the chosen bargain is that which maximizes the product of the utility gains for each player, by adopting the bargain, in comparison with a 'default' outcome in which no bargain is reached. Nash bargaining is simple, and has an elegant axiomatic foundation.

To see how this works, consider our two admirals. If they cannot agree who has which life jacket, both will drown. So either allocation of life-jackets-to-admirals yields a better outcome than drowning (and either allocation is a feasible bargain, because neither has an incentive to unilaterally switch to choose the same life jacket as the other – indeed, this will be likely to incur disaster). But the vain admiral will be made especially miserable by having to wear the battered life jacket; so the product of the utilities gained by the admirals will be lowest in this case. Hence, Nash bargaining successfully prefers the coordination outcome where the vain admiral gets to wear the smarter life jacket.

Or consider the admiral and the rating. The allocation in which the rating gets the smarter life jacket is a feasible bargain since the admiral does not have an incentive to tussle for the smarter life jacket, endangering them both. But the positive aspects of this outcome will be strongly diminished, for the rating, by the likelihood that the admiral will then extract some form of retribution for the rating's insubordination, and both the rating and the admiral know this. So the utility gained by the rating is severely reduced; and consequently the product of the utility gains for both players is reduced. Thus, the alternative solution, in which the admiral has the smarter life jacket, and where no such retribution is exacted, is preferred.
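The Nash bargaining step can itself be sketched over the feasible bargains. The utilities for the two-admirals example below are invented purely for illustration, as is the assumption that drowning is the zero-utility default:

```python
# Sketch of the Nash bargaining step: among the feasible bargains, pick
# the one maximizing the product of each player's utility gain over the
# no-agreement default. The admirals' utilities below are invented.
def nash_bargain(bargains, default):
    """bargains: name -> (u1, u2); default: (d1, d2) if no bargain is
    reached (here: both drown). Returns the Nash bargaining solution."""
    d1, d2 = default
    return max(bargains, key=lambda b: (bargains[b][0] - d1) * (bargains[b][1] - d2))

# Utilities as (vain admiral, other admiral); drowning is the default.
bargains = {
    "vain admiral wears the new life jacket":  (10, 8),
    "other admiral wears the new life jacket": (2, 9),  # vain admiral miserable
}
assert nash_bargain(bargains, (0, 0)) == "vain admiral wears the new life jacket"
# Nash products: 10 * 8 = 80 versus 2 * 9 = 18.
```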
Communication depends on virtual bargaining

We have suggested that, when a unique virtual bargain is not 'obvious', participants in a social interaction may engage in communication to clarify which bargain to choose. But could communication itself rest on virtual bargaining? Note, first, that communication can be viewed as a coordination problem. A 'signal' (which might be a nod, a facial expression, a pointing action, an utterance, etc.) is 'sent' by one
participant in a social interaction. But what 'message' does the signal convey? Typically, communicative signals are highly ambiguous – communication succeeds just when the sender and receiver assign the same message to a particular signal. So, for example, returning to our example of the dinner table, suppose that Stuart utters the words 'my wine glass is empty.' Then Stuart will successfully have communicated with MacDonald if, among other things, Stuart and MacDonald agree on the interpretation of 'my wine glass'. But this is precisely the problem that we described earlier, when Stuart and MacDonald were wondering which wine glass to reach for. If they coordinate successfully, then they are talking about the same wine glass; if they do not coordinate successfully, then they take the utterance to refer to distinct wine glasses, and communication will fail. Of course, for communication to be successful, they may need to coordinate in other ways too, for example, regarding whether empty is interpreted as implying a complete vacuum, or the absence of liquid.

Recognize that, from this perspective, communication should be sharply distinguished from conveying information. If Stuart notices that MacDonald's eyes are continually drifting towards one of the glasses, she might tentatively infer that MacDonald believes this to be his glass. Indeed, MacDonald might exploit this possibility, by deliberately attending more to one glass, rather than the other. Here, MacDonald is successfully conveying information to Stuart, by engaging in actions (eye-movements) to encourage Stuart to draw a particular inference (that MacDonald believes this glass to be his).
But this appears fundamentally different from the case in which the action is overtly communicative, and crucially so: i.e., where the inferences regarding the interpretation of an action or utterance depend on mutual recognition that the action or utterance is intended to convey a message, and that both parties must coordinate on an interpretation of that message.

To see the distinction in sharp focus, consider the case of deception. Person A hides a prize under one of two buckets; person B chooses a bucket. If person B chooses the bucket with the prize, B gets the prize; otherwise A gets the prize. So A's and B's interests directly conflict. A may be unable to inhibit glancing at the bucket with the prize, inadvertently conveying information to B. B may perhaps apprehend this information, choose the correct bucket, and win. On the other hand, A may be one step ahead, furtively glancing at the wrong bucket, and hoping that B will draw precisely the wrong inference. And, indeed, there is endless scope for bluff, double bluff, and so on. But suppose, instead, A explicitly and directly points out, or nods towards, one of the buckets. Then it is clear to both A and B that the most natural interpretation of this signal is that the prize is under the bucket indicated by the gesture. There is, of course, as ever, a coordination problem to be solved: it could be that A is attempting to highlight the empty bucket, rather than the bucket with the prize. But both A and B will agree that this is not the case, perhaps because there is a general tendency to highlight psychologically salient items, rather than nonsalient items.
In any event, the key point is that both A and B know that the pointing action, or nod, has a content along the lines of 'that is the bucket with the prize', even though both of them recognize that the statement may carry little or no information at all (because A has no incentive to reveal the truth to B, and indeed, every incentive to mislead B as far as possible). So, virtual bargaining can explain how even bitter enemies can agree on the communicative content of a message from one to the other, even though there is no presumption of any degree of cooperation regarding whether that content is helpful or true. Thus, virtual bargaining appears to require a hypothetical version of something like Grice's (1975) cooperative principle: the interpretation of a signal is received on the basis that, if both parties were being cooperative, they would interpret the signal as corresponding to a particular message; but there need be no presumption of actual cooperativeness – e.g., it may be that the content of the message is being
used deliberately to deceive. Thus, even sworn enemies can understand what each other is saying, even if they may not believe a word of it.
Common knowledge

We have so far talked rather loosely about the chosen virtual bargain being, if all goes well, obvious to the participants in a social interaction. But more is required than that a particular solution is obviously best to each participant, considered individually; in particular, it is crucial that their preferred bargain be common knowledge, in the sense of Lewis (1969). A piece of information, K, is common knowledge between A and B if: A and B know that K; A knows that B knows that K, and B knows that A knows that K; A knows that B knows that A knows that K, B knows that A knows that B knows that K; and so on, indefinitely.

To see why this is important, consider the tartan placemats. Even if Stuart knows the identity of the Stuart and MacDonald tartans, she cannot use this as a way of coordinating with MacDonald, unless she knows that MacDonald also knows his tartans. Even if she does know that MacDonald knows his tartans, MacDonald cannot use this information to attempt to coordinate unless MacDonald not only knows that Stuart knows her tartans, but also knows that Stuart knows that he knows his tartans; and so on, indefinitely. And only given full-blown common knowledge of tartans will it be possible to justify the conclusion that the superiority of a particular coordination solution is itself common knowledge – and hence can be chosen by both players, without explicit bargaining or communication.

Common knowledge may seem an excessively restrictive starting point: can we ever be assured that the infinite number of conditions required to ensure common knowledge really hold? We suggest that, from a psychological point of view at least, the assumption of common knowledge may be much more widespread than one might first anticipate: indeed, we can conjecture that the starting point for virtual bargaining for each participant is the default assumption that everything they know is common knowledge.
If this is right, then rather than attempting to establish common knowledge, the reasoning system may take common knowledge as the starting point and instead focus on determining which parts of this information are not common knowledge (e.g., what do I know, that others may not know, or know that I know, and so on). Indeed, the ‘curse of knowledge’ (Camerer, Loewenstein & Weber, 1989) is the observation that if a person knows something, she tends to reason and behave as if others also know it; such a bias has been connected to a wide range of social-cognitive phenomena across adults and young children (e.g., Birch & Bloom, 2004; Nickerson, 1999). We suggest that this ‘curse’ might be reconceptualized here as the ‘curse of common knowledge’: we tend to assume, as default, not just that others know what we know, but that what we know is full-blown common knowledge with respect to others with whom we are interacting.8

One interesting implication of this viewpoint is that Stuart, who knows tartans very well, might immediately conclude that her wine glass should be the one on her own tartan; and only with considerable cognitive effort might she realize that MacDonald may know nothing about tartans, or may know about tartans but have no reason to believe that she does. If these latter thoughts can be generated rapidly enough, Stuart may be able to inhibit what might be an embarrassingly ‘forward’ lunge for one of the wine glasses.
Nick Chater et al.

Joint action, joint attention, and virtual bargaining

How does virtual bargaining relate to joint action (Chapter 20, this volume, Butterfill; Bratman, 2014; Sebanz, Bekkering & Knoblich, 2006) and joint attention (Chapter 12, this volume, Fiebich, Gallagher & Hutto)? A simple starting point would be to say that an action is a joint action when and only when it results from the implementation, by all parties, of the same virtual or explicit bargain for a shared goal. From this point of view, two people carrying a heavy table upstairs is a joint action, to the extent that their behaviour is what they actually agreed, or what it is common knowledge that they would have agreed. For example, they might actually explicitly have agreed which person will reverse up the stairs. It might not be necessary, however, to agree explicitly on the need to tip the table upwards as the stairs are approached. If both participants spontaneously and appropriately adjust their grip and posture as they come to the bottom of the stairs, we might see this as resulting from virtual bargaining: it is ‘obvious’ to both parties that the relevant combination of forces must be deployed at the appropriate time if the table is to move up the staircase without collision. That is, it is common knowledge that this would have been agreed, had any discussion of the issue taken place; and precisely because this is common knowledge, such discussion can be omitted. From this point of view, then, behind every joint action, X, there is an explicit or implicit agreement: let’s do X.9

Joint attention can then be viewed as a type of joint action, where the ‘action’ involves coordinating sensorimotor and information-processing resources.10 Thus, on a virtual bargaining account, we jointly attend to a movie, a piece of music, or an object when we explicitly agree to attend to it (and go through with that agreement); but also, more typically, and more interestingly, when it is ‘obvious’ that we would make such an agreement, were we to discuss the matter at all. And, because this is obvious, actual communication is unnecessary.
According to this viewpoint, when two people are jointly attending to, say, the radio news, it is not merely the case that they are both listening to the radio news, or, for that matter, that it is common knowledge that they are both listening to the radio news. What is crucial is that it is common knowledge that they have implicitly agreed: let’s listen to the radio.

We have so far considered the virtues of virtual bargaining through its ability to provide an explanation of various aspects of social behaviour. Moreover, the virtual bargaining approach makes a range of empirical predictions. For example, other things being equal, we should expect that any factor that would affect actual bargaining (e.g., status, reputation, past history, the preferences of the participants, and so on) will also affect social interactions in which explicit bargaining is not possible. Furthermore, specific formal versions of the virtual bargaining account will make more precise predictions in experimental games played in the laboratory (e.g., Misyak & Chater, 2014). Many of these predictions will not arise from other approaches to we-thinking or from standard models of I-thinking. Such experimental tests are, of course, a natural direction for future research.
Virtual bargaining

Top-down and bottom-up approaches to social behaviour

Separate from the distinction between I-thinking and we-thinking is another crucial split between approaches to social behaviour – one which is, we suggest, usefully illuminated by the virtual bargaining account. Should social interactions be viewed bottom-up, from the standpoint of the ‘minds’ involved in the interaction (typically individual people, but they might just as well be teams, companies, or entire countries)? Or should we attempt to understand interactions top-down, from the perspective of the interaction itself?

Cognitive psychologists and rational choice theorists, whether in economics or other areas of the social sciences, have typically focussed on the individual. A particular social interaction, such as an exchange of goods or information, can then be understood in terms of the cognitive mechanisms, beliefs, desires, and so on, of the individuals involved in that social interaction. Drawing a parallel with chemistry, individuals correspond to molecules, and social interactions correspond to chemical reactions. The appeal of bottom-up explanation is the thought that the properties of an interaction must, ultimately, depend only on the properties of the constituent parts; so, by understanding the parts, we should be able to understand the interactions in which they participate.

Top-down approaches to social interaction, which are predominant in anthropology, sociology, and the humanities, insist, by contrast, that social interactions exhibit their own distinct regularities, which cannot usefully be predicted from the component individuals. One line of thinking, which can be traced, for example, to Wittgenstein (1953/1963), is that social behaviour, including language, has its significance in virtue of its role in a cultural ‘form of life’. So, for example, the significance of the dollar arises from the fact that it can be exchanged for goods, cannot (legally) be copied, can be converted into other currencies, and so on. These properties cannot, of course, be predicted from the physical properties of the paper and ink of a dollar bill, although those physical properties are not irrelevant (e.g., concerning how easy it is to forge). Similarly, the meaning of a gesture, facial expression, or utterance may depend on a rich network of other communicative acts, and perhaps physical actions (e.g., passing hammers, holding nails, putting in screws, if we are jointly assembling a piece of furniture): this ‘language game’ cannot be understood merely by analyzing its pieces in isolation, any more than the game of chess can be understood by close analysis of the physical or functional properties of the individual playing pieces and chessboard. From this cultural perspective, the real drivers of human social interaction are not decisions made by the individual, but the framework of norms, customs, and conventions within which such decisions are constrained.
The virtual bargaining account provides a framework in which both bottom-up and top-down approaches naturally fit. Each new social interaction faces its participants with a substantial cognitive challenge: typically, norms, customs, and conventions do not precisely and unambiguously determine how participants should act. And, indeed, people can readily decide what to do in novel situations, such as the booby-trap game, which do not readily fit into some cultural ‘form of life’. But, faced with the question of how to behave in a social situation, each participant knows that the appropriateness of their behaviour depends not only on their own actions, but also on the actions of others; and knows, too, that the other participants in the interaction are reasoning in the same way. So, as we have seen, the question each individual faces is ‘What should we do?’ And, in the light of the answer to this question, each participant can then answer the question ‘What should I do?’ So, rather than seeing each participant in a social interaction as running along ‘rails’ set by the culture, the virtual bargaining approach sees the social interaction, however routine, as potentially requiring sophisticated inference and decision-making: in particular, each participant must figure out which virtual bargain would be realized, given the common knowledge of the participants.

So far, so individualistic. Note, however, that many social interactions involve coordination problems in which there are multiple good virtual bargaining solutions, and it can be challenging to determine which of these solutions is most ‘focal’. There may be several equally good ways of distributing tasks between people, categorizing objects, or linking phonological word-forms to things in the world.
Crucially, though, in such cases the question of what counts as the focal solution to today’s virtual bargaining problem can often be resolved by referring to yesterday’s solution to a similar virtual bargaining problem. The first time we have to decide who will hold a hammer, and who will hold the nail, we may struggle to coordinate; the second time, past precedent immediately suggests what we should do; and, after many trials, we may have, effectively, established a convention between us – perhaps that the person nearest the hammer, or the older person, uses the hammer. If we then conduct many trials between different pairs of people, the ‘convention’ might mutate and spread across the population. In another
set of interactions in another population, a different convention might emerge. Indeed, one might imagine that some similar origin might lie behind the underlying conventions concerning whose wine glass is whose, as discussed above – leading, for example, to the standard convention that drinking glasses be put on the right of the place setting.

From a virtual bargaining perspective, cultural norms, customs, and conventions emerge from layers of virtual bargains, where past virtual bargains provide a set of precedents for future virtual bargains, in much the same way as past legal cases provide precedents for future legal cases in common law. The adjudication of new cases is not determined by previous cases – to explain new judgements, we also have to look inside the thought processes of individual lawyers and judges. But a purely individualistic approach to understanding culture, or the law, would be inadequate. The fact that past cases powerfully influence future cases allows us to speak of case law as providing the basis for a legal, cultural, or linguistic tradition, rather than a collection of arbitrary and unrelated judgements; and many aspects of such a tradition may be a consequence of entirely arbitrary ‘symmetry-breaking’ choices, perhaps made in the long-forgotten past. So rather than seeing the individual or the culture as primary, it may be more appropriate to see the act of virtual bargaining itself as the fundamental unit of analysis. Each new virtual bargain depends both on the culture, i.e., the tradition of past virtual bargains, and on the individual cognitive processes that allow us to transform those past precedents to apply them to new social situations.
Conclusion

Much human interaction involves negotiating and reaching agreements. According to the virtual bargaining account outlined in this chapter, this is true even when no explicit communication takes place. If it is ‘obvious’ to all parties in a social interaction that a particular bargain would be reached were negotiation to take place, then those parties can jump directly to the resulting bargain, without the negotiation process needing to occur. Virtual bargaining provides a way of analyzing ‘we-thinking’, and of thinking about joint attention and joint action; we suggest, too, that even when explicit negotiation or discussion does occur in a social interaction, virtual bargaining is still important – because the local ambiguity of communicative signals generates coordination problems which may naturally be solved by virtual bargaining. We suggest, moreover, that virtual bargaining may provide the basic unit of cultural evolution: past virtual bargains provide precedents for future virtual bargains, just as past cases provide precedents for future cases in a common law legal tradition.
Acknowledgments

J. M. and N. C. are partially supported by ERC grant 295917-RATIONALITY; N. C. is partially supported by the ESRC Network for Integrated Behavioural Science, the Leverhulme Trust, Research Councils UK Grant EP/K039830/1, and the Templeton Foundation.
Notes

1 A more complex case may also be helpful. Suppose Ms Stuart and Mr MacDonald must each choose a glass, and the glasses are inconveniently located on high shelves. Both are awkwardly difficult to reach – but one is significantly lower. As it happens, Ms Stuart is considerably taller than Mr MacDonald. It therefore seems ‘natural’ that Ms Stuart should reach for the higher glass and Mr MacDonald for the lower. Yet if they both choose the easiest glass to reach, thinking purely individualistically, then they will tussle over the lower glass. But if they were to ask themselves ‘What should we do?’, then it is clear that, as a team, the best
strategy is that the taller person should reach for the upper glass, and the shorter person for the lower, and coordination is achieved.

2 We don’t include the possibility that the players might both fit a booby-trap and steal. Adding this option would not change the picture, though. Pure stealing dominates stealing-and-buying-the-booby-trap whatever the other player does, because it saves the cost of the booby-trap. So this fourth option is also immediately eliminated as a possible move.

3 Crucially, whether one player chooses to fit a booby-trap cannot have any causal impact on the actions of the other player – in particular, it cannot affect the beliefs of the other player, because each player chooses simultaneously and privately.

4 Newcomb’s problem (Nozick, 1969) provides a very interesting possible counterexample, as Nozick has pointed out, but this arises under extremely unusual circumstances, such as when one’s adversary is an omniscient being.

5 Or, of course, by using communication to change the grounds from which the bargain is reached – for example, by introducing new information into common knowledge, offering a favour or a bribe, making a threat, and so on.

6 This way of modelling virtual bargains is distinct from, though related to, an earlier formal approach by Misyak and Chater (2014); and the notion of weakly feasible equilibrium is closely related to the very interesting notion of maximin equilibrium introduced by Ismail (2014).

7 Here we are considering only ‘pure’ strategies, in the terminology of game theory – that is, people must agree on one choice or another. But the account generalizes to so-called ‘mixed’ strategies, where agreements can be probability distributions over choices. For example, the players might agree to buy the booby-trap with some probability, p, and cooperate otherwise. If p is sufficiently high, then the other player will still be deterred from defecting; but less ‘cost’ is incurred, on average.
Indeed, both players adopting this strategy will yield a weakly feasible equilibrium – and behaviour close to this equilibrium is observed experimentally (Misyak & Chater, 2014).

8 The notion of knowledge, and indeed of common knowledge, can be spelled out in a wide variety of ways, which will lead to different instantiations of the present account. For example, knowledge may be assumed to be declarative or procedural (e.g., Ryle, 1949); common knowledge might be explained as based on an infinite hierarchy of knowledge claims (e.g., Lewis, 1969), or fixed points (e.g., Barwise, 1988) might be taken as primitive; and so on. These important issues are beyond our scope here, however.

9 In the booby-trap game, though, where the robbers have no trust or common interest, we have argued that virtual bargaining leads them to buy the booby-trap. Framing this as an agreement, Let’s both buy the booby-trap, seems awkward, given that the bargain is made, as it were, through gritted teeth. And for the same reason it seems unnatural to think of the ‘booby-trap–booby-trap’ combination of choices as a joint action. An interesting challenge for future work is to determine what additional constraint may be appropriate here (if, indeed, one is required), perhaps capturing some notion of common interest or shared goals.

10 Fiebich and Gallagher (2013) provide a different analysis of the relevant concepts, but also reach a similar conclusion in arguing that joint attention, when engaged in by participants with shared intentions under their construal, should be recognized as a basic joint action.
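The deterrence logic of the mixed strategy in note 7 can be illustrated with a toy expected-payoff computation. All of the numbers below (the gain from stealing, the trap penalty, the trap cost) are hypothetical values chosen purely for illustration; they are not payoffs from the chapter’s own presentation of the game:

```python
# Toy illustration of note 7: buying a booby-trap with probability p
# can deter stealing at a lower average cost than always buying it.
# GAIN, PENALTY, and TRAP_COST are hypothetical illustrative values.

def steal_payoff(p, gain, penalty):
    """Expected payoff of stealing when the victim booby-traps with probability p."""
    return p * (-penalty) + (1 - p) * gain

def deterrence_threshold(gain, penalty):
    """Smallest p at which stealing's expected payoff falls to zero."""
    return gain / (gain + penalty)

GAIN, PENALTY, TRAP_COST = 2.0, 8.0, 1.0

p_min = deterrence_threshold(GAIN, PENALTY)  # 2 / (2 + 8) = 0.2
p = 0.25                                     # a little above the threshold
assert steal_payoff(p, GAIN, PENALTY) < 0    # stealing no longer pays
print(p_min, p * TRAP_COST)                  # average trap cost is well below 1.0
```

With these numbers, any p above 0.2 makes stealing unprofitable in expectation, while the trap is bought only a quarter of the time – capturing the note’s point that a sufficiently high p deters defection at less average cost.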
References

Bacharach, M. (2006). Beyond Individual Choice: Teams and Frames in Game Theory (N. Gold & R. Sugden, Eds.). Princeton, NJ: Princeton University Press.
Barwise, J. (1988). Three views of common knowledge. In M. Y. Vardi (Ed.), Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge (pp. 365–379). San Francisco: Morgan Kaufmann.
Birch, S. A. J. & Bloom, P. (2004). Understanding children’s and adults’ limitations in mental state reasoning. Trends in Cognitive Sciences, 8, 255–260.
Bratman, M. E. (2014). Shared Agency: A Planning Theory of Acting Together. Oxford: Oxford University Press.
Camerer, C., Loewenstein, G. & Weber, M. (1989). The curse of knowledge in economic settings: An experimental analysis. Journal of Political Economy, 97, 1232–1254.
Colman, A. M. (2003). Cooperation, psychological game theory, and limitations of rationality in social interactions. Behavioral and Brain Sciences, 26, 139–153.
Colman, A. M., Pulford, B. D. & Rose, J. (2008). Collective rationality in interactive decisions: Evidence for team reasoning. Acta Psychologica, 128, 387–397.
Fiebich, A. & Gallagher, S. (2013). Joint attention in joint action. Philosophical Psychology, 26, 571–587.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. Morgan (Eds.), Syntax and Semantics 3: Speech Acts (pp. 41–58). New York: Academic Press.
Ismail, M. (2014). Maximin equilibrium. Working paper RM/14/037. Maastricht University School of Business and Economics.
Lewis, D. K. (1969). Convention: A Philosophical Study. Cambridge, MA: Harvard University Press.
Misyak, J. B. & Chater, N. (2014). Virtual bargaining: A theory of social decision-making. Philosophical Transactions of the Royal Society B: Biological Sciences, 369, 20130487.
Misyak, J. B., Melkonyan, T., Zeitoun, H. & Chater, N. (2014). Unwritten rules: Virtual bargaining underpins social interaction, culture, and society. Trends in Cognitive Sciences, 18, 512–519.
Nash, J. F. (1950). The bargaining problem. Econometrica, 18, 155–162.
Nickerson, R. S. (1999). How we know – and sometimes misjudge – what others know: Imputing one’s own knowledge to others. Psychological Bulletin, 125, 737–759.
Nozick, R. (1969). Newcomb’s problem and two principles of choice. In N. Rescher (Ed.), Essays in Honor of Carl G. Hempel (pp. 114–146). Dordrecht: Reidel.
Ryle, G. (1949). The Concept of Mind. Chicago: University of Chicago Press.
Sebanz, N., Bekkering, H. & Knoblich, G. (2006). Joint action: Bodies and minds moving together. Trends in Cognitive Sciences, 10, 70–76.
Sugden, R. (1993). Thinking as a team: Towards an explanation of nonselfish behaviour. Social Philosophy & Policy, 10, 69–89.
Sugden, R. (2003). The logic of team reasoning. Philosophical Explorations, 6, 165–181.
Wittgenstein, L. (1963). Philosophical Investigations (G. E. M. Anscombe, Trans.). Oxford: Basil Blackwell. (Original work published 1953.)
25
SOCIAL ROLES AND REIFICATION1

Ron Mallon
Stand in a crowded urban center, and you can see an extraordinarily diverse, amazingly complex social world that is structured in part by widespread understandings about the properties, functions, rights, privileges, duties, obligations, and restraints of certain groups of people in certain domains – structured, in part, by social roles.

One source of such roles is our institutions. Our institutions are full of social roles like teacher, lawyer, doctor, member of Congress, Starbucks barista, licensed pipefitter, and fully-paid-up member of the American Philosophical Association, and these social roles structure permissions, restrictions, expectations, and behaviors in ways that facilitate coordination and, sometimes, cooperation. When we look across the crowd, we see many conspicuous markers of such social role status, including uniforms and badges that allow us to fix expectations for interpersonal regulation, coordination, and cooperation.

But there is also a long social constructionist tradition that insists upon the importance of another type of social role: social roles like being a man or a woman; homosexual or straight; Asian, black, or white; angry or sad. This second group of roles differs from institutional roles in that these categories are widely understood to be natural kinds of person – kinds that exist or have their character independently of our classificatory practices, perhaps because they are some sort of biological kind. Again, when we look out on the street, we see many indicators of membership in these roles. For sex and race, conspicuous features of the body are often taken to be indicators of kind membership. In some cases, clothing or other artifacts may indicate membership as well. In the face of the temptation to treat such kinds as natural, social constructionists provocatively insist that these kinds are actually some sort of social kind – a product of our culture, decisions, or social practices.
Social constructionists think that these kinds exist because of our (historical and current) practices of thinking of them and treating them as different. In effect, they attempt to explain kind differences not by appeal to nature, but by appeal to social roles. Borrowing some terminology from Paul Griffiths (1997), we can call the transparent, institutional sort of social role an overt role or construction, and the latter, putatively natural but actually social kind, a covert role or construction.

Examples of covert constructionist claims about human kinds are not hard to find in the contemporary humanities and social sciences. Begin with race. The philosopher Paul Taylor writes that,

White supremacist societies created the Races they thought they were discovering, and the ongoing political developments in these societies continued to re-create
them. . . . All of this is to say: our Western races are social constructs. They are things that we humans create in the transactions that define social life.
(2013, 179)

Michael Omi and Howard Winant similarly claim:

The effort must be made to understand race as an unstable and “decentered” complex of social meanings constantly being transformed by political struggle. . . . The concept of race continues to play a fundamental role in structuring and representing the social world. The task for theory is to explain this situation. It is to avoid both the utopian framework which sees race as an illusion we can somehow “get beyond,” and also the essentialist formulation which sees race as something objective and fixed, a biological datum. Thus we should think of race as an element of social structure . . .
(1994, 55)

This core idea, that races are products of human sociocultural activity but are widely believed to be natural, is widespread among humanistic racial theorists. Claims of the construction of gender kinds are also ubiquitous. For example, Catharine MacKinnon claims of gender that:

Gender has no basis in anything other than the social reality its hegemony constructs. Gender is what gender means. The process that gives sexuality its male supremacist meaning is the same process through which gender inequality becomes socially real.
(1987, 173)

The suggestion is, again, that gender differences are products of our cultural practices of assigning meanings to bodies and differentially treating them, resulting in differences that are mistakenly interpreted as natural. Another influential example is Michel Foucault’s (1978) claim that as ideas or concepts of sexuality shifted in nineteenth-century Europe, they became more committed to the idea that sexual behavior was indicative of importantly different, underlying kinds of people.
And Foucault believes these new understandings actually caused homosexuality to come to exist (not just the label, the idea, or the concept, but the kind itself).

Such claims raise many questions; here I focus upon the question of what covertly constructed kinds are. Suppose that we allow that covert constructionists are correct that some putatively natural human kinds are actually a product of our social and conceptual activities – that they are some sort of social role. What explains their apparent existence and sustains their causal power over time? Notice that the natural kind theorist answers these questions by appealing to the natural kind; the natural kind gender theorist, for example, answers these questions by appeal to biological sex. In contrast, the social constructionist wants to appeal to widespread understandings of gender and the social practices that they structure to explain gender kinds. But how shall we understand these?

Because they are produced in some way by our social and conceptual activities, it is tempting to understand covert social roles by analogy with overt institutional roles. In both cases, our social and cultural practices serve to differentiate social role members from others, creating real-world effects. And, in both cases, if we could intervene upon these representational practices, we could thereby alter their consequences.
Despite these similarities, covert, socially constructed human categories are a different kind of kind than institutional categories. The mechanism that stabilizes and sustains these kinds over time is different (Mallon, in press). Appreciating this difference is crucial to appreciating the different sorts of ways our social world acquires its structure, and to understanding how we might go about transforming or eliminating such categories.

Here is how I proceed. In the first section, I briefly articulate this understanding of covert constructionist explanations as appealing to social roles. I then consider the possibility of interpreting such constructed roles as products of overt construction. But, I go on to argue, overt and covert constructions are different kinds of kinds. I conclude by connecting the discussion to concerns with social change.
Social roles

I am interpreting the social constructionist as understanding social kinds as a sort of social role (cf. Griffiths 1997; Mallon 2003; Mallon and Kelly 2012; Mallon 2016). As I use the term, a social role is created when a kind representation becomes widespread, common knowledge in a community. A social role in this sense exists in a community when ideas about the membership conditions for the role, about the permissions, duties, and other norms attaching to role members, and about role-typical properties, become something everyone in a community knows, knows that everyone knows, and so on.2

For covert social roles, these representations represent kind membership or kind-typical properties as “natural” facts. In many cases, such facts may be explained as the result of biological or endogenous features of kind members, but they could also include extrinsic causes like “divine will”. The important feature of this sense of “natural” is that the existence and character of the kind is understood as given and as independent of human choices, either in the past or at present. Such representations may also include stereotypes or other sorts of representations that distinguish category members.

The causal power of the resulting kinds is then explained by appeal to the causal effects of the representations upon behavior with regard to, toward, and by kind members. Such causal effects of representations may occur via a range of causal pathways, including both actions and policies guided by explicit beliefs about kind members (e.g. racist or sexist beliefs) and also pathways in which representations influence individual behavior automatically and perhaps unconsciously, in ways that can be at odds with background beliefs or intentions (cf. Bargh and Chartrand 1999; Gendler 2008; Huebner 2009; Mallon in press).
(Actual or possible) features of the constructed kinds may then be understood or explained by considering the meaning of the kind representations as well as the mechanisms by which those representations influence behavior by and towards kind members.

This is an elaboration of the covert constructionist position, going beyond what constructionists typically say. Still, something like this elaboration seems necessary in order for the sorts of claims considered above (and many others in the humanities and social sciences) to be true. It allows us to understand provocative claims like Foucault’s insistence that homosexuality first came to exist in the nineteenth century as a claim about a certain kind representation coming to be widespread in a way that structured social life.

On such a picture, the social constructionist about a human kind appeals to the social role – the widespread representation and its systematic causal effects on the social environment and on kind members – in explanation of the existence or features of a category, in a way that parallels the natural kind theorist’s appeal to underlying natural difference. Both the constructionist and the natural kind theorist hold that the underlying category is a real thing, but they
disagree about the character or nature of that thing. What remains for both of them to provide is a further specification of the underlying kinds or mechanisms, and an account of how, exactly, these explain whatever differences or regularities are in question.
Appealing to overt kinds

But how should we understand the mechanism or mechanisms that sustain covertly constructed social roles and make them causally powerful? One promising suggestion is that we should understand socially constructed covert kinds in the same way we understand socially constructed overt kinds, like social institutions. As I noted at the outset, there are many institutional social roles – roles like being a licensed driver, or being the President of the United States – for which the differential features of the persons in the roles apparently stem from some set of social institutions and conventions. Many philosophers hint at this strategy by using social institutions as illustrations for understanding the features of covert social constructions. For example, Sally Haslanger writes:

Consider a landlord. One is a landlord by virtue of one’s role in a broad system of social and economic relations which includes tenants, property, and the like. Even if it turned out as a matter of fact that all and only landlords had a closed loop in the center of their right thumbprint, the basis for being counted a landlord is different from the basis for being counted as having such a thumbprint. Likewise for gender, one is a woman, not by virtue of one’s intrinsic features (for example, body type), but by virtue of one’s part in a system of social relations which includes, among other things, men. Gender is a relational or extrinsic property of individuals, and the relations in question are social.
(2012, 42–43)

Haslanger draws a useful analogy between an overt, institutional role like “landlord” and the sort of covert social role hypothesized by the constructionist, appealing to conceptually structured relations among people to explain the reality of each. But how far can we take this analogy? Can we understand covert constructions as the same kind of thing as overt constructions? The fruitful analogy between them makes this tempting.
And this temptation to treat them as alike is further aided by two background trends. First, a long tradition of social philosophical work intimates the possibility that conventional or institutional facts become reified or naturalized, in a process in which their institutional or social origin comes to be no longer widely understood. Second, recent decades have brought considerable progress in philosophical social ontology, that is, in the offering of explicit accounts of the nature of such entities as social institutions, conventions, and group mental states. Here, as elsewhere in philosophy, it is tempting to understand what is poorly understood by appeal to what is better understood.

Reification

One old, Marxist strand of social scientific thought understands apparently natural facts as the “reified” products of past social arrangements. In Capital, Marx writes of “commodity fetishization”:

A commodity is therefore a mysterious thing, simply because in it the social character of men’s labour appears to them as an objective character stamped upon the product
of that labour. . . . In the same way the light from an object is perceived by us not as the subjective excitation of our optic nerve, but as the objective form of something outside the eye itself. (Marx 1967, 77)

Marx’s work on commodity fetishization motivates later work by György Lukács (1923) and others on “reification”: the process by which facts produced by social arrangements later come to be conceived as existing and having come about independently of those relations. These facts may even be seen as having occasioned those relations. A connection between reification of this sort and covert constructions is not hard to draw. Certain distinctions between superficially different categories might begin as explicit institutions or conventions, but, over time, the differences these conventions produce come to be accepted as natural divisions or the causal consequence of natural divisions that are rooted in intrinsic differences among category members. For instance, a division of labor by sex that begins as a mere convenience (“Let’s say that the women will do the A tasks, and the men, the B tasks”) comes to be understood as a natural assignment, reflecting organic, sex-specific fitness to the tasks (“Women are just naturally better at the A tasks, and men are naturally fitted to the B tasks”). Similarly, John Searle (1995, 4, 47, 118; cf. Searle 2010) suggests that people may forget, or never even learn, the social character of social facts, mistakenly believing that gold or money is intrinsically valuable, or that some kind of person is naturally suited to rule. He writes:

The complex structure of social reality is, so to speak, weightless and invisible. The child is brought up in a culture where he or she simply takes social reality for granted. We learn to perceive and use cars, bathtubs, houses, money, restaurants, and schools without reflecting on the special features of their ontology and without being aware they have a special ontology.
They seem as natural to us as stones and water and trees. (1995, 4)

According to Searle, such social facts are established by our socio-linguistic activities, but then later mistaken for natural facts. More recently, he writes:

There is no general answer to the question of why people accept institutions. Indeed, there are all sorts of institutions where people cheerfully accept what would appear to be unjust arrangements. One thinks of various class structures, the low position of women in many societies, and vastly disproportionate distributions of money, property, and power. But one feature that runs through a large number of cases is that in accepting the institutional facts, people do not typically understand what is going on. They do not think of private property, and the institutions for allocating private property, or human rights, or governments as human creations. They tend to think of them as part of the natural order of things; to be taken for granted in the same way they take for granted the weather or the force of gravity. Sometimes, indeed, they believe institutions to be consequences of a Divine will. (2010, 107)

Searle here suggests a specific account by which covert construction comes about: explicit, institutional structures are established by social practices, and then later mistaken as features of the natural world – an example of what we called reification above.
How do such mistakes come about? One venerable tradition emphasizes the role of powerful interests in creating an ideology that rationalizes these interests. Looked at in this way, failure to preserve or promulgate the correct account of some component of social reality can be part of a plan to conserve or enhance the interests of those who benefit from this arrangement by deceiving the population. It is also possible that such a shift occurs for much more mundane reasons – not because of an intention to mislead, but because of a failure to be concerned with the truth, perhaps assisted by an implicit deference to existing states of affairs or power structures and perhaps aided by psychological biases like folk essentialism (Gelman 2003) or the “fundamental attribution error” in which behavior is explained by appeal to intrinsic features of individuals rather than features of their situation (Ross and Nisbett 1991). Call such a shift from the knowing creation of institutions to ignorance “collective forgetting”. Whatever its source or cause, such collective forgetting suggests the possibility that covert constructions are essentially just overt constructions – they are, for instance, simply human institutions that we have ceased to recognize as such. This is Searle’s view. Crucially, on Searle’s rendering, reification does not change the institutional character of what is reified. If he is right, then to understand the nature of covert constructions, we need simply to understand the nature of institutions and other aspects of overtly constructed social reality.

Social ontology: institutions and conventions

Searle’s own account of institutional facts has it that such facts depend upon the collective imposition of function on some thing or things that occurs when there is collective acceptance of a statement of the form:

X counts as Y in C.3

For Searle, “X” is a specification of the individual or type to which the status function applies.
For instance, it might specify that tokens of a certain type produced by the U.S. mint count as money in the United States. As Searle emphasizes, status functions of this type go beyond what the intrinsic features of X can secure according to natural laws. There is nothing about the intrinsic features of the tokens that we use as money that gives them the values that they are assigned. Indeed, we can assign the same value to vastly (intrinsically) different tokens: paper, coins, or digital data structures. Their value obtains in virtue of collective acceptance of one or more status functions. “C” specifies the context in which this imposition occurs.4 According to Searle, collective acceptance reflects a “collective intentionality”, a feature of human cooperative practices. And, according to Searle, the status functions it imposes carry social obligations – “deontic powers” – that also explain their success as devices for social organization. As Searle understands them, these collective intentions are a sort of distinctive propositional attitude, “we-intentions” (1990, 1995, 23–26; 2010, 43ff). For example, they might come about when a group decides that “we intend that paper marked as such from the U.S. mint count as money in the United States.” According to Searle, they are irreducible to the “I-intentions” that may be present in the minds of members of a population, indicating an understanding by those individuals of the collective character of the undertaking.5 Searle contrasts his account with other accounts of social reality that make no use of “we-intentions”. For instance, a different prominent account of some important features of social reality is found in David Lewis’s (1969) account of conventions. On Lewis’s account, a convention is sustained when it is true that, and it is common knowledge that, everyone participates
in a regularity (for example, driving on the right side of the road) on condition that everyone else does (1969, 76). Crucial to Lewis’s account of convention is that such a conventional conformity is a Nash equilibrium, a regularity from which no one can unilaterally deviate and better satisfy her own preferences. Because it is irrational for an individual to unilaterally deviate from the regularity, individual self-interest stabilizes the regularity.

Can we overtly construct covert kinds?

How could we employ Searle’s account of institutions or Lewis’s account of conventions to understand covert kinds? Returning to the example of race, we might – as Naomi Zack has (1999, 2002) – apply Searle’s schema straightforwardly to generate an account of the social construction of a set of racial categories. Zack writes:

Let skin shade of a certain reflective index, or the existence of an ancestor with skin shade of that reflective index be X. In the context of the United States, X could count as “racially black” (or as “racially white” for a different value of X). . . . Any criterion for racial membership that has a physical factual basis could be some X that gets constructed as a Y or racial classification. Thus, it is racial membership that is socially constructed in Searle’s sense. (2002, 108)

It is not hard to imagine applying the schema to other putatively natural categories as well, and when we do so, it nicely explicates the constructionist idea that the differential features of category members are dependent upon our socio-linguistic practices in differentially treating them. Alternatively, we might try to understand a norm of differential treatment of some group of people as a Lewisian convention. Consider a case where we take members of a group G to have a favored or disfavored status with regard to a kind of activity A. This could be a racial group that is supposed to be good at sports or at math.
Or a sex group that is supposed to be bad at fighting or at nurturing. In each of these cases, we could try to understand our treatment of members of G as preferred for A as a conventional fact, as adherence to the norm:

Treat G’s as worse at A.
On the condition that others do so.

Where a system of such norms existed with regard to a category G, we could explain the apparent reality of G by appeal to the system of Lewisian conventions. If this were the right account, it would explain how our social practices of differential treatment depend upon the rational dispositions of individual actors, given their self-interest and their beliefs about what others will do.
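Lewis’s stability condition can be made concrete with a toy payoff table. The sketch below is purely illustrative (the payoffs and the `is_nash` helper are invented for this purpose, not drawn from Lewis): it checks that a regularity like driving on the right is a Nash equilibrium, in that no driver can deviate unilaterally and do better.

```python
# Payoffs for a two-driver coordination game: matching sides avoids collision.
# The numbers are arbitrary; only their ordering matters for the argument.
PAYOFF = {
    ("right", "right"): (1, 1),
    ("left", "left"): (1, 1),
    ("right", "left"): (0, 0),
    ("left", "right"): (0, 0),
}

def is_nash(profile):
    """True if neither player gains by deviating unilaterally from `profile`."""
    for player in (0, 1):
        current = PAYOFF[profile][player]
        for alt in ("left", "right"):
            deviant = list(profile)
            deviant[player] = alt
            if PAYOFF[tuple(deviant)][player] > current:
                return False
    return True

print(is_nash(("right", "right")))  # True: the convention is stable
print(is_nash(("right", "left")))   # False: mis-coordination invites deviation
```

Both coordinated profiles pass the check, which is Lewis’s point that a convention of driving on the left would have served equally well; what stabilizes either one is only each driver’s expectation about the others.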
Covert constructions are not institutions or conventions

Unfortunately, this approach does not work. Covert constructions are, by hypothesis, different than overt constructions like Searlean institutions or Lewisian conventions. For covert constructions, but not overt constructions, we treat them differentially because we mistakenly take them to be naturally different, and this false story rationalizes differential treatment, which stabilizes the thought and behavior that sustains the social role over time. This difference is not merely epistemic, but actually constitutes overt and covert kinds as different kinds of kinds.
Kind identity

What makes two sorts of kinds the same or different? Here I understand the notion of a kind using Richard Boyd’s influential idea of a causally homeostatic property cluster kind (1988, 1991, 1999). While natural kinds have traditionally been understood to be characterized by a simple, all-or-nothing essence, Boyd instead suggests that such kinds can be characterized by a “cluster” of typical but not individually necessary properties that co-occur in the world and whose possession is not an all-or-nothing affair for members of the kind. The co-occurrence of such properties in the kind is not an accident, but is the result of some sort of mechanism of causal homeostasis that sustains the co-occurrence of the properties, enabling reference to the kind to be useful in our predictive and explanatory enterprises. It is a virtue of such an account that it allows us to extend the idea of natural kinds to the complex and varied kinds that we find in the special sciences, and indeed, as Boyd himself has acknowledged, allows extension to kinds that “combine naturalistic and conventional features in quite complex ways” (1991, 140). I have argued elsewhere that we can understand constructed social role kinds in this way (Mallon 2003, 2016). When we do, we get that constructed kinds are individuated, at least in part, by the mechanism of causal homeostasis for the kind. Where these are the same, we can say that the same kind occurs, and where different, different.6 Here I assume that a claim of the sort “A and B are different kinds” is true whenever A and B have different mechanisms of causal homeostasis. It is the difference between such mechanisms that distinguishes overt and covert socially constructed social roles.

Covert constructions are different: the epistemic argument

We can begin to see that covert constructions are different than overt constructions by considering the epistemic distinctiveness of overt constructions.
Edouard Machery (2014) has argued that reified social facts cannot be Searlean impositions of function because the ignorance that everyone (including Searle) suggests accompanies reification is incompatible with the knowledge required by the collective imposition of function.

[People] tend to think of races as natural, biological kinds, and they often adopt an essentialist attitude toward them. But if people think that races are natural, biological kinds, they cannot collectively recognize, in the relevant sense, the social status of races nor their attendant social roles. (Machery 2014, 93)

These arguments bring out the contradiction in treating covert constructions as institutions: a thing cannot be simultaneously widely believed to be natural and widely believed to be a product of a collectively imposed convention or status function. But Machery’s argument makes an assumption that Searle seems to reject, the assumption that the operation of status functions requires collective recognition. Searle writes,

The acceptance of an institutional fact, or indeed, of a whole system of status functions, may be based on false beliefs. From the point of institutional analysis, it does not matter whether the beliefs are true or false. It only matters whether the people do in fact collectively recognize or accept the system of status functions. (2010, 119)
Searle is correct that his account makes no assumption about the nature of the justification or reasons for engaging in the imposition of a status function. Consider:

G. We intend that persons with properties p1, p2, . . . pn count as members of race r1 in C.

The reasons for G could advert to social structure, history, magic, nature, or whatever you like, and it makes no difference to his institutional account. However, it is Searle’s account of the propositional attitude G itself that runs afoul of the epistemic argument. Someone who genuinely believes in natural races does not participate in a collective intention like G. Rather, she has a different, simpler propositional attitude:

H. I believe that persons with properties p1, p2, . . . pn are members of race r1.

False beliefs about naturalness are not Searlean impositions of functions. Thus, Machery replies to Searle:

Searle’s response only seems plausible because of an equivocation. If recognizing that some entity x has some property P just means judging that x has P, then, no doubt, people can recognize that reified social entities have a social status and an attendant function. In that sense, people can recognize, indeed collectively recognize, that water is constituted of H2O. But, if collective recognition is to be understood on the model of a declaration, then it makes little sense to recognize facts that we believe are natural phenomena. As noted above, we do not recognize, in that sense, that water is H2O. Because declarations are, for Searle, the basis of the mode of existence of social entities, reification is a genuine problem for his account of social ontology. (2014, 98)

It does look as if the idea at the core of Searle’s account of social institutions, the idea of acts that collectively impose status functions, acts that he comes to say have “the same logical form as declarations” (2010, 13), requires collective recognition of the social nature of the fact.
Thus, Machery is correct that Searle cannot simultaneously allow for reification and for understanding such reified kinds as institutions, in his sense.7 However, this passage and others from Searle suggest another possibility that Machery does not consider, one connected to what we called collective forgetting. What if reification occurs in a transition from an explicitly imposed status function to a mistakenly imposed status function? Let an event E at time t collectively impose a status function in Searlean fashion:

We intend that X’s count as Y’s in C.

On Searle’s view, E can itself be a declaration, or simply “have the logical form” of a declaration. Machery’s point is that E entails collective recognition that the created fact is a social fact, and this is inconsistent with X’s counting as Y being understood as a natural fact. But if E creates a social institution, that fact may not be recognized at a later time, a time after t. Suppose at this later time, people forget (or fail to learn) about E and come to believe that X’s are Y’s is a natural fact.
This collective forgetting model of reification seems to allow Searle to have his collective recognition (at t) and deny it too (later on). Anyone who wants to argue (as Machery and I do) that covert constructions cannot be understood as reified Searlean institutions needs an argument for why these reified facts do not continue to be institutional facts even after their mode of creation is forgotten. In the next section, I offer such an argument. I have been framing this discussion in terms of Searle’s account of institutions that makes use of the idea of we-intentions. However, we can see related phenomena on other accounts of social reality. Consider again Lewis’s account of conventions. For Lewis, individual behavior according to a conventional regularity is conditional upon each individual’s understanding of what others will do given common knowledge of the situation and regularity. Conventions amount to conformity with regularities that are Nash equilibria, where no one can unilaterally deviate from the regularity and do better. Notice that this cognitive state – the state of believing one’s action is the best conditional upon the belief that others also recognize that their actions are the best – is not the mental state that obtains among participants in a covert construction. Rather, when one takes something to have its properties as a result of some natural, intrinsic property, one treats it as having those properties independently of how oneself and others regard it. In such cases, one’s actions in conformity with a regularity are conditional upon beliefs about the human-independent features of the kind, not conditional upon the dispositions or actions of others.

Different kinds of kinds: the ontological argument

Covert constructionists hold that putatively natural kinds are in fact produced by our socio-linguistic activities. It follows from this general view:

1 Terms for covert constructions are widely believed to pick out natural kinds.
2 Terms for covert constructions actually pick out some sort of socially produced kind.

What we have been asking is whether understanding the socially produced kind in 2 as a Searlean institution or Lewisian convention is consistent with the truth of 1. In this sub-section, I argue that it is not; “reification” makes a social role a different kind of kind. What sort of kind is a Searlean institution? Take a simple example. Four friends find that they all enjoy singing a cappella. They decide to declare themselves a barbershop quartet: the Windy Wendells. They democratically elect a leader, Director of the Windy Wendells. Before taking the vote, the quartet members have the intentional state:

We intend that whoever gets the most votes in our secret ballot counts as the Director of the Windy Wendells for two years.

They hold an election, and Reed wins. They then have the mental state that:

We intend that Reed counts as the Director of the Windy Wendells for two years.

Furthermore, they agree:

We intend that the Director of the Windy Wendells shall set the concert schedule and the song list for each concert.
The term “Director of the Windy Wendells” picks out a property that Reed has in virtue of the existence of the imposed status function, one that gives him certain other prerogatives. What gives the kind “Director of the Windy Wendells” its causal power and stability over time? In its simplest formulation, the mechanism of an imposed status function is the set of “we-intentions” that produce and maintain it, mental states that involve reference to oneself and others as a collective. The we-intention works to produce and regulate behaviors by and towards members of a kind because we-intentions carry with them what Searle calls “deontic powers” that work upon prima facie dispositions to cooperate with others in sustaining the institution. The mental dispositions of people to remember imposed status functions and respect their deontic force stabilize institutional kinds, giving them their causal power. This is not true for covert constructions. Even if an entity begins as a Searlean institution, reification suggests that cooperative we-intentions no longer obtain. Reification involves a situation in which status functions are no longer remembered as such, and so they do not have deontic power. In such cases, X’s are Y’s is not seen as a result of cooperation or collective intentionality. It is rather believed to be the result of natural facts about X’s. It follows that covert constructions lack the mechanism that lies at the heart of overt, Searlean institutions: the existence of we-intentions and the attendant dispositions to cooperate. What holds covert constructions together, then? Covert constructions contrast with overt constructions in that covert constructions are stabilized by the mistaken belief in their naturalness. The idea that a kind can be stabilized by mistaken belief in its naturalness is very old in philosophy. 
Famously, Plato in The Republic advocates stabilizing the social order of the Republic by spreading a “noble lie”:

we will say in our tale, yet God in fashioning those of you who are fitted to hold rule mingled gold in their generation, for which reason they are the most precious – but in the helpers silver, and iron and brass in the farmers and other craftsmen. (1961, 415a-b)

The different social roles of the Republic are stabilized not only by institution or by convention, but also by belief in underlying facts about the natures of each of the social classes. On this interpretation, the constructionist view rests on the idea, exploited by Plato, that belief in the naturalness of a category is a mechanism that can stabilize a kind. Searle himself recognizes the possibility that, “In the extreme case an institutional fact might function only because it is not believed to be an institutional fact” (2010, 119). But he fails to acknowledge that such a fact would no longer be institutional in his sense, because the kind it makes reference to is not sustained by we-intentions, deontic powers, or cooperative dispositions. Because the mechanism of causal homeostasis is different, the kind is different. We can illustrate these differences between overt and covert constructions by considering the differing counterfactuals that emerge between overt and covert constructionist views (Mallon in press). Consider a role like “traffic officer”. For Searle, roughly, what makes the traffic officer a traffic officer, and gives her the right to direct traffic and issue traffic citations, is the collective imposition of these functions upon her by a community. Counterfactually, if everyone stopped imposing these functions, then there would be no status function and no obligation to obey the officer. For each individual, there would no longer be a disposition to continue acting as if the traffic cop could give orders or issue citations.
And the point, again, extends to other accounts of social reality. Lewisian conventions depend upon strategic choices by individuals, behavior that is contingent upon the rational preferences
of others. Consider the norm “stop when the traffic officer indicates to do so.” If this norm were sustained by a Lewisian convention, each person’s participation in the regularity would depend upon that person’s beliefs about what others will do, prefer, and so forth. If no one else participated in the convention, that would undermine one’s own reasons for participating in it. In contrast, in the covertly constructed social role case of the sort we have been considering, behavior is sustained by beliefs in the natural features of the world. Consider a community c in which men and women are equally able to perform a job type J, but in which, for various historical reasons, women are more prevalent at job type J. Suppose, moreover, that, in c, it is widely believed that

Q. Women are better at job type J because of their biology.

Suppose Sarah is an employer seeking to fill a job of type J. If Sarah, like others in c, believes Q, then Sarah might rationally prefer women candidates for the job because of their putative superiority at that type of work. This preference is not conditional upon everyone else endorsing a status function nor is it conditional upon a desire to behave in accord with a regularity, given that everyone else does. It is not even conditional upon others believing Q; if everyone else believed that men were naturally better at J, Sarah might still rationally prefer to hire women because of her belief in Q. This sort of illustration makes clear that covert constructions are stabilized and explained by different facts than those that stabilize and explain overt constructions. This is a reflection of the different mechanisms that stabilize overt and covert kinds – a reflection of the fact that they are different kinds of kinds.
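The counterfactual contrast at issue here can be put schematically in a few lines of code. This is a toy sketch, not anything from the accounts under discussion: the agent functions and return strings are invented for illustration. A convention-governed employer’s choice varies with what others do; an employer who believes Q chooses the same way whatever others do.

```python
# Hypothetical agents illustrating the two stabilizing mechanisms.

def convention_follower(others_prefer_women: bool) -> str:
    # Lewisian case: participation in the regularity is conditional
    # on everyone else's participation.
    return "prefer women" if others_prefer_women else "prefer men"

def believer_in_Q(others_prefer_women: bool) -> str:
    # Sarah's case: the preference flows from her belief Q about
    # natural aptitude, so others' behavior makes no difference.
    return "prefer women"

# Flip the counterfactual about what everyone else does:
print(convention_follower(True), "/", convention_follower(False))
print(believer_in_Q(True), "/", believer_in_Q(False))
```

Varying the input changes only the convention-follower’s output, which is just the point that the two roles are sustained by different mechanisms and so support different counterfactuals.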
Generalizing, if one treats men or women or members of different races as having some functions in virtue of a we-intention or convention, then that treatment is the result of mental states that are conditional upon the psychological and behavioral dispositions of others. But if one treats men or women or members of different races as different in virtue of natural facts, then that treatment is the result of mental states that are not conditional upon beliefs about others but rather conditional upon beliefs about the natural features of the world. This discussion has been narrowly focused upon the character of socially constructed human categories, arguing that there are at least two kinds of kinds that secure the stability of human kinds: one centered on attitudes involving others (for Searle, collective acceptance; for Lewis, common knowledge of a Nash equilibrium), and one centered upon mistaken naturalness. But other recent work on social reality makes it tempting, and ultimately, I think correct, to draw the more general conclusion that there are, as Brian Epstein has put it, “many kinds of social glue” that hold the social world together (Epstein 2014). These may include something like “collective intentions” of the sort that figures in work by Searle and others, the sort of game-theoretic equilibria that Lewis’s account of convention emphasizes (cf. Mallon 2003), as well as mistakes about what is natural that figure in constructionist work. Once we see this, we might then add a range of other mechanisms that may play a role in stabilizing human categories and add to their causal power (Mallon 2003; Mallon and Kelly 2012; Guala and Hindriks 2015). If this is right, the argument I have given is simply a piece in a larger shift towards recognition of many mechanisms that produce social reality.
Reification and social change

The claim that covert constructions are not the same kind of kind as overt constructions is consistent with the view that covert constructions began as institutions or conventions, sustained by collective recognition or individual appreciation of a solution to a coordination problem.
Practices that were initially adhered to out of collective recognition might eventually have come to be understood as natural facts. Because of the gradual turnover in population, such a transition may be described, as I have, as a sort of collective forgetting. The present argument is that as:

We intend that X’s count as Y’s in C.

gives way to the simple:

X’s are Y’s.

the character of the kind picked out by X undergoes an “alchemical” transformation: it becomes a different kind because the mechanism that sustains it changes. Recognition of multiple representational mechanisms that sustain social reality also suggests a broader conclusion about critical practices: where human intentional processes produce or stabilize the social world, transforming the social world via reasoned argument requires addressing the precise content of the intentions that produce and sustain it. Where overt construction occurs, this involves explicit attention to the character of the status functions or conventions that are in play, and what alternatives might take their place. But where covert construction is in play, social transformation requires addressing and refuting false beliefs about the mechanisms that produce the social world. Many obstacles face such attempts at social transformation. Such false beliefs may themselves be supported epistemically by the causal order the constructive processes have created. Powerful interests who benefit from existing arrangements may also stabilize them.8 And they may be stabilized by psychological biases that create cognitive illusions of plausibility around mistaken explanations. Thus conceived, we can see the project of many covert social constructionists as the attempt to, in effect, overcome these obstacles and reverse reification.
Covert constructionists hope that when we see that our practices are sustained by mistakes about the nature of human kinds, the continuation of these practices will become a matter of collective choice or coordination, opening the possibility of better choices or arrangements.
Notes

1 I am grateful to Frank Hindriks and Julian Kiverstein for helpful criticism of an earlier version of this paper.
2 Cf. Mallon (2016), ch. 2 for more discussion.
3 Later, Searle (2010) suggests several alternate renderings of the imposition of function.
4 Searle is just one prominent example of a thriving literature on social ontology (see also, e.g., Tuomela 2013; Gilbert 1989).
5 More recently, Searle (2010) offers a number of emendations to his basic account, including acknowledging “though ‘X counts as Y in C’ is one form of Status Function Declaration, there are also other forms” (19). Still, he maintains: “all of human institutional reality is created and maintained in existence by (representations that have the same logical form as) [Status Function] Declarations” (13).
6 Craver (2009), Khalidi (2013), and Slater (2015) all suggest that individuating mechanisms are not necessary for kinds. Here I assume only that distinct mechanisms are a sufficient reason for a fine-grained individuation of kinds.
7 Amie Thomasson (2003) similarly suggests that Searlean institutions produce a kind of first person authority with regard to the nature of the institution. Such authority looks to be inconsistent with the kind of error that reification seems to involve.
8 See, e.g., Petrović (1983). Some elaborations of Marx’s idea emphasize the connection of reification to capital. Honneth (2008) offers further social theoretic discussion of reification.
Ron Mallon
References

Bargh, J. & T. L. Chartrand (1999). The unbearable automaticity of being. American Psychologist, 54(7), 462–479.
Boyd, R. (1988). How to be a moral realist. In G. Sayre-McCord (Ed.), Essays on Moral Realism (pp. 181–228). Ithaca, NY: Cornell University Press.
———. (1991). Realism, anti-foundationalism and the enthusiasm for natural kinds. Philosophical Studies, 61, 127–148.
———. (1999). Kinds, complexity and multiple realization. Philosophical Studies, 95, 67–98.
Craver, C. F. (2009). Mechanisms and natural kinds. Philosophical Psychology, 22(5), 575–594.
Epstein, B. (2014). How many kinds of glue hold the social world together? In M. Gallotti and J. Michael (Eds.), Perspectives on Social Ontology and Social Cognition (pp. 41–55). Dordrecht: Springer.
Foucault, M. (1978). The History of Sexuality, Vol. I: An Introduction. New York: Pantheon.
Gelman, S. A. (2003). The Essential Child: Origins of Essentialism in Everyday Thought. New York: Oxford University Press.
Gendler, T. (2008). Alief and belief. Journal of Philosophy, 105(10), 634–663.
Gilbert, M. (1989). On Social Facts. Princeton, NJ: Princeton University Press.
Griffiths, P. E. (1997). What Emotions Really Are. Chicago: The University of Chicago Press.
Guala, F. & F. Hindriks (2015). A unified social ontology. Philosophical Quarterly, 65(259), 177–201.
Haslanger, S. A. (2012). Resisting Reality: Social Construction and Social Critique. New York: Oxford University Press.
Honneth, A. (2008). Reification: A New Look at an Old Idea. Oxford; New York: Oxford University Press.
Huebner, B. (2009). Troubles with stereotypes for Spinozan minds. Philosophy of the Social Sciences, 39(1), 63–92.
Khalidi, M. A. (2013). Natural Categories and Human Kinds: Classification in the Natural and Social Sciences. New York: Cambridge University Press.
Lewis, D. (1969). Convention: A Philosophical Study. New Castle: Basil Blackwell.
Lukács, G. (1923/1971). Reification and the consciousness of the proletariat. In History and Class Consciousness (R. Livingstone, Trans.) (pp. 83–222). Cambridge, MA: MIT Press.
Machery, E. (2014). Social ontology and the objection from reification. In M. Gallotti and J. Michael (Eds.), Perspectives on Social Ontology and Social Cognition (pp. 87–102). Dordrecht: Springer.
MacKinnon, C. A. (1987). Feminism Unmodified: Discourses on Life and Law. Cambridge, MA: Harvard University Press.
Mallon, R. (2003). Social construction, social roles and stability. In F. Schmitt (Ed.), Socializing Metaphysics (pp. 327–353). Lanham, MD: Rowman and Littlefield.
———. (in press). The Construction of Human Kinds. Oxford: Oxford University Press.
Mallon, R. & D. Kelly (2012). Making race out of nothing: Psychologically constrained social roles. In H. Kincaid (Ed.), The Oxford Handbook of Philosophy of Social Science (pp. 507–532). Oxford: Oxford University Press.
Marx, K. (1967). Capital, Vol. 1: A Critical Analysis of Capitalist Production. Edited by Frederick Engels; translated by Samuel Moore and Edward Aveling from the third German edition. New York: International Publishers.
Omi, M. & H. Winant (1994). Racial Formation in the United States: From the 1960s to the 1990s. New York: Routledge.
Petrović, G. (1983). Reification. In T. Bottomore, L. Harris, V. G. Kiernan and R. Miliband (Eds.), A Dictionary of Marxist Thought (pp. 411–413). Cambridge, MA: Harvard University Press.
Plato (1961). The Republic. In E. Hamilton and H. Cairns (Eds.), The Collected Dialogues of Plato Including the Letters (pp. 575–844). Princeton, NJ: Princeton University Press.
Ross, L. & R. E. Nisbett (1991). The Person and the Situation. Philadelphia: Temple University Press.
Searle, J. R. (1990). Collective intentions and actions. In P. R. Cohen, J. Morgan and M. Pollack (Eds.), Intentions in Communication (pp. 401–415). Cambridge, MA: MIT Press.
———. (1995). The Construction of Social Reality. New York: Free Press.
———. (2010). Making the Social World: The Structure of Human Civilization. Oxford; New York: Oxford University Press.
Social roles and reification

Slater, M. H. (2015). Natural kindness. British Journal for the Philosophy of Science, 66(2), 375–411.
Taylor, P. C. (2013). Race: A Philosophical Introduction. Cambridge, UK; Malden, MA: Polity Press.
Thomasson, A. (2003). Realism and human kinds. Philosophy and Phenomenological Research, 67(3), 580–609.
Tuomela, R. (2013). Social Ontology: Collective Intentionality and Group Agents. New York: Oxford University Press.
Zack, N. (1999). Philosophy and racial paradigms. Journal of Value Inquiry, 33, 299–317.
———. (2002). Philosophy of Science and Race. New York: Routledge.
PART V
Social forms of selfhood and mindedness
26
DIACHRONIC IDENTITY AND THE MORAL SELF
Jesse J. Prinz and Shaun Nichols1
1. Introduction

Everyone undergoes many changes during the lifespan, and, in some cases, these changes may be quite dramatic. Intuitively, however, some of these changes do not threaten personal identity, while others do. We do not tend to suppose that identity is threatened by the physical aging process: wrinkles, gray hair, and the like. Significant psychological changes, brought on by aging, injury, or life events, seem to pose more of a threat. This, at least, is a thesis shared by many philosophers who have weighed in on debates about "diachronic personal identity" – the question, What kinds of continuity are required to qualify as the same person over time? For example, the most widely discussed suggestion is that diachronic identity is secured by the preservation of memory links of some kind. Others have emphasized narrative coherence and agency.

Curiously, these competing proposals emphasize aspects of the mind that are not strongly associated with sociality. Admittedly, memories, life stories, and our capacity for agency all tend to involve social interactions as a matter of fact, but sociality is not constitutive of any of them. This reflects an individualistic orientation in leading approaches to diachronic identity, and we aim to offer a corrective. We believe that one of the most important aspects of diachronic identity is fundamentally related to social attitudes and social behavior: our moral values. We will defend the thesis that moral continuity (i.e., retaining the same moral values over time) is central to ordinary beliefs about what makes someone qualify as the same person as they advance through life. In fact, moral continuity is more important, according to our ordinary understanding, than memory, narrative, or agency. Each contributes to our sense of identity over time, but moral continuity contributes appreciably more. Our defense of this thesis is empirical.
We present a series of studies that indicate moral continuity is regarded as extremely important for personal identity, and, indeed, more important than factors emphasized by prevailing theories.

First, some preliminaries. In saying that moral continuity is more important than other factors, we don't mean to imply that it is the one true theory of personal identity and other theories are false. We don't think the question of identity over time depends on some deep metaphysical fact. That is not to say we don't think the question is metaphysical. We think it is metaphysical, but not deep; that is, it doesn't depend on some hidden fact about the structure of reality. Rather, it depends on us. Facts about identity are a consequence of classificatory attitudes and practices. For any object that changes, those that label the object can decide which transformations matter for
the application of that label. When it comes to persons, we suspect that multiple factors matter to us. We also suspect that different individuals and cultures may vary in which factors matter most. Here our goal is to show that randomly sampled members of our contemporary Western society regard moral continuity as extremely important. This is part of our folk theory of identity. That theory does not aim to capture some deep underlying reality, but rather determines the reality. Folk intuitions establish the conditions of identity, and thus polling these intuitions is directly relevant to settling questions about diachronic identity. Unsurprisingly, those intuitions point to multiple factors. We are content to show that morality is one of these factors and it outweighs some others that have received more attention.

If we are right that questions of personal identity are settled by how we do, in fact, classify, then this is a case where experimental philosophy can actually contribute to metaphysical debates. Surveys, in this case, do not just tell us what ordinary people think; they reveal the actual correlates of identity, because ordinary practices of classification determine conditions of identity.

For those who are disinclined to share this view about the metaphysics of identity, the use of surveys may seem unmotivated. To those readers, we are content to point out that ordinary beliefs about identity are interesting even if they don't cut metaphysical ice. Such beliefs may be important in human life, as when we deal with aging family members whose psychological profiles have changed, or with old friends who change political outlooks, or felons who undergo moral reform, or community members who become radicalized.
Here we will approach the question of morality and identity at a high level of abstraction, using simple hypothetical cases, but the intuitions that we are tapping into may be important for many real-world decisions about how to regard and treat those who undergo evaluative transformations. We will broach these issues as metaphysical questions, as has been the custom in philosophical discussions of personal identity, but these practical issues may ultimately be more important.

Understood as a question about our classificatory attitudes and practices, it is probably best to resist the idea that personal identity is a matter of numerical identity in a mathematical sense. We doubt that ordinary folk would be much troubled by paradoxes that arise when philosophers imagine elaborate cases of fission and fusion. A broken teleporter, for example, might produce a duplicate of Captain Kirk on the face of some planet without destroying the Captain Kirk who entered the device on the Starship Enterprise; the two individuals cannot both be numerically identical to the first, since they are not identical to each other, and numerical identity is transitive. What matters here is what Parfit (1984) calls survival. When we discuss diachronic identity, we mean to address folk intuitions about psychological transformations that threaten survival.

Our focus on diachronic identity should not be taken to imply that moral values matter only for survival, however. They presumably also matter for synchronic identity. "Synchronic identity" refers to those standing traits that are taken to make us who we are. Some traits, such as being allergic to peanuts, might not be crucial for a person's identity, while others are. Often theories of diachronic identity and theories of synchronic identity are specified in ways that are disjoint. Memory, for example, involves a link between present and past, and thus lends itself more to a theory of diachronic identity than synchronic identity.
We think moral values are probably important for both. We are not the first to suggest that moral values matter synchronically. Frankfurt (1971) characterizes conditions of personhood in terms of there being a true self, and the true self is comprised of the set of volitional states that a person wants to possess. These presumably include moral attitudes, since moral attitudes are usually regarded as volitional, and most of us want to have at least some of the values that we possess. We will not take up the issue of synchronic identity here, but we do want to flag that it is plausible that moral
values contribute to synchronic identity. If so, then the thesis that moral values contribute to diachronic identity promises to establish that at least some aspects of synchronic identity matter diachronically as well.

The thesis that morality is part of synchronic identity is not especially controversial, even if it has not been given much explicit attention in the literature. The thesis that morality contributes to diachronic identity is likely to meet with more resistance. This may sound surprising at first, since it is often suggested that diachronic identity is importantly related to morality. In his seminal discussion, John Locke (1690) says that personal identity is a "forensic" concept, meaning it relates to the assignment of moral and legal praise and blame. We care about identity for moral reasons. But Locke does not thereby mean that moral values are aspects of identity. Rather, he is suggesting that we need a theory of identity in order to assign moral responsibility. One might express this by saying that identity is for morality, which is decidedly different from the thesis that morality is partially constitutive of identity. Locke offers a theory of identity in terms of memory and "same consciousness" that makes no explicit reference to moral continuity. Thus, for him, identity matters morally, but morality doesn't matter to identity. Here we will not discuss issues of responsibility, but we will try to show that Locke missed out on an important link between morality and identity, emphasizing one direction of that dependency and not the other.

To summarize, we are neither aiming to establish a deep metaphysical truth about diachronic identity, nor are we supposing that there is just one true theory. But we do aim to show that prevailing theories have neglected an important aspect of identity.
Our studies suggest that moral continuity matters for survival over time according to lay intuitions, and it matters more than other characteristics that have dominated philosophical debates. If lay intuitions can arbitrate conditions on identity, it follows that moral continuity is an important part of what makes us qualify as the same people as we undergo changes in our lives. Thus, personal identity is more connected to our social nature than is often appreciated. We turn now to our studies.2
2. Study 1: keeping promises

The idea that moral values might be central to how people conceive of their identity is neglected in most philosophical discussions, but it is passingly anticipated in a case devised by Parfit (1984). Parfit helped reinvigorate debates about personal identity when he advanced the view that survival depends on psychological connectedness. When applied to non-human animals, survival may just be a matter of bodily continuity, but, for us, Parfit thinks that survival is a matter of having psychological continuity, which is based on "connectedness" – direct psychological links from one's present self to one's past (1984, 206). Following Locke, Parfit mostly emphasizes memory connections (or, for technical reasons, "quasi-memory" connections, because the term "memory" may presuppose identity). Officially, however, his account also includes other psychological connections. In this context, he mentions intentions, desires, beliefs, and character. These aspects of connectedness are given little attention, and moral connectedness is not explicitly discussed except in one example, Parfit's case of the Nineteenth Century Russian:

In several years, a young Russian will inherit vast estates. Because he has socialist ideals, he intends, now, to give the land to the peasants. But he knows that in time his ideals may fade. To guard against this possibility, he does two things. He first signs a legal document, which will automatically give away the land, and which can be revoked only with his wife's consent. He then says to his wife, 'Promise me that, if
I ever change my mind, and ask you to revoke this document, you will not consent.' He adds, 'I regard my ideals as essential to me. If I lose these ideals, I want you to think that I cease to exist. I want you to regard your husband then, not as me, the man who asks you for this promise, but only as his corrupted later self. Promise me that you would not do what he asks.'
(Parfit 1984, 327)

As Parfit notes, the Russian's request, phrased in terms of successive selves, seems perfectly natural. We think that Parfit has hit upon an important link between values and identity, but he does not use the case to draw any general conclusions along those lines. Rather, it is used as a springboard to reflect on the idea that changes in identity can impact obligations, such as promises. Parfit does not dwell on the special role that moral values have in survival, much less on the possibility that they may be more important than memory – the aspect of connectedness that he emphasizes most in his other examples.

Our first study was inspired by Parfit's Russian.3 We sought to experimentally explore how intuitions about changes in values compare to intuitions about changes in memory. Participants were either given a case in which the protagonist has amnesia or a case in which the protagonist has a radical change in values. The amnesia case was the following:

Consider this story about a Brazilian couple named Alberto and Claudia. Throughout their marriage, Alberto worked very hard to promote a political party called the Trabalhadores. He volunteered for the party and donated money. He also wrote a will asking Claudia to donate $1000 to the Trabalhadores after his death. Shortly after he retired, Alberto fell down and suffered a brain injury. The brain injury caused Alberto to lose his memories, but his values and his cognitive abilities, such as language and thinking, remained intact. After the brain injury, Alberto forgot all about the Trabalhadores, whom he had once supported.
He also drafted a new will, which offered no money to the Trabalhadores. The next year, Alberto died. After Alberto's death, Claudia found both the old will and the new will, and her attorney told her that there is no legal way to determine which is binding, because Alberto did not explicitly nullify the old will. It is now up to Claudia to decide whether to donate $1000 to the Trabalhadores. She wants to follow Alberto's wishes, but she must decide which wishes to follow.

The values case went like this:

Consider this story about a Brazilian couple named Alberto and Claudia. Throughout their marriage, Alberto worked very hard to promote a political party called the Trabalhadores. He volunteered for the party and donated money. He also wrote a will asking Claudia to donate $1000 to the Trabalhadores after his death. Shortly after he retired, Alberto fell down and suffered a brain injury. The brain injury caused a complete transformation of Alberto's moral values, but his cognitive abilities, such as language and thinking, remained intact. After the brain injury, Alberto stopped caring about the Trabalhadores, whom he once supported. He also drafted a new will, which offered no money to the Trabalhadores. The next year, Alberto died. After Alberto's death, Claudia found both the old will and the new will, and her attorney told her that there is no legal way to determine which is binding, because Alberto did not explicitly nullify the old will. It is now up to Claudia to decide whether to donate $1000 to the Trabalhadores. She wants to follow Alberto's wishes, but she must decide which wishes to follow.
For each case, participants were asked whether Claudia should disregard or follow the newer will, which does not donate to the Trabalhadores. In the amnesia case, participants tended to say that Claudia should disregard the new will (M = 3.09 on a 6-point scale where 1 = completely disregard and 6 = completely follow). In the values case, participants tended to say that Claudia should follow the new will (M = 4.9). The difference between responses to these cases was significant (t(41) = -3.34, p = 0.002).

These results suggest that memory and morality impact intuitions about identity in different ways. If so, Parfit may not be entitled to treat them as equivalent. Their contributions to identity may differ, at least quantitatively (we will not explore qualitative differences here). This is an important first step in motivating our investigation into moral identity.

At the same time, this study has a major limitation. The findings can be interpreted in two very different ways. On the one hand, one might think that our findings prove that memory continuity is more important for identity than moral continuity. On this interpretation, Claudia disregards her husband's revised will in the memory case, because she thinks he is no longer the same person after memory loss, and she feels a strong sense of obligation to his earlier self. In the case of moral change, she regards her husband as the same person, so she thinks he has the right to change the content of his will. If this is the correct interpretation, it would deliver a serious blow to the idea that moral changes impact identity. On the other hand, there is a reading that has the opposite implications. In the memory case, Claudia may reason that her forgetful husband is the same person as he always was, and she should honor his original preferences because after all they are still part of who he is; he has simply forgotten that.
In the moral case, on this reading, Claudia regards her husband as a new man after the injury, and since the old Alberto is gone, Claudia has no more obligations to him, and should honor the will of her new husband.

Given these two opposing interpretations, we set out to test intuitions about moral change more directly. We decided to eliminate the issue of promise-keeping, which complicates Parfit's original case and our variant. When a person makes a promise to someone who changes dramatically, two issues are easily confused: Is the change a change in identity? And, if so, does the person who makes the promise have an obligation to the original person, or is that discharged by that person's demise? Since we are primarily interested in the first question, we sought to bypass the second. Still, this study established something important: memory and morality lead to different results. Our first goal was to look into this difference more deeply.
3. Studies 2a and 2b: moral self vs. mnemonic self

As noted, the previous study doesn't establish whether people think that the changes to values disrupted the personal identity of the protagonist. It is open to interpretations on which moral change leaves identity intact. To address this directly, we designed a new vignette. We also wanted to verify that intuitions regarding moral change differ from intuitions regarding memory change, so we paired our new moral vignette with one involving memory. We asked participants about personal identity either in a case of amnesia or in a case of a change in values. The cases that we devised go as follows:

• Amnesia
Imagine that John accidentally falls while walking in the mountains. The accident causes a head injury that has a profound effect on John's memory. Because of the injury, John can no longer remember anything about his life before the accident. He is, however, able to form new memories. And his personality and values remain the
same as before the accident. For example, before the accident he used to do helpful things for people in his community, and he continues to do this with enthusiasm after the accident, though he doesn't remember that he used to do it before.

• Bad values
Imagine that John accidentally falls while walking in the mountains. The accident causes a head injury that has a profound effect on John's values. His memory and general intelligence remain the same as before the accident, but the injury causes John to stop behaving morally. For example, before the accident he used to do helpful things for people in his community, and, after the accident, he stops caring about any of that and only acts to fulfill his own happiness even at the expense of others.

In our first study with these vignettes (Study 2a), participants were asked to read one of these two scenarios, and then rate, on a 1–6 scale, the extent to which John is the same person as before the accident (1 = not at all the same person; 6 = completely the same person). We found that participants in the amnesia condition tended to think that John was the same person (M = 4.32; one sample t(18) = 3.2, p = 0.0049), whereas participants in the values condition tended to think that John wasn't the same person (M = 2.56; one sample t(17) = 2.9, p = 0.01), and the difference between these responses was significant (t(35) = 4.2867, p < 0.0001). These findings suggest two things: moral continuity is important to identity, on lay intuitions, and significantly more important than continuity of memory.

There is, however, a small worry. In the values vignette, John becomes a morally worse person. Perhaps participants were somehow inclined to see John as transformed because they didn't like his new values.
Perhaps their judgments reflect negative attitudes towards the transformed John more than any reliable intuition about his identity. To determine whether this negative change is crucial to getting the effect, we also included a vignette in which John changes for the morally better:

• Good values
Imagine that John accidentally falls while walking in the mountains. The accident causes a head injury that has a profound effect on John's values. His memory and general intelligence remain the same as before the accident, but the injury causes John to transform morally. For example, before the accident he used to be very selfish, working to fulfill his happiness even at the expense of others. After the accident, he suddenly stops acting selfishly and instead does helpful things for people in his community.

Again, participants were asked about the extent to which John was the same person as before the accident. Participants again tended to say that John was not the same person (M = 2.59; one sample t(38) = 4.27, p = 0.0001), and this was significantly different from the memory condition (t(56) = 4.88, p < 0.0001). By contrast, it made no difference whether the moral changes were good or bad (t(55) = 0.089, p = 0.93).

Study 2a strongly suggests that moral changes are regarded as threats to identity, regardless of whether those changes are good or bad. Still, this study leaves another question about the generality of the effect unanswered. Perhaps the effect is specific to how we think about the identity of other people. To explore this, we designed new vignettes (Study 2b) by simply replacing "John" with "you" in the above vignettes. We still got the key effect on morality vs.
memory. People tended to say that if the accident caused significant changes to memory, they would still be the same person (M = 3.91, one sample t(21) = 1.22, p = 0.235), but that if the accident caused changes to their morals, they would not be the same person (M = 2.09, one sample t(21) = 5.21, p < 0.0001), and these responses were significantly different (t(42) = 4.22, p < 0.001). And once again, even for positive moral changes, people still tended to say that such moral changes would mean that they were a different person (M = 2.39, one sample t(22) = 5.9, p = 0.0001). As with the vignettes involving another person, the vignettes focused on self showed no difference based on whether the changes were good or bad (t(43) = 0.83, p = 0.41).

Finally, we compared self vs. other on all the paired vignettes. The short answer is that we found no differences depending on whether the question was about self or other. Whether the question was about bad moral changes (t(38) = 1.1, p = 0.28, n.s.), good moral changes (t(60) = 0.59, p = 0.55, n.s.), or memory changes (t(39) = 0.9432, p = 0.35, n.s.), no difference emerged for the self–other contrasts. No matter what we did, moral changes dominated memory changes.

In summary, moral changes matter to identity, whether they are good or bad, yours or another's, and they matter more than changes in memory.

Defenders of the memory approach may resist this last verdict. They might even complain that we are not justified in supposing that memory and morality are disjoint capacities. Consider, for example, the finding that moral deficits are sometimes comorbid with deficits related to memory. Gerrans (2007) has explored such links in his discussion of individuals with ventromedial prefrontal brain injuries. Such individuals have moral impairments – they behave inappropriately and unreliably – as well as impairments with "mental time travel" – the ability to plan for the future.
Mental time travel, in turn, is closely related to memory; the ability to look into the future shares mechanisms with the ability to revisit the past. Still, we think it is noteworthy that ventromedial patients do not seem to have profound problems with autobiographical recall. Their memory impairments do not seem to involve the kind of memory capacities implicated in memory-based theories of personal identity. What these comorbidity studies show most clearly is that morality is not just important for decisions here-and-now. It is also crucial for future planning. This adds support to the idea that morality matters for survival over time. Rather than rescuing the memory theory, it helps to make sense of the idea that morality has equal claim to being a capacity with temporal significance.

We don't mean to suggest that memory did not matter at all. Loss of memory has some impact on intuitions about identity. But the impact seems to be rather small. Changes in memory fell above the midpoint on our scale, which measures intuitions about sameness of person. Averaging across 2a and 2b, the mean was 4.12, where 6 indicates completely the same person.4 In contrast, moral changes resulted in a mean of 2.51, where 1 indicates not at all the same person. The scalar nature of these results suggests that identity is graded, and probably no single factor matters. But it does seem that morality matters a great deal, and memory matters only modestly. This is striking because memory has been emphasized more than any other psychological trait in the literature on diachronic identity. It is a key element in Locke's seminal thought experiments, and is also given disproportionate attention by Parfit. Memory is also the basis of identity in an influential theory developed by Paul Grice (1941), and plays a central role in Sydney Shoemaker's (1959) work on the topic.
If memory makes only a modest contribution to identity, according to lay intuitions, then its prominent position in philosophy may need to be re-evaluated. In contrast to memory, moral continuity may be more important, at least in contexts like those that our vignettes are tapping into. We now want to suggest that moral continuity may also be more important than any of the main competitors to the memory theory of identity.
4. Studies 3a and 3b: moral self vs. agentic self

The previous results suggest that morality is regarded as much more significant to personal identity than memory. In all of the studies, we looked at cases in which an injury caused a brain lesion, which in turn caused the psychological changes to the individual. An obvious feature of these cases is that the brain lesions were in no sense chosen. These were changes that were externally inflicted on the individual. Perhaps this is the key to why we find people saying that there is a change in identity. Such an interpretation would be suggested by defenders of Agency theories of personal identity.

Agency theories of identity have been defended by authors in the Kantian tradition, such as Christine Korsgaard. According to such theories, personhood constitutively involves agency, and agency involves the ability to make choices. Agency is said to secure a kind of identity through time, because, in making choices, we determine future outcomes, and those outcomes therefore count as ours. Transformations that are imposed on us without consent, as in many thought experiments involving memory loss, are threats to identity not because memories are taken away, as Parfit might suggest, but because we are not the authors of such changes. For the Agency theorist, changes in one's psychological characteristics would not disrupt personal identity in cases where the changes are chosen. Korsgaard (1989, 123) expresses the point this way:

Where I change myself, the sort of continuity needed for identity may be preserved, even if I become very different. Where I am changed by wholly external forces, it is not. This is because the sort of continuity needed for what matters to me in my own personal identity essentially involves my agency.

In our previous vignettes, there was no choice involved. Thus, while our results pose a problem for a Memory theory of identity, they might be accommodated by Agency theories.
Korsgaard could say the moral transformation in our vignettes was perceived as a blow to agency not because it was a moral change, but because it was externally induced. To explore this issue we conducted another study (3a) using a vignette in which there is moral change that results from a decision by the agent:

• Decision
Imagine that John accidentally falls while walking in the mountains. He is not injured in the accident in any way; his memory and general intelligence remain unchanged. But the accident causes John to rethink his life. John decides that life is too short to waste time worrying about acting morally. For example, before the accident he used to do helpful things for people in his community, and, after the accident, he stops caring about any of that and only acts to fulfill his own happiness even at the expense of others.

The Agency theory predicts that we should think that John is still the same person, since he decided to make these changes in himself. By contrast, the Morality theory predicts that, since moral change is part of the scenario, this case would generate significantly lower judgments of identity. That’s exactly what we found. The mean response was 2.14 on our 6-point scale, and this was significantly lower than a midline response (one sample t(21) = 4.71, p = 0.0001). In addition to the new vignette, we also presented a new set of participants with the Amnesia vignette from study 2a. People thought that John was the same person to a greater extent in the Amnesia case (M = 3.69) than in the Decision case (t(49) = 4.33, p < 0.001).
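The midline comparisons reported above can be reproduced with a standard one-sample t-test against the midpoint of the response scale. The sketch below is only an illustration: the ratings are invented (the study data are not reprinted here), and it assumes SciPy’s `scipy.stats.ttest_1samp` and a 6-point scale whose midline is 3.5.

```python
# Illustrative one-sample midline test; the ratings below are hypothetical,
# NOT the data from the studies reported in this chapter.
from scipy import stats

MIDLINE = 3.5  # assumed midpoint of a 1-6 "same person" scale

# Hypothetical responses to a Decision-style vignette
ratings = [2, 1, 3, 2, 2, 3, 1, 2, 4, 2, 2, 3]

t_stat, p_value = stats.ttest_1samp(ratings, MIDLINE)
mean = sum(ratings) / len(ratings)

# Report in the same format used in the text: M, t(df), p
print(f"M = {mean:.2f}, t({len(ratings) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```

A mean significantly below the midline, as in this invented sample, corresponds to the “different person” side of the scale.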
Diachronic identity and the moral self
So we found that even when a person makes a decision to become less moral, people still think that the moral change has a major effect on their personal identity. But perhaps agency makes a difference over and above the effect of morality that we keep finding. To assess this, we compared responses to the Decision case with responses to the case of Bad Values from study 2a, drawn from a new set of participants. In the Bad Values case, the moral change was the product of a brain lesion, and the mean response for this case was 2.58; this is not significantly different from what we found for the Decision case (t(46) = 1.1465, p = 0.26, n.s.). Our Decision case asks participants to evaluate the identity of another person in a hypothetical scenario. For our next study (3b), we wanted to see whether framing the case in terms of the self would change the results. Following Kant, Korsgaard (1989, 120) notes that human beings can be studied from two perspectives: as objects of theoretical understanding, as in the human sciences, and as agents. Perhaps the vignette used in Study 3a encouraged a theoretical orientation. To address this possibility, we reasoned that ideas about agency may become more salient if participants were asked to imagine making choices themselves. As before, we simply replaced “John” with “you” in the Decision vignette, and we also ran the Amnesia and Bad Values cases from study 2b. In the Decision case, the mean response was 2.16 on our 6-point scale, and this was significantly lower than a midline response (one sample t(24) = 4.98, p < 0.0001), suggesting that even when imagining themselves making the decision, participants still regarded the future self as a different person. Once again, we compared Amnesia (M = 4.05) vs. Decision (M = 2.16) and found a significant difference (t(42) = 4.8, p < 0.001).
But, as with the cases that asked about John, in the Self versions, there was no significant difference between moral change as a result of a lesion (M = 1.85) and moral change as the result of decision (t(43) = 0.8754, p = 0.39, n.s.). We also compared responses to the Self and Other versions of the Decision case and found no difference (t(45) = 0.06, p = 0.95, n.s.). These results cast doubt on the suggestion that ordinary people regard agency as the linchpin of identity. For the Agency theorist, changes that we choose should keep our identities intact. This is not what we found. Participants in our studies perceived moral change as a threat to identity even when such change is chosen. This remained true when they were asked to imagine choosing such changes themselves. The link between morality and identity is robust across this dimension. Moral changes continue to matter regardless of whether they are internally or externally imposed. Korsgaard might not be very moved by our results. First, she might reply that her notion of identity is not as thin as we’ve implied. Elsewhere, she defends the idea of practical identity, which includes various self-conceptions, including one’s gender, ethnicity, social roles, and religion (Korsgaard 1995). This notion looks both broad enough and thick enough to include moral identity as a proper part. Consequently, she might be sympathetic to our approach, and insist that her account shares our prediction that moral changes threaten identity. We would be happy to have Korsgaard as an ally here, but we suspect that she cannot follow us all the way down the road we are recommending. Despite her recognition that self-concepts (including one’s moral commitments) are part of practical identity, she is firmly committed to the view that agency is the ultimate arbiter when it comes to change. If we choose to change our values, by an act of deliberation, we qualify as the same person.
Current self-concepts may furnish us with reasons when making such choices, but we can also transcend those concepts, for Korsgaard, by viewing ourselves as members of the human community, and thereby abandoning commitments inherited from some narrower social group. When this happens, Korsgaard would say we have retained our personal identity. Our data suggest that this is not fully consistent with folk intuitions. This brings us to another way in which Korsgaard might try to resist our conclusions. She might insist that ordinary intuitions are irrelevant to settling questions of personal identity. Her
case for the link between agency and identity stems from an analysis of personhood, and that, she might suppose, is a conceptual claim that holds regardless of whether people happen to notice. For present purposes, we don’t want to get embroiled in debates about methodology. We are content to establish that people in our sample don’t regard choice as a difference-maker, even if, for some conceptual reason, they should. We are not out to refute Agency theories, or any others, because we are content to accept that many things may matter for identity. Our main observation here is that moral continuity continues to matter to people, regardless of agency, and that suggests it is a robustly important aspect of ordinary beliefs about diachronic identity.
5. Studies 4a and 4b: moral self vs. narrative self

The final prominent theory of personal identity that we’d like to consider is the Narrative theory. The core idea of the view is that a narrative – a life story – is necessary for establishing diachronic personal identity. The view has been developed perhaps most effectively by Marya Schechtman, who encapsulates it as follows: “a person creates his identity by forming an autobiographical narrative – a story of his life” (Schechtman 1996, 93).5 Schechtman then goes on to suggest that the capacity to construct life narratives distinguishes human beings from some other sentient and sapient creatures. Memory and Agency theories entail that a creature can have a diachronic identity in virtue of having the capacity to recall past episodes or make choices. For Schechtman, that is not enough. Diachronic identity is secured by a kind of storytelling. If this capacity is unavailable – if an individual cannot construct a narrative of different pieces of the past – then identity is disrupted. What would a Narrative theorist say about the cases we’ve already investigated? Note that we find people regard moral change as having much more significant effects on personal identity than memory change. One might suppose that a loss of memory would be a greater insult to one’s life story than a shift in moral values. But the Narrative theory seems so flexible that it’s a bit hard to say. In each scenario, one can imagine the individual developing a life story that included the accident or the decision. Or, one could imagine the individual not developing a story that included the accident. This flexibility of the Narrative theory is a kind of liability, since a theory that can twist to accommodate cases so easily might seem to lack a sufficiently firm backbone to be a substantive theory.
There are thus theoretical concerns associated with the Narrative theory, but rather than relying solely on them, we designed a new study (4a) with a vignette that stipulated that the protagonist lost the capacity for narrative. The vignette went as follows:

• Narrative
Imagine that John accidentally falls while walking in the mountains. The accident causes a head injury that has a profound effect on John’s ability to construct coherent stories. Before the injury, John often thought about his life in terms of an ongoing story. After the injury, he can no longer tell stories or follow stories, and he stopped thinking about his life in terms of an ongoing story. But his personality and values remain the same as before the accident. For example, before the accident he used to do helpful things for people in his community, and he continues to do this with enthusiasm after the accident, though he doesn’t think about how this fits into his life as part of an ongoing story.

The vignette claims that John can no longer form a life narrative. The Narrative theory of personal identity would thus hold that John’s identity has been radically disrupted. But this is not
how the participants saw it. Even though John’s ability to form narratives was destroyed, participants still tended to say that he was the same person after the injury (M = 4.11; one sample t(26) = 2.67, p = 0.0128). For contrast, we also had another set of participants evaluate the Bad Values vignette (study 2a). As before, for that case, participants tended to say that John was a different person (M = 1.89; one sample t(27) = 8.9, p < 0.0001), and this differed significantly from responses to Narrative (t(53) = 7.64, p < 0.0001). As in previous studies, we also wanted to see whether there was something special about using a third-person case, so we rendered Narrative into a first-person case by replacing “John” with “you”. The results were much the same. People still did not give the response predicted by the Narrative theory. Rather, responses were on the “same person” side of the scale (M = 3.77; though not, in this case, significantly different from midline: one sample t(25) = 0.86, p = 0.39, n.s.). Again, we also ran a first-person version of Bad Values and, as expected, participants tended to say that they would not be the same person in this scenario (M = 2; one sample t(21) = 10.2, p < 0.0001). Responses to the Bad Values case differed significantly from responses to the Narrative case (t(46) = 4.8610, p < 0.0001). In addition, when we compared the first-person version of the Narrative case to the third-person version, we found no difference (t(51) = 0.89, p = 0.3767). For these studies, we asked participants to explain their answers. Since the idea of narrative construction is a bit less familiar than memory or agency, we wanted to get a flavor of how people conceived it. Some subjects did cite the significance of narrative, e.g.: “Telling stories is a main part about who you are. 
If you are unable to do that then I think that significantly alters parts of your communicative skills and may make you less outgoing.” “I think that narrative is the way we construct and understand our lives. Without this, a person would be completely different.” But more subjects registered an explicit rejection of the significance of narrativity for personal identity, e.g.: “I think a person’s ability to tell a coherent story is a miniscule part of who they are, so I think John is basically the same person. John or the person he is telling a story to may experience some confusion but that doesn’t mean what essentially makes John himself has changed.” “Although he can no longer think about his life as an ongoing story or follow stories these are very minor parts of who he is. The major parts, his personality and values are still there and make up a very large part of who he is.” “He is not able to tell his story but still has the values of his life intact. He is still able to help his community and be a good person. He just cannot put things together as far as the story of his life.” For the moral transformation, by contrast, subjects tended to be quite explicit in endorsing the significance of morality for identity: “Our morals are part of who we are as a person, and determines our interactions with others and social success. If he isn’t as moral, he isn’t the person that everyone remembers.” “If John’s values change he is not the same person. However, I believe the change is not his fault whether it be for the better or worse.”
“While John may remember who he was, and may have the same level of intelligence, his moral code and personality is what made him who he was. If that changes then the person changes with it.” One subject actually articulated the Morality theory of personal identity with perfect economy: “His morals changed. Therefore he changed.” Overall, the results provide no support for the idea that Narrative is central to how most people think of personal identity. More importantly, though, even if narrative matters somewhat to identity, our results suggest that the significance of narrative pales compared to the significance of morality.
6. Summary and implications

We have now presented a series of studies exploring folk intuitions about the relationship between morality and the self. Our first study took inspiration from Parfit and established that changes in values and changes in memory have a different impact on intuitions about spousal obligations. This initial finding motivated further investigation, and we then probed intuitions about survival more directly, asking participants whether someone is the same person after undergoing a moral change. We found strong support for a negative answer: moral change significantly alters personal identity. This was true whether the moral change was positive or negative and whether it was presented from a third- or first-person perspective. This suggests that moral continuity matters a great deal in the folk understanding of diachronic identity. We take this to be our main positive finding. In addition, we compared the importance of moral continuity to other psychological factors that have been emphasized more frequently in the philosophical literature. We identified three factors that figure prominently in leading theories of personal identity: Memory, Agency, and Narrative. In each case, moral continuity was found to be more important, according to lay intuitions. Loss of memory and narrative capacity has only a modest impact on judged sameness of person, and the presence of agency did little to mitigate the insult to identity associated with moral change. As we’ve stressed, we don’t mean to imply that these other approaches to personal identity are mistaken. We think there is no deep metaphysical fact about identity, and multiple factors may be used to individuate persons and track their identity over time. At the same time, our findings do indicate that morality may matter more than some factors that have dominated discussion. This suggests that the moral dimensions of identity deserve more study.
In making this comparative point, we are not trying to suggest that morality is the most important factor in identity. Other factors may be equally important. We have not looked at non-moral values, occupation, and personal relationships, for example. That said, moral continuity has been compared to other psychological traits and capacities in follow-up studies that were recently published (Strohminger and Nichols 2014). Participants were asked to imagine scenarios in which there were changes in personality and a variety of cognitive and perceptual faculties. All of these make some contribution to intuitions about survival, but none has the same impact as moral change. Thus, ongoing research done in support of our core findings here has yet to unearth any feature of the mind that is given as much weight in diachronic identity as morality. It might be objected that folk intuitions cannot settle questions about what matters to identity. As noted in section 4, we allow that conceptual analyses of personhood could reveal
conditions of survival that the folk haven’t fully appreciated. On the other hand, we think questions about personal identity can be directly illuminated by research of this kind. Often, when we ask about survival we are asking about what people really care about. Surveys can help us answer this question. Suppose a loved one changes her values. Would we think of her as the same person? That is a question that can be investigated using survey methodology. It would be valuable to explore real-world cases as well, but surveys are well-suited for tapping into what we care about. It’s a reasonable further question whether there are some other things that we should care about more. But if we do care about morality, then that fact alone suggests that morality plays a role in the individuation of persons. It is part of how we construct the category. It’s independently plausible that personhood is a category whose boundaries are determined by us, not a natural kind with a biologically given essence. If so, the criteria we use in person individuation are partially determinative of what identity consists in. In our sample, morality is clearly part of the story. Future work will have to explore whether other populations share these intuitions. We want to end by considering some of the implications that might follow if moral continuity is indeed a crucial component of diachronic identity. First, though our vignettes are fanciful, moral changes do occur in many people’s lives. These include religious conversion experiences, indoctrination, radicalization, and shifts in political orientation. Each of these may suggest a kind of discontinuity or rupture in identity. This is not an affront to intuition. We do often feel alienated from those who undergo such changes. The work presented here sheds light on those feelings. We implicitly register that those whose values change can cease to be fully recognizable as the same persons, despite many other continuities.
When family members try to recover a relative who has been radicalized, for example, there is a palpable sense that the old self must be brought back to life. Moral identity may also have implications for cases of prison reform. It’s a familiar refrain from prisoners at parole hearings that they have changed – “I’m not the same person.” Here’s one recent example. Jonathan Coryell was convicted (with two others) in the 2001 murder of Jeff Smulick. Coryell was sentenced to 22 years. In 2010, Coryell tried to get a reduced sentence by emphasizing changes since sentencing – engaging in thousands of hours of community service and working with at-risk youth. Coryell stressed that he now wanted to devote his life to such children. At a key point in his request, he writes, “I am not the same person who this court sentenced years ago.” Our findings suggest that such claims are corroborated by folk intuition. Morality is regarded as an important dimension of identity, and thus cases of reform – or corruption – may qualify as changes of identity. If so, this may have implications for exculpation. We are currently investigating the impact of reform on attitudes towards parole. From both a descriptive and a normative perspective, moral change may imply that certain inmates should be given reduced sentences, not just because they are less likely to offend after reform, but because they are not the ones (to some degree) who committed the earlier crimes. There is another implication that also bears on responsibility in criminal contexts. Certain individuals may have an impaired capacity for morality. If so, that dimension of personhood, and of personal continuity over time, will be correlatively diminished. Consider clinical psychopaths. According to some researchers, psychopaths cannot fully grasp moral rules (cf. Blair 1995). They regard moral violations as akin to etiquette violations and cannot fully articulate why it is wrong to cause harm. This raises questions about their culpability.
Legal ethicists ask whether those who cannot fully grasp morality can be held morally responsible. The concept of moral identity introduces further questions that have received less attention. Psychopaths may be said to lack moral identity, both synchronically and diachronically. If so, we can ask whether psychopaths qualify as persons in the same sense as the rest of the population and whether the
conditions for continuity in their lives differ from ours. In ongoing work, we are looking at how individuals with psychopathic tendencies view the role of morality in identity. It will be equally important to explore how we view them in this connection. One possibility, for example, is that certain moral relations that have a temporal dimension will work out differently for this population. Would a promise made to a psychopath be binding to the same degree and for the same reason as a promise made to a non-psychopath, if psychopaths lack moral identity? Psychopaths are not the only individuals who have been alleged to lack a full moral capacity. The same claim is sometimes made about young children, though they ordinarily acquire moral values later on. It is an implication of our view that young children lack an aspect of identity that is regarded as important in adults. That means that the emergence of morality in early life brings a kind of identity into place that may not be there, at least to the same degree, at the outset. It would follow that there is a sense in which a child is not the same person before and after moral education. This may sound odd at first, but it is not an objection to the theory. First of all, other aspects of identity may be in place, so we need not conclude that there is no diachronic continuity here. Second of all, it is not outlandish to suppose that aspects of the self emerge in early life, and that we are not fully identical to our premoral counterparts. A slightly more embarrassing implication concerns moral improvement. Suppose one takes it upon oneself to embark on a program of moral enhancement. Realistic cases include those who discover that they harbor forms of bigotry and set out to change that. Such self-directed consciousness-raising seems like a good thing, but, if morality is part of identity, it may qualify as a form of suicide. In improving our values, we may be destroying our old selves.
Given other aspects of continuity, this may not involve a complete loss of self, but the ideas we have been exploring suggest that moral improvement is a kind of self-inflicted harm. That may sound problematic, not least of all because it undercuts the incentive to improve ourselves. We think the worry can be addressed. The fact that one has to sacrifice aspects of identity in moral reform does not render such efforts odious. Instead, we can think of moral improvement as a justified self-sacrifice: I will give up my old self for the sake of a better self. This does not strike us as an odd way of talking. Those who shed deeply entrenched forms of bigotry may come to regard their past selves as alien, and, before embarking on an effort to change, the old self may justly assert: I don’t like myself; I want to become a different person. Of course, we may feel ashamed of our old selves, but shame can also extend to close kin. Looking back, we can rejoice in the fact that our flawed predecessors made the ultimate sacrifice on our behalf. A further implication concerns the question of how many personal identities a human being is likely to have in a lifespan. In advancing his neo-Lockean view, Parfit provokes readers by suggesting that, if we focus on connectedness when evaluating personal identity, persons are relatively ephemeral: human lives lack reliable memory connections over long stretches of time. One might think that an even more radical instability follows from the idea that identity involves morality. Moral values can be likened to character, and there is a sizeable literature suggesting that people lack robust, causally efficacious, temporally stable character traits (Doris 2002). Does the view that morality is part of identity therefore entail that identity is fleeting or fragile? We don’t think so.
Unlike some aspects of moral character, which may lack stability (e.g., honesty), we suspect that moral identity is relatively stable across the lifespan. There is evidence, for example, that political party affiliation remains remarkably stable. Sears and Funk (1999) found that the correlation in party membership for an American sample was 0.80 over ten-year periods, and close to 0.70 when measured over almost four decades. Skeptics about character sometimes cite a study showing that students at Bennington became more liberal during their years at the college, indicating a high degree of
moral plasticity (Newcomb 1943). This we don’t deny, but follow-up work showed that such changes are long lasting; fifty years on, the Bennington graduates remained more liberal and more political than peers who studied elsewhere (Alwin et al. 1991). Given that political values have a strong moral dimension, we can infer that moral values can be stable for long periods. That doesn’t mean there is no change: we have given examples of conversion experiences, and studies suggest that people become slightly more conservative as they age (Tilley and Evans 2014). But moral values may be more stable than, say, memory and narratives. The reasons for this stability deserve to be explored. One explanation is that moral values are grounded in emotions, and thus threats to moral values are upsetting to us. Mechanisms such as cognitive dissonance may make us resilient to moral change. Another compatible explanation is even more interesting. Moral values may remain stable because they are socially sustained. When one joins a moral group, such as a political party, one ends up with peers who serve to regulate attitudes and behavior. Group members are our conversation partners, and they penalize us when our beliefs stray. To change values would threaten close social relations; we risk scorn, ridicule, and even ostracism. Through both positive and negative reinforcement, social groups enforce attitudinal continuity. This relates to an idea defended by Maria Merritt (2000); though skeptical about character traits, she argues that social relationships can serve to impose a kind of behavioral consistency. In the case of memory, such mechanisms are less likely to be in place. This brings us to a final implication – one which bears most directly on the central theme of this volume. 
As noted at the outset, many of the prevailing theories of diachronic identity are individualistic in orientation: Memory, Agency, and Narrative are all things that an individual could, in principle, have if she were alone in the world. Memory and narrative aspects of identity are also individualistic in a further way: they are things that differentiate us. Each person has her or his own biography – a unique personal narrative. Agency is a feature that all healthy human beings share, but it is a generic aspect of human nature, not something that we share in virtue of being linked together in social groups (except on certain Hegelian perspectives, which we won’t review here). Morality is different. Unlike memory and narrative, moralities are shared. They are systems of norms that we have in common with other people in our communities. They are also socially inculcated, and moral rules concern behavior and are intended to coordinate social interactions. Moral beliefs can be idiosyncratic, but the very function of morality is social. In these respects the moral aspects of our identity are collectivist, rather than individualistic. To state it ironically, it is a part of personal identity that is not personal, in the narrow sense of being distinctive of individual persons. The claim that personal identity is social can be understood as a thesis about folk beliefs: we regard morality as important to identity; morality tends to depend on membership in social groups; therefore, we are implicitly committed to a social view of the self. At the beginning of this discussion, we suggested that folk beliefs determine metaphysical facts when it comes to personal identity. If so, the claim that personal identity is social can be read as a metaphysical hypothesis. What we are as individual persons is both influenced by social factors and sustained by social affiliations. We come into our personhood, one might say, by forging links to moral communities.
What makes me me is, in part, something that binds me to you. Though this has not been our focus here, we think it is perhaps the most important implication of the view we are advocating. In overlooking the importance of morality to personal identity, philosophers have also been neglecting a way in which human selfhood is social, and the social affiliations inherent in our moral values secure a kind of stability that goes beyond security and camaraderie. In some sense, having a life – persisting through time – depends on traits that can be described as social.
Acknowledgments

We are grateful to Julian Kiverstein for his patience and his extremely helpful comments, which deepened our thinking on these issues. We are also grateful to Javier Gomez-Lavin and Nina Strohminger. The studies reported here were conducted several years ago, and, prior to publication, they inspired a series of follow-up studies, on which we have been fortunate to have Javier and Nina as collaborators. Their insights and efforts have informed our understanding of, and confidence in, the phenomenon that we report here.
Notes

1 Both authors contributed equally to this work.
2 We began these studies in 2009. The work reported here has provided a foundation for subsequent and ongoing research that deepens and extends the conclusions that we draw here.
3 All studies were conducted on mTurk.
4 The means in both Studies 2a and 2b place memory transformations on the “same person” side of the scale, though this may not differ from a chance response.
5 The view is defended by a number of other philosophers (e.g. MacIntyre, Taylor, DeGrazia) and psychologists (e.g. Fivush 2011; McAdams 2004). For discussion, see D. Shoemaker 2016.
References

Alwin, D. F., Cohen, R. L. & Newcomb, T. M. (1991). Political Attitudes Over the Life Span: The Bennington Women After Fifty Years. Madison, WI: University of Wisconsin Press.
Blair, R. J. R. (1995). A cognitive developmental approach to morality: Investigating the psychopath. Cognition, 57, 1–29.
Doris, J. M. (2002). Lack of Character: Personality and Moral Behavior. New York, NY: Cambridge University Press.
Fivush, R. (2011). The development of autobiographical memory. Annual Review of Psychology, 62, 559–582.
Frankfurt, H. (1971). Freedom of the will and the concept of a person. Journal of Philosophy, 68, 5–20.
Gerrans, P. (2007). Mental time travel, somatic markers and “myopia for the future.” Synthese, 159, 459–474.
Grice, H. P. (1941). Personal identity. Mind, 50, 330–350.
Korsgaard, C. M. (1989). Personal identity and the unity of agency: A Kantian response to Parfit. Philosophy & Public Affairs, 18, 101–132.
———. (1995). The Sources of Normativity. New York: Cambridge University Press.
Locke, J. (1690/2009). An Essay Concerning Human Understanding. New York, NY: WLC Books.
McAdams, D. P. (2004). The redemptive self: Narrative identity in America today. In D. R. Beike, J. M. Lampinen and D. A. Behrend (Eds.), The Self and Memory (pp. 95–116). New York: Psychology Press.
Merritt, M. (2000). Virtue ethics and situationist social psychology. Ethical Theory and Moral Practice, 3, 365–383.
Newcomb, T. M. (1943). Personality and Social Change: Attitude Formation in a Student Community. New York: Holt.
Parfit, D. (1984). Reasons and Persons. Oxford: Oxford University Press.
Schechtman, M. (1996). The Constitution of Selves. Ithaca, NY: Cornell University Press.
Sears, D. O. & Funk, C. L. (1999). Evidence of the long-term persistence of adults’ political predispositions. The Journal of Politics, 61, 1–28.
Shoemaker, D. (2016). Personal identity and ethics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), forthcoming.
Shoemaker, S. (1959). Personal identity and memory. The Journal of Philosophy, 56, 868–882.
Strohminger, N. & Nichols, S. (2014). The essential moral self. Cognition, 131, 159–171.
Tilley, J. & Evans, G. (2014). Ageing and generational effects on vote choice: Combining cross-sectional and panel data to estimate APC effects. Electoral Studies, 33, 19–27.
27 THE EMBEDDED AND EXTENDED CHARACTER HYPOTHESES Mark Alfano and Joshua August Skorburg
Introduction
This chapter brings together two erstwhile distinct strands of philosophical inquiry: the extended mind hypothesis and the situationist challenge to virtue theory. According to proponents of the extended mind hypothesis, the vehicles of at least some mental states (beliefs, desires, emotions) are not located solely within the confines of the nervous system (central or peripheral) or even the skin of the agent whose states they are. When external props, tools, and other systems are suitably integrated into the functional apparatus of the agent, they are partial bearers of her cognitions, motivations, memories, and so on. According to proponents of the situationist challenge to virtue theory, dispositions located solely within the confines of the nervous system (central or peripheral) or even the skin of the agent to whom they are attributed typically do not meet the normative standards associated with either virtue or vice (moral, epistemic, or otherwise) because they are too susceptible to moderating external variables, such as mood modulators, ambient sensibilia, and social expectation signaling. We here draw on both of these literatures to formulate two novel views – the embedded and extended character hypotheses – according to which the vehicles of not just mental states but longer-lasting, wider-ranging, and normatively-evaluable agentic dispositions are sometimes located partially beyond the confines of the agent's skin. Put another way, we will examine the ways in which moral and intellectual character are dependent on, or constituted by, one's social environment. We believe this is a natural but underexplored next step in both the extended mind and situationist research programs. Virtues and vices can be understood as dispositions to token a suite of occurrent mental states and engage in signature behaviors in response to configurations of external and internal variables.
If those mental states are sometimes extended, perhaps the dispositions to have them are too. Presumably, the dispositions don't extend in every case, just as the states don't extend in every case. Perhaps some people are honest all on their own. Perhaps some people are intelligent all on their own. But if our suggestion is on the right track, in some cases, a person is honest because (among other things) she is suitably integrated with props, tools, or other people outwith her brain and body. Likewise, if our suggestion is on the right track, in some cases, a person is intelligent because (among other things) he is suitably integrated with props, tools, or other people outwith his brain and body.
Here is the plan for this paper. We begin by briefly explaining the situationist challenge, with special emphasis on the ways in which social expectations influence people’s thought, feeling, and behavior. Next, we explore two phenomena in which character, or at least the manifestation of character, is dependent on or constituted by such influence: stereotype threat and friendship. We argue that, construed correctly, people who are susceptible to stereotype threat have socially embedded intellectual character and that people who have robust friendships have socially extended moral character. The similarities and differences between these two phenomena are then used to help map the conceptual space of embedded and extended character, with reference to dynamical systems theory. We conclude with a few brief remarks situating the embedded and extended character hypotheses in the larger context of skepticism about traits and character.
The situationist challenge
At least some neo-Aristotelian virtue-theoretic views (e.g., Foot 2003) are proudly naturalistic. On the one hand, this makes them methodologically attractive to those of us with independent naturalistic commitments. On the other hand, it means that these views face empirical critiques that non-naturalistic normative theories can sidestep. The situationist challenge to virtue ethics began when Harman (1999) and Doris (2002) argued that the dominant neo-Aristotelian conception of moral virtue was empirically inadequate. Alfano (2012, 2013a, 2014c) took the challenge a step further, arguing that virtue epistemology faces an empirical challenge.1 Contemporary virtue ethicists and virtue epistemologists typically combine the ancients' idea that virtues are admirable, cross-situationally consistent traits of character acquired through habituation, with the more modern egalitarian assumption that almost everyone can – at least at an early enough stage of their life – become virtuous. Unfortunately, seemingly trivial and normatively irrelevant situational influences such as ambient sounds, ambient smells, ambient light levels, mood elevators, mood depressors, and social expectation signaling seem to be at least as powerful predictors and explainers of someone's thoughts, motivations, feelings, deliberations, and behaviors as any traits they may have (Alfano 2013a, 2013b). Given the eminently plausible assumptions that parents and other educators try their best to instill virtue in their wards and that the vast majority of adults aim to be virtuous, these results from social psychology are worrisome. It's not for lack of trying that we fail to be virtuous in the traditional sense. Situational influences on moral and intellectual conduct are many and diverse. The best-supported model of situational influences in social psychology seems to be the "Situational Eight DIAMONDS" model (Rauthmann et al. 2014), which stands for:
• Duty: a job must be done;
• Intellect: the situation affords a chance to demonstrate one's intellect;
• Adversity: one reacts either prospectively or retrospectively to blame;
• Mating: one modulates one's behavior because potential romantic partners are present;
• pOsitivity: the situation is potentially enjoyable;
• Negativity: the situation is potentially unenjoyable or anxiety-provoking;
• Deception: it is possible to deceive someone; and
• Sociality: social interaction is possible.
Together, these eight kinds of situational influences account for a large amount of the variance in people's behavior (24–74%), much more than trait dimensions do (3–18%). Here, we focus on a type of situation that harnesses several of these situational influences in what can fairly
be described as a perfect storm: the signaling of expectations by another person with whom one may continue to have ongoing social contact. Your boss asks you to do something you find problematic. Your friend raises her eyebrows expectantly. Your child gives you a plaintive look. Such situations invoke duty: we tend to feel that we owe people with whom we have this sort of relationship particular duties. Such situations invoke adversity: we feel put upon. They invoke mating when the other person is a (potential) sexual partner. They invoke positivity to the extent that we find it rewarding to meet the expectations others have for us (a common preference). They invoke negativity to the extent that we find it unenjoyable or anxiety-provoking to flout someone's expectations (again, a common preference). They involve sociality by definition. Much of the groundbreaking social psychology of the second half of the twentieth century investigated the power of expectation confirmation. The most dramatic demonstration was of course the Milgram paradigm (1974), in which roughly two-thirds of participants were induced to put what they thought was 450 volts through another participant (actually an actor who was in on the experiment) three times in a row. While there were many important features of this study, the key upshot was that the participants were willing to do what they should easily have recognized was deeply immoral based on the say-so of a purported authority figure. Blass (1999) shows in a meta-analysis that Milgram's results were no fluke: they have been replicated all around the world with populations of diverse age, gender, and education level. Another example of the power of social expectations is the large literature on bystander apathy (Darley & Latané 1968; Latané & Nida 1981). It turns out that the more bystanders are present in an emergency situation, the lower the chances that even one of them will intervene.
What seems to happen in such cases is that people scan others’ immediate reactions to help themselves determine what to do. When they see no one else reacting, they decide not to intervene either; thus everyone interprets everyone else’s moment of deliberation as a decision not to intervene. Reading off others’ expectations and acting accordingly doesn’t always lead to bad outcomes, though. Recent work on social proof shows that the normative valence of acting in accordance with expectations depends on what’s expected. For instance, guests at a hotel are 40% more likely to conserve water by not asking for their towels to be washed if they read a message that says, “75% of the guests who stayed in this room participated in our resource savings program by using their towels more than once” than one that says, “You can show respect for nature and help save the environment by reusing towels during your stay” (Goldstein, Cialdini, & Griskevicius 2008). Psychologists and behavioral economists have also investigated the effect of subtle, thoroughly embodied, social distance cues on moral behavior. 
In a string of fascinating studies, it's been shown that people are more willing to share financial resources (Burnham 2003; Burnham & Hare 2007; Rigdon, Ishii, Watabe, & Kitayama 2009), less inclined to steal (Bateson, Nettle, & Roberts 2006), and less disposed to litter (Ernest-Jones, Nettle, & Bateson 2011) when they are "watched" by a representation of a face. The face can be anything from a picture of the beneficiary of their behavior to a cartoon robot's head to three black dots arranged to look like eyes and a nose.2 One way to understand what's going on in these scenarios is that people's character is flimsy, that their behavior, thought, and feeling are largely explained by seemingly trivial and normatively irrelevant situational influences.3 But another way to understand what's going on in these scenarios is that people's character is thoroughly embedded in a social context or even partially constituted by the social context. When the bonds holding them in that context are tight and modally robust, perhaps it makes sense to think of their character as extending out into
the social environment. When the bonds holding them in that context are relatively looser and less modally robust, perhaps it makes sense to think of them as merely embedded in the social context, with some properties that are metaphysically independent of it.4 To quote Shepard (1984, p. 436) slightly out of context, our contention here is that perception – including social perception – is "externally guided hallucination". Our friends, parents, children, enemies, coworkers, bosses, and underlings make us up. They revise their view only when the hallucination they've projected onto us doesn't induce us to act as expected. In any event, the only point we need to make here is that people's dispositions to manifest what are traditionally considered character traits are highly socially dependent, to different extents and in different ways. The devil is in the details. In the remainder of this chapter, we explore some of those details to argue that character is sometimes embedded and sometimes extended. This way of understanding the Milgram paradigm and related phenomena accords with Alfano's (2014a) suggestion: instead of thinking of virtue in the traditional way, as a monadic property of an individual agent, perhaps we should think of it as a relation between the agent and another agent, between the agent and a broader social milieu, or among the agent, a social milieu, and an asocial environment. When the agent is suitably integrated with these externalia, the situational influences that otherwise interfere with or thwart the possession or expression of virtue might systematically support it. For instance, Alfano (2013a) argues that, even if people lack virtues as they are conceived in neo-Aristotelian orthodoxy, they may have "factitious" or artificial virtues that simulate traditional virtues but are partially externally located.
A factitious virtue is supported both by the agent's self-concept (thinking of herself as, say, generous) and, more importantly, by the social expectations signaled to her by her friends, family, colleagues, and acquaintances (realizing that others think of her as generous, expect her to act accordingly, and know that she knows this about them). On this view, virtue inheres "in the interstices between the person and her world. The object that possesses the virtue [is] a functionally and physically extended complex comprising the agent, her social setting, and her asocial environment" (2013a, p. 185). In other words, the embedded and extended character hypotheses are the best hopes for virtue ethics and virtue epistemology to defend against situationism. Under what conditions is it the case not just that an agent's social environment causes her to think, feel, desire, deliberate, and behave as a virtuous person would, but also that these environmental features are part of her virtue? We contend that when an agent is functionally integrated through ongoing feedback loops with her social environment, the environment doesn't just causally influence her but becomes part of her character, for good or ill (Alfano 2015). What her good friends, her romantic partners, her domestic abuser – anyone with whom she has a deep, ongoing relationship – expect influences what she thinks they expect, which influences (among other things) what she expects of herself, the reasons she's sensitive to, her levels of motivation, and her behaviors; this in turn confirms and strengthens (or disconfirms and undermines) her associates' expectations, which are again transmitted to her, further shaping her thought, feeling, deliberation, and behavior, which again influences her associates' expectations, and so on. In the next section, we explore two examples of this kind of functional integration with the social environment.
We believe these cases provide intuitive support for the hypotheses of embedded and extended character.
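The expectation loop just described behaves like a simple coupled dynamical system, and it may help to make that explicit. The following toy sketch is our own illustration, not a model from the empirical literature: the update rule, the 0–1 scale, and the coupling weights are all stipulated for the example. Each round, the agent's self-expectation drifts toward what her associate expects of her, and the associate's expectation in turn drifts toward how the agent now conducts herself:

```python
def simulate(agent, associate, w_agent=0.3, w_assoc=0.3, rounds=20):
    """Toy feedback loop between an agent's self-expectation and an
    associate's expectation of her, both on an arbitrary 0-1 scale.

    The weights are hypothetical coupling strengths, not estimates
    from any study.
    """
    for _ in range(rounds):
        # The agent reads the associate's signaled expectation and adjusts.
        agent += w_agent * (associate - agent)
        # Her resulting conduct confirms or revises that expectation.
        associate += w_assoc * (agent - associate)
    return agent, associate

# An agent who doubts her own generosity, paired with an associate who
# expects a lot of her, settles with him on a shared intermediate level:
# the mutually confirming fixed point of the loop.
a, b = simulate(agent=0.2, associate=0.9)
```

The point of the sketch is only qualitative: with two-way coupling, neither party's initial expectation survives intact; what stabilizes is a joint state of the pair, which is one way of picturing why a trait maintained in this manner is a property of the coupled system rather than of the agent alone.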
Two examples: stereotype threat and friendship
In the previous section, we introduced the situationist challenge, along with Alfano's claim that the challenge can be deflected by an interactionist theory of virtue, which makes character dependent on the environment, especially the social environment. If this is on the right
track, then perhaps long-lasting, wide-ranging, and normatively-evaluable agentic dispositions are sometimes best understood as embedded or even as extended into the social environment. In the present section, we offer an example which we believe helps to illustrate how some components of one's intellectual character – that is, agentic dispositions to perform in certain characteristic ways on intellectual tasks – might best be understood as embedded in this way. We then turn to friendship to argue for the more audacious claim that moral character is sometimes extended.
Stereotype threat and embedded character
First, a few brief remarks are in order regarding the connection between stereotypes and intellectual character. If there are such things as intellectual character and intelligence, then intelligence is surely one of the most important components of intellectual character (along with curiosity and intellectual humility). If you have to predict how two people will perform on a problem-solving task, it makes sense to bet on the one who's more intelligent. If intelligence is a psychologically credible construct,5 then someone's performance on a given problem-solving task can be partially explained by reference to their intelligence.6 To what does this construct refer, though? One tempting answer, which has often and notoriously been proposed by researchers and politicians with racist and sexist agendas, is that someone's intelligence inheres inside them. Perhaps it's hardwired into the brain. Perhaps it's hard-coded by genetics. Perhaps it ossifies during early childhood under the influence of parenting and culture. Whatever its details, the denouement is almost always the same: some people are more intelligent than other people. They always will be. Their intelligence is their own and inflexible.
In the United States, it almost always turns out that the more intelligent group is white males (though who counts as “white” changes over time), while other groups (especially blacks, but also Latinos and women) are less intelligent. A good example is Herrnstein and Murray (1996), though other examples can easily be multiplied. It’s not hard to see how such interpretations of the psychological evidence both draw on and reaffirm existing stereotypes. Stereotypes characterize members of a group as typically having particular configurations of properties in virtue of their group membership, which confers on them an essence. If our discussion of social dependence in the previous section is on the right track, however, stereotype-influenced perception may be a kind of hallucination. Someone who typically faces expectations based on their group membership may end up acting in a way that confirms the stereotype because they find it is too burdensome and futile to try to oppose it. To better understand how this might play out, Taylor and Walton (2011) ask us to imagine a black student at a predominantly white university enrolled in what is widely known to be an intellectually challenging course. Further, this course is meant to diagnose which students can advance to the next required course in the series. When it comes time for an exam in this setting, the student’s awareness of the negative stereotypes associated with his racial group may be heightened. He may worry that any confusion he feels, any questions he asks, any mistakes he makes, will serve to confirm the negative stereotypes associated with his racial group. 
This is the "social-psychological predicament" that, in a series of seminal experiments, Steele and Aronson (1995) dubbed stereotype threat:

the existence of such a [negative] stereotype means that anything one does or any of one's features that conform to it make the stereotype more plausible as a self-characterization in the eyes of others, and perhaps even in one's own eyes. (p. 795)
In one study, black and white students were given an exam consisting of questions from the verbal section of the GRE. Differences in individual skill level were controlled for by reference to SAT verbal scores. In the stereotype threat condition, the exam was described as diagnostic of intellectual ability. In the control condition, the exam was described as a problem-solving task that was not diagnostic of intellectual ability. What makes the experimental condition threatening is the extant stereotype that black students underperform in school. Thus, a poor individual performance by a black student would be perceived as a confirmation of this stereotype. Being consciously aware of the stereotype and one’s own relation to it, Steele and Aronson hypothesized, would lead to a decrease in performance. And indeed, they found that “Black participants performed worse than White participants when the test was presented as a measure of their ability, but improved dramatically, matching the performance of Whites, when the test was presented as less reflective of ability” (p. 801). In order to test whether this effect was related specifically to stereotypes, rather than something like test anxiety, Steele and Aronson conducted another experiment. Before taking the same test, one group was required to fill out a demographic questionnaire. In the second group, participants were required to fill out the questionnaire after completing the exam. If the threat experienced in the first study was racially specific, then students who were required to call to mind their membership in a negatively stereotyped racial group before taking the exam should score worse than those who were required to do so after taking the same exam. Not only did Steele and Aronson find this to be the case, but they also found that “priming racial identity depressed Black participants’ performance on a difficult verbal test even when the test was not presented as diagnostic of intellectual ability” (p. 
808, emphasis added). Thus, a basic demographic survey seems to be sufficient to activate the kind of threat that negatively affects intellectual performance. Indeed, in questionnaires, participants in the stereotype threat condition reported greater cognitive activation of racial stereotypes, greater concerns about their intellectual ability, greater tendencies to make excuses in advance for their test performance, and a greater reluctance to have racial identity linked to performance, than participants in the non-threat condition (p. 805).7 All of this suggests that, in the case of negatively stereotyped racial minorities, the more reliably present the negative stereotypes, the more likely the test-taker will encounter low expectations. In a social environment with a highly reliable stereotypic signal – a society where it is frequently borne in on people what groups they do and don't belong to, and what is expected of them in virtue of their group membership – the hallucinatory self-confirmation of stereotypes will be rife. This predictably leads to a decrease in performance. While the precise mechanisms responsible for this performance decrease are not known,8 there is little doubt as to the reality and efficacy of stereotype threat for racial minorities in academic contexts.9 There are many ways one can be reminded of one's membership in a negatively stereotyped group, and hence, many ways for stereotype threat to be activated. Demographic questionnaires, preference questionnaires, music, movies, video games, television shows, political institutions, peer chatter, jokes, and other relevant examples could be easily multiplied. The environment the test-taker inherits, that is, is one in which racial stereotypes are readily and reliably present. We can think of the sum total of these features as the level of threat present in the environment.
And we can think of the sum total of the worries, anxieties, and distractions the test-taker experiences as the perceived level of threat. As a wealth of empirical research has shown, these worries, anxieties, and distractions can lead to a decrease in intellectual performance. When the test-taker's perceived level of threat is high, he is more likely to perform poorly on the exam. When this disposition to perform poorly on exams in threatening conditions is manifest, it then provides feedback to the extant stereotypes in his environment: "Blacks consistently score lower on this exam because they are poor students." If the effects of stereotype threat are not
accounted for, then the exam score starts to look like supporting evidence for the previous statement. As this kind of information builds up, perhaps in the context of a single course, but also over the course of a college career, it feeds forward to create a higher level of threat in the environment. And when the level of threat in the environment is high, the perceived level of threat fed back to the test-taker is also heightened. When he faces a similar task, he becomes more likely to underperform as a result of the threatening conditions. Worse still, it is unlikely that an exemplary performance by our test-taker – perhaps even multiple exemplary performances – will do much to drown out the negative stereotypes in his environment. The presence of stereotypes in the environment frequently and reliably influences the test-taker, while the test-taker has very little such influence on the stereotypes present in his environment. His scoring in the ninety-fifth percentile on the midterm will not change stereotypic media portrayal by finals week. That is to say, the feedback loops between stereotypes and their targets are largely asymmetric and unidirectional. How ought we conceive the relationship between stereotypes and their target’s intellectual character? There are (at least) four relevant routes available. First, one could ignore the phenomenon of stereotype threat and point to standardized test scores as evidence of the inferior intellectual character of racial minorities. On this view, intellectual character is viewed as an individual, innate disposition. Second, one could draw a skeptical conclusion from the social psychological premises: the phenomenon of stereotype threat proves that intellectual character is flimsy, and too susceptible to things like stereotypes, expectations, noises, worries, distractions, anxieties, etc., to be of much use. 
On this view, intellectual character is perhaps viewed as a convenient fiction. Third, one could hold onto a notion of intellectual character by suggesting that the locus of intellectual character is wider than has been previously assumed (Alfano 2014b). On this view, intellectual character might sometimes depend on the stereotypes, expectations, noises, worries, distractions, and anxieties in the social environment. Put another way, the connections and feedback loops between an agent and the relevant features of his social environment might be tight enough and reliable enough to think of his intellectual character as embedded in his social environment. Fourth, one could take a more radical route still, and claim that intellectual character is quite literally constituted by the stereotypes, expectations, noises, worries, distractions, and anxieties in the social environment. On this view, the connections and feedback loops between the agent and his social environment are so tight and so reliable that his intellectual character extends to include these features of his social environment as proper components. We believe the third route is the best for understanding intellectual character in the context of stereotype threat. That is, intellectual character might sometimes depend on the stereotypes in the social environment, and the feedback loops between an agent and the relevant features of his social environment might be tight and reliable enough to think of his intellectual character as embedded. On the one hand, we're skeptical of views that treat intellectual character as an innate, monadic disposition. On the other hand, it's not clear that the feedback loops between an agent and the stereotypes in his social environment are so tight and reliable that his character would extend to include them. The signal from the social environment to the target of stereotype threat is relatively reliable, but the feedback from the agent to the environment is much less so.
We think the framework of embedding best captures this asymmetric and largely unidirectional relationship.
Friendship and extended character
In the previous section, we argued that the phenomenon of stereotype threat shows that intellectual character is sometimes embedded because the agent and his environment form a
coupled system in which the social expectations directed at the target are near-ubiquitous and highly reliable (leading to self-confirming effects), whereas the feedback and behavior from the target to the social environment is highly unreliable. We don't mean to imply that intellectual character is never extended and at most embedded, but other examples would have to be employed to demonstrate extension. In this section, we go a step further, arguing that moral character is sometimes extended. Our example here is friendship.10 Imagine two agents, Ashley and Azim. Ashley and Azim are best friends. They spend as many as three or four days a week with each other. They care deeply about each other – not just about whether the other is suffering or feeling good, not just about whether the other is getting what he or she wants. Beyond these more mundane concerns, Ashley cares about whether Azim is a morally good person, and cares whether Azim thinks that she is a morally good person. Likewise, Azim cares whether Ashley is a morally good person, and cares whether Ashley thinks of him as a morally good person. Moreover, Ashley knows that Azim cares about her and her opinion of him; likewise, Azim knows that Ashley cares about him and his opinion of her. Indeed, Azim knows that Ashley knows that Azim cares about her and her opinion of him, and Ashley knows that Azim knows that Ashley cares about him and his opinion of her. There may even be common knowledge between them of their caring attachments: he knows she cares, and she knows that he knows that she cares, and he knows that she knows that he knows that she cares, and so on. Insisting on this might seem a bit precious, but we think it's important, and that it characterizes many real friendships.
Imagine how you would feel if your friend said, "I don't even know whether you care about me." You might respond, "You may not realize it now, but I do care about you, and it's important to me not only that you see that, but also that I can rest assured that you see it." Like everyone, Ashley and Azim have their flaws, and they're not foolish enough to think themselves perfect. They rely on each other to – gently, and in a spirit of friendship – point out these flaws from time to time. When Ashley is headed down a particular course of action, she infers from the fact that Azim hasn't tried to convince her to change course that he approves, or at least doesn't disapprove too strongly. When Azim is unsure of himself, when he fears that he may have acted badly, he looks to Ashley for reassurance, or at least for lack of condemnation. In their deliberations, each of them weighs reasons like the rest of us, but they have also internalized each other's voices. Ashley consults her internal-Azim: What would he tell her to do? How would he feel about her plans? How would he react to her behavior? What emotion would his face register if he were watching right now? Likewise, Azim consults his internal-Ashley: How will he feel if and when he tells her about what he just did? How will she react when he tells her how he feels right now? Their internalized models of each other are imperfect, of course. Everything is. But they're not too shabby, either. After all, Ashley's internalized Azim gets updated every time she gets actual feedback from him. If internal-Azim tells her to do one thing but actual-Azim says the opposite, she updates internal-Azim. Likewise, Azim's internalized Ashley gets updated every time he gets actual feedback from her.
If internal-Ashley reacts with approbation but actual-Ashley reacts with shock, he updates internal-Ashley.11 Along these lines, Adam Morton (2013) argues that what distinguishes moral emotions from garden-variety emotions is that the former essentially involve imagining a perspective from which an emotion is directed at you. For instance, guilt is the state of imagining a perspective from which anger is directed at you; shame is the state of imagining a perspective from which contempt is directed at you. The perspective from which the self-directed emotion emanates can be a desiccated ideal observer, but it can also be one's internal model of a particular person. None of this is meant to suggest that Ashley slavishly follows Azim's or internal-Azim's advice (or vice versa). Nevertheless, both Ashley and Azim trust each other enough to treat
The embedded and extended character
the other’s (dis)approval of an action or plan as a pro tanto reason for (against) it. And retrospectively, they treat each other’s (dis)approbation as evidence that an action was right (wrong). Indeed, each of them regards the other’s (dis)approval as both an instrumental and an intrinsic reward (punishment). The instrumental value of others’ good opinion is obvious: they’ll be more inclined to trust and cooperate with you if they think well of you. Beyond that, if they broadcast their view of you, they may induce still others to take up the same opinion. And if they broadcast their view to you, you gain information about how you are – or at least about how you are perceived. Likewise, the instrumental disvalue of others’ bad opinion is obvious: they’ll be less inclined to trust and cooperate with you, and more inclined to sanction you if they think ill of you. If they broadcast a negative view of you, they may induce still others to take up the same opinion. Interestingly, if they broadcast it to you, you still gain potentially useful information about how you are – or at least about how you are perceived.12 But the (dis)approbation of others may have intrinsic worth as well. As Philip Pettit (1995) points out, among the things people (dis)value is the (dis)approbation of others. This moral psychological fact can be given a cynical reading, on which people are vain esteem-seekers. It can also, though, be given a more positive reading, on which the good opinion of a good (enough) person is intrinsically valuable. This is perhaps most obvious when one considers that the good opinion of a bad (enough) person is often regarded as an insult.
Furthermore, just as there are multiple levels of mutual knowledge between Ashley and Azim (she knows he cares about her, and he knows that she knows he cares about her, and she knows that he knows that she knows he cares about her, and so on), so they often find themselves in episodes where they direct higher-order emotions at one another. Robert Roberts (2013) explores the ways in which emotions and emotional feedback loops strengthen and desiccate such relationships as friendship, enmity, civility, and incivility. For example, consider a sister who generously and in a spirit of friendship gives her brother her own ticket to a concert that he would like to attend. He feels the emotion of gratitude for this gift, which he expresses with a token of thanks. Satisfied that her generosity has hit its mark, she is “gratified by his gratitude. [. . .] And he may in turn be gratified that she is gratified by his gratitude” (p. 137). Despite the fact that this is a tiny schematic example, it plausibly contains a fourth-order emotion (he is gratified that she is gratified that he is gratified that she was generous). Such episodes are, in Roberts’s view, constitutive of friendship and other normative personal relationships (pp. 140–141). They naturally fit into the framework discussed here. Not only is the friendship between Ashley and Azim partly constituted by such emotional ping-pong, but the ongoing feedback such episodes embody makes each of their moral dispositions more modally robust. When Azim plans, he is guided by his internal-Ashley. When he acts, he often gets direct feedback from her. If he acts badly (in her eyes), she makes him know it. If he continues to act badly (in her eyes), she makes him know that too. Thus, there are multiple opportunities for correction and adjustment built into their relationship. Azim may never avail himself of the fourth or fifth or sixth contingent intervention, but were he to need it, it would be there. Likewise for Ashley. 
Unlike in the case of stereotype threat, where the signal in one direction is strong and reliable whereas the signal in the other direction is weak and noisy, friendship (ideally) involves strong reliable signaling (and attuned receiving) in both directions. Next, consider the truism that one’s possibilities for action are constrained by one’s modal knowledge. If you think that something is impossible – even if it’s not – you can’t try to accomplish it. Ashley’s impression of her own possibilities for action (and thus the range of actions she can actually take) is expanded by Azim’s confidence in her. When he signals that he thinks, trusts, or hopes that she can do X, he opens up the possibility of X for her. Likewise, Azim’s impression of his own possibilities for action (and thus the range of actions he can actually
Mark Alfano and Joshua August Skorburg
take) is expanded by Ashley’s confidence in him. When she signals that she thinks, trusts, or hopes that he can do Y, she opens up the possibility of Y for him. As Victoria McGeer (2008) reminds us, human motivation is often complicated and confusing. Sometimes we don’t know what we really desire, like, or love. Sometimes, we forget what we really value. Sometimes, we don’t know what we’re capable of. In those cases, it’s helpful to refer to a normative lodestone, a model of good conduct. Here we quote at length: For help in this regard, we are sometimes encouraged to look outside ourselves for role models, finding in others’ thoughts and actions laudable patterns on which to fashion our own. And this may serve us pretty well. However, something similar can occur, often more effectively, through the dynamic of hopeful scaffolding. Here we look outside ourselves once again; but instead of looking for laudable patterns in others’ behavior, what we find instead are laudable patterns that others see – or prospectively see – in our own. We see ourselves as we might be, and thereby become something like a role model for ourselves. The advantage in this is clear: Instead of thinking, ‘I want to be like her,’ – i.e., like someone else altogether – the galvanizing thought that drives us forward is seemingly more immediate and reachable: ‘I want to be as she already sees me to be.’ (pp. 248–249; see also James 1979/1896) Hope of this kind might best be construed not as feedback but as feedforward: Ashley’s model of Azim is robust to his momentary self-doubt, and when she signals her ongoing confidence in him, she nudges him back towards a confident equilibrium (and, once again, vice versa). This recalls Shepard’s (1984) characterization of perception – including, we want to emphasize, social perception – as externally guided hallucination. Importantly, though, such hallucinations sometimes influence the empirical facts on the ground and thereby the sensory data.
This is an example of what Tad Zawidzki (2013; see also Mameli 2001) refers to as “mind-shaping”. If these remarks on friendship are on the right track, they show how friendship can be modeled as a coupled system with strong reliable signaling (and attuned receiving) in both directions. David Wong explores such influences in Natural Moralities (2006, pp. 133–137). Drawing on Confucian ethics, he explores the ways in which children learn norms, rules, and values through ongoing interactions with family members. This learning is sometimes explicit but more often implicit. It essentially depends on the existence of regular, cross-situational, and extensive interactions in a trusting relationship embodying (ideally) shared norms. But Wong emphasizes that such interpenetration of moral character occurs not only in childhood but also in adulthood, arguing that others help to shape and crystallize traits and desires that are especially congruent with our most important ends. Or rather, there are often times when increased self-knowledge merges with the crystallization of a trait or desire – when, for instance, understanding oneself better is at the same time making more determinate tendencies and impulses within one’s character that are in some degree inchoate. I have in mind ways that others can help us through some insight as to what our “real” feelings and motivations are, where that insight is partly an accurate portrayal of what is already there but also helps to reinforce and make more determinate what those feelings and motivations are. A friend who points out to a person that she is more compassionate than she understands herself to be, who points to certain recurring instances of compassionate
behavior as evidence, may not just be pointing to what is already there but crystallizing and making more motivationally salient that trait in his friend. (p. 136) Friendship and close relationships on these accounts seem importantly different from stereotype threat. In the case of stereotype threat, there’s not much the target of threat can do to influence the stereotypes in his environment. In the case of friendship, by contrast, Ashley can pull Azim’s levers, and Azim can pull Ashley’s. They are sensitive to each other in real-time and respond differentially to the other’s behavior and intentions. Azim’s expectations for himself, his self-knowledge, his understanding of which actions are available to him, his motivation, the reasons that appear salient to him and their weights, and his deliberative strategies – all of these are influenced in a systematic and ongoing way by Ashley. Likewise, Ashley’s expectations for herself, her self-knowledge, her understanding of which actions are available to her, her motivation, the reasons that appear salient to her and their weights, and her deliberative strategies – all of these are influenced in a systematic and ongoing way by Azim.13 Because there are multiple feedback contingencies for both of them, their dispositions become modally robust. Or, in the language of dynamical systems theory (e.g., Palermos 2014), they erect attractors and repellors. What Ashley considers bad behavior, thought, feeling, etc., is a repellor for Azim because when he veers that way, she gives him multiple, increasingly strong nudges back towards equilibrium. What Ashley considers good behavior, thought, feeling, etc., is an attractor for Azim because when he starts acting, thinking, and feeling in these ways she gives him ongoing feedback that reinforces these dispositions. 
Likewise, what Azim considers bad behavior, thought, feeling, etc., is a repellor for Ashley because when she veers that way, he gives her multiple, increasingly strong nudges back towards equilibrium. What Azim considers good behavior, thought, feeling, etc., is an attractor for Ashley because when she starts acting, thinking, and feeling in these ways he gives her ongoing feedback that reinforces these dispositions. Given the tight coupling and reliable feedback present in cases of friendship, we contend that friendship can be understood as a case of extended moral character. In the case of stereotype threat, the feedback loops between agent and environment were largely asymmetric and unidirectional. For this reason, we argued the framework of embedding was appropriate. In robust friendships, however, these feedback loops are much tighter and more reliable. They are largely bidirectional and symmetric, and for this reason, the framework of extension is appropriate.
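The dynamical-systems gloss above, on which mutual feedback erects attractors and repellors for each friend, can be sketched as a toy coupled system. The update rule, the gain parameter, the 0–1 disposition scale, and the numbers below are our own illustrative assumptions, not a model the authors propose.

```python
# Toy sketch (illustrative assumption only): each agent's disposition is
# repeatedly nudged toward the conduct the partner approves of, so approved
# conduct behaves like an attractor; drifting away triggers proportionally
# stronger corrective nudges, which is the repellor side of the same dynamic.

def nudge(disposition: float, partner_attractor: float,
          gain: float = 0.25) -> float:
    """One episode of feedback from the partner."""
    return disposition + gain * (partner_attractor - disposition)

ashley, azim = 0.1, 0.95      # initial dispositions
azim_approves = 0.5           # conduct Azim approves of in Ashley
ashley_approves = 0.6         # conduct Ashley approves of in Azim

for _ in range(30):           # repeated episodes of bidirectional feedback
    ashley = nudge(ashley, azim_approves)
    azim = nudge(azim, ashley_approves)

print(round(ashley, 2), round(azim, 2))  # prints 0.5 0.6
```

The symmetry of the two update calls is what distinguishes this sketch from the stereotype-threat case: each agent supplies the other's attractor, so the shaping runs in both directions.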
Conclusion
To conclude, it will be worthwhile to consider how the arguments advanced here fit into the larger debates about traits and character. Does social psychological evidence dictate that we abandon the idea of robust character traits? Or is there reason to think we should retain a place for character traits in our moral psychology? This chapter has aimed less at staking out a novel position in the character skepticism debates, and aimed more at precisely mapping the ways in which character is sometimes dependent upon, or constituted by, the social environment.14 On the one hand, we remain skeptical about character if what is meant by character is a strictly monadic, internal, individualist disposition. On the other hand, we can see a place for character in moral psychology if the concept can be operationalized to include the dependence and constitution relations with the social environment we have described here.
In the end, our discussion raises more questions than it answers. Are we responsible for our own embedded character in the same way that we’re allegedly responsible for our own internal character? If we’re not, who is? Perpetrators of stereotypes? Does this make us, in a way (worrisome or encouraging), our brothers’ and sisters’ keepers? Are we responsible for our own extended character in the same way that we’re allegedly responsible for our own internal character? If we’re not, who is? Our friends? Does this make us, in a way (worrisome or encouraging), our friends’ keepers? The embedded and extended character hypotheses, if true, seem to make us both more vulnerable (regarding our own character) and more responsible (regarding the character of others).
Notes
1 See Alfano (2013a), esp. pp. 111–139, for a more fine-grained discussion of the situationist challenges to responsibilist and reliabilist brands of virtue epistemology.
2 For further discussion of such influences, see Alfano (2014a).
3 We pause to note here that we’re not talking only about behavior: behavior is influenced in large part because thought and feeling are also influenced. This should settle once and for all the charge – often unfairly leveled against philosophical situationists and their sympathizers – of being “behaviorists” in some pejorative sense.
4 Our distinction between embedding and extension of character mirrors the debate in the extended mind literature between the hypothesis of embedded cognition (e.g., Rupert 2004) and the hypothesis of extended cognition (e.g., Sprevak 2010).
5 And it certainly seems to be so. It is probably the most-studied individual difference in scientific psychology, with a history of over a century and robust predictive and explanatory power (Kovacs & Conway 2016).
6 Though, if Kovacs and Conway (2016) are on the right track, the ultimate psychological explanation will always appeal in turn to more specific cognitive and neural mechanisms that tend to have a high degree of overlap and thus to explain the “positive manifold” that is so well documented in intelligence research.
7 It is easy to underestimate the effect of stereotype threat here in at least two ways: first, the SAT scores meant to control for differences in skill level were presumably acquired under the same kinds of threatening conditions that lead to a decrease in performance. Second, even when an exam is presented as non-diagnostic of intellectual ability, it is likely that a student who has taken an SAT or ACT exam would recognize GRE questions as intellectually diagnostic. More generally, it is worth noting that in a meta-analysis of stereotype threat effects conducted by Nguyen and Ryan (2008), the overall mean effect size was |.26|, and in some cases as high as |.64|. To put this into perspective, |.10| is considered to be a small effect size relative to most social psychological effects, |.20| medium, and |.30| large (Richard, Bond Jr., & Stokes-Zoota 2003, p. 339).
8 One possible explanation, offered by Schmader and Johns (2003), is that a decrease in performance is mediated by a decrease in working memory capacity. Given that working memory capacity is a limited resource that is highly correlated with fluid intelligence (meta-analyses by Kane, Hambrick, & Conway (2005) and Oberauer, Schulze, Wilhelm, & Süss (2005) estimate the correlation at r = 0.72 and r = 0.85, respectively), it would be unsurprising that the more of it that is allocated to worrying about one’s group identity and one’s performance, the less would be available for the processing demands of intellectually challenging tasks.
9 In academic contexts, the effect extends beyond racial minorities to women – especially in STEM fields (Schmader, Johns, & Barquissau 2004), and also to low SES individuals (Croizet & Claire 1998). Stereotype threat is also experienced outside of academic contexts, such as in negotiations (Kray, Thompson, & Galinsky 2001, 2002), athletics (Stone, Lynch, Sjomeling, & Darley 1999), and driving tests (Yeung & von Hippel 2008).
10 Much of this section is informed by and expands on Alfano (2015). Beyond friendship, examples of potentially more complicated dyadic extension in this vein might include romantic partnerships and domestic abuse.
11 Thanks to J. Adam Carter and Andy Clark for emphasizing the importance of updating internal models.
12 Presumably, this is one of the reasons why people may prefer to have anger directed at them rather than being treated as, in Strawson’s (1974) words, someone “to be managed”.
13 The existence and impact of such ongoing feedback loops have been empirically investigated in the context of romantic partnerships (Srivastava et al. 2006; Assad et al. 2007). Further work should examine similar effects in other close relationships.
14 Our treatment is by no means exhaustive of the possible relations with the social environment. Given limited space, we have not considered, for example, Sterelny’s (2010) notion of scaffolding, Menary’s (2007) notion of integration, nor the notion of distribution (e.g., Sutton et al. 2010).
References
Alfano, M. (2012). Extending the situationist challenge to responsibilist virtue epistemology. Philosophical Quarterly, 62(247), 223–249.
———. (2013a). Character as Moral Fiction. Cambridge: Cambridge University Press.
———. (2013b). Identifying and defending the hard core of virtue ethics. Journal of Philosophical Research, 38, 233–260.
———. (2014a). What are the bearers of virtues? In H. Sarkissian and J. C. Wright (Eds.), Advances in Experimental Moral Psychology (pp. 73–90). New York: Continuum.
———. (2014b). Stereotype threat and intellectual virtue. In O. Flanagan and A. Fairweather (Eds.), Naturalizing Epistemic Virtue (pp. 155–174). Cambridge: Cambridge University Press.
———. (2014c). Extending the situationist challenge to reliabilism about inference. In A. Fairweather (Ed.), Virtue Epistemology Naturalized: Bridges Between Virtue Epistemology and Philosophy of Science (pp. 103–122). Dordrecht: Synthese Library.
———. (2015). Friendship and the structure of trust. In J. Webber and A. Masala (Eds.), From Personality to Virtue: Essays in the Psychology and Ethics of Character (pp. 186–206). Oxford: Oxford University Press.
Assad, K., Donnellan, M. B. & Conger, R. (2007). Optimism: An enduring resource for romantic relationships. Journal of Personality and Social Psychology, 93(2), 285–297.
Bateson, M., Nettle, D. & Roberts, G. (2006). Cues of being watched enhance cooperation in a real-world setting. Biology Letters, 2, 412–414.
Blass, T. (1999). The Milgram paradigm after 35 years: Some things we now know about obedience to authority. Journal of Applied Social Psychology, 29(5), 955–978.
Burnham, T. (2003). Engineering altruism: A theoretical and experimental investigation of anonymity and gift giving. Journal of Economic Behavior and Organization, 50, 133–144.
Burnham, T. & Hare, B. (2007). Engineering human cooperation. Human Nature, 18(2), 88–108.
Croizet, J. & Claire, T. (1998). Extending the concept of stereotype threat to social class: The intellectual underperformance of students from low socioeconomic backgrounds. Personality and Social Psychology Bulletin, 24, 588–594.
Darley, J. & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8, 377–383.
Doris, J. (2002). Lack of Character: Personality and Moral Behavior. Cambridge: Cambridge University Press.
Ernest-Jones, M., Nettle, D. & Bateson, M. (2011). Effects of eye images on everyday cooperative behavior: A field experiment. Evolution and Human Behavior, 32(3), 172–178.
Foot, P. (2003). Natural Goodness. Oxford: Oxford University Press.
Goldstein, N. J., Cialdini, R. B. & Griskevicius, V. (2008). A room with a viewpoint: Using social norms to motivate environmental conservation in hotels. Journal of Consumer Research, 35, 472–482.
Harman, G. (1999). Moral philosophy meets social psychology: Virtue ethics and the fundamental attribution error. Proceedings of the Aristotelian Society, New Series 99, 316–331.
Herrnstein, R. J. & Murray, C. (1996). The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press.
James, W. (1979/1896). The Will to Believe. Cambridge, MA: Harvard University Press.
Kane, M., Hambrick, D. & Conway, A. (2005). Working memory capacity and fluid intelligence are strongly related constructs: Comment on Ackerman, Beier, and Boyle (2005). Psychological Bulletin, 131(1), 66–71.
Kovacs, K. & Conway, A. (2016). Process overlap theory: A unified account of human intelligence. Psychological Inquiry, 27, 1–27.
Kray, L., Galinsky, A. & Thompson, L. (2002). Reversing the gender gap in negotiations: An exploration of stereotype regeneration. Organizational Behavior and Human Decision Processes, 87, 386–409.
Kray, L., Thompson, L. & Galinsky, A. (2001). Battle of the sexes: Gender stereotype confirmation and reactance in negotiations. Journal of Personality and Social Psychology, 80, 942–958.
Latané, B. & Nida, S. (1981). Ten years of research on group size and helping. Psychological Bulletin, 89, 308–324.
Mameli, M. (2001). Mindreading, mindshaping, and evolution. Biology and Philosophy, 16, 591–628.
McGeer, V. (2008). Trust, hope, and empowerment. Australasian Journal of Philosophy, 86(2), 237–254.
Menary, R. (2007). Cognitive Integration: Mind and Cognition Unbounded. New York: Palgrave Macmillan.
Milgram, S. (1974). Obedience to Authority. New York: Harper Collins.
Morton, A. (2013). Emotion and Imagination. London: Polity.
Nguyen, H. & Ryan, A. (2008). Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence. Journal of Applied Psychology, 93(6), 1314–1334.
Oberauer, K., Schulze, R., Wilhelm, O. & Süss, H. (2005). Working memory and intelligence – their correlation and their relation: Comment on Ackerman, Beier, and Boyle (2005). Psychological Bulletin, 131(1), 61–65.
Palermos, O. (2014). Loops, constitution, and cognitive extension. Cognitive Systems Research, 27, 25–41.
Pettit, P. (1995). The cunning of trust. Philosophy and Public Affairs, 24(3), 202–225.
Rauthmann, J., Gallardo-Pujol, D., Guillaume, E., Todd, E., Nave, C., Sherman, R., . . . Funder, D. (2014). The situational eight DIAMONDS: A taxonomy of major dimensions of situational characteristics. Journal of Personality and Social Psychology, 107(4), 677–718.
Richard, D., Bond Jr., C. & Stokes-Zoota, J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7(4), 331–363.
Rigdon, M., Ishii, K., Watabe, M. & Kitayama, S. (2009). Minimal social cues in the dictator game. Journal of Economic Psychology, 30(3), 358–367.
Roberts, R. (2013). Emotions in the Moral Life. Cambridge: Cambridge University Press.
Rupert, R. (2004). Challenges to the hypothesis of extended cognition. Journal of Philosophy, 101, 389–428.
Schmader, T. & Johns, M. (2003). Converging evidence that stereotype threat reduces working memory capacity. Journal of Personality and Social Psychology, 85(3), 440–452.
Schmader, T., Johns, M. & Barquissau, M. (2004). The costs of accepting gender differences: The role of stereotype endorsement in women’s experience in the math domain. Sex Roles, 50, 835–850.
Shepard, R. (1984). Ecological constraints on internal representation: Resonant kinematics of perceiving, imagining, thinking, and dreaming. Psychological Review, 91, 417–447.
Sprevak, M. (2010). Inference to the hypothesis of extended cognition. Studies in History and Philosophy of Science, 41(4), 353–362.
Srivastava, S., McGonigal, K., Richards, J., Butler, E. & Gross, J. (2006). Optimism in close relationships: How seeing things in a positive light makes them so. Journal of Personality and Social Psychology, 91(1), 143–153.
Steele, C. & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69(5), 797–811.
Sterelny, K. (2010). Minds: Extended or scaffolded? Phenomenology and the Cognitive Sciences, 9(4), 465–481.
Stone, J., Lynch, C., Sjomeling, M. & Darley, J. (1999). Stereotype threat effects on black and white athletic performance. Journal of Personality and Social Psychology, 77, 1213–1227.
Strawson, P. (1974). Freedom and Resentment and Other Essays. London: Methuen.
Sutton, J., Harris, C., Keil, P. & Barnier, A. (2010). The psychology of memory, extended cognition, and socially distributed remembering. Phenomenology and the Cognitive Sciences, 9(4), 521–560.
Taylor, V. & Walton, G. (2011). Stereotype threat undermines academic learning. Personality and Social Psychology Bulletin, 37(8), 1055–1067.
Wong, D. (2006). Natural Moralities. Oxford: Oxford University Press.
Yeung, N. & von Hippel, C. (2008). Stereotype threat increases the likelihood that female drivers in a simulator run over jaywalkers. Accident Analysis & Prevention, 40, 667–674.
Zawidzki, T. (2013). Mindshaping. Cambridge, MA: MIT Press.
28
MINDSHAPING AND SELF-INTERPRETATION
Tadeusz W. Zawidzki
Introduction
Human beings use the same concepts to interpret themselves as others. We think of both ourselves and others as believers, desirers, hopers, intenders, goal pursuers, and experiencers of sensations and emotions. Although this is obvious, it has been the source of centuries of puzzlement and controversy in both philosophy and psychology. We seem capable of interpreting ourselves in a way that we cannot interpret others. For example, we seem to require no behavioral evidence to know what we think, feel, or want; yet, such direct insight is impossible in the case of knowing what others think, feel, or want. These interpretive, socio-cognitive concepts and categories appear to have a radically disjoint nature: their application to the self seems unmediated by behavioral evidence and incorrigible, while their application to others seems completely dependent on behavioral evidence and epistemically fraught. Different theories of human social cognition take very different positions on the relationship between self- and other interpretation. On some views, e.g., so-called “simulation theory” (Goldman 2006), self-interpretation is primitive, direct, and quasi-perceptual, and other interpretation is derivative of it. We interpret others by projecting onto them self-interpretations, generated when pretending to be in their circumstances. On other views, e.g., so-called “theory theory” (Gopnik & Wellman 1992), the asymmetry between self- and other interpretation is an illusion. In both cases, we infer mental states from observed behaviors, in something like the way scientists infer theoretical facts from data (Carruthers 2011). If there is any difference between self- and other interpretation, it is due to the fact that, in the former case, we have access to more and a greater variety of kinds of behavioral data, and are better at applying theoretical concepts automatically in response to such data.
Recently, I have defended a theory of human social cognition that is substantially different from these two alternatives (Zawidzki 2008, 2013). On this view, human social cognition is more about mindshaping than mindreading. We succeed in our social endeavors not primarily because we are good at projecting self-perceived mental states onto others, nor because we are good at inferring others’ mental states from observed behavior, but, rather, because we are good at shaping each other to think and act in predictable ways in shared contexts. This new perspective suggests a different explanation of the asymmetries between self- and other interpretation than those defended by simulation theorists and theory theorists. The mindshaping
explanation of self–other interpretive asymmetry has much in common with some newly emerging philosophical accounts of this asymmetry that fall under the broad category of what might be called “constitutive” views of self-interpretation. The basic idea is that the asymmetry between self- and other interpretation consists not in differences of epistemic access, as suggested by both simulation theorists and theory theorists, but in the radically different role that self-interpretation plays. In particular, unlike interpretation of others, self-interpretation is not, primarily, in the business of discovering truths about independently constituted mental states. Rather, it helps constitute the mental states of self-attributors by shaping them to fulfill self-attributions. On the version of this view that I defend below, there is no commitment to all mental states being dependent on self-attribution. This view is implausible and problematic, e.g., it seems to require a thoroughly non-mentalistic account of the psychological mechanisms of self-attribution. What follows avoids such implications because it argues only that some mental states are dependent on self-attribution, i.e., those that depend on discursive commitments over which normal human adults have first-person authority.1 Although in this and other respects the view owes a great debt to Dennett’s brand of “interpretivism”, i.e., his “intentional stance” (1987), it takes this view in a direction that is under-explored by Dennett: interpretations are more than just fictions useful for predicting the behavior of self and other; they become self-fulfilling prophecies in self-interpretation, and hence can play what McGeer calls “regulative” roles (1996, 2007). I proceed as follows. In the second section, I give a brief overview of the mindshaping hypothesis.
The third section then gives a detailed account of self-interpretation from the mindshaping perspective, explaining how it differs from other views in its responses to the various puzzles about self-interpretation. The fourth section situates the mindshaping approach to self-interpretation within the broader family of constitutive views of self-interpretation, highlighting similarities and differences. I conclude in the fifth section.
Mindshaping
According to the “mindshaping” hypothesis (Mameli 2001; Zawidzki 2008, 2013), human social cognition owes its distinctive profile to diverse, sophisticated, and pervasive mindshaping mechanisms and practices. Mindshaping mechanisms and practices are defined as any mechanism or pattern of behavior that aims to shape a mind’s dispositions to approximate those of a social model.2 Here “aim” is understood teleofunctionally (Millikan 1984), i.e., in terms of effects that explain the stability and persistence of the mechanism or practice. Social models can be specific behaviors exemplified in actual individuals, e.g., adult experts whom novices imitate, or more abstract objects, like patterns of behavior attributed to fictional or ideal agents, e.g., the protagonists of myths or moral ideals, that people try to emulate. This broad definition of mindshaping encompasses a wide variety of exemplars, including imitation, pedagogy, norm enforcement, and self-constituting narratives. Some of these, especially “low-level” imitation and other forms of rudimentary social learning, are present in both human and non-human populations (Subiaul et al. 2004; Zentall 2006). However, there is a great deal of empirical evidence that human beings are particularly talented at, dependent on, and devoted to a number of biologically distinctive and sophisticated mindshaping practices. For example, though non-human primates certainly acquire information from social models, e.g., what goals are worth pursuing and what means to these goals are worth exploring, as in chimpanzee termite fishing, only humans routinely and obsessively “overimitate” (Lyons 2009; Nielsen & Tomaselli 2010). From a very young age, and throughout our
Mindshaping and self-interpretation
lifetimes, we not only acquire information about goals and potential means of achieving them from social models; we also obsessively copy the precise means of accomplishing goals observed in others, even when we know of other, more efficient means of accomplishing those goals.

As another example, consider pedagogy. In many non-human species, adults modify ecologically important behaviors in the presence of offspring in ways that arguably allow offspring to acquire these behaviors more easily. For example, many predators slow down and exaggerate behaviors involved in hunting in the presence of very young offspring (Caro & Hauser 1992; Thornton & McAuliffe 2006). However, human pedagogy is significantly more pervasive and sophisticated. Human offspring appear innately equipped with a set of adaptations for what Gergely and Csibra call “natural pedagogy”: from a very early age we interpret adult “ostensive” signals like eye contact, followed by “referential signals” like shifts in gaze to prominent objects, as preludes to the demonstrations of important, generic information about those objects (Csibra & Gergely 2006, 2009, 2011; Csibra 2010). In general, “master–apprentice” style pedagogy is a cultural universal, and key to explaining one of the central distinguishing marks of human sociality: cumulative cultural evolution (Sterelny 2012). Unlike other species, we gradually accumulate and improve technologies and social practices over generations. This is key to our evolutionary success, and it seems impossible without very robust forms of mindshaping, like overimitation, natural pedagogy, and master–apprentice learning, capable of quickly, efficiently, and faithfully reproducing a culture’s know-how in a new generation, which can then tinker with and improve it.

Finally, there are forms of human mindshaping for which it is very difficult to identify non-human analogs or homologs.
For example, there is little evidence that other species of primate are willing to pay material costs in order to punish norm flouters, a behavior that seems universal and irresistible among human beings, as demonstrated with experiments involving the “ultimatum game” (Henrich et al. 2006). Finally, because of the lack of grammatical language in other species, only humans appear capable of articulating favored patterns of behavior in terms of “virtual” social models, such as protagonists of myths or moral ideals, and then attempting to regulate their own behavior to emulate such virtual models.

There are a number of reasons to suspect that these distinctive varieties of human mindshaping constitute the linchpin of the distinctively human socio-cognitive syndrome. We differ from other species not just in our mindshaping mechanisms and practices, but also in the sophistication of our theory of mind, the complex structure and semantic flexibility of our languages, and the pervasiveness and sophistication of our cooperative endeavors. It is widely assumed that this socio-cognitive syndrome is made possible by our sophisticated theory of mind, rather than by mindshaping (Humphrey 1980; Tooby & Cosmides 1995, p. xvii; Baron-Cohen 1999; Leslie 2000, p. 61; Mithen 2000; Sperber 2000; Dunbar 2000, 2003, 2009; Siegal 2008, p. 22; Kovacs et al. 2010, p. 1830). After all, how can one know what a model is doing without reading her mind? However, it turns out that the kinds of mindreading that distinguish us from other primates, e.g., the attribution of full-blown propositional attitudes like belief and desire, cannot be reliable without prior mindshaping: interpreters and their targets must be socialized to think similarly in similar circumstances for observed behavior to be a reliable signal of targets’ propositional attitudes (Apperly 2011, pp. 29, 160; Zawidzki 2013, pp. 97–98).
Distinctively human forms of mindshaping, like overimitation, natural pedagogy, and norm enforcement, presuppose mindreading capacities of no greater sophistication than those also present in other primates, but make possible more sophisticated ones, like propositional attitude attribution (Zawidzki 2013). Furthermore, these forms of human mindshaping can also explain the evolution of pervasive and sophisticated cooperation, as well as structurally complex and semantically flexible
Tadeusz W. Zawidzki
human language. As Sterelny (2012) argues, the socio-ecological tasks facing our ancestors were too complex to solve through individual learning, yet too unstable to navigate using innately specified information. Sophisticated social learning provided just the right balance of informational conservatism and flexibility for individuals to master this unique socio-ecology. Novices can draw on information conserved from the discoveries of previous generations, rather than relying solely on individual learning to master informationally demanding tasks, like foraging techniques, tool manufacture and use, and systems of communication and social norms, since they can acquire them from adult models via sophisticated mindshaping mechanisms, like overimitation, natural pedagogy, and apprentice–master learning. At the same time, they are not limited to inflexible, innately specified background assumptions about their socio-ecologies, since the traditions they acquire from adult models are themselves dynamically adjusting to constantly changing constraints.

One of the most important socio-cognitive advantages provided by distinctively human mindshaping is that it makes us more transparent to each other, i.e., it makes mindreading much easier. As human cognition became more complex and individually variable, the skilled behavioral anticipation on which our coordinative, communicative, and cooperative feats depended became more challenging. Knowing how to interact with our fellows in ways that maximize mutual advantage is a daunting task, given that such decisions must often be made on the basis of limited observable evidence, in seamless, time-constrained, dynamic contexts. Someone raises an eyebrow as one reaches for an object of (apparent) joint attention. What does it mean? What will this potential interactant do next? Such meager behavioral data are compatible with indefinitely many different goals, or beliefs, or desires.
And even if one were able to correctly attribute some of one’s interactants’ mental states, these are compatible with indefinitely many different future behaviors. This is the problem of holism: because human behavior depends on whole constellations of mental states, the kinds of mental states attributed in distinctively human mindreading are compatible with indefinitely many distinct future behaviors, and the limited observations of contexts and behaviors upon which we must usually base such attributions are compatible with indefinitely many distinct sets of such mental states (Morton 1996, 2003; Bermúdez 2003, 2009; Nichols & Stich 2003, p. 139; Goldman 2006, p. 184; Apperly 2011, pp. 118–119; Zawidzki 2013, pp. 66–75). It is not clear how individual, internal cognitive heuristics can solve this problem quickly enough to enable successful coordination (Zawidzki 2013, pp. 74–82). How can one search the immense space of possibilities, ruling out all but the likely ones? This task is made considerably more tractable if one’s likely interactants are shaped to react similarly to similar situations, to pursue similar goals in similar situations, and to make similar assumptions about what is relevant in similar situations (Zawidzki 2013, pp. 97–98). Mindshaping practices, like various forms of imitation, pedagogy, conformism, and norm compliance, can function to make groups of likely interactants, i.e., members of the same tribes and cultures, more cognitively homogeneous, thereby making mindreading more tractable. Such mindshaping for improved mindreading can be conceived of as a kind of trans-generational “epistemic action” (Kirsh 1996; Clark 1997).
Just as individuals often alter their physical environments in order to make them easier to predict and control using limited neuro-cognitive resources, cultural groups alter their social environments through varieties of mindshaping in order to make coordination with likely social partners easier using limited neuro-cognitive resources.

One important variety of such social epistemic action consists in the expression of discursive commitments, using a grammatically articulated language. The evolution of structurally complex, recursive language continues to elude consensus explanation. However, one plausible candidate notes an analogy between human language and birdsong (Fitch 2004, 2010). The key problem in explaining the evolution of grammatically
complex, recursive language is that it appears to be expressively much richer than it needs to be. A recursively structured language can be used to talk about anything, including events that can have no possible bearing on the reproductive success of speakers, e.g., the origins of the universe, or other radically spatio-temporally displaced events. Birdsong presents a similar puzzle: it is a form of communicative display of great structural complexity, in some cases approaching recursive potential (Okanoya 2002). However, it is used, in most cases, to convey a very simple message: “mate with me!” The explanation in the case of birdsong has to do with costly signaling of mate quality (Zahavi & Zahavi 1997). Female birds prefer more complex male songs because these are correlated with better genes. Such circumstances can trigger runaway sexual selection for increasingly complex birdsong, in the same way that ostentatious plumage evolved in peacocks (Cronin 1991). So structurally complex communication can earn its keep as a kind of advertising of underlying quality.

The same principle may explain the evolution of structurally complex human language. In the case of prehistoric humans, ritualistic signaling systems may have grown more complex due to runaway selection for better and more costly signaling of cooperative potential (Zawidzki 2013, pp. 163–167).
If there was a significant period in human phylogeny where biological success was highly dependent on choosing appropriate partners for coordination on cooperative projects, from among candidates of whom individuals had little personal knowledge, then coordinative displays in complex and demanding group rituals may have served an essential filtering function (Sosis 2003). This might have set up selection pressures favoring individuals capable of acquiring complex communicative capacities rapidly and at an early age. When integrated with a purely lexical, agrammatical “protolanguage” (Bickerton 1990, 1995, 1998, 2000) selected earlier for conveying referential and categorical information, such an evolutionary trajectory could have yielded human language as we now know it (Fitch 2010, pp. 503–504; Miyagawa et al. 2013): a semantically flexible and structurally complex system for communicating information about an unlimited variety of topics.

On this view of the evolution of human language, its natural “home” lies in the use of complex communicative performances, relating lexical items signaling relevant referential and predicative information, that function to advertise coordinative potential on cooperative tasks. Given that human cooperation has likely always been sustained by the normative attitudes of group members, which help maintain cooperation by supporting the punishment of free riders (Henrich 2004), it is natural to interpret such complex communicative preludes to cooperation as discursive expressions of cooperative commitments, reneging on which carries potential social costs, i.e., punishments such as ostracism.
For example, rituals engaged in as preludes to cooperative tasks such as hunting, warfare, or reproductive bonding, can be understood as committing participants to specific, pro-social courses of behavior, in virtue of the normative attitudes of group members, which support various kinds of punishment for failure to conform to such behaviors.3 This would constitute a very early, prehistoric version of distinctively human forms of mindshaping mentioned above: not just norm enforcement, but the use of discursively encoded, idealized models to shape actual behavior. The idea here is that ritualistically encoded commitments constitute publicly expressed ideals that regulate the behavioral dispositions, i.e., shape the minds, of those who express them, in virtue of the group-level normative attitudes that institute such ideals.

Once such communicative and coordinative practices are on the scene, they can trigger a proliferation of virtual social models for mindshaping. These social models are “virtual” because they are not necessarily embodied in the behavior of any actual individual; rather, they consist in publicly and symbolically encoded patterns of behavior, like mythical narratives, that specify ways of playing social roles, e.g., being a parent, that are tacitly sanctioned by a community.
Rituals come to constitute public expressions of commitments to respect the norms encoded in such myths. Assuming the central ecological importance of attracting partners for coordination on an open-ended variety of cooperative projects, individuals now have a strong incentive to internalize and inhabit social roles encoded in widely publicized virtual models. The idea is that once our ancestors find themselves in a socio-ecology in which status depends essentially on living up to publicly expressed and socially regulated commitments to social roles, they come to conceptualize themselves, and hence, to routinely interpret and regulate their own behaviors using such categories. For example, parents come to code and shape their own behavior, routinely, in ways that promote the approximation of sanctioned parental behavior, as publicly represented in prevalent myths. Thus, when individuals conceptualize themselves in terms of these roles, these self-conceptualizations play an important mindshaping role: their point is not to describe but to regulate mental states and behavioral dispositions (McGeer 1996, 2007). On the mindshaping view of distinctively human social cognition, this constitutes the birth of self-interpretation in the special sense that has so puzzled and fascinated philosophers and psychologists.
Self-interpretations as interaction tools

My proposal is that human self-interpretation essentially involves self-conceptualization in terms of virtual social models encoded via practices of discursive commitment, descended from complex rituals selected for advertising competence at coordination on cooperative projects. This is a highly sophisticated form of self-directed yet socially scaffolded mindshaping that is distinctive of our species. By conceiving ourselves in terms of virtual social models, and publicly expressing such self-conceptualizations, we set up incentives to shape ourselves to approximate these social models. These incentives derive in part from the normative attitudes of our group-mates, which support various forms of punishment that enforce adherence to courses of behavior deemed compatible with such public expressions. It is also plausible that thousands of years of evolution have produced endogenous, affective mechanisms, involving, e.g., guilt and shame, which help enforce such adherence (Frank 1988). So, for example, public declarations of self-conceptualizations and commitments to play social roles such as “parent”, “mate”, “doctor”, “police officer”, “president”, “teacher”, etc., alter our incentives in ways which encourage shaping ourselves to fit the expectations members of groups to which we belong hold regarding such categories.

It is useful to compare such virtual social models and self-conceptualizations to Dennett’s notion of a “cognitive tool” (Dennett 2014). According to Dennett, the key to humanity’s cognitive success is the capacity to develop and deploy external tools that enhance our limited internal, neuro-cognitive resources. For example, many previously intractable mathematical tasks were suddenly brought within our ken thanks to the development and deployment of Arabic numerals.
This is a paradigmatic example of what Dennett means by “cognitive tool”: it is a culturally developed and transmitted external tool that supplements our endogenous cognitive resources to make possible previously impossible cognitive feats. Human cultures are repositories of countless examples of such cognitive tools, everything from spoken and written languages, to calculation devices and strategies, traditions of diagramming, organizational rituals, etc. Clearly, virtual social models and self-conceptualizations based on them are a kind of cognitive tool. They are culturally developed and transmitted patterns of behavior, reliant on extra-mental scaffolding, especially the normative attitudes and behaviors of group-mates, which simplify social cognition. People are much easier to read if they conform to prevalent expectations associated with the social roles they explicitly inhabit. However, this class of
cognitive tools has an interesting dimension that sets it apart from the kinds of cognitive tools on which Dennett focuses. Cognitive technologies like Arabic numerals, spoken and written language, and diagramming or calculating techniques transform our cognitive capacities in ways that enable us to master new cognitive domains, like certain kinds of mathematics. But they do not transform the domains themselves. The development of calculus, for example, did not change any facts about how dynamic variables behave; it simply gave us new tools to track those facts. But virtual social models and self-conceptualizations are different in this regard. In virtue of their mindshaping functions, they not only provide us with new tools for tracking the social domain; they alter the social domain itself. In fact, they succeed as tools for enhancing social cognition only to the extent that they alter the domain of social cognition in ways that make it more trackable. It is only in virtue of changing people, e.g., of making people who express commitment to acting like parents act more like we think parents are supposed to act, etc., that virtual social models and self-conceptualizations succeed as cognitive tools that enhance our capacities to track the social domain.

For this reason, it is appropriate to mark these kinds of cognitive tools off with a special term. From now on, I will call them “interaction tools”,4 since their point is not so much to enhance our cognition of some independently constituted domain, e.g., in the way that calculus enhances our cognition of dynamical variables, but to enhance social interactions by altering both our social cognition and the properties of the social domain, i.e., human behavior, to which it applies.5 It is also useful to conceive of many such interaction tools in terms of the game theoretic notion of a “commitment device” (Nesse 2001; Schelling 2007).
Commitment devices are behaviors aimed at changing the incentive structures of strategic interactions. For example, consider an army that burns bridges after crossing them, thereby making retreat impossible. This changes the incentives of its soldiers when facing an enemy: their only option is to fight, so they are likely to be far more formidable adversaries. If bridge burning is done in full view of enemies, it constitutes a commitment device: it expresses a credible commitment to fight with unusual ferocity. Such apparently irrational behaviors can play a strategically important signaling role. Adversaries must now recalibrate signalers’ incentives, and this may lead to less willingness to fight. There are many examples of such apparently irrational displays actually playing a rational, strategic role, in virtue of signaling credible commitment. For example, highly emotional reactions to perceived slights can be interpreted as signaling that a person is not to be mistreated: if one has a reputation for irrational rage, potential adversaries must recalibrate incentives when deciding whether or not to take advantage (Frank 1988).

Public expressions of commitment to social roles can also be construed as commitment devices in this sense. By letting potential interactants know that one is committed to some course of behavior, one thereby alters one’s incentives, due to prevalent normative attitudes enforcing such courses of behavior through potential punishment. Signing contracts, declaring marital commitments, etc., are less about expressing prior intentions one has detected in one’s mind, than about taking advantage of normative social practices to alter one’s incentives in ways that shape one’s future dispositions. One can now expect certain behaviors from others in virtue of the fact that others are assured of certain behaviors from oneself, given the new social incentives set up by one’s public expression of commitment.
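The strategic logic of the bridge-burning commitment device can be made concrete with a small backward-induction sketch. The payoff numbers and function names below are my own invention, purely for illustration: the point is only that publicly removing one's own option to retreat changes what a rational adversary predicts, and so changes the adversary's best move.

```python
# Illustrative sketch of bridge burning as a sequential game, solved by
# backward induction. All payoffs are invented: (army, enemy).
PAYOFFS = {
    ("attack", "fight"):   (-1, -1),  # costly battle for both sides
    ("attack", "retreat"): (0, 2),    # enemy takes ground unopposed
    ("withdraw", None):    (2, 0),    # enemy backs off; army holds ground
}

def army_reply(bridge_burned):
    """Army's best reply if attacked; retreat is unavailable once the bridge is burned."""
    moves = ["fight"] if bridge_burned else ["fight", "retreat"]
    return max(moves, key=lambda m: PAYOFFS[("attack", m)][0])

def enemy_move(bridge_burned):
    """Enemy anticipates the army's reply and picks its own best option."""
    reply = army_reply(bridge_burned)
    attack_payoff = PAYOFFS[("attack", reply)][1]
    withdraw_payoff = PAYOFFS[("withdraw", None)][1]
    return "attack" if attack_payoff > withdraw_payoff else "withdraw"

def army_outcome(bridge_burned):
    move = enemy_move(bridge_burned)
    if move == "withdraw":
        return PAYOFFS[("withdraw", None)][0]
    return PAYOFFS[("attack", army_reply(bridge_burned))][0]

# Without commitment, the enemy expects retreat and attacks; with the bridge
# burned, the enemy expects a costly fight and withdraws instead.
print(army_outcome(bridge_burned=False))  # 0
print(army_outcome(bridge_burned=True))   # 2
```

The "irrational" act of foreclosing one's own options raises one's payoff precisely because it is observed by the other player, which is the structure the chapter attributes to public expressions of commitment generally.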
For example, in virtue of taking on the commitments encoded in the contract one signs when accepting employment as a professor, one can expect certain behaviors from administrators and students. Even self-conceptualizations that are not publicly expressed in this way may qualify as commitment devices. The reason is that, as I noted above, we may have evolved endogenous mechanisms for incentivizing conformity to the social roles we take ourselves to inhabit, such as moral emotions like shame
and guilt (Frank 1988). If this is true, then even private self-conceptualizations might act as expressions of commitment to social roles, changing incentive structures in ways that encourage conforming to them. For example, simply thinking of oneself as a parent might, in virtue of guilt and shame associated with behavior which one’s community deems incompatible with this role, change incentives in ways that shape one’s dispositions in the direction of playing socially sanctioned parental roles.

Such dynamics are not limited to familiar social roles such as those that constitute familial positions or jobs or political offices. It is possible to analyze more philosophically canonical cases of self-conceptualization in similar ways. Typically, when discussing the asymmetry between self- and other interpretation, philosophers focus on the attribution of individual mental states, like specific beliefs or desires, rather than the constellations of mental states and dispositions that define broad social roles like “parent”, “mate”, “professor”, “prime minister”, etc. How might one use the concept of an interaction tool or a commitment device to understand self-attributions of individual beliefs or desires? It is possible to understand expressions of belief and desire as expressions of discursive, deontic attitudes (Brandom 1994). On this view, to say that one believes that p is to undertake a commitment to the claim that p. Such commitments involve, among other things, commitment to other claims entailed by p, commitment to denying claims ruled out by p, entitlement to claims not ruled out by p, and an obligation to provide evidence for p. In Brandom’s terms, to undertake commitment to p is to make a move in a discursive, score-keeping game, altering the discursive “score” that one’s interlocutors are entitled to attribute.
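The score-keeping metaphor can be loosely sketched in code. The toy class below is my own illustrative construction, not Brandom's formalism: avowing a belief updates a public "score" of commitments, propagating to entailed claims, and entitlement to incompatible claims is thereby lost. The entailment and incompatibility relations are supplied by hand.

```python
# A toy "deontic scoreboard" in the spirit of Brandom's score-keeping metaphor.
# The class name, method names, and example claims are illustrative assumptions.
class Scoreboard:
    def __init__(self, entails, incompatible):
        self.entails = entails            # claim -> set of entailed claims
        self.incompatible = incompatible  # claim -> set of incompatible claims
        self.commitments = set()

    def avow(self, p):
        """Expressing 'I believe that p' undertakes commitment to p
        and to everything p entails."""
        stack = [p]
        while stack:
            q = stack.pop()
            if q not in self.commitments:
                self.commitments.add(q)
                stack.extend(self.entails.get(q, ()))

    def entitled(self, q):
        """One loses entitlement to claims incompatible with one's commitments."""
        return all(q not in self.incompatible.get(c, set())
                   for c in self.commitments)

# Committing to "it is raining" entails "the streets are wet"
# and rules out "it is dry".
board = Scoreboard(
    entails={"it is raining": {"the streets are wet"}},
    incompatible={"it is raining": {"it is dry"}},
)
board.avow("it is raining")
print("the streets are wet" in board.commitments)  # True
print(board.entitled("it is dry"))                 # False
```

The point of the sketch is that the scoreboard records undertaken normative statuses rather than inner states: an avowal changes what one's interlocutors are entitled to attribute, which is what they track.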
Another way of understanding this is in terms of a very fine-grained social role: when one expresses the belief that p, one is taking on the social role of a believer-that-p, a social role defined by the commitments, entitlements, and obligations one’s interlocutors attribute to believers-that-p.

Understanding self-interpretations as interaction tools and commitment devices has the potential to shed light on many of the traditional philosophical and psychological puzzles about self-interpretation. In particular, it can help explain the asymmetries between self- and other interpretation: why self-interpretation seems direct and incorrigible, while other interpretation is inferential and epistemically fraught. It can also help explain why seemingly false self-interpretations seem so hard to dislodge. The key is to adapt a metaphor that Dennett proposes regarding cognitive tools. According to Dennett, cognitive tools, like Arabic numerals, calculus, and serial, linear, and discrete public languages, are akin to software that human brains run in order to better navigate certain cognitive domains. They are like “apps” on an iPhone. If we think of interaction tools and commitment devices in the same way, and we think of self-conceptualizations as interaction tools and commitment devices, then their apparent directness and incorrigibility become less mysterious. The reason is that the point of software is not to track facts about the machine that runs it, on the basis of evidence; rather, the point of software is to create new facts about the machine that runs it. The idea is that a self-interpretation like “I am a professor” functions not, in the first instance, as an evidence-based description of facts about oneself, like one’s behavioral dispositions. Rather, its role is regulative: it expresses a commitment one undertakes, an ideal one tries to realize. This is much like the relationship between computer software and the hardware that runs it.
Software does not aim to describe facts about the computer that runs it; rather, software aims to cause the hardware to behave in ways that match the constraints it specifies. For example, word processing software turns a machine that is not a word processor into a word processor. The hypothesis proposed here is that human self-interpretation is exactly like this: a person’s self-interpretation as a professor aims to cause her to behave in ways that professors are supposed to behave, not to describe her behavioral dispositions. The same can be said of a
person’s self-interpretation as a believer-that-p. To interpret oneself as believing that p, on this view, is not to entertain an evidence-based description of facts about oneself, like behavioral dispositions or internally tokened mental states. Rather, its role is to turn oneself into the kind of person that conforms to the socially instituted constraints on believing that p, the kind of person who rejects claims inconsistent with p, endorses claims implied by p, and regulates her behavior in the light of p. Such self-regulation through self-interpretation is possible because self-interpretations are commitment devices that change both internal and external incentives: by undertaking commitment to p, one opens oneself up to sanctions, both internal/emotional and social/punitive, for failing to play the role of a believer-that-p.

On this picture, the asymmetries between self- and other interpretation are completely unmysterious. It is unsurprising that one’s self-interpretations do not require behavioral evidence in the way that interpretations of others do. This is because self-interpretations are not evidence-based hypotheses regarding facts about oneself, like behavioral dispositions or mental causes. They are commitments one has undertaken, or software one’s brain is running, and hence specifications of facts one is attempting to realize rather than describe. The asymmetry with other interpretation comes from the fact that one’s interpretations can play this role only for oneself. Interpreting another as a believer-that-p cannot play the kind of direct role in guiding their behavior as interpreting oneself as a believer-that-p can play in guiding one’s own behavior. One cannot, typically, undertake commitments on behalf of others, nor can interpretations be treated as software by others’ brains, in the way that one’s own brain treats self-interpretations. McGeer (1996, p. 507) puts it very nicely:

. . .
we are able to ensure a fit between the psychological profile we create of ourselves in first-person utterances and the acts our self-attributed intentional states are meant to predict and explain simply by adjusting our actions in appropriate ways. Thus, because we do not just wait to see if our actions make sense in light of intentional self-attributions, but rather make them make sense, the tale we tell of ourselves from the intentional stance is importantly unlike the tale we tell of other people (or even of other things). I cannot make it the case that you behave in ways coherent with what I say you hope, desire, or fear any more than I can make it the case that the world is a certain way by announcing how (I think) it is; but I can and do govern my own actions in ways that fit with the claims I make about myself. If so-called “knowledge” of our own minds thus consists largely of claims we have both made and acted in light of, it is no surprise that such “knowledge” is peculiarly authoritative.

There may be some interesting exceptions to this. For example, on some theories, caretakers’ intentional interpretations of infant behavior actually play a role in boot-strapping infants’ capacity to behave intentionally (Bruner 1983; McGeer 2001). This phenomenon may be quite widespread. For example, there is evidence that infants construe gendered social expectancies in ways that lead them to confirm gender stereotypes (Golombok & Fivush 1994). Adults interpret the same behavior in the same infant as expressing anger if the infant is dressed as a male, and distress if the infant is dressed as a female. If infants have incentives to confirm such social expectancies, they can come to play a regulative role in their development (Mameli 2001). In general, children may internalize conceptualizations of them by significant adults.
For example, there is evidence that children raised to think of achievement as the product of innate endowment are less likely to master challenging academic subjects than those raised to think of achievement as a product of effort (Nix et al. 2015). Some have argued that this is a general phenomenon involving relations between low-status and high-status individuals, whereby the
former automatically and unconsciously internalize and try to confirm their conceptualizations by the latter (Haslanger 2012). Although these are significant phenomena, I do not think they undermine the idea that the role of self-interpretation in self-regulation can explain the asymmetries between self- and other interpretation. Even when interpretations of others function regulatively, they cannot provide the same degree of constant, direct, and fine-grained behavioral control that self-interpretations supply to self-regulation, unless they are first internalized as self-interpretations.6

Thus, if we understand self-interpretations as interaction tools or commitment devices, and hence as software our brains run to make us better coordination partners relative to the expectations of our group-mates, then many of the classic asymmetries between self- and other interpretation noted by philosophers are no longer mysterious. Self-interpretation does not require mediation by behavioral evidence, while interpretation of others does, because self-interpretation aims to regulate our mental states and behavioral dispositions, rather than describe them, while interpretation of others, typically, aims to describe others’ mental states and behavioral dispositions on the basis of evidence. Self-interpretation seems incorrigible, unlike interpretation of others, not because of some mysterious kind of epistemic access that persons have only to their own mental states, but because, as McGeer puts it, agents “have an obvious and particular ability to make such [self-] characterizations true of themselves. Thus, they have authoritative ‘self-knowledge’ ” (1996, p. 509).

This understanding of self-interpretation also has the potential to explain a psychological puzzle about self-knowledge: our conviction in and the persistence of self-interpretations that are manifestly false.
Decades of research in social psychology show that human beings are surprisingly unreliable self-interpreters. When asked about the reasons for our decisions, we often confabulate answers that are manifestly false (Nisbett & Wilson 1977; Carruthers 2011, Chapter 11). Our memories are easily manipulated to yield false judgments about what we have witnessed or experienced in the past (Loftus 1996). Our judgments about what is under our control are likewise rendered false by simple manipulations (Wegner 2002). When appropriately incentivized, we readily self-attribute beliefs we do not have (Carruthers 2011, pp. 356–365). Besides these specific types of mistaken self-attributions, general self-conceptions, long shown through reflection and empirical data to be false, show remarkable resilience, with no signs of waning. In many cultures, humans tend to think of their own behavior as freely chosen, on the basis of the best reasons, to which they have transparent access, via direct reflection on the states of a unified, coherent mind. But as philosophers from both Western and Eastern traditions have persuasively argued for millennia, and as recent results in social and cognitive psychology make clear, this is almost certainly a radically misleading picture of the actual nature of the human mind. We have far less control over our behavior, far less access to the reasons for it, and our cognition is far less rational, unified, and coherent, than most people suspect.7 If the goal of self-interpretation is to generate true self-conceptions, i.e., accurate sets of beliefs about how our minds actually work, then the persistence of such misleading self-conceptions is deeply puzzling. This is a problem for the leading theories of self-interpretation, according to which, as with the interpretation of others, its goal is generating accurate self-conceptions.
Simulation theorists claim this is achieved through direct perception of one’s own mental states, while theory theorists claim this is achieved via the same sorts of reliable inferences from observed behaviors that support accurate interpretation of others. Neither of these theories has a good explanation of why self-interpretation is so unreliable. If simulation theory is correct, then why do we so often directly misperceive our cognitive states, self-attributing reasons we do not have, past experiences we never underwent, and control and beliefs we lack? If theory theory is correct, then why are the inferential mechanisms supporting such
Mindshaping and self-interpretation
self-attributions on the basis of observed data so unreliable? Presumably, they are not unreliable in the case of attributing mental states to others; otherwise, why would they have been selected in evolution? These embarrassing questions do not arise for the mindshaping view of self-interpretation. If the point of self-interpretation is to shape minds rather than read them, then it is no surprise that our self-interpretations are often misleading. Just as computers often fail to perfectly execute the programs they run, due to processing limits, etc., our brains can live up to our self-interpretations only imperfectly. On the mindshaping view, the norms relevant to assessing self-interpretations are not epistemic ones, like accurately depicting how our minds work, but socio-pragmatic ones, i.e., turning us into good coordination partners relative to our likely interactants. Thus, to explain what sorts of self-conceptions prevail in groups, one must look not at facts about how minds work, but at facts about the demands of coordination in specific groups. This suggests that self-conceptions that agree with or complement those of one’s likely coordination partners are likely to persist, whether or not they are true depictions of how one’s mind works. If one’s coordinative projects depend for their success on meeting the expectations of one’s partners, and these include expectations that people act on freely chosen reasons, that reside in unified, coherent, rationally ordered minds, to which people have transparent access, then successful coordinators will work, as much as possible, to approximate these assumptions through mindshaping. Even if such assumptions are false, and easily tripped up by relatively simple manipulations, they will persist as long as they are necessary for successful coordination. And they will remain necessary for successful coordination as long as they are widespread among communities of interactants.
Thus, although our self-interpretations may be unreliable when considered as descriptions of independently constituted facts about the psychological causes of our behaviors, they are reliable at facilitating successful coordination, as long as they complement those that prevail among our likely interactants. On the mindshaping view of self-interpretation, self-conceptions are frequency dependent phenomena: their persistence is explained not in terms of their truth, but in terms of their prevalence. Languages are good analogs. English, for example, is not a truer depiction of any facts than any other language. Its prevalence is explained not in terms of any epistemic virtues but, rather, in terms of historical accidents that have led to its widespread use as a medium of communication. Once a certain critical mass was reached, among economically and politically important players, the persistence and spread of English was incentivized. The suggestion here is that a similar dynamic explains the persistence of false self-conceptions. The mindshaping account of self-interpretation thus displays some explanatory advantages over other accounts. It suggests a natural explanation of the apparent asymmetry between self- and other interpretation: our self-interpretations play a direct role in self-shaping that our interpretations of others cannot play in shaping others. It also yields a nice explanation of the persistence of false self-conceptions: if these are interaction tools or coordination devices, what matters is that they complement those of one’s likely interactants, not that they are true. I turn now to a brief comparison of the mindshaping account of self-interpretation to other views that understand it as having a primarily constitutive rather than an epistemic function.
Comparison with other views

The mindshaping view of self-interpretation is a fully naturalistic hypothesis that is strongly influenced by anti-naturalistic philosophical theories of self-interpretation. For example, Brandom (1994) and Bilgrami (2006) have explicitly defended the idea that self-attributions of propositional attitudes are kinds of commitments. Bilgrami (2006) uses this idea to explain the
peculiar authority of self-attributions in much the same way as I suggest above. As commitments, self-attributions are not the kinds of judgments that require evidence or aim at truly describing independently constituted facts. Whether or not one lives up to one’s commitments, it can still be true that one has them. Hence, if self-attributions of propositional attitudes are commitments, self-attributers have an incorrigibility or authority over their own propositional attitudes that they do not have over those of others. However, both Bilgrami and Brandom explicitly link this commissive, normative understanding of propositional attitude attribution to anti-naturalism about the propositional attitudes. It is because propositional attitudes are essentially bound up in norm-governed commitments that they cannot be explained in terms of natural facts about, e.g., human brains, or human beings considered as biological organisms. However, the view defended here is committed to the compatibility of the commissive/normative conception of propositional attitude attribution with naturalism. My first task in this section is to defend this commitment against the claims of philosophers like Brandom and Bilgrami. After this, I turn to a comparison between the mindshaping view of self-interpretation and another naturalist view: Peter Carruthers’ (2011) inferentialist theory of self-interpretation. There is a close affinity, at the level of likely neural mechanisms, between the view I defend above and Carruthers’ theory: we both see self-interpretations as inferred from behavioral and endogenously available data.8 But Carruthers still sees the point of such inferred self-interpretations as generating correct judgments about our own mental states, despite the fact that he emphasizes how bad we are at this – in fact, this is one of his central arguments for the inferentialist theory.
And he explicitly rejects proposals, akin to Bilgrami’s and mine, that the constitutive function of self-interpretation can explain its epistemic asymmetries with the interpretation of others (Carruthers 2011, pp. 96–108). So, my second task in this section is to defend my naturalist, constitutive view of self-interpretation against Carruthers’ alternative. There are weaker and stronger senses of “naturalism” about cognitive concepts, like the propositional attitudes. On some varieties of naturalism, the facts to which cognitive concepts apply must, in some sense, depend upon or derive from fully natural facts, that is, facts established by the natural sciences. The weakest version of this invokes the concept of supervenience: cognitive facts, e.g., facts about what an agent believes, must supervene on natural facts, e.g., facts about the agent’s brain states. This means that there can be no difference in cognitive facts without some difference in natural facts. It is impossible for two agents identical with respect to all natural facts to differ with respect to any cognitive fact. Two agents to whom all and only the same natural concepts apply must, for example, believe all and only the same propositions. This is the form of naturalism that Bilgrami (2006) denies. According to Bilgrami, specifying all the natural facts about an agent leaves open the question of what cognitive facts are true of her. The reason is that, according to Bilgrami, cognitive facts are a species of normative fact. Just as an exhaustive description of all the natural facts about any situation leaves open whether the situation counts as morally permissible or not, it must also leave open whether the situation counts as involving a belief that p or not. Bilgrami traces this indeterminacy to the distinction between first-person and third-person perspectives on the world.
By this, he does not mean the distinction between what it is like to be a subject and publicly observable properties, in Nagel’s (1974) sense. Rather, he means the distinction between conceiving of oneself as bound by certain commitments and norms, and conceiving of oneself as an object like any other, characterized by a set of normatively neutral properties. The latter kind of conception, no matter how exhaustive, does not determine the former kind of conception. No set of third-person facts about one’s states and dispositions, even dispositions to behave in ways that can be interpreted as expressions of commitment,
determines the norms one is bound by from the first-person perspective. Brandom (1994, pp. 592–601) ultimately suggests similar reasons for rejecting naturalism about the normative and hence cognitive. For Brandom, facts about one’s discursive, deontic status transcend natural facts about one’s dispositions in virtue of a distinction implicit in our discursive, score-keeping practices: the distinction between undertaking and acknowledging commitment to claims. Any attribution of a discursive commitment implicitly distinguishes between the commitments a target of attribution is disposed to acknowledge and the commitments she actually undertakes. This precludes identifying the target’s actual discursive status – the proposition to which she is actually committed or believes – with any naturalistically specifiable facts about her, e.g., what she is disposed to acknowledge. For Brandom, this essential perspectival difference is a formal property shared by all attributions of discursive status, or propositional attitudes, and it explains why these are taken to be governed by objective norms, like truth, which cannot be identified with any natural facts, like dispositions or naturally specifiable relations. Ultimately, both Bilgrami (2006) and Brandom (1994) trace their anti-naturalism to a kind of distinction in speech acts. No mere description of facts about an object, including oneself, can be equivalent to an expression of commitment. Cognitive concepts, like the propositional attitudes, are essentially bound up in a practice of undertaking commitments; hence, no naturalistic description can ever fully capture an attribution of propositional attitudes. Such descriptions always leave open the question what the relevant agents are committing themselves to in virtue of their self-attributions of propositional attitudes, a question that makes sense only from the perspective of participants in, rather than describers of, a practice. 
Whatever the merits of this perspective against the strong senses of naturalism targeted by Bilgrami and Brandom, it seems completely compatible with another kind of naturalism; in fact, it seems to presuppose this kind of naturalism. Even if it is true that one must be a participant in a discursive practice, on which one can take a specific kind of first-person perspective, involving the undertaking of commitments, in order to legitimately employ propositional attitude concepts, and other normative and cognitive concepts, this still leaves open a host of questions that require naturalistic answers. For example, what sorts of natural objects are capable of engaging in such practices? Also, in virtue of what natural properties or capacities do these sorts of natural objects count as capable of doing so? And how and why did some natural objects acquire these properties or capacities? These are the sorts of questions that the naturalistic approach I defend above seeks to answer, in terms of a conception of distinctively human social life as dependent on mindshaping mechanisms and practices. An analogy can make this clear. I think the point both Bilgrami and Brandom are trying to make can be usefully expressed in terms of different attitudes that one can take toward any game, say a game of chess. There is all the difference in the world between reporting on a game, e.g., giving an exhaustive description of what transpires in it, and participating in the game, i.e., being bound by certain rules, e.g., not moving the King more than one space at a time, and earning certain statuses, e.g., putting one’s opponent in check. No matter how exhaustive one’s description of the game, this can never amount to being bound by the rules of the game or earning statuses via moves in the game. However, granting this point, one can still ask about what sorts of skills and capacities are necessary for a person to participate in chess.
This is exactly how I see the relationship between the mindshaping hypothesis and anti-naturalistic, constitutive views of self-interpretation, like Bilgrami’s and Brandom’s. Perhaps they are right that the observer/participant distinction is enough to scuttle the naturalistic ambition of establishing the metaphysical dependence of cognitive facts on natural facts. But this still leaves open a plethora of questions that admit only of naturalistic answers: What sorts of natural objects can participate in normative, discursive practices, and why? When, how,
and why did such practices emerge? Why are groups of organisms capable of engaging in such practices evolutionarily stable? The mindshaping hypothesis constitutes a plausible answer to such questions. Humans are biologically successful because they are excellent at solving problems of coordination and cooperation. We do this by shaping each other and ourselves to approximate discursively expressed social models that, together with other, complementary social models, prevail in groups of likely interactants, like tribes or cultures. Such discursively expressed social models function as interaction tools or commitment devices that relate to our brain states and behavioral dispositions in much the way that computer software relates to hardware. The mindshaping account of self-interpretation also has a strong affinity with another, fully naturalistic view: Peter Carruthers’ (2011) theory that self-interpretation is mediated by inferences from behavioral observations, just as interpretation of others is. Carruthers considers constitutive views of self-interpretation at length and with some sympathy (2011, pp. 96–108), granting that self-attributions can sometimes lead to behavior that appears to fulfill them. However, he claims that such self-fulfilling dynamics are not sufficient to make self-interpretation authoritative and hence unlike interpretation of others. The reason is that such apparently self-interpretation-fulfilling behavior is not caused in the appropriate way: rather than being caused by the actual, first-order propositional attitudes that are self-attributed, it is caused by higher-order mental states, like beliefs that one has judged that p or is committed to p. True, first-order judgments that p lead directly to beliefs that p, and other associated states and behaviors, but such higher-order beliefs do this only when combined with other higher-order mental states, like desires to behave consistently with what one has believed oneself to judge, etc.
Thus, even if there is an asymmetry in the roles that self- and other attributions of cognitive states play, this does not constitute the kind of epistemic asymmetry on which philosophers have focused: we do not have immediate, non-inferential access to our first-order propositional attitudes, even if constitutive views are correct. Carruthers does not deny that constitutive views of self-interpretation can explain a kind of asymmetry between self- and other interpretation: self-interpretations can play a role in one’s cognitive economy that is unlike any role interpretations of others can play in others’ cognitive economies. What he denies is that this asymmetry in authority amounts to an epistemic asymmetry: it does not constitute a kind of immediate access to the self-interpreter’s first-order cognitive states that is lacking in the interpretation of others. If one takes epistemic relations to be exhausted by the paradigm of judgments about facts that are constituted independently of those judgments, then Carruthers is correct. But it is not clear that this is the only kind of epistemic relation available to language-using humans. Consider successful performative acts, such as declarations by appropriate authorities that a couple is married, or enacted decisions by players playing white to move white chess pieces in particular ways. In performing such moves, such performers gain an epistemic access to the facts the moves constitute that is unavailable to others. As an official licensed by the community to declare marriages, I can know in virtue of my decision to utter the words, “I declare you married,” that a couple is married. As a chess player playing on the white side, I can know in virtue of my decision to move the king one space to the left that the king has moved there. These are ways of knowing facts that are unavailable to observers.
They can infer these facts from observing my behavior and determining my status, but they cannot infer them from their own decisions, in the way that I can. I think this is the kind of epistemic authority that constitutive views like Bilgrami’s and Brandom’s claim for self-attribution of cognitive states. Given one’s status in a discursive community, one can know simply in virtue of one’s decision to undertake a commitment that one has undertaken that commitment. Observers of one’s behavior do not have that kind of epistemic access to the commitments one has undertaken.
In any case, the key difference between the mindshaping view of self-interpretation and Carruthers’ concerns the function of self-interpretation, rather than its cognitive basis. On both views, inference from observations of non-cognitive states, including only endogenously observable ones like sensations and affective states, plays an important role in self-attribution of cognitive states (Carruthers 2011, p. 3). However, for the mindshaping view, asymmetry with interpretation of others remains because of the radically different roles that self-interpretation plays. On the mindshaping view, self-interpretation depends on an inferential move from observations of non-cognitive states not to a hypothesis about the cognitive causes of those states, but, rather, to a decision to assume a certain social role. Therefore, the norms relevant to assessing self-interpretations are radically different from the norms relevant to assessing interpretations of others. Rather than assessing self-interpretations in terms of the accuracy of their depiction of independently constituted facts about the self-interpreter’s cognitive states, we should assess them in terms of a cost-benefit analysis of the roles to which they commit self-interpreters relative to their capacities and the expectations of members of their community. Carruthers is still committed to the view that self-interpretations aim primarily to depict self-interpreters’ cognitive states accurately; in fact, their unreliability at this is one of his key arguments for his view that self-interpretations are mediated by inferences from observations of non-cognitive states (2011, Chapter 11). But for this very reason, his view leaves mysterious why our self-interpretations should be so unreliable. The mindshaping view provides a natural explanation of this: their function is not to accurately depict independently constituted cognitive facts, but, rather, to shape self-attributors in socially useful ways. 
The mindshaping view, like other constitutive views, also does a better job of explaining the peculiar authority of self-attributions. When one claims that one believes that p, for example, and one is told by a third-person observer that this is false, the typical reaction is quite unlike reactions to other forms of correction by third-person observers. For example, it is quite unlike being told that one is wrong regarding the relative lengths of two lines, as in the Mueller-Lyer illusion. One can cheerfully accept the latter kind of correction, and make use of it to temper future judgments about visual stimuli. However, the former kind of correction is more likely to issue in a kind of resentment or moralistic defiance: who are you to tell me what I do or do not believe?! Constitutive views make perfect sense of this: the peculiar authority of self-interpretations derives from our status as autonomous agents; only I can decide what roles I play. If false self-interpretations are mere epistemic illusions, akin to visual illusions, it is hard to make sense of such moralized resistance to correction.
Conclusion

The foregoing has been a far-from-exhaustive defense of the mindshaping view of self-interpretation against prominent alternatives. Much has been left out. For example, I have not even considered exteroceptive accounts of self-interpretation (Evans 1982; Byrne 2005), according to which we self-attribute propositional attitudes simply by seeing what is true of the world outside of our minds. For example, on such views, to self-attribute the belief that it is snowing, one need only look around and judge that it is snowing. Nor have I considered expressivist accounts, according to which self-attributions of propositional attitudes count as direct expressions of those propositional attitudes, and hence at least partially constitute them (Bar-On 2004). I think both of these approaches might actually be compatible with constitutive views, like the mindshaping view. After all, when deciding what claims to commit to, observing relevant facts seems like a reasonable strategy. And if expressions of commitment to claims typically count as undertaking such commitments relative to one’s community, then, on
the mindshaping view, self-attributions of propositional attitudes typically do count as direct expressions of them. In any case, the mindshaping view of self-interpretation constitutes a distinct alternative among currently viable hypotheses, with some important advantages. It is distinct in marrying the insights of anti-naturalistic, constitutive views of self-interpretation with a systematic, neo-Darwinian, naturalistic account of its origin and function. We evolved to shape each other’s and our own minds in ways that make us more transparent to each other, and hence better at coordinating on cooperative projects. One prominent variety of such mindshaping involves publicly expressing and inhabiting commitments to play certain social roles, ranging from broad, open-ended ones, like being a parent, to narrow, relatively specific ones, like being a believer-that-p. Such self-interpretations are interaction tools or commitment devices that relate to our brains and behavioral dispositions in much the same way that computer software relates to computer hardware: as specifications of behavioral targets rather than descriptions of behavioral facts. This explains the asymmetries between self-interpretations and interpretations of others. Unlike the latter, the former have a peculiar authority because of the direct role they can play in shaping our minds, and they are evidence independent because they do not aim to accurately depict independently constituted facts. The mindshaping view of self-interpretation also explains the persistence of and our conviction in seemingly inaccurate self-conceptions: if their goal is to get us to behave in ways that align with the expectations of our likely interactants, then we should not expect them to be accurate depictions of how our minds work.
Notes

1 For a highly congenial view, see Frankish’s Mind and Supermind (2004).
2 Why think of shaping behavioral dispositions as shaping minds? “Mind” is intended in a very minimal sense here, as whatever is the causal basis for behavior. It is hard to see how anything can shape behavioral dispositions without shaping minds in this sense. Furthermore, it is possible to engage in and be subject to mindshaping in this sense, without having any concept of mind. Here and throughout I mean by “mindshaping” any mechanism or practice that as a matter of fact shapes minds, whatever these turn out to be, whether or not it is mediated by folk conceptualizations of minds, e.g., as composed of propositional attitudes, and whether or not these folk conceptualizations are true of the actual causes of behavior.
3 For a highly congenial theory of the function of certain kinds of religious belief, see Norenzayan (2015).
4 As Julian Kiverstein points out to me, interaction tools have a strong affinity to what Hacking (1995) calls “looping kinds”, i.e., social categories which, unlike natural kinds, have various effects on the domain to which they are applied (human beings), due to the fact that human beings are often aware of how they are categorized. I think that interaction tools are a species of looping kind: those that make possible otherwise impossible coordinative feats. Not all looping kinds have such beneficial effects, as Hacking emphasizes, e.g., labeling abnormal behavior as neuropathological.
5 Though he does not explicitly note this difference, it is clear from Dennett’s views on the role of language in structuring cognition (1991), and the role of assumptions of responsibility in making people more responsible (1984, 2003), that he is aware of the special properties of some socio-cognitive tools.
6 As Julian Kiverstein points out to me, it is possible for an agent to use a stereotype to control behavior constantly, directly, and to a fine grain, without internalizing it as a self-interpretation, i.e., when the agent is committed to acting in counter-stereotypical ways. This is an interesting phenomenon. However, it is not immediately relevant to the point about the regulative role of interpretation. In such cases, nobody is being interpreted in terms of the stereotype. Rather, the stereotype is being used to help with the regulative function of a correlative self-interpretation, i.e., one’s self-interpretation as anti-stereotypical.
7 As Julian Kiverstein reminds me, there is a tension between this way of characterizing our unreliability at self-interpretation and the constitutive/regulative view of self-interpretation that I defend. If self-interpretations are primarily regulative, and partly constitute self-attributed mental states, then in what sense are they unreliable? Does the claim that we are unreliable not presuppose the descriptive view to which I am trying to articulate an alternative? But I think this is part of my point: if we construe self-interpretation as primarily descriptive, and no different from interpretation of others, then, besides
failing to respect intuitions about authority and directness, we are also left with the puzzle of why such descriptions are both unreliable and persistent. After all, the point of descriptions is to be true of independently constituted facts; so, if our self-interpretations are routinely false when considered as descriptions of independently constituted facts, it is a mystery why they persist. But, if their point is not describing independently constituted facts, there may be a way of avoiding this mystery.
8 Whereas Carruthers thinks of self-interpretations as descriptions of independently constituted facts inferred from such evidence, I think of self-interpretations, roughly, as the best personal policies or programs to follow, inferred from such evidence and knowledge of the demands of one’s social environment. In other words, while for Carruthers such inferences have roughly the form, “Given evidence E, I am likely in mental state M”, on the view I defend they have roughly the form, “Given the evidence E, and the demands of my social environment S, it is best for me to play the role of a person with mental state M. Hence, I hereby play this role.”
References

Apperly, I. A. (2011). Mindreaders. Hove, UK: Psychology Press.
Bar-On, D. (2004). Speaking my Mind: Expression and Self-knowledge. Oxford: Clarendon Press.
Baron-Cohen, S. (1999). The evolution of a theory of mind. In M. C. Corballis and S.E.G. Lea (Eds.), The Descent of Mind (pp. 261–277). New York: Oxford University Press.
Bermúdez, J. L. (2003). The domain of folk psychology. In A. O’Hear (Ed.), Minds and Persons (pp. 25–48). Cambridge: Cambridge University Press.
———. (2009). Mindreading in the animal kingdom. In R. Lurz (Ed.), The Philosophy of Animal Minds (pp. 145–164). Cambridge: Cambridge University Press.
Bickerton, D. (1990). Language and Species. Chicago: University of Chicago Press.
———. (1995). Language and Human Behavior. Seattle: University of Washington Press.
———. (1998). Catastrophic evolution: The case for a single step from protolanguage to full human language. In J. R. Hurford, M. Studdert-Kennedy, and C. Knight (Eds.), Approaches to the Evolution of Language: Social and Cognitive Bases (pp. 341–358). Cambridge: Cambridge University Press.
———. (2000). How protolanguage became language. In C. Knight, M. Studdert-Kennedy, and J. R. Hurford (Eds.), The Evolutionary Emergence of Language: Social Function and the Origins of Linguistic Form (pp. 264–284). Cambridge: Cambridge University Press.
Bilgrami, A. (2006). Self-knowledge and Resentment. Cambridge, MA: Harvard University Press.
Brandom, R. B. (1994). Making it Explicit. Cambridge, MA: Harvard University Press.
Bruner, J. (1983). Child’s Talk: Learning to Use Language. New York: W. W. Norton.
Byrne, A. (2005). Introspection. Philosophical Topics, 33(1), 79–104.
Caro, T. M. & Hauser, M. D. (1992). Is there teaching in nonhuman animals? Quarterly Review of Biology, 67(2), 151–174.
Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. Oxford: Oxford University Press.
Clark, A. (1997). Being There: Putting Brain, Body and World Together Again. Cambridge, MA: MIT Press.
Cronin, H. (1991). The Ant and the Peacock. Cambridge: Cambridge University Press.
Csibra, G. (2010). Recognizing communicative intentions in infancy. Mind and Language, 25, 141–168.
Csibra, G. & Gergely, G. (2006). Social learning and social cognition: The case for pedagogy. In Y. Munakata & M. H. Johnson (Eds.), Processes of Change in Brain and Cognitive Development (pp. 249–274). London: Oxford University Press.
———. (2009). Natural pedagogy. Trends in Cognitive Sciences, 13(4), 148–153.
———. (2011). Natural pedagogy as evolutionary adaptation. Philosophical Transactions of the Royal Society of London: Series B, 366, 1149–1157.
Dennett, D. C. (1984). Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge, MA: MIT Press.
———. (1987). The Intentional Stance. Cambridge, MA: MIT Press.
———. (1991). Consciousness Explained. Boston: Little, Brown.
———. (2003). Freedom Evolves. New York: Viking Press.
———. (2014). Intuition Pumps and Other Tools for Thinking. New York: W. W. Norton & Company.
495
Tadeusz W. Zawidzki Dunbar, R. (2000). On the origin of the human mind. In P. Carruthers and A. Chamberlain (Eds.), Evolution and the Human Mind: Modularity, Language, and Meta-cognition (pp. 238–253). Cambridge: Cambridge University Press. ———. (2003). The social brain: Mind, language, and society in evolutionary perspective. Annual Review of Anthropology, 32, 163–181. ———. (2009). Why only humans have language. In R. Botha and C. Knight (Eds.), The Prehistory of Language (pp. 12–35). Oxford: Oxford University Press. Evans, G. (1982). The Varieties of Reference. Oxford: Oxford University Press. Fitch, W. T. (2004). Kin selection and “mother tongues”: A neglected component in language evolution. In D. Oller and U. Griebel (Eds.), Evolution of Communication Systems (pp. 275–296). Cambridge, MA: MIT Press. ———. (2010). The Evolution of Language. Cambridge: Cambridge University Press. Frank, R. H. (1988). Passions within Reason. New York: W. W. Norton. Frankish, K. (2004). Mind and Supermind. Cambridge: Cambridge University Press. Goldman, A. I. (2006). Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading. Oxford: Oxford University Press. Golombok, S. & Fivush, R. (1994). Gender Development. Cambridge: Cambridge University Press. Gopnik, A. & Wellman, H. M. (1992). Why the child’s theory of mind really is a theory. Mind and Language, 7(1 & 2), 145–171. Hacking, I. (1995). The looping effects of human kinds. In D. Sperber, D. Premack and A. J. Premack (Eds.), Causal Cognition: A Multi-Disciplinary Debate (pp. 351–383). New York: Oxford University Press/Clarendon Press. Haslanger, S. (2012). Resisting Reality: Social Construction and Social Critique. Oxford: Oxford University Press. Henrich, J. (2004). Cultural group selection, coevolutionary processes, and largescale cooperation. Journal of Economic Behavior and Organization, 53, 3–35. Henrich, J., McElreath, R., Barr, A., Ensminger, J., Barrett, C., Bolyanatz, A., . . . Ziker, J. (2006). 
Costly punishment across human societies. Science, 312, 1767–1769. Humphrey, N. (1980). Nature’s psychologists. In B. D. Josephson and V. S. Ramachandran (Eds.), Consciousness and the Physical World (pp. 57–80). Oxford: Pergamon Press. Kirsh, D. (1996). Adapting the environment instead of oneself. Adaptive Behavior, 4(3–4), 415–452. Kovács, Á. M., Téglás, E. & Endress, A. D. (2010). The social sense: Susceptibility to others’ beliefs in human infants and adults. Science, 330, 1830–1834. Leslie, A. M. (2000). How to acquire a “representational theory of mind.” In D. Sperber (Ed.), Metarepresentations: A Multidisciplinary Perspective (pp. 197–223). Oxford: Oxford University Press. Loftus, E. F. (1996). Eyewitness Testimony. Cambridge, MA: Harvard University Press. Lyons, D. E. (2009). The rational continuum of human imitation. In J. A. Pineda (Ed.), Mirror Neuron Systems: The Role of Mirroring Processes in Social Cognition (pp. 77–106). New York: Human Press. Mameli, M. (2001). Mindreading, mindshaping, and evolution. Biology and Philosophy, 16, 597–628. McGeer, V. (1996). Is “self-knowledge” an empirical problem? Renegotiating the space of philosophical explanation. Journal of Philosophy, 93(10), 483–515. ———. (2001). Psycho-practice, psycho-theory and the contrastive case of autism. Journal of Consciousness Studies, 8(5–7), 109–132. ———. (2007). The regulative dimension of folk psychology. In D. D. Hutto and M. Ratcliffe (Eds.), Folk Psychology Re-Assessed (pp. 137–156). Dordrecht: Springer. Millikan, R. G. (1984). Language, Thought, and Other Biological Categories: New Foundations for Realism. Cambridge, MA: MIT Press. Mithen, S. (2000). Palaeoanthropological perspectives on the theory of mind. In S. Baron-Cohen, H. TagerFlusberg and D. J. Cohen (Eds.), Understanding Other Minds (pp. 488–502). Oxford: Oxford University Press. Miyagawa S., Berwick, R. C. & Okanoya, K. (2013). The emergence of hierarchical structure in human language. 
Frontiers in Psychology, 4, Article 71. Morton, A. (1996). Folk psychology is not a predictive device. Mind, 105(417), 119–137.
496
Mindshaping and self-interpretation ———. (2003). The importance of Being Understood: Folk Psychology as Ethics. London: Routledge. Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435–450. Nesse, R. M. (Ed.) (2001). Evolution and the Capacity for Commitment. New York: Russell Sage Press. Nichols, S. & Stich, S. (2003). Mindreading: An Integrated Account of Pretense, Self-Awareness and Understanding Other Minds. Oxford: Oxford University Press. Nielsen, M. & Tomaselli, K. (2010). Overimitation in Kalahari Bushman children and the origins of human cultural cognition. Psychological Science, 21(5), 729–736. Nisbett & Wilson (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. Nix, S., Perez-Felkner, S. & Kirby, T. (2015). Perceived mathematical ability under challenge: A longitudinal on sex segregation among STEM degree fields. Frontiers in Psychology, 6, article 530, doi: 10.3389/ fpsyg.2015.00530. Norenzayan, A. (2015). Big Gods: How Religion Transformed Cooperation and Conflict. Princeton: Princeton University Press. Okanoya, K. (2002). Sexual display as a syntactical vehicle: The evolution of syntax in birdsong and human language through sexual selection. In A. Wray (Ed.), The Transition to Language (pp. 46–63). Oxford: Oxford University Press. Schelling (2007). Strategies of Commitment and Other Essays. Cambridge, MA: Harvard University Press. Siegal, M. (2008). Marvelous Minds:The Discovery of What Children Know. Oxford: Oxford University Press. Sosis, R. (2003).Why aren’t we all Hutterites? Costly signaling theory and religious behavior. Human Nature, 14, 91–127. Sperber, D. (Ed.). (2000). Metarepresentations. New York: Oxford University Press. Sterelny, K. (2012). The Evolved Apprentice. Cambridge, MA: MIT Press. Subiaul, F., Cantlon, J. F., Holloway, R. L. & Terrace, H. S. (2004). Cognitive imitation in rhesus macaques. Science, 305, 407–410. Thornton, A. & McAuliffe, K. 
(2006). Teaching in wild meerkats. Science, 313, 227–229. Tooby, J. & Cosmides, L. (1995).The language of the eyes as an evolved language of mind. In S. Baron-Cohen (Ed.), Mindblindness: An Essay on Autism and Theory of Mind (pp. xi–xviii). Cambridge, MA: MIT Press. Wegner, D. M. (2002). The Illusion of Conscious Will. Cambridge, MA: MIT Press. Zahavi, A. & Zahavi, A. (1997). The Handicap Principle. New York: Oxford University Press. Zawidzki, T. W. (2008). The function of folk psychology: Mind reading or mind shaping? Philosophical Explorations, 11(3), 193–210. ———. (2013). Mindshaping: A New Framework for Understanding Human Social Cognition. Cambridge, MA: MIT Press. Zentall, T. R. (2006). Imitation: Definitions, evidence, and mechanisms. Animal Cognition, 9, 335–353.
497
29
VICARIOUS EXPERIENCES
Perception, mirroring or imagination?1

Pierre Jacob and Frédérique de Vignemont
Introduction

Empathy is the subject of much current psychological investigation and philosophical scrutiny. We take it as a fundamental condition of adequacy on an account of empathy that it should be able to reflect both the similarities and the differences between empathetic experiences and the four following related psychological phenomena: the target's affective or emotional state that is the cause of the empathetic experience; emotional contagion; non-empathetic mindreading; and sympathy. In previous work, we have offered an account of empathy (de Vignemont and Singer, 2006; de Vignemont and Jacob, 2012; Jacob, 2011; Jacob, 2015) that rests on the following four fundamental ideas. First, the term "empathy" primarily applies to one's responses to a restricted subset of others' psychological states, namely others' affective and emotional experiences, not to their propositional attitudes (beliefs, intentions or desires).2 Secondly, both empathy and emotional contagion are vicarious experiences, i.e. a kind of experience that is caused by the awareness of another's affective experience and that also resembles the experience that caused it. In a nutshell, both empathetic responses and emotional contagion satisfy what we call the interpersonal similarity condition. Thirdly, although both empathetic responses and contagious responses to another's emotion are vicarious experiences, they nonetheless have different directions of intentionality: while contagious responses are self-centered, empathetic responses are other-directed. This difference reflects different degrees of embodiment. Finally, the generic mechanism that generates empathetic responses is the process of mental imagery, i.e. a process of non-propositional imagination, whereby one's standard affective resources are used off-line, rather than on-line.
Here, in order to highlight the specificity of our model of empathy, we shall first contrast it with two other influential theoretical frameworks: the direct-perception model and the mirroring approach. In the first section, we examine the view held by advocates of the direct-perception model that empathy is a perceptual experience of others' affective states. In the second section, we examine the mirroring approach to empathy. In the next pair of sections, we summarize our own approach to empathetic pain and respond to recent objections to our account of empathetic pain. In earlier work, we focused mostly on empathetic pain because much is known about brain activities underlying pain, empathy for pain and contagious pain.
In the last section of the present chapter, we examine to what extent our account can be generalized to empathetic responses to others’ affective experiences other than pain.
The direct-perception model

While it is generally agreed that the primary target of an individual's empathetic response is another's affective state (i.e. a state whose content involves an evaluative dimension), one of the most controversial issues in recent debates has been to what extent interpersonal similarity is a necessary condition for empathy, i.e. to what extent empathy involves affective sharing. Some philosophers from the Phenomenological tradition deny that it is a necessary condition and instead endorse a direct-perception model of empathy (Scheler, 1954; Zahavi, 2011; Gallagher, 2008), in accordance with the Schelerian dictum that "empathy has to do with a basic understanding of expressive others." After briefly reviewing their main claims, we shall question their account of empathy, including their application of their model to address the problem of other minds.

Empathy without affective sharing

Advocates of the direct-perception model of empathy reject the interpersonal similarity condition on empathy. To the extent that one primarily empathizes with another's affective experience, what advocates of the direct-perception model deny is that one's empathetic experience is itself a kind of affective experience. Instead, empathy turns out to be a basic or primitive epistemic awareness (or knowledge) of another's affective experience, which may itself be devoid of any affective content. Thus, the direct-perception model rests on a pair of basic assumptions, the first of which is that human expressive bodily behaviour is, in Zahavi's (2008, 2011) own terms, "soaked with mindedness".
The second twin assumption is that the perception of another's expressive bodily behaviour enables one to be directly acquainted with the content of her affective experience, as it is instantiated, made manifest or "given" in her expressive bodily behaviour.3 Thus, empathy so conceived turns out to be a primitive kind of knowledge of another's affective experience restricted to one's perceptual acquaintance with another's expressive bodily behaviour and devoid of any affective dimension.4 What is likely to make this view of empathy attractive to Phenomenologists is that it affords a direct and immediate epistemic access to aspects of others' mental lives (their affective experiences) that is taken to stand in sharp contrast with either the theory-theory or the simulation approach to mindreading, both of which are taken to rest on complex inferential processes. It further seems to hold the promise to offer a simple solution to the philosophical problem of other minds. However, it turns out to lack the resources for drawing the intuitive distinction between empathy and non-empathetic mindreading. On the direct-perception model, empathy is what enables me to have primitive non-inferential epistemic access to your emotions. It is a primitive kind of knowledge because it is limited to the boundaries of my perceptual acquaintance with your expressive behaviour. Indeed, it seems clear that all of us have the capacity to ascribe emotions to others, without feeling what they feel: I can form the belief that you are in pain or that you are jealous of your partner without feeling either pain or jealousy (let alone about your partner). But if (in accordance with the Phenomenological conception) affective sharing is not required for empathy, then it is not clear how to distinguish between empathizing with your affect and forming a perceptually based belief about your affect. Moreover, the direct-perception model gives rise to an uncomfortable dilemma, as we shall now argue.
A dilemma for the direct-perception model?

The direct-perception model gives rise to the following dilemma: either another's overt expressive behaviour is constitutive of her affective experience or it is not. If it is not, then by perceiving another's expressive behaviour, one does not ipso facto perceive her affective experience. On the other hand, the assumption that another's expressive behaviour is constitutive of her affective experience seems tantamount to accepting behaviourism (Jacob, 2011). One may believe that the Phenomenological account of what makes expressive behaviour expressive (namely that it is soaked with mindedness) makes their approach immune to the behaviourist horn of the dilemma. But things are not that simple. Arguably, the expressivist conception of behaviour is not meant to apply to every single psychological state: only an individual's goals and affects, not her beliefs (let alone her mathematical or scientific beliefs), are held by Phenomenologists to be manifest (or given) in her expressive behaviour. If so, then one could only directly perceive another's goals and affects, not her beliefs.5 However, what is the further principle that enables Phenomenologists to assert that only her affect, not her relevant belief, can be directly perceived? It seems that an agent's goal-directed behaviour does not merely reflect the agent's goal, but some of her beliefs (about, e.g., the target's location) as well. Similarly, the behaviour whereby an agent expresses her fear is likely to reflect her belief about the location and the dangerousness of the source of her fear. But if so, then an agent's expressive behaviour should be said to make her belief as well as her affect or her goal manifest. If not, then it is not clear what Phenomenologists are committed to by their claim that expressive behaviour is soaked with mindedness.
Some advocates of the direct-perception model have explored a second response to the above dilemma: they have tried to argue that an agent’s expressive bodily behaviour constitutes a proper part of her affective experience. If so, then by perceiving an agent’s expressive behaviour, one perceives the affect of which it is a part (Krueger, 2012; Krueger and Overgaard, 2012). But one could only see something (e.g. an individual’s full body or a full tomato) by seeing one of its parts (e.g. the individual’s head or the front of the tomato) if the latter is indeed an uncontroversial part of the former. The main problem with this second response to the dilemma is, as Smith (in press) notices, that an individual’s expressive behaviour is not an uncontroversial part of her affective experience. Affects are psychological states, but behavioural expressions are processes. Behavioural expressions, but not psychological states, have parts; furthermore, the former are effects, not parts, of the latter (de Vignemont, forthcoming). Finally, not only does the direct-perception model give rise to an uncomfortable dilemma, but it is also far from clear that it can reasonably hope to resolve the philosophical problem of other minds, as we shall now argue.
The problem of other minds

The philosophical problem of other minds is widely construed as the task of providing a response to the skeptical challenge directed to one's claim to know that there are other minds. The Phenomenologists reject the solution to the problem of other minds based on the argument by analogy because it rests on the Cartesian asymmetry between direct first-personal access to one's own mind and indirect third-personal access to others' minds. On the one hand, the Cartesian asymmetry "underestimates the difficulties involved in self-experience and overestimates the difficulties involved in the experience of others" (Zahavi, 2008: 518). On the other hand, only if each of us were directly (i.e. perceptually) acquainted with the minds of others could one hope to dissolve the "skeptical conundrum" about the existence of other
minds: “We should avoid construing the mind as something visible to only one person and invisible to everyone else” (Zahavi, 2007: 33). However, it is far from clear that the direct-perception model of empathy can really answer the skeptical challenge in the case of the problem of other minds, for three conspiring reasons. First of all, being able to see another’s anger is not a necessary condition for being able to see (and therefore to know) that another is angry. As Dretske (1969, 1973) has argued, one can see that another is angry if one sees another’s behavioural display of anger and if it is reliably correlated with her feeling angry. Nor is the fact that one can see something that is visible and which happens to exemplify property F sufficient for one to be able to see that it is F. As Dretske (1973) has argued, seeing a fully visible spy or a fully visible counterfeit bill is not sufficient for seeing, and thereby knowing, that the former is a spy and the latter a counterfeit. Now, if we grant (for the sake of argument) that instances of anger (or fear) at particular places and times are visible and can be directly seen, it is far more contentious that one could further be visually acquainted with others’ anger, with their affects in general, let alone with their minds (or mindedness). Clearly, the fact that Mary is angry (or scared) logically entails that there are other minds. So if I see, and thereby know, that she is angry (or scared), then I can infer that someone else has a mind and therefore believe that there are other minds (if I have the concepts anger and mind). But it is implausible that I could visually experience others’ mindedness. If so, then the direct-perception model of empathy does not seem to have the resources to adequately address the skeptical challenge in the case of the problem of other minds.
Mental simulation, mirroring and empathy

While advocates of the direct-perception model entirely reject the interpersonal similarity condition on empathy, advocates of the simulation approach to mindreading endorse it as a necessary (or a strongly enabling) condition, not only on empathy, but also on mindreading others' psychological states in general (Goldman, 2006). Following the discovery of mirror neurons in the premotor cortex of macaque monkeys, Gallese and Goldman (1998) have argued that mirroring processes can be construed as processes of mental simulation, whereby the same area is being activated in both the agent's and the observer's brain. This paved the way for the mirroring approach to empathy.

The mirroring approach to empathy

Mirror neurons were first found to fire both endogenously when an animal performs a transitive goal-directed action and exogenously when it observes another execute the same kind of action (Rizzolatti et al., 2001). So exogenous mirror neuron activity in an observer's brain was taken to be a covert vicarious motor response to another's overt goal-directed action. On the basis of the two-step direct-matching model of action understanding, Gallese and Goldman (1998) further hypothesized that the function of mirroring was to mindread the agent's goal or intention along the following lines. First, the perception of an agent's goal-directed action is supposed to cause the observer to covertly replicate the agent's bodily movements. Second, by covertly replicating the agent's bodily movements, the observer is supposed to come to share the agent's goal or intention. What makes the mirroring approach to social understanding appealing is its parsimony: the very same resources that are necessary for executing an action are also taken to be sufficient for perceiving and understanding others' actions. Since it meets the interpersonal similarity condition, the
mirroring model can also aspire to shed light on empathy. If so, then the first challenge for the mirroring approach to empathy is: how could it satisfy the affectivity condition? There are two basic ways to address this challenge according to whether mirroring processes are taken to be exclusively motoric or not, i.e. whether what can be directly matched onto an observer’s motor repertoire must be restricted to an agent’s bodily movements, at the expense of her sensations and affects, or not. While advocates of the more conservative strategy are prone to restrict mirroring to motor processes, advocates of the more liberal strategy are not. On the face of it, it seems as if only an agent’s bodily movements, not her sensations and affects, could be directly perceived and therefore directly matched onto an observer’s motor repertoire. This is presumably why Rizzolatti et al. (2004: 431) define mirror neurons as a specific class of neurons that discharge both when an agent acts and when it observes “a similar action done by another monkey or the experimenter”. But as Goldman (2009), the advocate of the liberal strategy, has pointed out, this could only be a definition of action-mirroring (or motoric mirroring), not of mirroring in general, on the grounds that mirror neurons should not be restricted to action-related events, and should instead be equally allowed in the domains of touch, pain and emotion (as suggested by findings reported by, e.g., Keysers et al., 2004 and Wicker et al., 2003). As a result, Goldman (2009: 9) proposes a more flexible definition of mirroring events, whereby mirror neurons can be endogenously activated when an individual undergoes “a certain mental or cognitive event” and exogenously activated when an individual “observes a sign that another individual undergoes or is about to undergo the same type of mental or cognitive event”. 
In a nutshell, on Goldman's liberal strategy, the output of mirroring satisfies the affectivity condition because the input to mirroring already does. However, Goldman's flexible definition of mirroring seems to face the following dilemma. Either the input to mirroring (or direct-matching) is purely perceptual or it is not. By assuming that the input to mirroring is purely perceptual, one seems to thereby endorse the direct-perception model of others' sensations, affects and emotions. But if so, then, as advocates of this model have argued (Gallagher, 2008; Zahavi, 2008), why should mirroring be necessary at all? In any case, the direct-perception model itself is vulnerable to our criticisms above. On the other hand, if mirroring is not restricted to perceptual inputs, then (as we shall argue shortly) it becomes difficult to distinguish mirroring from imagining. The alternative strategy called "embodied simulation" by Gallese (2009) rests on acceptance of the motoric requirement that only an agent's bodily movements could be matched onto the observer's motor repertoire, in accordance with Rizzolatti et al.'s (2004) definition. According to embodied simulation, mirroring processes have the capacity to convert an input that does not meet the affectivity condition into an output that does. On the direct-matching model of action understanding, if an agent performs a goal-directed action, mirroring takes as input the agent's bodily movements. By covertly rehearsing the agent's bodily movements, which can be directly perceived, the observer comes to share the agent's goal or intention, which cannot be directly perceived. So mirroring should be able to convert the perception of an agent's perceived bodily movements into a shared goal or intention. Furthermore, the direct-matching model of action understanding can be easily extended to expressive actions: if the agent performs an expressive action, mirroring also takes as input the agent's bodily movements.
Moreover, by covertly rehearsing the agent's bodily movements, the observer comes to share the agent's affect, which cannot be directly perceived. In a nutshell, embodied simulation assumes that mirroring can convert the perception of an agent's bodily movements into a shared affect. The basic challenge for the approach to empathy based on embodied simulation is whether it has the resources to distinguish empathetic responses, which satisfy the ascription condition,
from contagious responses, which do not. It is unlikely that mirroring another's affect alone could be sufficient for empathizing because by mirroring an agent's expressive action, one could at best share her affect. But sharing is not ascribing. In response to this basic challenge and on behalf of embodied simulation, Gallese and Sinigaglia (2011) have tried to draw a distinction between two kinds of attribution or ascription, which they call respectively functional and representational. They acknowledge that the mirror mechanism can only play a causal, not a constitutive role, in representational attribution, but they claim that it plays a constitutive role in functional attributions. To attribute a belief, an intention or an affect to an agent in the representational sense amounts to forming a belief about (or metarepresenting) the agent's relevant belief, intention or affect. Clearly, Gallese and Sinigaglia concede that sharing another's intention or affect is not sufficient for ascribing it in this representational sense. On the other hand, they argue that there is another functional sense in which "an attribution is a representation of a goal, intention or belief which plays some role in enabling one to deal with an agent by virtue of its being appropriately related to that agent's goal, intention or belief" (ibid.: 517). For example, it seems as if one meets the condition for attributing a goal or intention to an agent in the functional sense if one entertains a common or joint goal with another on the basis of which one can perform some joint action (e.g. moving a piece of furniture together). But if so, then it seems as if there is no real difference between sharing another's goal or intention and attributing this goal or intention to another in the functional sense. Attributing in the functional sense looks much more like sharing than like attributing.
In this functional sense then, there is no real difference between sharing another's affect and attributing this affect to another. But if so, then attributing an affect to another in the functional sense may meet the condition for experiencing contagious vicarious affects, but not for empathizing.

Low-level and high-level processes of mental simulation

Goldman's (2006) simulation-based approach to mindreading rests on two distinctive ingredients. On the one hand, he does not take simulation (let alone mirroring) to be constitutive of mindreading: instead, he takes the former to be at best causally relevant to, but not sufficient for, the latter, which further involves the ascription (or projection) of a psychological state to another. On the other hand, he draws a distinction between high-level and low-level processes of mental simulation. On his account, while mirroring exemplifies low-level simulation, imagination exemplifies high-level simulation. But as we noted in the previous section, and as we shall now spell out more fully, this is not compatible with Goldman's (2009) flexible definition of mirroring. As Goldman (2011: 199) has put it on behalf of his version of the simulation approach to mindreading, "empathy is a key to mindreading [. . .], the most common form of mindreading." Empathy could only be the most common form of mindreading if interpersonal similarity was a necessary condition for both empathy and mindreading, in accordance with the simulation approach to mindreading. Unlike Goldman, we take interpersonal similarity as a condition on empathy, not on mindreading in general: one could form a belief about another's affect without feeling what she feels.
The evidence further suggests that empathy is not the default answer to one's awareness of another's affect (in particular, pain).6 We do not accept the simulation approach to mindreading, because although we take interpersonal similarity as a necessary condition on empathy, we do not take it as a necessary condition on non-empathetic mindreading. Nor do we think that Goldman's (2011) distinction between a mirroring and a reconstructive route to empathy best reflects his insightful distinction between low-level simulation
(mirroring) and high-level simulation (imagination). Instead, we think that the duality between contagious affective experiences and empathetic vicarious affective experiences better reflects Goldman’s distinction between low-level and high-level processes of mental simulation. It is widely recognized that there are at least two broad kinds of imaginative processes: propositional imagination (as when one imagines or supposes that p) and mental imagery or non-propositional imagination (as exemplified by visual or motor imagery). Only the latter is relevant to the analysis of high-level simulation. As it turns out, the combination of Goldman’s internalized definition of endogenous mirror neuron activity and of his liberal definition of exogenous mirror neuron activity is not entirely consistent with his own distinction between mirroring and imagining. Just to take one example, motor imagery, which he takes to be an example of high-level simulation, would meet the conditions for mirroring on his liberal definition.7 Since we fully accept Goldman’s latter distinction, we cannot accept his liberal approach to mirroring. We fully endorse a simulation-based approach to vicarious experiences. On our view, experiencing vicarious pain, or any other emotion, is to imagine being in pain, or feeling any other emotion. We assume that non-propositional imagining is equivalent to a process of mental simulation, whereby a psychological mechanism is being used off-line. Given its basic information-processing function, a cognitive mechanism takes canonical inputs and produces a canonical output in response. For example, when working on-line, vision takes retinal inputs and produces visual percepts; the motor system transforms motor instructions into the execution of motor acts; the decision system takes goals and beliefs as inputs and produces a decision as a basis for action. 
However, as several scientists and philosophers have argued, a cognitive mechanism can also be taken off-line. For example, visual imagery has been construed as an instance of imagining seeing (or visualizing) something, whereby one's visual system is run off-line: it is provided with inputs from memory, not retinal inputs. In response, it produces a visual image, instead of a visual percept. Motor imagery has been hypothesized to be the output of a process whereby the motor system is taken off-line and one imagines producing a movement. Finally, one's decision system has been hypothesized to be used off-line for the purpose of predicting another's decision, instead of taking a decision on the basis of which to act. Similarly, we assume that one can imagine being in standard pain, using one's own pain system off-line. Interestingly, recent neuroscientific evidence shows that the process of imagining being in pain involves brain activity similar to that involved in the experience of standard pain (Jackson et al., 2006; Ogino et al., 2007). The assumption is that the vicarious experience of another's emotion in general is the output of the process of imagining another's emotion by running off-line one's emotional system: for example, one experiences vicarious fear by running off-line one's own fear system.
Vicarious experiences

After dealing with the shortcomings of two major contenders, we now turn to our own preferred account of empathy, according to which interpersonal similarity is necessary and can be achieved thanks to a process of non-propositional imagination (or imagery). This imagination-based account avoids the problems that we have highlighted for the mirroring account. For example, part of the difficulties for the motoric mirroring model is that it involves exclusively motor processes. By contrast, the imagination model is not so restricted to actions. When imagining being in pain, one can imagine any component of what is involved in experiencing pain – whether it is the facial expression of pain, the bodily reaction or the affective unpleasantness.
Although we take interpersonal similarity to be necessary for vicarious experiences in general and for empathy in particular, we nonetheless agree with advocates of the direct-perception model of empathy that interpersonal similarity between a mindreader's psychological state and her target's affective state is neither a necessary nor even an enabling condition for non-empathetically mindreading another's affective state. We can now spell out the four conditions which we take to be necessary for one individual X to empathize with her target Y's psychological state:

i Affectivity condition: X is in some affective state or other s*;
ii Interpersonal similarity condition: X's affective state s* stands in some suitable similarity relation to Y's affective state s;
iii Causal path condition: X is caused to be in state s* by Y's being in state s;
iv Ascription condition: X's being in s* makes X aware that her being in s* is caused by Y's being in s.

We shall now highlight the crucial role played by the interpersonal similarity condition, while refining what we mean by "suitable similarity relation".

The scope and limits of interpersonal similarity

Acceptance of the condition of interpersonal similarity (ii) on empathy enables us to draw, as one should, the distinction between empathy and non-empathetic mindreading. It also enables us to distinguish empathy from sympathy. Sympathy is a kind of sui generis social affective attitude: no matter what another's affective experience is (e.g. pain, jealousy, anger), to sympathize with her is to feel sorry for her. In contrast, we assume that only if the empathizer's affective state stands in some relevant similarity relation to her target's affective state can the former be said to empathize with the latter. However, one may note that sometimes sympathy seems to meet the interpersonal similarity condition as well. Suppose Y sympathizes with X, who feels sorry because her husband is deeply sick.
If Y sympathizes with X, then Y feels sorry for X. If so, then on the face of it, X and Y experience the same emotion: they both feel sorry. Should that count as a clear-cut case of empathy (Zahavi, 2011; Michael, 2014; Deonna, 2007)? Not necessarily. Both feel sorry, but the intentional object of their respective sorrow is entirely different. One feels sorry because her husband is deeply sick, and the other feels sorry about her friend's sorrow. The precise extent of interpersonal similarity is still to be determined, but it can safely be assumed that it is having the same intentional object that matters. We claim that the difference in intentional content between X's and Y's feelings is inconsistent with the interpersonal similarity condition. As a result, we claim that Y fails to empathize with X. Instead, it is a mere coincidence if Y's sui generis feeling sorry for X overlaps with X's feeling sorry about her husband's sickness. And Y's feeling sorry is better construed as an example of sympathy for X rather than of empathy with X. Moreover, it is worth noting that in many cases the intentional object of a vicarious emotion is likely to be less determinate than that of the emotion that caused it. For instance, X is afraid of a specific bully at school whereas Y, who empathizes with X, is vicariously afraid of bullies. The intentional object of the vicarious experience of fear may even be so indeterminate that it could be phrased as "whomever X is afraid of". As a result, imposing interpersonal similarity on empathy turns out not to make excessive cognitive demands on empathy, as it does not require complete background knowledge about the person that one empathizes with. On our account, the interpersonal similarity condition is a necessary condition on empathy. But this is not to say that it is sufficient. Suppose that individuals X and Y are both afraid
as a result of hearing a dog's loud barking. In this case, X and Y share their fear as a result of a common cause (the dog's loud barking). But neither need empathize with the other. So interpersonal similarity is not a sufficient condition on empathy. In fact, our condition (iii) is precisely meant to distinguish the vicarious experience of an emotion, which is caused by another's standard (or non-vicarious) emotion and also resembles it, from cases in which the interpersonal similarity condition is met just by coincidence (e.g. in virtue of some common cause), so that one's state is not caused by another's standard emotion. A last condition is required if empathy is not to be confused with emotional contagion. Indeed, both empathetic and contagious responses to another's affective state satisfy our first three conditions, and thus constitute vicarious states. The crucial question that arises is: why do some, but not all, vicarious affective states contribute to affective mindreading? Empathetic vicarious states do, but contagious vicarious states do not. In order to distinguish empathetic from contagious responses, a further condition must be added, which only empathy can meet and which we call the ascription condition (iv): namely, the empathizer must be aware of the target's affective state. In a nutshell, empathetic experiences contribute to affective mindreading because they are vicarious responses that are other-directed. By contrast, contagious states are self-centered. In the next section, we shall explore the differences between contagious pain and empathetic pain, and more generally, between contagious experiences and empathetic experiences. It will turn out that whereas self-centered responses to another's pain focus on one's own specific bodily feelings, other-directed responses focus instead on the affective dimension of the unpleasantness of pain. 
The duality of vicarious pain

Of particular importance for the understanding of vicarious pain is the widely recognized dual nature of painful subjective experiences: physical pain has both a sensory component (the intensity of pain and its bodily location) and an affective (or evaluative) component (the unpleasantness of pain). Since the affective component lacks somatotopic organization, the unpleasantness of pain that it represents seems dissociable from the bodily location of pain.8 When one experiences standard pain as a result of some bodily injury, both components are active. But in vicarious pain, what components are active? The neuroscientific evidence indicates that an experience of vicarious pain can be primarily – but by no means exclusively – generated by the selective activation of one or other of the two components of physical pain: the sensory-discriminative or the affective component. For instance, using one experimental paradigm, Avenanti et al. (2005) found that seeing a needle deeply penetrate another's hand causes in the observer the same sensorimotor response (i.e. muscle-specific freeze) as in the person whose hand is being penetrated. By contrast, using a different experimental paradigm, Singer et al. (2004) found that experiencing pain and observing another's pain selectively activate the same affective component of the pain neural matrix, with no activation of the sensorimotor component. When participants were explicitly asked to pay more attention to the intensity of pain or to its bodily location, both the affective and the sensory components of pain were activated (e.g. secondary somatosensory cortex, cf. Cheng et al., 2007; Lamm et al., 2007). However, none of these studies found a somatotopic organization of the brain responses (and no activity in primary somatosensory cortex). In other words, vicarious pain was not encoded in a particular part of the participants' bodies. Thus, it seems as if there are two types of vicarious pain: vicarious sensory pain and vicarious affective pain. 
Whereas the former is body-part specific, the latter is indifferent to the bodily location of the pain. Whereas the former is automatic (Avenanti et al.,
2006), the latter can be inhibited and is subject to top-down modulation by a wide range of factors (for review, see Engen and Singer, 2013). Thus, as we read it, the neuroscientific evidence shows three things. First, it shows that the brain activity underlying vicarious pain partially overlaps with the brain activity underlying physical pain. On the widespread assumption that overlap of brain activity is part of a sufficient condition for shared experience (either between two individuals at the same time or within a single individual at two different times), this supports the claim that one can share to some extent another’s experience of physical pain. But secondly, since partial overlap is not identity, it also shows that vicarious pain should not be confused with physical pain. Finally, it shows that there are two kinds of vicarious experiences of pain: unlike vicarious sensory pain, vicarious affective pain is not localized in a particular bodily part. How should this empirical dissociation be interpreted in the light of the conceptual distinction between contagious and empathetic responses? As we have argued, what matters to the distinction between contagious and empathetic responses to another’s pain is the ascription condition (iv). We assume that a vicarious experience of pain cannot be both other-directed and self-centered. Let us first consider sensory vicarious pain. As we mentioned earlier, Avenanti et al. (2005) reported that seeing another’s hand being subjected to painful stimulation causes motor inhibition in the participants’ own corresponding hands. Interestingly, this response seems to be primarily self-centered, as shown by the following findings. First, the effect was not increased when participants explicitly adopted the target’s perspective. In a follow-up study, Avenanti et al. 
(2006) indeed found no difference when participants were asked respectively to focus on the qualities of the painful event or to mentally simulate the target's pain. One would have expected the opposite result if the motor response were other-directed. Secondly, they did not find any correlation between the strength of the responses and the participants' scores on empathy questionnaires. Finally, a recent study using the same experimental paradigm recorded motor inhibition only when the hand that the needle penetrated was presented from a first-person visuo-spatial perspective, but not when it was presented from a third-person perspective (Bucchioni et al., in press). Thus, following Avenanti and colleagues (2009) and Bucchioni and colleagues (in press), we propose to interpret vicarious sensory pain in terms of self-centered contagious pain. When seeing another's hand subjected to painful stimulation, while knowing nothing about whose hand it is, one maps the other's bodily part subjected to painful stimulation onto one's own bodily counterpart, and one anticipates the sensorimotor consequences of pain at this bodily location. As a result, one's experience of vicarious pain is both anticipatory and entirely self-centered: it is an instance of contagious pain, not empathetic pain. By contrast, one vicariously experiences the unpleasantness of another's pain by activating the affective component of one's own pain system. This does not require pain to be represented at a definite bodily location. Unlike vicarious sensory pain, vicarious affective pain is other-directed, as confirmed by several empirical findings (Singer et al., 2004, 2006). The most conclusive example is the following study. Participants were told that some patients reacted with pain when they received a soft touch, but not when they were pinpricked. 
It was found that participants displayed activity in the affective component of pain only when they saw the patients being touched by a Q-tip (Lamm et al., 2010). Following these findings, we propose to interpret affective vicarious pain in terms of other-directed empathetic pain. In a nutshell, contagious pain and empathetic pain are two distinct vicarious experiences of pain. Whereas the former is self-centered, the latter is other-directed. We suggest that the
direction of intentionality (i.e. self-centered vs. other-directed) is determined by whether it is primarily the sensory or the affective component of pain that is vicariously activated. These differences between the two types of vicarious experiences help us understand why affective vicarious experiences alone can meet the ascription condition. In either standard pain or contagious pain, the unpleasantness of actual or hypothetical pain is correlated with the localization of pain in some definite bodily part. By contrast, in vicarious affective pain, there is an asymmetry between the strong activity of the affective component (which generates a strong psychological disarray) and the weak activity of the sensory component of the pain system (which generates a weak global bodily feeling). The lack of bodily location makes empathetic pain a highly specific type of pain. One can mis-localize standard pain (e.g. referred pain), but one can never experience standard pain without ascribing it to a rough bodily location. The experience of the unpleasantness of standard pain motivates a selective range of bodily movements, whose function is to prevent or alleviate actual or potential pain (e.g. remove your hand from the hot stove), and which is driven by the bodily location of pain conveyed by the sensory component of pain. However, in empathetic pain, the sensory component of pain is not active at all or very weakly so. Consequently, the feeling of empathetic pain has no definite bodily location and no definite sensorimotor expectation can be generated. Lacking definite sensorimotor expectations about the consequences of pain at a definite bodily location, one feels instead a global bodily feeling of the unpleasantness of generic pain. As a result, one becomes aware that one’s own psychological disarray is being caused by another’s standard pain. This, we surmise, is why experiences of empathetic pain alone meet the ascription condition.
Beyond empathy for pain

Michael and Fardo (2014) have recently raised three related objections against the above account of empathetic pain. First, the question arises whether the complexity of our account of the ascription condition (necessary for empathetic pain) is really justified. Secondly, we heavily rely on neuroscientific findings, but the interpretation of the findings is controversial. Thirdly, our account of empathetic pain does not seem to generalize to other types of vicarious emotions.

The ascription condition

We argue that empathetic experiences of pain are other-directed in virtue of an inferential process whereby one monitors the activity of the affective component of one's own pain system. By contrast, Michael and Fardo (2014) endorse what looks like the simpler suggestion that vicarious experiences of pain are bound to be other-directed from the start, in virtue of their perceptual origins. They assume that it is sufficient for the ascription condition to be met that a vicarious experience of pain results from the perception of other people in pain. Their suggestion, however, fails to account for the difference between contagious and empathetic vicarious states. If they agree that only the latter, not the former, can contribute to affective mindreading, as they seem willing to, then the reason must lie not in what they have in common, but instead in what makes them different from one another. All vicarious experiences of pain share the same kinds of inputs: awareness of cues indicating another's standard pain. So the distinctive other-directedness of empathetic pain cannot directly stem from the inputs to both kinds of vicarious experiences of pain. It must be generated at a later stage in the process, whereby one becomes primarily aware of the activity of the affective component of one's own pain system, at the expense of the sensorimotor component.
The pain matrix revisited

Michael (2014) and Michael and Fardo (2014) further argue that recent work by Iannetti and colleagues showing that activation of the pain matrix is not restricted to responses to nociceptive stimuli casts doubt on our account of empathetic pain. We disagree. On the one hand, Iannetti et al. (2013) have argued that overlap of brain activity between physical pain and social pain (caused by social exclusion) cannot show that social pain "hurts". On the other hand, Legrain et al. (2011) report that the pain matrix can be activated in response not merely to nociceptive stimuli but also to salient visual, auditory or tactile stimuli in the space immediately surrounding the body. If so, then arguably the pain matrix should be relabeled the alarm matrix, which can be activated by all sorts of threats lying close to the body or on the body. For example, awareness that another person is in pain can also trigger the alarm. Arguably these findings shed light on the nature of physical pain itself: pain is an alarm system. If so, then the affective component of pain is the evaluative component of this alarm system: by offering a negative evaluation of an actual or potential threat to one's bodily integrity, it motivates an appropriate response (Cutter and Tye, 2011; Bain, 2013).9 The affective component is associated with the dedicated sensory component of pain when the disturbance falls within the limits of the body. If the disturbance lies immediately outside the body and may harm it, then the affective component can also be associated with other sensory representations – visual or auditory. On this account, empathetic pain (i.e. vicarious affective pain) is generated by the evaluative activity of the affective component of one's pain system because the affective component of the pain system works mostly as an alarm system that evaluates, and motivates responses to, threats. 
In standard pain, the affective component of one’s pain system is triggered by the detection of threats to one’s own body. But it can also be activated by the detection of stimuli that are threats not to one’s own body, but to another’s body instead (de Vignemont, forthcoming). Empathetic pain thus meets the interpersonal similarity condition. Far from disproving our account of empathetic pain, the findings by Iannetti and colleagues showing that an individual’s pain matrix can be activated in the absence of nociceptive stimuli are consistent with our account.
From empathy for pain to empathy for emotions

The next question to be addressed is the scope and limits of our account, which seems restricted to empathetic pain. Pain, however, is far from being a prototypical emotion. The crucial question is whether it makes sense to draw a distinction between two kinds of vicarious responses in the case of other emotions (e.g. fear and disgust). Does it make sense to distinguish contagious fear (or contagious disgust) from empathetic fear (or empathetic disgust), where the former is supposed to be fundamentally self-centered and the latter is supposed to be fundamentally other-directed? In other words, the question is: what is it about the content of contagious fear (or contagious disgust) that makes it self-centered? What is it about the content of empathetic fear (or empathetic disgust) that makes it other-directed? Most emotions may not have exactly the same dual nature as pain. Still, on some accounts at least, they can be characterized in terms of two distinct dimensions, namely, their evaluative dimension and their bodily dimension. Most conceptions of emotions have actually oscillated between over-intellectualizing them and over-embodying them. On the one hand, some theories have focused on the intentionality of the emotions (e.g. fear of something), thereby accounting for emotions in purely cognitive terms (Solomon, 1993). On the other hand, other theories have focused on the phenomenology of the emotions (e.g. I feel frightened), thereby
accounting for some (if not all) emotions in terms of experiencing bodily changes (e.g. James, 1884; Damasio, 1999; Prinz, 2004). Some recent proposals, however, suggest an intermediate approach, according to which emotions are both bodily and evaluative attitudes:

we understand why emotions are evaluations once we admit that they relate to values by virtue of being experiences of one's body being ready or poised to act in some specific manner towards a given object or situation.
(Deonna and Teroni, 2014, p. 29)

Emotions have two fundamental dimensions: on the one hand, as their phenomenology shows, they are anchored to basic bodily feelings. On the other hand, they have a basic evaluative function: to experience an emotion is to evaluate or appraise some event, fact, property or object in a distinctive way, which is in turn revealed by some specific associated action-readiness.10 We shall argue that each of the two basic components of standard emotions can be mapped onto each of the two kinds of vicarious emotions. Let us first consider contagious experiences. For example, I am in the middle of a crowd and someone starts panicking. The panic automatically spreads to everybody, including me. What do I experience? It seems relatively uncontroversial that I experience contagious fear. I feel afraid: I feel my heart beating faster and also the urge to run, as much as everybody else around me. My contagious fear is primarily driven by the bodily feelings associated with fear, not by the evaluative affective component of fear. I may become aware of the immediate source of my vicarious fear from different cues. But if so, then this information is not conveyed by the activity of the evaluative affective component of fear. My vicarious fear is thus strongly embodied. 
This is why most instances of emotional contagion are described in embodied rather than in affective terms: one talks of contagious crying or contagious laughter rather than contagious distress or contagious happiness. Similarly, experiences of vicarious sensory pain are vicarious experiences of strongly embodied aspects of pain: they are primarily self-centered and represent distinctive bodily parts. By contrast, suppose I perceive cues of a child’s fear of a lion behind bars in a zoo. I may not be afraid of the lion myself. Nonetheless, even if I am not, I can still vicariously feel the child’s fear of the lion. Clearly, my vicarious fear of the lion is quite different from my undirected contagious fear caused by crowd panic. What primarily drives empathetic fear is the affective evaluative component of fear, not the bodily feelings associated with fear. My contagious fear need not represent any intentional object. My empathetic fear, on the other hand, must be directed and be about something, e.g. the lion. More specifically, my vicarious fear of the lion consists in an evaluative representation of the lion as dangerous. But how does my empathetic fear of the lion differ from the child’s standard fear? How can it meet the ascription condition? We assume that an agent’s standard emotion involves both an evaluative appraisal and a bodily feeling, both anchored to the agent’s own bodily perspective. Now the evaluative component of an agent’s standard emotional experience involves a distinctive set of parameters. On the one hand, danger is always appraised relative to some agent: what is dangerous for a young child is not necessarily dangerous for a healthy adult. On the other hand, the evaluative component of an agent’s fear involves standards of appraisal of the danger of a threatening stimulus, relative to the agent’s own cognitive resources and values. 
For example, the evaluative component of the child's experience of fear involves an appraisal of the danger of the lion behind bars in the zoo, at a location near the child's body, relative to the child's own values and cognitive resources. An experience of either standard fear or contagious fear is primarily self-centered: it is likely to directly cause one to run away from the source of the fearful experience in order
to protect oneself. But what underlies the experience of empathetic vicarious fear is primarily the activity of the evaluative component of one's own fear system (at the expense of the bodily feeling of fear). In empathetic vicarious fear, there may be a discrepancy between danger as appraised by one's own standards and one's awareness of the cues of another's fear. If so, given that by one's own standards of appraisal of danger one should not experience fear at all, one must shift one's own standards in order to make sense of the cues of another's fear. In the case of pain, empathetic pain is generated by running off-line the affective component of one's pain system (de Vignemont and Jacob, 2012). In the case of empathetic fear, one appraises danger according to someone else's standards of evaluation by running off-line one's fear system. This is what it takes to respond empathetically to another's cues of fear: one uses standards of appraisal of danger that belong to someone else so that one can run off-line the evaluative component of one's fear system. If one does not share those standards, then one must shift one's own standards in order to match the other's. Consequently, experiences of empathetic fear whereby one runs off-line the evaluative component of one's fear system are fundamentally other-directed. Hence, two features make a vicarious experience other-directed, and thus empathetic: (i) it necessarily consists of an evaluative attitude, and (ii) the evaluation is performed on the basis of another individual's standards. For example, I am able to appraise the presence of the lion behind bars as dangerous for myself and the child, according to the child's cognitive resources and values. Thus, if and when I experience empathetic vicarious fear, I am not tempted to run away from the lion at all, but instead to move towards the child and to comfort her by trying to change her standards of appraisal of danger, e.g. by
pointing to the protective bars.11 To recapitulate, we made two basic points. On the one hand, empathetic emotional experiences differ from contagious experiences because they are evaluative attitudes that face outward. On the other hand, they differ from standard emotions because in empathetic emotional experiences, one shifts one's standards of evaluation relevant to a given emotion to match another's standards of evaluation. In virtue of these features, empathetic experiences meet both the interpersonal similarity condition and the ascription condition. So our account of empathy in terms of interpersonal similarity allows us to distinguish it from other related social attitudes, not only in the case of pain but also in the case of other emotions. Given the necessity of interpersonal similarity for empathy, the following question now arises: what does it take to undergo vicarious experiences? It would be puzzling how one individual's standard experience of s could give rise to another individual's vicarious experience of the same state unless there was a mechanism enabling one individual to map another individual's standard experience of affective state s at t onto a vicarious experience of the same state at t+1. But what is this mechanism? In line with our application of the imagination-based model to the case of vicarious emotions such as fear, we assume both that one's experience of fear is the canonical output of one's fear system and that one's vicarious experience of fear is the output of one's fear system taken off-line.
Concluding remarks

No doubt, a certain amount of stipulation is unavoidable in the way one uses quasi-technical terms such as "empathy". This is why, at the outset, we took it as a condition of adequacy on an account of empathy that it ought to recognize the distinction between it and four related, though distinct, psychological phenomena (standard emotion, affective contagion, sympathy and emotion ascription). On the one hand, we take it as corroborating evidence
for the non-propositional imagination model of empathy that, unlike two major contending accounts – the mirroring account and the direct-perception model – it can meet the above condition of adequacy. On the other hand, while our twofold account of vicarious experiences was primarily designed to explain empathetic pain, it turns out to be applicable to a wide range of vicarious emotional experiences.
Notes

1 We dedicate this paper to the memory of Marc Jeannerod. We are very grateful to Joel Smith and Julian Kiverstein for their detailed criticisms on this chapter. We gratefully acknowledge the support of ANR. We also gratefully acknowledge support of the European Research Council under the European Union's Seventh Framework Program (FP7/2007–2013)/ERC grant agreement n° [609819], SOMICS.
2 For a more relaxed view according to which one can empathize with another's action, see Rizzolatti and Craighero (2005: 108): "by observing others, we enter in an 'empathic' relation with them. This empathic relation concerns not only the emotions that others feel but also their actions."
3 As Zahavi (2008: 518) puts it, "affective and emotional states are not simply qualities of subjective experience, rather they are given in expressive phenomena, i.e. they are expressed in bodily gestures and actions, and they thereby become visible to others."
4 By restricting empathy to the boundaries of one's perceptual acquaintance with another's expressive behaviour, advocates of the direct-perception model exclude the possibility that one may empathize with a person who is absent while being referred to by a reliable speaker.
5 "What is being suggested is not that every aspect of the mental life of others is perceptually accessible" (Zahavi, 2011: 551).
6 Cf. Singer et al. (2006).
7 The distinction between low-level and high-level simulation is problematic (de Vignemont, 2009). Consequently, whereas Goldman (2009) clearly denies that motor imagery is an instance of mirroring, Goldman (2011) is willing to acknowledge that some cases of motor imagery are instances of mirroring and others instances of E-imagining.
8 In fact, the sensorimotor and the affective components of pain are dissociated in pain asymbolic patients who no longer seem to mind the pain. 
9 As Bain (2013: S71) puts it, "a subject's being in unpleasant pain consists in his (i) undergoing an experience (the pain) that represents a disturbance of a certain sort, and (ii) that same experience additionally representing the disturbance as bad for him in the bodily sense."
10 The evaluative attitude can be about an external non-bodily object or event (a lion, for example), but it can also be about the subject's own body (i.e. reflexive emotions). Even in this latter case, the distinction between the two dimensions holds: the body is both a source of feelings and an intentional object.
11 The standard experience of disgust also involves an evaluative component. One can also experience vicarious empathetic disgust by running off-line the evaluative component of one's disgust system in order to match another's standards of appraisal of dangerous food.
References
Avenanti, A., Bueti, D., Galati, G. & Aglioti, S. (2005). Transcranial magnetic stimulation highlights the sensorimotor side of empathy for pain. Nature Neuroscience, 8(7), 955–960.
Avenanti, A., Minio-Paluello, I., Bufalari, I. & Aglioti, S. (2006). Stimulus-driven modulation of motor-evoked potentials during observation of others' pain. NeuroImage, 32(1), 316–324.
Avenanti, A., Minio-Paluello, I., Sforza, A. & Aglioti, S. (2009). Freezing or escaping? Opposite modulations of empathic reactivity to the pain of others. Cortex, 45(9), 1072–1077.
Bain, D. (2013). What makes pains unpleasant? Philosophical Studies, 166(1), 69–89.
Bucchioni, G., Fossataro, C., Cavallo, A., Mouras, H., Neppi-Modona, M. & Garbarini, F. (in press). Empathy or ownership? Evidence from corticospinal modulation during pain observation. Journal of Cognitive Neuroscience.
Cheng, Y., Lin, C. P., Liu, H. L., Hsu, Y. H., Lim, K. E., Hung, D. & Decety, J. (2007). Expertise modulates the perception of pain in others. Current Biology, 17(19), 1708–1713.
Vicarious experiences
Cutter, B. & Tye, M. (2011). Tracking representationalism and the painfulness of pain. Philosophical Issues, 21(1), 90–109.
Damasio, A. (1999). The Feeling of What Happens. London: William Heinemann.
Deonna, J. A. (2007). The structure of empathy. Journal of Moral Philosophy, 4(1), 99–116.
Dretske, F. (1969). Seeing and Knowing. Chicago: University of Chicago Press.
———. (1973). Perception and other minds. Noûs, 7(1), 34–44.
Engen, H. G. & Singer, T. (2013). Empathy circuits. Current Opinion in Neurobiology, 23(2), 275–282.
Gallagher, S. (2008). Direct perception in the intersubjective context. Consciousness and Cognition, 17, 535–543.
Gallese, V. (2009). Mirror neurons and the neural exploitation hypothesis: From embodied simulation to social cognition. In J. A. Pineda (Ed.), Mirror Neuron Systems: The Role of Mirroring Processes in Social Cognition (pp. 163–190). New York: Humana.
Gallese, V. & Goldman, A. (1998). Mirror neurons and the simulation theory of mindreading. Trends in Cognitive Sciences, 2, 493–501.
Gallese, V. & Sinigaglia, C. (2011). What is so special about embodied simulation? Trends in Cognitive Sciences, 15(11), 512–519.
Goldman, A. (2006). Simulating Minds: The Philosophy, Psychology and Neuroscience of Mindreading. Oxford: Oxford University Press.
———. (2009). Mirroring, mindreading, and simulation. In J. Pineda (Ed.), Mirror Neuron Systems: The Role of Mirroring Processes in Social Cognition (pp. 311–330). New York: Humana Press. Reprinted in Goldman, A. (2013). Joint Ventures: Mindreading, Mirroring, and Embodied Cognition (pp. 89–109). Oxford: Oxford University Press.
———. (2011). Two routes to empathy: Insights from cognitive neuroscience. In A. Coplan and P. Goldie (Eds.), Empathy: Philosophical and Psychological Perspectives (pp. 31–44). New York: Oxford University Press. Reprinted in Goldman, A. (2013). Joint Ventures: Mindreading, Mirroring, and Embodied Cognition (pp. 198–217). Oxford: Oxford University Press.
Iannetti, G. D., Salomons, T. V., Moayedi, M., Mouraux, A. & Davis, K. D. (2013). Beyond metaphor: Contrasting mechanisms of social and physical pain. Trends in Cognitive Sciences, 17(8), 371–378.
Jackson, P., Brunet, E., Meltzoff, A. & Decety, J. (2006). Empathy examined through the neural mechanisms involved in imagining how I feel versus how you feel pain. Neuropsychologia, 44(5), 752–761.
Jacob, P. (2011). The direct-perception model of empathy: A critique. Review of Philosophy and Psychology, 2(3), 519–540.
———. (2015). Empathy and the disunity of vicarious experiences. Rivista Internazionale di Filosofia e Psicologia, 6(1), 4–23.
James, W. (1884). What is an emotion? Mind, 9, 188–205.
Keysers, C., Wicker, B., Gazzola, V., Anton, J. L., Fogassi, L. & Gallese, V. (2004). A touching sight: SII/PV activation during the observation and experience of touch. Neuron, 42, 335–346.
Krueger, J. (2012). Seeing mind in action. Phenomenology and the Cognitive Sciences, 11, 149–173.
Krueger, J. & Overgaard, S. (2012). Seeing subjectivity: Defending a perceptual account of other minds. In S. Miguens and G. Preyer (Eds.), ProtoSociology: Consciousness and Subjectivity, 47, 239–262.
Lamm, C., Batson, C. D. & Decety, J. (2007). The neural substrate of human empathy: Effects of perspective-taking and cognitive appraisal. Journal of Cognitive Neuroscience, 19(1), 42–58.
Lamm, C., Meltzoff, A. & Decety, J. (2010). How do we empathize with someone who is not like us? A functional magnetic resonance imaging study. Journal of Cognitive Neuroscience, 22, 362–376.
Legrain, V., Iannetti, G. D., Plaghki, L. & Mouraux, A. (2011). The pain matrix reloaded: A salience detection system for the body. Progress in Neurobiology, 93(1), 111–124.
Michael, J. (2014). Towards a consensus about the role of empathy in interpersonal understanding. Topoi, 33(1), 157–172.
Michael, J. & Fardo, F. (2014). What (if anything) is shared in pain empathy? A critical discussion of de Vignemont and Jacob's theory of the neural substrate of pain empathy. Philosophy of Science, 81(1), 154–160.
Ogino, Y., Nemoto, H., Inui, K., Saito, S., Kakigi, R. & Goto, F. (2007). Inner experience of pain: Imagination of pain while viewing images showing painful events forms subjective pain representation in human brain. Cerebral Cortex, 17, 1139–1146.
Pierre Jacob and Frédérique de Vignemont
Prinz, J. J. (2004). Gut Reactions: A Perceptual Theory of Emotion. Oxford: Oxford University Press.
Rizzolatti, G. & Craighero, L. (2005). Mirror neuron: A neurological approach to empathy. In J.-P. Changeux, A. Damasio and W. Singer (Eds.), Neurobiology of Human Values (pp. 107–123). Berlin: Springer.
Rizzolatti, G., Fogassi, L. & Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2, 661–670.
———. (2004). Cortical mechanisms subserving object grasping, action understanding, and imitation. In M. Gazzaniga (Ed.), The Cognitive Neurosciences III (pp. 427–440). Cambridge, MA: MIT Press.
Scheler, M. (1954). The Nature of Sympathy (P. Heath, trans.). London: Routledge and Kegan Paul.
Singer, T., Seymour, B., O'Doherty, J., Kaube, H., Dolan, R. & Frith, C. (2004). Empathy for pain involves the affective but not sensory components of pain. Science, 303, 1157–1162.
Singer, T., Seymour, B., O'Doherty, J., Stephan, K., Dolan, R. & Frith, C. (2006). Empathic neural responses are modulated by the perceived fairness of others. Nature, 439, 466–469.
Smith, J. (in press). Vision and the ontology of emotion and expression.
Solomon, R. C. (1993). The philosophy of emotions. In J. M. Haviland-Jones and M. Lewis (Eds.), Handbook of Emotion (pp. 3–15). New York: The Guilford Press.
Teroni, F. & Deonna, J. A. (2014). In what sense are emotions evaluations? In S. Roeser and C. Todd (Eds.), Emotion and Value (pp. 15–31). Oxford: Oxford University Press.
de Vignemont, F. (2009). Drawing the boundary between low-level and high-level mindreading. Philosophical Studies, 13(4), 457–466.
———. (forthcoming). Can I see your pain? An evaluativist model of pain perception. In Jennifer Corns (Ed.), Routledge Handbook of Philosophy of Pain.
de Vignemont, F. & Jacob, P. (2012). What is it like to feel another's pain? Philosophy of Science, 79, 295–316.
de Vignemont, F. & Singer, T. (2006). The empathic brain: How, when and why? Trends in Cognitive Sciences, 10(10), 435–441.
Wicker, B., Keysers, C., Plailly, J., Royet, J. P., Gallese, V. & Rizzolatti, G. (2003). Both of us disgusted in my insula: The common neural basis of seeing and feeling disgust. Neuron, 40, 655–664.
Zahavi, D. (2007). Expression and empathy. In D. D. Hutto and M. Ratcliffe (Eds.), Folk Psychology Re-assessed (pp. 25–40). Dordrecht: Springer.
———. (2008). Simulation, projection and empathy. Consciousness and Cognition, 17, 514–522.
———. (2011). Empathy and direct social perception: A phenomenological proposal. Review of Philosophy and Psychology, 2(3), 541–558.
30
PHENOMENOLOGY OF THE WE
Stein, Walther, Gurwitsch

Dan Zahavi and Alessandro Salice
Introduction
Has phenomenology anything of interest to say on the topic of intersubjectivity? Whereas the received view in the heyday of Critical Theory was negative – due to its preoccupation with subjectivity, phenomenology was taken to be fundamentally incapable of addressing the issue of intersubjectivity in a satisfactory manner (cf. Habermas 1988) – recent decades of research have done much to disprove this verdict. As closer scrutiny of the writings of such figures as Husserl, Scheler, Reinach, Stein, Heidegger, Gurwitsch, Sartre, Merleau-Ponty and Levinas has revealed, intersubjectivity, be it in the form of a concrete self–other relation, a socially structured life-world, or a transcendental principle of justification, is ascribed an absolutely central role by phenomenologists. It is no coincidence that the first philosopher ever to engage in a systematic and extensive use of the very term intersubjectivity (Intersubjektivität) was Husserl. The fact that most of the phenomenologists wrote on the topic is, however, not to say that they all agreed on how to approach and handle it. One important internal division within phenomenology is precisely centred on the question of how best to conceive of the foundations of sociality (cf. Zahavi 1996, 2001). Should one prioritize the concrete face-to-face encounter and highlight the importance of the difference between self and other (cf. Sartre 1956: 246–247), or should one rather focus on an everyday being-with-one-another characterized by anonymity and substitutability, where others are those from whom "one mostly does not distinguish oneself" (Heidegger 1996: 111)? The specific divide in question is also played out in divergent phenomenological accounts of we-intentionality that seek to understand the nature of emotional sharing, group membership, communal experiences, joint agency, etc.
Some attempted to approach we-intentionality through an analysis of (reciprocal) empathy; others denied that social cognition of any kind was the key to collective intentionality and argued that, rather than being founded upon an other-experience, the we preceded any such experience. In the following contribution, we will exemplify and investigate this specific tension. We will first discuss the respective contributions of Stein and Walther, who in different ways both highlight the importance of empathy and face-to-face recognition. We will then in a subsequent step present Gurwitsch's criticism of Walther and consider his own alternative contribution.
Edith Stein on empathy
In the first decades of the 20th century, many phenomenologists investigated empathy and insisted that it constituted the basis of interpersonal understanding (cf. Zahavi 2014a, 2014b). What do they understand by the term 'empathy'? Generally speaking, they describe empathy as a distinct form of other-directed intentionality that targets others' mental states. More specifically, and this is something all phenomenologists agree on, in empathizing, the other's mental states are given to the subject precisely as mental states that the other (and not the empathizing subject) is living through. The importance of such self–other differentiation for empathy is advocated by both Edith Stein and Gerda Walther, who also insist on the importance of empathy for any plausible account of collective intentionality. That is, if we intend to do something together, or if we feel something together, such sharing of intentions and experiences has to presuppose empathy, or so Stein and Walther would contend. One might wonder, however, how the idea of experiential sharing is to be spelled out more precisely and in which sense sharing an experience presupposes empathy. After all, if the idea of experiential sharing suggests a certain communality and perhaps even unification among the subjects, then how can this idea be squared with the radical self–other differentiation preserved in empathy? In this and the next section, we tackle Stein's and Walther's theories of empathy and collective intentionality and attempt to provide an answer to these questions. Already in her dissertation, On the Problem of Empathy, from 1916, Stein distinguishes empathy from other mental states that can be elicited by the empathizing subject and that, by relying on empathy, are closely related to – and yet clearly distinct from – empathy itself.
When Stein talks of empathy, she talks of an intentional act by means of which the subject is directed towards the other's mental states in an intuitive and quasi-perceptual way, e.g., when empathically grasping the other's sadness, I literally see it; I neither infer it from a certain pattern of behaviour, nor do I project it onto the other. However, when I see the other's emotion – and, for the sake of simplicity, let us in the remainder of this chapter restrict our focus to emotions – the emotion is not given to me in the same way in which I experience my own emotion. This makes empathy an intentional act of a rather peculiar kind. It confronts me with the presence of an experience that I am not living through myself; it is given to me without being mine. Empathy allows the subject to be acquainted with the mental states of others. This is precisely why empathy highlights and preserves self–other differentiation. Stein's analysis emphasizes the cognitive and epistemic dimension of empathy. It is, consequently, not an inherently pro-social or benevolent act. This notwithstanding, Stein does conceive of empathy as a necessary precondition for the adoption of a responsive attitude towards the other. There are two radically different ways in which this could happen. First, while grasping an emotion that is perceived as distinct, other and different from one's own, the subject can be brought to emotionally respond to this emotion. When seeing the sadness in the other's face, I can, for instance, commiserate with her sadness. It is important to emphasize, however, that such an emotional response doesn't have to be pro-social: it is also possible to imagine scenarios in which, given certain motivations and psychological predispositions, one rejoices when realizing that the other is sad (this is what in German is called Schadenfreude).
In addition to this first kind of emotional reaction (whether pro- or anti-social), there is a second way in which the subject can participate in the affect of another subject – one that, apparently, cannot be thought of without collectivity or sharedness proper. This second kind typically encompasses cases in which the emotions of the involved subjects are interdependent in a distinctly integrative fashion, i.e., cases in which it is not (merely) me feeling an emotion and you feeling a similar emotion, but where the emotions in question are felt by the two of
us as being ours. For an example, consider one of Stein’s own: a special edition of the newspaper reports that a fortress has fallen (remember that her dissertation was written during World War I). As a result, the readers are all seized by ‘the same’ excitement and joy. A first question to ask is whether sharing the same emotion entails that the borders between the participating individuals have broken down. Stein denies this. On her account, I feel my joy, and I empathically comprehend the others’ joy and see it as the same as mine. As a result, our respective joys overlap and mesh. I come to feel their joy as mine and vice versa. What we feel when we share joy is consequently different from what I feel and what you feel in isolation. Hence, the we arises, as she puts it, from the “I” and the “you” (Stein 1989: 17–18). A number of interesting ideas are introduced here. Most relevant for our present purposes, however, is the suggestion that, rather than simply being a question of having the same kind of emotion as another, shared emotions involve a reference to the first-person plural. There is an interplay of both identification (with the other) and differentiation, of unity in plurality. But how can one make sense of these phenomenological facts? There are various aspects that, according to Stein, need to be considered in order to adequately account for shared experiences and, in particular, for shared emotions. To begin with, the interdependence of shared emotions has to be of the correct kind in order to exclude cases of merely co-regulated emotions. If we are to share an emotion together, the way in which your emotion and mine are interdependent must differ from the way in which, e.g., the sadistic rapist’s pleasure is interlocked with the terror of his victim (cf. Zahavi 2014b: 245). 
It is difficult to determine exactly what, according to Stein, makes interdependence be of the 'right' kind, but it seems plausible that at least three elements need to be in place. First, and this should be rather uncontroversial, the emotions have to be responses to the same object or event. If we rejoice together at the fall of the fortress, my emotion and your emotion are directed at the same correlate, namely, the military victory. But an identical correlate is not sufficient: if you and I are enemies, we can be emotionally directed at the same correlate, but the fortress's fall that causes your joy may elicit my desperation. Therefore, it seems that, for emotions to be shared, their correlate not only has to be the same, it also has to be evaluated and appraised in a similar manner: if we rejoice together at the fall of the fortress, then we both appraise the event in the same way. To clarify this emotive appraisal, one would have to determine the relation between emotions and values, but this aspect of her theory need not concern us here (cf. Vendrell Ferran 2015). Suffice it to say that identity in the value but difference in the correlate is also insufficient for an emotion to be shared: if I rejoice in your joy (which is, say, about the success of your exam), both emotions of joy are elicited by their respective correlates' being considered under the same value, but my joy targets your joy, whereas your joy targets your exam. As highlighted above, in this case, my feeling towards you is pro-social, but we are not sharing an emotion. Second, what we call here 'interdependence' should include the fact that the emotion of one individual 'enriches' (bereichern) that of the other (cf. Stein 1989: 18).
More specifically, given that our intentional acts are necessarily perspectival, features and aspects of the world are always accessed from a given perspective. When we share an emotion, your emotional access to the world, which depends on your current perspective on the world, is relevant to and has an impact on my emotional access to the world and vice versa – and this is because your perspective significantly bears on my issue, given that it is our issue. This leads to the third and probably most important feature that characterizes shared experiences – namely the fact that such mental states are experienced as being ours. As we saw above, Stein even goes so far as to claim that the proper subject of these emotions is neither me nor you but is rather us (Stein 1989: 18) – in other words, shared emotions are emotions of a 'we'
(cf. Mulligan 2016; Szanto 2015). This sheds light on a feature that, apparently, always has to figure in shared emotions, namely, reciprocity. If you and I share an emotion, then each of us has to do our respective bit in order for us to share that experience. Each of our perspectives has to be enriched by the other, each of us has to frame the emotion as our emotion and each of us has to be aware of all of this. Against this theoretical background, various questions arise. To begin with, one can easily discern that, in Stein's account of shared emotions, empathy plays a crucial role. However, as we have also seen, in empathy, the experiences I am acquainted with are not my experiences but precisely those of the other. If so, how can empathy help us understand how we come to share emotions and feel them as ours? Furthermore, Stein's theory seems to take for granted that there is a multiplicity of individuals entering into reciprocal relations with one another. In fact, the very idea of mutual enrichment between the individual emotions seems to suggest that emotional sharing requires the individuals to be mutually aware of each other. In the next section, we will return to this idea and consider how Walther further articulates it. However, in an interesting passage from a 1922 essay, Stein claims that, in certain cases, collective emotions can be felt even if only one member (of a group) feels the emotion, i.e., she admits cases in which a collective emotion can occur even in the absence of any actual inter-individual reciprocity. In discussing an example in which a troop loses one of its members, she writes:

If none of the members feels the appropriate grief [for the loss of the member], then one has to say that the loss is not correctly appreciated by the unit. If only one member has realized within himself the rationally required [vernunftmäßig gefordert] content of sense [Sinnesgehalt], then that no longer holds: for then the one is feeling "in the name of the unit," and in him the unit has satisfied the claim placed upon it.
(1922, pp. 115f. [pp. 136f.], trans. mod.)

But how can Stein's remark be accommodated within her more general theory? Further reflections on this can be found in the work of another phenomenologist: Gerda Walther.
Gerda Walther on experiential sharing
Gerda Walther's dissertation Ontology of Social Communities from 1921 contains insights that might allow for a more comprehensive account of experiential sharing. In her analysis, Walther largely agrees with Stein and carefully distinguishes experiential sharing from empathy, sympathy and imitation (as well as from emotional contagion). On her account, to empathically grasp the experiences of the other is quite different from sharing his experiences. In empathy, I grasp the other's experiences insofar as they are expressed in words, gestures, bodily postures, facial expressions, etc. Throughout, I am aware that it is not I who am living through these experiences, but that they belong to the other, that they are the other's experiences, and that they are only given to me qua expressive phenomena (Walther 1923: 73). Even if we were by coincidence to have the same kind of experience, this would not amount to a shared experience, to an experience we were undergoing together. Despite the similarity of the two experiences, they would not be unified in the requisite manner, but would simply stand side by side as belonging to distinct individuals (Walther 1923: 74). To feel sympathy for somebody, to be happy because he is happy or sad because he is sad, also differs from being happy or sad together with the other (Walther 1923: 76–77). Finally, we also need to distinguish experiential sharing from emotional contagion. In the latter case, I might take over the experience of somebody else and come to experience it as my own, rather than as ours. But, insofar as that happens, and insofar as I then
no longer have any awareness of the other's involvement, it has nothing to do with shared experience. For the latter to occur, there has to be a preservation of plurality. However, there is also a phenomenological difference between I–thou relations and shared experiences, although the former might be considered a stepping stone to the latter. What is the difference? When an experience is shared, each partner is not only conscious of the other's experiencing, but identifies with and incorporates the other's perspective. The other's perspective is understood as part and parcel of our perspective and, hence, it matters to me. This is why, in the case of, say, shared joy, the joy is no longer simply experienced by me as yours and/or mine, but as ours (Walther 1923: 75; cf. Spiegelberg 1975: 231). In her analysis, Walther further distinguishes two kinds of groups, namely communities and societies. The distinction between these two kinds of groups was not new in the German-speaking debate about the foundations of the social sciences and sociology. First introduced by Ferdinand Tönnies in 1887, this distinction was reformulated and revisited within phenomenology, especially by Scheler (1973), who roughly depicts it in the following way: society is an aggregation of individuals who decide to join forces based on purely strategic or instrumental considerations. Communities, by contrast, are formed by individuals who understand themselves and others as members of a we, and who are tied together by bonds of solidarity (cf. Salice 2016).1 Both Stein and Walther embrace this distinction and Scheler's description of it – but Walther adds a new angle that apparently was not considered within phenomenology before her work (but was further investigated after her work, e.g., by Stavenhagen 1933, by Spiegelberg 1975: 215–245, and, though critically, as we shall see, by Gurwitsch). The issue concerns the question of what exactly it means to have a sense of belonging to a community.
Let us look at an example that Walther discusses at some length:

As an example just take a number of randomly picked workers from Slovakia, Poland, Italy, etc., who are all employed on a construction site. They don't understand one another's language, they don't know each other, they have never had anything to do with each other before – they just want to earn their living and have accidentally been hired by the same construction company. Now, for instance, they build a wall, some of them take the bricks, others pass them on to someone else and give them to the brick layers, who apply the mortar and place the bricks one on top of the other. [. . .] We have here a number of persons, who are aware of each other and who, in their behaviour, are mutually [in Wechselwirkung] directed to each other [. . .]. Furthermore, at one level of their mental life, they are directed to the same intentional object in one unity of sense [in einer Sinneinheit]: [. . .] the entire construction. A partially homogeneous mental-spiritual life, pervaded by one unity of sense and governed by the same intentional object (the construction and the earning of each his own living by working on that construction), results from this. [. . .] Do we have here a community?
(1923: 31, our trans.)

The individuals in question are directed towards the same object (and they are all directed towards it in the same manner), their experiences are interdependent and reciprocity is in place. These elements closely resemble the necessary conditions that Stein spelled out in her analysis of shared emotions. However, note also that the idea of the experiences being our experiences is not present in the scenario depicted in the above quotation. And, indeed, this is the whole point of Walther's answer to the question she raises – for she writes: "In our view, [the answer is] no! [. . .] Only with their inner bond [Verbundenheit], with that feeling of belonging together [Gefühl der
Zusammengehörigkeit] – even if loose and limited – is a social formation transformed into a community” (1923: 33, our trans.). Now, according to Walther, exactly this, i.e., such a feeling of togetherness or even of unification (Einigungsgefühl), is what distinguishes communities from societies and this, it seems plausible to conjecture, is what leads an individual to feel an experience as ours. However, how can one describe this feeling of togetherness more accurately? It seems that we cannot cash out this experience in purely cognitive terms. In other words, it is not the case that, when I feel such a sense of togetherness, I recognize or make a judgement about the fact that I belong to a community. Obviously, I can do so, but then the facts that those judgments are about would have to be explained, i.e., we would have to explain what brings about the community and/or what makes me a member of that community. But this explanation is exactly what the sense of togetherness is supposed to provide, according to Walther: without the individuals’ having that feeling, there would be no community. That is why Walther does not use the term ‘Gefühl’ just as a metaphor, but rather conceives of this experience as a particular kind of feeling – in contrast to purely cognitive acts like, e.g., judgments.2 The idea seems to be that, when I feel united with another individual (or with several individuals, for that matter), the essential first step towards the creation of a community has already been accomplished (the next being that the other individuals feel so too, as we shall soon see). In addition, the feeling in question can come in two different forms – actual and habitual or sedimented – but, before tackling the relevance of this distinction, let us briefly return to the first question raised by Stein’s theory. 
Remember this was about how, given the radical self–other differentiation ensured by empathy, a shared experience, i.e., an experience conceived by its subject as our experience, can occur. With Walther's contribution taken into consideration, the following answer presents itself: experiential sharing requires a multi-layered interlocking of intentional acts of different kinds. In the first layer, the subject feels something and empathizes with another subject, who has a similar feeling. Then, this feeling starts to be accompanied by a sense of togetherness or unification to the effect that the sense of ownership that accompanies the subject's own experience undergoes a peculiar transformation – the experience is not felt as private and singular (owned by the subject alone), but as shared and collective (as co-owned by a plurality of subjects): "Collective experiences in our sense [. . .] are definitely only those experiences that emerge in me from them and in them from me, from us [. . .] on the basis of my unification [Einigung] with the others" (Walther 1923: 72, our trans.). However, I can feel an emotion as being our emotion, without your feeling so too. That is why, for us to share the emotion, an additional layer has to be added: not only I but you too must feel that sense of togetherness. And, on top of that, I must be empathically aware of your feeling of togetherness, and you must be empathically aware of mine (Walther 1923: 84ff).3 Only once this complex structure of interlocked acts is in place does an affect qualify as a shared emotion.4 But then do all these requirements apply to Stein's aforementioned example in which a collective emotion is claimed to be felt by one single member of the group? One possible solution would be to dismiss the example and argue that the individual is simply mistaken.
And yet something phenomenologically important seems to be lost if one decides to do so for, remember, Stein argues that the affect is an adequate one vis-à-vis the group's attitude towards the tragedy of losing one of its members. That is, there is a sense in which the individual feels something 'in the name of' the group. The question would then be how to interpret this expression – for Stein could hardly have meant that the individual 'represents' the group in a legal or institutional sense while feeling that emotion (cf. Stein 2000: 134). How would Walther address the question? She would in all likelihood appeal to the idea that the feeling of togetherness can be either actual or habitual. In those cases where the
Phenomenology of the we
feeling is actual, the requirements mentioned above (the interlocking of acts) must be met in order to safeguard genuine sharing. However, Walther also explores the diachronic dimension that accompanies the feeling of unification and, especially, the fact that such a feeling can also occur in a habitual form. This, Walther seems to contend, can happen in two different ways. The first presupposes that an actual feeling of unification undergoes a process of sedimentation through time. Just as certain intense and lively emotions (e.g., love) can sediment themselves and transform into more habitual states of mind, so can a similar sedimentation take place in the case of unification. To see this, think of the difference between the feelings of unification or togetherness characterizing a friendship – fervent, lively and constantly reinforced at the beginning, they eventually become sedimented background-feelings. Unification, in this case, is first explicit, but becomes habitual over time. In addition, Walther seems to recognize a second form of unification. As she writes: “besides an actual unification that became habitual, one can also talk of a habitual and subconscious growing together [von einem habituellen unterbewussten Zusammengewachsensein]” (1923: 44, our trans.; cf. also 1923: 69). Here Walther suggests that there is a form of unification that is thoroughly habitual, as it were, and that is not preceded by any explicit feeling of unification. It might have been preferable here to distinguish more clearly between the habitual and the implicit. During early development, for instance, one might come to grow together as a family without this involving any explicit act of unification or attachment. This is not to deny that, at some point, such implicit unification can become explicit. But this occurs after the fact and merely confirms the sense of togetherness that was already in place. 
It is harder to make sense of the proposal that such implicit unification is from the very start habitual. In any case, Walther highlights the importance of such habitual unifications. As she writes, “the habitual unifications are almost more important for the foundation of communities and of the communal life than the actual unifications, which dissolve quickly” (1923: 48, our trans.). Indeed, “habitual unification is what, in the first place, must found and underpin [untergrundieren] the whole communal life” (1923: 69, our trans.). Such an emphasis on habitual unification does not necessarily undermine the importance of our direct awareness of and interaction with others; rather it is compatible with a recognition of the distinction between we-experiences that require the cognitive and affective architecture sketched above (1923: 66, 68) and those that do not (any longer) require the actual occurrence of such interlocking acts due to the fact that they rely on habitual unification. Indeed, and to come back to Stein’s example now, one could argue that, if Walther’s considerations about habitual unification are correct, then Stein might be right in that there is a legitimate sense in which an emotion felt by only a single member of a group can be ‘collective’ even if it does not fulfil all the requirements tackled above. It can be qualified as collective because the individual’s affective response is appropriate when assessed from the group’s perspective, but also because the individual feels it by relying on habitual unification, i.e., by having others that also feel it in the background of his experience (1923: 69). In short, as a result of this feeling of unification, individuals can experience themselves as members of a group, can feel united with other members of the same group and can come to have experiences they wouldn’t otherwise have had. 
Terminologically, one can say more precisely that such experiences are collective, even if they are not shared in the narrower sense of the term (cf. Szanto 2015).
Gurwitsch on partnership, communal membership and group membership

In the preceding two sections, we have illustrated how Stein and Walther understand the idea of experiential sharing and shown how, according to them, experiential (and especially emotional)
Dan Zahavi and Alessandro Salice
sharing presupposes empathy. Although empathy can be considered a necessary condition for collective emotions, both phenomenologists also concur that more conditions have to be fulfilled for emotions to be shared. Not only does reciprocal empathy have to be in place, but a certain process of affective identification also seems to play a prominent role in the phenomenology of shared emotions. As indicated at the very beginning of our paper, however, this reliance on empathy did not find universal approval among phenomenologists. Let us now consider one of the critics. In the first part of his book, Human Encounters in the Social World (completed in 1931, but only published in 1977), Gurwitsch sets out by discussing the classical problem of other minds. According to the traditional outlook, I might impute mental states to others, but these states are never immediately given to me. All that is perceptually given to me are physical qualities and their changes. Seeing a radiant face means seeing certain changes in the facial muscles. This description, however, is not in accord with our everyday experience. When somebody with a radiant face tells us something, we do not first perceive distorted facial muscles in order then, through the employment of some theoretical apparatus, to impute psychological meaning to the other. Rather, we directly and immediately witness the joy in the other’s face. Our conviction that, in daily life, we are engaging with other minded creatures is, in any case, quite unlike any ordinary theoretical hypothesis. In ordinary life, we are never faced with the choice about whether or not we wish to take the people we are meeting in the street or conversing with as real people or as mere automatons. Our certitude that the other is minded far exceeds our confidence in well-confirmed scientific hypotheses. It is far more deep-seated and unshakeable and must be considered a precondition for much of what we do in daily life (Gurwitsch 1979: 10). 
According to Gurwitsch, one decisive problem with the traditional approach concerns the question of what is immediately given. If other people are initially given to us simply as physical bodies, how could we then ever acquire anything resembling direct access to the mental life of others? But perhaps this is the premise we need to reconsider. Perhaps a new understanding of the given is required. If, instead, the realm of expressive phenomena were accepted as the primary datum or primitive stratum of perception, access to the minds of others would no longer present the same kind of problem (Gurwitsch 1979: 29–32, 56). This was indeed the view defended by Scheler in The Nature of Sympathy. In his view, we are confronted neither with a mere body nor with a pure soul in the face-to-face encounter but, rather, with the unity of an embodied mind. Scheler speaks of an expressive unity (Ausdruckseinheit) and claims that behaviour is a psychophysically undifferentiated notion. It is only subsequently, through a process of abstraction, that this unity is divided, and our interest then proceeds “inwards” or “outwards” (Scheler 2008: 218, 261). Thus, on Scheler’s account, our primary knowledge of nature is of expressive phenomena, and the most fundamental form of perception is the perception of such psychophysically undifferentiated expression. At this point, however, Gurwitsch’s criticism sets in. He readily acknowledges the importance of expressive phenomena, but criticizes Scheler for having been too one-sided in his approach and then argues that the realm of expressive phenomena is neither the only, nor the primary, dimension to be considered if we wish to understand what it is that enables us to encounter other human beings (Gurwitsch 1979: 33). In short, Gurwitsch quite explicitly questions the accuracy of an account that gives primacy to empathic face-to-face relations. Let us return to our deep-seated conviction that we live in a common world together with others. 
It is not a part of this conviction that we are constantly aware of expressive phenomena. In fact, according to Gurwitsch, we neither primarily nor ordinarily encounter others as thematic objects of cognition. Rather, we encounter them in the world in which our daily life occurs or, to be more precise, we encounter others in worldly situations, and our way of
being-together and understanding each other is co-determined in its meaning by the situation at hand (Gurwitsch 1979: 35–36, 95, 106). To exemplify this, Gurwitsch analyzes a situation where two workers are cobbling a street. In this work situation, one worker lays the stones while the other knocks them into place. Each worker is related to the other in his activity and comportment. When one worker understands the other, the understanding in question does not involve grasping some hidden mental occurrences. There is no problem of other minds. There is no problem of how one isolated ego gets access to another isolated ego; there is no need for mindreading. Even in the absence of any explicit expression of the other’s wishes, aims and interests, they are “provided by the setting of the things” (Gurwitsch 1979: 105). Indeed, both workers understand each other in virtue of the roles they play in the common situation (Gurwitsch 1979: 104, 108, 112). In fact, in such situations, I don’t encounter the other as a specific individual, but rather as somebody precisely defined and exhausted by the role he bears. The person in question might even be substituted by somebody else as long as the substitute assumes the function and role determined and prescribed by the situation at hand. In all of this, empathy and the understanding of expressive phenomena are of no significance – or so Gurwitsch would claim.5 The kind of being-together discussed so far, the kind of instrumental association that is defined in terms of substitutable roles and functions, is termed partnership by Gurwitsch and amounts to the social formation that Tönnies labelled society (Gesellschaft) (Gurwitsch 1979: 117). It remains characterized by a somewhat external relation between the different individuals. They are not related to each other independently of the specific situation in which they participate and remain, as Gurwitsch remarks, “alien to one another” (1979: 118). 
This detached and strategic form of interpersonal relation can be contrasted with another form of being-together that Gurwitsch labels ‘communal being-together’, one characterized by solidarity, mutuality, warmth and belonging. How do we get from society to community? Here is where Gurwitsch takes issue with Walther’s account. As we saw above, Walther argued that the decisive ingredient present in community but missing in society was a certain emotional dimension, the feeling of inner unification or togetherness. One difficulty with this proposal, however, is that it seems to imply that the only difference between society and community is the presence of supervenient positive sentiments. In both cases, the whole underlying structure remains exactly the same. Gurwitsch criticizes this proposal and argues that one should recognize not only that partnerships can sometimes occur accompanied by positive feelings, but also that a community isn’t threatened or undermined in cases where conflicts or feuds take the place of positive sentiments. Membership in a community can persist even when negative interpersonal emotions are present. We should consequently reject the proposal that the presence of positive feelings is constitutive of communal membership. In fact, when present, the feelings in question merely express the existence of an already existing community-basis (1979: 121–122). Moreover, when members of a community employ the first-person plural, they merely articulate their communal membership. The use of the ‘we’ makes explicit what was already immanently present in the communal being-together (1979: 130). But, if a feeling of togetherness is not what constitutes a community qua community, what then is decisive? Gurwitsch’s reply is that a more comprehensive life-context is essential, a life-context centred on a (material and spiritual) communal possession that is at its core made up of a shared tradition (1979: 122). 
Whereas partnerships can be voluntarily initiated and discontinued, one is born into and brought up within a community, and this communal membership is not something from which one can voluntarily dissociate oneself (1979: 124). In fact, it is quite beyond the domain of personal will and decision. This is also why, according to Gurwitsch, the being-together in the
dimension of the community is not a being-together of individuals qua particular individuals, but qua community members (1979: 130). Those with whom one is communally joined have not been selected by free choice on the basis of their personal qualities, but rather on the basis of a shared heritage. Communalization is consequently essentially historical. Our membership in a community determines the way we understand both the world and ourselves and provides us with a deep rootedness in a context that is taken-for-granted. When members of a community encounter each other, this encounter will consequently be informed and shaped by their shared communal possession. This is also why the relation between members of a community is quite unlike the relation between partners in a work situation. In the latter case, the individuals have their identity prior to engaging in a partnership. In the former case, by contrast, the comprehensive life-context and historicality precede the actual being-together, for which reason the whole might be said to be prior to the parts (1979: 132). At this point, two caveats must be mentioned. First of all, it is not as if all interactions between members of a community are necessarily motivated by the community itself. If, to use an example of Gurwitsch’s, two brothers play chess together, they do so as partners, not as community members. Furthermore, according to Gurwitsch, each community member will always retain a sphere of life and action that remains free of the dictates, regulations and influences of the community. It is because of this preserved space of freedom that individual community members can engage in and commence new forms of activity and group relations (1979: 131). After having discussed partnership and communal membership in turn, Gurwitsch sets out to explore a third form of social formation in the final chapter of his book, namely group membership. 
Although he argues that the existence of a community (and communal language) is a precondition for the existence of any group, he also argues that the voluntary identification with a group involves the rejection of one’s natural communal membership (Gurwitsch 1979: 147). Here a certain oddity in Gurwitsch’s analysis becomes apparent. The only kind of group membership he discusses is that of the charismatic group (or religious sect). He talks about how members of such a group cut their ordinary ties with worldly possessions, home and family; how, by being gripped by a common idea and the charismatic power of their leader, they are brought together and come to feel united as ‘one’; and how this ultimately amounts to a certain fusion taking place between the group members, a fusion that makes them lose their individual being and differences in a much more radical sense than in the case of partnership or communal membership (Gurwitsch 1979: 141–142). Whatever one might think of this analysis, it seems problematic to argue that “this ‘being united as “one” in spirit’ is a distinctive feature for the being-together in a group” (1979: 141). Certainly there are other, more prototypical, forms of groups than religious sects. Surely some of these, e.g., friendships, are forms of being-together that preserve and cherish the individuality of the members. To put it differently, Gurwitsch’s analysis seems clearly wanting in that none of the social formations he discusses include individual-to-individual relations. It is precisely this shortcoming that the theorists discussed earlier avoid.
Conclusion

In the preceding, we have outlined two diverging models of we-intentionality. Let us conclude by stressing what we take to be the elements of truth in both proposals and by suggesting that one ought to aim for an account that combines both approaches. One phenomenologist who did exactly that was Schutz. According to Schutz, the face-to-face encounter provides for the
most fundamental type of interpersonal understanding (Schutz 1967: 162). It constitutes the basis of what he terms the “we-relationship” or “living social relationship”, which is the central notion in his account of experiential sharing. Such an emphasis on the face-to-face encounter puts Schutz’s account in line with both Stein’s and Walther’s views. At the same time, however, Schutz also emphasizes the heterogeneity of the social world and the fact that interpersonal understanding comes in many forms and shapes. If we wish to do justice to this variety and complexity, we have to go beyond what a narrow empathic focus on the bodily, co-present other can deliver. Ordinarily, our understanding of and engagement with others is regulated by various interpretative schemes. Typifications (and stereotyping) inform and mediate our relationship with others, be they our consociates, contemporaries, predecessors or successors (Schutz 1967; cf. Zahavi 2010, León & Zahavi 2016). Against this background, it is hard not to see a certain convergence between Schutz and Gurwitsch. Although Schutz would in all likelihood be sceptical about the claim that the sharing of a common world is more basic than any concrete interpersonal encounter (since such encounters allegedly have to occur against some context of shared inherited meaning if they are to be at all meaningful and intelligible), it would on his account be entirely legitimate to maintain that our typical being with others is contextual while still insisting on the significance of empathy.6 This chapter began by highlighting the disagreement among phenomenologists with regard to questions about the foundation of sociality. One can now conclude it by following a suggestion originally put forward by Scheler (1973). This is the idea that an investigation into the foundation of sociality ought to be twofold in nature. 
On the one hand, one should start with the idea that any plausible account of we-intentionality has to recognize that there are multiple types of we or, as Scheler says, forms of being-together (Formen des Miteinandersein). For instance, one rather ephemeral form that is bound to the here and now emerges in the face-to-face interaction with a particular other. But there are, of course, other far more impersonal, anonymous and linguistically mediated forms of we-intentionality and we-identity that go beyond the here and now and that involve in-group/out-group differentiations as well as the construction of a common cultural ground by means of conventions, norms and institutions (cf. Tomasello 2014). On the other hand, these different forms of being-together are certainly not unrelated – certain relations of foundation hold between them, and it is sensible to ask what the most fundamental form is. There are good reasons to favour the primacy of dyadic interaction. Consider for instance the process of socialization, which surely plays a crucial role for proper communal membership. How could that possibly occur in the absence of a basic empathic capacity to directly engage in dyadic relationships with other minded creatures (cf. Zahavi & Rochat 2015, Zahavi & Satne 2016)? As we have seen, however, it is precisely with respect to this question that opinions diverge within phenomenology. And yet, as Scheler is at pains to emphasize, ascertaining such relations of foundation does not imply any reduction of one form to another – peculiar aspects and properties specifically inhere in each of these forms of togetherness and cannot be traced back to heterogeneous elements. Put another way, our social world is rich, complex and multifaceted. And this, one can argue, is an idea on which all phenomenologists concur.
Notes

1 Walther also distinguishes between different types of communities, including purely personal communities (rein personale Gemeinschaften) and objectual communities (gegenständliche Gemeinschaften). For more on this, cf. León and Zahavi (2016).
2 Here, Walther follows Pfänder (cf. 1913: 45), whose investigation into the notion of unification, however, does not primarily focus on the problem of experiential sharing.
3 For a more extensive discussion of Walther’s rather complicated model, cf. León and Zahavi (2016).
4 According to Husserl, additional elements are needed in order to ensure the possibility of a we-experience. In particular, he highlights the importance of being aware of oneself in the accusative, as attended to or addressed by the other (Husserl 1973: 211). For a defense of this idea, cf. Zahavi (2015).
5 A more extensive analysis of worldly situations will show that they contain references to others, regardless of whether others are actually present or not. Gurwitsch provides the example of a situation where one is sitting at home in one’s study preparing a lecture and argues that, in such a case, an anonymous audience will be ‘co-present’, since it influences one’s preparation of the lecture. More generally speaking, in our everyday activities others will be horizontally co-included as bearers of various roles, and our awareness of their presence feeds into our conviction of living in a social and public world. Gurwitsch’s analysis bears a striking resemblance to Heidegger’s account according to which, in our daily life of cares and concerns, we are constantly making use of entities that refer to indeterminate others, such that, in utilizing tools or equipment, we are being-with others, regardless of whether or not other persons are factually present (Heidegger 1982: 292).
6 Perhaps Gurwitsch himself became more sympathetic to this approach. In an article from 1962, Gurwitsch discusses Schutz’s theory in some detail and expresses no reservations vis-à-vis the perspective just outlined (Gurwitsch 1962).
References

Gurwitsch, A. (1962). The common-sense world as social reality: A discourse on Alfred Schutz. Social Research, 29(1), 50–72.
Gurwitsch, A. (1979). Human Encounters in the Social World. Pittsburgh: Duquesne University Press.
Habermas, J. (1988). Nachmetaphysisches Denken. Frankfurt am Main: Suhrkamp.
Heidegger, M. (1996). Being and Time. Trans. J. Stambaugh. Albany: SUNY Press.
———. (1982). The Basic Problems of Phenomenology. Trans. A. Hofstadter. Bloomington: Indiana University Press.
Husserl, E. (1973). Zur Phänomenologie der Intersubjektivität II. Texte aus dem Nachlass. Zweiter Teil: 1921–28. Ed. I. Kern. Husserliana 14. Den Haag: Martinus Nijhoff.
León, F., & Zahavi, D. (2016). Phenomenology of experiential sharing: The contribution of Schutz and Walther. In A. Salice and H. B. Schmid (Eds.), The Phenomenological Approach to Social Reality: History, Concepts, Problems (pp. 219–236). Dordrecht: Springer.
Mulligan, K. (2016). Persons and acts – collective and social: From ontology to politics. In A. Salice and H. B. Schmid (Eds.), The Phenomenological Approach to Social Reality: History, Concepts, Problems (pp. 17–46). Dordrecht: Springer.
Pfänder, A. (1913). Zur Psychologie der Gesinnungen I. Jahrbuch für Philosophie und phänomenologische Forschung, 1, 325–404.
Salice, A. (2016). Collective intentionality and the collective person in Max Scheler. In S. Rinofner and H. Wiltsche (Eds.), Analytic and Continental Philosophy: Methods and Perspectives (pp. 277–288). Berlin: De Gruyter.
Sartre, J.-P. (1956). Being and Nothingness: An Essay in Phenomenological Ontology. Trans. H. E. Barnes. New York: Philosophical Library.
Scheler, M. (1973). Formalism in Ethics and Non-Formal Ethics of Values: A New Attempt Toward the Foundation of an Ethical Personalism. Trans. M. S. Frings and R. L. Funk. Evanston: Northwestern University Press.
———. (2008). The Nature of Sympathy. London: Transaction.
Schutz, A. (1967). The Phenomenology of the Social World. Evanston: Northwestern University Press.
Spiegelberg, H. (1975). “We”: A linguistic and phenomenological analysis. In H. Spiegelberg (Ed.), Doing Phenomenology: Essays on and in Phenomenology (pp. 215–245). Dordrecht: Springer.
Stavenhagen, K. (1933). Charismatische Persönlichkeitseinungen. In E. Heller and F. Löw (Eds.), Neue Münchener Philosophische Abhandlungen (pp. 36–68). Leipzig: Barth.
Stein, E. (1922). Beiträge zur philosophischen Begründung der Psychologie und der Geisteswissenschaften. Jahrbuch für Philosophie und phänomenologische Forschung, 5, 1–284. Eng. trans. by M. C. Baseheart and M. Sawicki, 2000, Philosophy of Psychology and the Humanities. The Collected Works of Edith Stein, Vol. 7. Washington, DC: ICS Publications.
———. (1989). On the Problem of Empathy. Trans. W. Stein. Washington, DC: ICS Publications.
Szanto, T. (2015). Collective emotions, normativity, and empathy: A Steinian account. Human Studies. doi:10.1007/s10746-015-9350-8
Tomasello, M. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press.
Tönnies, F. (1887). Gemeinschaft und Gesellschaft: Grundbegriffe der reinen Soziologie. Berlin: Karl Curtius.
Vendrell Ferran, I. (2015). Empathy, emotional sharing and feelings in Stein’s early work. Human Studies. doi:10.1007/s10746-015-9346-4
Walther, G. (1923). Zur Ontologie der sozialen Gemeinschaften. Jahrbuch für Philosophie und phänomenologische Forschung, 6, 1–158.
Zahavi, D. (1996). Husserl und die transzendentale Intersubjektivität: Eine Antwort auf die sprachpragmatische Kritik. Dordrecht: Kluwer Academic Publishers.
———. (2001). Beyond empathy: Phenomenological approaches to intersubjectivity. Journal of Consciousness Studies, 8(5–7), 151–167.
———. (2010). Empathy, embodiment and interpersonal understanding: From Lipps to Schutz. Inquiry, 53(3), 285–306.
———. (2014a). Empathy and other-directed intentionality. Topoi, 33(1), 129–142.
———. (2014b). Self and Other: Exploring Subjectivity, Empathy and Shame. Oxford: Oxford University Press.
———. (2015). You, me, and we: The sharing of emotional experiences. Journal of Consciousness Studies, 22(12), 84–101.
Zahavi, D., & Rochat, P. (2015). Empathy ≠ sharing: Perspectives from phenomenology and developmental psychology. Consciousness and Cognition, 36, 543–553.
Zahavi, D., & Satne, G. (2016). Varieties of shared intentionality: Tomasello and classical phenomenology. In J. Bell, A. Cutrofello and P. Livingston (Eds.), Beyond the Analytic-Continental Divide: Pluralist Philosophy in the Twenty-First Century (pp. 305–325). London: Routledge.
31
SOCIAL APPROACHES TO INTENTIONALITY

Glenda Satne
The nature of intentionality has been at the center of some of the most important debates in Philosophy of Mind and Language in the last few decades. Intentionality was first defined by Brentano (1995) as the property of thought to be ‘about’ something, to refer to something beyond itself. Attempts to characterize intentionality can be schematically classified, following Haugeland (1990), into three different types of theory.1 First, Mentalism is committed to the idea that mental contents are original to the individual mind and prior to the existence of socio-cultural practices. This view was first advanced by Locke (1975: III) and has since appeared in a variety of forms. Among contemporary proponents of such a view are Fodor and Millikan. Second, Interpretationism is defined by its cautious attitude towards assuming the existence of mental states prior to the practice of ascribing them to make sense of each other’s words and behavior; contrariwise, it renders content in terms of intentional ascriptions. Quine was the first to develop a view along these lines, and Dennett and Davidson are among its most prominent advocates. Finally, Communitarian accounts, including Haugeland’s, take social practices to be at the basis of content. They claim that contentful tokens

[e.g. mental contents], like ritual objects, customary performances, and tools, occupy determinate niches within the social fabric – and these niches define them as what they are. Only in virtue of such culturally instituted roles can tokens have contents at all.
(Haugeland 1990: 404)

Both Interpretationism and Communitarian accounts are social approaches to intentionality, their differences notwithstanding. It was in the context of accounting for the nature of linguistic meaning that the most prominent social accounts of intentionality developed. This should come as no surprise since linguistic meaning is a paradigmatic case of an intentional relation. 
Sentences, singular and general terms seem to hold relations of reference and aboutness to the world, in much the same ways as thoughts do. The relation between language and thought has been widely examined during the Twentieth Century. Some have sought to understand the foundation of linguistic content in terms of
a more general account of intentionality. In this vein, following the mentalist lead, some have argued that the meanings of expressions in public languages are to be explained in terms of the contents of the mental states of users of those languages (notably Grice 1957 and Lewis 1975).2 In this view, linguistic signs acquire their meaning by being associated with thoughts, whose contents are prior and independent. In contrast, others have claimed that it is not possible to account for human thinking without assuming it has been attained and shaped through social practices, including language as a public tool for communication. Accordingly, meaning does not depend on inner states but on social practices. Minds are thus conceived as constitutively social.3 As a consequence, two main approaches to the relation between language and thought can be distinguished. The first one – that we may call an individualistic approach – conceives of thinking, and more generally of intentionality, as conceptually independent of language, a set of publicly accessible signs that are combined according to certain rules to form meaningful sentences. The second one – the anti-individualist social approach – starts by understanding language as a public medium for communication and then inquires into the nature of meaning and thought as they are exhibited in linguistic communication. Among proponents of the latter approach are authors such as Dewey, Quine, Putnam, Wittgenstein, Davidson, Rorty and Brandom.4 Anti-individualist approaches to intentionality are inspired by some remarks and arguments that can be traced back to the 1950s. The first source of inspiration of these arguments comes from the later Wittgenstein’s reflections on the nature of meaning. In his Philosophical Investigations he argues that two conditions of adequacy must be fulfilled by any account of meaning. First, any account of meaning must accommodate its normativity, i.e. 
it must make room for the distinction between correct and incorrect uses of a term or concept, in a way that would amount to defining correct uses as what it is appropriate to do when acting according to the meaning of the term/concept in a given situation. Second, any account of meaning must fulfill a descriptive condition of adequacy, namely, the requirement that our descriptions of meaning are consistent with what speakers can do and result in proper descriptions of our linguistic practices. To put it otherwise, how our linguistic practices actually are imposes constraints on any account of content. Those who reject the individualist approach to intentionality are driven by the alleged failure of mentalistic accounts to meet both these conditions of adequacy. Accounting for meaning in terms of internally accessible mental states – so they claim – makes it impossible to meet the normative condition – i.e. to distinguish between correct and incorrect applications of a term/concept – since, from the individual’s point of view, any performance she judges to be correct would be correct. Nor can such accounts meet the epistemic condition – the constraints imposed on accounts of meaning by what speakers do in their linguistic practices. For example, theories are criticized for demanding that speakers make hypotheses about what they mean by their words and utterances, or for implying that speakers have the capacity to consult and follow an infinite list of correct uses of a term in order to use it correctly (see Kripke 1982). That individuals do not do this is part of what we know about meaning from the observation of everyday linguistic practices and the normal capacities of those involved in them. I will not argue here in favor of such claims.5 I will rather assume them in order to examine the different strategies that aim to provide social accounts of intentionality and assess their prospects for success in the light of these two conditions of adequacy. 
As I have said, a distinction between two kinds of anti-individualist strategies – a communitarian and an interpretationist one – is in order. While the former starts by assuming the communitarian character of meaning, the latter accounts for the social nature of intentionality in terms of attributions within one-to-one interactions between speakers.
Glenda Satne
The communitarian approach aims to accommodate the social character of meaning while at the same time describing its normativity in terms of the interplay between the attitudes of individual members of a community and those of the community as a whole. The distinction between correct and incorrect uses of a term/concept is cashed out in terms of the way in which the community uses a term vis-à-vis individual members' deviations from it.

Some have reacted against communitarian accounts by offering an alternative rendering of what it means to say that meaning is essentially social. They adopt an interpretational strategy. According to it, meaning is to be understood neither as constituted by private mental states nor in terms of the attitudes of a community as a whole, but starting from the interactions between two or more individuals. Interpretationist accounts, their differences notwithstanding, share three basic assumptions: (1) they aim to model the normativity of meaning in terms of principles that govern intersubjective interpretation, since for different reasons they find it impossible to do so starting from independent facts prior to human interpretation;6 (2) they doubt that the notion of community could be the source of the normativity of meaning; and (3) they characterize human communicative interaction in terms of interpretation, i.e. the mutual attribution of mental states and meaning to each other's utterances.

Several objections have been raised against these accounts, motivating the need to move beyond both communitarianism and interpretationism. As a result, a number of alternative social approaches to intentionality have arisen. In what follows, I examine communitarian and interpretational accounts of intentionality, summarize the criticisms they face and briefly present some alternative approaches.
The communitarian approach

According to the communitarian approach, meaning-facts are constituted by the consensus of the members of the community in the judgments they accept. Paradigmatically, communitarians claim that "(t)here is nothing to be said about either truth or rationality apart from descriptions of the familiar procedures of justification that a given society – ours – uses in one or another area of inquiry" (Rorty 1989: 169). Communitarian accounts typically claim that it is our membership in a community that accounts for our being subject to the same semantic norms. In this view, judgments, including intentional attributions, acquire their content within communitarian practices and are justified by appeal to such practices of taking them to be so. The normativity involved in linguistic practices is then grounded in the brute fact that speakers of a language agree on most of their judgments.7 There are nevertheless different ways of giving flesh to this claim. In the following I review some of the most prominent ways of understanding the notion of communitarian consensus that is at issue.

According to the simplest way of understanding this claim, consensus equates with the spreading of a certain behavior within a certain group. Individuals are trained by other members of the group to respond with assent to the judgments and actions they all accept as correct. This simple account of the shared nature of semantic norms, which we may call social conformism, models social agreement in terms of behavioral coincidence ('What the community does'). The community as a whole sets the standards of correction for individuals, whereas the community as such "merely goes" (Wright 1980: 220). Accordingly, individuals respond to that which is appropriate in the eyes of the community by acting in ways similar to those prevailing within it.
Kripke's proposal for how to read Wittgenstein's considerations on rule-following (Kripke 1982) can be seen as an example of such a view. Kripke reads Wittgenstein as rejecting the
Social approaches to intentionality
existence of semantic facts, facts that determine the meanings of words and thoughts. Instead he thinks that Wittgenstein proposes a skeptical solution – analogous to Hume's approach to causal relations, which dispenses with facts that would ground causal necessity and cashes out causal relations in terms of the constant conjunction of events and a corresponding custom of perceiving them together. Kripke's skeptical solution suggests that there are no facts for establishing when an individual is acting in accordance with a semantic norm. Instead, it is the attitude of the community of taking individuals as acting in accordance with its prevailing standards that counts as the relevant criterion. At the level of individuals, all there is are primitive inclinations to take responses as correct or incorrect (Kripke 1982: 90–91). The community accepts an individual as following a norm – for example as meaning addition by '+' – by accepting a contrapositive conditional (Kripke 1982: 93ff): if an individual does not go against the responses the community would give under similar circumstances, then the community will take her as following the norm in question. If an individual passes sufficient trials, then the community will accept her unconditionally as a rule-follower (ibid.). These are conditions of assertion for community users and not truth-conditional norms based on semantic facts.

This view has been criticized as being either inadequate or else radically insufficient to give an account of meaning (Goldfarb 1985; McDowell 1996). The problem appears when one considers more carefully the content of the conditionals the community uses to test individual candidacy for rule-following. On one rendering, the community is using a conditional of this form:

(*) It is licensed to assert that a person means addition by '+' when that person has responded with the sum in every case so far attempted.
That this clause cannot be used for this purpose is apparent from the fact that it requires the use of the concept of 'sum', and thus points beyond the communitarian practice towards facts that determine meaning; it is therefore of no use for a communitarian account of intentionality. An alternative rendering of the conditional would be the following:

(**) It is licensed to assert that a person means addition by '+' if she responds as (in the same way as) the community does (or the majority of its members, or a group of experts of the community).8

The problem in this case lies in the notion of 'sameness of response'. The use of (**) requires one to distinguish responses that are similar from those that are not. But 'sameness of response' only makes sense against some standard that determines the relevant respect of sameness and difference. Thus, it either presupposes the standard or norm that it is designed to define, or is insufficient to determine a criterion of correct use (see Goldfarb 1985).

Another line of criticism classifies this view as a regularist conception of norms (Sellars 1954; Brandom 1994: 37–42), in which meaning is identified with regular behavior, and underlines the model's insufficiency to account for the existence of any norm. That is because regular behavior can conform to indefinitely many alternative norms. This is the so-called gerrymandering problem (Brandom 1994: 27–28): any norm can be read off the same behavior. Furthermore, as a consequence, the view would be incapable of accounting for the individual's ability to follow norms. As Brandom (1994: 28) has pointed out, to follow a norm is not merely to act regularly, regardless of whether what is at issue is the regular behavior of the individual or of the community. To count as following a norm individuals need to know how to go on, for
instance, knowing what comes next in order to continue the series '2, 4, 6, 8 . . .'. One might doubt that in this model individuals know how to continue, since the account leaves open for the individual the question as to which regularity she should conform to. Moreover, regularism fails to account for the fact that the agent is acting because the norms require her to act in this way – guided by the norm – instead of the norm merely showing up in her behavior as an externally attributed regularity (see Brandom 1994: 26ff.).9

Some have proposed a more sophisticated and richer version of the communitarian approach. According to it, individuals are trained by other members of the community through mechanisms of social conformism to respond in judgment and action in the same way as others do. This view might be labeled social conformism in thought, as it stresses the fact that individuals develop a conception of the norm they follow by being trained in the communal practices of taking some actions to be correct or incorrect with respect to the community's prevailing standards.10 This capacity is to be thought of as a complex second-order disposition that humans have by nature (wired-in). It presupposes the capacity to react differently, via sensory discrimination, to adapt behaviorally (that is, to learn, as in conditioning or habit formation) and to influence the behavior of others (by setting an example, reinforcing, punishing and the like) (Haugeland 1998: 147–148). As Haugeland puts it:

When community members behave normally, how they behave is in general directly accountable to what's normal in their community; their dispositions have been inculcated and shaped according to those norms, and their behavior continues to be monitored for compliance.
(Haugeland 1990: 406)

The members of the community shape each other's behavior via censorious acceptance and rejection and act in conformity with one another.
There is a positive tendency to induce one's neighbors to do likewise and to suppress variation, and a corresponding tendency to respond to such induction by bringing one's own actions into accordance with those of one's neighbors. As subjects are trained in this way, they develop a consciousness of what they are doing in terms of the norm they are following. Conformism in this model works along the same lines as natural selection, engendering structure and order by means of which a new pattern of behavior emerges. This is the layer where intentionality proper – semantic contentful thinking and the mental states typically assumed to be propositional attitudes – has its locus. For example, individuals are first trained to respond with '2' to the sum of 1 plus 1 and '4' to the sum of 2 plus 2 and so on. But at some point they start to understand what they are doing in terms of the concept of addition. They now know that what they are doing is correct or incorrect according to the measure set by the concept of addition that they thereby deploy.

It could be objected that it is not clear how the community contributes to the specification of the content of the norm in a stronger sense than the mere contingency of individuals having learnt it from someone else. In fact, the view seems to leave open two alternative renderings of how meaning should be understood as being socially instituted by social conformity mechanisms. According to one reading, when a child learns the concept of addition from others, she learns how to act, but it is not part of her concept of addition that this concept is to be shared by every other member of the community. Arguably, on this rendering the proposal could be understood as bypassing communitarianism altogether. The community seems to figure only as a means to explain the acquisition of norms, leaving room for a non-communitarian account of their content. If this is so, then meaning is not socially constituted after all.
According to another reading, meaning itself is constituted by the regular and systematic behavior of the community as a whole (Haugeland 1998: 315). But this rendering might be suspected of being vulnerable to the same criticism as the simplest model: its appeal to the community's normal behavior would be insufficient to determine a single standard of correction and thus would be unsuited to account for the individual's competence in following a norm. Moreover, it seems to construe the practice of following norms as responding to someone else's sanctions and assessments of actions instead of answering to the norms themselves (McDowell 1998).

To mitigate this sort of criticism, Haugeland introduces a number of distinctive abilities that he conceives as part and parcel of the individual's competence in following social norms: mundane skills – the ability to engage in a given game or practice by recognizing and manipulating its relevant elements (for example, in the case of addition: numbers and mathematical signs such as '=', 'x' and so on); constitutive skills – the ability to identify what is permitted and prohibited according to a given standard in a practice, as every practice excludes some performances as impossible via the constitutive standards that distinguish what is possible in the practice from what is not; and existential commitments to bring one's individual performances into accord with the communitarian standards or give up the practice. Such constitutive standards, plus the aforementioned abilities, are supposed both to constrain what can count as a norm for the community and to flesh out how individual agency and competence work in a social account of meaning.
That notwithstanding, it is still not clear how an engagement of this sort, falling short of requiring intentional contentful thought on the part of the individuals involved, can really do the trick of explaining the institution of semantic contents (Hutto & Satne 2015: 528–529). This problem becomes more acute once one conceives, as Haugeland does, of intentionality as being of a piece with and derived from social practices (Haugeland 1990: 414). What kinds of abilities make possible the sort of engagement between individuals that leads to the institution of contents? Can we make sense of intelligent interaction without presupposing, from the outset, the manipulation of contents as that to which individuals are responding when interacting?11

One could complicate the communitarian model in a different direction, such that it would include knowledge of the behavior's being shared as the source of motivation for acting in conformity with social norms, instead of mere conformity to those norms via a natural tendency to conform. In this way, the hope is that one might be able to give an account of content in social terms without encountering the aforementioned gerrymandering problem or presupposing what needs explaining. In this model, subjects have a conception of the norm as shared. Accordingly, the conception of the norm in the light of which they are acting can be specified in the following principle: you should act as the community does. In this more sophisticated version of social conformism, the individual tries to accommodate her conduct to a general principle for her action, no longer through a purely behavioral disposition to conform, but in thought.12 In this model, the individual acquires the knowledge of the norm she is to follow by deriving it from a general principle that reads 'act as the community does': each individual must notice whether the conduct of the community is the same as hers and adjust her behavior accordingly.
But again, the behavior of the community as a whole can be interpreted in endless ways that yield different verdicts as to which norm is being followed by the community. The principle itself remains external to the observed behavior and thus in need of an interpretation to determine which standard is to be followed. Many interpretations are possible, and each would result in a different standard of correction. This calls for a way of determining the correctness of the first interpretation, and thus for another rule with that purpose; but again this rule calls for an
interpretation, leading to a regress. This is Wittgenstein's problem of the regress of interpretations (Wittgenstein 2009: §201ff).

As we have seen, communitarian approaches in their different guises fail to provide a way to specify how meaning can be a normative standard for the members of a given community. The community may very well play a role in the acquisition of the individual's competence with meanings, even in the monitoring of the proper use of terms, but appeals to such a tutoring role of the community do not seem to suffice to explain how intentional content is determined through social practices and how it acquires its normative import on them.

Be that as it may, this sort of account has been criticized on different grounds. According to what we may call the objectivity objection, the problem is that even if the account attempts to draw the distinction between seeming to be right and being right at the level of the individual by comparing her behavior with that of the community as a whole, it fails to do so because it does not draw a corresponding distinction at the level of the community (McDowell 1998; Wright 1986). If the community cannot go wrong, then there are no standards to which the community responds, and hence no standards for the individuals to inherit. As Haugeland (1998: 315ff) puts it, if everyone in a community performs a folk dance in a given way, it makes no sense for a member to object to the way in which the dance is performed. Since the practice of dancing does not have an objective purport, it does not call for extra-communitarian standards of correction.
But it is perfectly conceivable that the community collectively fails to correctly identify something as 'yellow' and that an individual member is in a position to correct it.13 While Haugeland has attempted to answer this challenge by appealing to the enriched account mentioned above, it is not clear that communitarian accounts can meet the challenge without appealing to extra-communitarian semantic facts.
The interpretational approach

Authors committed to interpretationism reject mentalist foundational theories of meaning, opposing both the idea that meanings can be determined in the isolated individual mind and the idea that there are facts independent of human social engagements that can determine meaning. They do so through a variety of arguments. I mention three that have paved the way for an interpretative account of intentionality. First, there is Quine's argument for the inscrutability of reference, which argues for the indeterminacy of meaning. Second, there is Davidson's slingshot argument, according to which semantic attributions cannot be grounded on individual facts; instead the whole language responds together to one and the same fact. Third, there is Putnam's model-theoretic argument, which targets truth-conditional semantics, purporting to show that any arbitrary combination of facts can make true the same set of sentences of a language, defining an alternative truth-predicate for that language. Putnam concludes that facts alone, without the intervention of social interpretative practices, cannot determine meaning.

Interpretationists also reject communitarian approaches to intentionality, claiming that they provide a mistaken account of the shared character of meaning. According to these authors, the very idea of 'community' can only be accounted for once we have understood the interaction among its members.14 Interpretationists instead share the basic assumption that we need to start from human interaction to account for meaning. Quine, Davidson, Brandom and Dennett are the most prominent advocates of such a view.15 They describe intelligent interactions between two people in terms of mutual attributions of mental states and linguistic meaning to each other's utterances. To make sense of another agent's behavior is to put such an action (including utterances) in the context of the reasons that the agent might be thought to have to behave in that way.
Communication is achieved when such attributions are mutually performed and the interaction flows (cf., e.g., Quine 1980: 79). Interpretationism, briefly sketched, states that to be an intentional creature is to be a language user, and this requires having a social mind: being able to partake in a communicative exchange with someone else who is situated in the same world. Such notions are accounted for in terms of interpretation: to be an intentional being is to be able to interpret other creatures' actions as meaningful. To interpret someone is to attribute meaning to her conduct, conceiving of it as oriented by wishes, beliefs and other propositional attitudes in the context of a commonly perceived world. The interpretation of language is just a part of the global task of attributing meaning to other creatures' behavior.

Due to the holistic character of interpretation, along with any specific attribution of meaning a massive set of norms and mental states must be attributed. In interpreting that B is adding, the interpreter attributes to her that she knows the series of natural numbers, that she has consistent and mostly true beliefs, that she is rational and avoids contradiction, and so on. This involves using a set of rational principles, as it is simply impossible to interpret someone else as being rational without assuming at the same time that most of her beliefs are true, generally coherent and caused by the same objects (understood as distal stimuli) that usually are the source of our beliefs. In sum, to interpret someone is to implicitly construct a theory about the content of her beliefs, wishes and other propositional attitudes, including her utterances, in the context of a world where both the interpreter and the interpretee are commonly situated.
Quine pioneered this interactionist conception of meaning and language, but his view was largely shaped by a behaviorist take on the evidence available to the linguist or the child16 who constructs an interpretation of language. In Quine's view "minds are indifferent to language insofar as they are behaviorally inscrutable" (Quine 1970: 4–5), since

Language is a skill that each of us acquires from his fellows through mutual observation, emulation and correction in jointly observable circumstances. When we learn the meaning of an expression, we learn only what is observable in overt verbal behavior and circumstances.
(Quine 1987: 130)

Followers of Quine in the interpretationist approach have been more liberal about the tools they allow to take part in interpretation. While deeming Quine's view insufficient to render an account of intentionality, they justify the appeal to semantic and intentional devices to make sense of meaning as a socially instituted device of mutual interpretation. According to Dennett, for instance, interpretation requires an intentional stance, the full-fledged point of view of an interpreter who attributes intentional attitudes to explain the behavior of others. For Davidson, in turn, interpretation is to be understood in terms of triangulation, which posits a conceptual dependence between linguistic interaction with someone else and the distinction between belief and objective truth (Davidson 1975, 1992). Triangulation presupposes the existence of causal relations between two people and a common environment (the causal origin of beliefs), and an interpretative relation between them (communication) governed by the principles of rationality, through which they conceive of each other as reacting similarly to the same world. If the world has only a causal role in the constitution of beliefs, beliefs can only be about it as a result of a device capable of turning causal relations into propositional contents.
Such a device is none other than interpretation, which brings the causal into the net of propositionally articulated contents. Being in communication with someone else provides the interlocutors with the distinction between their subjective perspectives and a commonly perceived world to which these perspectives respond.
Brandom enriched the interpretationist treatment of normativity, centered around the role of rational principles of interpretation, by including in his account the dimensions of authority, which is bestowed on someone who is treated as entitled to a belief, and of responsibility, which is properly demanded of someone when she is interpreted as committed to a certain belief. In his proposal, interpretation then depends on the attribution of normative attitudes such as commitments and entitlements, and on the relations of incompatibility among the contents attributed. Meaning is construed in terms of inferential roles associated with these different dimensions of assessment – commitment, entitlement and incompatibility – and not in terms of truth conditions, as in Davidson's picture.

A number of criticisms have been leveled at interpretationism. In the following sections I summarize four of them: (1) the content objection; (2) the normativity objection; (3) the second-person objection; and (4) the continuity objection. In examining each of them, I show how they have led to a number of alternative views that, while also conceiving of intentionality in social terms, depart from the interpretationist and communitarian models.
Beyond communitarian and interpretationist approaches

The content objection

A noteworthy objection due to McDowell argues that interpretationist positions such as Davidson's and Brandom's assume there is "a dichotomy between the natural and the normative" (McDowell 1996: xix). Such a dichotomy is precisely what he finds in Davidson's and Brandom's insistence on the purely causal significance of experience. The main reason that led them to such a conclusion is the urge to abandon reductive naturalism, and especially the 'Myth of the Given'. The Given can be simply defined as (1) "what stands apart from the conceptual" (McDowell 1996: 4) while at the same time being (2) that which is relevant to the justification of or warrant for our beliefs. To put it otherwise, "the Given is the idea that the space of reasons [i.e. the space of meaning where our intentional attitudes are situated] [. . .] extends more widely than the conceptual sphere," incorporating "non-conceptual impacts from the outside of the realm of thought" (McDowell 1996: 7). Avoiding the Myth entails for Davidson and Brandom rejecting the latter assumption ((2) above), hence concluding that "nothing can count as a reason for holding a belief except another belief" (Davidson 2001: 141). By the same token, they assume that experience itself can only play a causal role, as an impact on the space of reasons that lacks any epistemological significance. McDowell complains that if this were the case, concepts would be empty – as they would lack any fulfillment from experience – and the world would be epistemologically irrelevant to our thoughts and judgments. This would eo ipso amount to its lack of normative significance – the world could not correct our beliefs – and to the incorrigibility of the interpreter's point of view (see the normativity objection in the next section).
McDowell, echoing Davidson's triangulation, accepts that "knowledge of the non-mental around us, knowledge of the minds of others and knowledge of our minds are mutually irreducible but mutually interdependent" (McDowell 2009b: 152), but insists that the connection between self, other and the world should not be understood along the central lines of an interpretational-perspectival approach. He starts from the idea, which he attributes to Sellars, that "we must sharply distinguish natural-scientific intelligibility from the kind of intelligibility something acquires when we situate it in the logical space of reasons" (McDowell 1996: xix) and claims that we can understand the world's impact on belief-formation as belonging already to the conceptual sphere, and hence as already being of epistemological relevance. McDowell
claims that because experience is actually capable of playing such a role as warrant for beliefs, it has to be conceived as conceptually informed. Consistently, he endorses a truth-conditional semantics based on a realist conception of reference and truth, and defends the idea that truth is responsive to worldly facts. In this way, he rejects both Davidson's coherentism and Brandom's inferentialism.

The normativity objection

A second line of criticism states that interpretationism fails to account for the normativity of meaning. The problem is that it is not clear how the interpreter's states can have a normative status. In particular, interpretationism fails to account for the way in which such states can provide the subject with criteria to assess and correct her own behavior. This puts at risk the capacity of these theories to accommodate the normativity of meaning, since the necessary space for disagreement between interlocutors – the space that allows for the difference between belief and truth or, to put it otherwise, between what seems correct and what is correct – disappears when interaction is conceived in terms of interpretational relations. The interpreter's point of view is by hypothesis incapable of being corrected directly by the way the world is presented to her without the contribution of another with whom the interpreter is in conversation. This is a central idea of triangulation and the origin of the content objection previously presented. But the possibility of her being corrected by others is completely up to her. The interpreter can always interpret someone else's beliefs as agreeing with hers by merely making the relevant modifications (remember that her interpretative task involves the holistic interpretation of language as well as the different propositional attitudes of the speaker).
Thus, the interlocutor cannot be conceived as someone capable of assessing and correcting the behavior of the interpreter, as her reactions remain subject to interpretation. This is a consequence of the interpretationist's underlying commitment to a third-person standpoint. It follows that on this account there is no constraint on the interpreter's propositional attitudes that could be described as normative.

McDowell (2009a) expresses a related worry, stating that whereas self, other and world are concepts that are irreducible and yet interdependent, the interpretational understanding of their mutual relation privileges the intersubjective as that through which objectivity is to be understood (this is most salient in Brandom's project of making objectivity explicit after having unpacked the dynamics of perspectival interpretation that belongs to the interpretational practice (Brandom 1994, ch. 8, VI)). McDowell insists on the idea that these three concepts should be kept at the same level, resisting the temptation to explain one in terms of the others. In this vein, he expresses skepticism about the possibility of the existence of I–Thou sociality – the kind of minimal sociality that interpretation envisages as an account of meaning – without assuming a shared language in the background, something that pertains to a we.
The central worry is this: if the different perspectives characteristic of the I–Thou relation cannot stand as perspectives in their own right until they are related, and consequently be properly understood as perspectives on the world, how is it that by being related they can convert from merely differential responses to the world into proper rational responses to it? This kind of perspectival approach is affected – McDowell claims – by a related problem that concerns the account of how individuals' states can constitute standings in the space of reasons: individuals in the interpretational view seem to be incapable of being subject to rational constraints in their own right. The 'supervising of' others through interpretation cannot hope to overcome such a problem.

In McDowell's approach sociality is articulated as an I–We relation rather than in terms of the I–Thou relations privileged by interpretationism. What makes thought social is the fact that
Glenda Satne
reasons belong to language and language is itself a complex of interrelated practices, linguistic and non-linguistic, inserted in a tradition, a 'form of life' in Wittgenstein's words (Wittgenstein 2009: §§ 7, 19, 241). The availability of meanings comes from being initiated into a language through which a world-view is transmitted. Because of this initiation, our capacity to understand others consists directly and without mediation in our ability to "hear someone else's meaning in his words" (McDowell 1998: 258), and not in the capacity to interpret or otherwise calculate their meaning.

The second-person objection

A different line of criticism points to the fact that these theories take as their basic starting point a third-person interpretational stance and ignore the relevance of including a second-person dimension of interaction. Different authors have underlined the need for a second-person facet in Brandom's account of normative linguistic practices. Against Brandom's picture, Habermas (2000) has claimed that if we think of the interpreter stance in a third-person way, we lose the idea of language as a way in which individuals engage in the pursuit of common goals and values. Brandom (2000) responds that on a third-person point of view of meaning attribution, one can actually engage in social linguistic practices without pursuing common goals or sharing values.17 So according to Brandom, a second-person kind of interaction among language users is needed only to make sense of common goals shared by them, not to make sense of the possibility of there being linguistic practices at all. On the other hand, Kukla and Lance (2009) and Wanderer (2010) have argued that being addressed is an essential dimension of speech acts in the game of giving and asking for reasons and that Brandom, even if he goes beyond other interpretationist approaches in this respect, does not stress the importance and role of this aspect. 
According to Kukla and Lance (2009), this addressive, second-person character of speech acts – clearest in imperatives, invitations, promises and so forth – is characteristic of every speech act, even if implicitly, and necessary for speech acts to perform their normative function. In Wanderer's (2010) opinion, this is an essential feature of certain speech acts – challenges, in Brandom's terms – and absolutely essential for those to be such. While these authors agree that the addressive second-person aspect of normative linguistic practices is essential for a social account of meaning to meet the normative condition of adequacy and to describe what speakers actually do, they allow for this dimension to be implicit in such practices. Furthermore, on Wanderer's and Kukla and Lance's views, the person addressed fails to give an appropriate response only if she ignores the address, and acknowledges it no matter how she responds to it (compliance, refusal or anything in between) (Kukla & Lance 2009: 162). This line of argument does not put into question the interpretational picture that is the basis of the dynamics of the normative practice, but rather seeks to complement it. But one might wonder whether a second-person understanding of normativity might not require something stronger, i.e. that the speaker's address be recognized by the addressee's compliance in a manner that would necessarily lead the interlocutors to agreement (e.g. if the promisee were only interpreting that a promise is being made to her, then no commitment would hold for the other party no matter how misleading her acts were).18 In this vein, some have made the more radical claim that an understanding of the practice of giving and asking for reasons as a second-person sort of interaction is in tension with the interpretational understanding of it (Rödl 2010; Satne 2014a, 2014b; Thompson 2012). 
A second-person account is thus characterized as not endorsing an inter-attributive conception of linguistic practices and intentional stances. On the contrary, the main idea of this approach is to
Social approaches to intentionality
take the notion of shared meaning as the basic ground from which to understand minds as social. That minds are social means that meaning is shared. Instead of thinking of the other as being in agreement or disagreement with an interpreter's point of view, some have proposed to think of the subject as being responsive to the assessments of others (Satne 2014a). More specifically, this would amount to taking others' words and attributions as relevant for the determination of the meaning of the words we ourselves use. It is through her approval or disapproval of our own linguistic behavior that the other shapes what our words mean. A social picture of meaning would then be sketched on the basis of responsiveness to correction, along with a dynamics of mutual recognition in which each one aims to be recognized by the other as following the same norms (Satne 2014a). This need not imply that objectivity is accounted for on the basis of such interactional dynamics, but rather that the objectivity of meaning impacts on our practices through the dynamics of mutual correction.

The continuity objection

A final well-known objection targets interpretationism as positing an explanatory gap when explaining the possibility of meaning through interaction (Bar-On 2013; Satne 2014a). It seems a naturalistic desideratum to be able to accommodate the way in which human capacities emerged from more primitive non-human forms of life, as well as the way in which we humans become conceptual creatures through learning and development. But interpretationism is deemed unable to do so. In fact, Davidson's reticence and skepticism about the possibility of giving phylogenetic and ontogenetic explanations of how language is mastered is well known; it is what Bar-On (2013) called his 'continuity scepticism'. The objection runs as follows. To be an interpreter is to have the concept of belief: to be able to interact with somebody else is to be able to attribute beliefs to her. 
The theory presupposes that the interactants use the concept of belief to make sense of meaningful interaction, but cannot explain how this concept is gained through interaction so conceived. Thus it produces an explanatory gap in accounting for the very possibility of becoming an intentional being: the model seems committed to the idea that at some point the ability to deploy the concept of belief emerges and enables one to take part in triangulation, but it is not clear how it develops from previous, more basic abilities. An analogous criticism has been addressed to Dennett, since he is committed to 'intentional stances' to make sense of intentional attributions without explaining how it is possible to assume such a stance (Hutto & Satne 2015). Furthermore, because of the identification of thought, talk and interpretation, it is problematic for these theories to account for the ability to entertain thoughts without speaking a language (as may be the case with some non-human animals), or for the possibility of having rudimentary forms of thought and talk (as in the case of young children); a fortiori, these theories cannot describe those abilities as forming a continuous path of small steps. Thus, this kind of theory seems unsuited to explain our attribution of thought to animals and children. Such attributions would be at most mere 'ways of talking' (Hutto 2008), which would not be justified in terms of the abilities exhibited in the behavior of such agents. This leaves unexplained the nature of the intentional capacities of non-human animals and of human children, and the continuity between their ways in the world and ours. A number of social approaches to intentionality have attempted to explain the emergence and acquisition of meaning in social terms. Some of them have explicitly taken up the challenge of providing a phylogenetic story about the origins of meaning. 
Tomasello, for instance, has long defended the idea "that the amazing suite of cognitive skills and products displayed by
modern humans is the result of some sort of species-unique mode or modes of cultural transmission" (Tomasello 1999: 4). And this is so because thinking itself is enabled by and possible only within a socio-cultural matrix:

[Thinking] is a solitary activity all right, but on an instrument made by others for that general purpose, after years of playing with and learning from other practitioners [. . .]. Human thinking is individual improvisation enmeshed in a social-cultural matrix.
(Tomasello 2014: 1)

One can see this proposal as a contemporary re-elaboration of some of Dewey's central topics and concerns relating to naturalism, language and human minds. For he too explains the evolution of language as a consequence of the vaster transformation that the evolution of culture exerts on human organisms:

For man is social in another sense than the bee and ant, since his activities are encompassed in an environment that is culturally transmitted, so that what man does and how he acts, is determined not by organic structure and physical heredity alone but by the influence of cultural heredity, embedded in traditions, institutions, customs and the purposes and beliefs they both carry and inspire.
(Dewey 2008: 49)

Furthermore, according to Dewey, "to speak, to read, to exercise any art, industry, fine or political, are instances of modifications wrought within the biological organism by the cultural environment" (Dewey 2008: 49). For Dewey, as for Quine, minds only exist within a community of language-using individuals. But Dewey, opposing the behaviorist tradition championed by Quine of denying the relevance of an 'inner realm', claims that the inner world of the mind can be understood as a consequence of turning language 'inwards', thus giving a central role to language in the understanding of human psychology. 
Along similar lines, the enactivist program (Hutto & Satne 2015, 2016) calls on the idea that the construction of socio-cultural cognitive niches enables the establishment of stable practices through which public representational systems emerged. This proposal follows Clark in thinking that language is a "cognition-enhancing animal-built structure [. . .] a kind of self-constructed cognitive niche" (Clark 2006a: 370). Clark argues that rather than serving merely as a vehicle of already existing symbolic thought, language comes to actually constitute part of the process of thinking (Clark 1997; a classical source for this idea is Vygotsky 1962). By materializing thoughts in words, humans have created structures that are themselves proper objects of perception, manipulation and (further) thought. Language is thought to radically transform the human mind, and to mark a genuine discontinuity in the space of animal minds. While Clark's picture is very useful for understanding how language can transform minds without positing a chasm in nature, he assumes, like other important voices in the field, that there is contentful thought (i.e. 'internal representations') prior to the mastery of linguistic practices (Clark 2006b: 293).19 As explained, social accounts of meaning reject this move, following Wittgenstein's and others' criticisms of mentalism. Clark emphasizes the externality of language as a material tool for thinking, but key to language is not merely its external materiality but also what this materiality enables, namely, the dialogical engagement with an interlocutor (Fusaroli, Gangopadhyay & Tylén 2014). The enactivist program embraces the idea that language works as an external scaffolding that transforms minds while denying that it is necessary to postulate internal mental representations in order to make sense of this idea. The radical enactivist, for
example, thinks of the abilities necessary to engage in such enabling practices in terms of non-contentful directed intentionality (Hutto & Satne 2015, 2016). She then complements this idea with the claim that special forms of social engagement, possible without representing the mental states of the partners in interaction (Hutto 2015; Satne 2014a), enabled humans to set up and engage in socio-cultural practices involving signs and words. It is through such engagements that their minds were scaffolded and enhanced, amounting to the skills of manipulating meaningful signs and deploying contentful mental states. These assumptions need not imply a discontinuity in nature unless it is also, implausibly, assumed that the existence of such socio-cultural niches implies an inexplicable gap in nature. The challenge for this sort of proposal lies in providing the details as to how social engagement might take place without representational tools (Lavelle 2012; Carruthers 2011). This problem points beyond the concerns of this chapter, toward a number of different proposals on how to understand social cognition and interaction in non-representational terms (De Jaegher & Di Paolo 2007; De Jaegher, Di Paolo & Gallagher 2010; Hutto 2015; Fiebich et al., this volume; Carpendale et al., this volume). That notwithstanding, an important issue for these proposals is to take due caution in understanding the kind of explanations they may offer. Davidson and McDowell, following Wittgenstein's quietist plea, claim that it is not possible to provide a philosophical explanation of intentionality that defines content/thinking in terms of the stages prior to conceptual thinking. 
Nevertheless, distinguishing between scientific and so-called philosophical explanations, it might be possible to show how an explanation of the natural origins of content might go, by appealing to the empirical findings of a wide range of natural and social sciences. This sort of proposal describes (structurally) what needs explaining without trying to explain what cannot be explained (e.g. the actual meanings of words from 'outside' a given socio-cultural practice).
Notes

1 Haugeland 1990 branded these three main types of theory as neo-Cartesian, neo-Behaviorist and neo-Pragmatist. Since some important theories, e.g. Brandom's and Davidson's, can be thought to fall within both the second and the third categories as he defines them, in what follows I dispense with Haugeland's terminology.
2 Grice characterizes the content of public languages as inherited from inferences that involve interrelated speakers' intentions to be understood by an audience as meaning something. Lewis defines public meaning in terms of conventions that prevail in a community. Since both views are committed to mentalism, they are not social accounts of intentionality in the sense defined here and won't be discussed in this entry.
3 We are here concerned with what Davies calls analytical priority and no-priority views. Davies (1998) distinguishes three types of priority that apply to the relation of dependency that one might think holds between thought and language. Ontological priority is the claim that one (e.g. thought) cannot exist without the other (e.g. language), while the converse is not true; epistemological priority is the claim that knowledge of one (e.g. of linguistic meaning) goes via knowledge of the other (e.g. thought), while the converse does not hold; and analytical priority is the claim that key notions in the philosophical study of one (thought/language) can be elucidated or analysed in terms of key notions in the study of the other (language/thought). Besides the relative priority of one with respect to the other, within this debate some have held a no-priority view, meaning that there is interdependence between the notions. As discussed below, Davidson is a prominent advocate of the no-priority view, in the three senses discussed above. As noted by Davies, even if these naturally go together, none of them entails the others. 
Quine and Dewey, in their turn, argue that there is analytical priority of language over thought. Grice's theory of meaning is a paradigmatic example of the analytical priority claim of thought over linguistic meaning.
4 It is important to note that social accounts need not endorse a priority claim of language over thought in any of the senses described above. To resist individualist accounts in favor of social ones it is sufficient to claim that language and thought are interdependent.
5 For a thorough argumentation, see Satne (2005).
6 The main arguments against the possibility of grounding semantic attributions in facts are Quine's inscrutability of reference, Davidson's slingshot argument and Putnam's model-theoretic argument. See below.
7 This position has been endorsed, among others, by Wright (1980), Kripke (1982) and Horwich (1995).
8 Even if Putnam advocates a view of meaning according to which meaning is determined by the relevant group of experts in the community (Putnam 1975), his view, as well as Burge's (1979), has it that individuals need not behave like the experts to mean by their words what the community does. These views might be subject to the objection of bootstrapping a communitarian account of meaning in the direction of a non-social account of meaning. See below.
9 The above considerations notwithstanding, social conformism might be a good account of shared regular behavior (for considerations in favor of the idea that social conformism might play this role, see Hutto & Satne 2015 and Satne 2016).
10 Both Haugeland (1990) and Ginsborg (2011) provide accounts along these lines.
11 The above considerations notwithstanding, social conformism in thought might help in understanding different sorts of socially motivated behavior such as group identification.
12 I am indebted in what follows to Thompson (2001), where Thompson criticized Rawls and others for deriving normativity from a general principle. Cf. for example Thompson (2001): 128ff. I must thank Sebastian Rödl for drawing my attention to this line of argument against the attempts to ground normativity on a general metaprinciple. For an extended treatment of this, see Satne (2014b).
13 For an argument in this direction, see Wright (1986). 
14 Brandom (1994: 38–39) points out that the communitarian approach, by attributing authority to what 'the community' takes as correct, mystifies the nature of the authority of norms by modeling it in terms of the authority of a superperson. Being a member of a community and following communitarian norms are normative notions that have to be themselves elucidated through a social account of normativity.
15 See, e.g., Davidson (1967, 1982, 1992, 2001); Brandom (1994); Dennett (1979, 1987) and Stalnaker (1987).
16 Quine takes these two points of view to be structurally equivalent; see e.g. Quine (1990), "Reply to Charles Parsons" in Barrett & Gibson (1990), p. 291.
17 Brandom (2000: 362) acknowledges Habermas' account of his theory as a "fair characterization".
18 See Thompson (2004).
19 Tomasello also assumes that there are internal representations in place prior to the emergence of socio-cultural practices. For a criticism of this view, see Satne (2016).
References

Bar-On, D. (2013). Expressive communication and continuity skepticism. The Journal of Philosophy, 110(6), 293–330.
Brandom, R. (1994). Making It Explicit: Reasoning, Representing and Discursive Commitment. Cambridge, MA/London: Harvard University Press.
———. (2000). Facts, norms and normative facts: A reply to Habermas. European Journal of Philosophy, 8(3), 356–374.
Brentano, F. (1995). Psychology from an Empirical Standpoint, trans. A. C. Rancurello, D. B. Terrell, and L. McAlister, 2nd edition. London: Routledge.
Burge, T. (1979). Individualism and the mental. Midwest Studies in Philosophy, 4(1), 73–121.
Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-knowledge. Oxford: Oxford University Press.
Clark, A. (1997). Being There: Putting Brain, Body and World Together Again. Cambridge, MA: MIT Press.
———. (2006a). Language, embodiment and the cognitive niche. Trends in Cognitive Sciences, 10(8), 370–374. DOI:10.1016/j.tics.2006.06.012
———. (2006b). Material symbols. Philosophical Psychology, 19(3), 291–307.
Davidson, D. (1967). Truth and meaning. Synthese, 17, 304–323.
———. (1975). Thought and talk. In S. Guttenplan (Ed.), Mind and Language (pp. 7–23). Oxford: Clarendon Press.
———. (1982). Rational animals. Dialectica, 36(4), 317–327.
———. (1992). The second person. Midwest Studies in Philosophy, 17(1), 255–267.
———. (2001). A coherence theory of truth and knowledge. In Subjective, Intersubjective, Objective. Oxford: Oxford University Press.
Davies, M. (1998). Language, thought and the language of thought. In P. Carruthers and J. Boucher (Eds.), Language and Thought (pp. 31–39). Cambridge: Cambridge University Press.
De Jaegher, H. & Di Paolo, E. A. (2007). Participatory sense-making: An enactive approach to social cognition. Phenomenology and the Cognitive Sciences, 6(4), 485–507.
De Jaegher, H., Di Paolo, E. A. & Gallagher, S. (2010). Can social interaction constitute social cognition? Trends in Cognitive Sciences, 14(10), 441–447.
Dennett, D. (1979). True believers: The intentional strategy and why it works. In A. Heath (Ed.), Scientific Explanation (pp. 53–75). Oxford: Oxford University Press.
———. (1987). The Intentional Stance. Cambridge, MA: MIT Press.
Dewey, J. (2008). Logic: The Theory of Inquiry. Kathleen Poulos (Ed.) (Vol. 12, The Later Works). Carbondale: Southern Illinois University Press.
Fusaroli, R., Gangopadhyay, N. & Tylén, K. (2014). The dialogically extended mind: Language as skillful intersubjective engagement. Cognitive Systems Research, 29–30, 31–39.
Ginsborg, H. (2011). Primitive normativity and skepticism about rules. The Journal of Philosophy, 108(5), 227–254.
Goldfarb, W. (1985). Kripke on Wittgenstein on rules. The Journal of Philosophy, 82.
Grice, P. (1957). Meaning. The Philosophical Review, 66, 377–388.
Habermas, J. (2000). From Kant to Hegel: On Robert Brandom's pragmatic philosophy of language. European Journal of Philosophy, 8, 322–355.
Haugeland, J. (1990). The intentionality all-stars. Philosophical Perspectives, 4, 383–427.
———. (1998). Understanding natural language. In Having Thought: Essays on the Metaphysics of the Mind (pp. 47–62). 
Cambridge, MA/London: Harvard University Press.
Horwich, P. (1995). Meaning, use and truth. Mind, 104, 355–368.
Hutto, D. (2008). Folk-Psychological Narratives: The Sociocultural Basis of Understanding Reasons. Cambridge, MA: MIT Press.
———. (2015). Basic social cognition without mindreading: Minding minds without attributing contents. Synthese. DOI:10.1007/s11229-015-0831-0
Hutto, D. & Satne, G. (2015). The natural origins of content. Philosophia: Philosophical Quarterly of Israel, 43(3).
———. (2016). Continuity scepticism in doubt: A radically enactive take. In C. Durt and T. Fuchs (Eds.), Embodiment, Enaction and Culture. Cambridge, MA: MIT Press, in press.
Kripke, S. (1982). Wittgenstein on Rules and Private Language. Cambridge, MA: Harvard University Press.
Kukla, R. & Lance, M. (2009). 'Yo!' and 'Lo!': The Pragmatic Topography of the Space of Reasons. Cambridge, MA: Harvard University Press.
Lavelle, J. S. (2012). Two challenges to Hutto's enactive account of pre-linguistic social cognition. Philosophia, 40, 459–472.
Lewis, D. (1975). Languages and language. In Keith Gunderson (Ed.), Minnesota Studies in the Philosophy of Science (pp. 3–35). Minneapolis: University of Minnesota Press.
Locke, J. (1975). The Clarendon Edition of the Works of John Locke: An Essay Concerning Human Understanding, Peter H. Nidditch (Ed.). Oxford: Oxford University Press.
McDowell, J. (1996). Mind and World, 2nd edition. Cambridge, MA/London: Harvard University Press.
———. (1998). Wittgenstein on following a rule. In Mind, Value and Reality (pp. 221–262). Cambridge, MA/London: Harvard University Press.
———. (2009a). Motivating inferentialism. In The Engaged Intellect (pp. 288–307). Cambridge, MA/London: Harvard University Press.
———. (2009b). Subjective, intersubjective, objective. In The Engaged Intellect (pp. 152–159). Cambridge, MA/London: Harvard University Press.
Putnam, H. (1975). The meaning of meaning. Minnesota Studies in the Philosophy of Science, 7, 131–193. 
Quine, W.V.O. (1970). Philosophical progress in language theory. Metaphilosophy, 1(1), 2–29.
———. (1980). From a Logical Point of View. Cambridge, MA: Harvard University Press.
———. (1987). Meaning. In Quiddities: An Intermittently Philosophical Dictionary (pp. 130–131). Cambridge, MA: Harvard University Press.
———. (1990). Reply to Charles Parsons. In R. Barrett and R. Gibson (Eds.), Perspectives on Quine (pp. 1–16). Oxford: Blackwell.
Rödl, S. (2010). Normativity of mind vs philosophy as explanation. In J. Wanderer and B. Weiss (Eds.), Reading Brandom: On Making It Explicit. London: Routledge.
Rorty, R. (1989). Solidarity or objectivity? In M. Krausz (Ed.), Relativism: Interpretation and Confrontation (pp. 167–183). Notre Dame, IN: University of Notre Dame Press.
Satne, G. (2005). El argumento escéptico de Wittgenstein a Kripke. Buenos Aires: Grama.
———. (2014a). Interaction and self-correction. Frontiers in Psychology, 5, 798. DOI:10.3389/fpsyg.2014.00798
———. (2014b). What binds us together: Normativity and the second person. Philosophical Topics, 42(1), 43–62. Special issue: The Second Person, S. Rödl and J. Conant (Eds.).
———. (2016). A two-step theory of the evolution of human thinking: Joint and (various) collective forms of intentionality. Journal of Social Ontology, 2(1), 105–116.
Sellars, W. (1954). Some reflections on language games. Philosophy of Science, 21(3), 204–228.
Stalnaker, R. C. (1987). Inquiry. Cambridge, MA: MIT Press.
Thompson, M. (2001). Two forms of practical generality. In Arthur Ripstein and Christopher Morris (Eds.), Practical Rationality and Preference (pp. 121–152). Cambridge: Cambridge University Press.
———. (2004). What is it to wrong someone? A puzzle about justice. In R. Jay Wallace, P. Pettit, S. Scheffler and M. Smith (Eds.), Reason and Value: Themes from the Moral Philosophy of Joseph Raz (pp. 332–384). Oxford: Clarendon Press.
———. (2012). You and I: Some puzzles about mutual recognition. Public presentation at the Aristotelian Society meeting. 
Available online: http://www.pitt.edu/~mthompso/i+you.pdf
Tomasello, M. (1999). The Cultural Origins of Human Cognition. Cambridge, MA: Harvard University Press.
———. (2014). A Natural History of Human Thinking. Cambridge, MA: Harvard University Press.
Vygotsky, L. S. (1962). Thought and Language. Cambridge, MA: MIT Press.
Wanderer, J. (2010). Brandom's challenges. In J. Wanderer and B. Weiss (Eds.), Reading Brandom: On Making It Explicit (pp. 96–114). London: Routledge.
Wittgenstein, L. (2009). Philosophical Investigations, J. Schulte and P. Hacker (Eds.), G. E. M. Anscombe, P. Hacker and J. Schulte (Trans.), revised 4th edition. London: Wiley-Blackwell.
Wright, C. (1980). Wittgenstein and the Foundations of Mathematics. Cambridge, MA/London: Harvard University Press.
———. (1986). Theories of meaning and speakers' knowledge. In Realism, Meaning, and Truth (pp. 204–238). Oxford: Blackwell.
32
NORMATIVITY
Joseph Rouse
Intelligence abides in the meaningful. This is not to say that it is surrounded by or directed toward the meaningful, as if they were two separate phenomena, somehow related to one another. Rather, intelligence has its very existence in the meaningful as such – in something like the way a nation’s wealth lies in its productive capacity, or a corporation’s strength may consist in its market position. Of course, the meaningful, here, cannot be wholly passive and inert, but must include also activity and process. Intelligence, then, is nothing other than the overall interactive structure of meaningful behavior and objects. —John Haugeland, Having Thought, p. 230
Introduction

Normativity becomes a central concern in the philosophy of mind primarily through consideration of the normativity of meaning. Even the formulation of this concern is often contested, however. In the passage above, Haugeland speaks of "intelligence" rather than mind, and of "the meaningful" rather than meaning, in part to avoid the initial presumption that minds or meanings are distinctive kinds of entity. The phenomena in question can nevertheless be readily identified despite considerable disagreement about how to describe, explain, or assess them. Human capacities for experience, thought, speech, and action are paradigm cases, but their relations to one another and to other phenomena in the same vicinity are often at issue philosophically. These nearby phenomena include the behavior or capacities of (some) non-human animals; the capacities of computers, their stored programs, or their robotic bodies; and the doings of social institutions or groups, including social animals or animal societies. At the limit, the boundaries of this domain are explored by asking how to recognize meaningful thought, utterances or actions in unfamiliar or even alien form, which has generated reflections on the plight of field linguists (Quine 1960), radical interpreters (Davidson 1984), intentional stance-takers (Dennett 1987), or even field teleologists (Okrent 2007, ch. 2). An underlying difficulty has been the holism of both the phenomena to be understood and their philosophical characterization. The mindfulness or meaningfulness of various states, events, or performances characteristically depends upon their relations to other such states, events, or performances.1 Their contribution to more extensive, mutually supportive patterns is crucial to their characterization as meaningful, intentional, or normative. More important, the various concepts used to explicate these phenomena philosophically often come as a
package: philosophical approaches to mind, understanding, meaning, intentionality, rationality, and action are not readily specifiable one at a time, and different explications of these several concepts typically do not play well together. The question of whether and how to understand mind and meaning as normative acquires its characteristic philosophical form in efforts to place these phenomena on both sides of a modern scientific conception of nature. The issue is whether they are intelligible as natural phenomena, while also recognizing that they are constitutive of scientific comprehension of nature. For Plato, as a contrasting example, the tripartite human soul or mind, comprising reason, spirit, and appetite, was thoroughly and unproblematically normative. The soul is directed toward, responsive to, and governed by the Good, because the cosmos in which we find ourselves is normatively ordered, and our place within it is only intelligible in terms of the Good. The emergence of modern natural sciences excised any normative order from predominant conceptions of the natural world. That excision frames contemporary possibilities for understanding the place of mind in nature, whether as imposing or instituting normative order within anormative nature, or as explicating mind in anormative terms. The philosophical centrality of this issue responds to an implicit threat: a modern scientific understanding of nature may render unintelligible our very capacity to understand the world scientifically. Care must be taken at the outset to clarify what it would mean to consider mind and meaning as normative. Uses of the term “normative” vary in scope and significance. Some uses imply a contrast between descriptive and prescriptive content, for example. 
Since part of what is in question in asking about the normativity of mind and meaning is whether the contentfulness of any thoughts, utterances, or actions is a normative phenomenon, it would beg the question to presume such a contrast from the outset. Sometimes normativity is identified with specific theoretical conceptions of the normative, for example, as encoded in explicit rules or principles, as marking social differentiation of normality from deviance, or as comprising “values” grounded in actual, presumptive, or default “valuings”. To separate questions of whether mind and meaning are normative from how to understand their normativity, however, we need to hold in abeyance more specific conceptions of what normativity consists in. For our purposes, “normativity” is a covering term for any phenomena for which it makes good sense to understand them as open to assessment, whether in terms of success and failure, correctness and incorrectness, appropriateness and inappropriateness, justification or lack thereof, right or wrong, justice or injustice, and so forth. As Brandom (1979) once noted, the question of which phenomena are normative might then be understood to invoke not a factual difference among them but a normative difference in how to respond to them correctly or appropriately. My suggestion above in asking which phenomena it “makes sense” to understand as open to assessment falls within the scope of Brandom’s proposal, since making sense is itself a normative notion. Two further clarifications are needed before taking up contemporary conceptions of the normativity of mind and meaning and its possible sociality. The first clarification distinguishes the normativity of meaning and mind from the normativity of epistemic, ethical, or political justification, whether of actions, beliefs, or social practices and institutions. 
Mindedness is the capacity to think thoughts, make meaningful utterances, and undertake actions.2 Thoughts, utterances, and actions are open to epistemic, pragmatic, moral, political, and other forms of assessment, but the normativity of meaning and mind concerns their candidacy for assessment in these ways, not the outcome of those assessments. To assess the truth or falsity of a belief or statement, for example, one must understand what it says. To assess the success or failure or the moral significance of an action, one must grasp what the agent was doing or trying to do.
Normativity
Performances and practices are open to assessment and can fail at this level of candidacy: an utterance may be empty or nonsensical, an event may be a mindless motion or meaningless behavior rather than an action, and a group of agents or performances may not constitute an intelligible practice or institution open to assessment as a whole.

The second clarification concerns the place of propositional or sentential content within the more general realm of the normativity of mind and meaning. Understanding and expressing sentential or propositional content has been the primary focus of philosophical consideration. That broadly linguistic focus includes thoughts and actions that are not themselves linguistically expressed, but whose content or meaning is expressible in terms of propositional attitudes such as belief or desire. We should acknowledge that the domain of mind and meaning extends beyond the domain of performances with propositional content. There are at least three sets of issues in this vicinity. First, some philosophers (Taylor 1971 or Heidegger 1927 are good examples) regard propositional or sentential content as integral to and dependent upon larger patterns of meaningful interaction that constitute social practices, cultures, or “worlds”. Second, an ongoing debate divides those (e.g., McDowell 1994) who take the discursively articulable conceptual domain as unbounded from those defending some form of non-conceptual content, whether located in qualitative experience or bodily habit or skill. Third, mindful engagement with the world surely extends to the emotional or other affective aspects of experience, thought, and action, which likely cannot be adequately characterized solely in terms of affective “attitudes” toward linguistically articulable contents. In what follows, I nevertheless set these issues aside to attend to the normativity of discursive or conceptual capacities, as these three debates raise too many issues to address.
Moreover, the normativity of discursive aspects of mind and meaning plays a central role in any account, regardless of responses to these broader concerns.

Conceptions of mindedness and the meaningful as normative phenomena in this broad sense trace back to Kant, like so much else in contemporary philosophy. For Kant, concepts are rules for organizing experience and its contribution to possible judgments, and for determining action. Thought and action are distinct from movement or behavior in being governed by a law the agent imposes upon herself as a rule or norm, rather than inexorably according to natural law. To think, speak, or act is to answer to what one ought to think, say, or do, rather than merely in accord with regular or necessary patterns in nature. The difference concerns the direction of fit between rules and events: should a rule or norm answer to events (as correctly or incorrectly describing or explaining them), or should events answer to the rule (which renders them intelligible or unintelligible, correct or incorrect, moral or immoral, etc., according to whether they “obey” the rule)?

Kant sought to situate mind in nature by defending the compatibility of two comprehensive stances. Understood theoretically, everything we say and do is a natural occurrence, correctly describable in accord with inexorable natural law. Understood practically, however, we are obligated to regard ourselves as free to determine our own thoughts and behavior in accord with norms we give to ourselves as rational agents. Despite their apparent conflict, Kant regarded these stances as compatible, because the first correctly grasps these events as objects of possible experience (phenomena), and the second commits to the possibility that “in themselves” (noumena) they involve a causality of freedom not manifest within conceptually organized experience.
This Kantian problematic acquired distinctive reformulation and renewed philosophical salience in the early twentieth century through Frege’s (1984) and Husserl’s (1970) criticisms of psychologism. They objected that empirical scientific descriptions of thought cannot account for how we ought to think. Frege insisted that logic is a normative science; Husserl advocated
Joseph Rouse
a phenomenological science of meaning and essence to provide normative grounding for empirical sciences of fact. They were joined by the early Wittgenstein, neo-Kantians, and logical positivists in seeking to ground the normativity of meaning in transcendentally or logically necessary structures that allowed thoughts to have a definite content open to empirical assessment. Even Carnap’s (1956) later, more pragmatic work presented the choice of syntactically structured linguistic frameworks as a domain of freedom, accountable only to norms of clear communication that would allow empirical resolution of disagreements.

Contemporary conceptions of mind and meaning as normative phenomena have been shaped by far-reaching criticisms of these efforts to ground their normativity in essential or necessary structures of thought, and/or an immediately given content not open to further assessment. Among the more influential criticisms were Quine’s (1953) challenge to the analytic/synthetic distinction, Wittgenstein’s (1953) reflections on rule-following, Sellars’s (1997) criticisms of the Myth of the Given, Goodman’s (1954) new riddle of induction, Davidson’s (1974) attack on the very idea of a conceptual scheme, Derrida’s (1967) rejection of the metaphysics of presence, and Heidegger’s (1927/1963) criticism of “ontic” explanations of meaningful disclosure that appeal to entities such as consciousness, language, or meanings. Earlier accounts of the normativity of meaning mostly appealed to formal, immaterial structures of logic or transcendental consciousness that supposedly provided necessary conditions for any meaningful thought. These critics relocated the sources of normative authority and force securely within the spatiotemporal world of nature and history. That shift gave renewed import to the question of how to situate conceptual normativity in relation to a scientific conception of the world.
Responses to that question generally take a Kantian form without accepting Kant’s own proposed resolution. Three broad options seem available: we can treat the world as only encountered experientially within a scientific conception of nature answerable to rational norms; we can treat the apparent normativity of conceptions of the world as instead scientifically explicable in terms of natural causes or laws; or we can seek other ways to render natural-scientific and normative orientations as both legitimate and mutually compatible. The first, broadly idealist option, has largely fallen from philosophical favor, for reasons that I shall not directly discuss here.3 The other two options share a common concern: how to situate the normativity of mind and meaning intelligibly with respect to nature as scientifically understood. They differ concerning whether satisfying that concern still requires taking its normativity at face value, or instead showing how its only apparently normative character arises as a natural phenomenon.

The most characteristic responses have sought to ground the normativity of mind and meaning in some aspect of human social life: here we find Wittgenstein’s enigmatic appeals to shared forms of life, Sellars’s normative functionalism, Quine (1960, ix) on “language as a social art”, Heidegger on the anonymous conformity of everyday ways of life, Kuhn’s (1970) appeals to scientific communities and their paradigms, or Putnam (1975) on the epistemic division of labor within linguistic communities, among others. The turn to broadly social conceptions of mind and meaning raises two central issues that will occupy the remainder of this essay. The first is to differentiate more clearly the disparate conceptions of language and mind as social phenomena embedded in often vague references to social practices, linguistic communities, or forms of life.
The second is to consider whether and how these appeals to the social world can help explicate the (apparent) normativity of mind and meaning with respect to a broadly scientific conception of its encompassing natural world.

Explications of normativity typically must address at least three interrelated issues. The first concerns the determinacy of normative assessment. There must be a difference, and a way to tell the difference, between success and failure, correctness and incorrectness, sense and senselessness, justice and injustice, or any other normative considerations involved. If some
phenomenon admits of no difference between correctness and merely seeming to be correct or thinking one is correct, then the notion of correctness has no place there, and similarly for other normative considerations. The second issue concerns the authority or legitimacy of these normative determinations. As Rebecca Kukla once noted,

    For [something] to genuinely bind or make a claim, its authority must be legitimate. There can be no such thing as real yet illegitimate authority, since such ‘authority’ would not in fact bind us; the closest there could be to such a thing would be coercive force which makes no normative claims upon us.
    (Kukla 2000, 165)

The third issue concerns the “force” of normative authority. Neither causal nor coercive force by themselves will account for how normative authority is binding on performances, but there must be some basis for how normative considerations apply, such that the difference between accord and lack of accord with a legitimate normative determination makes a difference. This third issue might instead be construed as addressing how the first two issues fit together, such that the source of normative authority actually bears on specific thoughts, utterances, or actions in a definite way.

In the remainder of the essay, I consider four alternative ways of responding to these issues with social conceptions of the normativity of mind and meaning: regulist, regularist, interpretivist, and temporal. These conceptions differ in how they understand normativity, its character as social, and the standpoint from which to explicate it philosophically. I will indicate how such approaches account for the determinacy, authority, and force needed to establish the normativity of a social practice, and some of the central issues confronting those conceptions. I also indicate their bearing upon the underlying problem of how to place the (apparent) normativity of mind and meaning with respect to the scientifically intelligible natural world.
Regulism

Regulist conceptions identify social practices by the constitutive rules or norms that govern their performers and performances. Games are often regulist models for practices. One can only be a chess player making moves within a game of chess, because those performances answer to the rules of chess. Similarly on regulist conceptions, one can only speak a language, have thoughts with determinate content, or contribute to a scientific research program, if one’s performances are governed by and mostly accord with the constitutive rules or norms of those practices. Some errors are tolerable, but some minimal degree of compliance is needed to sustain the applicability of the norms.

Strictly speaking, the rules that constitutively govern a social practice do not apply only to participants or their performances. As Haugeland (1998, ch. 13) notes, practices also typically place constraints on the behavior of other components besides the players or participants. Even chess requires a degree of compliance from the pieces and board, which cannot change relevant positions, shapes, or color on their own, and must be visible, discriminable and movable by the players. A language must likewise be learnable, and its expressions articulable and discernible by its speakers.

Some practices make their governing rules explicit, but more commonly, some or all of the rules that govern games, languages, or other social practices are implicit in practitioners’ performances and responses to other performances. Linguists have repeatedly proposed rules governing the construction and interpretation of grammatically formed expressions in languages, whose applications are readily recognized and endorsed by most speakers of the
language without their needing to formulate, endorse, or even understand the governing rules. To this extent, regulist conceptions of social normativity are formulated from the philosophical stance of an “anthropological” observer, whose codification of governing rules need not be part of the practice itself, although they can be incorporated within it, and perhaps evaluated and revised. Regulists ascribe a complex direction of fit for the normativity of meaning. Individual performances and other contributions to a practice are accountable to the norms or rules governing the practice. Those norms are nevertheless often only implicit in the community’s practice, including its responses to other performances. Efforts to make the rules explicit must then fit what the community actually does or accepts; if a proposed explication of the governing norms is in conflict with the community’s “general telling” (Haugeland 1998, 314–315), the explication is mistaken.4 The basic “social” relationship invoked by regulist conceptions is between a community of practitioners as a whole, and its individual members. The authority of governing norms comes from their acceptance and application in practice by the community as a whole. For regulists there can be no further basis for assessing whatever the community accepts. One can try to persuade a community to change its norms, and offer reasons for change, but the fact of acceptance does the work, not the reasons, and changes in a community’s practice can occur without generally accepted reasons. Any reasons offered can at best motivate acceptance but not justify it, since there is no higher standard of appeal except to the extent that a practice is itself nested within a larger set of practices with more encompassing governance. Epistemic divisions of labor (Putnam 1975) allow a whole community to defer to expert interpretation or application of its governing norms. 
A community’s standards can also change over time, rejecting performances that were once accepted and vice versa. Even here, however, the community’s subsequent acceptance is what retroactively transforms the status of an earlier performance from mistake to innovation.

Membership in a normative community involves both acceptance by the community and uptake by the members. Membership in a community or participation in a social practice is itself a normative matter, for which community acceptance is authoritative. A community’s acceptance of members or their performances nevertheless acquires some of its normative authority through participants’ uptake and response.5 The normative force of participation in a social practice arises through the capacities, achievements, and goods enabled or blocked by membership in the community and accord with its norms. Most characteristically, accord with a social practice enables attainment of what Alasdair MacIntyre (1980, ch. 14) described as “goods internal to a practice,” that is, goods whose significance and worth are only adequately recognized and appreciated from within the practice, according to the recognitive and appreciative capacities it cultivates. As salient examples, games enable forms of excellence most clearly recognizable and appreciable by competent players. Accord with the norms of linguistic or artistic communities similarly enables capacities of articulation, expression, communication, and assessment that are partially opaque to those who do not accept the authority of the practice’s governing norms and acquire the skills needed to respond to them. As Brandom once noted,

    Without a suitable language there are some beliefs, desires, and intentions that one simply cannot have. Thus we cannot attribute to a dog or a prelinguistic child the desire to prove a certain conjectured theorem, the belief that our international monetary system needs reform, or the intention to surpass Blake as a poet of the imagination. . . . Expressive freedom is made possible only by constraint by norms, and is not some way of evading or minimizing that constraint.
    (1979, 194)
Regulist conceptions of social normativity have encountered several characteristic criticisms. The most basic challenge is that social-regulist conceptions do not actually institute normative authority, but at most a descriptive regularity in social behavior. They can account for the deviance of individual performances from a social pattern (“normality”) but not for their normativity (correctness or incorrectness), unless the community’s performances and general telling were themselves normatively accountable in turn. Any quasi-anthropological description of what a community for the most part does cannot suffice to show that deviant performances are mistaken unless the community can in turn exercise its authority correctly or incorrectly. Social-regulist accounts would thus merely export the debilitating inability to distinguish being correct from merely seeming correct, from individual performances of a social practice to the practice as a whole. In a classic version of this criticism, John McDowell objects to both Wright (1980) and Kripke (1982) that,

    If regularities in the verbal behavior of an isolated individual, described in norm-free terms, do not add up to meaning, it is quite obscure how it could somehow make all the difference if there are several individuals with matching regularities.
    (350)

Wittgenstein’s (1953) own reflections on rule-following posed a different challenge to regulist conceptions of normativity, concerning the determinacy of norms, so conceived, rather than their authority. If normativity is a matter of following rules or norms, interpreting a rule correctly would require a further rule for interpretation. That rule would itself be open to interpretation, however, such that interpretive determinacy could never be achieved: “this was our paradox: no course of action could be determined by a rule, because every course of action can be made out to accord with the rule. . . .
And so there would be neither accord nor conflict here” (1953, I, par. 201). On Kripke’s (1982) controversial reading, this argument uncovers a new sceptical paradox about the very possibility of meaning, which calls for a “sceptical solution” paralleling Hume’s reinterpretation of causal relations as instances of regularities. Wittgenstein himself presented this argument as a challenge to regulism: “what this shows is that there is a way of grasping a rule which is not an interpretation” (1953, I, par. 201), i.e., not itself in accord or conflict with a rule.6

Perhaps the most sophisticated regulist response to these lines of argument has been John Haugeland’s (2002, 1998, ch. 10–13) proposal that intentionality or mindedness requires multiple interdependent levels of normativity and critical scrutiny in order to be genuinely intentional, that is, directed toward and accountable to objects.7 At the first level, object-directed beliefs, utterances or other performances

    must be governed by communal norms of proper performance – that is, of proper procedure and technique – in terms of which particular performances can be critically judged . . . [and] self-critical in that [one] carefully scrutinizes actual procedures to ensure that they are in accord with the norms of proper performance.
    (2002, xx)

Absent such applicability of communal norms, there would be no difference between correct performance, and performances that merely seem correct. The move to a second level of self-criticism begins with the recognition that community agreement by itself is not sufficient for accountability to objects themselves. Norms of correct performance make individual performance accountable to the community, but the community’s performances are in turn
accountable to the demand that proper performances must also yield compatible outcomes. A second level of normativity thus requires laws or rules that establish what it would be for otherwise proper performances to be incompatible:

    [by] imposing constraints on combinations of results, [objects as] individual loci of incompatibility can resist or refute particular proper (or improper) performances, . . . ruling out the bulk of conceivable combinations, [and binding] the totality of actual results within the narrow bounds of possibility.
    (1998, 338)

The first-level norms govern practitioners’ performances, but the second-level rules (“laws”) constitute objects as loci of possible incompatibility. If performances of the practice yield incompatible results, and the performances themselves stand up to first-level critical scrutiny, then something must be wrong with the community’s standards. Critical scrutiny at this level is directed to the skills and norms of proper performance, oriented toward revision or repair of the community’s skills and standards so as to restore compatibility among the norms of proper performance, and the outcomes those performances produce.

A third level of normativity arises at the point at which efforts at revision or repair of the community’s skills and norms fail to restore compatibility among the outcomes of properly performed skills. Here we find practitioners’ commitment to uphold the standards at the first two levels: a refusal to accept performances that violate the communal norms, or incompatible results. This commitment has a double edge. It first obligates practitioners to assess and correct improper performance or community skills and standards that produce incompatible results; if those efforts fail, however, it then obligates them to give up on the entire practice, and the beliefs and skills it supposedly generated.
Only by being able to hold its entire practice accountable to the collective intelligibility of its performances can a social practice genuinely be intentionally directed toward anything.

The empirical sciences are Haugeland’s guiding model for social practices that are genuinely directed toward objects. It is not enough that experiments and calculations be properly performed; these norms of proper performance would be free-floating unless there were also something akin to scientific laws, which specify when various measurements, calculations, and predictions are not compatible. The laws in turn would have no force unless the community (or some of its members) could tell when the laws were violated, and could refuse to tolerate incompatible outcomes. At the limit, such refusal must extend to the point of giving up the entire practice, out of fidelity to its norms and laws. Not all social practices have the degree of explicit articulation and critical reflection characteristic of sciences, but Haugeland (2013, 229; 1998, ch. 10) sees the same basic structure also at work in other professional skills and even in everyday perception and understanding of one another and the world. The normative authority of our discursive practices thus arises from their exclusion of some combinations of occurrences as “impossible”, and their normative force from the commitment of practitioners not to tolerate what the practice’s standards exclude. He concludes that,

    To perceive things as objects is to insist on their coherent integrity – the constitutive standard for thinghood – just like insisting upon legality in chess, rationality in interpretation, and ordering with precision and scope in empirical science.
    (1998, 262)

On Haugeland’s version, intentional normativity thus only functions together with a broadly alethic-modal differentiation of what a practice makes possible from what it rules out. The
difficulty of sustaining that equilibrium makes the pattern of a practice more than just a descriptive regularity, while the possible impossibility of the practice allows the normative authority of the instituting community itself to be in question. The issue then confronting Haugeland’s multi-faceted regulism is whether the apparent voluntarism of practitioners’ commitment to maintain the practice deprives that commitment of any authority over the practitioners’ performances. Is that authority merely akin to that of a monarch in a country in which revolution is legitimate (Kierkegaard 1954, 203), or, as Haugeland suggests, does it gain a distinctive kind of “existential” force because the practice is partially constitutive of who the practitioners are or can be, yet also vulnerable to failure?

Stephen Turner (1994) mounted a different challenge to regulist accounts of norms, rules, presuppositions, paradigms, worldviews, or anything else supposedly shared by participants in a practice that explains or explicates their performances. Turner denies that “social practice” theories can simultaneously account for the determinacy and the force of what is supposedly shared or presupposed by participants. Regulist accounts of social practices fail to account for the transmission, psychological reality, or causal efficacy of whatever is posited to unify the diverse performances that supposedly comprise a single practice:

    The concept of shared practices – the ‘social theory of practices’ – requires that practices be transmitted from person to person. But no account of the acquisition of practices that makes sense causally supports the idea that . . . the same practice is reproduced in another person.
    (Turner 1994, 13)

Turner’s criticism reflects a broader naturalistic suspicion of appeals to normative authority to explicate thought, language, and action.
Those with a more stringent view of naturalistic constraints upon conceptions of mind are often inclined to reject appeals to normative authority, force, or content as mysterious and in need of scientific clarification or replacement. The only relevant “forces” are causal, law-governed, or otherwise explicable scientifically. Such challenges range from Quine’s (1960) arguments for the indeterminacy of translation, to efforts to account for meaning and uptake in social practices in terms of dispositions, habits, or other notions that are supposedly more naturalistically respectable. Regulists tend to accept a more inclusive conception of what naturalistic scruples require. On such liberal naturalistic conceptions, only practitioners’ abilities to learn, express, and respond to the normative significance of performances of a practice must be naturalistically explicable. So long as regulists’ accounts of practitioners’ abilities and performances do not violate scientific understanding of human capacities, however, they need not provide a further account in scientific terms of norms or meanings instituted by those abilities and performances, or the normative authority and force established through the ongoing exercise of those abilities.8
Regularism

Advocates of more stringently naturalistic accounts of social practices have often been inclined toward regularist rather than regulist conceptions. Regularists also explicate the (apparent) normativity of social practices in terms of relations between individual performances and the overall practice of a community. They differ from regulists in treating social practices as regularities in the performances, habits, or dispositions of communities and their members rather than as an acceptance of rules or norms. Regularists often adopt a methodological or ontological individualism that locates the relevant regularities in individual agents’ habits or dispositions
(Turner’s 1994 appeal to habits in lieu of any “social theory of practices” is exemplary). Perhaps the best-known effort to develop a social-regularist account of intentional normativity is Kripke’s (1982) “sceptical solution” to the sceptical paradox concerning meaning that he attributes to Wittgenstein’s remarks on rule-following (Wright 1980 is another prominent example). Regularist accounts of social normativity ascribe broad regularities of performance or assessment to linguistic communities. These regularities are not without exceptions, since people sometimes make mistakes or deliberately deviate from established patterns of practice. The relevant regularities are thus not just a simple sameness of performance in every case, but comprise a more complex pattern of performance that incorporates corrections of deviant performances and training of new participants in the practice. For Kripke, to mean something by the use of a word is to commit oneself to a determinate rule for how one ought to use that expression in the future. If in past uses of the word “plus” I meant the plus function, and I intend to use the term in the same way, then I ought to say that 125 is the value of 57 plus 68. The conclusion he draws from Wittgenstein’s remarks on rule-following, however, is that the behavior and psychological states of an individual agent provide no basis for determining what the agent ought to say or do even in continuity with her past practice. In his now-classic example, the psychological states or behavior of an individual agent are not sufficient to determine that she previously meant plus rather than quus by the word “plus”, where a quus-like function tracks plus up to some value larger than the agent had previously computed, but has the value “5” for all larger inputs. Kripke then argues that we can only understand the normativity of meaning on the basis of individuals’ conformity or nonconformity with a larger pattern of social practice. 
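Kripke’s plus/quus contrast can be made concrete in a minimal sketch (the threshold 57 follows Kripke’s own example; the code is an illustration added here, not part of his text). The two functions agree on every case the agent has previously computed, and diverge only on the new case:

```python
def plus(x, y):
    """Ordinary addition."""
    return x + y

def quus(x, y):
    """Kripke's deviant function: agrees with plus whenever both
    arguments are below 57 (i.e., on every case the agent has
    previously computed), but yields 5 for all larger inputs."""
    if x < 57 and y < 57:
        return x + y
    return 5

# The agent's finite past performances cannot discriminate the two functions:
assert all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57))

# They diverge only on the new, previously uncomputed case:
assert plus(57, 68) == 125
assert quus(57, 68) == 5
```

Nothing in the agent’s finite history of computations settles which of these functions she was following; that underdetermination is what drives the sceptical paradox.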
Kripke expresses the paradox he finds in this result by ascribing the indeterminacy of meaning to past uses of a term. He is well aware that it then casts doubt upon the possibility of determinately meaning anything by what one says or does, even in trying to describe this very paradox. He proposes to resolve the paradox at the level of the individual’s participation in a shared form of life. This resolution does not solve the paradox, for example by providing necessary and sufficient conditions for the use of expressions with determinate content. Kripke instead proposes a “sceptical solution”, which attends to the contrapositive of the conditional, “If a speaker meant plus by previous uses of ‘plus’, then she ought to give the answer ‘125’ in response to 57 + 68.” The underlying idea is that conformity with a broad regularity of performance of a larger community provides the (only) justification for attributing a determinate meaning to an individual’s performances. The regularity is broad in the sense that it has sufficient slack to allow for a range of intelligible “mistakes” and corrections, where mistakes simply mean nonconforming performances, correctable by bringing them back into conformity. Nothing justifies attributions of meaning other than the brute fact of broad conformity with a shared form of life that can be sustained over time. If and when there is irresolvable discord over how and when to apply various expressions, then those applications are no longer meaningful, that is, they are no longer part of a sustained, recognizably shared form of life. Regularist conceptions have also been challenged, with two lines of criticism especially prominent. 
The first criticism is that regularist accounts, in Kripke’s version or others, abolish the normative authority of meaning and mind rather than explicating it. The brute fact of conformity to a communal regularity does not show that anyone ought to conform to what others do, or that such conformity has any significance or meaning beyond that brute fact. A certain utility in such conformity may explain its persistence, but does not justify it. After all, as Kripke (1982, 96) notes, we could instead imagine a different form of life in which participants used an expression in a quus-like pattern, and however bizarre and incomprehensible that would seem to us, it would then be “meaningful” in the same way as our own practice. Social regularities
Normativity
are contingent patterns of actual behavior, which would thereby fail to achieve any expressive or justificatory significance. Regularists may respond that such contingent conformity is all we are entitled to claim absent deeper grounds for normative attribution. At the least, this response turns on the presumption that no other account of the normativity of meaning and mind is viable. The critic can also respond in turn, however, that what was thereby proposed as a justification of the regularist account can itself only be a description of what we happen to do, with no normative authority or force beyond the maintenance of de facto conformity. The second prominent criticism is that regularism does not actually resolve the sceptical paradox, even “sceptically”. While the social practices of adding numbers or using language do have a kind of regularity, they also exhibit many other regularities as well, including quus-like regularities. Nothing in Kripke’s argument picks out one of these regularities as the one that expresses what we mean by our performances, and consequently, he fails to account for the determinacy of norms. To be sure, social practices proceed in a more or less definite way, and do seem to sustain a kind of “agreement”. Such agreement, however, is only a concurrence in what seems correct, and cannot sustain a distinction between seeming to be correct and being correct. Regularists typically acknowledge the point while rejecting its supposed critical import. What social-regularist accounts do is to salvage a difference between correctness and the mere appearance of correctness at the level of individual performances, at the cost of denying any such difference applicable to the community as a whole, or any normative significance above and beyond de facto social conformity and its social utility.
Interpretivist approaches

Two other approaches develop very different conceptions of the social normativity of meaning and mind. Regulists and regularists agree that the social relation that explicates that normativity is between individual practitioners and a community. Interpretivist accounts of social normativity instead look to the distributed interactions among individual participants in a practice rather than to any part-whole “I-We” relation (Brandom 1994, 38–39). Instead of describing social practices from the outside as a quasi-anthropological observer, interpretivists characterize their normativity from the internal standpoint of participants interacting with others. The normativity of these social practices arises from “triangulation” (Davidson 2001, chs. 6, 8, 9, 13, 14) among the holistic attribution of beliefs, desires, actions, and meanings to a speaker, what is thereby attributed, and the partially shared circumstances of interpretation. The systematic, interconnected combinatorial uses of linguistic expressions provide sufficient constraints on interpretation, whether those interconnections are treated as inferential norms or systematic interconnections of beliefs, desires, utterances, and actions. Both the attributed contents and the interpreter’s attribution are answerable to norms of rationality and truthfulness. For someone’s overall pattern of behavior and attributed psychological states to be meaningful is for what she says and does to be interpretable as mostly true and mostly rational in context; my interpretation of her is justified if it maximizes the overall attribution of truth and rationality; and for me to be a speaker, believer, and agent is to be able implicitly to interpret myself as rational in the same way.
The interpretive practices that disclose intentional normativity have been variously characterized as radical translation (Quine 1960), the intentional stance (Dennett 1987), radical interpretation (Davidson 1984), or the game of giving and asking for reasons (Brandom 1994). These accounts split between asymmetric and symmetric versions. In the former case, the force with which interpretive practice is norm-governed is its predictive reliability. By treating some parts of the world as intentional systems that mostly behave rationally, the interpreter enhances
Joseph Rouse
her ability to render her environment more reliably and extensively predictable. Some “real patterns” (Dennett 1991) in the world can only be manifest from this interpretive point of view. The rationality attributed is then a substantive property of these intentional systems. Although both Dennett and Quine say that this interpretive stance also applies to the interpreter, it is not clear in what sense we thereby render ourselves more predictable to ourselves. To this extent, Quine’s or Dennett’s approaches might also be understood as a distinctive kind of regularism. Symmetric versions (Davidson and Brandom are prominent examples) instead treat interpretation as a kind of mutual recognition. We each make sense of ourselves as rational agents and believers by making sense of one another. The rationality involved is not a substantive property ascribed to other parts of the world, but a constitutive norm governing the entire discursive practice. To this extent, Davidson’s or Brandom’s approaches might also be understood as a distinctive kind of regulism, in which the norm of rationality governing the practice as a whole only does so as embodied in the mutual interactions among speaker-interpreters. The normative authority of interpretive social practices comes from the worldly circumstances to which interpretations are accountable, and is most commonly expressed in terms of interpretive “objectivity” (Davidson 1984, essay 13; Brandom 1994). The objectivity in question is not the objective correctness of beliefs or utterances, however, but their objective purport as meaningfully about objects and accountable to them. Davidson, as a telling example since he does regard truth as the central semantic concept, nevertheless inverts more familiar treatments of the concept, so that the truth of most of a speaker’s utterances and beliefs is not an outcome of interpretation but a criterion of its adequacy.
Accountability to worldly circumstances is itself internal to interpretive practice, however. The attributed truth conditions for utterances and beliefs are typically encountered only as expressible within a language, via a disquotational or prosentential truth predicate. The interpreter can say under what circumstances an attributed belief is true or an inference is good, but can only do so in an uninterpreted metalanguage; in the canonical case, we interpret the utterance “Schnee ist weiss” as true if and only if snow is white. Interpretivist accounts thus diverge in how they situate mind and meaning in the world. Asymmetric accounts treat the interpretation of speakers as part of one’s overall theory of the world, and thus as part of science. The result is a naturalism that incorporates philosophical understanding within science, while providing no further explication or justification of scientific understanding. Scientific theorizing and prediction is not an activity itself held accountable to philosophical assessment, but instead provides the context for philosophical reflection on the normativity of mind and meaning (or of epistemic justification). Symmetric accounts recognize scientific theorizing as itself integral to a holistic interpretation of the world, and hence as accountable to the constitutive norm of rationality, but only from “inside” the interrelated social practices of interpretation. There is no standpoint from which to assess interpretive practices except from within. Brandom thus concludes:

The symmetry [among interpretive perspectives] ensures that no one perspective is privileged in advance over any other. Sorting out who should be counted as correct, whose claims and applications of concepts should be treated as authoritative, is a messy retail business of assessing the comparative authority of competing evidential and inferential claims. . . .
That issue is adjudicated differently from different points of view, and although these are not of equal worth, there is no bird’s-eye view above the fray from which those that deserve to prevail can be identified. (1994, 601)
A similar openness extends to the question of who ought to be understood as a speaker, subject only to the demand to extend interpretation as far as possible. The latter demand applies whether it is taken as instrumental, to improve our predictive capacities, or as a constitutive norm of rationality and mutual recognition governing the entire interpretive practice as a “space of reasons” (Sellars 1997, 76). Interpretivist accounts of the social normativity of mind and meaning also confront some challenging criticisms. Advocates of symmetrical interpretivism object that asymmetric accounts leave the interpreter’s own activities out of the account, and are therefore seriously incomplete. They can understand what it is to be an ascribed “intentional system” (predictable by others on that basis), but cannot understand what it is to take the intentional stance and ascribe intentional states. Haugeland (1998, 2002) extends this line of argument by claiming that all interpretivist conceptions need the kind of interpretive resilience and integrity that he characterizes as existential commitment, but cannot account for it in their own terms. The rationality that supposedly governs interpretation would thus have no normative force. If interpreters were to conclude from the failure of desultory and uncritical efforts to make sense of others that they are not intentional systems (Dennett 1987), language-speakers (Davidson 1984), or part of the broader interpretive community of those capable of saying “we” (Brandom 1994, 3–4), these interpretivist theories would offer no basis for challenging that conclusion apart from the contingent emergence of a more diligent and responsible interpreter. Ebbs (2009) argues that interpretivist conceptions make the conjoined mistakes of mischaracterizing both the evidence base and the target for linguistic interpretation.
Interpretation supposedly begins with the evidence of what speakers say when, understood in terms of a “token-and-explanatory-use” conception of words as linguistic elements, in order to assign semantic significance to the words as the outcome of interpretation. On such conceptions, “two word tokens are of the same semantic type if and only if they are spelled or pronounced the same way and facts about them determine that they have the same meanings and satisfaction conditions” (Ebbs 2009, 112). Ebbs argues instead that we can only account for how word types and tokens are discriminated perceptually and practically by bringing semantic considerations to bear in the interpretation, as an extension of familiar arguments for the theory-ladenness of perception. We do so from the perspective of participants in a more encompassing linguistic practice whose semantic units are words that we can justifiably take at their semantic “face value,” ascribing semantic divergences only when interpretation breaks down.9 It is thus a mistake to think interpreters target a speaker’s idiolect rather than a shared linguistic practice. As a further reason for rejecting idiolectal conceptions of what speakers say and mean, Ebbs argues that the supposedly constitutive norm of charitable interpretation, when applied to individual speakers’ idiolects, constrains the ability to learn from others about shared circumstances. If we are obligated to minimize our disagreement with others whenever possible by offering non-standard interpretations of their words, then we will sometimes fail to recognize occasions when their words ought to incline us to change our own beliefs instead. Symmetric interpretive theories have also been taken to invoke a debilitating dualism between the causal and the rational, the natural and the normative, or first and second nature, which would undermine any semantic determinacy to our utterances and actions.
Davidsonians and Sellarsians argue that merely causal impact or experiential presence cannot be incorporated within rational semantic normativity; to think otherwise would invoke the Myth of the Given or the second or third dogma of empiricism. On the other hand, the interpretive practices that exhibit semantic normativity only do so because they are ultimately objectively accountable to causally efficacious objects. Interpretivists try to secure such accountability in diverse ways, whether by appealing to the token identity of mental and physical events (Davidson 1980), the
incorporation of judgments of causal reliability within rational assessments (Brandom 1994), or the entitlement to a conception of rational “second nature” within our law-governed conceptions of ourselves as animals (McDowell 1994). In the canonical criticism of these conceptions, McDowell argues that because Davidson treats perception as a merely causal impact that cannot play a role in the justification of beliefs (“nothing can count as a reason for holding a belief except another belief” – Davidson 2001, 141), he cannot account for how such an interconnected system of beliefs could ever be rationally accountable to the world. That inability to ascribe rational significance to perceptual experience renders the entire conceptual realm a “frictionless spinning in a void” (1994, 66), disconnected from any objective accountability. Haugeland (1998) and Rouse (2002) further develop this line of argument, extending it explicitly to apply also to Brandom and to McDowell himself. Kukla and Lance (2009) develop a more complex interpretivist conception of the social normativity of mind that seeks to overcome these problematic dualisms, while also highlighting and addressing other problems for interpretivists. Their account of the pragmatics of discursive interaction aims to show how its normativity integrally belongs to our concretely incarnated social world. They diagnose the failure of other interpretivists to account for the worldliness of discursive practices as resulting from the “declarative fallacy” of presuming that semantic content is fully explicable in terms of the functioning of declarative sentences or thoughts. Because these views give philosophical primacy to the third-personal, agent-neutral speech acts of declarative assertions, they overlook the vocative and recognitive aspects of speech through which we call one another, acknowledge and respond to those calls, and bring our capacities for perceptual recognition into public discursive practice.
The pragmatics of calling and responding highlight the ineliminably first- and second-personal aspects of discursive practice. Second-person calls and first-person uptake let us inhabit an incarnate discursive community accountable to a shared, objective, epistemically accessible world. Only thereby can we make assertions accountable to that world and authoritative for anyone in our discursive community.
Temporal conceptions

A fourth way to construe the social normativity of meaning and mind appeals to the temporally extended character of the discursive practices that articulate conceptual understanding. Recent developments of this approach include Ebbs (2009) and Rouse (2002, 2015), although arguably, the locus classicus for temporal conceptions of the normativity of mind is the account of “originary temporality” in Heidegger’s (1927) Sein und Zeit. Temporal conceptions of normativity share with interpretivists an understanding of social practices that emphasizes interactions among individual performances rather than relations between individuals and a community. These interactions nevertheless also constitute an encompassing practice that is open to assessment as a whole. What unifies a temporally extended social practice is not some feature that its individual performances have in common, but the mutual responsiveness of those performances over time. For Ebbs, linguistic practices are sustained by speakers’ practical identifications of words and practical judgments of sameness of satisfaction. These default practical commitments allow them to treat one another as using the same words and talking about the same things, even when they disagree. These practical commitments can be overridden for particular, local reasons in a given context, but only against the background of other default commitments. Such practical judgments are needed to let us make discoveries about what others are also talking about, rather than just changing the topic or stipulating new uses for old terms. Language thereby involves a division of linguistic labor over time.
Rouse’s temporal conception of social practices encompasses more than just language. Practices are composed of mutually interdependent, situated performances. My ability to teach a class, shop for groceries, vote for a political candidate, do a scientific experiment, or utter a sentence depends upon a supportive alignment of other performances and the relevant circumstances. I cannot ordinarily teach a college class, for example, if students do not enroll, show up at different times or places, do not understand the language of instruction, or have not done the reading, but also if there is no suitable space with appropriate lighting, seating, and absence of interruption, or no supporting institutional arrangements. In the face of various mis-alignments, performers adjust what they do, re-arrange the circumstances, call for others to adjust, or persevere in the face of incongruity or failure. These ongoing patterns of interaction raise issues wherever adjustments are called for to enable the practice to proceed intelligibly. What is at issue in such ongoing responsiveness is whether and how the practice will continue. What is at stake in the resolution of those issues is what kind of lives the participants can lead, in what circumstances. Participants can be engaged in the same practice without agreeing upon what that practice is, what issues might call for its revision or repair, or what is at stake in how those issues are resolved. These issues and stakes are thus only identifiable anaphorically over time, by reference to what practitioners have been doing and what they can do in response to their circumstances. 
On this account also, the normativity of social practices cannot be understood from an observer’s standpoint that views the performances and circumstances from sideways on, but only from within their ongoing development.10 The normativity of social practices does not consist in regularities of performance or governance by already-determinate rules or norms, but in the mutual accountability of their constitutive performances to issues and stakes whose definitive resolution is always prospective. The normative accountability of a practice is an interactive orientation toward a common future that encompasses present performances within its past.11 That normativity is characteristically two-dimensional, in the sense that the issues raised by any practice involve interplay between whether the practice continues at all, and what it thereby becomes. I have not attempted to argue for a temporal conception of the normativity of mind in this essay, but the essay has been constructed in its terms. I have characterized what has been at issue and at stake in recent philosophical efforts to understand the normativity of mind as a social phenomenon. These projects do not agree in their conceptions of normativity, or of what it is for a practice or activity to be social, but regulist, regularist, interpretivist, and temporal conceptions of social normativity have been mutually responsive on these issues. Underlying their disagreements is a concern to render a philosophical conception of mindedness intelligible in relation to a broadly scientific understanding of nature. What is at stake in this concern is whether we can intelligibly accommodate our self-understandings both as agents, speakers, and thinkers, and as part of scientifically comprehensible nature. This concern is itself still contested by advocates of a temporal conception of social normativity.
For Ebbs (1997), a participant’s perspective on our actual linguistic interactions is complementary to but independent of a scientific conception of nature. For Rouse (2015), our temporally extended discursive practices, including our capacities for scientific understanding, should be understood naturalistically in evolutionary biological terms, as a form of behavioral niche construction. The resolution of this issue must nevertheless be left to another occasion.
Notes

1 For the sake of brevity and flow, I hereafter speak of mind and meaning rather than mindedness and meaningfulness, but retain the qualification that we are not thereby ontologically committed to minds or meanings.
2 The place of perception in conceptions of mind is more contested, in significant part because of differences concerning how perceptual awareness is related to conceptual capacities for thought and action.
3 In this sense, certain strands of empiricism are idealist in taking our conceptions and judgments to be answerable only to experience, and not to the world experienced.
4 Haugeland speaks of a community’s general telling, in order to distinguish the norm established by the community’s acceptance from its own actual performances. As Haugeland summarizes:

There are two relevant generalities: the usual responses of any given individual, and the common responses of most individuals. It can happen that isolated individuals are consistently out of step with their peers, and it can also happen that, on isolated occasions, all or most members of the community (by an amazing coincidence) happen to misperform in the same way at the same time. What cannot happen is that all or most of the community members systematically respond wrongly to a certain class of instituted conditions – for their common systematic responses define the very conditions in question. (1998, 315)

Even when explicit rules are part of the practice, they are subordinated to the community’s implicit practice. Automobile driving practices are exemplary: even where speed limits are legally instituted and posted, the explicit rule bears a more complex relation to the community’s practice. The variance is not just in how fast drivers typically travel, but also what limits police and courts will enforce, and which legal sanctions the community would endorse as fair or legitimate.
5 Communities can ascribe normative statuses to people who do not in turn acknowledge or endorse this status or its significance for others within that community.
The responses of others to normative statuses ascribed within the community’s practices, and to the performances of those to whom the statuses are ascribed, nevertheless affect what is possible or intelligible for persons to do, even if they do not endorse or accept what is ascribed to them. The concept of power has the expressive role of characterizing how the causal or coercive effects of the actions and dispositions of others affect what it is intelligible for agents to say and do, and what significance those performances can have in context. For more extensive discussion of the normative and expressive role of the concept of power, see Rouse (2002, 259–60; 2003, 108–119).
6 Whether Kripke’s interpretation of these arguments accords with what Wittgenstein says is an exercise left to the reader.
7 Haugeland uses the term “object” in a strictly formal way, to indicate whatever could serve as authoritative over an intentional directedness. As we see below, he further specifies this conception of objects in modal terms, as a locus of possibly incompatible commitments.
8 De Caro and Macarthur (2004, 2010) bring together an influential collection of such liberal conceptions of naturalism that do not reduce or eliminate normativity.
9 Ebbs does not extensively discuss language learning, but clearly his account would emphasize getting a holistic grip upon a broader linguistic practice and its interconnected performances as the basis for discriminating its semantically significant bits, rather than first identifying the bits phonemically and orthographically.
10 Performers sometimes do stand back and reflect upon the practices they participate in, but these efforts are incorporated within the practice itself; “external” observers are likewise incorporated within an ongoing practice, whether as further performances within the practice, or as part of the circumstances in which it continues.
11 Kukla and Lance (2009) could be read as developing a temporal conception rather than a sophisticated interpretivism. I treat them as interpretivists because they give primary emphasis to the retrospective temporality of discursive normativity through which speakers are retroactively constituted as participants in discursive practice, but do not in the same way treat the content and force of normativity as prospective.
References

Brandom, Robert. (1979). Freedom and constraint by norms. American Philosophical Quarterly, 16, 187–196.
———. (1994). Making It Explicit. Cambridge, MA: Harvard University Press.
Carnap, Rudolf. (1956). Empiricism, semantics and ontology. In Meaning and Necessity (pp. 205–221). Chicago: University of Chicago Press.
Davidson, Donald. (1980). Essays on Actions and Events. Oxford: Oxford University Press.
———. (1984). Inquiries into Truth and Interpretation. Oxford: Oxford University Press.
———. (2001). Subjective, Intersubjective, Objective. Oxford: Oxford University Press.
De Caro, Mario & Macarthur, David (eds.). (2004). Naturalism in Question. Cambridge, MA: Harvard University Press.
———. (2010). Naturalism and Normativity. New York: Columbia University Press.
Dennett, Daniel. (1987). The Intentional Stance. Cambridge, MA: MIT Press.
———. (1991). Real patterns. Journal of Philosophy, 88, 27–51.
Derrida, Jacques. (1967). La voix et le phénomène. Paris: Presses Universitaires de France.
Ebbs, Gary. (1997). Rule-Following and Realism. Cambridge, MA: Harvard University Press.
———. (2009). Truth and Words. Oxford: Oxford University Press.
Frege, Gottlob. (1984). Gottlob Frege: Collected Papers on Mathematics, Logic, and Philosophy, B. McGuinness (ed.). Oxford: Blackwell.
Goodman, Nelson. (1954). Fact, Fiction and Forecast. Cambridge, MA: Harvard University Press.
Haugeland, John. (1998). Having Thought. Cambridge, MA: Harvard University Press.
———. (2002). Authentic intentionality. In M. Scheutz (Ed.), Computationalism: New Directions (pp. 160–74). Cambridge, MA: MIT Press.
———. (2013). Dasein Disclosed. Cambridge, MA: Harvard University Press.
Heidegger, Martin. (1927). Sein und Zeit. Frankfurt: Max Niemeyer; J. Macquarrie and E. Robinson (trans.) (1962). Being and Time. New York: Harper & Row.
Husserl, Edmund. (1970). Logical Investigations, J. Findlay (trans.). London: Routledge and Kegan Paul.
Kierkegaard, Søren. (1954). Fear and Trembling and The Sickness Unto Death, W. Lowrie (trans.). Princeton, NJ: Princeton University Press.
Kripke, Saul. (1982). Wittgenstein on Rules and Private Language. Cambridge, MA: Harvard University Press.
Kuhn, Thomas. (1970). The Structure of Scientific Revolutions, 2nd edition. Chicago: University of Chicago Press.
Kukla, Rebecca.
(2000). Myth, memory and misrecognition. Philosophical Studies, 101, 161–211.
Kukla, Rebecca & Lance, Mark. (2009). Yo! and Lo! Cambridge, MA: Harvard University Press.
MacIntyre, Alasdair. (1980). After Virtue. Notre Dame, IN: University of Notre Dame Press.
McDowell, John. (1984). Wittgenstein on following a rule. Synthese, 58, 325–363.
———. (1994). Mind and World. Cambridge, MA: Harvard University Press.
Okrent, Mark. (2007). Rational Animals. Athens, OH: Ohio University Press.
Putnam, Hilary. (1975). Mind, Language, and Reality. Cambridge: Cambridge University Press.
Quine, W. V. O. (1953). From a Logical Point of View. Cambridge, MA: Harvard University Press.
———. (1960). Word and Object. Cambridge, MA: MIT Press.
Rouse, Joseph. (2002). How Scientific Practices Matter. Chicago: University of Chicago Press.
———. (2003). Power/Knowledge. In G. Gutting (Ed.), The Cambridge Companion to Foucault (2nd ed., pp. 95–122). Cambridge: Cambridge University Press.
———. (2015). Articulating the World. Chicago: University of Chicago Press.
Sellars, Wilfrid. (1997). Empiricism and the Philosophy of Mind. Cambridge, MA: Harvard University Press.
Taylor, Charles. (1971). Interpretation and the sciences of man. Review of Metaphysics, 25, 3–51.
Turner, Stephen. (1994). The Social Theory of Practices. Chicago: University of Chicago Press.
Wittgenstein, Ludwig. (1953). Philosophical Investigations. New York: MacMillan.
Wright, Crispin. (1980). Wittgenstein on the Foundations of Mathematics. London: Duckworth.
INDEX
Note: Page numbers in italics indicate figures and tables.
action-mirroring 502
action tendencies 282, 285, 288, 290
Act-R programming language 25
Adam, J. S. 230–1
Adams, R. A. 331
adaptationist theories of ethnic cognition 97–8
adaptive commitment 270–4, 275
affective sharing in development (2 months and up) 224–5, 226, 499
affectivity condition 502, 505
agent-neutral theories 103–4, 109–13; in adolescence 112–13; for children 111–12; extended self concept 112; simulation theory and 109; Strawson’s non-dualist account of 110; theory of mind and 109–10
agent-relative theories 103, 106
Alfano, M. 11, 12, 465–76
Allport, G. 310
altruism 61, 93–4, 96, 102–7, 109, 201, 249, 281
analytic cultures 173
ancient number system (ANS) 5, 80, 84
Andrews, K. 5–6, 26–8, 29, 30, 117–33
Anscombe, E. 333
anthropomorphic reflexivity 19–20
anthropomorphism as ethnocentrism 26–8
appeasement emotions 291
Apperly, I. 118–19, 120, 157, 184, 185, 217
Aristotle 1
Aronson, J. 469, 470
artefacts, culture and 54, 76–7, 175
ascription condition 502, 505, 506, 508, 510–11
ASD see autism spectrum disorder (ASD)
“as if ” evolutionary reasoning 24
assertion, norm of 46–7
Astington, J. W. 374
Astuti, R. 92–3
attractors 60, 224, 475
augmented individualism 346–7
autism spectrum disorder (ASD) 160
automatic perspective taking 255–6; social regulation and 256–7
Avenanti, A. 506, 507
Axelrod, R. 232
Bacharach, M. 402, 404, 405, 406
Baillargeon, R. 153, 154, 157, 158–9, 160, 164, 172, 190
Baldwin, J. M. 192
Ballard, D. 74
Bard, K. 128
Barden, R. C. 380
Bardsley, N. 404, 406, 408–9
Bar-On, D. 539
Baron-Cohen, S. 172
Barresi, J. 5, 102–13
Barrett, C. 175
Barrett, H. C. 155–6
Barrett, L. 4, 19–30
Bartleby, the Scrivener: A Story of Wall Street (Melville) 280–1
Bates, E. 195
Bateson, M. 250, 251, 252
Bateson, P. 178–9
Baumard, N. 248, 258
Bayesian decision theory (BDT) 321, 324, 327–9; rule of conditionalization 327, 336; social norms and 327–9
Bechara, A. 300
Becker, G. de 306–7
Beersma, B. 249
belief, desire and 26–7, 118–19, 121–2, 133, 172–3, 209, 211, 214–17, 321–3, 331, 419, 481, 486
belief reasoning, two-systems model 118–19
Berghe, P. L. van den 96, 97
Berndt, R. 61–2
Bernhard, H. 94
Bicchieri, C. 324
Bilgrami, A. 489–91
Binmore, K. 106–7
biological interactions 73, 74
biologization 91–3, 94, 97
Birdsell, J. 61
Birnbaum, D. 91, 98
Blatt, B. 311
blended interactions 74, 75
Boesch, C. 9, 343, 344, 350–2
Boesch, E. 344
Boesch-Achermann, H. 350–2
booby-trap game 419–22, 427
Boutel, A. 4, 53–68
Boyd, R. 37, 42, 54, 55, 57, 61, 67, 89, 438
brain size evolution 20–3; see also social intelligence hypothesis
Brand, R. J. 45
Brandom, R. 489, 490, 491, 531–2, 538, 546, 550
Brannon, E. M. 81
Bratman, E. M. 345
Bratman, M. E. 9, 148, 346, 359, 364, 366, 367
Brentano, F. 528
Broad, C. D. 67
Brooks, R. 25
Brownstein, M. 8, 298–313
Bruneau, E. 311
Bryan, C. J. 254
Butler, L. 44
Buttelmann, D. 90, 155
Butterfill, S. A. 9–10, 118–19, 120, 184, 357–67
by-product theories of ethnic cognition 96–7
Call, J. 25
Callaghan, T. 176
Cantlon, J. F. 81
Capital (Marx) 434–5
Carlo, G. 253
Carnap, R. 548
Caro, T. M. 39, 40, 48
Carpendale, J. I. M. 6, 189–202
Carpenter, M. 90, 192, 226, 372
Carruthers, P. 180–1, 183–4, 209, 490, 492–3
Castro, L. 41
causal history explanations 216, 217
causal models 122, 125, 299
Cavalli-Sforza, Luca 58–9
CE see cultural evolution (CE) theory
Chater, N. 10 – 11, 418 – 28 Clark, A. 331 – 2, 333, 540 Clark, H. 371 co-conscious sharing in development (21 months and up) 227 – 9 cognition, defined 25 cognitive capital 76, 78, 83 cognitive challenges in teaching 42 – 4 cognitive efficiency 263, 266 – 7, 270, 274 cognitive evolution theory 19 – 20; criticism of 22; pair-bonded species and 22 – 3 cognitive integration (CI) 72, 83; see also integrated cognitive systems (ICS) cognitive practices (CPs) 73 – 5; biological interactions as 73, 74; blended interactions as 74, 75; corrective practices as 73, 74; epistemic practices as 73, 74; epistemic tools as 74, 75; learning driven plasticity and 75; manipulation thesis and 73 – 4; motor programmes and 73; representational systems as 74, 75 cognitive tools 82, 192, 484 – 5, 486 Cole, M. 79 – 80 collaboration, defined 346 collaborative hunting 350 collective goals, joint actions and 361 – 2 collective intentionality 2 – 3, 5 – 6, 9 – 11, 141, 143, 145 – 8, 228, 388 – 90, 392, 394 – 7, 436, 441, 515 – 16; role of, in social ontology 388 – 9 collective intentions 436 – 7, 439 Colman, A. M. 406, 412, 413, 414 Colombo, M. 8 – 9, 320 – 35 Coltheart, M.
210 – 11 commitment 370 – 83; common knowledge and 371 – 2; concept of 371 – 4; development of 373 – 4, 381 – 2; expectations and 379 – 80; implicit 372 – 3, 381; interpersonal 371; introduction to 370; joint mutual 371; minimal structure of 374 – 8; motivation and 372, 381; mutual 371; sense of 374 – 6; three desiderata and 380 – 2; types of 371; unilateral 371 commitment devices 12, 281 – 2, 294, 485 – 8, 492, 494; emotions as 282 – 3 common knowledge 10 – 11, 248, 346 – 7, 358 – 9, 371 – 6, 379, 425 – 7, 433, 436, 440, 442, 472; commitment and 371; defined 371 – 2; virtual bargaining and 425 commonsense 126, 140 – 1, 172, 397 communal/group membership, Gurwitsch on 521 – 4 communicative and narrative skills 211 – 12, 214 – 17 communitarian approach to intentionality 530 – 4; claims of 530; defined 530; regular behavior and 531 – 2; rule-following and 530 – 1; social conformism mechanisms and 530, 532 – 3; social norm conformity and 533 – 4
communities 7, 38, 41, 45 – 8, 61, 81, 119, 130, 156, 176, 463, 489, 519 – 21, 548, 550, 553 – 4 comparative folk psychology 23 – 5 competence vs. performance false-belief failures 159 – 62; attention/motivation and 160; processing demands and 160 – 2 competitive altruism 249 computational theory of mind (CTM) 198 conceptual change 181 – 2, 184 – 6, 215; two systems account 184 – 6 conceptual-shift accounts of false-belief understanding 156 – 8; defined 156; minimalist 157 – 8; non-mentalistic 156 – 7 conformist bias 59 – 60, 63, 67; memetics and 59 – 60; S-shaped adoption curve and 59 consequentialism 263 – 4, 265, 267, 269, 271 – 2 constitutive views of self-interpretation 480, 491 – 2, 494 contagious responses 498, 503, 506 content objection to interpretationism 536 – 7 context-relativity 146 continuity objection to interpretationism 539 – 41 continuity skepticism 539 cooperation 7, 27, 89 – 90, 93 – 6, 104 – 10, 113, 117, 132, 140, 143 – 5, 148, 152, 163 – 4, 192, 200 – 2, 226, 231 – 2, 249, 251, 262, 265, 268, 273, 275, 281 – 5, 287, 290 – 2, 308, 348 – 9, 382, 404; for competition 349; vs. distrust/hostility 93; in hominins, evolution of 104 – 7; social roots of 200 – 2 cooperative hunting 77, 105, 350 Coryell, J. 461 Cosmides, L. 56, 96 covert roles 431; compared to overt 437 – 42; defined 431; different kinds of kinds and 440 – 2; examples of 431 – 2; kind identity and 438; social roles and 433 – 4 credibility question 298, 303, 306 – 9, 312; difficulty of 307 – 9 cross-cultural data, social cognition and 172 – 86; areas of difference 173 – 4; false belief understanding, differences in 176 – 7; false belief understanding, synchrony in 174 – 6; introduction to 172 – 3; macro- and micro-cultural influences 177 – 8; nativism/empiricism and 178 – 81; theories of mind and 181 – 6 Csibra, G.
40, 43, 117, 213, 223, 481 cultural evolution (CE) theory 53 – 68; conformist bias as 59 – 60; cultural group selection as 60 – 2; defined 54 – 6; demographic transition as 58 – 9; describing 55; evolutionary psychology and 56 – 7; gene/culture co-evolution as 58; individual learning and 65 – 6; introduction
to 53 – 4; kinetic models of, power and 66 – 8; memetics and 57 – 8; methodological individualism and 63 – 4; social learning and 64 – 5; social mind and 62 – 8 cultural groups 60 – 2, 89 – 90, 482; tribes as 89 cultural group selection 60 – 2; Australian aboriginals example of 61 – 2; described 60 – 1; Nuer/Dinka example of 61 cultural information, defined 54 – 6, 60 cultural inheritance 72, 75 – 7, 268 cultural learning, social mind and 3 cultural ratchet effect 77, 78 cultural variants, described 54 culture: artefacts and 54; components of 54; defined 88; fairness and 235 – 6; role of 88 – 9; sharing and 235 – 6 cumulative culture 4, 35, 38 – 40, 42; social learning role in 38 – 9 Cushman, F. 8, 262 – 76, 302 Daly, M. 56 Dana, J. 378 Daum, M. 90 Davidson, D. 366, 535, 548 D’Cruz, J. 308 – 9 De Cruz, H. 82 Deeb, I. 98 deliberation-volatility 308 – 9 demonstration, as teaching form 47 – 8 Dennett, D. 37, 172, 480, 484 – 5 d’Errico, F. 77 Derrida, J. 548 desire: belief and 322 – 3; desire-as-belief theories of 322; direction of fit and 322 – 3; instrumental 322; intrinsic 322; motivation and 322; social 322; theories of 322 desire-as-belief theories of desire 322 De Smedt, J. 82 developmental systems theory 65, 190 – 1, 200 – 2 Dewey, J. 19, 540 diachronic identity, moral self and 449 – 63; vs. agentic self 456 – 8; implications for 460 – 3; introduction to 449 – 51; vs. mnemonic self 453 – 5; vs. narrative self 458 – 60; promise keeping and 451 – 3 Diesendruck, G. 93, 98 direct-matching model of action understanding 501 – 2 direct-perception model of empathy 499 – 501, 505; affective sharing and 499; expressive behaviour dilemma for 500; other minds and, problem of 500 – 1; overview of 499 discrete number system (DNS) 5, 80 Doris, J. 466 Douglas, M. 335
Downey, G. 79 Dretske, F. 501 Duchenne smile 308 Dunbar, R. I. 20, 21, 22 – 3 Duval, T. S. 257 Ebbs, G. 557, 558, 559 ecological inheritance 76 Egan, F. 395 Egyed, K. 214 embedded character hypothesis 465 – 76; described 465 – 6; introduction to 465 – 6; situationist challenge to virtue ethics and 466 – 8; stereotype threat and 469 – 71 embodied engagements, ICS and 73 – 5 embodied simulation 502 – 3 emotional responses 103, 154, 265, 516 emotions 280 – 94; as commitment devices 282 – 3; empathy for 509 – 11; feeling to acting 292 – 4; as Machiavellian strategies 283 – 4; re-conceptualizing social functions of 285 – 94; social functions of 281 – 5; as social learning forms 286 – 9; as social recalibration motivation 284 – 5; as social transactions 289 – 92 empathy 233; see also vicarious experiences; ascription condition and 508; described 498; direct-perception model of 499 – 501; as interpersonal similarity condition 498; introduction to 498 – 9; mirroring approach to 501 – 3; as other-directed 498; for pain, to emotions 509 – 11; pain matrix and 509; simulation-based approach to 503 – 4; Stein on 516 – 18; vs. sympathy 505; without affective sharing 499 emulation 38, 60, 535 enculturation 73, 77 – 81, 83, 119; learning driven plasticity and 79; mathematical cognition as process of 80 – 3; normative patterned practices and 77 – 8; timescales 78 – 9; transformative effects of 79 – 80 Engelmann, J. M.
7, 247 – 60 environment of evolutionary adaptedness (EEA) 56 epistemic practices 73, 74 essentialism, tribalism and 91 – 3 ethnic concepts acquisition device (ECAD) 91 ethnic markers 89 – 91, 93, 95, 97; ethnic concepts acquisition and 91 evolutionary psychology 2, 4, 56; cultural evolution and 56 – 7; social mind and 2 existential commitments 533, 557 expectation fulfillment 378 – 81 experiential sharing, Walther on 518 – 21 explicit prejudice 310 expressivist accounts of self-interpretation 493 – 4 extended character hypothesis 465 – 76; described 465 – 6; friendship and 471 – 5; introduction to
465 – 6; situationist challenge to virtue ethics and 466 – 8 extended mind hypothesis 465 Faigenbaum, G. 234 Faillo, M. 409 – 11, 413 fairness 222 – 37, 253 – 4, 259, 283, 322, 325 – 6; see also sharing in development; culture and 235 – 6; in development 229; ‘how’ of sharing and 234; inequity aversion and 229 – 30; introduction to 222 – 3; ‘what’ of sharing and 230 – 2; ‘who’ of sharing and 232 – 4 false-belief understanding 152 – 64; conceptual-shift accounts of 156 – 8; differences in 176 – 7; in infants and toddlers 153 – 6; mentalistic accounts of 158 – 63; overview of 152 – 3; social/cognitive development for children and 163 – 4; synchrony in 174 – 6; theoretical accounts of 156 Fardo, F. 508, 509 Faucher, L. 91, 92, 95 Faulkner, W. 387 Fazio, R. 310 feasibility worry 308 Fehr, E. 94, 229 Feldman, M. 58 – 9 Fessler, D. M. 93, 250 Fiebich, A. 6 – 7, 208 – 18 first-order intentionality 6, 139 – 40 first-person plural 9 – 10, 387 – 97, 517, 523; definitional issues 388 – 92; introduction to 387 – 8; reciprocal alignment and 394 – 7; we-mode and 392 – 4 Fischbacher, U. 94 flat intention 10, 364 – 7 folk psychology 4, 6, 21, 23 – 8, 30, 117 – 33, 172, 186, 208 – 11, 214 – 16, 300, 389, 397; Andrews and animal forms of 26 – 8 forward-models 299 Foucault, M. 432 Frank, R. 272, 282, 284 Frankfurt, H. 450 Frayn, M. 6, 189 – 202 free energy 330 – 2, 334 Frege, G. 547 Friedman, O. 190 Friston, K. J. 331 Frith, C. 120 functional plasticity 75 Gaertner, S. L. 378 Galinsky, A. 311 Gallagher, S. 6 – 7, 208 – 18 Gallese, V. 501, 502, 503 Gallotti, M. 10, 387 – 97 game theory see team reasoning Gauthier, D. 405
Genealogy of Morality (Nietzsche) 284 gene/culture co-evolution 58, 64 – 5, 68 Gergely, G. 43, 117, 481 German, T. P. 190 Gerrans, P. 455 Giere, R. 122 Gilbert, M. 141, 346, 359, 364, 371 – 2 Gillett, A. J. 4 – 5, 72 – 83 Gil-White, F. 67, 92, 95, 97 – 8 Gino, F. 254 Ginsborg, H. 126 Glazer, T. 8, 280 – 94 goal-directed joint actions 360 – 1 Godfrey-Smith, P. 122, 125, 394 Gold, N. 10, 400 – 14 Goldman, A. 501, 503 – 4 Goldstone, R. L. 82 Goodman, N. 548 Gopnik, A. 215 Gräfenhain, M. 373, 374, 382 Greene, J. 264 – 5, 268, 275, 282, 285 Grice, H. P. 42 – 3, 44 – 5, 424, 455 Griffin, M. 79 – 80 Griffiths, P. 283, 284, 431 group activity, defined 346 Gu, X. 325 – 6 Gurwitsch, A.: on communal/group membership 521 – 4; on partnership 521 – 4 Habermas, J. 201, 538 Habyarimana, J. 94 Haidt, J. 275 Haley, K. J. 250 Halperin, E. 94 Hamann, K. 349, 373 Hamilton, W. D. 96 Hamlin, J. K. 213 Hardy, S. 253 Hare, B. 347 Harman, G. 172, 466 Hartmann, S. 334 Haslanger, S. 434 Haugeland, J. 528, 532 – 4, 545, 549, 551 – 3, 557, 558 Hauser, M. D. 39, 40, 48 He, Z. 158 – 9, 160 Hebb, D. 190, 199 Hegel 1 Heidegger, M. 548, 558 Heine, S. 172 – 3, 177, 258 Henrich, J. 37, 42, 67, 172 – 3 Herrnstein, R. J. 469 Heyes, C. 22, 44, 64 – 5, 120, 157 higher-order intentionality 140 high fidelity learning mechanisms 38 – 9 high-level mindreading 184, 185 – 6
Hi-Lo game 10, 400 – 3, 401, 406 – 7, 413; see also team reasoning Hirschfeld, L. A. 97 Hobson, R. P. 226 Hofmann, W. 312 Hohwy, J. 333 Hoicka, E. 45 Holekamp, K. E. 26 holistic cultures 173 hominins, evolution of cooperation in 104 – 7 House, B. R. 235 Huebner, B. 8, 280 – 94, 302, 303 – 4, 305 Human Encounters in the Social World (Gurwitsch) 522 human folk psychology 6, 27 – 8; see also pluralistic folk psychology human social abilities, developmental approach to 191 – 3 Humean theory of motivation 320 – 35; claims of 320; desire and 322; motivation defined 321; overview of 321; predictive processing theory inconsistency with 331 – 4 Humphrey, N. K. 2, 20, 29 – 30, 348 Humphreys, M. 94 Hurley, S. 402, 405 Huss, B. 28, 29, 30 Husserl, E. 547 – 8 Hutchins, E. 78 Hutto, D. D. 6 – 7, 208 – 18 Iannetti, G. D. 509 imitation learning 38 – 9 Implicit Association Test (IAT) 306, 307, 311 implicit attitudes 8, 298 – 313; credibility question and 306 – 7; model-free learning and 303 – 6; moral authority and 309 – 12; moral credibility and 307 – 9; spontaneous judgments and 298 – 9; value-based decision-making and 299 – 303 implicit commitment 372 – 3, 381, 383 individual intentionality 139 – 41, 143 – 4, 148; first-order 139 – 40; second-order 140 – 1 inequity aversion 7, 223, 229 – 30, 234 – 5, 237; see also fairness; sharing in development inferentialist theory of self-interpretation 490, 492 – 3 institutional reality, development of 145 – 7 instrumental desires 322 integrated cognitive systems (ICS) 72 – 83; embodied engagements and 73 – 5 (see also cognitive practices (CPs)); enculturation and 77 – 80; introduction to 72 – 3; learning driven plasticity and 72, 75; mathematical cognition and 80 – 3; niche construction and 76 – 7; normative patterned practices and 72 – 3 intentionality 139 – 48, 341 – 443, 528 – 41; see also interpretationism; individual types; anti-individualist approaches to 529; collective 145 – 7; communitarian approach to 530 – 4;
individual 139 – 41; interpretational approach to 528, 530, 534 – 6; language/thought relations and 528 – 9; mentalism and 528; shared 141 – 5; social approaches to 528 – 9 intentional relations 107 – 11 interactionist theory of development 6 Interaction Theory (IT) 211 interaction tools 484 – 9, 492, 494; as commitment devices 485 – 6; defined 485; Dennett’s cognitive tool and 484 – 5, 486; self-interpretation as 484 – 9 interpersonal commitments 371 interpersonal similarity condition 498 – 9, 501, 504 – 6, 509; conditions needed for empathy 505; scope/limits of 505 – 6; vicarious pain duality and 506 – 8 interpretationism 528, 530, 534 – 7, 539; Brandom and 536; content objection to 536 – 7; continuity objection to 539 – 41; criticisms of 536 – 41; described 535; normativity objection to 537 – 8; Quine and 535; second-person objection to 538 – 9; shared assumption of 534 – 5 interpretivist approaches to normativity 555 – 8 Iowa Gambling Task 300 I-Thou relation 519, 537 – 8 Izzard, E. 19 Jacob, P. 11, 12, 498 – 512 James, W. 223 joint actions 9, 109, 217, 345, 351, 357 – 67, 370, 374, 382, 397; agents’ perspective of 365 – 6; collective goals and 361 – 2; commitments in (see commitment); defined 370; described 357; features of 357 – 8; Flat Intention View and 364 – 5; goal-directed 360 – 1; intentionality and 143 – 5; intentions specifying collective goals 362 – 5; introduction to 357 – 9; minimalist approach to 359 – 60; virtual bargaining and 425 – 6 joint attention 3, 9, 110, 142 – 3, 147 – 8, 192, 211, 213, 225 – 6, 228, 346 – 7, 425 – 6, 428, 482; intentionality and 142 – 3; virtual bargaining and 426 joint commitments 9, 346, 347, 352, 359, 364, 370, 372 – 3, 382 joint distal intentions 343 – 53; defined 343; features of 343; group behaviour and 343 – 4; introduction to 343 – 4; Lean Account and 347 – 9; Rich Account and 349 – 53; Shared Intentionality Hypothesis 345 – 7 joint intentionality 228 joint mutual commitments 371 Jolly, A. 20 Kahneman, D.
402 Kalaska, J. F. 199
Karpus, J. 10, 400 – 14 Kaufmann, A. 9, 343 – 53 Keller, M. 112 Kelly, D. 93 Kelly, R. 61 Kirsh, D. 74 Kitcher, P. 102, 104, 111 Knoblich, G. 372, 374 knowledge-ignorance task 176 – 7, 181, 183 Kobayashi, H. 194 Kohlberg, L. 228 Kohshima, S. 194 Korsgaard, C. 260, 456, 457 – 8 Kripke, S. 530 – 1, 551, 554 – 5 Kucharczyk, P. 6, 189 – 202 Kuhn, T. 548 Kukla, R. 538, 549, 558 Kuntoro, I. A. 178 Kurzban, R. 96 – 7 Laland, K. N. 78 Lance, M. 538, 558 Landy, D. 82 Language Acquisition Device 91 language and thought relations: anti-individualist social approach to 529; individualistic approach to 529; intentionality and 528 – 9 Lavelle, J. S. 6, 172 – 86 Legrain, V. 509 Lende, D. 79 Leslie, A. M. 172, 190 Lewens, T. 4, 53 – 68 Lewis, D. 425, 436 – 7, 441 – 2 Lillard, A. 174 Lisciandra, C. 334 Liu, D. 176, 179, 186 Locke, J. 236, 451, 528 Lohrenz, T. 328 – 9 low-level mindreading 179 – 80, 184 – 5 Ludwig, K. 359 Lukács, G. 435 Lyons, I. M. 81 Machery, E. 5, 88 – 98, 438, 439 “Machiavellian Intelligence” (Byrne & Whiten) 20, 284 Machiavellian strategies, emotions as 283 – 4 MacIntyre, A. 550 macro- and micro-cultural influences 177 – 8 Maglio, P. 74 Maibom, H. 123 Malle, B. F. 216, 217 Mallon, R. 11, 431 – 43 Mameli, M. 178 – 9 Mant, C. M. 373, 382
Markman, E. 44 Marr, D. 401 Martin, T. 307 Marx 1 master-apprentice style pedagogy 481 mathematical cognition 5, 73, 75, 80 – 3, 139 Matsuzawa, T. 131, 132 Matthews, R. 395 Mayer, A. 176 McDowell, J. 536 – 8, 551 McGeer, V. 126, 474, 480, 487 McKinnon, C. 432 Mead, G. H. 19, 196, 198, 201, 255, 256 Melis, A. P. 349 Melkonyan, T. 10, 418 – 28 Meltzoff, A. N. 191 Melville, H. 280 memetics 56 – 9 Menahem, R. 93 Menary, R. 4 – 5, 72 – 83 mentalistic accounts of false-belief understanding 158 – 63; competence vs. performance 159 – 62; cultural differences and universal capacity for 162 – 3 mental simulation 12, 501, 503 – 4 Meristo, M. 179 Merritt, M. 463 Mesoudi, A. 54 methodological individualism 63 – 4, 66 Michael, J. 10, 370 – 83, 508, 509 Milinski, M. 249 mindreading 5 – 7, 117 – 21, 177, 179 – 86, 196, 208 – 10, 390, 479, 481 – 2, 498 – 9, 501, 503, 505 – 6, 508; Conceptual Change approach 181 – 2, 184 – 6; described 118; human folk psychology and 118 – 20; infant behaviourist approach 181 – 2; mentalistic accounts 182 – 4; multi-system theories 210; social cognition theories 117, 209 – 10 mindshaping 119, 474, 479 – 97; forms of 481 – 2; human language and 483; imitation and 480 – 1; mechanisms and practices defined 480; norm enforcement and 481; pedagogy and 481; recursive language and 482 – 3; transparency and 482; virtual social models for 483 – 4 mindshaping view of self-interpretation 479 – 94; comparisons with other views 489 – 93; described 479 – 80; hypothesis overview 480 – 4; introduction to 479 – 80; self-interpretations as interaction tools 484 – 9 minimalist approach to joint actions 359 – 60 mirroring approach to empathy 501 – 3 mirror neurons, defined 199, 501 – 2 Misyak, J. B. 10, 418 – 28 Mithen, S. 30 model-based evaluative learning systems 269, 270, 299, 300
model-free learning systems 269, 270, 299 – 300; implicit attitudes and 303 – 6 Mogilner, C. 254 Moll, H. 22, 226 Montague, P. R. 328 – 9 Moore, C. 107, 108 Moore, R. 4, 35 – 48, 131 moral authority 299, 309 – 12 moral identity 11, 112, 227 – 8, 251, 253 – 4, 257 – 9, 273, 275, 453, 457, 461 – 2 Morgan, G. S. 94 Morgan, T. J. H. 41 Morris, M. 173 – 4 Morton, A. 472 motivations 5, 7 – 9, 46, 64, 102 – 7, 113, 117 – 18, 126, 131, 159 – 60, 162, 182, 184, 192, 247, 250 – 1, 255, 258 – 9, 282 – 91, 294, 320 – 40, 344; defined 321; social 321; species of 322 motoric-mirroring 502, 504 motor programmes 73, 75 Moya, C. 91, 92 – 3, 94, 95, 98 Muldoon, R. 334 Murray, C. 469 mutual commitments 371 ‘Myth of the Given’ 536, 548, 557 Nagel, T. 103 – 4, 490 naïve normativity 126, 130, 132, 133 Narrative Practice Hypothesis 216 Nash equilibrium games 400, 403, 407, 412 – 14, 422; team reasoning and 408 – 11 nativism 173, 178 – 81, 186 natural pedagogy 43 – 5, 117, 214, 481 – 2 Natural Pedagogy Hypothesis 43 – 4, 45, 117 Navarrete, C. D. 93 Nersessian, N. J. 79 Nesdale, D. 229 neural plasticity 75, 77 – 8 Newen, A. 123 niche-construction 65, 76 – 7 Nichols, S. 11 – 12, 449 – 63 Nicomachean Ethics (Aristotle) 270 Nietzsche, F. 284, 294 Nisbett, R. 177 non-consequentialism 262 – 76 non-Nash equilibrium games, team reasoning and 412 – 13 non-verbal communication 42 – 3 Norenzayan, A. 172 – 3, 177 normativity 11, 13, 121 – 32, 145 – 7, 200, 529 – 30, 536 – 8, 545 – 59; human 121, 125 – 7; interpretivist approaches to 555 – 8; introduction to 545 – 9; Kant and 547 – 8; of mind and meaning 546 – 8; naïve 126, 133; in other species 130 – 2; philosophy of mind and 545 – 6; pluralistic folk psychology and 125 – 7, 130 – 2; primitive 126; regularism and 553 – 5;
regulism and 549 – 53; temporal conceptions of 558 – 9 norm-complying agents 251 – 2 Nussbaum, M. 293 Obama, B. 301, 307 object permanence 140 Olen, P. 393 Omi, M. 432 Onishi, K. 153, 157, 172, 190 On the Problem of Empathy (Stein) 516 ontogenetic timescales 78 – 9 Ontology of Social Communities (Walther) 518 Origins of Human Communication, The (Tomasello) 390 overt roles: compared to covert 437 – 42; defined 431; epistemic distinctiveness of 438 – 40; reification and 434 – 6; Searle and 436 – 7 ownership, sharing and 230, 233 – 4 pain matrix 509 Pan-Am smile 308 Pareto-optimization 405 – 6 Parfit, D. 103, 450, 451 – 3 partner choice, strategic non-consequentialism and 272 – 3 partnership, Gurwitsch on 521 – 4 pedagogy, defined 4, 39 – 42 pedagogy learning 35 – 48; communicative acts and 39 – 40; cumulative culture and 35 – 42; defined 39; demonstration as 47 – 8; features of 39 – 40 Peng, K. 173 – 4 Penn, D. 23 – 5, 28, 30 perceptual/motor abilities, social intelligence hypothesis and 25 – 6 Perner, J. 373, 382 personhood 5, 102 – 13, 450, 456, 458, 460 – 1, 463; agent-neutral perspectives, development of 109 – 13; concepts of persons and selves 107 – 9 person models 123 perspective taking 22, 23, 103, 112, 141, 226, 230, 233, 251, 255 – 8, 310, 311 – 13, 345, 352, 390 Pettigrew, T. 310 Pettit, P. 473 Philosophical Investigations (Wittgenstein) 529 phylogenetic timescales 78 physical action responses 154 Piaget, J. 231 Pietroski, P. M. 359 Pinker, S. 36 Plato 247, 441, 546 pluralistic folk psychology 27, 117 – 33, 208, 209, 210, 211, 214 – 16, 218; elements of 121 – 7; mindreading and 118 – 21; model approach to 122 – 5, 128 – 30, 129; normativity and 125 – 7,
130 – 2; in other species 127 – 32; overview of 117 – 18 policy consequentialism 263 – 4, 266, 267, 269, 272 population thinking 55 Posner, D. N. 94 power, kinetic models and 66 – 8 predictive processing (PP) theory 8, 320, 321, 329 – 34; free energy and 330, 331, 332, 334; surprisal and 329 – 32 prestige bias 67 primary intersubjectivity 211, 212 – 13, 224 primitive normativity 126 principle of rationality 159 Prinz, J. J. 11 – 12, 449 – 63 Prisoner’s Dilemma game 10, 400 – 1, 403, 405, 406, 407, 420, 422, 423; see also team reasoning problem of other minds 189, 191 – 3, 196, 499, 500, 501, 522, 523 process consequentialism 263 – 5, 266, 269, 271 proto-linguistic communication 42 psychological essentialism, tribalism and 91 – 2 psychological-reasoning system 159, 160, 162; ASD children and 160 pushmi-pullyu signals 332 Putnam, H. 548 Quine, W. V. O. 535, 548, 553 racism, tribalism as 88, 89, 95, 96 Radford, L. 81 Radzvilas, M. 407, 411, 413 Railton, P. 299, 309 – 10, 312 – 13 Rakoczy, H. 6, 119 – 20, 139 – 48 Rawls, J. 107 referential sharing in development (7–9 months and up) 225 – 7 Regan, D. 405 registrations 158, 184, 186 regularism 532, 553 – 6 regulism 549 – 53, 556 Reifen Tagar, M. 94 reification 431 – 43; covert roles and 431, 437 – 42; overt kinds, appealing to 434 – 7; social change and 433, 442 – 3 Reinforcement Learning (RL): social norms and 324 – 7 replicators, cultural variants as 57 Republic, The (Plato) 441 reputation: effects 232 – 3; moral behavior and 247 – 60; moral identity and 273 – 4; watching eyes experiments and 255 – 8 reward theory of desire 322 Richerson, P. J. 37, 42, 54, 55, 57, 61, 67, 89 Rizzolatti, G. 199, 502 Robbins, E. 7, 222 – 37 Roberts, J. 301 Roberts, R. 473
Roby, E. 6, 152 – 64 Rochat, P. 7, 222 – 37, 380 Rose for Emily, A (Faulkner) 387 Rouse, J. 11, 13, 545 – 59 Rubin’s vase 403 Ruffman, T. 215 rule-following 530 – 1, 548, 551, 554 Salice, A. 11, 12, 515 – 25 Samson, D. 255 Santa Barbara school 56 Sarkissian, H. 308 Sartre, J.-P. 255, 283 Satne, G. 11, 12 – 13, 528 – 41 Saxe, R. 311 Scarantino, A. 283 Scelza, B. 92, 94 Schadenfreude 516 Schechtman, M. 458 Scheler, M. 519, 522, 525 Scheman, N. 284 – 5 Schmid, H. B. 366 Schmidt, K. M. 229 Schweikard, D. 366 Scott, R. M. 6, 152 – 64, 182 Searle, J. 6, 148, 345, 346, 364, 371, 435, 436, 438 – 40, 441 Sebanz, N. 372, 374 secondary intersubjectivity 211, 213 – 14, 223, 225, 228 second-order intentionality 6, 140 – 1, 144 Segall, G. 98 Sein und Zeit (Heidegger) 558 self-commitments 371 self-esteem 229, 259 self-interpretation 11, 479 – 97; constitutive views of 480; expressivist accounts of 493 – 4; inferentialist theory of 490, 492 – 3; as interaction tools 484 – 9; mindshaping view of 479 – 94; simulation theory and 479; theory theory and 479 self-regulation 251 – 2, 254, 487 – 8 Seligman, M. 302 Sellars, W. 388, 392 – 3, 548 selves, concepts of 107 – 9 Setoh, P. 161 shared agency 346, 349, 352 shared experiences, Stein on 517 – 18 shared intentionality 3, 9, 22, 131 – 2, 139 – 48, 192, 202, 228, 343 – 7, 349 – 50, 352 – 3, 367, 390, 395; see also joint distal intentions; defined 345 – 6; development of 141 – 2; joint action 143 – 5; joint attention 142 – 3 Shared Intentionality Hypothesis 345 – 7; characteristics of 346; group activity and 346 – 7; overview of 345; philosophical theories of 345 – 6; “Who shares what?” question in 345
sharing in development 222 – 9; see also fairness; affective (2 months and up) 224 – 5; co-conscious (21 months and up) 227 – 9; culture and 235 – 6; described 223 – 4; ‘how’ of 234; introduction to 222 – 3; ownership and 234; referential (7–9 months and up) 225 – 7; ‘what’ of 230 – 2; ‘who’ of 232 – 4 Shepard, R. 468, 474 Shipp, S. 331 Shoemaker, S. 455 Shook, N. 310 simulation theory (ST) 109, 209, 210, 479 Sinclair, S. 312 Singer, T. 506 Sinigaglia, C. 503 Situational Eight DIAMONDS model 466 – 7 situational influences on moral/intellectual conduct 466 – 7 situationist challenge to virtue theory 465, 466 – 8; extended/embedded characters and 467 – 8; power of social expectations and 467 Skitka, L. J. 94 Skorburg, J. A. 11, 12, 465 – 76 Small, W. 40 – 1 Smerilli, A. 402, 406 Smith, M. A. 6, 152 – 64 Sober, E. 29, 61 social brain hypothesis 4, 19, 21, 23, 25 – 6; see also social intelligence hypothesis social cognition 3, 6 – 7, 22 – 3, 25, 30, 79, 96 – 8, 117 – 18, 123, 127, 133, 147, 172 – 86, 208 – 18, 388, 390, 394 – 6, 479 – 80, 484 – 5, 515, 541; see also pluralistic folk psychology; human 121 – 7; mechanisms for 117; mindreading and 117, 118 – 20; non-human 127 – 32; pluralist account of 208 – 18; communicative/narrative practices and 214 – 17; hybrid theories of 210; mindreading theories and 209 – 10; skills acquired throughout ontogeny 211 – 17; traditional accounts of, and alternatives 209 – 11 social conformism 13, 530, 532 – 3 social constructionist traditions 95, 431 – 3, 443 social desires 322 social functions of emotions 281 – 6; as commitment devices 282 – 3; as Machiavellian strategies 283 – 4; as motivation for social recalibration 284 – 5; re-conceptualizing 285 – 94 Social Heuristics Hypothesis (SHH) 270 social intelligence hypothesis 4, 19 – 30; anthropocentric focus of 20 – 2; folk psychology and 23 – 5; bias against attribution of high level abilities to non-humans 29 – 30; skeptical null hypothesis of 29 – 30 social kinds 431, 433 social
learning 3 – 4, 22 – 3, 35 – 48, 53, 57 – 9, 62 – 6, 68, 77, 78, 83, 131, 214, 286, 298 – 313;
emotions as forms of 286 – 9; imitation 38 – 9, 64; pedagogy and 38, 39 – 42; role of, in cumulative culture 38 – 9; social manifestation thesis 393 social mind 2 – 4, 17 – 113, 222, 237, 320, 335, 344, 388 – 90, 394, 535; cultural evolution and 62 – 8; cultural learning and 3; evolutionary psychology and 2; individual learning and 65 – 6; intentionality forms and 2 – 3; kinetic models of, power and 66 – 8; mental contents and 62 – 3; methodological individualism and 63 – 4; social learning and 64 – 5 social motivation 8 – 9, 192, 320 – 35, 346; Humean theory 321 – 3; predictive processing theory 329 – 34; social norms and 323 – 9 social norms 7 – 8, 46, 120, 133, 185, 200, 281 – 2, 284, 291, 293, 304, 320, 322 – 5, 327 – 9, 334 – 5, 380, 392, 482, 533; Bayesian decision theory 327 – 9; boxological models of 324; grammar comparison to 323; known facts about 323; neurocomputational frameworks for 323 – 9; Reinforcement Learning and 324 – 7; theories of 324; violators of 324 social organization of tribes 89 – 90 social recalibration, emotions as motivation for 284 – 5 social referencing 130, 213 – 14, 225 social roles 11, 123, 217, 284 – 5, 431 – 43; covert 433 – 4; creation of 433; institutions as sources of 431; reification and 431 – 43; types of 431 social transactions, emotions as 289 – 92 socio-cognitive skills 208, 211; acquired throughout ontogeny 211 – 17; communicative and narrative 211, 214 – 17; primary intersubjectivity 211, 212 – 13; secondary intersubjective processes 211, 213 – 14 Southgate, V. 155 Sperber, D. 2, 43, 46, 57, 60, 93, 248, 258 spontaneous judgments, implicit attitudes and 298 – 9 S-shaped adoption curve, conformist bias and 59 – 60 Steele, C. 469, 470 Stein, E.: on empathy 516 – 18; shared experiences and 517 – 18 Sterelny, K.
22, 47, 54, 66, 76, 117, 283, 482 stereotypes, models and 124 stereotype threat 466, 468 – 71, 473, 475; described 469 – 70; embedded character and 469 – 71; intellectual character and 469, 471 still face paradigm 212 strategic agents 251 – 4, 260 strategic compliance 247 – 50, 255 strategic non-consequentialism, reputation, partner choice and 272 – 3 Strawson, P. 110 Sugden, R. 402, 404 – 5, 407, 413
surprisal 329 – 32 sympathy 103 – 4, 253, 492, 498, 505, 511, 518, 522 synchronic identity 450 – 1 Tan, J. H. 404 Tang, Y. 81 Taumoepeau, M. 178, 215 Taylor, P. 431 – 2 Taylor, V. 469 teaching 2, 35, 39 – 48, 55, 63, 65, 66, 75, 76, 78, 79, 117, 126 team reasoning 10, 400 – 14, 420; described 401 – 2; empirical testing of 407 – 8; evidence for 407 – 14; future research for 413 – 14; goals of 406 – 7; introduction to 400 – 1; Nash equilibrium games and 408 – 11; non-Nash equilibrium games and 412 – 13; theory of 401 – 7; triggers of 402 – 6 temporal conceptions of normativity 558 – 9 temporal difference reinforcement learning (TDRL) 302 Tennie, C. 36 tertiary intersubjectivity 227, 229, 237 theory of cooperative utilitarianism 405 theory of mind scale 176, 179, 181 – 6 theory theory (TT) 109, 122, 209, 210, 216, 479, 488, 499; empiricist version of 209, 214, 215; of mind 109 – 10; nativist version of 209 thick descriptions of individuals 55 – 6 thin descriptions of individuals 55 Tindale, N. 61 – 2 Todd, A. 311 toddlers, false-belief understanding in 153 – 6 Tomasello, M. 9, 22, 25, 77 – 8, 117, 131, 192, 226, 228, 248, 255, 346, 347, 349, 377, 390 – 1, 539 – 40 Tönnies, F. 519, 523 Tooby, J. 56, 96 Toro, M. 41 transformation thesis 79 – 80 Träuble, B. 176 Trevarthen, C. 211 tribalism 88 – 98; biologization and 92, 93; by-product theories of 96 – 7; cooperation vs. distrust/hostility 93; defined 90; distinct ethnic adaptation and 97 – 8; ethnic concepts and 91; ethnicized groups and 90; evolution of 96 – 8; history 88 – 90; inherited/immutable ethnic identity and 92 – 3; moral psychology and 93 – 4; overview of 88; psychological essentialism and 91 – 2; as racism 95; selective pressures and 90; social-psychological phenomena and 95 – 6; unified 94 tribe psychology 5
tribes: as cultural groups 89; ethnic markers and 89 – 90, 93, 95, 97; features of 89, 90; social organization of 89 – 90 Tronick, E. 212 Tropp, L. 310 Tuomela, R. 2 – 3 Turner, S. 393, 553 turn taking 223, 224 Tversky, A. 402 two-way shared mutual gaze 224 unobservability principle 209 Vaart, E. van der 24, 25 value-based decision-making 299 – 303; model-based 299 – 300, 303; model-free 299 – 300 Van Kleef, G. 249, 290 Vernetti, A. 155 vicarious experiences 12, 498 – 512; see also empathy; ascription condition of 508; defined 498; interpersonal similarity condition and 504 – 6; introduction to 498 – 9; of pain, duality of 506 – 8; sensory pain 510; simulation-based approach to 504 vicarious pain, duality of 506 – 8 Vignemont, F. de 11, 12, 498 – 512 virtual bargaining 10 – 11, 418 – 28; account of social behaviour 421 – 3; booby-trap game 419 – 23; common knowledge and 425; communication dependency on 423 – 5; joint action and 425 – 6; joint attention and 425 – 6; overview of 418 – 19; top-down/bottom-up approaches and 426 – 8 Vygotskian Intelligence Hypothesis 22, 348 Vygotsky, L. 19, 63, 197, 348 Waal, F. de 29 Walther, G. 516; on experiential sharing 516, 518 – 21 Walton, G. 469 Walton, K. 147 Wanderer, J. 538 Warneken, F. 377 watching eyes experiments: alternative interpretation of 257 – 8; automatic perspective taking and 255 – 6; reputation and 255 – 8; social regulation and 256 – 7 the we, phenomenology of 515 – 25; Gurwitsch and partnership, communal/group
membership 521 – 4; introduction to 515; Stein and empathy 516 – 18; Walther and experiential sharing 518 – 21 weakly feasible bargain 422 we-attitudes 388, 392, 393 Weinstein, J. M. 94 we-intentions 9, 141, 143, 346, 347, 364, 392, 393, 436, 440, 441, 442, 515, 524 Weintraub, M. 228 – 9 W.E.I.R.D. see western, educated, industrial, rich, and democratic (W.E.I.R.D.) Wellman, H. 176, 179, 180, 186 we-mode 9, 10, 388, 392 – 7; defined 388; first-person plural and 392 – 6; reciprocal alignment and 10, 394 – 7 we-perspective 387, 388, 390, 391, 392, 393, 394, 395, 397 we-representations 388, 394, 395 western, educated, industrial, rich, and democratic (W.E.I.R.D.) 172 – 3, 235 ‘what’ of sharing 230 – 2 ‘who’ of sharing 232 – 4 Wicklund, R. 257 Wilde, O. 308 Williams, B. 308 Wilson, D. 43 Wilson, D. S. 61 Wilson, M. 56 Wilson, R. 393 Wilson, R. A. 62 Winant, H. 432 Wisthoff, R. 132 Wittgenstein, L. 197, 427, 534, 548, 551 Wong, D. 474 – 5 Wright, C. 551 Xiang, T. 328 – 9 Yancy, G. 307 Yarrow, K. 301 Yoon, J. M. D. 44 Zack, N. 437 Zahavi, D. 11, 12, 499, 515 – 25 Zawidzki, T. W. 11, 12, 117, 119, 474, 479 – 94 Zeitoun, H. 10, 418 – 28 Zeller, C. 7, 247 – 60 Zhang, J. 82 Zizzo, D. J. 404 Zmyj, N. 90